Guardrails

2025-04-17

Introducing Guardrails: The contextual security layer for the agentic era

We are releasing Invariant Guardrails, our state-of-the-art contextual guardrailing system for AI applications. It supports tool calling and MCP, as well as data flow control and contextual constraints.


As we enter the agentic era of highly connected AI systems that rely on multi-turn tool use to achieve complex workflows, the security of these systems has never been more important. This is why, today, we are releasing Invariant Guardrails, our state-of-the-art guardrailing system for MCP and agentic AI applications. Guardrails is designed to provide a robust and flexible framework for ensuring the safety and reliability of AI agents, with a focus on contextual guardrailing.

Guardrails is a transparent security layer located at the LLM and MCP level, enabling agent builders and security engineers to augment existing agentic models with a set of expressive, deterministic rules that go beyond simple system prompting:

Guardrails demo

Invariant Guardrails is enforced by Gateway, Invariant's transparent MCP and LLM proxy. This means Guardrails can be deployed to your existing agent and AI systems within minutes, merely by changing the base URLs used for LLM and MCP interaction.
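For illustration, here is a minimal sketch of what that base-URL swap might look like for an OpenAI-compatible agent. The Gateway URL and authorization header below are placeholders rather than exact values, so consult the Gateway documentation for your deployment; MCP traffic is rerouted analogously, by pointing your MCP client at the Gateway endpoint.

# Minimal sketch, assuming a Gateway deployment reachable at <your-gateway-host>;
# the URL path and header name are illustrative placeholders, not exact values.
from openai import OpenAI

client = OpenAI(
    # your regular upstream API key, unchanged
    api_key="<your-openai-api-key>",
    # route all LLM traffic through Invariant Gateway instead of the upstream API
    base_url="https://<your-gateway-host>/api/v1/gateway/<project>/openai",
    # hypothetical header carrying your Invariant API key
    default_headers={"Invariant-Authorization": "Bearer <your-invariant-api-key>"},
)

# every completion now passes through Gateway, where guardrails are enforced
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize my inbox."}],
)
print(response.choices[0].message.content)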

Context Is Key

Different from traditional LLM security, Guardrails enables you to impose contextual rules on your AI systems, such as data flow requirements, if-this-then-that patterns, and tool call restrictions. In the agentic era, AI security no longer reduces to the simple question of whether a prompt is malicious; instead, it becomes a complex, organization-specific set of rules that needs to be enforced across the entire system.

Guardrails enables you to specify complex flow-based rules, controlling data flows from sensitive internal systems to untrusted, public sinks such as web pages, emails, or other external systems.

For instance, the following guardrail policy ensures that no PII is leaked to an external email address, by flagging any flow in which the agent reads the user's inbox and subsequently sends an email outside ourcompany.com:

raise "External email to unknown address" if:
    # detect flows between tools
    (call: ToolCall) -> (call2: ToolCall)

    # check if the first call obtains the user's inbox
    call is tool:get_inbox

    # second call sends an email to an address outside ourcompany.com
    call2 is tool:send_email({
      # negative lookahead excludes recipients in the company's own domain
      to: ".*@(?!ourcompany\.com$).*"
    })

Of course, it doesn't end here. Rules can be customized to your needs, extending to prompt injection detection, tool call restrictions, data flow control, content moderation patterns, loop detection, and much more; a simple tool call restriction is sketched below.
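As a sketch of such a restriction, the following rule follows the same pattern as the policy above; the tool name delete_file and the path pattern are hypothetical, chosen only for illustration:

raise "Attempted to delete a protected file" if:
    # match any single tool call in the trace
    (call: ToolCall)

    # block deletions targeting system configuration paths
    call is tool:delete_file({
      path: "^/etc/.*"
    })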

Built-In Observability

Alongside Guardrails, Invariant Explorer provides built-in observability for your Invariant-augmented AI system, allowing you to configure, test, and monitor your guardrails.

Guardrails in Explorer
Testing, monitoring and updating your guardrails in Invariant Explorer

Explorer provides a powerful interface for testing and debugging your guardrails, allowing you to simulate different scenarios and see how they respond. This is especially useful for fine-tuning rules and ensuring they work as intended.

Open Source for Transparency and Control

Guardrails is open source and available on GitHub, allowing you to inspect, audit, and adapt the Guardrails code to your needs.

Please feel free to open issues and give feedback via the repository; we look forward to hearing from the community.

Getting Started

To get started with Guardrails, you can sign up for Explorer and start guardrailing your agentic AI applications in minutes. For custom dedicated deployments, you can also contact us directly.

You can also start by reading the documentation or by playing with examples in our interactive Guardrails Playground.

If you want to jump right in, you can also check out one of the highlighted documentation chapters and use cases to learn more about how to use Guardrails.

Authors:

Luca Beurer-Kellner
Marc Fischer
Hemang Sarkar
Kristian Bonde Nielsen
Marco Milanta
Aleksei Kudrinskii