Integrations

Admina integrates with the most popular AI agent frameworks and orchestration tools. Every integration routes governance through the Admina proxy – no code changes are required in your existing agents.

Sidecar pattern: Admina runs as a sidecar proxy next to your agent. Integrations simply redirect traffic through Admina, which applies all 4 governance domains transparently before forwarding to the upstream.

OpenClaw

Admina ships with a ready-to-use OpenClaw skill that routes all MCP tool calls from your agent through the Admina governance proxy. See the full OpenClaw Integration page for detailed setup instructions.

How it works

The admina-governance SKILL.md routes all agent actions through /api/v1/validate and /api/v1/audit. Governed action types include:

  • llm_call – LLM prompt and completion governance
  • shell_exec – Shell command execution
  • file_write – File system write operations
  • http_request – Outbound HTTP requests
  • message_send – Agent-to-agent or agent-to-user messages
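To make the contract concrete, a governed action can be pictured as a small JSON body POSTed to /api/v1/validate. The field names below are assumptions for illustration only; the skill's actual request schema may differ:

```python
import json

def build_validate_payload(action_type, payload, session_id="sess-001"):
    """Build an illustrative /api/v1/validate request body.

    Field names here are assumptions for illustration; consult the
    Admina API reference for the canonical schema.
    """
    return {
        "action_type": action_type,  # one of the governed types above
        "payload": payload,          # the action's parameters
        "session_id": session_id,    # ties the action to an audit session
    }

# A shell_exec action, serialized as it might be sent to the proxy
body = json.dumps(build_validate_payload("shell_exec", {"command": "ls /tmp"}))
```

The same shape applies to /api/v1/audit, which records the action's outcome after execution.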

Skill install

The setup.sh script rewrites ~/.openclaw/mcp.json to route all MCP traffic through Admina:

# Install the Admina skill
cd openclaw-skill
chmod +x setup.sh
./setup.sh

# Restart OpenClaw gateway
openclaw gateway restart

Adapter (manual)

For fine-grained control, use adapter.py directly to rewrite and restore your MCP config:

# Rewrite mcp.json to use Admina as proxy
python adapter.py rewrite --config ~/.openclaw/mcp.json

# Restore original mcp.json
python adapter.py restore --config ~/.openclaw/mcp.json
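Conceptually, the rewrite step points each HTTP-based MCP server at the Admina proxy and stashes the original endpoint so restore can undo it; stdio servers are instead wrapped by the stdio bridge. A minimal sketch of that logic (the real adapter.py may differ in details):

```python
import copy

ADMINA_URL = "http://localhost:8080"  # assumed default proxy address

def rewrite_config(config):
    """Point every HTTP-based MCP server at the Admina proxy, keeping
    the original endpoint under a backup key so it can be restored."""
    new = copy.deepcopy(config)
    for name, server in new.get("mcpServers", {}).items():
        if "url" in server:  # stdio servers have "command" instead
            server["_admina_original_url"] = server["url"]
            server["url"] = f"{ADMINA_URL}/mcp/{name}"
    return new

def restore_config(config):
    """Undo rewrite_config by restoring the backed-up endpoints."""
    new = copy.deepcopy(config)
    for server in new.get("mcpServers", {}).values():
        if "_admina_original_url" in server:
            server["url"] = server.pop("_admina_original_url")
    return new
```

Keeping the backup key inside the config itself is what lets `restore` work without any external state.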

Stdio bridge

For MCP servers that use the stdio transport (filesystem, npx-based servers), Admina provides a stdio bridge that wraps the server process and intercepts all JSON-RPC messages on stdin/stdout.

# In mcp.json, wrap stdio servers with the bridge
{
  "mcpServers": {
    "filesystem": {
      "command": "python",
      "args": [
        "/path/to/admina/openclaw-adapter/stdio_bridge.py",
        "--",
        "npx", "-y", "@modelcontextprotocol/server-filesystem", "/home"
      ]
    }
  }
}
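At its core, the bridge is a filter over JSON-RPC messages: tools/call requests are checked before being forwarded, and everything else passes through. A simplified sketch of that decision (the real stdio_bridge.py also handles framing, auditing, and async I/O; only the blocking logic is shown):

```python
import json

def govern_jsonrpc_line(line, is_allowed):
    """Inspect one JSON-RPC message. Forward it unchanged unless it is
    a tools/call request rejected by the governance check, in which
    case return a JSON-RPC error response in its place."""
    msg = json.loads(line)
    if msg.get("method") == "tools/call" and not is_allowed(msg.get("params", {})):
        return json.dumps({
            "jsonrpc": "2.0",
            "id": msg.get("id"),
            "error": {"code": -32000, "message": "Blocked by Admina governance"},
        })
    return line  # forward untouched
```

Because the error response reuses the request's id, the agent sees a normal (failed) tool result rather than a hung connection.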

LangChain

The AdminaCallbackHandler is a drop-in callback for ChatOpenAI and any other LangChain model. It governs all LLM and tool events through Admina without changing your existing chain logic.

Governed events

  • on_llm_start – Validates prompts before they reach the model
  • on_llm_end – Scans completions for PII, toxicity, compliance
  • on_tool_start – Firewall check before tool execution
  • on_tool_end – Audit trail of tool results

Configuration

The handler accepts the following parameters:

  • admina_url – Admina proxy URL (default: http://localhost:8080)
  • api_key – API key for authentication
  • session_id – Session identifier for tracking
  • pii_redaction – Enable PII redaction (Data Sovereignty domain)
  • firewall – Enable firewall (Agent Security domain)
  • loop_detection – Enable loop breaker (Agent Security domain)
  • on_block – Behavior when a request is blocked (raise or warn)
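When Admina blocks a request, on_block decides whether the handler aborts the chain or only warns and continues. A minimal sketch of the two modes (the exception class name here is illustrative, not the handler's actual API):

```python
import warnings

class AdminaBlockedError(Exception):
    """Illustrative exception raised when governance blocks a request."""

def handle_block(reason, on_block="raise"):
    """Mimic the handler's on_block behavior: 'raise' aborts the
    chain, 'warn' emits a warning and lets execution continue."""
    if on_block == "raise":
        raise AdminaBlockedError(reason)
    warnings.warn(f"Admina blocked request: {reason}")
```

"raise" is the safer default for production pipelines; "warn" is useful while tuning policies, since it surfaces would-be blocks without breaking existing chains.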

Example

# pip install admina[langchain]

from admina.integrations.langchain import AdminaCallbackHandler
from langchain_openai import ChatOpenAI

handler = AdminaCallbackHandler(
    admina_url="http://localhost:8080",
    api_key="your-key",
    pii_redaction=True,
    firewall=True,
)
llm = ChatOpenAI(callbacks=[handler])

# Every call to llm.invoke() is now governed
response = llm.invoke("Summarize this document")

CrewAI

Admina provides two callbacks for CrewAI: AdminaStepCallback fires after each agent step, and AdminaTaskCallback fires after task completion. Both support per-agent configuration.

Callbacks

  • AdminaStepCallback – Governs each agent step (tool calls, LLM calls, reasoning)
  • AdminaTaskCallback – Audits completed tasks with full context (inputs, outputs, agent info)

Example

# pip install admina[crewai]

from admina.integrations.crewai import AdminaStepCallback, AdminaTaskCallback
from crewai import Crew, Agent, Task

researcher = Agent(role="Researcher", ...)
writer = Agent(role="Writer", ...)

crew = Crew(
    agents=[researcher, writer],
    tasks=[...],
    step_callback=AdminaStepCallback(
        admina_url="http://localhost:8080",
    ),
    task_callback=AdminaTaskCallback(
        admina_url="http://localhost:8080",
    ),
)

# Kick off – every step and task is governed
result = crew.kickoff()

n8n

The n8n-nodes-admina package adds three governance nodes to your n8n workflows:

Install

npm install n8n-nodes-admina

Nodes

  • AdminaGovern – Inline governance node. Place it before any AI node in your workflow. Validates inputs, blocks non-compliant requests, and adds governance metadata to the output.
  • AdminaAudit – Passive logging node. Place it after any AI node to log all requests and responses to the Admina audit trail without blocking.
  • AdminaDashboard – WebSocket trigger node. Streams real-time governance events from Admina into n8n, enabling reactive workflows (alerts, Slack notifications, compliance reports).

n8n community nodes: After installing, restart your n8n instance. The three Admina nodes will appear in the node palette under the "AI Governance" category.
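As an illustration of the recommended ordering (govern before the model, audit after it), a workflow could wire the nodes as below. The node type strings and connection layout are hypothetical, not the package's actual identifiers:

```json
{
  "nodes": [
    { "name": "AdminaGovern", "type": "n8n-nodes-admina.adminaGovern" },
    { "name": "OpenAI Chat", "type": "@n8n/n8n-nodes-langchain.openAi" },
    { "name": "AdminaAudit", "type": "n8n-nodes-admina.adminaAudit" }
  ],
  "connections": {
    "AdminaGovern": { "main": [[{ "node": "OpenAI Chat", "type": "main", "index": 0 }]] },
    "OpenAI Chat": { "main": [[{ "node": "AdminaAudit", "type": "main", "index": 0 }]] }
  }
}
```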

Cheshire Cat AI

Admina integrates with Cheshire Cat AI as a Python plugin. The plugin hooks into the Cat's event system to govern agent actions at three critical points.

Governed hooks

  • agent_fast_reply – Intercepts the agent's reply before it is sent to the user. Enables PII redaction, toxicity filtering, and compliance checks on outputs.
  • before_cat_sends_message – Final checkpoint before the message leaves the Cat. Applied after all other plugins have processed the response.
  • before_cat_recalls_memories – Governs memory recall to prevent leaking sensitive data from the vector store.
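Inside the plugin, a hook like agent_fast_reply ultimately applies a transformation to the outgoing reply text. The sketch below illustrates the idea with a local SSN-redaction stub standing in for the call to the Admina proxy:

```python
import re

# Illustrative stand-in for the plugin's output check: in the real
# plugin, the reply text is sent to the Admina proxy for validation.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def govern_reply(text):
    """Redact obvious PII from an agent reply before it reaches the user."""
    return SSN_RE.sub("[PII REDACTED]", text)
```

The same transformation pattern applies at each hook: the plugin receives the in-flight content, returns a governed version, and the Cat continues with the result.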

Install

Install the Admina plugin from the Cheshire Cat plugin registry or manually:

# Copy the plugin into your Cat's plugins directory
cp -r admina-cheshire-cat-plugin/ cat/plugins/admina-governance/

# Set the Admina proxy URL in the plugin settings
# via the Cat admin UI or in the plugin's settings.json:
{
  "admina_url": "http://localhost:8080",
  "api_key": "your-key"
}

GuardrailsAI

Admina provides a GuardrailsAI Guard plugin that wraps popular validators with Admina's governance layer. All inference runs locally by default โ€” no data leaves your infrastructure.

Install

pip install admina[guardrailsai]

Wrapped validators

  • toxic_language – Detects and blocks toxic or harmful language in prompts and completions
  • detect_pii – Identifies and redacts personally identifiable information
  • detect_jailbreak – Catches prompt injection and jailbreak attempts

Example

from admina.integrations.guardrailsai import AdminaGuard

guard = AdminaGuard(
    admina_url="http://localhost:8080",
    validators=["toxic_language", "detect_pii", "detect_jailbreak"],
)

# Validate a prompt before sending to the LLM
result = guard.validate(
    "Tell me the SSN of John Smith"
)

# result.outcome → "fail"
# result.validated_output → "[PII REDACTED]"

Local-only inference: All GuardrailsAI validators run locally by default. No prompts or completions are sent to external services. Configure GUARDRAILS_REMOTE=true only if you want to use GuardrailsAI's hosted validators.