Integrations
Admina integrates with the most popular AI agent frameworks and orchestration tools. Every integration routes governance through the Admina proxy, with no code changes required in your existing agents.
OpenClaw
Admina ships with a ready-to-use OpenClaw skill that routes all MCP tool calls from your agent through the Admina governance proxy. See the full OpenClaw Integration page for detailed setup instructions.
How it works
The admina-governance SKILL.md routes all agent actions through /api/v1/validate and /api/v1/audit. Governed action types include:
- llm_call – LLM prompt and completion governance
- shell_exec – Shell command execution
- file_write – File system write operations
- http_request – Outbound HTTP requests
- message_send – Agent-to-agent or agent-to-user messages
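If you are building a custom integration rather than using the skill, the same two endpoints can be called directly. The sketch below shows the general shape of such a call; the payload field names (action_type, content, session_id) are illustrative assumptions, not a documented schema — check the Admina API reference for the actual contract:

```python
import json
import urllib.request

ADMINA_URL = "http://localhost:8080"

def build_validate_payload(action_type, content, session_id="demo"):
    """Assemble a request body for /api/v1/validate.

    Field names here are assumptions for illustration only.
    """
    return {
        "action_type": action_type,  # e.g. "shell_exec", "llm_call"
        "content": content,
        "session_id": session_id,
    }

def validate(payload):
    """POST the payload to the Admina governance proxy."""
    req = urllib.request.Request(
        f"{ADMINA_URL}/api/v1/validate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Build a request body; validate(body) would send it to a running proxy
    body = build_validate_payload("shell_exec", "rm -rf /tmp/scratch")
    print(body["action_type"])
```

The same pattern applies to /api/v1/audit, which records the action's outcome after execution.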
Skill install
The setup.sh script rewrites ~/.openclaw/mcp.json to route all
MCP traffic through Admina:
# Install the Admina skill
cd openclaw-skill
chmod +x setup.sh
./setup.sh

# Restart the OpenClaw gateway
openclaw gateway restart
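The effect of the rewrite on an HTTP-based server entry can be pictured roughly as follows. This is an illustrative sketch only — the proxy path and exact fields are assumptions, not the adapter's actual output, which depends on your server definitions:

```
# Before: the agent talks to the MCP server directly
"search": { "url": "http://localhost:3001/mcp" }

# After: traffic is routed through the Admina proxy,
# which validates, audits, and forwards upstream
"search": { "url": "http://localhost:8080/proxy/search" }
```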
Adapter (manual)
For fine-grained control, run adapter.py directly to rewrite and restore
your MCP config:
# Rewrite mcp.json to use Admina as proxy
python adapter.py rewrite --config ~/.openclaw/mcp.json

# Restore original mcp.json
python adapter.py restore --config ~/.openclaw/mcp.json
Stdio bridge
For MCP servers that use the stdio transport (filesystem, npx-based servers), Admina provides a stdio bridge that wraps the server process and intercepts all JSON-RPC messages on stdin/stdout.
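The core of such a bridge is a read–inspect–forward loop over JSON-RPC lines. The sketch below illustrates the idea with a stub policy check in place of the real call to the Admina proxy; it is not the actual stdio_bridge.py implementation:

```python
import json

def inspect_jsonrpc(line, is_allowed):
    """Parse one JSON-RPC message and decide whether to forward it.

    `is_allowed` stands in for a real call to the Admina proxy.
    Returns the line unchanged if allowed, or a JSON-RPC error
    response if the governance policy blocks it.
    """
    msg = json.loads(line)
    method = msg.get("method", "")
    if is_allowed(method, msg.get("params", {})):
        return line  # forward unchanged to the wrapped server's stdin
    return json.dumps({
        "jsonrpc": "2.0",
        "id": msg.get("id"),
        "error": {"code": -32000, "message": "blocked by governance policy"},
    })

# Stub policy: block write_file tool calls outside /home
def demo_policy(method, params):
    args = params.get("arguments", {})
    return not (
        method == "tools/call"
        and params.get("name") == "write_file"
        and not str(args.get("path", "")).startswith("/home")
    )

blocked = inspect_jsonrpc(
    json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": "write_file",
                           "arguments": {"path": "/etc/passwd"}}}),
    demo_policy,
)
```

In the real bridge, the wrapped server process is spawned as a subprocess and both its stdin and stdout streams are relayed through a check like this.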
# In mcp.json, wrap stdio servers with the bridge
{
"mcpServers": {
"filesystem": {
"command": "python",
"args": [
"/path/to/admina/openclaw-adapter/stdio_bridge.py",
"--",
"npx", "-y", "@modelcontextprotocol/server-filesystem", "/home"
]
}
}
}
LangChain
The AdminaCallbackHandler is a drop-in callback for ChatOpenAI
and any other LangChain model. It governs all LLM and tool events through Admina without
changing your existing chain logic.
Governed events
- on_llm_start – Validates prompts before they reach the model
- on_llm_end – Scans completions for PII, toxicity, compliance
- on_tool_start – Firewall check before tool execution
- on_tool_end – Audit trail of tool results
Configuration
The handler accepts the following parameters:
- admina_url – Admina proxy URL (default: http://localhost:8080)
- api_key – API key for authentication
- session_id – Session identifier for tracking
- pii_redaction – Enable PII redaction (Data Sovereignty domain)
- firewall – Enable firewall (Agent Security domain)
- loop_detection – Enable loop breaker (Agent Security domain)
- on_block – Behavior when a request is blocked (raise or warn)
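The on_block setting decides what happens when Admina rejects a request. The stub below sketches the two modes for illustration — the exception name and behavior are assumptions, not the handler's actual implementation, which lives in admina.integrations.langchain:

```python
import warnings

class GovernanceBlocked(Exception):
    """Illustrative stand-in for the error raised on a blocked request."""

def handle_block(reason, on_block="raise"):
    """Sketch of the two on_block behaviors.

    "raise" aborts the chain with an exception; "warn" emits a
    warning and lets the chain continue with the request dropped.
    """
    if on_block == "raise":
        raise GovernanceBlocked(reason)
    warnings.warn(f"Admina blocked request: {reason}")
    return None

try:
    handle_block("PII detected in prompt", on_block="raise")
except GovernanceBlocked as exc:
    print(f"blocked: {exc}")
```

Use raise in pipelines where a blocked request should fail loudly, and warn in exploratory or batch settings where one rejection should not abort the whole run.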
Example
# pip install admina[langchain]
from admina.integrations.langchain import AdminaCallbackHandler
from langchain_openai import ChatOpenAI

handler = AdminaCallbackHandler(
    admina_url="http://localhost:8080",
    api_key="your-key",
    pii_redaction=True,
    firewall=True,
)

llm = ChatOpenAI(callbacks=[handler])

# Every call to llm.invoke() is now governed
response = llm.invoke("Summarize this document")
CrewAI
Admina provides two callbacks for CrewAI:
AdminaStepCallback fires after each agent step, and
AdminaTaskCallback fires after task completion.
Both support per-agent configuration.
Callbacks
- AdminaStepCallback – Governs each agent step (tool calls, LLM calls, reasoning)
- AdminaTaskCallback – Audits completed tasks with full context (inputs, outputs, agent info)
Example
# pip install admina[crewai]
from admina.integrations.crewai import AdminaStepCallback, AdminaTaskCallback
from crewai import Crew, Agent, Task

researcher = Agent(name="Researcher", ...)
writer = Agent(name="Writer", ...)

crew = Crew(
    agents=[researcher, writer],
    tasks=[...],
    step_callback=AdminaStepCallback(
        admina_url="http://localhost:8080",
    ),
    task_callback=AdminaTaskCallback(
        admina_url="http://localhost:8080",
    ),
)

# Kick off – every step and task is governed
result = crew.kickoff()
n8n
The n8n-nodes-admina package adds three governance nodes to your
n8n workflows:
Install
npm install n8n-nodes-admina
Nodes
- AdminaGovern – Inline governance node. Place it before any AI node in your workflow. Validates inputs, blocks non-compliant requests, and adds governance metadata to the output.
- AdminaAudit – Passive logging node. Place it after any AI node to log all requests and responses to the Admina audit trail without blocking.
- AdminaDashboard – WebSocket trigger node. Streams real-time governance events from Admina into n8n, enabling reactive workflows (alerts, Slack notifications, compliance reports).
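A typical placement is AdminaGovern in front of the AI node and AdminaAudit behind it. The workflow fragment below sketches that wiring; the node type strings and parameter names are hypothetical illustrations of the general n8n workflow shape, not the package's actual identifiers:

```json
{
  "nodes": [
    { "name": "Admina Govern", "type": "n8n-nodes-admina.adminaGovern",
      "parameters": { "adminaUrl": "http://localhost:8080" } },
    { "name": "OpenAI", "type": "@n8n/n8n-nodes-langchain.openAi" },
    { "name": "Admina Audit", "type": "n8n-nodes-admina.adminaAudit" }
  ],
  "connections": {
    "Admina Govern": { "main": [[{ "node": "OpenAI", "type": "main", "index": 0 }]] },
    "OpenAI": { "main": [[{ "node": "Admina Audit", "type": "main", "index": 0 }]] }
  }
}
```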
Cheshire Cat AI
Admina integrates with Cheshire Cat AI as a Python plugin. The plugin hooks into the Cat's event system to govern agent actions at three critical points.
Governed hooks
- agent_fast_reply – Intercepts the agent's reply before it is sent to the user. Enables PII redaction, toxicity filtering, and compliance checks on outputs.
- before_cat_sends_message – Final checkpoint before the message leaves the Cat. Applied after all other plugins have processed the response.
- before_cat_recalls_memories – Governs memory recall to prevent leaking sensitive data from the vector store.
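Inside the plugin, each hook is a small function that passes the payload through Admina before letting the Cat continue. The sketch below shows the shape with a stub check in place of the real HTTP call to the proxy; in an actual plugin, the function would be registered through the Cat's hook mechanism rather than called directly:

```python
import re

def govern_output(text, check):
    """Run an outgoing message through a governance check.

    `check` stands in for a POST to Admina's /api/v1/validate;
    it returns the (possibly redacted) text, or None if blocked.
    """
    result = check(text)
    if result is None:
        return "This reply was blocked by governance policy."
    return result

# Stub check: redact anything shaped like a US SSN
def demo_check(text):
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[PII REDACTED]", text)

safe = govern_output("My SSN is 123-45-6789", demo_check)
```

The same pattern backs both agent_fast_reply and before_cat_sends_message: validate, redact, or replace the message before it leaves the Cat.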
Install
Install the Admina plugin from the Cheshire Cat plugin registry or manually:
# Copy the plugin into your Cat's plugins directory
cp -r admina-cheshire-cat-plugin/ cat/plugins/admina-governance/

# Set the Admina proxy URL in the plugin settings
# via the Cat admin UI or in the plugin's settings.json:
{
  "admina_url": "http://localhost:8080",
  "api_key": "your-key"
}
GuardrailsAI
Admina provides a GuardrailsAI Guard plugin that wraps popular validators with Admina's governance layer. All inference runs locally by default – no data leaves your infrastructure.
Install
pip install admina[guardrailsai]
Wrapped validators
- toxic_language – Detects and blocks toxic or harmful language in prompts and completions
- detect_pii – Identifies and redacts personally identifiable information
- detect_jailbreak – Catches prompt injection and jailbreak attempts
Example
from admina.integrations.guardrailsai import AdminaGuard

guard = AdminaGuard(
    admina_url="http://localhost:8080",
    validators=["toxic_language", "detect_pii", "detect_jailbreak"],
)

# Validate a prompt before sending to the LLM
result = guard.validate(
    "Tell me the SSN of John Smith"
)

# result.outcome -> "fail"
# result.validated_output -> "[PII REDACTED]"
Set GUARDRAILS_REMOTE=true only if you want to use GuardrailsAI's hosted validators.