Universal

Drop-in governance facade for any AI agent. Add multi-persona deliberation in 3 lines of code.

Overview

@consensus-tools/universal adds a governance layer to any AI agent in a few lines of code. It has two operating modes:

  • Regex mode (default): Three rule-based reviewers (security, compliance, user-impact) evaluate tool output using pattern matching — no LLM calls, no network requests, sub-millisecond overhead.
  • LLM Persona mode (when config.model is provided): Multiple AI personas deliberate on each tool call before execution, with reputation-weighted voting, automatic persona respawn, and risk-tier classification.
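The regex mode can be pictured with a small self-contained sketch. Note that the reviewer patterns, names, and the `deliberate`/`majority` helpers below are invented for illustration; they are not the library's internals:

```typescript
// Illustrative sketch of rule-based reviewers plus majority voting.
// The deny-patterns below are assumptions, not the package's actual rules.
type Vote = { reviewer: string; approve: boolean };

const reviewers: Array<{ name: string; deny: RegExp }> = [
  { name: "security", deny: /(rm -rf|DROP TABLE|secret)/i },
  { name: "compliance", deny: /\b\d{3}-\d{2}-\d{4}\b/ }, // SSN-like pattern
  { name: "user-impact", deny: /(delete_all|mass_email)/i },
];

// Each reviewer votes by pattern-matching the tool name and arguments.
function deliberate(toolName: string, args: Record<string, unknown>): Vote[] {
  const text = `${toolName} ${JSON.stringify(args)}`;
  return reviewers.map((r) => ({ reviewer: r.name, approve: !r.deny.test(text) }));
}

// Simple majority aggregation over the collected votes.
function majority(votes: Vote[]): boolean {
  const approvals = votes.filter((v) => v.approve).length;
  return approvals > votes.length / 2;
}

// A benign call passes all three reviewers.
majority(deliberate("send_email", { to: "user@example.com" })); // true
```

Because everything is pattern matching, a pass like this costs no LLM calls and no network round trips.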

Installation

pnpm add @consensus-tools/universal

Optional peer dependencies (install only what you need):

pnpm add @consensus-tools/langchain   # LangChain adapter
pnpm add @consensus-tools/ai-sdk      # Vercel AI SDK adapter
pnpm add @consensus-tools/mcp         # MCP adapter

Quick start

Regex mode — any tool executor

import { consensus } from "@consensus-tools/universal";

async function myExecutor(toolName: string, args: Record<string, unknown>) {
  // call the actual tool
}

const safe = consensus.wrap(myExecutor);

const result = await safe("send_email", {
  to: "user@example.com",
  body: "Your invoice is attached.",
});

Objects with .execute, .invoke, or .call methods are also accepted.
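How those shapes might be normalized internally can be sketched as follows (an assumption about the mechanism, not the package's code):

```typescript
type ToolExecutor = (toolName: string, args: Record<string, unknown>) => Promise<unknown>;
type Wrappable =
  | ToolExecutor
  | { execute: ToolExecutor }
  | { invoke: ToolExecutor }
  | { call: ToolExecutor };

// Resolve whichever callable shape was passed in to a plain executor.
function toExecutor(w: Wrappable): ToolExecutor {
  if (typeof w === "function") return w;
  if ("execute" in w) return w.execute.bind(w);
  if ("invoke" in w) return w.invoke.bind(w);
  return w.call.bind(w);
}
```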

LLM Persona mode — multi-model deliberation

Provide a model adapter to activate LLM Persona Mode:

import { consensus } from "@consensus-tools/universal";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const model = async (messages) => {
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  const userMsgs = messages
    .filter((m) => m.role === "user")
    .map((m) => ({ role: "user" as const, content: m.content }));
  const res = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 512,
    system,
    messages: userMsgs,
  });
  // content blocks are a union type, so narrow before reading .text
  const block = res.content[0];
  return block.type === "text" ? block.text : "";
};

const safe = consensus.wrap(myExecutor, {
  model,
  policy: "weighted_reputation",
  pack: "governance",
  mode: "enforce",
});

Shadow mode — runs governance but never blocks. Useful for evaluating before enforcing:

const safe = consensus.wrap(myExecutor, {
  model,
  mode: "shadow",
  onDecision: (decision) => metrics.track("consensus", decision),
});

API reference

consensus.wrap(wrappable, config?)

function wrap(wrappable: Wrappable, config?: Partial<UniversalConfig>): ToolExecutor
Parameter  Type                      Description
wrappable  Wrappable                 Function, or object with .execute / .invoke / .call
config     Partial<UniversalConfig>  Optional configuration

Returns a ToolExecutor: (toolName: string, args: Record<string, unknown>) => Promise<unknown>.

In LLM mode, the returned executor also has .feedback(signal) for sending human feedback to the reputation system.
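The reputation mechanics behind .feedback() might look roughly like this self-contained sketch. The update rule, signal shape, and class below are assumptions for illustration, not the package's ReputationManager:

```typescript
// Toy reputation store: scores in [0, 1], nudged by human feedback.
type FeedbackSignal = { persona: string; positive: boolean };

class TinyReputation {
  private scores = new Map<string, number>();

  get(persona: string): number {
    return this.scores.get(persona) ?? 0.5; // neutral starting score
  }

  // Move the score 10% of the way toward 1 (positive) or 0 (negative).
  feedback({ persona, positive }: FeedbackSignal): number {
    const cur = this.get(persona);
    const next = positive ? cur + 0.1 * (1 - cur) : cur - 0.1 * cur;
    this.scores.set(persona, next);
    return next;
  }

  // Reputation-weighted voting: approvals count in proportion to each
  // persona's current score rather than one vote each.
  weightedApproval(votes: Array<{ persona: string; approve: boolean }>): boolean {
    let total = 0;
    let yes = 0;
    for (const v of votes) {
      const w = this.get(v.persona);
      total += w;
      if (v.approve) yes += w;
    }
    return yes > total / 2;
  }
}
```

Under a scheme like this, consistently downvoted personas sink toward the respawn threshold while trusted ones carry more weight.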

Framework adapters

// LangChain — returns a ConsensusGuardCallbackHandler
const handler = await consensus.langchain(null, { policy: "supermajority" });
const executor = AgentExecutor.fromAgentAndTools({ agent, tools, callbacks: [handler] });

// Vercel AI SDK
const safeGenerate = await consensus.aiSdk(generate, { policy: "majority" });

// MCP server
const server = await consensus.mcp({ policy: "unanimous", failPolicy: "closed" });

Configuration

Option            Default           Description
policy            "majority"        Aggregation strategy. Supports "majority", "supermajority", "unanimous", "threshold:X", and all 9 core policy types.
guards            ["agent_action"]  Guard domain names to use as reviewers
failPolicy        "closed"          "closed" throws ConsensusBlockedError on block; "open" allows through (dev/test only)
storage           "memory"          "memory" or an IStorage instance for persisting decisions
logger            true              true, false, or a custom (event) => void handler
onDecision        (none)            Called after every deliberation
onError           (none)            Called on unexpected errors
model             (none)            Activates LLM mode. Provider-agnostic (messages) => Promise<string>
pack              "default"         Persona pack: "default" (3 personas) or "governance" (5 personas)
personas          (none)            Custom persona array (overrides pack)
mode              "enforce"         "enforce" blocks on rejection; "shadow" logs but never blocks
riskTiers         (none)            Per-tool risk tier overrides, e.g. { "my_tool": "low" }
personaTimeout    3000              Per-persona LLM timeout in ms
respawnThreshold  0.15              Respawn a persona when its reputation falls below this score
reputationStore   (none)            IStorage instance for persisting reputation across restarts
onFeedback        (none)            Called after .feedback() updates reputation
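As an illustration of how a "threshold:X" policy string could relate to the named policies, here is a self-contained sketch. The parsing and the exact ratios assigned to each named policy are assumptions based on the names, not the library's definitions:

```typescript
// Map a policy string to the minimum approval ratio it demands.
// "majority" requires strictly more than half; "supermajority" uses 2/3;
// "unanimous" requires every vote; "threshold:0.8" requires 80%.
function approvalRatioRequired(policy: string): number {
  if (policy === "majority") return 0.5 + Number.EPSILON;
  if (policy === "supermajority") return 2 / 3;
  if (policy === "unanimous") return 1;
  const m = /^threshold:(\d*\.?\d+)$/.exec(policy);
  if (m) return Number(m[1]);
  throw new Error(`invalid policy: ${policy}`); // cf. ConfigError
}

// Check a vote tally against the chosen policy.
function passes(policy: string, approvals: number, total: number): boolean {
  return approvals / total >= approvalRatioRequired(policy);
}
```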

Risk tier classification (LLM mode)

Tools are auto-classified by name. High-risk tools get full LLM deliberation; low-risk tools fast-path through regex only.

Tier  Pattern examples                                              Fallback
High  send, delete, write, deploy, merge, grant, execute, transfer  Unmatched names fall back to high
Low   get, list, search, read, check, verify, view, describe

Override with riskTiers: { "my_tool": "low" }.
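A self-contained sketch of name-based classification follows; the exact pattern list and matching rules inside the package's classifyTool may differ:

```typescript
// Substring patterns drawn from the tier table above; illustrative only.
const HIGH = /(send|delete|write|deploy|merge|grant|execute|transfer)/i;
const LOW = /(get|list|search|read|check|verify|view|describe)/i;

// Unmatched names fall back to "high", the safer default.
function classifyTool(name: string): "high" | "low" {
  if (HIGH.test(name)) return "high";
  if (LOW.test(name)) return "low";
  return "high";
}
```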

Error types

Error                   When thrown
ConsensusBlockedError   Deliberation blocked and failPolicy is "closed"
MissingDependencyError  Optional peer dependency not installed (langchain / ai-sdk / mcp)
ConfigError             Invalid policy name
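The failPolicy semantics can be sketched in isolation; this is a toy model of the "closed" vs. "open" behavior, not the library's implementation:

```typescript
// Minimal stand-in for the error type described above.
class ConsensusBlockedError extends Error {
  constructor(toolName: string) {
    super(`blocked by consensus: ${toolName}`);
    this.name = "ConsensusBlockedError";
  }
}

// "closed": a blocked call throws before the tool runs.
// "open": the call proceeds despite the block (dev/test only).
async function applyDecision<T>(
  approved: boolean,
  failPolicy: "closed" | "open",
  toolName: string,
  run: () => Promise<T>,
): Promise<T> {
  if (!approved && failPolicy === "closed") throw new ConsensusBlockedError(toolName);
  return run();
}
```

Fail-closed is the default because an agent that silently proceeds past a block defeats the point of governance.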

Exports reference

Export                  Kind      Description
consensus               Object    Main facade with .wrap(), .langchain(), .aiSdk(), .mcp()
ConsensusBlockedError   Class     Thrown when deliberation blocks
MissingDependencyError  Class     Thrown when an optional peer dependency is missing
ConfigError             Class     Thrown for invalid configuration
ReputationManager       Class     Per-persona reputation tracking (LLM mode)
classifyTool            Function  Classify a tool name into a risk tier
deliberate              Function  LLM persona deliberation engine
Wrappable               Type      ToolExecutor | { execute } | { invoke } | { call }
ToolExecutor            Type      (toolName: string, args: Record<string, unknown>) => Promise<unknown>
UniversalConfig         Type      Full configuration interface
ModelAdapter            Type      (messages: ModelMessage[]) => Promise<string>
LlmDecisionResult       Type      Decision result from LLM persona deliberation
FeedbackSignal          Type      Human feedback signal for reputation updates
  • wrapper: used by consensus.wrap() for regex mode
  • personas: persona packs and the reputation engine for LLM mode
  • guards: guard templates used as reviewers in regex mode