AI Specialists
Not one AI. Paralegent AI deploys 18+ domain experts.
ScopeAnalyst knows deliverables. IPRightsAnalyst knows ownership clauses. TerminationAnalyst knows exit rights. Each specialist agent analyzes only the clauses in its domain — and the orchestrator resolves conflicts between them using confidence scoring.
Built on LangGraph + Google ADK. LLM-agnostic: Azure OpenAI, AWS Bedrock, or Google Vertex AI.

How It Works
Domain expertise per category. Not one model for everything.
Each of the 18+ agents is scoped to a single legal domain drawn from your 80-150-term rulebook. Agents analyze in parallel, and the orchestrator resolves conflicts between their findings.
Specialization
Domain expertise per legal category
Each specialist agent is dedicated exclusively to one legal domain. WarrantyLiabilityAnalyst receives only warranty and liability rulebook terms. IPRightsAnalyst receives only intellectual property terms. This focused context eliminates the cross-domain confusion that causes 69-88% hallucination rates in general-purpose LLMs (Stanford, 2024). The system scales to 18+ agents based on your rulebook's 80-150 terms.

Parallel Processing
All agents analyze simultaneously
Contract matching routes each chunk to 2-3 relevant agents in 15-20 seconds using 1536-dimensional semantic routing. Then all agents analyze in parallel — not one after another. Full analysis across all 18+ agents completes in 2-8 minutes. A sequential single-model system processing the same 80-page MSA takes 5-10 minutes. Smart routing also reduces LLM API calls by approximately 75% — each chunk goes only to the agents whose domain matches.
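The parallel fan-out described above can be sketched in a few lines. This is an illustrative sketch only: the agent names come from this page, but the `analyze` coroutine and result shape are assumptions, not Paralegent AI's actual API.

```python
import asyncio

# Hypothetical sketch: fan one routed chunk out to its matched agents
# and gather their findings concurrently, not one after another.
async def analyze(agent: str, chunk: str) -> dict:
    await asyncio.sleep(0)  # stands in for a real LLM call
    return {"agent": agent, "chunk": chunk, "classification": "GREEN"}

async def analyze_chunk(chunk: str, matched_agents: list[str]) -> list[dict]:
    # asyncio.gather runs all matched agents concurrently.
    return await asyncio.gather(*(analyze(a, chunk) for a in matched_agents))

results = asyncio.run(analyze_chunk(
    "Supplier warrants the goods for 24 months.",
    ["WarrantyLiabilityAnalyst", "CommercialAnalyst"],
))
```

In production the `asyncio.sleep` placeholder would be a provider call, but the shape is the same: one chunk, 2-3 concurrent agent invocations, one list of findings back.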

Orchestration
Confidence-based conflict resolution
Multiple agents can flag the same clause with different 3-tier classifications. The orchestrator resolves every conflict using confidence scoring. When WarrantyLiabilityAnalyst says ORANGE (0.72) and ComplianceAnalyst says RED (0.91), the higher-confidence classification wins. Both rationales are preserved so the reviewer sees the commercial view and the regulatory view side by side.

The Problem
Stanford (2024) measured 69-88% error rates when general-purpose LLMs attempt legal analysis.
The core failure is context dilution. A single model must hold warranty law, IP law, compliance regulations, and 18+ more legal domains in one context window. Paralegent AI eliminates this by giving each specialist agent only its domain context — the same way law firms staff specialist attorneys on complex deals.
Agent Registry
Meet every agent by name.
Each specialist is a standalone AI agent with deep expertise in one legal domain. The system scales dynamically to 18+ agents based on your 80-150-term rulebook.
| Agent | Legal Domain |
|---|---|
| ScopeAnalyst | Scope of Supply |
| CommercialAnalyst | Commercial Terms |
| DeliveryAnalyst | Delivery & Acceptance |
| WarrantyLiabilityAnalyst | Warranty & Liability |
| IPRightsAnalyst | Intellectual Property |
| ComplianceAnalyst | Regulatory Compliance |
| ConfidentialityAnalyst | Confidentiality & Data |
| InsuranceAnalyst | Insurance |
| TerminationAnalyst | Termination |
| ForceMajeureAnalyst | Force Majeure |
| DisputeAnalyst | Dispute Resolution |
| Scoring Agents | Confidence Scoring |
| Orchestrator | Conflict Resolution |
18+ is the branded minimum — the agent count scales dynamically with your rulebook. Add a category to your rulebook, and a new specialist agent is created automatically. No code changes required.
Architecture Comparison
Why 18+ specialists beat one generalist.
A single LLM analyzing an 80-page MSA must hold warranty law, IP law, compliance regulations, and 18+ more domains in one context window. Multi-agent specialization eliminates context dilution entirely.
| | Paralegent AI (18+ Agents) | Single-Model AI | Manual Review |
|---|---|---|---|
| Architecture | 18+ named specialists + orchestrator | 1 general-purpose LLM | Individual attorneys |
| Domain Expertise | Deep — each agent sees only its legal category | Shallow — one model covers all | Deep but inconsistent |
| Context Window | Focused: domain terms + relevant clauses only | Saturated: entire contract + all rules | Expert memory |
| Clause Routing | 1536-dim semantic matching — 2-3 agents per clause | No routing — every clause processed identically | Manual assignment |
| Conflict Resolution | Confidence scoring with rationale preservation | None — single opinion per clause | Senior partner judgment |
| Error Rate | Cross-agent validation catches edge cases | 69-88% hallucination rate (Stanford, 2024) | Varies by experience |
| LLM API Cost | ~75% reduction via smart routing | Full cost — every clause to one model | Billable hours |
| Speed | All agents in parallel (2-8 minutes) | Sequential — 5-10 min per contract | 20-30 hours per MSA |
| Scalability | Add agents by adding rulebook categories (80-150 terms) | Prompt engineering only | Hire more associates |
| Framework | LangGraph + Google ADK | Custom or no framework | N/A |
What Changes
When every legal domain has a dedicated expert.
18+ agents, each a domain specialist — ScopeAnalyst, CommercialAnalyst, IPRightsAnalyst, and more — one per legal category in your 80-150-term rulebook.
Parallel analysis, not sequential — All agents analyze simultaneously, completing full analysis in 2-8 minutes — turning roughly 30 hours of manual review into about 30 minutes per 80-page MSA.
Cross-agent conflict resolution — When two agents flag the same clause differently, the orchestrator picks the higher-confidence classification. Both rationales preserved.
75% fewer LLM API calls — 1536-dimensional semantic routing sends each chunk to only 2-3 relevant agents. 400 contract chunks generate approximately 800-1,200 calls, not thousands.
LLM-agnostic deployment — Built on LangGraph + Google ADK with support for Azure OpenAI, AWS Bedrock, and Google Vertex AI — switch providers by changing a configuration, not source code.
Paralegent AI deploys 18+ specialist agents that analyze contracts in parallel — completing full analysis in 2-8 minutes with 75% fewer LLM API calls.
See It Live
See 18+ agents analyze your contract.
Request a demo — we run your MSA through all agents and show the orchestrator resolving conflicts in real time.
FAQ
Multi-agent architecture questions.
Technical and operational questions about the 18+ specialist agents, semantic routing, orchestrator conflict resolution, and the LangGraph + Google ADK foundation.
What are the specialist agents and how were they chosen?
The specialist agents map to the standard categories in enterprise contract rulebooks: ScopeAnalyst, CommercialAnalyst, DeliveryAnalyst, WarrantyLiabilityAnalyst, IPRightsAnalyst, ComplianceAnalyst, ConfidentialityAnalyst, InsuranceAnalyst, TerminationAnalyst, ForceMajeureAnalyst, and DisputeAnalyst. Each corresponds to a legal domain that requires distinct expertise. The number is not fixed — the system creates agents dynamically based on your rulebook categories, scaling to 18+ for organizations with granular rulebooks.
How does semantic routing decide which agents receive a clause?
The orchestrator converts each contract chunk and every rulebook term into 1536-dimensional vector embeddings. It calculates cosine similarity between the chunk and all rulebook terms, then routes the chunk only to the 2-3 agents whose domain terms have the highest similarity scores. A warranty limitation clause scores highest against WarrantyLiabilityAnalyst terms and may also route to CommercialAnalyst if pricing-related penalty language is detected — but it never reaches DisputeAnalyst or ForceMajeureAnalyst.
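The routing step above reduces to cosine similarity plus a top-k cut. The toy sketch below uses 3-dimensional vectors instead of the real 1536-dimensional embeddings, and the per-agent domain vectors are invented for illustration; the embedding step itself is assumed, not shown.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# One representative vector per agent's rulebook terms (toy values).
domain_vectors = {
    "WarrantyLiabilityAnalyst": [0.9, 0.1, 0.0],
    "CommercialAnalyst":        [0.6, 0.7, 0.1],
    "DisputeAnalyst":           [0.0, 0.1, 0.9],
}

def route(chunk_vec, top_k=2):
    # Rank agents by similarity to the chunk, keep only the top 2-3.
    scored = sorted(domain_vectors.items(),
                    key=lambda kv: cosine(chunk_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# A warranty-flavored chunk reaches the two closest domains only;
# DisputeAnalyst never sees it.
matched = route([0.8, 0.3, 0.0])
```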
What happens when two agents flag the same clause differently?
The orchestrator resolves conflicts using confidence scoring. Example: WarrantyLiabilityAnalyst flags a liability clause as ORANGE with 0.72 confidence, but ComplianceAnalyst flags the same clause as RED with 0.91 confidence due to a regulatory violation. The RED classification wins because 0.91 exceeds 0.72. Both rationales are preserved in the final output so the reviewer sees why each agent reached its conclusion. This prevents false negatives where a generalist model might miss the regulatory angle entirely.
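The resolution rule in that example is simple enough to sketch directly: highest confidence wins, all rationales survive. The finding structure below is illustrative, not the system's actual schema; the numbers mirror the worked example above.

```python
# Two agents flag the same clause with different tiers and confidences.
findings = [
    {"agent": "WarrantyLiabilityAnalyst", "tier": "ORANGE", "confidence": 0.72,
     "rationale": "Liability cap below commercial policy threshold."},
    {"agent": "ComplianceAnalyst", "tier": "RED", "confidence": 0.91,
     "rationale": "Cap conflicts with a non-waivable regulatory requirement."},
]

def resolve(findings):
    # Highest-confidence classification wins the tier...
    winner = max(findings, key=lambda f: f["confidence"])
    return {
        "tier": winner["tier"],
        "decided_by": winner["agent"],
        # ...but every agent's rationale is preserved for the reviewer.
        "rationales": [f["rationale"] for f in findings],
    }

resolved = resolve(findings)  # RED wins: 0.91 > 0.72
```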
Why does smart routing reduce LLM API costs by 75%?
Without smart routing, every clause would be sent to all agents — an 80-page MSA with approximately 400 chunks would generate thousands of LLM calls. With 1536-dimensional semantic routing, each chunk goes to only 2-3 relevant agents, reducing total calls to approximately 800-1,200. That is roughly 75% fewer API calls with lower token consumption per call due to focused, domain-specific prompts.
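The call-count arithmetic above is worth making explicit. Using the figures on this page (about 400 chunks, 2-3 routed agents per chunk, an 18-agent broadcast baseline), the saving actually comes out above the quoted ~75%, which reads as a conservative figure:

```python
chunks = 400                    # ~80-page MSA
agents = 18                     # broadcast baseline: every chunk to every agent
routed_min, routed_max = 2, 3   # agents per chunk with semantic routing

broadcast_calls = chunks * agents                           # 7,200 calls
routed_calls = (chunks * routed_min, chunks * routed_max)   # 800-1,200 calls

# Worst-case saving (3 agents per chunk) vs. the 18-agent broadcast:
savings = 1 - (chunks * routed_max) / broadcast_calls
```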
What framework powers the multi-agent orchestration?
Paralegent AI is built on LangGraph for agent workflow orchestration and Google ADK (Agent Development Kit) for agent lifecycle management. LangGraph provides the directed graph structure that routes contract chunks through specialist agents in parallel, manages state between analysis steps, and handles the final orchestrator aggregation. Google ADK provides the agent runtime, tool binding, and observability layer. Neither framework locks you into a specific LLM provider.
Can the agents run on different LLM providers?
Yes — the architecture is fully LLM-agnostic. Each agent calls the LLM through a provider abstraction layer. You can deploy all agents on Azure OpenAI, all on AWS Bedrock, all on Google Vertex AI, or even mix providers per agent if your organization requires it. Switching providers requires a configuration change, not a code change. Your legal team experiences zero disruption during a provider switch.
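A provider abstraction of this kind can be sketched as a registry keyed by configuration. Everything here is hypothetical (the class names, the `complete` method, the `llm_provider` config key); it illustrates the pattern, not Paralegent AI's real interface.

```python
# Agents call complete(); the concrete backend is picked by config.
class AzureOpenAIBackend:
    def complete(self, prompt: str) -> str:
        return f"[azure] {prompt[:20]}"

class BedrockBackend:
    def complete(self, prompt: str) -> str:
        return f"[bedrock] {prompt[:20]}"

PROVIDERS = {
    "azure_openai": AzureOpenAIBackend,
    "aws_bedrock": BedrockBackend,
}

def make_llm(config: dict):
    # Switching providers means changing this config value,
    # not touching any agent code.
    return PROVIDERS[config["llm_provider"]]()

llm = make_llm({"llm_provider": "azure_openai"})
```

Mixing providers per agent, as the answer above allows, would just mean passing a different config to each agent's `make_llm` call.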
How does specialization improve accuracy over a single LLM?
Stanford research (2024) measured 69-88% error rates when general-purpose LLMs attempt legal contract analysis. The core problem is context dilution — a single model must hold warranty law, IP law, compliance regulations, and more in one context window. Each Paralegent AI agent receives only its domain context, eliminating cross-domain confusion and producing findings that match how specialist attorneys actually review contracts.
Can I add custom specialist agents beyond the defaults?
Yes — the agent count scales dynamically with your rulebook. If your rulebook defines 15 legal categories, the system instantiates 15 specialist agents. If you add a "Sustainability & ESG" category to your rulebook, a new agent is created for that domain automatically. Organizations with granular rulebooks routinely run 18+ agents. You never modify code to add agents — you modify your rulebook, and the orchestrator adapts.
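The rulebook-to-agent mapping described above can be sketched as a simple factory: one specialist per category, created from data rather than code. The naming scheme and agent record below are illustrative assumptions.

```python
def build_agents(rulebook: dict) -> dict:
    # One specialist agent per rulebook category; adding a category
    # to the rulebook yields a new agent with no code change.
    agents = {}
    for category, terms in rulebook.items():
        name = category.title().replace(" ", "").replace("&", "") + "Analyst"
        agents[name] = {"domain": category, "terms": terms}
    return agents

rulebook = {
    "warranty liability": ["liability cap", "warranty period"],
    "sustainability esg": ["emissions reporting"],  # newly added category
}
agents = build_agents(rulebook)
# The new ESG category produced a specialist automatically.
```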
How fast is the multi-agent analysis compared to single-model systems?
All agents analyze in parallel, not sequentially. Contract matching (routing chunks to agents) completes in 15-20 seconds. Full parallel analysis across all agents completes in 2-8 minutes. A sequential single-model system processing the same contract would take 5-10 minutes because it must analyze each clause category one at a time. Parallel orchestration is the reason Paralegent AI delivers 40-50 findings within minutes, not hours.
Where do the agents run and who controls them?
All agents run in your own cloud environment — Azure, AWS, or Google Cloud. The deployed system is yours permanently as a one-time licensed asset. Your contracts and rulebook never leave your infrastructure. You use your own LLM API keys and your own compute. There is no shared multi-tenant environment and no vendor-hosted processing.


