OPEN-SOURCE · ENTERPRISE-GRADE

Metaprise LLM

America's third open-source large language model. Built on a hybrid Llama architecture and trained on 8+ years of financial, medical, and legal compliance data — purpose-built for regulated enterprise environments.

MODEL SPECIFICATION
Metaprise LLM
Open-Source Model with Proprietary Compliance Modifications
Architecture Hybrid Llama
Training Data 8+ Years
Domains Financial · Medical · Legal
License Open-Source
Version Lock Supported
Auditable Inference AuditChain
Deployment Cloud · Private · Air-Gap
AMERICA'S 3RD OPEN-SOURCE LLM
3rd · US open-source LLM
8+ · Years of training data
3 · Compliance domains
100% · Auditable inference

ARCHITECTURE

Hybrid Llama Architecture

Built on Meta's Llama foundation with proprietary modifications optimized for enterprise compliance reasoning. The hybrid architecture combines Llama's proven language understanding with domain-specific attention layers trained on regulated industry data.

ARCHITECTURE DECISIONS

Llama Foundation: Proven, open-source base architecture with strong multilingual support and instruction-following capability — the same foundation trusted by thousands of enterprises worldwide
Domain Attention Layers: Additional attention heads specifically trained on regulatory language patterns, compliance terminology, and structured financial/medical/legal reasoning
Structured Output Optimization: Fine-tuned for consistent structured outputs (JSON, tables, reports) that enterprise integrations require — not just conversational fluency
Long-Document Understanding: Extended context handling optimized for the long documents common in regulated industries — contracts, regulatory filings, clinical trial reports, and audit documentation
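Structured-output optimization only pays off if integrations actually enforce the contract. The sketch below shows one way an integration might validate a model's JSON compliance report before passing it downstream; the field names and the sample response string are illustrative assumptions, not the platform's actual schema.

```python
import json

# Hypothetical structured-output contract for a compliance report.
# Field names are illustrative assumptions.
REQUIRED_FIELDS = {"finding", "regulation", "severity"}

def parse_compliance_report(raw: str) -> dict:
    """Parse a model response expected to be a JSON compliance report."""
    report = json.loads(raw)  # JSONDecodeError (a ValueError) on non-JSON output
    missing = REQUIRED_FIELDS - report.keys()
    if missing:
        raise ValueError(f"report missing fields: {sorted(missing)}")
    return report

# Simulated model output; a real call would go through the Model Library API.
raw_response = ('{"finding": "Unreported SAR filing gap", '
                '"regulation": "FinCEN 31 CFR 1020.320", "severity": "high"}')
report = parse_compliance_report(raw_response)
print(report["severity"])  # high
```

Failing fast on a malformed or incomplete response is what makes structured outputs usable in automated enterprise pipelines, as opposed to best-effort parsing of conversational text.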

WHY OPEN-SOURCE MATTERS

Full weight inspection — your security team can audit every parameter
No vendor lock-in — deploy on any infrastructure, including air-gapped environments
Regulatory transparency — demonstrate to auditors exactly how the model works
Community contributions — benefit from the broader open-source ecosystem while maintaining enterprise-grade quality

TRAINING DATA

8+ Years of Compliance Data

General-purpose models are trained on internet-scale data — broad but shallow on regulated industries. Metaprise LLM is trained on 8+ years of deeply curated financial, medical, and legal compliance data, giving it native fluency in the language of regulated enterprise.

FINANCIAL COMPLIANCE

Regulatory Filings: SEC filings, OCC guidance, Federal Reserve regulations, FINRA rules, and Dodd-Frank compliance documentation
Transaction Patterns: AML/KYC documentation, suspicious activity reports, trade surveillance records, and compliance investigation workflows
Risk Assessment: Credit risk models, operational risk frameworks, Basel III/IV compliance requirements, and stress testing methodologies

MEDICAL COMPLIANCE

Clinical Documentation: Clinical notes, diagnostic coding (ICD-10/11), procedure documentation, and patient care workflows
Regulatory Standards: HIPAA compliance requirements, FDA guidelines, clinical trial protocols, and adverse event reporting
Operational Workflows: Prior authorization processes, claims adjudication, formulary management, and utilization review documentation

LEGAL COMPLIANCE

Contract Analysis: Commercial contracts, master service agreements, SLAs, NDAs, and regulatory licensing agreements
Case Law: Regulatory enforcement actions, compliance case precedents, and administrative proceedings
Policy Frameworks: Internal compliance policies, risk management frameworks, and governance documentation across regulated industries

ENTERPRISE FEATURES

Version Lock & Auditable Inference

Enterprise AI requires two guarantees that general-purpose models cannot provide: predictable behavior and complete auditability. Metaprise LLM delivers both as built-in capabilities, not afterthoughts.

VERSION LOCK

Frozen Behavior: Pin your agents to a specific model version. Even as the broader model ecosystem evolves, your agent behavior remains exactly the same — predictable and reproducible
Controlled Upgrades: Test new model versions in staging before promoting to production. Compare outputs side-by-side using the Observability layer's Offline Eval suite
Regulatory Compliance: Demonstrate to auditors that the model powering your compliance workflows has not changed since the last validation — a requirement in many regulated environments
Rollback Capability: If a new version introduces regressions, instantly roll back to the previously validated version with zero downtime
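The pin/upgrade/rollback lifecycle above can be sketched in a few lines. This is a minimal illustration of the concept, not the platform's API: the class, agent names, and version strings are all hypothetical.

```python
# Hypothetical sketch of per-agent version pinning with rollback.
class VersionLock:
    def __init__(self):
        self._pins = {}     # agent -> currently pinned model version
        self._history = {}  # agent -> stack of previously validated versions

    def pin(self, agent: str, version: str) -> None:
        """Pin an agent to a version, remembering the one it replaces."""
        if agent in self._pins:
            self._history.setdefault(agent, []).append(self._pins[agent])
        self._pins[agent] = version

    def resolve(self, agent: str) -> str:
        """Every inference for this agent uses exactly this version."""
        return self._pins[agent]

    def rollback(self, agent: str) -> str:
        """Return to the previously validated version."""
        self._pins[agent] = self._history[agent].pop()
        return self._pins[agent]

lock = VersionLock()
lock.pin("kyc-review-agent", "metaprise-llm-2025.01")
lock.pin("kyc-review-agent", "metaprise-llm-2025.04")  # staged upgrade
assert lock.resolve("kyc-review-agent") == "metaprise-llm-2025.04"
lock.rollback("kyc-review-agent")                      # regression found
assert lock.resolve("kyc-review-agent") == "metaprise-llm-2025.01"
```

The point of the sketch: the pinned version is the single source of truth at inference time, so upgrades become explicit, testable events rather than silent behavior drift.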

AUDITABLE INFERENCE

AuditChain Integration: Every inference pass is logged through AURA's AuditChain — SHA-256 hash-chained, tamper-proof, written synchronously
Full Provenance: For any model output, trace back to the exact input, model version, configuration parameters, and timestamp that produced it
Compliance Reporting: Generate audit reports showing every model decision for a given time period, agent, or mission — exportable for regulatory review
Real-Time Monitoring: Integrates with the Observability layer for continuous monitoring of inference quality, cost, and latency
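To make the tamper-evidence property concrete, here is a minimal SHA-256 hash chain in the spirit of AuditChain. The record fields and chaining scheme are assumptions for illustration; the production log format is defined by the platform.

```python
import hashlib
import json

# Minimal sketch of a SHA-256 hash-chained audit log. Each entry's hash
# covers its record plus the previous entry's hash, so editing any record
# invalidates every hash from that point forward.
def append_record(chain: list, record: dict) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    entry = {"record": record, "prev_hash": prev_hash, "hash": entry_hash}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_record(chain, {"model": "metaprise-llm", "version": "2025.01",
                      "input_hash": "abc123", "ts": 1700000000})
append_record(chain, {"model": "metaprise-llm", "version": "2025.01",
                      "input_hash": "def456", "ts": 1700000100})
assert verify_chain(chain)
chain[0]["record"]["input_hash"] = "tampered"  # any edit breaks the chain
assert not verify_chain(chain)
```

Writing entries synchronously, as the source describes, guarantees no inference completes without its audit record; the hash chain then makes after-the-fact edits detectable.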

Built for regulated environments

General-purpose models excel at broad tasks. Metaprise LLM is purpose-built for environments where compliance, auditability, and predictability are non-negotiable.

Capability | Metaprise LLM | General-Purpose Models
Compliance Domain Training | 8+ years financial, medical, legal | Internet-scale, broad coverage
Version Lock | Built-in, per-agent pinning | Limited or unavailable
Auditable Inference | SHA-256 hash-chained AuditChain | External logging required
Air-Gap Deployment | Local Ollama, fully offline | Cloud-dependent or complex setup
Regulatory Transparency | Open-source, full weight inspection | Closed-source, opaque
Structured Output Quality | Optimized for JSON, tables, reports | Conversational focus
Model Library Integration | Native, alongside 642 models | Standalone API

Run anywhere your data lives

Three deployment options aligned with the Metaprise Agent OS deployment modes. Choose the option that matches your data sovereignty and compliance requirements.

Cloud Inference

Fastest time to value

Access Metaprise LLM through the cloud API alongside 642 other models. No infrastructure management, instant access, pay-per-token billing.

Serverless, no provisioning required
Unified API with the full Model Library
Auto-scaling for production workloads
Best for: SMB, dev teams, prototyping

Private Deployment

Your VPC, your control

Deploy Metaprise LLM inside your own VPC with your API keys. Sensitive data stays on-premise while leveraging cloud compute for elastic scaling.

Runs in your VPC or private cloud
Data never leaves your perimeter
Hybrid routing: confidential data local, public data cloud
Best for: Regional banks, insurance, healthcare
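The hybrid-routing idea above reduces to a simple rule at request time. This sketch is hypothetical: the classification labels and endpoint names are illustrative assumptions, not the platform's routing configuration.

```python
# Hypothetical hybrid-routing rule: requests touching confidential data go
# to the in-VPC deployment; everything else may use the cloud endpoint.
CONFIDENTIAL_LABELS = {"pii", "phi", "mnpi"}  # personal, health, material nonpublic

def route_request(data_labels: set) -> str:
    """Return which inference endpoint should handle this request."""
    if data_labels & CONFIDENTIAL_LABELS:
        return "private-vpc"  # data never leaves your perimeter
    return "cloud"            # elastic compute, pay-per-token

assert route_request({"phi", "clinical-note"}) == "private-vpc"
assert route_request({"public-filing"}) == "cloud"
```

Keeping the routing decision to a deterministic, auditable rule (rather than a model judgment) matters in this setting: the rule itself can be reviewed and validated like any other compliance control.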

Air-Gapped (Ollama)

Zero network dependency

Run Metaprise LLM entirely offline via local Ollama deployment. Fully air-gapped, FedRAMP-ready, designed for the most sensitive environments.

Fully offline, zero external calls
FedRAMP Ready deployment architecture
Local Ollama runtime, customer-managed
Best for: Federal agencies, defense, large enterprises

Part of the Model Library: Metaprise LLM is available alongside 642 other models through the unified Model Library API. Use it as your primary model, or combine it with other models for specialized tasks — same API, same Dashboard, same Observability.

An LLM that speaks compliance natively.

Open-source, version-locked, and auditable. Purpose-built for regulated enterprise.