Your AI Universe.

You set the rules. You own the stack. Every user, every agent, every model β€” governed by policies you define and no one beneath you can override.

Total Sovereignty
Infinite Scale
You Make The Rules

The Universe Hierarchy


🌌

Universe

Your entire Helix deployment. Master policies, data residency, approved models, compliance posture. Nothing overrides this level.

πŸŒ€

Galaxies

A tenant (a client, a business division, an institution). Inherits Universe rules, adds its own. For example, a Finance Galaxy enforces HIPAA on top of company-wide SOC 2.

β˜€οΈ

Solar Systems

A business unit, team, or department within a Galaxy. Scoped tool access, model limits, budget caps, persona libraries.

πŸͺ

Worlds

A project, workflow, or application. Inherits everything above. Users here cannot change the rules β€” they can only work within them.

[Hierarchy diagram β€” PRIVIAN AI Core: 🌌 UNIVERSE (Acme Corp Β· master policies: SOC 2, GDPR) β†’ πŸŒ€ GALAXY (Finance Div Β· tenant: HIPAA, PCI) β†’ β˜€οΈ SOLAR SYSTEM (Risk Team Β· BU/dept: model limits, budget caps) β†’ πŸͺ WORLD (Fraud Detection Β· project/workflow/app Β· inherits all above)]

Policies flow DOWN. Users cannot override what their Galaxy, Solar System, or Universe has set. Companies control their reality.
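
The downward-only flow can be sketched in a few lines of Python. Everything here is illustrative β€” the tier fields, model names, and merge function are assumptions, not the Helix API:

```python
# Sketch of downward-flowing policy inheritance. Field and model names
# are illustrative, not the Helix API.

def effective_policies(tiers):
    """Merge tiers ordered Universe -> Galaxy -> Solar System -> World.
    A child may narrow the allowed set or add deny rules, but it can
    never relax or remove anything a parent has already set."""
    allowed_models = None        # None = no restriction declared yet
    denied_tools = set()
    for tier in tiers:
        models = tier.get("allowed_models")
        if models is not None:
            # A child can only intersect (narrow) the allowed set.
            allowed_models = (set(models) if allowed_models is None
                              else allowed_models & set(models))
        # Deny rules only accumulate on the way down.
        denied_tools |= set(tier.get("denied_tools", ()))
    return {"allowed_models": allowed_models, "denied_tools": denied_tools}

universe = {"allowed_models": ["helix-analyst-70b", "claude", "gpt-4o"]}
galaxy   = {"allowed_models": ["helix-analyst-70b", "claude"],
            "denied_tools": ["web-search"]}
world    = {"denied_tools": ["shell"]}   # may add denies, nothing else

merged = effective_policies([universe, galaxy, world])
# gpt-4o is gone: the Galaxy narrowed the set, and the World cannot restore it.
```

The asymmetry is the point: intersection and union are both monotone downward, so no child scope can widen what a parent closed.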

Explore Universe Architecture β†’
Live Council Session Β· GC-2026-0312

Councils. Policy. Control.

Every governance decision is made by a Council β€” a circle of humans and AI advisors who vote, log, and enforce policy at every tier of your deployment.

OPA Policy Engine

Every request evaluated against Open Policy Agent rules before execution β€” deny policies enforced in milliseconds with zero model invocation.

Immutable Audit Trail

Every council vote, policy decision, and blocked request written to a tamper-evident log. Exportable to SIEM β€” Splunk, Elastic, Datadog.

Hierarchical Enforcement

Councils exist at Universe, Galaxy, and Solar System tiers. Lower councils can only restrict β€” they can never override the tier above them.

πŸ›‘οΈ Data Sovereignty

Your Data Never Leaves Unguarded.

Before any data crosses a boundary β€” cloud model, remote tool, or external API β€” Helix scans, detects, and redacts. The scrubbed version travels. The original stays home.

[Scrubbing pipeline β€” πŸ“ User Input (raw data) β†’ πŸ” PII Scanner (40+ entity types) β†’ 🎯 Detect (flag & classify) β†’ πŸ” Tokenize (reversible tokens) β†’ ☁️ External API (zero PII in transit) β†’ πŸ”„ Re-inject (tokens swapped back) β†’ βœ… Response (full data restored)]
πŸ”

Auto-detect

40+ PII entity types: names, SSNs, emails, phone numbers, IBANs, medical record numbers, IP addresses, and custom regex patterns.

βš™οΈ

Custom PII Rules

Define organisation-specific patterns per Galaxy or Solar System. Healthcare Galaxies flag MRNs; Finance Galaxies flag account numbers.

πŸ”

Tokenised Re-injection

PII replaced with reversible tokens before external processing. Tokens swapped back locally after response β€” cloud providers never see real data.
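
A minimal sketch of that round trip, assuming simple regex detection (the real engine covers 40+ entity types; the function names and token format below are illustrative):

```python
# Reversible PII tokenisation sketch. Regex-only detection here;
# names and token format are illustrative, not the Helix engine.
import re

TOKEN_MAP = {}   # token -> original value; stays local, never transmitted

def tokenize(text, patterns):
    """Replace each PII match with a reversible placeholder token."""
    def swap(match):
        token = f"[PII_{len(TOKEN_MAP)}]"
        TOKEN_MAP[token] = match.group(0)
        return token
    for pattern in patterns:
        text = re.sub(pattern, swap, text)
    return text

def reinject(text):
    """Swap tokens back locally after the external response returns."""
    for token, original in TOKEN_MAP.items():
        text = text.replace(token, original)
    return text

SSN = r"\b\d{3}-\d{2}-\d{4}\b"
EMAIL = r"\b[\w.]+@[\w.]+\.\w+\b"
scrubbed = tokenize("Contact jane@corp.com, SSN 123-45-6789.", [SSN, EMAIL])
restored = reinject(scrubbed)   # tokens swapped back on the way home
```

The cloud provider only ever sees `[PII_0]`-style tokens; the mapping that makes them reversible never leaves the local boundary.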

πŸ›‘οΈ

Zero-Trust Boundary

Even on-premise tools in a different Solar System cannot receive PII from another unless an explicit cross-system policy allows it.

Hosted AI. Cloud Connected. Policy Gated.

Helix runs your private models on your hardware and connects to cloud providers when policy allows β€” always through the PII scrubber, always logged, always council-approved.

🏠 Hosted AI

  • β€’ NVIDIA H100 / A100 / MIG-partitioned GPU inference
  • β€’ LoRA / QLoRA fine-tuned models on your private data
  • β€’ MLX-accelerated inference on Apple Silicon appliances
  • β€’ Model registry with version pinning, staging, and rollback
  • β€’ On-prem embedding pipelines (no data leaves the network)

☁️ Cloud Connectors

  • β€’ OpenAI (GPT-4o, o3)
  • β€’ Anthropic (Claude Opus / Sonnet / Haiku)
  • β€’ Azure OpenAI (private endpoint)
  • β€’ Google Gemini Pro / Ultra
  • β€’ Custom / self-hosted OpenAI-compatible endpoints
  • β€’ All cloud calls: PII-scrubbed Β· logged Β· council-approved Β· rate-limited

Intelligent Routing

Task complexity score, cost policy, latency budget, and model capability flags determine which model runs. The router never guesses; it enforces.

β€’ Complexity β‰₯ 8.5 β†’ Opus (reasoning-intensive)
β€’ Cost cap < $0.01 β†’ Helix-Analyst-70B (on-prem)
β€’ Latency SLA < 200ms β†’ MLX-Mistral (local)
β€’ Vision required β†’ GPT-4o (policy-approved)
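
As a sketch, the rules above could be enforced like this. The thresholds and model names come from the example; the function, its field names, and the priority ordering are assumptions:

```python
# Illustrative router sketch; thresholds and model names come from the
# rules above, but the function and its priority order are assumptions.
import math

def route(task: dict) -> str:
    """Pick a model by policy. No matching rule means refusal, not a guess."""
    if task.get("complexity", 0) >= 8.5:
        return "Opus"                    # reasoning-intensive
    if task.get("cost_cap_usd", math.inf) < 0.01:
        return "Helix-Analyst-70B"       # on-prem, no cloud spend
    if task.get("latency_sla_ms", math.inf) < 200:
        return "MLX-Mistral"             # local, lowest latency
    if task.get("needs_vision"):
        return "GPT-4o"                  # policy-approved vision model
    raise PermissionError("no routing rule matched; request refused")

route({"complexity": 9.1})      # -> "Opus"
route({"cost_cap_usd": 0.005})  # -> "Helix-Analyst-70B"
```

Note the final branch: instead of falling back to a default model, an unmatched request is refused β€” that is what "the router never guesses; it enforces" means in practice.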
⚑ Connectivity Layer

Synapse. The Nervous System.

The Synapse MCP Router is Helix's unified tool orchestration layer. Models discover tools dynamically β€” every call intercepted, policy-checked, and logged before the tool fires.

[Router diagram β€” ⚑ SYNAPSE MCP Router (OPA: ALLOW) bridging AI models and MCP tools. Models: 🟒 GPT-4o (OpenAI Β· REST) Β· 🧑 Claude (Anthropic Β· REST) Β· πŸ’Ž Gemini (Google Β· REST). Tools: πŸ—„οΈ Database (gRPC) Β· πŸ’¬ Slack (REST) Β· βš™οΈ Compute (gRPC) Β· πŸ“Š CRM (REST) Β· πŸ“… Calendar (WebSocket)]
🀝

Unified Tool Handshake

Models discover all available tools via a single manifest β€” no hard-coded integrations, no restart required.

πŸ”’

OPA Interception

Every call policy-checked before the tool fires. Deny rules enforced in milliseconds with full audit logging.
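
The interception step can be sketched as follows. This is illustrative only: names are hypothetical, and a real deployment would delegate the decision to an OPA sidecar rather than an in-process check:

```python
# Minimal sketch of Synapse-style call interception: a deny-first policy
# is checked before any tool fires, and every decision is logged.
# Illustrative names; a real deployment would ask OPA for the decision.
audit_log = []

def intercept(call, policy):
    """Check the call against policy, log the decision, then fire."""
    decision = "deny" if call["tool"] in policy["denied_tools"] else "allow"
    audit_log.append({"tool": call["tool"], "decision": decision})
    if decision == "deny":
        raise PermissionError(f"policy denied tool {call['tool']!r}")
    return call["invoke"]()          # fires only after an explicit allow

policy = {"denied_tools": {"cloud-compute"}}
result = intercept({"tool": "database-tools", "invoke": lambda: "rows"}, policy)
```

The ordering matters: the decision is logged before the tool runs, so even a denied call leaves an audit entry.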

πŸ”„

Dynamic Tool Discovery

Tools register and deregister live. New capabilities instantly available to all models in the Solar System.

🌐

Multi-Protocol

REST, gRPC, WebSocket, and event streams β€” all routed through a single governed gateway.

# helix-mcp-config.yaml
synapse:
  router_mode: policy_gated
  endpoint: http://synapse:8080
  
mcp_servers:
  - name: database-tools
    url: grpc://db-mcp:5000
    policy: <finance_galaxy_db_policy>
  - name: slack-integration
    url: http://slack-mcp:3000
    policy: <comms_policy>
  - name: cloud-compute
    url: http://compute-mcp:9000
    policy: <cost_gated_policy>

Multi-Agent Orchestration.

πŸ”„

n8n Workflows

Visual, event-driven automation. Triggers, conditions, loops, HTTP calls, database queries β€” all policy-governed.

πŸ€–

CrewAI Agents

Autonomous agent crews for long-running research, analysis, and action pipelines. Agent roles scoped by RBAC.

πŸ“‘

Event Streaming

Webhooks, Kafka-compatible event topics, and real-time trigger chains across your entire Universe.

πŸ‘₯

Human-in-the-Loop

Any workflow step can require human approval before proceeding. Gates are enforced by policy, not by convention.
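
A minimal sketch of such a gate, with hypothetical step and field names (not the Helix workflow API):

```python
# Sketch of a policy-enforced approval gate; step and field names
# are hypothetical, not the Helix workflow API.

def run_workflow(steps, approver):
    """Run steps in order. A step marked requires_approval blocks until
    the approver callback says yes -- enforced in code, not convention."""
    results = []
    for step in steps:
        if step.get("requires_approval") and not approver(step):
            raise PermissionError(f"step {step['name']!r} was not approved")
        results.append(step["run"]())
    return results

steps = [
    {"name": "draft", "run": lambda: "report drafted"},
    {"name": "send", "requires_approval": True, "run": lambda: "report sent"},
]
# Human approval modelled as a callback; here it approves only "send".
outcome = run_workflow(steps, approver=lambda step: step["name"] == "send")
```

Because the gate raises rather than warns, a skipped approval stops the pipeline β€” there is no path around it.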

Industrial Compute. Your Hardware.

Kubernetes Orchestration

K3s for appliance deployments; full K8s for enterprise clusters. Namespaced workloads per Galaxy. Helm chart deployment. ArgoCD/Flux GitOps. Auto-scaling inference pods.

NVIDIA GPU Fleet

H100 / A100 / L40S support. MIG partitioning for multi-tenant GPU sharing. DCGM health monitoring. CUDA-optimised inference serving. vGPU for virtualised environments.

Hyper-Converged Infrastructure

Nutanix AHV Β· VMware vSAN Β· Dell VxRail. Helix runs natively on your existing HCI investment. Zero new hardware vendor required. Storage policies enforced per Galaxy namespace.

On-Premise
Private Cloud VPC
Hybrid Burst

Built-In. Not Bolted On.

Every Helix deployment ships with enterprise-grade platform capabilities as first-class citizens β€” not add-ons.

Auth & Identity

SAML 2.0, OIDC, LDAP/AD, MFA, FIDO2, JIT provisioning. SSO at Universe level cascades to all Galaxies. Per-tenant identity isolation.

RBAC & Policy Engine

Hierarchical roles, ABAC attributes, OPA enforcement. Roles defined at Universe level are immutable in child scopes. Resource-scoped permissions down to individual tool calls.

Feature Flags & Kill Switches

Canary rollouts, A/B model routing, emergency kill switches per model or tool. Flags scoped per Galaxy, Solar System, or World. Instant propagation.

Monitoring & Observability

Prometheus metrics, Grafana dashboards, Alertmanager. Per-Galaxy resource quotas. SIEM export. GPU DCGM integration. Fleet health without crossing tenant boundaries.

GitOps & Secret Management

ArgoCD / Flux for declarative infrastructure. HashiCorp Vault for secrets, model credentials, API keys. Model version pinning with one-command rollback.

Backstage Developer Portal

Internal service catalog, OpenAPI specs, software templates, TechDocs. Onboard new Galaxies with pre-approved templates in minutes.

See Helix Think.

πŸ›οΈ
Helix
Governance Agent
Council Request Filed
GPT-4o isn't on your Solar System's approved model list. Your AI Council has authorised: Helix-Analyst-70B (on-prem), Claude-3.5-Sonnet (cloud burst, PII-scrubbed). I've submitted a council request to add GPT-4o β€” ref GC-2026-0312, expected review 48 hrs. Want me to proceed with Helix-Analyst-70B now?
Proceed with Helix-Analyst-70B.
Routing to Helix-Analyst-70B. Analysis complete. Council request GC-2026-0312 logged and queued for your Galaxy admin. You'll be notified by Helix when the decision is made.
πŸ›‘οΈ
Helix
Privacy Agent
4 PII Entities Protected
PII detected before processing β€” [NAME], [DOB], [SSN], [EMAIL] tokenised. Processing anonymised content only. Summary: The subject closed $2.4M in Q3 revenue, exceeding target by 18%. Consistent with prior quarter trajectory. Full context retained locally β€” zero PII transmitted externally.
Proceed with the anonymised analysis.
PII re-injected into your local context. 4 entities scrubbed in transit, restored on return. Audit entry written: PII-2026-0441. This interaction is logged in your immutable compliance record.
πŸ€–
Helix
Router Agent
47k Records Β· On-Prem Only
Large-scale batch job (47k records). Routing strategy: Helix-Classify-8B for per-ticket sentiment scoring (on-prem GPU, no PII risk), Helix-Insights-34B for trend synthesis and anomaly detection. Estimated runtime 4 min 20 sec. Zero data leaves your Universe. Starting now β€” I'll surface the summary when complete.
Perfect. Run the full analysis.
Done. 47,000 tickets scored. Overall sentiment: 71% positive (+4% vs Q2). Top negative driver: onboarding friction (18% of negative). 3 anomaly clusters flagged for review. Full report queued in your Research Solar System.

Models That Know Your World.

fine-tune-pipeline.sh
$ helix finetune start --base-model=Mistral-7B --adapter=peft/lora
Loading training dataset (15,234 examples)...
Validating PII scrubbing on 2,000 samples...
Starting LoRA training on GPU 0 (H100)...
Epoch 1/5 [==============================] - Loss: 1.234
Epoch 2/5 [==============================] - Loss: 0.987
Epoch 3/5 [==============================] - Loss: 0.654
Epoch 4/5 [==============================] - Loss: 0.432
Epoch 5/5 [==============================] - Loss: 0.218
Adapter saved: models/helix-finance-lora-v1.adapter
Promoting to staging...
βœ“ Canary deployment complete (5% traffic)
Monitoring latency and accuracy metrics...
πŸ€–

Automated Capture

Corrections, approvals, high-quality outputs flagged automatically and queued for training.

⚑

LoRA / QLoRA Pipelines

Efficient fine-tuning on your GPU nodes. Nightly runs, no manual intervention. Base models stay pristine; adapters layer your domain knowledge.

πŸ“¦

Model Registry & Promotion

Fine-tuned adapters progress through staging β†’ canary β†’ production gates. Rollback in one command.

Voice. Vision. Documents.

🎀 Voice

Whisper STT (multilingual transcription) + Piper TTS (natural speech synthesis). Voice-activated agents, meeting transcription, accessibility-first interfaces.

πŸ–ΌοΈ Image & Video

ComfyUI for image generation and analysis. Document OCR. Chart and diagram interpretation. All processed on your GPU fleet.

πŸ“„ Documents & RAG

Ingest PDFs, Word docs, spreadsheets, wikis. Per-tenant RAG pipelines with chunking, embedding, and retrieval scoped to each Solar System.

Ready to own your AI Universe?

Every Helix deployment is yours from the ground up. Your models. Your policies. Your rules.