What we're building

Private AI.
Physical device.
Your models.

ANV-Gyro is a desk AI appliance — a cluster of Raspberry Pi 5 boards inside a translucent blue acrylic enclosure, running Ministral 3B entirely on-device. No cloud. No subscriptions. No data leaving the room.

3B parameters on-device · 3× Raspberry Pi 5 · 5 AI providers · 0 data sent to cloud
// Core concept

Three things that make this different.

🧲

Physical AI device

The device sits on your desk. Two Ø80 mm piano-black aluminium spheres physically show CPU load: fans push air through turbine vanes to spin them. Idle: ~2 RPM. Full AI load: ~25 RPM. No dashboard needed: you see the AI working.

Blue acrylic housing · Flip-up 10" display · Pico projector DLP2000
🔒

Fully private, offline-first

Inference runs inside the box. Your queries, documents and outputs never touch an external server. Works on a disconnected network. GDPR compliance is structural — not contractual — because the data never moves.

No cloud required · GDPR by design · Made in EU · Hosted in EU

Multi-model, your choice

Run Ministral 3B locally, or route queries to Claude or OpenAI when you need more power. Use MixMatch to send the same prompt to multiple providers simultaneously and compare responses side-by-side — with AI-judged evaluation.

Ministral 3B · Claude Sonnet · GPT-4o · MixMatch
// Console features

Eight modules. One console.

The ANV-Gyro console is a tab-based web application served from the device itself. Every tab is a live module — role-gated, real-time, and accessible from any browser on your local network.
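The tab badges below imply a three-tier role hierarchy (all users < power+ < admin). A minimal sketch of how such gating could work; the rank values and function name are illustrative, not the console's actual API:

```javascript
// Assumed role hierarchy from the console's tab badges: user < power+ < admin.
const ROLE_RANK = { user: 0, "power+": 1, admin: 2 };

// A tab is visible when the session's role ranks at or above the tab's gate.
function canAccess(sessionRole, requiredRole) {
  return ROLE_RANK[sessionRole] >= ROLE_RANK[requiredRole];
}
```

An `admin` session would thus see every tab, while a `power+` session sees everything except Terminal and Database.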

all users 💬

AI Chat

Multi-provider streaming chat with persistent history, system prompt control, temperature/top-p tuning, and per-message provider switching.
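A sketch of the request body such a per-message provider switch might produce. Field names follow the OpenAI-style Chat Completions shape the local llama.cpp server exposes; the `provider` routing field and defaults are assumptions, not the shipped schema:

```javascript
// Build one chat request with per-message provider choice and sampling controls.
// "provider" and the default values are illustrative assumptions.
function buildChatRequest(provider, messages, { system, temperature = 0.7, topP = 0.9 } = {}) {
  return {
    provider, // "local" | "claude" | "openai" | "mistral"
    messages: system
      ? [{ role: "system", content: system }, ...messages] // system prompt prepended
      : messages,
    temperature,
    top_p: topP,
    stream: true, // the console streams tokens as they arrive
  };
}
```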

admin ▸_

Terminal

Live Linux shell on the Raspberry Pi cluster. Full bash access for system management, model reloading, and diagnostics.

power+ 📤

Export

Preview, download or email full conversation transcripts. Supports formatted text export for documentation and compliance.

admin 💾

Database

Live table editor connected to the PostgreSQL backend. Browse, filter and modify records directly from the console.

all users 📊

Stats

Usage metrics per user, global prompt counts, daily activity graphs, and real-time cluster health indicators.

power+ 📄

Log

Structured application event log. Timestamped entries for all system events, auth actions, and AI requests.

power+ 🔧

Hardware

Device inventory view showing all connected hardware, component status, temperatures, and firmware versions.

power+ 🔗

Kafka

KRaft-mode Kafka cluster monitoring. Topic list, consumer group status, lag metrics, and broker health in real time.
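The lag metric above is the standard one: log-end offset minus committed offset, per partition. A minimal sketch of that computation; the object shapes mirror what Kafka admin clients typically return, not the console's internal code:

```javascript
// Consumer lag per partition = log-end offset minus committed offset.
// endOffsets / committed: { [partition]: offsetNumber }. Shapes are assumed.
function consumerLag(endOffsets, committed) {
  return Object.fromEntries(
    Object.entries(endOffsets).map(([partition, end]) => [
      partition,
      Math.max(0, end - (committed[partition] ?? 0)), // uncommitted partition lags by its full log
    ])
  );
}
```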

// AI providers

Run local. Reach out when you need to.

The console routes each conversation to whichever provider you choose. API keys never touch the browser — all proxy calls are server-side.
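A sketch of the key-hygiene rule this implies: the outbound request is rebuilt on the server with keys from its own environment, and nothing auth-like from the browser is forwarded. The env var names and header choices here are assumptions, not the device's actual config:

```javascript
// Build outbound headers server-side; provider keys come from the server's
// environment only. Env var names are illustrative assumptions.
function outboundHeaders(provider) {
  const headers = { "content-type": "application/json" };
  if (provider === "claude") headers["x-api-key"] = process.env.ANTHROPIC_API_KEY ?? "";
  if (provider === "openai") headers["authorization"] = `Bearer ${process.env.OPENAI_API_KEY ?? ""}`;
  // "local" (llama.cpp on the cluster) needs no key at all.
  return headers;
}
```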

ANV-Gyro Local

llama.cpp · GGUF · on-device

Default provider. Ministral 3B running via llama.cpp on the RPi 5 cluster. OpenAI-compatible API. No internet, no key, no cost.

Ministral 3B (quantised GGUF) · Configurable endpoint · Temperature / top_p / repeat penalty

Claude (Anthropic)

Server-side proxy · streaming SSE

Full Claude model suite available via server-side key management. Streamed responses via SSE, same UI as local mode.

claude-opus-4-6 · claude-sonnet-4-6 · claude-haiku-4-5
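Both cloud providers stream tokens over SSE. A minimal sketch of splitting `data:` events out of a received chunk, assuming newline-delimited frames and OpenAI's `[DONE]` sentinel (Anthropic uses typed events instead, so the parsed payload shape differs per provider):

```javascript
// Split SSE "data:" events out of a streamed chunk. Only the framing is
// handled here; per-provider payload shapes are parsed downstream.
function parseSseChunk(chunk) {
  const events = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blanks, comments, "event:" lines
    const data = trimmed.slice(5).trim();
    if (data === "[DONE]") continue; // OpenAI end-of-stream sentinel
    events.push(JSON.parse(data));
  }
  return events;
}
```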

OpenAI

Chat Completions API · streaming

GPT model family, proxied server-side. Same system prompt and parameter controls as local mode.

gpt-4o / gpt-4o-mini · gpt-4-turbo · o3-mini

Ministral (direct)

Mistral AI API · EU model

Connects directly to Mistral AI's cloud API. Same model family as the on-device version — useful for comparison.

ministral-3b-latest · EU-origin model (Paris)
⚡ MixMatch — Compare all providers at once
Send the same prompt to multiple providers simultaneously. Responses appear side-by-side in a scrollable comparison view. Optionally invoke an AI judge (Claude or OpenAI) to evaluate and rank the answers — useful for benchmarking local vs cloud quality.
Your prompt → Local · Claude · OpenAI → AI judge
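The fan-out described above can be sketched with `Promise.allSettled`, so one slow or failing provider never blocks the comparison view. `askProvider` is a hypothetical stand-in for the console's actual per-provider call:

```javascript
// Send one prompt to every selected provider concurrently; collect answers
// and failures side by side. askProvider(provider, prompt) is a placeholder.
async function mixMatch(prompt, providers, askProvider) {
  const settled = await Promise.allSettled(
    providers.map((p) => askProvider(p, prompt))
  );
  return providers.map((p, i) =>
    settled[i].status === "fulfilled"
      ? { provider: p, answer: settled[i].value }
      : { provider: p, error: String(settled[i].reason) } // failure shown, not thrown
  );
}
```

The judge step would then receive the fulfilled answers and return a ranking; that call is just another provider request.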
// The physical device

A computer you can watch think.

The hardware is the product. Spinning spheres, a flip-up display, a pico projector, round status gauges — all inside a blue acrylic enclosure you can see through.

🔵
Blue acrylic housing

300 × 240 × 80 mm · translucent sapphire PMMA · RGB underglow · boards visible inside

🔧
3× Raspberry Pi 5 (4GB)

Cluster compute for distributed llama.cpp inference · PWM fan control via GPIO, driven by temperature sensors
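Temperature-driven PWM control typically maps a quiet floor to a full-speed ceiling with a linear ramp between. A sketch of such a curve; the thresholds and duty values here are illustrative assumptions, not the device's tuned parameters:

```javascript
// Map CPU temperature (°C) to fan PWM duty (0..1). Thresholds are assumed.
function fanDuty(tempC) {
  if (tempC <= 45) return 0.2;                 // quiet floor at idle temps
  if (tempC >= 75) return 1.0;                 // full speed near the throttle point
  return 0.2 + (0.8 * (tempC - 45)) / 30;      // linear ramp in between
}
```

Since the spheres spin off fan exhaust, this same curve is indirectly what turns CPU load into visible RPM.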

Airflow-driven spheres (×2)

Ø80 mm aluminium · turbine vanes spin from fan exhaust · idle 2 RPM → full load 25 RPM

🖥️
Flip-up 10" display

IPS · 1920×1200 · hinged at rear edge · 70° friction-lock · closed = silent slab

🔵
Round status displays (×2)

GC9A01 · 1.28" · 240×240 · front-face · CPU load + clock always visible

📽️
Pico projector

DLP2000 · rear-mounted · projects upward onto wall behind device · no second screen needed

📷
Camera + Microphone

OV5647 · magnetic mount · MEMS mic on pogo-pin connector · both detachable

[Device diagram: RPi 1 · RPi 2 · RPi 3 · sphere L / intake fan · sphere R / exhaust fan · ANV-Gyro 300×240×80 mm · Ministral 3B, on-device, no cloud · AI LOAD 38% · IDLE → 25 RPM]
// Tech stack

What it runs on.

Hardware
3× Raspberry Pi 5 · Blue PMMA enclosure · DLP2000 projector · GC9A01 round displays · OV5647 camera · MEMS microphone
AI / Inference
Ministral 3B (GGUF) · llama.cpp · Claude API (Anthropic) · OpenAI API · Mistral AI API · MixMatch evaluation
Backend
Node.js + Express · PostgreSQL · JWT auth · SSE streaming · bcrypt · Docker + Compose
Frontend
Vanilla JS modules · DM Sans + JetBrains Mono · i18n (EN / CS / DE) · Role-based tabs · iOS-optimised · No framework deps
Data / Infra
Apache Kafka (KRaft) · PostgreSQL chat history · Per-user config store · Event log system · EU hosting only
// Ready?

Open the console.

Log in and start a conversation — local, private, no internet required.

Open Console → Full Product Page