Building in public - Kameas AI
Kenaz gives knowledge workers a sandboxed, observable, on-device environment for AI-assisted work - with policy-as-code governance and a cryptographic audit trail.
01 / The Problem
You can't see what your AI tools are doing at a system level. What files they read, what APIs they called, what data left the machine.
Running an AI agent against credit files or patient records means trusting it with production credentials, unrestricted egress, and no kill switch.
Every AI interaction trains someone's model - but not yours. Your work patterns compound in the cloud for someone else's benefit.
02 / The Workbench
A Kameas AI product.
The host daemon watches signals the AI layer needs to reason about work - and records them locally. Nothing leaves the machine.
Ephemeral KVM/QEMU workbenches. Each session gets a policy - egress allowlist, tool-call restrictions, PII filters. On teardown, the session ledger is filtered and merged into your training corpus.
Workbench
Policy (YAML)
egress:
  allow:
    - github.com
    - "*.anthropic.com"
  deny: "*"
tools:
  allow: [read, write, exec]
  deny: [network.raw]
pii:
  filter: on
  policies: [pci, phi]
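A minimal sketch of how egress rules like these might be evaluated. Glob-style matching and allow-over-deny precedence are illustrative assumptions, not Kenaz's actual policy engine:

```python
from fnmatch import fnmatch

# Hypothetical evaluation of an egress policy: patterns are globs,
# an allow match wins over a deny match (assumed semantics).
ALLOW = ["github.com", "*.anthropic.com"]
DENY = ["*"]

def egress_allowed(host: str) -> bool:
    # Explicit allow takes precedence over any deny pattern.
    if any(fnmatch(host, pattern) for pattern in ALLOW):
        return True
    return not any(fnmatch(host, pattern) for pattern in DENY)

print(egress_allowed("api.anthropic.com"))  # True
print(egress_allowed("example.com"))        # False
```

With `deny: "*"` as the catch-all, anything not explicitly allowed is blocked by default.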
On teardown
A continuous training loop fine-tunes a local LLM using your corpus. Hot-reload without restart. Weekly suggestions surface patterns, friction, and tool recommendations - evidence-backed and yours alone. Weights never leave the device.
Base model
LFM2-24B · Q4_K_M
MoE, ~2B active per token
Adapter
LoRA · rank 16
fine-tuned on your corpus
Hot-reload
< 300ms
no service restart
Suggestion cadence
Weekly digest
evidence-linked to events
Every VM lifecycle event, policy decision, merge operation, and training run is appended to a hash-chained, Ed25519-signed ledger. Tamper-evident. Independently verifiable. Compliance-ready.
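The tamper-evidence property above comes from each entry committing to the hash of the one before it. A minimal sketch of that chain, assuming JSON entries and SHA-256; in Kenaz each entry is additionally Ed25519-signed, which this sketch omits:

```python
import hashlib
import json

def append(ledger: list, event: str) -> None:
    # Each entry commits to the previous entry's hash (genesis uses zeros).
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    ledger.append({"event": event, "prev": prev,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger: list) -> bool:
    # Recompute every hash; any edited entry breaks the chain from that point on.
    prev = "0" * 64
    for entry in ledger:
        body = json.dumps({"event": entry["event"], "prev": entry["prev"]},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "vm.start")
append(log, "policy.deny:egress")
print(verify(log))          # True
log[0]["event"] = "edited"  # tampering with any entry...
print(verify(log))          # False - ...is detectable
```

Signing each entry with Ed25519 on top of the chain is what makes the ledger independently verifiable, not just internally consistent.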
Hardware-aware routing. Full local inference on capable hardware. Hybrid on mid-range. Kameas SaaS fallback on light hardware - previewed before anything leaves the device.
Everything - observer, training, inference - runs on-device.
Local for most work; SaaS fallback for long-context reasoning.
Opt-in. Every call previewed. No logging of prompts or completions.
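The tiering above could be sketched as a simple routing heuristic. The thresholds, tier names, and parameters here are illustrative assumptions, not Kenaz's actual logic:

```python
# Hypothetical hardware-aware router: capable hardware stays fully local,
# mid-range goes hybrid, light hardware previews before any SaaS call.
def route(vram_gb: float, context_tokens: int) -> str:
    if vram_gb >= 16 and context_tokens <= 32_000:
        return "local"         # full on-device inference
    if vram_gb >= 8:
        return "hybrid"        # local for most work, SaaS for long context
    return "saas-preview"      # opt-in, every call previewed before egress

print(route(24, 8_000))    # local
print(route(10, 8_000))    # hybrid
print(route(4, 8_000))     # saas-preview
```

Even on capable hardware, a long-context request falls through to the hybrid path, matching the "SaaS fallback for long-context reasoning" behavior.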
03 / Architecture
Three layers, one direction of data flow. The native app is the control plane. The host holds the corpus. The VM is ephemeral.
04 / Privacy
On your machine - always
Can leave - opt-in only
Read the code
The observer daemon is open source. The inference layer runs llama.cpp - also open source. "Trust us" is not the privacy model - "read the code" is.
05 / Enterprise
Adoption tiers, session counts, tool mix. No individual attribution.
Cycle time, friction events, pattern adoption - rolled up by team.
Local vs. SaaS inference ratio, cost per developer, trend over quarters.
Policy coverage, egress-deny rates, audit integrity checks.
The fleet layer deploys on your infrastructure. No data flows through Kameas servers. Engineering leadership gets the dashboard. Engineers keep full individual control.
Open source
Free
Apache 2.0 · single user
Enterprise
Per seat · annual
Deployed on your infrastructure
06 / Open Source
Kenaz is being built in public. The daemon, observer, audit layer, and VM plumbing are Apache 2.0 from day one.
07 / Waitlist
Join the waitlist for early access, or talk to us about enterprise.
You're on the list.
Check your inbox for confirmation. We'll be in touch as Kenaz ships.