shenas.ai / whats-next experimental updated may 2026

The shape of the next thing we're building.

shenas today is a quiet pipeline: services in, canonical schemas, dashboards. The next thing is bigger -- an entity graph, an agent that watches it, and a careful way to ask a frontier model for help without telling it who's asking.

This page is a sketch. The diagram below is the whole architecture in one figure. The notes after it walk through each region, in the order a request passes through them.

[fig. 01 — three regions, left to right: the home mesh (entity graph, ~10³ nodes · agentic feedback: observe, propose, learn · LLM client logic, which redacts PII, caps rate, caches locally, and prepares the envelope); the LLMMixNet privacy boundary (anonymous credentials, three relays, cover traffic on idle); and the frontier LLM, which sees a redacted prompt and an anon. credential -- not who, when, or where. The response returns with no identifier.]
fig. 01 — local device · privacy boundary · frontier LLM. sage = local · orange = mixnet · dashed = anonymized
§ 01 · local

What stays on your home mesh.

The left column of the diagram is your home mesh -- the laptop, phone, tablet, and server that run shenas. Everything here runs inside the same FastAPI process that already serves shenas today. No accounts, no remote storage, no syncing of raw records beyond the mesh. Three components sit inside.

// entity graph

People, companies, projects.

Canonical schemas in DuckDB, related by typed edges. The graph is the source of truth -- every downstream component reads from it.
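A minimal sketch of what "typed edges over canonical schemas" looks like in SQL. shenas stores this in DuckDB; sqlite3 is used here only so the sketch is self-contained, and the table and column names are assumptions, not the shipped schema.

```python
import sqlite3

# Illustrative schema -- DuckDB in production, sqlite3 here for a runnable sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE nodes (
        id   INTEGER PRIMARY KEY,
        kind TEXT NOT NULL,   -- 'person' | 'company' | 'vehicle' | 'project' | ...
        name TEXT NOT NULL
    );
    CREATE TABLE edges (
        src INTEGER NOT NULL REFERENCES nodes(id),
        dst INTEGER NOT NULL REFERENCES nodes(id),
        rel TEXT NOT NULL     -- typed edge, e.g. 'works_at', 'owns'
    );
""")
con.execute("INSERT INTO nodes VALUES (1, 'person', 'Ada'), (2, 'company', 'Acme')")
con.execute("INSERT INTO edges VALUES (1, 2, 'works_at')")

# Downstream components only read; a typical query joins out from a node.
rows = con.execute("""
    SELECT n.name, e.rel, m.name
    FROM edges e
    JOIN nodes n ON n.id = e.src
    JOIN nodes m ON m.id = e.dst
""").fetchall()
print(rows)  # [('Ada', 'works_at', 'Acme')]
```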

// agentic feedback

Watches, proposes, learns.

Observes patterns in the graph, surfaces them as small suggestions, adjusts when you say "no thanks". Cannot write to the graph directly.
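The observe/propose/learn loop can be sketched in a few lines. The pattern naming and the weighting scheme below are assumptions for illustration -- the point is the shape: the agent reads graph rows, returns suggestions, and decays its confidence when you decline, but never writes to the graph.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackAgent:
    # per-pattern confidence; decays when the user declines (assumed scheme)
    weights: dict = field(default_factory=dict)
    threshold: float = 0.5

    def observe(self, graph_rows):
        # derive candidate patterns from read-only graph data
        return [f"link:{a}->{b}" for a, b, _rel in graph_rows]

    def propose(self, patterns):
        # surface only patterns the agent is still confident about
        return [p for p in patterns if self.weights.get(p, 1.0) >= self.threshold]

    def learn(self, pattern, accepted):
        w = self.weights.get(pattern, 1.0)
        self.weights[pattern] = min(w + 0.1, 1.0) if accepted else w * 0.4

agent = FeedbackAgent()
patterns = agent.observe([("Ada", "Acme", "works_at")])
print(agent.propose(patterns))                 # ['link:Ada->Acme']
agent.learn("link:Ada->Acme", accepted=False)  # the user said "no thanks"
print(agent.propose(patterns))                 # [] -- suppressed next time
```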

// LLM client logic

Decides when to ask out.

Most of the work happens locally. When a question genuinely needs a frontier model, this layer redacts the prompt, rate-limits the call, and hands the envelope to the mixnet.
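A sketch of that outbound policy, under stated assumptions: the redaction rule (emails only, here), the rate cap, and the envelope fields are placeholders, not the shipped policy.

```python
import re
import time

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # assumed PII rule for the sketch

class OutboundPolicy:
    def __init__(self, max_per_minute=6):
        self.max_per_minute = max_per_minute
        self.sent = []  # timestamps of recent external calls

    def redact(self, prompt):
        # strip obvious PII before anything leaves the mesh
        return EMAIL.sub("[redacted-email]", prompt)

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        self.sent = [t for t in self.sent if now - t < 60]
        if len(self.sent) >= self.max_per_minute:
            return False
        self.sent.append(now)
        return True

    def envelope(self, prompt):
        if not self.allow():
            raise RuntimeError("rate cap reached; answer locally or wait")
        return {"prompt": self.redact(prompt), "credential": "anon-token"}

policy = OutboundPolicy()
env = policy.envelope("summarize the thread with ada@example.com")
print(env["prompt"])  # summarize the thread with [redacted-email]
```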

§ 02 · the wall

How a request leaves without leaving a name.

The orange band in the middle of the figure is the LLMMixNet: a small set of relays that route prompts between devices and frontier models. Each relay forwards traffic but never sees both ends of a connection. A client presents an anonymous credential at the first hop -- proof of entitlement, not proof of identity.

The relays also generate cover traffic when they're idle, so the existence of a request is not, by itself, information. The system is derived from the same line of work as Tor and from more recent anonymous-credential schemes; the writeup is in how it works.
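The structural property -- each relay learns only the next hop -- can be shown with nested layers. Real mixnets (the Tor / Sphinx line of work the page points to) use layered encryption; nested dicts stand in for ciphertext here, so this is a shape sketch, not a security design.

```python
def wrap(payload, route):
    # innermost layer is the payload; each hop adds a layer naming the next hop
    for hop in reversed(route):
        payload = {"next": hop, "inner": payload}
    return payload

def relay(packet):
    # a relay sees only packet["next"] -- not the origin, not the destination
    return packet["next"], packet["inner"]

packet = wrap({"prompt": "[redacted]", "credential": "anon-token"},
              ["relay-1", "relay-2", "relay-3", "frontier-llm"])

hop, seen = packet, []
while isinstance(hop, dict) and "next" in hop:
    nxt, hop = relay(hop)
    seen.append(nxt)

print(seen)  # ['relay-1', 'relay-2', 'relay-3', 'frontier-llm']
print(hop)   # {'prompt': '[redacted]', 'credential': 'anon-token'}
```

In the real system each layer would be encrypted to one relay's key, so peeling a layer requires that relay's cooperation; the dict version only shows who learns what.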

§ 03 · outside

What the frontier model sees.

On the right of the diagram is the frontier LLM -- a third-party endpoint, chosen per request. It receives a redacted prompt and a valid credential. It does not receive your IP, your account, or any link between this call and the previous one.

The response travels back along the same anonymized path. The arrow is drawn dashed in the diagram for a reason: there is no return identifier. The relay that hands you the answer cannot tell which model produced it for which device.
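One way a reply can come back without a return identifier is a single-use reply token, loosely modeled on the reply blocks in the mixnet literature. Everything below is an assumption for illustration -- an in-process callback stands in for the anonymized return route, and `ReplyRelay` is a hypothetical name.

```python
import secrets

class ReplyRelay:
    """Illustrative relay-side mapping: token -> delivery, never token -> device."""

    def __init__(self):
        self._slots = {}  # one-time token -> delivery callable

    def make_reply_token(self, deliver):
        token = secrets.token_hex(8)
        self._slots[token] = deliver  # the relay stores a route, not an identity
        return token

    def deliver(self, token, response):
        deliver = self._slots.pop(token)  # single use: the token is consumed
        deliver(response)

inbox = []
relay = ReplyRelay()
token = relay.make_reply_token(inbox.append)  # token rides inside the request envelope
relay.deliver(token, "answer from the frontier model")
print(inbox)                   # ['answer from the frontier model']
print(token in relay._slots)   # False -- the token cannot be replayed
```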

where this stands

Experimental. Not yet shipping.

The entity graph and agentic feedback layer are partially landed in shenas-net/shenas. The LLMMixNet is a separate prototype and is not yet integrated. We expect to ship the local pieces before the privacy layer; the privacy layer requires more review.