# Uni

Reasoning and memory infrastructure for intelligent systems.
Uni gives AI agents structured memory, formal reasoning, what-if simulation, and explainable decisions — in one embedded engine backed by object storage. No servers. No infrastructure. One pip install.
## The Agent Reasoning Gap
Today's AI agents can generate text, but they cannot reason over structured knowledge, remember across sessions, simulate consequences before acting, or explain why they reached a conclusion. These are cognitive capabilities, not database features — and without them, agents remain fluent but unreliable.
The workaround is stitching together four or five systems — a graph database, a vector store, a text index, a rules engine, and custom glue code — each with its own data model, consistency boundary, and operational overhead. Uni closes that gap with a single embedded library where graph traversals, vector search, full-text retrieval, logic programming, and hypothetical reasoning all execute against the same data, in-process, with no ETL pipelines or cross-system joins.
## Five Pillars of Machine Cognition
Uni is organized around five cognitive capabilities that intelligent systems need:
- Structured Memory — a typed property graph for entities and relationships (OpenCypher + 36 graph algorithms)
- Associative Recall — hybrid retrieval that fuses semantic and lexical search (HNSW/IVF_PQ vectors + BM25 full-text + `uni.search` fusion)
- Domain Physics — declarative rules that encode how a domain actually works (Locy recursive rules with stratified negation)
- Mental Simulation — hypothetical reasoning that explores consequences before committing (ASSUME … THEN in a rollback boundary)
- Explainable Decisions — proof traces and abductive reasoning that show why and what would need to change (EXPLAIN RULE + ABDUCE)
The following example puts four of these pillars to work in a single scenario.
## See It In Action
A network-ops team needs to answer three questions about their service dependency graph: What breaks if the auth service goes down? Why is it reachable? What would need to change so it isn't?
Domain Physics — define the rules once:
```
-- Transitive reachability over service dependencies

-- Base case: a directly depends on b
CREATE RULE reachable AS
MATCH (a:Service)-[:DEPENDS_ON]->(b:Service)
YIELD KEY a, KEY b

-- Recursive case: a depends on an intermediate service that can reach b
CREATE RULE reachable AS
MATCH (a:Service)-[:DEPENDS_ON]->(mid:Service)
WHERE mid IS reachable TO b
YIELD KEY a, KEY b
```
Mental Simulation — what breaks if auth goes down?
```
ASSUME {
  MATCH (s:Service {name: 'auth-service'})
  SET s.status = 'DOWN'
} THEN {
  -- Services that can reach auth-service through the dependency
  -- graph are the ones affected by its outage
  QUERY reachable
  WHERE b.name = 'auth-service'
  RETURN a.name AS affected_service
}
```
Explainability — why is payment-service reachable from auth-service?
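A sketch of what that query might look like: the `EXPLAIN RULE` keyword comes from the pillar list above, but the clause shape and the `name` property bindings here are illustrative assumptions, not confirmed syntax.

```
-- Illustrative sketch: clause shape is an assumption
EXPLAIN RULE reachable
WHERE a.name = 'auth-service' AND b.name = 'payment-service'
```

The intent is a proof trace: the chain of `DEPENDS_ON` facts and rule applications that derive the conclusion, rather than just the boolean answer.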
Abductive Reasoning — what would need to change so auth-service can't reach a service?
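A sketch of the abductive query, under the same caveat: `ABDUCE` is named in the pillar list above, but the invocation form shown here is an illustrative assumption.

```
-- Illustrative sketch: invocation form is an assumption
ABDUCE reachable
WHERE a.name = 'auth-service' AND b.name = 'payment-service'
```

Conceptually, abduction runs the derivation backward: it would enumerate minimal sets of supporting facts, here `DEPENDS_ON` edges, whose removal makes the goal underivable.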
Locy Overview | Language Guide | ASSUME / ABDUCE / DERIVE | Use Cases
## The Engine Underneath
The five pillars run on a unified substrate — one process, one data model, one consistency boundary.
Structured Memory + Domain Physics:
- Full OpenCypher with recursive CTEs, window functions, and time travel (`VERSION AS OF`)
- 36 built-in graph algorithms — PageRank, Louvain, Dijkstra, betweenness centrality, k-shortest paths, and more
- Locy logic layer — recursive rules, stratified negation, goal-directed evaluation (SLG resolution)
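As one concrete illustration of the time-travel feature: `VERSION AS OF` is named in the list above, but the clause placement and the timestamp literal format in this sketch are assumptions.

```
-- Read the graph as it existed at an earlier version
-- (timestamp format is an assumption)
MATCH (s:Service {name: 'auth-service'})
VERSION AS OF '2024-06-01T00:00:00Z'
RETURN s.status
```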
Associative Recall:
- Vector indexes — HNSW, IVF_PQ, and flat with auto-embedding support
- Full-text search — BM25 ranking over text fields
- JSON-path search — query nested document properties
- Hybrid fusion — reciprocal-rank or weighted fusion across vector and text results via `uni.search`
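A hybrid query might be invoked as follows. `uni.search` and reciprocal-rank fusion come from the list above; the `CALL … YIELD` form and the parameter names (`mode`, `k`) are illustrative assumptions.

```
-- Hypothetical invocation: parameter names are assumptions
CALL uni.search('auth outage postmortem', mode => 'rrf', k => 10)
YIELD node, score
RETURN node.name, score
ORDER BY score DESC
```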
Operational:
- In-process execution — link as a Rust crate or Python package, no network round-trips
- Object-store backed — S3, GCS, Azure, or local disk with automatic local caching
- Automatic compaction — semantic compaction in the background, no manual tuning
- Snapshot isolation — single-writer, multi-reader with no lock contention
Architecture | Features | Storage Engine
## Performance
Cognitive operations need to be fast enough for an agent's decision loop. Indicative numbers from internal benchmarks — see the Benchmarks doc for methodology.
| Operation | Latency |
|---|---|
| Point lookup (indexed) | 2–5 ms |
| Structured memory traversal (1-hop, cached) | 4–8 ms |
| Associative recall (vector KNN, k=10) | 1–3 ms |
| Aggregation over 1M rows | 50–200 ms |
| Memory update (batch insert, 10K nodes) | 5–10 ms |
Performance Tuning | Benchmarks
## Get Started
- Install Uni
- Quick Start — create a graph, define rules, and run your first simulation in five minutes
- Programming Guide — Rust and Python APIs in depth
- AI Agent Skill — give your agent structured reasoning capabilities