Signet

Local-first persistent memory for AI agents

Website · Docs · Vision · Discussions · Discord · Contributing · AI Policy


Persistent memory for AI agents, across sessions, tools, and environments.

TL;DR

  • Installs under your existing harness, not instead of it
  • Captures and injects relevant memory automatically between sessions
  • Runs local-first, with inspectable storage and no vendor lock-in

Most agents only remember when explicitly told to.

That isn't memory; that's a filing cabinet.

Signet makes memory ambient. It extracts and injects context automatically, between sessions, before the next prompt starts. Your agent just has memory.

Why teams adopt it:

  • less prompt re-explaining between sessions
  • one memory layer across Claude Code, OpenCode, OpenClaw, and Codex
  • clear visibility into what was recalled, why, and from which scope

Benchmark note: early LoCoMo results show 87.5% answer accuracy and 100% Hit@10 retrieval on an 8-question full-stack sample. Larger evaluation runs are in progress. Details

Quick start (about 5 minutes)

BASH
bun add -g signetai        # or: npm install -g signetai
signet setup               # interactive setup wizard
signet status              # confirm daemon + pipeline health
signet dashboard           # open memory + retrieval inspector

If you already use Claude Code, OpenCode, OpenClaw, or Codex, keep your existing harness. Signet installs under it.

First proof of value (2-session test)

Run this once:

BASH
signet remember "my primary stack is bun + typescript + sqlite"

Then in your next session, ask your agent:

what stack am i using for this project?

You should see continuity without manually reconstructing context. If not, inspect recall and provenance in the dashboard or run:

BASH
signet recall "primary stack"

Want the deeper architecture view? Jump to How it works or Architecture.

Core capabilities

These are the product surface areas Signet is optimized around:

| Core | What it does |
|------|--------------|
| 🧠 Ambient memory extraction | Sessions are distilled automatically, no memory tool calls required |
| 🕸️ Hybrid retrieval | Graph traversal + FTS5 + vector search for robust recall under real prompts |
| 💾 Session continuity | Checkpoint and transcript-backed context carried across sessions |
| 🏠 Local-first storage | Data lives on your machine in SQLite and markdown, portable by default |
| 🤝 Cross-harness runtime | Claude Code, OpenCode, OpenClaw, Codex, one shared memory substrate |
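The hybrid retrieval row combines three signals: keyword match, embedding similarity, and graph proximity. As a toy sketch of how such signals can be fused into a single ranking (the weights, the graph-distance transform, and the candidate shape are illustrative assumptions, not Signet's implementation):

```typescript
// Toy score fusion: blends three retrieval signals into one ranking score.
// Weights and scoring functions here are illustrative, not Signet's.

type Candidate = {
  id: string;
  keywordScore: number; // e.g. an FTS5-style rank, normalized to [0, 1]
  vector: number[];     // embedding of the memory text
  graphHops: number;    // distance from the query's entities in the graph
};

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function fusedScore(query: number[], c: Candidate): number {
  const vec = cosine(query, c.vector); // semantic similarity
  const graph = 1 / (1 + c.graphHops); // fewer hops = closer = higher score
  return 0.4 * vec + 0.3 * c.keywordScore + 0.3 * graph;
}

function rank(query: number[], cands: Candidate[]): Candidate[] {
  return [...cands].sort((a, b) => fusedScore(query, b) - fusedScore(query, a));
}
```

In practice each signal would come from its own index (FTS5, the vector store, the knowledge graph); the fusion step is just a weighted sum over normalized scores.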

Is Signet right for you?

Use Signet if you want:

  • memory continuity across sessions without manual prompt bootstrapping
  • local ownership of agent state and history
  • one memory layer across multiple agent harnesses

Signet may be overkill if you only need short-lived chat memory inside a single hosted assistant.

Why you can trust this

  • runs local-first by default
  • memory is stored in SQLite + markdown
  • recall is inspectable with provenance and scopes
  • memory can be repaired (edit, supersede, delete, reclassify)
  • no vendor lock-in, your data stays portable

What keeps it reliable

These systems improve quality and reliability of the core memory loop:

| Supporting | What it does |
|------------|--------------|
| 📜 Lossless transcripts | Raw session history preserved alongside extracted memories |
| 🎯 Predictive scorer | Learns your interaction patterns to prioritize likely-useful context |
| 🔬 Noise filtering | Hub and similarity controls reduce low-signal memory surfacing |
| 📄 Document ingestion | Pull PDFs, markdown, and URLs into the same retrieval pipeline |
| 🖥️ CLI + Dashboard | Operate and inspect the system from terminal or web UI |
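Document ingestion implies a chunk, embed, and index step before retrieval. A minimal sketch of the chunking stage, assuming fixed-size character windows with overlap (the sizes are made-up defaults, not Signet's):

```typescript
// Naive fixed-size chunker with overlap: the kind of step an ingestion
// pipeline runs before embedding. Sizes here are assumptions for illustration.

function chunk(text: string, size = 200, overlap = 40): string[] {
  if (size <= overlap) throw new Error("size must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window covered the tail
  }
  return chunks;
}
```

Real pipelines usually split on semantic boundaries (headings, sentences) rather than raw character counts, but the windowing-with-overlap shape is the same.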

Advanced capabilities (optional)

These extend Signet for larger deployments and custom integrations:

| Advanced | What it does |
|----------|--------------|
| 🔐 Agent-blind secrets | Encrypted secret storage, injected at execution time, not exposed to agent text |
| 👯 Multi-agent policies | Isolated/shared/group memory visibility for multiple named agents |
| 🔄 Git sync | Identity and memory can be versioned in your own remote |
| 📦 SDK + middleware | Typed client, React hooks, and Vercel AI SDK middleware |
| 🔌 MCP aggregation | Register MCP servers once, expose across connected harnesses |
| 👥 Team controls | RBAC, token policy, and rate limits for shared deployments |
| 🏪 Ecosystem installs | Install skills and MCP servers from skills.sh and ClawHub |
| ⚖️ Apache 2.0 | Fully open source, forkable, and self-hostable |

When memory is wrong

Memory quality is not just recall quality. It is governance quality.

Signet is built to support:

  • provenance inspection (where a memory came from)
  • scoped visibility controls (who can see what)
  • memory repair (edit, supersede, delete, or reclassify)
  • transcript fallback (verify extracted memory against raw source)
  • lifecycle controls (retention, decay, and conflict handling)
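Lifecycle controls such as decay are often modeled as a recency weight on each memory's retrieval score. A minimal sketch, assuming exponential decay with a 30-day half-life (the half-life and the pinning rule are illustrative, not Signet's actual policy):

```typescript
// Exponential recency decay: a memory's effective weight halves every
// `halfLifeDays`. The 30-day half-life is an assumed illustration.

function decayWeight(ageDays: number, halfLifeDays = 30): number {
  return Math.pow(0.5, ageDays / halfLifeDays);
}

// Pinned memories (e.g. promoted decisions that must always surface)
// bypass decay entirely in this sketch:
function effectiveScore(base: number, ageDays: number, pinned: boolean): number {
  return pinned ? base : base * decayWeight(ageDays);
}
```

Under this model, retention becomes a threshold question: memories whose effective score falls below a floor are candidates for archival rather than surfacing.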

Harness support

Signet is not a harness. It doesn't replace Claude Code, OpenClaw, or OpenCode — it runs alongside them as an enhancement. Bring the harness you already use. Signet handles the memory layer underneath it.

| Harness | Status | Integration |
|---------|--------|-------------|
| Claude Code | Supported | Hooks |
| OpenCode | Supported | Plugin + Hooks |
| OpenClaw | Supported | Runtime plugin + NemoClaw compatible |
| Codex | Supported | Hooks + MCP server |
| Gemini CLI | Planned | |

Don't see your favorite harness? File an issue and request it!

LoCoMo Benchmark

LoCoMo is the standard benchmark for conversational memory systems. No standardized leaderboard exists — each system uses different judge models, question subsets, and evaluation prompts. These numbers are collected from published papers and repos.

| Rank | System | Score | Metric | Open Source | Local? | LLM at Search? |
|------|--------|-------|--------|-------------|--------|----------------|
| 1 | Kumiho | 97.5% adv, 0.565 F1 | Official F1 + adv subset | SDK open | No | Yes |
| 2 | EverMemOS | 93.05% | Judge (self-reported) | No | No | Yes |
| 3 | MemU | 92.09% | Judge | Yes | No | Yes |
| 4 | MemMachine | 91.7% | Judge | No | No | Yes |
| 5 | Hindsight | 89.6% | Judge | Yes (MIT) | No | Yes |
| 6 | SLM V3 Mode C | 87.7% | Judge | Yes (MIT) | Partial | Yes |
| 7 | Signet | 87.5% | Judge (GPT-4o) | Yes (Apache) | Yes | No |
| 8 | Zep/Graphiti | ~85% | Judge (third-party est.) | Partial | No | Yes |
| 9 | Letta/MemGPT | ~83% | Judge | Yes (Apache) | No | Yes |
| 10 | Engram | 80% | Judge | Yes | No | Yes |
| 11 | SLM V3 Mode A | 74.8% | Judge | Yes (MIT) | Yes | No |
| 12 | Mem0+Graph | 68.4% | J-score (disputed) | Partial | No | Yes |
| 13 | SLM Zero-LLM | 60.4% | Judge | Yes (MIT) | Yes | No |
| 14 | Mem0 (independent) | ~58% | Judge | Partial | No | Yes |

Current Signet run: 87.5% answer accuracy, 100% Hit@10 retrieval, MRR 0.615 on an 8-question sample.

We treat this as an encouraging early signal, not a final claim. The sample size is small and larger-scale runs are in progress.

What this result does show today:

  • retrieval hit rate was 100% for this run (no empty recalls)
  • the correct supporting memory typically surfaced near the top (MRR 0.615)
  • search-time recall operated without extra LLM inference calls

See Benchmarks for methodology, progression, and how to run your own evaluation.

Install (detailed)

BASH
bun add -g signetai        # or: npm install -g signetai
signet setup               # interactive setup wizard

The wizard initializes $SIGNET_WORKSPACE/, configures your harnesses, sets up an embedding provider, creates the database, and starts the daemon.

Path note: $SIGNET_WORKSPACE means your active Signet workspace path. Default is ~/.agents, configurable via signet workspace set <path>.

Tell your agent to install it

Paste this to your AI agent:

Install and fully configure Signet AI by following this guide exactly: https://signetai.sh/skill.md

CLI use

BASH
signet status              # check daemon health
signet dashboard           # open the web UI

signet remember "prefers bun over npm"
signet recall "coding preferences"

Multi-agent

Multiple named agents share one daemon and database. Each agent gets its own identity directory (~/.agents/agents/<name>/) and configurable memory visibility:

BASH
signet agent add alice --memory isolated   # alice sees only her own memories
signet agent add bob --memory shared       # bob sees all global memories
signet agent add ci --memory group --group eng  # ci sees memories from the eng group

signet agent list                          # roster + policies
signet remember "deploy key" --agent alice --private  # alice-only secret
signet recall "deploy" --agent alice       # scoped to alice's visible memories
signet agent info alice                    # identity files, policy, memory count
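The isolated/shared/group policies above boil down to a read-visibility predicate evaluated per memory. A simplified sketch (the record shapes and field names are assumptions, not Signet's schema; the actual enforcement happens at the SQL layer):

```typescript
// Simplified read-policy check mirroring the isolated / shared / group
// policies shown above. Field names are illustrative, not Signet's schema.

type Policy = "isolated" | "shared" | "group";
type Agent = { name: string; policy: Policy; group?: string };
type Memory = { ownerAgent: string; ownerGroup?: string; isPrivate: boolean };

function canRead(reader: Agent, mem: Memory): boolean {
  if (mem.ownerAgent === reader.name) return true; // own memories always visible
  if (mem.isPrivate) return false;                 // private = owner-only
  switch (reader.policy) {
    case "isolated":
      return false;                                // sees only own memories
    case "shared":
      return true;                                 // sees all non-private memories
    case "group":
      return mem.ownerGroup !== undefined
          && mem.ownerGroup === reader.group;      // same group only
  }
}
```

A predicate like this translates directly into a WHERE clause appended to every recall query, which is how per-agent scoping stays enforceable at read time.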

OpenClaw users get zero-config routing — session keys like agent:alice:discord:direct:u123 are parsed automatically; no agentId header needed.
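A session key like agent:alice:discord:direct:u123 encodes routing positionally, which is why no separate agentId header is needed. A sketch of how such a key could be parsed (the segment names beyond the agent id are guesses from the example, not a documented format):

```typescript
// Parses a session key of the assumed form "agent:<name>:<platform>:<kind>:<peer>".
// Segment meanings are inferred from the example key, not a documented spec.

type SessionRoute = { agentId: string; platform: string; kind: string; peer: string };

function parseSessionKey(key: string): SessionRoute | null {
  const parts = key.split(":");
  if (parts.length !== 5 || parts[0] !== "agent") return null; // not an agent-scoped key
  const [, agentId, platform, kind, peer] = parts;
  return { agentId, platform, kind, peer };
}
```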

In connected harnesses, skills work directly:

/remember critical: never commit secrets to git
/recall release process

How it works

session ends
  → distillation engine extracts entities, facts, and relationships
  → knowledge graph links them to existing memory
  → decisions auto-detected and promoted to always-surface constraints
  → raw transcript preserved alongside extracted facts (lossless retention)
  → predictive scorer ranks candidates against your interaction patterns
  → post-fusion dampening separates signal from noise
  → right context injected before the next prompt starts

No configuration required. No tool calls. The pipeline runs in the background and the agent wakes up with its memory intact.
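The flow above can be pictured as a typed pipeline from transcript to injected context. Every type and stage body below is an illustrative stand-in (naive sentence splitting, a no-op scorer), not the real distillation engine:

```typescript
// Illustrative end-to-end shape of the memory pipeline: transcript in,
// context block out. All stage logic is a placeholder, not Signet's code.

type Transcript = { sessionId: string; text: string };
type Fact = { subject: string; statement: string };

// extraction: distill the raw transcript into candidate facts
const extract = (t: Transcript): Fact[] =>
  t.text.split(". ").filter(Boolean).map(s => ({ subject: t.sessionId, statement: s }));

// scoring: a real scorer would rank by learned relevance; this is identity
const score = (facts: Fact[]): Fact[] => facts;

// injection: render the winning facts as a context block for the next prompt
const inject = (facts: Fact[]): string =>
  facts.map(f => `- ${f.statement}`).join("\n");

function pipeline(t: Transcript): string {
  return inject(score(extract(t)));
}
```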

Read more: Why Signet · Architecture · Knowledge Graph · Pipeline

Architecture

CLI (signet)
  setup, knowledge, secrets, skills, hooks, git sync, service mgmt

Daemon (@signet/daemon, localhost:3850)
  |-- HTTP API (modular endpoints for memory, retrieval, auth, and tooling)
  |-- Distillation Layer
  |     extraction -> decision -> graph -> retention
  |-- Retrieval
  |     traversal-primary -> cosine re-scoring -> dampening -> hybrid fallback
  |-- Lossless Transcripts
  |     raw session storage -> expand-on-recall join
  |-- Hints Worker
  |     prospective indexing -> FTS5 index
  |-- Inline Entity Linker
  |     write-time entity extraction (no LLM), decision auto-protection
  |-- Predictive Scorer
  |     entity-weight traversal, per-user trained model
  |-- Document Worker
  |     ingest -> chunk -> embed -> index
  |-- MCP Server
  |     tool registration, aggregation, blast radius endpoint
  |-- Auth Middleware
  |     local / team / hybrid, RBAC, rate limiting
  |-- File Watcher
  |     identity sync, per-agent workspace sync, git auto-commit
  |-- Multi-Agent
  |     roster sync, agent_id scoping, read-policy SQL enforcement

Core (@signet/core)
  types, identity, SQLite, hybrid search, graph traversal

SDK (@signet/sdk)
  typed client, React hooks, Vercel AI SDK middleware

Connectors
  claude-code, opencode, openclaw, codex

Packages

| Package | Role |
|---------|------|
| @signet/core | Types, identity, SQLite, hybrid + graph search |
| @signet/cli | CLI, setup wizard, dashboard |
| @signet/daemon | API server, distillation layer, auth, analytics, diagnostics |
| @signet/sdk | Typed client, React hooks, Vercel AI SDK middleware |
| @signet/connector-base | Shared connector primitives and utilities |
| @signet/connector-claude-code | Claude Code integration |
| @signet/connector-opencode | OpenCode integration |
| @signet/connector-openclaw | OpenClaw integration |
| @signet/connector-codex | Codex CLI integration |
| @signet/opencode-plugin | OpenCode runtime plugin: memory tools and session hooks |
| @signetai/signet-memory-openclaw | OpenClaw runtime plugin |
| @signet/extension | Browser extension for Chrome and Firefox |
| @signet/tray | Desktop system tray application |
| @signet/native | Native accelerators |
| predictor | Predictive memory scorer sidecar (Rust) |
| signetai | Meta-package (signet binary) |

Documentation

Research

| Paper / Project | Relevance |
|-----------------|-----------|
| Lossless Context Management (Voltropy, 2026) | Hierarchical summarization, guaranteed convergence. Patterns adapted in LCM-PATTERNS.md. |
| Recursive Language Models (Zhang et al., 2026) | Active context management. LCM builds on and departs from RLM's approach. |
| acpx (OpenClaw) | Agent Client Protocol. Structured agent coordination. |
| lossless-claw (Martian Engineering) | LCM reference implementation as an OpenClaw plugin. |
| openclaw (OpenClaw) | Agent runtime reference. |
| arscontexta | Agentic notetaking patterns. |
| ACAN (Hong et al.) | LLM-enhanced memory retrieval for generative agents. |
| Kumiho (Park et al., 2026) | Prospective indexing: hypothetical query generation at write time. Reports 0.565 F1 on the official split and 97.5% on the adversarial subset. |

Development

BASH
git clone https://github.com/Signet-AI/signetai.git
cd signetai

bun install
bun run build
bun test
bun run lint
BASH
cd packages/daemon && bun run dev        # Daemon dev (watch mode)
cd packages/cli/dashboard && bun run dev # Dashboard dev

Requirements: Node.js 18+, Bun, Ollama (recommended) or OpenAI API key. macOS or Linux.

Contributing

See CONTRIBUTING.md. Build on existing patterns. Open an issue before contributing significant features. Read the AI Policy before submitting AI-assisted work.

Contributors

NicholaiVogelBusyBee3333stephenwoska2-cpuPatchyToesaaf2tbz

License

Apache-2.0.


signetai.sh · docs · spec · discussions · issues