Gnosys gives LLMs a centralized brain that survives across sessions and projects. 50+ MCP tools, 8 LLM providers, federated search, Web Knowledge Base for serverless chatbots, sandbox runtime, process tracing, and Dream Mode consolidation. Works with any MCP client.
npm install -g gnosys
A centralized brain with 50+ MCP tools. Sub-10ms reads, federated search, Web Knowledge Base, sandbox runtime, process tracing, and Dream Mode consolidation. No vector databases, no black boxes.
One ~/.gnosys/gnosys.db shared across all projects. 6-table schema with project_id and scope columns. Sub-10ms reads, automatic backups, one-command Obsidian export.
Cross-scope search with tier boosting: current project 1.8x, project 1.5x, user 1.0x, global 0.7x. Recency and reinforcement boosting. Ambiguity detection across projects.
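The tier boosting above amounts to a scope-dependent multiplier on a memory's base relevance score. A minimal sketch, using the multipliers stated here; the way Gnosys actually combines them with recency and reinforcement boosts is an illustrative assumption, not the real internals:

```typescript
// Tier multipliers as described: current project 1.8x, project 1.5x,
// user 1.0x, global 0.7x.
type Scope = "current-project" | "project" | "user" | "global";

const TIER_BOOST: Record<Scope, number> = {
  "current-project": 1.8,
  "project": 1.5,
  "user": 1.0,
  "global": 0.7,
};

// Assumed combination: multiply base relevance by tier, recency, and
// reinforcement factors. Gnosys's actual scoring formula may differ.
function rankScore(
  baseScore: number,
  scope: Scope,
  recencyBoost = 1.0,
  reinforcementBoost = 1.0,
): number {
  return baseScore * TIER_BOOST[scope] * recencyBoost * reinforcementBoost;
}
```

With equal base scores, a hit in the current project outranks the same hit in global knowledge by a factor of 1.8 / 0.7.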
Project, user, and global scopes in one central DB. Project memory stays with code, user preferences follow you everywhere, global knowledge spans the org.
Filter by category, tags, status, author, confidence, dates. Compound lenses combine criteria with AND/OR logic.
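A compound lens is essentially a named filter built from predicates combined with AND/OR. The sketch below shows the idea; the `Memory` fields come from the criteria listed above, but the lens-building API itself is a hypothetical illustration, not the actual Gnosys interface:

```typescript
// Hypothetical sketch of combining filter criteria with AND/OR logic.
interface Memory {
  category: string;
  tags: string[];
  status: string;
  confidence: number;
}

type Predicate = (m: Memory) => boolean;

const byCategory = (c: string): Predicate => (m) => m.category === c;
const byTag = (t: string): Predicate => (m) => m.tags.includes(t);
const minConfidence = (x: number): Predicate => (m) => m.confidence >= x;

const and = (...ps: Predicate[]): Predicate => (m) => ps.every((p) => p(m));
const or = (...ps: Predicate[]): Predicate => (m) => ps.some((p) => p(m));

// Lens: decisions tagged "auth", OR anything with confidence >= 0.9.
const lens = or(
  and(byCategory("decision"), byTag("auth")),
  minConfidence(0.9),
);
```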
Every memory write is tracked in the audit log. Export to Obsidian-compatible markdown anytime with gnosys export. View full version history and non-destructively roll back any memory.
Cross-reference memories with [[wikilinks]]. Build a knowledge graph with backlinks, see connections, find orphaned links.
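The mechanics behind backlinks and orphan detection can be sketched in a few lines: extract `[[wikilinks]]` from each memory body and flag any link whose target title doesn't exist. This is an illustrative sketch of the idea, not Gnosys's own graph implementation:

```typescript
// Extract [[wikilink]] targets from a memory body.
function extractWikilinks(body: string): string[] {
  return [...body.matchAll(/\[\[([^\]]+)\]\]/g)].map((m) => m[1]);
}

// A link is orphaned when no memory title matches its target.
// `memories` maps memory title -> body text (assumed shape).
function findOrphans(memories: Map<string, string>): string[] {
  const titles = new Set(memories.keys());
  const orphans = new Set<string>();
  for (const body of memories.values()) {
    for (const link of extractWikilinks(body)) {
      if (!titles.has(link)) orphans.add(link);
    }
  }
  return [...orphans];
}
```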
Feed raw text and an LLM structures it into an atomic memory with title, tags, category, and relevance keywords.
Idle-time consolidation: confidence decay, self-critique, summaries, and relationship discovery. Like biological sleep for your knowledge base.
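Confidence decay of this kind is typically modeled as exponential decay over time since last reinforcement. A minimal sketch; the half-life value and curve shape are assumptions for illustration, since the exact decay function isn't documented here:

```typescript
// Assumed model: confidence halves every `halfLifeDays` days without
// reinforcement. Gnosys's actual decay curve may differ.
function decayedConfidence(
  confidence: number,
  daysSinceReinforced: number,
  halfLifeDays = 90,
): number {
  return confidence * Math.pow(0.5, daysSinceReinforced / halfLifeDays);
}
```

Reinforcing a memory resets the clock, so frequently used knowledge keeps its confidence while stale entries fade.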
Import thousands of records from CSV, JSON, or JSONL. LLM-powered ingestion generates keyword clouds automatically. Batch commits, dedup, and resume support.
Central project registry, auto-detection from .git/package.json, project briefings, user preferences, and agent rules generation for Cursor and Claude Code.
Turn any website into a /knowledge/ directory of searchable markdown. Pre-computed JSON index, zero-dependency runtime. Powers Sir Chats‑A‑Lot. Supports llms.txt for AI discoverability.
Persistent background process with Unix socket server. Agents import a tiny helper library and call memory operations like regular code. Near-zero context cost, no MCP overhead.
Build call chains from source code with leads_to, follows_from, and requires relationships. Reflection API updates confidence based on real-world outcomes.
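A process trace like this is a directed graph of steps joined by the three relationship types, and a call chain is a walk along leads_to edges. The data shapes below are illustrative assumptions, not the actual Gnosys trace format:

```typescript
// Relationship types named in the description.
type Relation = "leads_to" | "follows_from" | "requires";

interface Edge {
  from: string;
  to: string;
  rel: Relation;
}

// Follow leads_to edges from a starting step until the chain ends
// or a cycle is detected.
function chainFrom(start: string, edges: Edge[]): string[] {
  const chain = [start];
  let current = start;
  for (;;) {
    const next = edges.find((e) => e.from === current && e.rel === "leads_to");
    if (!next || chain.includes(next.to)) break; // end of chain or cycle
    chain.push(next.to);
    current = next.to;
  }
  return chain;
}
```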
Cross-project status dashboard with readiness scores, blocker tracking, and production-readiness assessment. gnosys status --web opens an interactive HTML dashboard.
gnosys update-status walks AI agents through an 8-section checklist to create comprehensive status snapshots covering progress, blockers, risks, and next steps.
Ingest PDFs, images (OCR), audio, video, and DOCX files as structured memories. Automatic text extraction feeds into the LLM structuring pipeline.
Gnosys is an MCP server. Point your AI client at it, and your agent gains persistent memory.
{
  "mcpServers": {
    "gnosys": {
      "command": "gnosys",
      "args": ["serve"]
    }
  }
}
$ claude mcp add gnosys gnosys serve
[mcp.gnosys]
type = "local"
command = ["gnosys", "serve"]
{
  "mcp": {
    "gnosys": {
      "type": "local",
      "command": ["gnosys", "serve"]
    }
  }
}
---
description: Gnosys persistent memory
alwaysApply: true
---

# Gnosys Memory System

## Retrieve memories
- At task start, call gnosys_discover with keywords
- Load results with gnosys_read
- Trigger on: "recall", "remember when", "what did we decide"

## Write memories
- Trigger on: "remember", "memorize", "save this", "don't forget"
- Also write on decisions, preferences, specs, post-task findings

## Key Tools
gnosys_discover → find memories
gnosys_add → write
gnosys_read → load content
gnosys_update → modify
gnosys_hybrid_search → best search
gnosys_ask → Q&A with citations
… plus 25 more tools (maintain, history, graph, etc.)
# Gnosys Memory

This project uses Gnosys for persistent memory via MCP.

## Read first
- At task start, call gnosys_discover with keywords
- Load results with gnosys_read
- On "recall", "remember when", "what did we decide" → search first

## Write automatically
- On "remember", "memorize", "save this" → call gnosys_add
- Decisions/preferences → commit to decisions/
- Specs → commit BEFORE starting work
- After implementation → commit findings

## Key tools
| Action | Tool |
| --- | --- |
| Find | gnosys_discover → gnosys_read |
| Search | gnosys_hybrid_search, gnosys_ask |
| Write | gnosys_add, gnosys_add_structured |
| Update | gnosys_update, gnosys_reinforce |
Run npm install -g gnosys, then gnosys setup to configure your LLM provider and connect to your IDE. Works with Claude Code, Cursor, Codex, Claude Desktop, or any MCP client.
gnosys init registers the project in the central brain and creates .gnosys/gnosys.json for project identity.
gnosys sync --target all writes tool instructions into your IDE's rules file (CLAUDE.md, .cursorrules, etc.). Use --global for machine-wide rules.
Decisions, architecture choices, conventions, requirements — all persisted as atomic markdown files your agent can discover and reference across sessions.
SQLite is the sole source of truth. Dream Mode consolidates knowledge while you sleep. Export to Obsidian-compatible markdown anytime with one command.
Open source, MIT licensed. Built for developers who want their AI to remember what matters.