API & Integrations
Engram acts as a background knowledge plane for your everyday AI chat environments. You can talk to the background daemon through the Model Context Protocol (MCP) or plain REST calls.
MCP Server Auto-Setup
The fastest way to use Engram is to hook it directly into tools like Cursor, VS Code, or Claude Code. Once the Python MCP server is registered via a standard `stdio` configuration, the agent gains four tools:
- `engram_capture`: Called automatically at the end of a session so Claude can record decisions before closing.
- `engram_context`: Runs automatically at session initialization and fetches prior context based on the current workspace description.
- `engram_warn`: Claude is instructed to call this tool whenever it makes a new architecture decision, checking that you haven't rejected an identical component in the past.
- `engram_stats`: Diagnostics.
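For reference, a typical `stdio` registration looks like the sketch below. The `engram.mcp` module path is an assumption for illustration — substitute the entry point your Engram install documents. In Cursor this lives in `.cursor/mcp.json`; Claude Desktop reads `claude_desktop_config.json`.

```json
{
  "mcpServers": {
    "engram": {
      "command": "python",
      "args": ["-m", "engram.mcp"]
    }
  }
}
```

The CLI installer below writes an equivalent entry for you, so hand-editing is only needed for unsupported clients.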
1-Command CLI Installation
Instead of manually editing JSON config files, run the one-command installer in your terminal; the Engram CLI finds your Cursor and Claude installations automatically:
FastAPI REST Layer
If you're building a custom client or integrating into an enterprise stack, Engram exposes a FastAPI app served locally by Uvicorn at `localhost:8000`.
Endpoint: /ingest
Send session text here for processing. This triggers the LangGraph extraction state machine, which returns structured results on success.
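A minimal client sketch for this endpoint, assuming it accepts a JSON body with a `text` field — the field name is illustrative, so check your Engram version's schema:

```python
import json
from urllib import request

ENGRAM_URL = "http://localhost:8000"  # default local Uvicorn address

def build_ingest_request(session_text: str) -> request.Request:
    # "text" is an assumed field name; adjust it to the real /ingest schema.
    body = json.dumps({"text": session_text}).encode("utf-8")
    return request.Request(
        f"{ENGRAM_URL}/ingest",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ingest(session_text: str) -> dict:
    # Sends the session text and returns the structured extraction result.
    with request.urlopen(build_ingest_request(session_text)) as resp:
        return json.load(resp)
```

Separating request construction from the network call keeps the payload logic testable without a running server.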
Endpoint: /context
Executes the four-level graph-traversal logic. You provide your concerns and a target query; Engram walks the graph and returns an optimally compressed injection string along with counterfactual warnings.
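A sketch of calling this endpoint, assuming a JSON body with `query` and `concerns` fields — both names are assumptions here, so match them to the actual schema:

```python
import json
from urllib import request

ENGRAM_URL = "http://localhost:8000"  # default local Uvicorn address

def build_context_request(query: str, concerns: list[str]) -> request.Request:
    # "query" and "concerns" are assumed field names for illustration.
    body = json.dumps({"query": query, "concerns": concerns}).encode("utf-8")
    return request.Request(
        f"{ENGRAM_URL}/context",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def fetch_context(query: str, concerns: list[str]) -> dict:
    # Returns the compressed injection string plus any counterfactual warnings.
    with request.urlopen(build_context_request(query, concerns)) as resp:
        return json.load(resp)
```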
Endpoint: /search
For fast lookups that skip the heavy synthesis step. Useful if you're writing a raw debug terminal UI that renders direct Chroma vector results overlaid with counterfactual triggers.
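Since this endpoint skips synthesis, a plain GET with query parameters is the natural shape. The `q` and `limit` parameter names below are assumptions for illustration:

```python
import json
from urllib import parse, request

ENGRAM_URL = "http://localhost:8000"  # default local Uvicorn address

def build_search_url(query: str, limit: int = 5) -> str:
    # "q" and "limit" are assumed parameter names; check the real /search spec.
    return f"{ENGRAM_URL}/search?" + parse.urlencode({"q": query, "limit": limit})

def search(query: str, limit: int = 5) -> list:
    # Returns raw Chroma vector hits with no synthesis pass.
    with request.urlopen(build_search_url(query, limit)) as resp:
        return json.load(resp)
```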