
Graphiti Claude Code Integration Guide

What is this guide?

GuardKit accesses the Graphiti knowledge graph via two complementary methods, depending on context: the MCP server (when running inside a Claude Code session) and the Python client (for CLI workflows and AutoBuild). This guide covers both, explaining when each is used and how they stay in sync.



Architecture Overview

Two Access Methods, One Knowledge Graph

Graphiti is a temporal knowledge graph backed by FalkorDB. GuardKit accesses it in two ways:

Method          When Used                                  Tools / API
MCP server      Inside a Claude Code session               mcp__graphiti__search_nodes, mcp__graphiti__search_memory_facts, mcp__graphiti__add_memory
Python client   CLI commands, AutoBuild, seeding scripts   guardkit.knowledge.get_graphiti(), guardkit graphiti * CLI

Both methods connect to the same FalkorDB instance and read/write the same group IDs. There is no separate MCP-only or Python-only dataset — they are fully interchangeable views of the same graph.

When Each Method Is Active

Claude Code session open
        ├── MCP tools available? (mcp__graphiti__*)
        │       YES → Use MCP access
        │             (see .claude/rules/graphiti-knowledge-graph.md)
        └── MCP tools NOT available
                    └── Use Python client access
                          (guardkit graphiti search / get_graphiti())
                          (see .claude/rules/graphiti-knowledge.md)

The graphiti-knowledge-graph.md rule file loads in sessions where MCP tools are available and contains the correct group IDs and search patterns for MCP use.

The graphiti-knowledge.md rule file covers Python client access for CLI workflows.

Both rule files cross-reference each other so you always land in the right place.


Infrastructure Topology

┌──────────────────────────────────────────────────────────────┐
│                      Developer Machine                        │
│                                                              │
│  ┌─────────────────┐        ┌──────────────────────────────┐ │
│  │  Claude Code    │        │  guardkit CLI / AutoBuild    │ │
│  │  Session        │        │  (Python client)             │ │
│  │                 │        │                              │ │
│  │  mcp__graphiti__│        │  guardkit.knowledge          │ │
│  │  search_nodes   │        │  get_graphiti()              │ │
│  │  add_memory     │        │  GraphitiClient              │ │
│  └────────┬────────┘        └──────────────┬───────────────┘ │
│           │  MCP protocol                  │  Direct Redis    │
│           │  (stdio / HTTP)                │  protocol        │
└───────────┼────────────────────────────────┼─────────────────┘
            │                                │
            │         Tailscale VPN          │
            ▼                                ▼
┌───────────────────────────────────────────────────────────────┐
│                     GB10 Workstation (promaxgb10-41b1)         │
│                                                               │
│  ┌──────────────────────┐   ┌─────────────────────────────┐  │
│  │  Graphiti MCP Server │   │  vLLM (port 8000)           │  │
│  │  (Python process,    │   │  Qwen2.5-14B                │  │
│  │   launched by        │   │  (entity extraction)        │  │
│  │   .mcp.json)         │   └─────────────────────────────┘  │
│  └──────────┬───────────┘   ┌─────────────────────────────┐  │
│             │               │  vLLM (port 8001)           │  │
│             │               │  nomic-embed-text-v1.5      │  │
│             │               │  (embeddings, 1024 dims)    │  │
│             │               └─────────────────────────────┘  │
└─────────────┼─────────────────────────────────────────────────┘
              │ Redis protocol
┌─────────────────────────────────────────────────────────────────┐
│                   Synology NAS (whitestocks)                     │
│                                                                  │
│   FalkorDB (port 6379)   ·   Browser UI (port 3000)             │
│   Knowledge graph storage                                        │
└─────────────────────────────────────────────────────────────────┘

Key infrastructure components:

Component             Location                  Purpose
FalkorDB              whitestocks:6379          Graph storage (Synology NAS via Tailscale)
Graphiti MCP server   GB10 workstation          Claude Code MCP access
vLLM LLM              promaxgb10-41b1:8000      Entity extraction (Qwen2.5-14B)
vLLM Embeddings       promaxgb10-41b1:8001      Vector embeddings (nomic-embed-text-v1.5, 1024 dims)
FalkorDB Browser      http://whitestocks:3000   Graph inspection UI

Setup

Prerequisites

  • Tailscale connected (for access to whitestocks and promaxgb10-41b1)
  • GuardKit installed (pip install guardkit-py)
  • FalkorDB running on NAS (see below)

Start FalkorDB on NAS

ssh richardwoollcott@whitestocks
cd /volume1/guardkit/docker
sudo docker-compose -f docker-compose.falkordb.yml up -d

Initialize a New Project with MCP

# Standard init (inherits Graphiti settings from parent project)
guardkit init --copy-graphiti

# Or specify source explicitly
guardkit init --copy-graphiti-from /path/to/parent/project

Using --copy-graphiti ensures your project inherits the correct FalkorDB host, embedding model, and dimension settings. Without it, the project defaults to OpenAI embeddings, which causes dimension mismatches if the shared FalkorDB was seeded with nomic-embed-text-v1.5 (1024 dims vs OpenAI's 1536 dims).

Seed System Knowledge

After init, seed GuardKit system knowledge (one-time per FalkorDB instance):

guardkit graphiti seed-system

Seed project-specific knowledge:

guardkit graphiti capture --interactive

Configure MCP Server (.mcp.json)

To enable MCP access in Claude Code sessions, add the Graphiti MCP server to .mcp.json:

{
  "mcpServers": {
    "graphiti": {
      "command": "uvx",
      "args": [
        "--from", "graphiti-core[falkordb]",
        "graphiti-service",
        "--transport", "stdio",
        "--group-id", "guardkit"
      ],
      "env": {
        "FALKORDB_HOST": "whitestocks",
        "FALKORDB_PORT": "6379",
        "OPENAI_API_KEY": "not-used",
        "LLM_BASE_URL": "http://promaxgb10-41b1:8000/v1",
        "LLM_MODEL": "neuralmagic/Qwen2.5-14B-Instruct-FP8-dynamic",
        "EMBEDDER_BASE_URL": "http://promaxgb10-41b1:8001/v1",
        "EMBEDDER_MODEL": "nomic-embed-text-v1.5"
      }
    }
  }
}

Note: The .mcp.json in this repository currently has an empty mcpServers object because the MCP server is configured per-developer. Each developer adds their own server configuration to suit their infrastructure.

After updating .mcp.json, restart Claude Code. If the MCP server starts correctly, you will see mcp__graphiti__* tools available in your session.


Configuration Files

.guardkit/graphiti.yaml — Python Client Configuration

This file configures the Python client used by CLI commands and AutoBuild:

# Project ID — prefix for all project-specific group IDs
project_id: guardkit

# Enable/disable Graphiti
enabled: true

# Graph database backend
graph_store: falkordb
falkordb_host: whitestocks
falkordb_port: 6379
timeout: 30.0

# Parallelism for seeding (1-10, default: 3)
max_concurrent_episodes: 3

# LLM for entity extraction (vLLM in this project)
llm_provider: vllm
llm_base_url: http://promaxgb10-41b1:8000/v1
llm_model: neuralmagic/Qwen2.5-14B-Instruct-FP8-dynamic
llm_max_tokens: 4096

# Embedding model (must match what FalkorDB was seeded with)
embedding_provider: vllm
embedding_base_url: http://promaxgb10-41b1:8001/v1
embedding_model: nomic-embed-text-v1.5

# System-scoped group IDs for seed-system command
group_ids:
  - product_knowledge
  - command_workflows
  - architecture_decisions

Critical: embedding_model must match the model used when FalkorDB was first seeded. Changing the model after seeding causes dimension mismatches. Use --copy-graphiti when initializing new projects to inherit the correct settings automatically.

.mcp.json — MCP Server Configuration

Configures the Graphiti MCP server launched by Claude Code. The MCP server connects to the same FalkorDB instance using the same embedding model. The key constraint: the EMBEDDER_MODEL in .mcp.json must match embedding_model in .guardkit/graphiti.yaml.


Project Isolation and Group ID Namespacing

The Problem

Multiple projects can share a single FalkorDB instance. Without isolation, a query for "authentication patterns" in Project B might return Project A's JWT decision instead.

The Solution: project_id Prefixing

The Python client (GraphitiClient) automatically prefixes all project-specific group IDs with {project_id}__. This happens transparently via GraphitiClient.get_group_id().

Example for project_id: guardkit:

Logical Group          Stored As
project_overview       guardkit__project_overview
project_architecture   guardkit__project_architecture
feature_specs          guardkit__feature_specs
task_outcomes          guardkit__task_outcomes
turn_states            guardkit__turn_states

System groups are never prefixed — they're intentionally shared:

Group                    Stored As                Shared?
product_knowledge        product_knowledge        Yes — all projects
command_workflows        command_workflows        Yes — all projects
architecture_decisions   architecture_decisions   Yes — all projects
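The namespacing rule above can be sketched in a few lines. This is illustrative, not GuardKit's actual GraphitiClient.get_group_id() implementation:

```python
# Minimal sketch of the group-ID namespacing rule: system groups pass
# through unchanged, project groups are prefixed with "{project_id}__".
SYSTEM_GROUPS = {"product_knowledge", "command_workflows", "architecture_decisions"}


def resolve_group_id(project_id: str, logical_group: str) -> str:
    """Return the group ID as stored in FalkorDB for a logical group name."""
    if logical_group in SYSTEM_GROUPS:
        return logical_group          # shared across all projects
    return f"{project_id}__{logical_group}"


print(resolve_group_id("guardkit", "project_overview"))   # guardkit__project_overview
print(resolve_group_id("guardkit", "command_workflows"))  # command_workflows
```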

MCP Access and Namespacing

When using MCP tools, pass group IDs as stored (with the prefix already applied):

# Correct — pass the actual stored group IDs
group_ids = [
    "product_knowledge",          # system (no prefix)
    "command_workflows",          # system (no prefix)
    "guardkit__project_overview", # project-specific (prefixed)
    "guardkit__project_decisions",
]

The MCP server does not auto-prefix — you must pass the full group IDs. The .claude/rules/graphiti-knowledge-graph.md file contains the complete group ID reference for this project.

Python Client Access and Namespacing

The Python client auto-prefixes via get_group_id(). Pass the logical (unprefixed) name:

client = get_graphiti()

# Client auto-prefixes project groups
results = await client.search(
    query="authentication patterns",
    group_ids=["project_architecture"],  # stored as guardkit__project_architecture
    num_results=5
)

For system groups, pass them as-is (no prefix needed either way):

results = await client.search(
    query="task-work command workflow",
    group_ids=["command_workflows"],  # system group, no prefix
    num_results=5
)

Troubleshooting

MCP server won't start

Symptom: mcp__graphiti__* tools not available in Claude Code session.

  1. Check .mcp.json has the correct server configuration (see Setup)
  2. Restart Claude Code after modifying .mcp.json
  3. Verify Tailscale is connected and whitestocks and promaxgb10-41b1 are reachable:
    ping whitestocks
    ping promaxgb10-41b1
    
  4. Check FalkorDB is running:
    redis-cli -h whitestocks -p 6379 ping
    # Expected: PONG
    
  5. Check MCP server logs in Claude Code (View → Output → MCP)

Episode written but not retrievable on search (LLM-extraction failure)

Symptom: mcp__graphiti__add_memory returns a successful response ("Episode 'X' queued for processing in group 'Y'"), but a subsequent get_episodes(group_ids=["Y"]) or search_nodes(query=..., group_ids=["Y"]) returns no results — even though the response confirmed the requested group.

Cause: The episode is correctly queued under the requested group, but the queue worker's background LLM-extraction step fails. With graphiti-core 0.28.1 and provider: openai in the MCP server's config, the LLM call goes to https://api.openai.com/v1/responses (the new Responses API) instead of the configured local LLM endpoint — the MCP server's LLM factory openai branch silently ignores config.providers.openai.api_url. With a placeholder API key, the call returns 401 and the episode is dropped after 2 retries.

This is the actual root cause of the symptom that originally motivated TASK-FIX-B1F7's "MCP write group_id coercion" diagnosis — see TASK-INF-5053 audit (docs/state/TASK-INF-5053/audit.md) for the investigation that disambiguated the two.

Detection:

# On promaxgb10-41b1, check the queue worker's recent log
ssh promaxgb10-41b1 "docker logs graphiti-mcp --tail 50 2>&1 | grep -E 'Processing episode|Failed to process|api.openai.com|401'"

# Look for the pattern:
#   services.queue_service - INFO  - Processing episode None for group <X>
#   httpx                  - INFO  - HTTP Request: POST https://api.openai.com/v1/responses "401 Unauthorized"
#   services.queue_service - ERROR - Failed to process episode None for group <X>

If you see those lines, routing is fine and the LLM endpoint is the problem. If episodes process without the 401 path, extraction is working.

Workaround:

  • Use the Python CLI for any write that must persist:
    guardkit graphiti capture-outcome --from-task-file <path> --timeout 300
    
    The CLI uses GraphitiClient directly and is configured via .guardkit/graphiti.yaml, which honours llm_base_url. This is the same fallback that /task-complete Step 2a uses (kept as defence-in-depth — see below).
  • For ad-hoc inline mcp__graphiti__add_memory calls, episodes will queue under the correct group but may not persist if extraction fails. If the write is important, follow up with the CLI path.

Note on the defence-in-depth workaround in /task-complete: Step 2a of installer/core/commands/task-complete.md still parses the MCP response message to detect a hypothetical group_id override and falls back to the CLI when divergence is detected. As of TASK-INF-5053 no such divergence is observed in practice — the server honours group_id. The detection + fallback is retained as cheap regression insurance, not as mitigation for a known live bug.

Status: The real underlying issue (LLM endpoint misrouting in the MCP server's openai provider branch) is tracked as TASK-INF-5054. Resolution will let queued episodes complete extraction so writes become retrievable on search.

Group_id routing itself works correctly. The earlier "MCP write group_id coercion" diagnosis (TASK-FIX-B1F7) was invalidated by direct verification against the running server — see docs/state/TASK-INF-5053/audit.md for the full audit (image SHA, source line numbers, probe response, correlated server log).

Group ID mismatch — no results returned

Symptom: Searches return empty results even though knowledge was seeded.

  1. Always pass explicit group_ids — searching without them returns nothing:
    # Wrong — no group_ids
    mcp__graphiti__search_nodes(query="...")
    
    # Correct — explicit group_ids
    mcp__graphiti__search_nodes(query="...", group_ids=["product_knowledge", "guardkit__project_overview"])
    
  2. Verify the correct prefix for project groups. Check project_id in .guardkit/graphiti.yaml:
    grep project_id .guardkit/graphiti.yaml
    # project_id: guardkit → prefix is "guardkit__"
    
  3. Use the Python client to verify knowledge exists:
    guardkit graphiti search "your query" --group product_knowledge
    
  4. Re-seed if knowledge is missing:
    guardkit graphiti seed-system   # system groups
    guardkit graphiti capture --interactive  # project groups
    

Embedding dimension mismatch

Symptom: Errors like dimension mismatch: expected 1024, got 1536 when seeding or querying.

Cause: The FalkorDB instance was seeded with one embedding model but the current config uses a different model. This is most common when a new project uses the OpenAI default (text-embedding-3-small, 1536 dims) but the shared FalkorDB was seeded with nomic-embed-text-v1.5 (1024 dims).
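To confirm which dimension your embedder actually produces, you can probe any OpenAI-compatible /embeddings endpoint directly. This is a hedged sketch, not a GuardKit command; the endpoint URL and model name are this deployment's values:

```python
# Hedged sketch: measure the vector dimension an OpenAI-compatible
# /embeddings endpoint returns, to compare against what FalkorDB was
# seeded with (1024 for nomic-embed-text-v1.5).
import json
import urllib.request


def vector_length(response_payload: dict) -> int:
    """Dimension of the first embedding in an OpenAI-style /embeddings response."""
    return len(response_payload["data"][0]["embedding"])


def embedding_dimension(base_url: str, model: str) -> int:
    """POST a one-word input to {base_url}/embeddings and measure the result."""
    request = urllib.request.Request(
        f"{base_url}/embeddings",
        data=json.dumps({"model": model, "input": "ping"}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer not-used"},
    )
    with urllib.request.urlopen(request) as response:
        return vector_length(json.load(response))


# Requires Tailscale access to the embedder, e.g.:
# embedding_dimension("http://promaxgb10-41b1:8001/v1", "nomic-embed-text-v1.5")
```

If the reported dimension differs from the one FalkorDB was seeded with, fix the config before writing anything further.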

Fix: Always use --copy-graphiti when initialising new projects on a shared FalkorDB:

guardkit init --copy-graphiti

If already initialised with wrong settings, copy the embedding config from an existing project:

guardkit init --copy-graphiti-from /path/to/working/project

Or manually update .guardkit/graphiti.yaml to match:

embedding_provider: vllm
embedding_base_url: http://promaxgb10-41b1:8001/v1
embedding_model: nomic-embed-text-v1.5

And ensure .mcp.json matches:

"EMBEDDER_MODEL": "nomic-embed-text-v1.5"

Python client connection failure

Symptom: guardkit graphiti status shows Connection: Failed.

  1. Check FalkorDB is reachable:
    redis-cli -h whitestocks -p 6379 ping
    
  2. Start FalkorDB on the NAS if needed:
    ssh richardwoollcott@whitestocks
    cd /volume1/guardkit/docker
    sudo docker-compose -f docker-compose.falkordb.yml up -d
    
  3. Check enabled: true in .guardkit/graphiti.yaml
  4. Verify Tailscale connection

GuardKit degrades gracefully when Graphiti is unavailable — all commands continue to work, skipping knowledge capture and context loading.
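The degrade-gracefully behaviour amounts to wrapping every knowledge lookup so a connection failure yields an empty result instead of aborting the command. A minimal sketch of that pattern (assumed shape, not GuardKit's real code — the search callable stands in for any client call):

```python
# Sketch of the degrade-gracefully pattern: connection failures to
# Graphiti return an empty result set; the calling command continues
# without knowledge context rather than failing.
from typing import Callable, Sequence


def search_with_fallback(
    search: Callable[[str, Sequence[str]], list],
    query: str,
    group_ids: Sequence[str],
) -> list:
    """Return knowledge results, or [] when Graphiti is unreachable."""
    try:
        return search(query, group_ids)
    except (ConnectionError, TimeoutError):
        # Graphiti down: skip knowledge capture / context loading.
        return []
```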


See Also

  • .claude/rules/graphiti-knowledge-graph.md — MCP access: group IDs, search tools, add_memory patterns
  • .claude/rules/graphiti-knowledge.md — Python client access: CLI commands, threading model, seeding
  • docs/guides/graphiti-integration-guide.md — Full Graphiti integration overview
  • docs/guides/graphiti-project-namespaces.md — Multi-project isolation deep-dive
  • docs/guides/graphiti-shared-infrastructure.md — Shared FalkorDB setup
  • .guardkit/graphiti.yaml — Python client configuration
  • .mcp.json — MCP server configuration