Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
Graphiti is an open-source framework by Zep for building temporally-aware knowledge graphs designed specifically for AI agents operating in dynamic environments. Unlike traditional RAG systems that rely on batch document processing, Graphiti enables continuous, real-time integration of user interactions and enterprise data into a queryable graph structure with sub-second latency.

## Why Graphiti Matters

AI agents need memory that evolves in real time. Standard RAG pipelines process documents in batches, creating a static snapshot that quickly becomes stale as new interactions occur. Graphiti addresses this fundamental limitation by treating knowledge as a living graph that updates incrementally with every new piece of information, tracking both when events occurred and when they were recorded.

With over 22,900 GitHub stars and backing from the Zep team, Graphiti has emerged as a leading solution for developers building agents that need to reason about changing state over time.

## Bi-Temporal Data Model

Graphiti's most distinctive technical feature is its bi-temporal data model, which tracks two separate time dimensions for every piece of information: when the event actually occurred in the real world and when it was ingested into the graph. This dual tracking enables agents to answer questions like "What did we know about X at time Y?" and to correctly handle out-of-order data arrival, contradictions, and temporal reasoning that single-timestamp systems cannot support.

## Hybrid Search Architecture

The framework combines three search methods without requiring LLM summarization at query time: semantic embeddings for meaning-based retrieval, BM25 keyword search for exact term matching, and native graph traversal for relationship-based queries. This hybrid approach delivers sub-second query latency, compared with the seconds-to-minutes response times typical of GraphRAG systems that must invoke an LLM to summarize results.
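One common way to combine several ranked result lists without an LLM is reciprocal rank fusion (RRF). Graphiti's actual fusion strategy may differ; the retriever names, example IDs, and the constant `k=60` below are assumptions for illustration only.

```python
# Illustrative sketch: merge ranked results from three retrievers
# (semantic, BM25, graph traversal) with reciprocal rank fusion.
def rrf(rankings, k=60):
    """Score each item by the sum of 1/(k + rank) across all rankings."""
    scores = {}
    for ranking in rankings:
        for rank, item in enumerate(ranking):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank + 1)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["e3", "e1", "e2"]   # embedding-similarity order
bm25     = ["e1", "e3", "e4"]   # keyword-match order
graph    = ["e1", "e2", "e3"]   # graph-proximity order

print(rrf([semantic, bm25, graph]))  # → ['e1', 'e3', 'e2', 'e4']
```

Because fusion is a cheap arithmetic pass over already-ranked lists, the merge step adds essentially no latency on top of the individual retrievers.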
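The bi-temporal model described above can be sketched in a few lines. These are hypothetical record and function names, not Graphiti's internal types; the point is that filtering on ingestion time, rather than event time, answers "what did we know at time Y?" even when data arrives out of order.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    valid_at: datetime     # when the event occurred in the real world
    ingested_at: datetime  # when the system learned about it

def known_at(facts, subject, as_of):
    """What did we know about `subject` at time `as_of`?
    Filter on ingestion time, not event time."""
    return [f for f in facts if f.subject == subject and f.ingested_at <= as_of]

facts = [
    # An event from Jan 5 that arrived out of order and was ingested Jan 10.
    Fact("alice", "role", "engineer",
         valid_at=datetime(2024, 1, 5), ingested_at=datetime(2024, 1, 10)),
    Fact("alice", "role", "manager",
         valid_at=datetime(2024, 3, 1), ingested_at=datetime(2024, 3, 1)),
]

# On Feb 1 only the first fact was known, even though both predate April.
snapshot = known_at(facts, "alice", datetime(2024, 2, 1))
print([f.obj for f in snapshot])  # → ['engineer']
```

A single-timestamp store would have to overwrite or discard one of these records; keeping both dimensions is what makes point-in-time queries and contradiction handling possible.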
## Custom Entity Definitions

Developers can define custom entity types using Pydantic models, allowing the knowledge graph schema to match their specific domain. Whether the application tracks customer interactions, code repositories, medical records, or financial transactions, Graphiti adapts its entity model rather than forcing data into a generic structure. Developer-defined ontologies provide additional control over how information is organized and connected.

## Multi-Database Support

Graphiti supports five graph database backends: Neo4j 5.26+, FalkorDB 1.1.2+, Kuzu 0.11.2+, Amazon Neptune Database Cluster, and Neptune Analytics Graph. The driver system is pluggable and backend-agnostic, implementing 11 operation interfaces that abstract away database-specific details. Teams can choose their preferred infrastructure without modifying application logic.

## MCP Server and REST API

The framework ships with a Model Context Protocol (MCP) server for direct integration with Claude and Cursor, plus a FastAPI-based REST service for general-purpose access. The MCP integration is particularly significant as it allows AI coding assistants and chat agents to natively read from and write to the knowledge graph during conversations, creating persistent memory across sessions.

## LLM Provider Flexibility

Graphiti defaults to OpenAI for its LLM operations but also supports Anthropic, Groq, and Google Gemini as alternative providers. The structured output capability of the chosen LLM is leveraged for reliable entity extraction and relationship identification, ensuring consistent graph quality regardless of the provider selected.

## Enterprise Scalability

The framework supports parallel processing for ingestion and includes community detection algorithms that automatically organize related entities into groups.
These communities provide a hierarchical view of the knowledge graph, enabling efficient navigation of large-scale enterprise datasets without sacrificing query performance.
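The entities-into-groups idea can be illustrated with the simplest possible grouping: connected components over the entity graph. Graphiti's actual community-detection algorithms are more sophisticated than this; the sketch only shows the shape of the output (entities partitioned into navigable clusters), and the edge data is invented.

```python
from collections import defaultdict

def communities(edges):
    """Group entities into connected components of an undirected graph
    (a minimal stand-in for real community detection)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        # Depth-first traversal collects one component.
        stack, group = [node], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.add(cur)
            stack.extend(adj[cur] - seen)
        groups.append(group)
    return groups

edges = [("acme", "invoice-17"), ("invoice-17", "alice"), ("bob", "repo-x")]
print(sorted(len(g) for g in communities(edges)))  # → [2, 3]
```

In a large graph, summarizing each such group once lets an agent navigate cluster-by-cluster instead of scanning every node, which is what keeps query performance flat as the dataset grows.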
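The custom entity definitions mentioned earlier are plain Pydantic models. The class names and fields below are invented for a hypothetical customer-support domain, and the comment about how the types are handed to Graphiti is an assumption based on the pattern described above, not a documented call signature.

```python
from typing import Optional
from pydantic import BaseModel, Field

# Hypothetical domain schema: these types are examples, not part of Graphiti.
class Customer(BaseModel):
    """A customer entity in a support-ticket knowledge graph."""
    name: str
    tier: str = Field(default="free", description="Subscription tier")

class Product(BaseModel):
    """A product the customer asked about."""
    name: str
    version: Optional[str] = None

# In Graphiti, developer-defined models like these would be supplied to the
# ingestion pipeline so extracted entities conform to the domain schema
# instead of a generic node type.
c = Customer(name="Ada", tier="enterprise")
print(c.tier)  # → enterprise
```

Because the schema lives in ordinary Pydantic classes, the same validation and IDE tooling used elsewhere in a Python codebase applies to the graph's ontology.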

Shubhamsaboo
Collection of 100+ production-ready LLM apps with AI agents, RAG, voice agents, and MCP using OpenAI, Anthropic, Gemini, and open-source models
infiniflow
Leading open-source RAG engine with deep document understanding, grounded citations, and agent capabilities, with 73K+ GitHub stars.