
Multi-model AI context layer
Freemium
SurrealDB is a next-generation multi-model database designed to serve as the unified context layer for AI agents. Unlike traditional databases that force a choice between document, graph, or relational models, SurrealDB natively supports all of these paradigms alongside vector search and time-series data. By consolidating these data structures into a single engine, it eliminates the need for complex data synchronization pipelines between disparate databases. It features its own query language, SurrealQL, which expresses complex join operations and graph traversals in a single statement, significantly reducing latency for RAG (Retrieval-Augmented Generation) pipelines and agentic memory management.
SurrealDB combines document, graph, relational, and vector data models into a single engine. This eliminates the 'database sprawl' common in AI stacks, where developers previously had to manage separate stores for relational data and vector embeddings. By unifying these, developers can perform ACID-compliant transactions across graph relationships and vector similarity searches in one atomic query, ensuring data consistency and reducing architectural complexity.
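As a rough sketch of what such an atomic query can look like, the SurrealQL below creates a document with an embedding and links it to its author in one transaction. The table names (document, user), the authored edge, and the record IDs are illustrative, not part of any fixed schema.

```surql
-- Illustrative sketch: write a document, its vector embedding, and a
-- graph edge to its author as a single atomic unit of work.
BEGIN TRANSACTION;

CREATE document:intro SET
    title = "Getting started",
    embedding = [0.12, 0.48, 0.91];

RELATE user:alice->authored->document:intro;

COMMIT TRANSACTION;
```

If any statement fails, the whole transaction rolls back, so the graph edge can never point at a document that was not written.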
The engine includes built-in support for vector embeddings, allowing for high-performance similarity searches (k-NN) directly within the database. It supports various distance metrics like Cosine, Euclidean, and Manhattan. By keeping vectors alongside the associated metadata and graph relationships, it enables agents to perform 'context-aware' retrieval, fetching not just the most similar document, but also related entities and historical time-series data in a single request.
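A minimal sketch of vector search in SurrealQL, assuming a hypothetical document table with an embedding field (a 3-dimensional vector here for brevity; real embeddings are typically hundreds of dimensions):

```surql
-- Define an M-Tree vector index over the embedding field,
-- using cosine distance (Euclidean and Manhattan are also supported).
DEFINE INDEX document_embedding ON document
    FIELDS embedding MTREE DIMENSION 3 DIST COSINE;

-- Fetch the 5 nearest documents to a query vector bound to
-- $query_vec, using the k-NN operator <|K|>.
SELECT id, title,
    vector::similarity::cosine(embedding, $query_vec) AS score
FROM document
WHERE embedding <|5|> $query_vec
ORDER BY score DESC;
```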
SurrealQL is a powerful, SQL-inspired language designed for modern data structures. It supports advanced features like nested subqueries, graph traversal (e.g., '->follows->user'), and built-in functions for data manipulation. Unlike standard SQL, it is optimized for non-relational data, allowing developers to query deeply nested JSON documents and graph edges without complex joins or external processing, which is critical for real-time agentic decision-making.
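For example, a multi-hop traversal can be written inline in a single statement; the follows and authored edge names and the post table below are illustrative:

```surql
-- Two-hop graph traversal, no joins: titles of posts written by
-- the users that user:alice follows.
SELECT ->follows->user->authored->post.title AS feed
FROM user:alice;
```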
SurrealDB supports live queries, which allow clients to subscribe to data changes in real-time via WebSockets. When a record is created, updated, or deleted, the database pushes the change to the client. This is essential for AI agents that need to react to state changes in the environment immediately, enabling event-driven architectures without the need for polling or external message brokers like Kafka.
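A live query is declared with the same SELECT syntax; the task table and the assignee field below are illustrative:

```surql
-- Subscribe to all changes on the 'task' table; the server pushes
-- CREATE / UPDATE / DELETE notifications over the WebSocket.
LIVE SELECT * FROM task;

-- A filtered subscription: only changes to tasks assigned
-- to a particular agent record.
LIVE SELECT * FROM task WHERE assignee = agent:planner;
```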
The database can run in a distributed, highly available cluster, a single-node server, or even embedded directly within an application (in-memory or on-disk). This flexibility makes it suitable for everything from edge-computing AI agents running on local devices to large-scale cloud-native applications. It provides a consistent API across all deployment modes, allowing developers to prototype locally and scale to production without changing their codebase.
Developers use SurrealDB to store both raw documents and their vector embeddings. When an agent queries for information, it performs a single query that retrieves the relevant vector-matched document and its associated graph-based metadata, providing the LLM with richer, more structured context than a standard vector database.
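A sketch of such a combined retrieval, assuming the same hypothetical document table with an indexed embedding field plus authored and tagged graph edges:

```surql
-- Single round trip for RAG context: vector-match documents against
-- the question embedding, then pull graph-linked metadata for each hit.
SELECT
    title,
    content,
    <-authored<-user.name AS authors,
    ->tagged->topic.name AS topics
FROM document
WHERE embedding <|3|> $question_vec;
```

The LLM receives not just the top-matching text but the authors and topics attached to it, without a second query against a separate store.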
AI agents use SurrealDB to store long-term episodic memory. By using graph edges to link past interactions, user preferences, and time-series logs, the agent can traverse its own history to provide personalized responses based on complex, multi-dimensional relationships rather than simple keyword matching.
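A minimal sketch of episodic memory along these lines; the interaction table and the had edge are illustrative names:

```surql
-- Record one interaction and link it to the user via a graph edge.
CREATE interaction:one SET
    summary = "User asked about billing",
    created_at = time::now();

RELATE user:alice->had->interaction:one;

-- Later, the agent traverses its own history for this user.
SELECT ->had->interaction.summary AS history
FROM user:alice;
```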
Companies build dynamic knowledge graphs where entities and relationships are updated in real-time. SurrealDB allows these entities to be queried as documents while maintaining the integrity of the graph, enabling agents to navigate complex organizational or technical hierarchies instantly.
Need a robust backend to manage RAG pipelines and agent memory. They require high-performance vector search combined with structured data to provide LLMs with accurate, context-rich information.
Looking to simplify their tech stack by replacing multiple specialized databases (e.g., Postgres, Pinecone, Neo4j) with a single, unified solution that handles all data types efficiently.
Require a lightweight, embeddable database that can run locally on edge devices while maintaining the power of a full-featured multi-model database for offline AI processing.
Open source (Apache 2.0). SurrealDB Cloud offers a free tier, with Pro and Enterprise plans available for production-grade scaling and support.