Open source vector search startup Qdrant has developed Qdrant Edge, a lightweight vector database designed to run locally on robots, kiosks, mobile devices, and other embedded systems.
Qdrant Edge enables developers to run hybrid and multimodal search locally on edge devices without a connected server process or background threads. Edge devices are typically resource-constrained, with high latency, limited compute, and minimal network access. Qdrant has implemented the core facilities of its cloud-native vector database in the Edge product. Vector databases store embeddings that GenAI applications search to retrieve relevant context when responding to natural language user requests.

André Zayarni, Qdrant CEO and co-founder, said: “Developers need infrastructure that runs where many decisions are made – on the device itself. Qdrant Edge is a clean-slate vector search engine designed for Embedded AI. It brings local search, deterministic performance, and multimodal support into a minimal runtime footprint.”
The Edge product offers full control over lifecycle, memory usage, and in-process execution without background services, Qdrant says. It will support advanced filtering and compatibility with real-time agent workloads. The envisaged applications include robotic navigation with multimodal sensor inputs, local retrieval on smart retail kiosks and point-of-sale systems, and privacy-first assistants running on mobile or embedded hardware.
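To make the idea of in-process, filter-aware vector search concrete, here is a minimal Rust sketch: a brute-force cosine-similarity scan over vectors held in the application's own memory, with a metadata predicate applied before scoring. All names (`Point`, `search`, the `category` payload field) are hypothetical illustrations of the general technique, not Qdrant Edge's actual API.

```rust
// A point pairs a vector with payload metadata that filters can match on.
struct Point {
    id: u64,
    vector: Vec<f32>,
    category: String, // stand-in for arbitrary payload metadata
}

// Cosine similarity between two equal-length vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Return the ids of the `top_k` points matching `filter`,
/// ranked by similarity to `query`. Runs entirely in-process:
/// no server, no background threads.
fn search(points: &[Point], query: &[f32], filter: &str, top_k: usize) -> Vec<u64> {
    let mut scored: Vec<(f32, u64)> = points
        .iter()
        .filter(|p| p.category == filter) // apply the filter before scoring
        .map(|p| (cosine(&p.vector, query), p.id))
        .collect();
    scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    scored.into_iter().take(top_k).map(|(_, id)| id).collect()
}
```

A real engine would use an index rather than a linear scan, but the shape is the same: the whole search lives inside the host application's address space, which is what makes deterministic latency on constrained hardware plausible.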
Qdrant initially stored its vectors in an underlying RocksDB key-value store, but suffered random latency spikes from RocksDB's background compaction and found tuning difficult because of the sheer number of configuration options. As a consequence, it developed its own Gridstore key-value store in Rust. This has a data layer that stores values in fixed-size blocks for fast lookups, a mask layer that tracks used and unused blocks without needing compaction, plus a gaps layer that manages space allocation. You can read more about Gridstore here.
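The three-layer design described above can be sketched in Rust as follows, under the assumption that blocks are fixed-size slots in a flat buffer. The names and structure here are hypothetical illustrations of the pattern, not Gridstore's actual code: a data layer of fixed-size blocks, a mask layer marking which blocks are live, and a gaps layer (a free list) that hands freed slots back out so deletes never trigger a compaction pass.

```rust
const BLOCK_SIZE: usize = 64; // illustrative block size

struct BlockStore {
    data: Vec<[u8; BLOCK_SIZE]>, // data layer: values in fixed-size blocks
    used: Vec<bool>,             // mask layer: which blocks hold live values
    gaps: Vec<usize>,            // gaps layer: freed block indices to reuse
}

impl BlockStore {
    fn new() -> Self {
        Self { data: Vec::new(), used: Vec::new(), gaps: Vec::new() }
    }

    /// Store a value (at most BLOCK_SIZE bytes) and return its block index,
    /// reusing a freed slot if one is available.
    fn put(&mut self, value: &[u8]) -> usize {
        let mut block = [0u8; BLOCK_SIZE];
        block[..value.len()].copy_from_slice(value);
        match self.gaps.pop() {
            Some(i) => {
                // Fill a previously freed slot in place: no compaction needed.
                self.data[i] = block;
                self.used[i] = true;
                i
            }
            None => {
                // No gaps available: append a fresh block.
                self.data.push(block);
                self.used.push(true);
                self.data.len() - 1
            }
        }
    }

    /// Delete a value: flip its mask bit and record the slot as a gap.
    fn delete(&mut self, i: usize) {
        if self.used[i] {
            self.used[i] = false;
            self.gaps.push(i);
        }
    }
}
```

Because deletes only flip a mask bit and push an index onto the free list, and writes reuse those gaps in place, the store avoids the stop-the-world rewrites that caused the RocksDB latency spikes.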
Qdrant says it’s already seeing early traction from robotics and mobile AI developers looking to deploy locally and get better performance than connecting to central or cloud vector databases, as well as from companies that need privacy-first AI at the edge.
We understand that, for kiosk developers in particular, a system that can answer customers' natural language queries on-device could be advantageous.
Qdrant’s Edge product is available now through a private beta. Teams building robotics, on-device assistants, or embedded inference pipelines can apply here.