Vector Database

April 23, 2025

How to implement Better Binary Quantization (BBQ) into your use case and why you should

Exploring why you would implement Better Binary Quantization (BBQ) in your use case and how to do it.
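As a rough sketch of what enabling BBQ can look like, a `dense_vector` field can opt into BBQ-backed HNSW through its `index_options` (field and index names here are placeholders, and the exact options available depend on your Elasticsearch version):

```
PUT my-bbq-index
{
  "mappings": {
    "properties": {
      "embedding": {
        "type": "dense_vector",
        "dims": 768,
        "index": true,
        "similarity": "cosine",
        "index_options": {
          "type": "bbq_hnsw"
        }
      }
    }
  }
}
```

Queries against the field use the standard `knn` search; the quantization is transparent at query time.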

Elasticsearch BBQ vs. OpenSearch FAISS: Vector search performance comparison

April 15, 2025

A performance comparison between Elasticsearch BBQ and OpenSearch FAISS.

Speeding up merging of HNSW graphs

Explore the work we’ve been doing to reduce the overhead of building multiple HNSW graphs, particularly reducing the cost of merging graphs.

Scaling late interaction models in Elasticsearch - part 2

This article explores techniques for making late interaction vectors ready for large-scale production workloads, such as reducing disk space usage and improving computation efficiency.

Exploring GPU-accelerated Vector Search in Elasticsearch with NVIDIA

Powered by NVIDIA cuVS, the collaboration looks to provide developers with GPU-acceleration for vector search in Elasticsearch.

Searching complex documents with ColPali - part 1

The article introduces the ColPali model, a late-interaction model that simplifies the process of searching complex documents with images and tables, and discusses its implementation in Elasticsearch.

Semantic Text: Simpler, better, leaner, stronger

March 13, 2025

Our latest semantic_text iteration brings a host of improvements. In addition to streamlining representation in _source, benefits include reduced verbosity, more efficient disk utilization, and better integration with other Elasticsearch features. You can now use highlighting to retrieve the chunks most relevant to your query. And perhaps best of all, it is now a generally available (GA) feature!
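To illustrate, a minimal `semantic_text` setup with semantic highlighting might look like the following (index and field names are placeholders; the field falls back to a default inference endpoint when `inference_id` is not specified, so check your deployment's configuration):

```
PUT my-index
{
  "mappings": {
    "properties": {
      "content": { "type": "semantic_text" }
    }
  }
}

POST my-index/_search
{
  "query": {
    "semantic": { "field": "content", "query": "how does vector quantization work?" }
  },
  "highlight": {
    "fields": {
      "content": { "type": "semantic", "number_of_fragments": 2 }
    }
  }
}
```

The `semantic` highlighter returns the chunks of the field most relevant to the query, rather than lexical snippets.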

Unifying Elastic vector database and LLM functions for intelligent query

Leverage LLM functions for query parsing and Elasticsearch search templates to translate complex user requests into structured, schema-based searches for highly accurate results.
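The search-template side of this pattern can be sketched as a stored Mustache template whose parameters an LLM function fills in from the user's request (the template name, fields, and parameters below are illustrative, not from the article):

```
PUT _scripts/product-search-template
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "bool": {
          "must": [
            { "match": { "description": "{{query_text}}" } }
          ],
          "filter": [
            { "range": { "price": { "lte": "{{max_price}}" } } }
          ]
        }
      }
    }
  }
}
```

A request like "affordable running shoes under $50" would then be parsed by the LLM into structured parameters (`query_text`, `max_price`) and executed via the `_search/template` endpoint.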

Semantic search, leveled up: now with native match, knn and sparse_vector support

Semantic text search becomes even more powerful, with native support for match, knn and sparse_vector queries. This allows us to keep the simplicity of the semantic query while offering the flexibility of the Elasticsearch query DSL.
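For example, assuming `content` is a `semantic_text` field, a plain `match` query can now target it directly (index and field names are placeholders):

```
POST my-index/_search
{
  "query": {
    "match": { "content": "vector quantization tradeoffs" }
  }
}
```

Under the hood the query is rewritten to run semantic retrieval against the field's inference endpoint, so the same query DSL works across lexical and semantic fields.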

Ready to build state of the art search experiences?

Sufficiently advanced search isn't achieved through the efforts of one person alone. Elasticsearch is powered by data scientists, ML Ops engineers, and many more who are just as passionate about search as you are. Let's connect and work together to build the magical search experience that will get you the results you want.

Try it yourself