If you swap the queries between the two examples above, using each one with the other's embedding, both will produce the wrong result. This demonstrates that each method has its strengths but also its weaknesses. Hybrid search combines the two, aiming to leverage the best of both worlds. By indexing data with both dense and sparse embeddings, we can perform searches that consider both semantic relevance and keyword matching, balancing the results with custom weights. Again, the internal implementation is more complicated, but langchain-milvus makes it fairly simple to use. Let's look at how this works:
vector_store = Milvus(
    embedding_function=[
        sparse_embedding,
        dense_embedding,
    ],
    connection_args={"uri": "./milvus_hybrid.db"},
    auto_id=True,
)
vector_store.add_texts(documents)
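The snippet above assumes that `sparse_embedding`, `dense_embedding`, and `documents` are already defined. As a hedged sketch, one possible setup could use langchain-milvus's BM25 utility for the sparse side and any LangChain embedding model for the dense side (the OpenAI model chosen here is an illustrative assumption, not prescribed by the library):

```python
# Sketch of one possible setup, assuming an OpenAI dense model;
# any LangChain embedding class could take its place.
from langchain_milvus.utils.sparse import BM25SparseEmbedding
from langchain_openai import OpenAIEmbeddings

documents = [
    "In Israel, Hot is a TV provider that broadcast 7 days a week",
    "Today was very warm during the day but cold at night",
]

# BM25 builds its term statistics from the corpus itself
sparse_embedding = BM25SparseEmbedding(corpus=documents)
dense_embedding = OpenAIEmbeddings(model="text-embedding-3-small")
```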
In this setup, both sparse and dense embeddings are applied. Let’s test the hybrid search with equal weighting:
query = "Does Hot cover weather changes during weekends?"
hybrid_output = vector_store.similarity_search(
    query=query,
    k=1,
    ranker_type="weighted",
    ranker_params={"weights": [0.49, 0.51]},  # Combine both results!
)
print(f"Hybrid search results:\n{hybrid_output[0].page_content}")

# output: Hybrid search results:
# In Israel, Hot is a TV provider that broadcast 7 days a week
This searches for similar results using each embedding function, gives each score a weight, and returns the result with the best weighted score. We can see that with slightly more weight to the dense embeddings, we get the result we desired. This is true for the second query as well.
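Conceptually, the weighted ranker's behavior can be sketched in a few lines of plain Python. This is a toy illustration with made-up, pre-normalized scores, not Milvus's actual implementation:

```python
def weighted_rank(sparse_scores, dense_scores, weights):
    """Rank documents by a weighted sum of their sparse and dense scores."""
    w_sparse, w_dense = weights
    combined = [w_sparse * s + w_dense * d
                for s, d in zip(sparse_scores, dense_scores)]
    # Indices of the documents, best combined score first
    return sorted(range(len(combined)), key=lambda i: combined[i], reverse=True)

# Doc 0 is the keyword match ("Hot" the TV provider), doc 1 the semantic
# (weather) match; the scores are invented and assumed normalized to [0, 1].
sparse_scores = [0.9, 0.1]
dense_scores = [0.2, 0.8]

print(weighted_rank(sparse_scores, dense_scores, [0.49, 0.51]))  # [0, 1]
print(weighted_rank(sparse_scores, dense_scores, [0.2, 0.8]))    # [1, 0]
```

With near-equal weights the strong sparse signal of doc 0 still wins, while shifting weight toward dense flips the ranking, mirroring the two queries above.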
If we give more weight to the dense embeddings, we once again get an irrelevant result, just as with the dense embeddings alone:
query = "When and where is Hot active?"
hybrid_output = vector_store.similarity_search(
    query=query,
    k=1,
    ranker_type="weighted",
    ranker_params={"weights": [0.2, 0.8]},  # Note -> the weights changed
)
print(f"Hybrid search results:\n{hybrid_output[0].page_content}")

# output: Hybrid search results:
# Today was very warm during the day but cold at night
Finding the right balance between dense and sparse is not a trivial task, and can be seen as part of a wider hyperparameter optimization problem. There are ongoing research efforts and tools that aim to solve such issues in this area, for example IBM's AutoAI for RAG.
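To make the hyperparameter framing concrete, here is a minimal sketch of tuning the weights by grid search over a labeled query set. The scores and relevance labels are fabricated for illustration; a real setup would measure retrieval quality (e.g., recall@k) on held-out queries against the actual vector store:

```python
def accuracy(weights, queries):
    """Fraction of queries whose top-ranked doc is the relevant one."""
    w_sparse, w_dense = weights
    hits = 0
    for sparse_scores, dense_scores, relevant_idx in queries:
        combined = [w_sparse * s + w_dense * d
                    for s, d in zip(sparse_scores, dense_scores)]
        if max(range(len(combined)), key=lambda i: combined[i]) == relevant_idx:
            hits += 1
    return hits / len(queries)

# Each entry: (per-doc sparse scores, per-doc dense scores, relevant doc index).
queries = [
    ([0.9, 0.1], [0.2, 0.8], 0),  # keyword-style query: sparse signal should win
    ([0.3, 0.4], [0.1, 0.9], 1),  # semantic query: dense signal should win
]

# Grid search over weight pairs that sum to 1
candidates = [(w / 10, 1 - w / 10) for w in range(11)]
best = max(candidates, key=lambda w: accuracy(w, queries))
```

The same idea scales to finer grids or to generic optimizers once the evaluation function is in place.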
There are many more ways you can adapt and use the hybrid search approach. For instance, if each document has an associated title, you could use two dense embedding functions (possibly with different models) — one for the title and another for the document content — and perform a hybrid search over both indices. Milvus currently supports up to 10 different vector fields, providing flexibility for complex applications. There are also additional configurations for indexing and reranking methods. See the Milvus documentation for the available parameters and options.