Releases: deepset-ai/haystack
v1.26.1
Release Notes
v1.26.1
🚀 New Features
- Added back the previously removed fetch_archive_from_http util function to fetch zip and gzip archives from a URL
v1.26.0
Release Notes
v1.26.0
Prelude
We are announcing that Haystack 1.26 is the final minor release for Haystack 1.x. Although we will continue to release bug fixes for this version, we will neither be adding nor removing any functionalities. Instead, we will focus our efforts on Haystack 2.x. Haystack 1.26 will reach its end-of-life on March 11, 2025.
The utility functions fetch_archive_from_http, build_pipeline and add_example_data were removed from Haystack.
This release changes the PDFToTextConverter so that it no longer supports PyMuPDF. The converter now always uses xpdf.
⬆️ Upgrade Notes
- We recommend replacing calls to the fetch_archive_from_http function with other tools available in Python or in your operating system.
- To keep using PyMuPDF, you must create a custom node; you can use the previous Haystack version for inspiration (a minimal sketch follows this list).
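For reference, here is a minimal sketch of such a custom node, assuming the pymupdf package (imported as fitz) is installed. The class name and metadata handling are illustrative, not the removed implementation:

```python
# A minimal sketch of a custom PyMuPDF converter node for Haystack 1.x.
from typing import List

import fitz  # PyMuPDF

from haystack.nodes.base import BaseComponent
from haystack.schema import Document


class PyMuPDFToTextConverter(BaseComponent):
    outgoing_edges = 1

    def run(self, file_paths: List[str]):  # type: ignore[override]
        documents = []
        for path in file_paths:
            with fitz.open(path) as pdf:
                # Join pages with form feeds so split_by="page" keeps working.
                text = "\f".join(page.get_text() for page in pdf)
            documents.append(Document(content=text, meta={"name": str(path)}))
        return {"documents": documents}, "output_1"

    def run_batch(self, file_paths: List[str]):  # type: ignore[override]
        return self.run(file_paths=file_paths)
```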
⚡️ Enhancement Notes
- Added a raise_on_failure flag to the BaseConverter class so that long-running processes can optionally continue without breaking on exceptions.
- Support for Llama3 models on AWS Bedrock.
- Support for MistralAI and new Claude 3 models on AWS Bedrock.
- Upgrade Transformers to the latest version 4.37.2. This version adds support for the Phi-2 and Qwen2 models and improves support for quantization.
- Upgrade transformers to version 4.39.3 so that Haystack can support the new Cohere Command R models.
- Add support for the latest OpenAI embedding models text-embedding-3-large and text-embedding-3-small.
- API_BASE can now be passed as an optional parameter in the getting_started sample. Only the openai provider is supported in this set of changes. PromptNode and PromptModel were enhanced to allow passing of this parameter. This allows RAG against a local endpoint (e.g., http://localhost:1234/v1), as long as it is OpenAI compatible (such as LM Studio). Logging in the getting started sample was made more verbose, to make it easier for people to see what is happening under the covers.
- Added the new option split_by="page" to the preprocessor so documents can be chunked by page break (a minimal sketch follows this list).
- Review and update context windows for OpenAI GPT models.
- Support gated repos for Hugging Face inference.
- Add a check to verify that the embedding dimensions set in the FAISS Document Store and the retriever are equal before running embedding calculations.
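Here is a minimal sketch of the new split_by="page" option on the 1.x PreProcessor; the document content is illustrative and assumes pages are separated by form feeds ("\f"):

```python
# A minimal sketch of the new split_by="page" PreProcessor option.
from haystack.nodes import PreProcessor
from haystack.schema import Document

preprocessor = PreProcessor(
    split_by="page",  # the new option added in this release
    split_length=1,   # one page per output document
    split_respect_sentence_boundary=False,
)

doc = Document(content="Page one.\fPage two.\fPage three.")
chunks = preprocessor.process([doc])
print(len(chunks))  # -> 3, one Document per page break
```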
🐛 Bug Fixes
- Fixed a pipeline run error that occurred when using the FileTypeClassifier with the raise_on_error: True option. Instead of returning an unexpected NoneType, the file is now routed to a dead-end edge.
- Ensure that crawled files are downloaded to the output_dir directory, as specified in the Crawler constructor. Previously, some files were incorrectly downloaded to the current working directory.
- Fixed the SearchEngineDocumentStore.get_metadata_values_by_key method to make use of self.index if no index is provided.
- Fixed OutputParser usage in PromptTemplate after the invocation context was made immutable in #7510.
- When using a Pipeline with a JoinNode (e.g. JoinDocuments), all information from the previous nodes was lost other than a few select fields (e.g. documents), because the JoinNode did not properly pass on the information from the previous nodes. This has been fixed: all information from the previous nodes is now passed on to the next node in the pipeline.
  For example, the pipeline below rewrites the query during pipeline execution, combined with a hybrid retrieval setup that requires a JoinDocuments node. The first prompt node rewrites the query to fix all spelling errors, and this new query is used for retrieval. The JoinDocuments node now passes on the rewritten query so it can be used by the QAPromptNode, whereas before it passed on the original query.
```python
from haystack import Pipeline
from haystack.nodes import BM25Retriever, EmbeddingRetriever, PromptNode, Shaper, JoinDocuments, PromptTemplate
from haystack.document_stores import InMemoryDocumentStore

document_store = InMemoryDocumentStore(use_bm25=True)
dicts = [{"content": "The capital of Germany is Berlin."}, {"content": "The capital of France is Paris."}]
document_store.write_documents(dicts)

query_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",
    default_prompt_template=PromptTemplate("You are a spell checker. Given a user query return the same query with all spelling errors fixed.\nUser Query: {query}\nSpell Checked Query:")
)
shaper = Shaper(
    func="join_strings",
    inputs={"strings": "results"},
    outputs=["query"],
)
qa_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",
    default_prompt_template=PromptTemplate("Answer the user query. Query: {query}")
)
sparse_retriever = BM25Retriever(
    document_store=document_store,
    top_k=2
)
dense_retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="intfloat/e5-base-v2",
    model_format="sentence_transformers",
    top_k=2
)
document_store.update_embeddings(dense_retriever)

pipeline = Pipeline()
pipeline.add_node(component=query_prompt_node, name="QueryPromptNode", inputs=["Query"])
pipeline.add_node(component=shaper, name="ListToString", inputs=["QueryPromptNode"])
pipeline.add_node(component=sparse_retriever, name="BM25", inputs=["ListToString"])
pipeline.add_node(component=dense_retriever, name="Embedding", inputs=["ListToString"])
pipeline.add_node(
    component=JoinDocuments(join_mode="concatenate"), name="Join", inputs=["BM25", "Embedding"]
)
pipeline.add_node(component=qa_prompt_node, name="QAPromptNode", inputs=["Join"])

out = pipeline.run(query="What is the captial of Grmny?", debug=True)
print(out["invocation_context"])

# Before Fix
# {'query': 'What is the captial of Grmny?', <-- Original Query!!
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the captial of Grmny?'], <-- Original Query!!

# After Fix
# {'query': 'What is the capital of Germany?', <-- Rewritten Query!!
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the capital of Germany?'], <-- Rewritten Query!!
```
- When passing empty inputs (such as query="") to PromptNode, the node would raise an error. This has been fixed.
- Change the dummy vector used internally in the Pinecone Document Store. A recent change to the Pinecone API no longer allows vectors filled with zeros, which was the previous dummy vector.
- The types of metadata values accepted by RouteDocuments were unnecessarily restricted to string types. This caused validation errors (for example, when loading from a YAML file) if a user tried to use, for example, a boolean type. Boolean and int are now also valid types for metadata_values (a minimal sketch follows this list).
- Fixed a bug that made it impossible to write Documents to Weaviate when some of the fields were empty lists (e.g. split_overlap for preprocessed documents).
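Here is a minimal sketch of the RouteDocuments change above: boolean (and int) values are now accepted in metadata_values. The meta field name is illustrative:

```python
# A minimal sketch of routing documents on a boolean meta field.
from haystack.nodes import RouteDocuments

router = RouteDocuments(
    split_by="is_public",           # route on a custom meta field
    metadata_values=[True, False],  # previously only strings validated
)
```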
v2.2.0
Release Notes
v2.2.0
Highlights
The Multiplexer component proved to be hard to explain and to understand. After reviewing its use cases, the documentation was rewritten and the component was renamed to BranchJoiner to better explain its functionalities.
Added the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables to the OpenAI components.
⬆️ Upgrade Notes
- BranchJoiner has the very same interface as Multiplexer. To upgrade your code, just rename any occurrence of Multiplexer to BranchJoiner and adjust the imports accordingly, as in the sketch below.
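A minimal sketch of the rename; the connection type is illustrative:

```python
# Before (Haystack 2.1.x):
#   from haystack.components.others import Multiplexer
#   joiner = Multiplexer(str)
# After (Haystack 2.2.0):
from haystack.components.joiners import BranchJoiner

joiner = BranchJoiner(str)  # same interface, new name
```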
🚀 New Features
- Add BranchJoiner to eventually replace Multiplexer
- AzureOpenAIGenerator and AzureOpenAIChatGenerator can now be configured passing a timeout for the underlying AzureOpenAI client.
⚡️ Enhancement Notes
- ChatPromptBuilder now supports changing its template at runtime. This allows you to define a default template and then change it based on your needs at runtime.
- If an LLM-based evaluator (e.g., Faithfulness or ContextRelevance) is initialised with raise_on_failure=False and a call to an LLM fails or an LLM outputs invalid JSON, the score of the sample is set to NaN instead of raising an exception. The user is notified with a warning indicating the number of requests that failed.
- Adds inference mode to the model call of the ExtractiveReader. This prevents gradients from being calculated during inference time in PyTorch.
- The DocumentCleaner class has the optional attribute keep_id which, if set to True, keeps the document ids unchanged after cleanup.
- DocumentSplitter now has an optional split_threshold parameter. Use this parameter if you would rather not split inputs that are only slightly longer than the allowed split_length. If, when chunking, one of the chunks is smaller than the split_threshold, the chunk is concatenated with the previous one. This avoids chunks that are too small to be meaningful.
- Re-implement InMemoryDocumentStore BM25 search with incremental indexing by avoiding re-creating the entire inverse index for every new query. This change also removes the dependency on haystack_bm25. Please refer to [PR #7549](#7549) for the full context.
- Improved MIME type management by directly setting MIME types on ByteStreams, enhancing the overall handling and routing of different file types. This update makes MIME type data more consistently accessible and simplifies the process of working with various document formats.
- PromptBuilder now supports changing its template at runtime (e.g. for prompt engineering). This allows you to define a default template and then change it based on your needs at runtime.
- Now you can set the timeout and max_retries parameters on OpenAI components by setting the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables or passing them at __init__ (a minimal sketch follows this list).
- The DocumentJoiner component's run method now accepts a top_k parameter, allowing users to specify the maximum number of documents to return at query time. This fixes issue #7702.
- Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output. This ensures that the output is always in a consistent format, regardless of the input.
- Make warm_up() usage consistent across the codebase.
- Create a class hierarchy for pipeline classes, and move the run logic into the child class. Preparation work for introducing multiple run strategies.
- Make the SerperDevWebSearch more robust when snippet is not present in the request response.
- Make SparseEmbedding a dataclass; this makes it easier to use the class with Pydantic.
- `HTMLToDocument`: change the HTML conversion backend from boilerpy3 to trafilatura, which is more robust and better maintained.
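Here is a minimal sketch of both ways to configure OpenAI timeouts and retries, per the enhancement note above; the values are illustrative:

```python
import os

from haystack.components.generators import OpenAIGenerator

# Option 1: environment variables read by OpenAI components
os.environ["OPENAI_TIMEOUT"] = "30"
os.environ["OPENAI_MAX_RETRIES"] = "5"

# Option 2: pass them explicitly at __init__
generator = OpenAIGenerator(timeout=30.0, max_retries=5)
```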
⚠️ Deprecation Notes
- Multiplexer is now deprecated.
- DynamicChatPromptBuilder has been deprecated as ChatPromptBuilder fully covers its functionality. Use ChatPromptBuilder instead.
- DynamicPromptBuilder has been deprecated as PromptBuilder fully covers its functionality. Use PromptBuilder instead.
- The following parameters of HTMLToDocument are ignored and will be removed in Haystack 2.4.0: extractor_type and try_others.
🐛 Bug Fixes
- FaithfulnessEvaluator and ContextRelevanceEvaluator now return 0 instead of NaN when applied to an empty context or empty statements.
- Fixed the Azure generator components; they were missing the @component decorator.
- Updates the from_dict method of SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder, NamedEntityExtractor, SentenceTransformersDiversityRanker and LocalWhisperTranscriber to allow None as a valid value for device when deserializing from a YAML file. This allows a deserialized pipeline to auto-determine what device to use using the ComponentDevice.resolve_device logic.
- Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
- Fix NamedEntityExtractor crashing in Python 3.12 if constructed using a string backend argument.
- Fixed the PdfMinerToDocument converter's outputs to be properly wired up to 'documents'.
- Add to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.
- Improves/fixes type serialization of PEP 585 types (e.g. list[Document], and their nested version). This improvement enables better serialization of generics and nested types and improves/fixes matching of list[X] and List[X] types in component connections after serialization.
- Fixed (de)serialization of NamedEntityExtractor. Includes updated tests verifying these fixes when NamedEntityExtractor is used in pipelines.
- The include_outputs_from parameter in Pipeline.run correctly returns outputs of components with multiple outputs.
- Return an empty list of answers when ExtractiveReader receives an empty list of documents instead of raising an exception.
v1.26.0-rc1
Release Notes
v1.26.0-rc1
Prelude
The utility functions fetch_archive_from_http, build_pipeline and add_example_data were removed from Haystack.
This release changes the PDFToTextConverter so that it no longer supports PyMuPDF. The converter now always uses xpdf.
⬆️ Upgrade Notes
- We recommend replacing calls to the fetch_archive_from_http function with other tools available in Python or in your operating system.
- To keep using PyMuPDF, you must create a custom node; you can use the previous Haystack version for inspiration.
⚡️ Enhancement Notes
- Support for Llama3 models on AWS Bedrock.
- Support for MistralAI and new Claude 3 models on AWS Bedrock.
- Upgrade transformers to version 4.39.3 so that Haystack can support the new Cohere Command R models.
- Review and update context windows for OpenAI GPT models.
- Support gated repos for Hugging Face inference.
- Add a check to verify that the embedding dimensions set in the FAISS Document Store and the retriever are equal before running embedding calculations.
🐛 Bug Fixes
- Fixed a pipeline run error that occurred when using the FileTypeClassifier with the raise_on_error: True option. Instead of returning an unexpected NoneType, the file is now routed to a dead-end edge.
- Ensure that crawled files are downloaded to the output_dir directory, as specified in the Crawler constructor. Previously, some files were incorrectly downloaded to the current working directory.
- Fixed the SearchEngineDocumentStore.get_metadata_values_by_key method to make use of self.index if no index is provided.
- Fixed OutputParser usage in PromptTemplate after the invocation context was made immutable in #7510.
- When using a Pipeline with a JoinNode (e.g. JoinDocuments), all information from the previous nodes was lost other than a few select fields (e.g. documents), because the JoinNode did not properly pass on the information from the previous nodes. This has been fixed: all information from the previous nodes is now passed on to the next node in the pipeline.
  For example, the pipeline below rewrites the query during pipeline execution, combined with a hybrid retrieval setup that requires a JoinDocuments node. The first prompt node rewrites the query to fix all spelling errors, and this new query is used for retrieval. The JoinDocuments node now passes on the rewritten query so it can be used by the QAPromptNode, whereas before it passed on the original query.
```python
from haystack import Pipeline
from haystack.nodes import BM25Retriever, EmbeddingRetriever, PromptNode, Shaper, JoinDocuments, PromptTemplate
from haystack.document_stores import InMemoryDocumentStore

document_store = InMemoryDocumentStore(use_bm25=True)
dicts = [{"content": "The capital of Germany is Berlin."}, {"content": "The capital of France is Paris."}]
document_store.write_documents(dicts)

query_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",
    default_prompt_template=PromptTemplate("You are a spell checker. Given a user query return the same query with all spelling errors fixed.\nUser Query: {query}\nSpell Checked Query:")
)
shaper = Shaper(
    func="join_strings",
    inputs={"strings": "results"},
    outputs=["query"],
)
qa_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",
    default_prompt_template=PromptTemplate("Answer the user query. Query: {query}")
)
sparse_retriever = BM25Retriever(
    document_store=document_store,
    top_k=2
)
dense_retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="intfloat/e5-base-v2",
    model_format="sentence_transformers",
    top_k=2
)
document_store.update_embeddings(dense_retriever)

pipeline = Pipeline()
pipeline.add_node(component=query_prompt_node, name="QueryPromptNode", inputs=["Query"])
pipeline.add_node(component=shaper, name="ListToString", inputs=["QueryPromptNode"])
pipeline.add_node(component=sparse_retriever, name="BM25", inputs=["ListToString"])
pipeline.add_node(component=dense_retriever, name="Embedding", inputs=["ListToString"])
pipeline.add_node(
    component=JoinDocuments(join_mode="concatenate"), name="Join", inputs=["BM25", "Embedding"]
)
pipeline.add_node(component=qa_prompt_node, name="QAPromptNode", inputs=["Join"])

out = pipeline.run(query="What is the captial of Grmny?", debug=True)
print(out["invocation_context"])

# Before Fix
# {'query': 'What is the captial of Grmny?', <-- Original Query!!
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the captial of Grmny?'], <-- Original Query!!

# After Fix
# {'query': 'What is the capital of Germany?', <-- Rewritten Query!!
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the capital of Germany?'], <-- Rewritten Query!!
```
- When passing empty inputs (such as query="") to PromptNode, the node would raise an error. This has been fixed.
v1.26.0-rc0
⚡️ Enhancement Notes
- Added a raise_on_failure flag to the BaseConverter class so that long-running processes can optionally continue without breaking on exceptions.
- Upgrade Transformers to the latest version 4.37.2. This version adds support for the Phi-2 and Qwen2 models and improves support for quantization.
- Add support for the latest OpenAI embedding models text-embedding-3-large and text-embedding-3-small.
- API_BASE can now be passed as an optional parameter in the getting_started sample. Only the openai provider is supported in this set of changes. PromptNode and PromptModel were enhanced to allow passing of this parameter. This allows RAG against a local endpoint (e.g., http://localhost:1234/v1), as long as it is OpenAI compatible (such as LM Studio). Logging in the getting started sample was made more verbose, to make it easier for people to see what is happening under the covers.
- Added the new option split_by="page" to the preprocessor so documents can be chunked by page break.
🐛 Bug Fixes
- Change the dummy vector used internally in the Pinecone Document Store. A recent change to the Pinecone API no longer allows vectors filled with zeros, which was the previous dummy vector.
- The types of metadata values accepted by RouteDocuments were unnecessarily restricted to string types. This caused validation errors (for example, when loading from a YAML file) if a user tried to use, for example, a boolean type. Boolean and int are now also valid types for metadata_values.
- Fixed a bug that made it impossible to write Documents to Weaviate when some of the fields were empty lists (e.g. split_overlap for preprocessed documents).
v2.2.0-rc2
Release Notes
v2.2.0-rc1
Highlights
The Multiplexer component proved to be hard to explain and to understand. After reviewing its use cases, the documentation was rewritten and the component was renamed to BranchJoiner to better explain its functionalities.
Added the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables to the OpenAI components.
⬆️ Upgrade Notes
- BranchJoiner has the very same interface as Multiplexer. To upgrade your code, just rename any occurrence of Multiplexer to BranchJoiner and adjust the imports accordingly.
🚀 New Features
- Add BranchJoiner to eventually replace Multiplexer
- AzureOpenAIGenerator and AzureOpenAIChatGenerator can now be configured passing a timeout for the underlying AzureOpenAI client.
⚡️ Enhancement Notes
- ChatPromptBuilder now supports changing its template at runtime. This allows you to define a default template and then change it based on your needs at runtime.
- If an LLM-based evaluator (e.g., Faithfulness or ContextRelevance) is initialised with raise_on_failure=False and a call to an LLM fails or an LLM outputs invalid JSON, the score of the sample is set to NaN instead of raising an exception. The user is notified with a warning indicating the number of requests that failed.
- Adds inference mode to the model call of the ExtractiveReader. This prevents gradients from being calculated during inference time in PyTorch.
- The DocumentCleaner class has the optional attribute keep_id which, if set to True, keeps the document ids unchanged after cleanup.
- DocumentSplitter now has an optional split_threshold parameter. Use this parameter if you would rather not split inputs that are only slightly longer than the allowed split_length. If, when chunking, one of the chunks is smaller than the split_threshold, the chunk is concatenated with the previous one. This avoids chunks that are too small to be meaningful.
- Re-implement InMemoryDocumentStore BM25 search with incremental indexing by avoiding re-creating the entire inverse index for every new query. This change also removes the dependency on haystack_bm25. Please refer to [PR #7549](#7549) for the full context.
- Improved MIME type management by directly setting MIME types on ByteStreams, enhancing the overall handling and routing of different file types. This update makes MIME type data more consistently accessible and simplifies the process of working with various document formats.
- PromptBuilder now supports changing its template at runtime (e.g. for prompt engineering). This allows you to define a default template and then change it based on your needs at runtime.
- Now you can set the timeout and max_retries parameters on OpenAI components by setting the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables or passing them at __init__.
- The DocumentJoiner component's run method now accepts a top_k parameter, allowing users to specify the maximum number of documents to return at query time. This fixes issue #7702.
- Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output. This ensures that the output is always in a consistent format, regardless of the input.
- Make warm_up() usage consistent across the codebase.
- Create a class hierarchy for pipeline classes, and move the run logic into the child class. Preparation work for introducing multiple run strategies.
- Make the SerperDevWebSearch more robust when snippet is not present in the request response.
- Make SparseEmbedding a dataclass; this makes it easier to use the class with Pydantic.
- `HTMLToDocument`: change the HTML conversion backend from boilerpy3 to trafilatura, which is more robust and better maintained.
⚠️ Deprecation Notes
- Multiplexer is now deprecated.
- DynamicChatPromptBuilder has been deprecated as ChatPromptBuilder fully covers its functionality. Use ChatPromptBuilder instead.
- DynamicPromptBuilder has been deprecated as PromptBuilder fully covers its functionality. Use PromptBuilder instead.
- The following parameters of HTMLToDocument are ignored and will be removed in Haystack 2.4.0: extractor_type and try_others.
🐛 Bug Fixes
- FaithfulnessEvaluator and ContextRelevanceEvaluator now return 0 instead of NaN when applied to an empty context or empty statements.
- Fixed the Azure generator components; they were missing the @component decorator.
- Updates the from_dict method of SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder, NamedEntityExtractor, SentenceTransformersDiversityRanker and LocalWhisperTranscriber to allow None as a valid value for device when deserializing from a YAML file. This allows a deserialized pipeline to auto-determine what device to use using the ComponentDevice.resolve_device logic.
- Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
- Fix NamedEntityExtractor crashing in Python 3.12 if constructed using a string backend argument.
- Fixed the PdfMinerToDocument converter's outputs to be properly wired up to 'documents'.
- Add to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.
- Improves/fixes type serialization of PEP 585 types (e.g. list[Document], and their nested version). This improvement enables better serialization of generics and nested types and improves/fixes matching of list[X] and List[X] types in component connections after serialization.
- Fixed (de)serialization of NamedEntityExtractor. Includes updated tests verifying these fixes when NamedEntityExtractor is used in pipelines.
- The include_outputs_from parameter in Pipeline.run correctly returns outputs of components with multiple outputs.
- Return an empty list of answers when ExtractiveReader receives an empty list of documents instead of raising an exception.
v2.2.0-rc1
v2.1.2
Release Notes
v2.1.2
⚡️ Enhancement Notes
- Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output. This ensures that the output is always in a consistent format, regardless of the input.
🐛 Bug Fixes
- FaithfulnessEvaluator and ContextRelevanceEvaluator now return 0 instead of NaN when applied to an empty context or empty statements.
- Fixed the Azure generator components; they were missing the @component decorator.
- Updates the from_dict method of SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder, NamedEntityExtractor, SentenceTransformersDiversityRanker and LocalWhisperTranscriber to allow None as a valid value for device when deserializing from a YAML file. This allows a deserialized pipeline to auto-determine what device to use using the ComponentDevice.resolve_device logic.
- Improves/fixes type serialization of PEP 585 types (e.g. list[Document], and their nested version). This improvement enables better serialization of generics and nested types and improves/fixes matching of list[X] and List[X] types in component connections after serialization.
- Fixed (de)serialization of NamedEntityExtractor. Includes updated tests verifying these fixes when NamedEntityExtractor is used in pipelines.
- The include_outputs_from parameter in Pipeline.run correctly returns outputs of components with multiple outputs.
v2.1.1-rc1
Release Notes
v2.1.1-rc1
⚡️ Enhancement Notes
- Make SparseEmbedding a dataclass; this makes it easier to use the class with Pydantic.
🐛 Bug Fixes
- Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
- Add a to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.
v2.1.1
Release Notes
v2.1.1
⚡️ Enhancement Notes
- Make SparseEmbedding a dataclass; this makes it easier to use the class with Pydantic.
🐛 Bug Fixes
- Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
- Add a to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.
v2.1.0-rc2
Release Notes
Highlights
📊 New Evaluator Components
Haystack introduces new components for both model-based and statistical evaluation: AnswerExactMatchEvaluator, ContextRelevanceEvaluator, DocumentMAPEvaluator, DocumentMRREvaluator, DocumentRecallEvaluator, FaithfulnessEvaluator, LLMEvaluator, SASEvaluator.
Here's an example of how to use DocumentMAPEvaluator to evaluate retrieved documents and calculate the mean average precision score:
```python
from haystack import Document
from haystack.components.evaluators import DocumentMAPEvaluator

evaluator = DocumentMAPEvaluator()
result = evaluator.run(
    ground_truth_documents=[
        [Document(content="France")],
        [Document(content="9th century"), Document(content="9th")],
    ],
    retrieved_documents=[
        [Document(content="France")],
        [Document(content="9th century"), Document(content="10th century"), Document(content="9th")],
    ],
)

print(result["individual_scores"])
# [1.0, 0.8333333333333333]
print(result["score"])
# 0.9166666666666666
```
To learn more about evaluating RAG pipelines with both the model-based and statistical metrics available in Haystack, check out Tutorial: Evaluating RAG Pipelines.
🕸️ Support For Sparse Embeddings
Haystack offers robust support for Sparse Embedding Retrieval techniques, including SPLADE. Here's how to create a simple retrieval Pipeline with sparse embeddings:
```python
from haystack import Pipeline
from haystack_integrations.components.retrievers.qdrant import QdrantSparseEmbeddingRetriever
from haystack_integrations.components.embedders.fastembed import FastembedSparseTextEmbedder

# assumes an existing Qdrant `document_store` set up for sparse embeddings
sparse_text_embedder = FastembedSparseTextEmbedder(model="prithvida/Splade_PP_en_v1")
sparse_retriever = QdrantSparseEmbeddingRetriever(document_store=document_store)

query_pipeline = Pipeline()
query_pipeline.add_component("sparse_text_embedder", sparse_text_embedder)
query_pipeline.add_component("sparse_retriever", sparse_retriever)
query_pipeline.connect("sparse_text_embedder.sparse_embedding", "sparse_retriever.query_sparse_embedding")
```
Learn more about this topic in our documentation on Sparse Embedding-based Retrievers.
Start building with our new cookbook: 🧑🍳 Sparse Embedding Retrieval using Qdrant and FastEmbed.
🧐 Inspect Component Outputs
As of 2.1.0, you can inspect each component's output after running a pipeline. Provide the component names with the include_outputs_from key to pipeline.run:
```python
pipe.run(data, include_outputs_from=["prompt_builder", "llm", "retriever"])
```
And the pipeline output should look like this:
```python
{'llm': {'replies': ['The Rhodes Statue was described as being built with iron tie bars to which brass plates were fixed to form the skin. It stood on a 15-meter-high white marble pedestal near the Rhodes harbor entrance. The statue itself was about 70 cubits, or 32 meters, tall.'],
         'meta': [{'model': 'gpt-3.5-turbo-0125',
                   ...
                   'usage': {'completion_tokens': 57,
                             'prompt_tokens': 446,
                             'total_tokens': 503}}]},
 'retriever': {'documents': [Document(id=a3ee3a9a55b47ff651ae11dc56d84d2b6f8d931b795bd866c14eacfa56000965, content: 'Within it, too, are to be seen large masses of rock, by the weight of which the artist steadied it w...', meta: {'url': 'https://en.wikipedia.org/wiki/Colossus_of_Rhodes', '_split_id': 9}, score: 0.648961685430463),...]},
 'prompt_builder': {'prompt': "\nGiven the following information, answer the question.\n\nContext:\n\n Within it, too, are to be seen large masses of rock, by the weight of which the artist steadied it while...
 ... levels during construction.\n\n\n\nQuestion: What does Rhodes Statue look like?\nAnswer:"}}
```
🚀 New Features
- Add several new Evaluation components, i.e.: AnswerExactMatchEvaluator, ContextRelevanceEvaluator, DocumentMAPEvaluator, DocumentMRREvaluator, DocumentRecallEvaluator, FaithfulnessEvaluator, LLMEvaluator, SASEvaluator.
- Introduce a new SparseEmbedding class that can store a sparse vector representation of a document. It will be instrumental in supporting sparse embedding retrieval with the subsequent introduction of sparse embedders and sparse embedding retrievers.
- Added a SentenceTransformersDiversityRanker. The diversity ranker orders documents to maximize their overall diversity. The ranker leverages sentence-transformer models to calculate semantic embeddings for each document and the query.
- Introduced new HuggingFace API components (a usage sketch follows this list), namely:
  - HuggingFaceAPIChatGenerator, which will replace the HuggingFaceTGIChatGenerator in the future.
  - HuggingFaceAPIDocumentEmbedder, which will replace the HuggingFaceTEIDocumentEmbedder in the future.
  - HuggingFaceAPIGenerator, which will replace the HuggingFaceTGIGenerator in the future.
  - HuggingFaceAPITextEmbedder, which will replace the HuggingFaceTEITextEmbedder in the future.
  These components support different Hugging Face APIs: the free Serverless Inference API, paid Inference Endpoints, and self-hosted Text Generation Inference.
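Here is a minimal sketch of one of the new components against the free Serverless Inference API; the model name is illustrative and an HF_API_TOKEN environment variable is assumed:

```python
from haystack.components.generators import HuggingFaceAPIGenerator
from haystack.utils import Secret

generator = HuggingFaceAPIGenerator(
    api_type="serverless_inference_api",
    api_params={"model": "HuggingFaceH4/zephyr-7b-beta"},
    token=Secret.from_env_var("HF_API_TOKEN"),
)

result = generator.run(prompt="What is retrieval-augmented generation?")
print(result["replies"][0])
```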
⚡️ Enhancement Notes
- Compatibility with huggingface_hub>=0.22.0 for the HuggingFaceTGIGenerator and HuggingFaceTGIChatGenerator components.
- Adds truncate and normalize parameters to HuggingFaceTEITextEmbedder and HuggingFaceTEIDocumentEmbedder to allow truncation and normalization of embeddings.
- Adds a trust_remote_code parameter to SentenceTransformersDocumentEmbedder and SentenceTransformersTextEmbedder for allowing custom models and scripts.
- Adds a streaming_callback parameter to HuggingFaceLocalGenerator, allowing users to handle streaming responses.
- Adds a ZeroShotTextRouter that uses an NLI model from HuggingFace to classify texts based on a set of provided labels and routes them based on the label they were classified with.
- Adds a dimensions parameter to the Azure OpenAI Embedders (AzureOpenAITextEmbedder and AzureOpenAIDocumentEmbedder) to fully support new embedding models like text-embedding-3-small, text-embedding-3-large and upcoming ones.
- Now the DocumentSplitter adds the page_number field to the metadata of all output documents to keep track of the page of the original document it belongs to.
- Allows users to customise text extraction from PDF files. This is particularly useful for PDFs with unusual layouts, such as those with multiple text columns. For instance, users can configure the object to retain the reading order.
- Enhanced PromptBuilder to specify and enforce required variables in prompt templates (a minimal sketch follows this list).
- Set the max_new_tokens default to 512 in HuggingFace generators.
- Enhanced the AzureOCRDocumentConverter to include advanced handling of tables and text. Features such as extracting preceding and following context for tables, merging multiple column headers, and enabling single-column page layout for text have been introduced. This update furthers the flexibility and accuracy of document conversion within complex layouts.
- Enhanced DynamicChatPromptBuilder's capabilities by allowing all user and system messages to be templated with provided variables. This update ensures a more versatile and dynamic templating process, making chat prompt generation more efficient and customised to user needs.
- Improved HTML content extraction by attempting to use multiple extractors in order of priority until one succeeds. An additional try_others parameter in HTMLToDocument, True by default, determines whether subsequent extractors are used after a failure. This enhancement decreases extraction failures, ensuring more dependable content retrieval.
- Enhanced FileTypeRouter with regex pattern support for MIME types. This powerful addition allows for more granular control and flexibility in routing files based on their MIME types, enabling the handling of broad categories or specific MIME type patterns with ease. This feature particularly benefits applications requiring sophisticated file classification and routing logic.
- In Jupyter notebooks, the image of the Pipeline will no longer be displayed automatically. Instead, the textual representation of the Pipeline will be displayed. To display the Pipeline image, use the show method of the Pipeline object.
- Add support for callbacks during pipeline deserialization. Currently supports a pre-init hook for components that can be used to inspect and modify the initialization parameters before the invocation of the component's __init__ method.
- pipeline.run() accepts a set of component names whose intermediate outputs are returned in the final pipeline output dictionary.
- Refactor PyPDFToDocument to simplify support for custom PDF converters. PDF converters are classes that implement the PyPDFConverter protocol and have 3 methods: convert, to_dict and from_dict.
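A minimal sketch of enforcing required variables in PromptBuilder, per the enhancement above; the template and variable names are illustrative:

```python
from haystack.components.builders import PromptBuilder

builder = PromptBuilder(
    template="Given {{ context }}, answer: {{ question }}",
    required_variables=["context", "question"],
)

# Omitting a required variable now raises an error instead of silently
# rendering an incomplete prompt:
builder.run(question="What is Haystack?")  # raises ValueError
```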
⚠️ Deprecation Notes
- Deprecate HuggingFaceTGIChatGenerator; it will be removed in Haystack 2.3.0. Use HuggingFaceAPIChatGenerator instead.
- Deprecate HuggingFaceTEIDocumentEmbedder...