DOC: Could load GGUF models from https #14514
🤖 Hello! It seems like you're experiencing an issue with the current documentation for LangChain. I'd be happy to help you with that. Could you please provide more details about the problem you're encountering when trying to load GGUF models from HTTPS? Additionally, if you have any specific ideas or requests for content, feel free to share them so that I can assist you more effectively.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot
🤖 Based on the provided context, the LangChain framework does support loading models from HuggingFace Hub and directly from a URL, including HTTPS URLs. However, the context does not specify whether the models can be loaded via HTTPS from HuggingFace Hub. To load models from HuggingFace Hub, you can use the `HuggingFaceHub` class:

```python
from langchain.llms import HuggingFaceHub

hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
```

To load documents directly from a URL, you can use the `UnstructuredURLLoader` class:

```python
from langchain.document_loaders.url import UnstructuredURLLoader
```

However, it's not clear from the context whether you can load a model from HuggingFace Hub via HTTPS using this approach. If you're still encountering issues, could you please provide more details about the specific issues you're facing? This could include error messages, the specific steps you're taking, and any relevant code snippets. This will help me provide more targeted assistance.
@dosu-bot
🤖 Based on the information you've provided, it seems like you're trying to load a HuggingFace GGUF model using the HuggingFaceHub class in the LangChain framework. Here's how you can do it:

```python
repo_id = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"

llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
```

If you're encountering an error, it could be due to a few reasons. If none of these solutions work, could you please provide the exact error message you're encountering? This will help me better understand the issue and provide a more accurate solution.
Thanks!
@dosu-bot

```python
from langchain.llms import HuggingFaceHub
```

Encountered a ValidationError as follows:

```
ValidationError                Traceback (most recent call last)
1 frames
ValidationError: 1 validation error for HuggingFaceHub
```

It seems URLs are invalid for loading models.
🤖 Based on the error message you've provided, it seems like the `repo_id` you passed is not valid. You can try using just the namespace and repository name as the `repo_id`:

```python
from langchain.llms import HuggingFaceHub

repo_id = "TheBloke/OpenOrca-Zephyr-7B-GGUF"
llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 128}, huggingfacehub_api_token="mytoken"
)
```

This should correctly load the model from the HuggingFace Hub. If you're still encountering issues, please provide any new error messages or further details about the problem. For more details, you can refer to the source code and this similar issue that was resolved in the past.
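As a quick local sanity check (this is a hypothetical helper, not part of LangChain or the huggingface_hub API), a `repo_id` of the `namespace/name` form can be distinguished from a full URL with the standard library alone; the exact pattern below is an assumption about the expected shape, not the validator the Hub actually uses:

```python
import re

# Assumed shape of a Hub repo_id: "namespace/name", no URL scheme,
# exactly one slash. This pattern is illustrative only.
REPO_ID_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def looks_like_repo_id(value: str) -> bool:
    """Return True if value resembles a 'namespace/name' Hub repo_id."""
    return bool(REPO_ID_RE.match(value))

print(looks_like_repo_id("TheBloke/OpenOrca-Zephyr-7B-GGUF"))   # True
# A resolve URL contains a scheme, a colon, and extra path segments:
print(looks_like_repo_id(
    "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML"
))  # False
```

A check like this, run before constructing `HuggingFaceHub`, would catch the ValidationError above earlier and with a clearer message.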
It seems the prior one is invalid:

> Based on the information you've provided, it seems like you're trying to load a HuggingFace GGUF model using the HuggingFaceHub class in the LangChain framework. Here's how you can do it: `repo_id = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"` `llm = HuggingFaceHub(`
🤖 Based on the error message you're receiving, it seems that the `repo_id` should contain only the namespace and repository name, not a full URL. Here's how you can modify your code:

```python
from langchain.llms import HuggingFaceHub

repo_id = "TheBloke/Llama-2-13B-chat-GGML"
llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
```

This should correctly load the model from the HuggingFace Hub. You can find more details about this in the LangChain repository. I hope this helps! If you have any other questions, feel free to ask.
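The URLs in this thread all follow the Hub's `https://huggingface.co/<namespace>/<repo>/resolve/<revision>/<filename>` pattern, so a resolve URL can be split back into the `repo_id` that `HuggingFaceHub` expects plus the filename. The helper below is a stdlib-only sketch under that assumption; `split_hub_url` is a hypothetical name, not a LangChain or huggingface_hub API:

```python
from urllib.parse import urlparse

def split_hub_url(url: str) -> tuple[str, str]:
    """Split a huggingface.co '.../resolve/<rev>/<file>' URL into
    (repo_id, filename). Raises ValueError for other URL shapes."""
    parts = urlparse(url).path.strip("/").split("/")
    # Expected path shape: <namespace>/<repo>/resolve/<revision>/<filename...>
    if len(parts) < 5 or parts[2] != "resolve":
        raise ValueError(f"Not a Hub resolve URL: {url}")
    repo_id = "/".join(parts[:2])
    filename = "/".join(parts[4:])
    return repo_id, filename

repo_id, filename = split_hub_url(
    "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML"
    "/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"
)
print(repo_id)   # TheBloke/Llama-2-13B-chat-GGML
print(filename)  # llama-2-13b-chat.ggmlv3.q4_0.bin
```

With the URL decomposed this way, the `repo_id` part is what goes into `HuggingFaceHub`, rather than the whole URL that triggered the ValidationError above.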
@dosu-bot
🤖 Based on the provided context, it seems that the LangChain framework does support loading documents directly from an HTTPS URL using the `UnstructuredURLLoader` class. To load a GGUF model from an HTTPS URL, you might want to use it like this:

```python
from langchain.document_loaders.url import UnstructuredURLLoader

loader = UnstructuredURLLoader(
    "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"
)
model = loader.load()
```

Please note that this is a hypothetical usage and might not work as expected. I recommend checking the LangChain documentation or source code for more information on how to use the `UnstructuredURLLoader` class. If you're still having trouble, could you provide more information about the error message you're receiving and the code you're using to load the model? This could help me provide a more accurate solution.
```python
from langchain.document_loaders.url import UnstructuredURLLoader

loader = UnstructuredURLLoader(
```

Still encounter ERRORS:

```
ERROR:langchain_community.document_loaders.url:Error fetching or processing 3, exception: Invalid URL '3': No scheme supplied. Perhaps you meant https://3?
```
🤖 Based on the error messages you've shared, it seems like the URL is being incorrectly parsed as a sequence of characters, which is why each character is being treated as a separate URL. This issue is similar to a previously solved issue in the LangChain repository, titled "WebBaseLoader interprets incorrectly the web_path parameter". The problem was due to a regression in the latest update to WebBaseLoader, where a string passed to the loader was iterated character by character instead of being treated as a single URL.

To correctly load from an HTTPS URL using the UnstructuredURLLoader class in the LangChain framework, you can follow the example provided in the class docstring:

```python
from langchain.document_loaders import UnstructuredURLLoader

loader = UnstructuredURLLoader(
    urls=["https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"],
    mode="elements",
    strategy="fast",
)
docs = loader.load()
```

In this example, the URLs are passed as a list. If the problem persists, it might be due to the version of the unstructured library you're using. The UnstructuredURLLoader class checks the version of the unstructured library and logs a warning if you're using an old version that doesn't support the headers parameter. To fix this, you can update the unstructured library to the latest version using pip:

```shell
pip install --upgrade unstructured
```

Please try this solution and let me know if it resolves your issue. If not, please provide more details about the error and I'll be glad to assist further.
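The per-character errors above can be reproduced with plain Python: iterating over a string yields its characters, so a loader that expects a list of URLs but receives a bare string effectively "fetches" each character (hence the complaint about the URL `'3'`, which is just the `3` in `Llama-2-13B`). A minimal demonstration:

```python
url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"

# Passing the string where a list is expected: iteration yields characters.
as_characters = list(url)[:5]
print(as_characters)  # ['h', 't', 't', 'p', 's']

# Wrapping the string in a list keeps it as one URL.
as_urls = [url]
print(len(as_urls))  # 1
```

This is why `urls=["https://..."]` behaves correctly while a bare string produces one error per character.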
@dosu-bot

```python
from langchain.document_loaders.url import UnstructuredURLLoader

loader = UnstructuredURLLoader(
```

Still encounter ERRORS:

```
ERROR:langchain_community.document_loaders.url:Error fetching or processing 3, exception: Invalid URL '3': No scheme supplied. Perhaps you meant https://3?
```
🤖 Based on the error messages you've shared, it seems like the URL you're trying to load is being incorrectly parsed as a sequence of characters. This issue has been reported before in the LangChain repository, specifically in the issue titled "WebBaseLoader interprets incorrectly the web_path parameter". The problem was identified as a regression in the latest update to WebBaseLoader, where a string passed to the loader was iterated character by character. To resolve this issue, you might need to modify the condition in the code to correctly handle strings passed to the loader.

If the issue persists, you might want to catch the exception and print it out to get more details about the error. Here's an example of how you can do this:

```python
for url in urls:
    try:
        ...  # existing fetching/processing code
    except Exception as e:
        print(e)
        if self.continue_on_failure:
            logger.error(f"Error fetching or processing {url}, exception: {e}")
            continue
        else:
            raise e
```

This will print out the exception message, which should give you more information about the error. If you could share the exact version of the library you're using, that would also help.
@dosu-bot

```python
loader = UnstructuredURLLoader(
```

However, it ran out of memory (OOM).
🤖 Hello @dosubot[bot]! I'm Dosu, a friendly bot here to lend a hand with bugs, answer your queries, and help you become a better contributor to the LangChain project. While we wait for a human maintainer, feel free to poke me with any problems you're encountering. I'm here to make your coding journey smoother. Let's work together on this!

Based on the error messages you're receiving, it seems like the URL you're passing to the loader is not being parsed correctly. To resolve this issue, you need to ensure that the URL you're passing is properly formatted:

```python
from urllib.parse import urlunparse

# Define the components of your URL
scheme = "https"
netloc = "huggingface.co"
path = "/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin"
params = ""
query = ""
fragment = ""

# Use urlunparse to generate a properly formatted URL
url = urlunparse((scheme, netloc, path, params, query, fragment))

# Now you can pass this URL to the UnstructuredURLLoader
loader = UnstructuredURLLoader(url)
model = loader.load()
```

This should prevent the parsing error.
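The `urlunparse` call itself can be verified with the standard library alone. Note, though, that the per-character errors earlier in this thread suggest the loader expects a *list* of URLs, so wrapping the string is likely still needed; the commented-out loader call below is an assumption along those lines, not confirmed API usage:

```python
from urllib.parse import urlunparse

# (scheme, netloc, path, params, query, fragment)
components = (
    "https",
    "huggingface.co",
    "/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin",
    "", "", "",
)
url = urlunparse(components)
print(url)
# https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/llama-2-13b-chat.ggmlv3.q4_0.bin

# Assumption based on this thread: pass the URL inside a list so the
# loader does not iterate the string character by character.
# loader = UnstructuredURLLoader(urls=[url])
```

If the URL round-trips correctly here, any remaining failure is in how the loader is invoked, not in URL formatting.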
@dosu-bot

```python
# Define the components of your URL
scheme = "https"

# Use urlunparse to generate a properly formatted URL
url = urlunparse((scheme, netloc, path, params, query, fragment))

# Now you can pass this URL to the UnstructuredURLLoader
loader = UnstructuredURLLoader(url)
```

Encounter errors:
🤖 Based on the information you've provided and the context from the LangChain repository, it seems like the issue you're encountering is related to how the `UnstructuredURLLoader` class handles the URLs it is given. Here is the relevant code from the `load` method:

```python
for url in urls:
    try:
        if self.__is_non_html_available():
            if self.__is_headers_available_for_non_html():
                elements = partition(
                    url=url, headers=self.headers, **self.unstructured_kwargs
                )
            else:
                elements = partition(url=url, **self.unstructured_kwargs)
        else:
            if self.__is_headers_available_for_html():
                elements = partition_html(
                    url=url, headers=self.headers, **self.unstructured_kwargs
                )
            else:
                elements = partition_html(url=url, **self.unstructured_kwargs)
    except Exception as e:
        if self.continue_on_failure:
            logger.error(f"Error fetching or processing {url}, exception: {e}")
            continue
        else:
            raise e
```

You can find this code in the libs/langchain/langchain/document_loaders/url.py file in the LangChain repository. I also found a similar issue in the LangChain repository: "WebBaseLoader interprets incorrectly the web_path parameter". This issue was caused by a regression in the latest update to WebBaseLoader, where a string passed to the loader was iterated character by character. I hope this information helps you understand the issue and how to potentially fix it. If you have any further questions or need more clarification, please let me know.
Could load GGUF models from https ? |
Issue with current documentation:
No response
Idea or request for content:
No response