What happened?
When deploying Helicone using docker-compose, the helicone-jawn container encounters an error while loading the PromptGuardModel. The error indicates that the model path it is given does not exist inside the container.
Error Message
helicone-jawn | OSError: Incorrect path_or_model_id: '/usr/src/app/valhalla/prompt_security/./prompt-guard-86m'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
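That path does not exist inside the jawn image: the Prompt-Guard weights are not part of the repository, so unless they are downloaded and copied into the image, transformers has nothing to load at that location. One way to confirm this from the host, assuming the container is still running, is to list the directory inside it (the container name helicone-jawn is taken from the log output below; adjust it if your compose project names it differently):
# If the fix below has not been applied yet, prompt-guard-86m will be
# missing from this listing
docker exec helicone-jawn ls /usr/src/app/valhalla/prompt_security/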
How I Solved It
# Navigate to the prompt security directory
cd helicone/valhalla/prompt_security/
# Log in to HuggingFace CLI
huggingface-cli login --token=${HF_TOKEN}
# Download the Prompt-Guard model
# Note: you need to have been granted access to the Llama models on Hugging Face before downloading
huggingface-cli download meta-llama/Prompt-Guard-86M --local-dir prompt-guard-86m
# Navigate back to the parent directory
cd ../
# Edit the Dockerfile
vi dockerfile
# Add the following line after the existing COPY command:
# COPY ./valhalla/prompt_security/prompt-guard-86m /usr/src/app/valhalla/prompt_security/prompt-guard-86m
# Navigate to the docker directory
cd ../docker
# Build and run the Docker containers
docker compose build
docker compose up -d
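To sanity-check the result, you can verify both the download and the rebuilt container. The file names below are only indicative of what a Hugging Face model snapshot usually contains, and helicone-jawn is the container name from the log output:
# From the docker directory: the model folder should now hold the tokenizer
# and weight files (e.g. config.json, tokenizer files, *.safetensors)
ls ../valhalla/prompt_security/prompt-guard-86m
# After the rebuild, the path error should no longer show up in the jawn logs
docker logs helicone-jawn 2>&1 | grep -i "path_or_model_id" || echo "model path error not found in logs"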
Relevant log output
helicone-jawn | The above exception was the direct cause of the following exception:
helicone-jawn |
helicone-jawn | Traceback (most recent call last):
helicone-jawn | File "/usr/src/app/valhalla/prompt_security/main.py", line 166, in<module>
helicone-jawn | global_model = PromptGuardModel(num_workers=cpu_count // 2).load_model()
helicone-jawn | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
helicone-jawn | File "/usr/src/app/valhalla/prompt_security/main.py", line 108, in load_model
helicone-jawn | self.tokenizer = AutoTokenizer.from_pretrained(
helicone-jawn | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
helicone-jawn | File "/usr/src/app/valhalla/prompt_security/venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 881, in from_pretrained
helicone-jawn | tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
helicone-jawn | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
helicone-jawn | File "/usr/src/app/valhalla/prompt_security/venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 713, in get_tokenizer_config
helicone-jawn | resolved_config_file = cached_file(
helicone-jawn | ^^^^^^^^^^^^
helicone-jawn | File "/usr/src/app/valhalla/prompt_security/venv/lib/python3.11/site-packages/transformers/utils/hub.py", line 408, in cached_file
helicone-jawn | raise EnvironmentError(
helicone-jawn | OSError: Incorrect path_or_model_id: '/usr/src/app/valhalla/prompt_security/./prompt-guard-86m'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
Twitter / LinkedIn details
No response