This repository was archived by the owner on Jan 24, 2024. It is now read-only.

build(docker): update example to use huggyllama/llama-7b #169

Merged 1 commit on Apr 20, 2023
4 changes: 2 additions & 2 deletions deployments/bundle/llama-7b.Dockerfile
@@ -4,10 +4,10 @@ FROM hyperonym/basaran:0.16.2
 WORKDIR /app

 # Download the model to be bundled
-RUN python utils/download.py Enoch/llama-7b-hf /model
+RUN python utils/download.py huggyllama/llama-7b /model

 # Provide default environment variables
 ENV MODEL="/model"
 ENV MODEL_LOCAL_FILES_ONLY="true"
 ENV MODEL_HALF_PRECISION="true"
-ENV SERVER_MODEL_NAME="LLaMA-7B"
+ENV SERVER_MODEL_NAME="huggyllama/llama-7b"
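For reference, a minimal sketch of how the bundled image could be built and run with this change applied. The image tag (basaran-llama-7b) and the host port mapping are illustrative assumptions, and the container port assumes the server listens on port 80; adjust both to match your deployment.

    # Build the bundled image from the repository root (tag name is an assumption)
    docker build -f deployments/bundle/llama-7b.Dockerfile -t basaran-llama-7b .

    # Run the image, mapping host port 8080 to the container
    # (assumes the server listens on port 80 inside the container)
    docker run -p 8080:80 basaran-llama-7b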