Commit b8a7669: Updates

Josh-XT committed Dec 1, 2024
1 parent 7e64960
Showing 4 changed files with 8 additions and 5 deletions.

.gitignore (2 additions & 1 deletion)

@@ -10,4 +10,5 @@ tests/*.wav
 .env.local
 test.ipynb
 output.md
-ezlocalai.yml
+ezlocalai.yml
+.env

cuda.Dockerfile (3 additions & 2 deletions)

@@ -12,8 +12,9 @@ WORKDIR /app
 ENV HOST=0.0.0.0 \
     CUDA_DOCKER_ARCH=all \
     LLAMA_CUBLAS=1 \
-    GGML_CUDA=1
-RUN pip install llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124 --no-cache-dir
+    GGML_CUDA=1 \
+    CMAKE_ARGS="-DGGML_CUDA=on"
+RUN CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python==0.3.1 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124 --no-cache-dir
 RUN git clone https://github.com/Josh-XT/DeepSeek-VL deepseek
 RUN pip install torch==2.3.1+cu121 torchaudio==2.3.1+cu121 --index-url https://download.pytorch.org/whl/cu121
 COPY cuda-requirements.txt .
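
Setting CMAKE_ARGS="-DGGML_CUDA=on" makes pip compile llama-cpp-python's bundled llama.cpp with CUDA kernels whenever it falls back to a source build, while --extra-index-url points at prebuilt cu124 wheels so most installs skip compilation entirely. A minimal smoke test for the resulting image, assuming a local GGUF model at a placeholder path (the script and model path are not part of this commit):

# check_cuda_build.py: verify llama-cpp-python was built with GPU offload.
import llama_cpp

# Reports compile-time GPU support; False means a CPU-only build.
print("GPU offload supported:", llama_cpp.llama_supports_gpu_offload())

llm = llama_cpp.Llama(
    model_path="models/example.gguf",  # placeholder: any local GGUF model
    n_gpu_layers=-1,  # offload all layers to the GPU
    verbose=True,  # startup log lists CUDA devices on success
)
print(llm("Q: What is 2 + 2? A:", max_tokens=8)["choices"][0]["text"])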

ezlocalai/VLM.py (2 additions & 1 deletion)

@@ -27,7 +27,8 @@ def __init__(self, model="deepseek-ai/deepseek-vl-1.3b-chat"):
                 cache_dir=os.path.join(os.getcwd(), "models"),
             )
             self.vl_gpt = self.vl_gpt.to(torch.bfloat16).cuda().eval()
-        except:
+        except Exception as e:
+            print(f"[VLM] Error: {e}")
             self.vl_chat_processor = None
             self.tokenizer = None
             self.vl_gpt = None
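
The bare except: this replaces would swallow every failure, including KeyboardInterrupt, with no record of why the model never loaded. A sketch of the full pattern in context, assuming the loader follows standard DeepSeek-VL usage; the surrounding class body is abbreviated for illustration and is not the commit's exact code:

import os

import torch
from transformers import AutoModelForCausalLM

from deepseek_vl.models import VLChatProcessor


class VLM:
    def __init__(self, model="deepseek-ai/deepseek-vl-1.3b-chat"):
        try:
            self.vl_chat_processor = VLChatProcessor.from_pretrained(model)
            self.tokenizer = self.vl_chat_processor.tokenizer
            self.vl_gpt = AutoModelForCausalLM.from_pretrained(
                model,
                trust_remote_code=True,
                cache_dir=os.path.join(os.getcwd(), "models"),
            )
            self.vl_gpt = self.vl_gpt.to(torch.bfloat16).cuda().eval()
        except Exception as e:
            # Log the failure instead of silently discarding it, then fall
            # back to a disabled-VLM state that callers can detect via None.
            print(f"[VLM] Error: {e}")
            self.vl_chat_processor = None
            self.tokenizer = None
            self.vl_gpt = None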

requirements.txt (1 addition & 1 deletion)

@@ -25,4 +25,4 @@ optimum
 onnx
 diffusers[torch]
 torchaudio==2.3.1
-llama-cpp-python==0.2.90
+llama-cpp-python==0.3.1
