This is an updated version of (#321 (comment)). The default Dockerfile has never worked for me, but the updated version below for Ubuntu 22.04 with an RTX 3080 works just fine to get things going.
It was previously suggested that the update to v2.1.2 would fix this, but that doesn't seem to have worked.
Hopefully someone else finds it useful.
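For reference, the image builds and runs the same way as the stock one. A rough sketch, assuming the file below replaces docker/Dockerfile in an AlphaFold checkout and the genetic databases are already downloaded; the paths are placeholders and the exact run_docker.py flags depend on your AlphaFold version:
docker build -f docker/Dockerfile -t alphafold .
python3 docker/run_docker.py --fasta_paths=/path/to/target.fasta --max_template_date=2022-01-01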
# Copyright 2021 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARG CUDA=12.1.0
# ARG CUDA=11.1.1
FROM nvidia/cuda:${CUDA}-runtime-ubuntu22.04
# FROM nvidia/cuda:${CUDA}-runtime
# FROM directive resets ARGS, so we specify again (the value is retained if
# previously set).
ARG CUDA
# ARG CUDA_VERSION=11.1.1
# Use bash to support string substitution.
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
build-essential \
cmake \
cuda-command-line-tools-$(cut -f1,2 -d- <<< ${CUDA//./-}) \
git \
hmmer \
kalign \
tzdata \
wget \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get autoremove -y \
&& apt-get clean
CMD ["wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin"]
CMD ["sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600"]
CMD ["wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda-repo-ubuntu2204-12-1-local_12.1.0-530.30.02-1_amd64.deb"]
CMD ["sudo dpkg -i cuda-repo-ubuntu2204-12-1-local_12.1.0-530.30.02-1_amd64.deb"]
CMD ["apt-key add /var/cuda-repo-ubuntu2204-12-1-local/3bf863cc.pub"]
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y cuda
# Compile HHsuite from source.
RUN git clone --branch v3.3.0 https://github.com/soedinglab/hh-suite.git /tmp/hh-suite \
&& mkdir /tmp/hh-suite/build \
&& pushd /tmp/hh-suite/build \
&& cmake -DCMAKE_INSTALL_PREFIX=/opt/hhsuite .. \
&& make -j 4 && make install \
&& ln -s /opt/hhsuite/bin/* /usr/bin \
&& popd \
&& rm -rf /tmp/hh-suite
# Install Miniconda package manager.
RUN wget -q -P /tmp \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& bash /tmp/Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda \
&& rm /tmp/Miniconda3-latest-Linux-x86_64.sh
# Install conda packages.
ENV PATH="/opt/conda/bin:$PATH"
# RUN conda install -qy conda==4.13.0 \
RUN conda install -qy conda=23.1.0 \
&& conda install -y -c conda-forge \
openmm=7.5.1 \
# cudatoolkit==${CUDA_VERSION} \
pdbfixer \
pip \
python=3.8 \
&& conda clean --all --force-pkgs-dirs --yes
COPY . /app/alphafold
RUN wget -q -P /app/alphafold/alphafold/common/ \
https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt
# Install pip packages.
RUN pip3 install --upgrade pip --no-cache-dir \
&& pip3 install -r /app/alphafold/requirements.txt --no-cache-dir \
# && pip3 install --upgrade --no-cache-dir \
&& pip3 install https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.3.25+cuda11.cudnn82-cp38-cp38-manylinux2014_x86_64.whl
# jax==0.3.25 \
# jaxlib==0.3.25+cuda11.cudnn805 \
# -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
# Apply OpenMM patch.
WORKDIR /opt/conda/lib/python3.8/site-packages
RUN patch -p0 < /app/alphafold/docker/openmm.patch
# Add SETUID bit to the ldconfig binary so that non-root users can run it.
RUN chmod u+s /sbin/ldconfig.real
# We need to run `ldconfig` first to ensure GPUs are visible, due to some quirk
# with Debian. See https://github.com/NVIDIA/nvidia-docker/issues/1399 for
# details.
# ENTRYPOINT does not support easily running multiple commands, so instead we
# write a shell script to wrap them up.
WORKDIR /app/alphafold
RUN echo $'#!/bin/bash\n\
ldconfig\n\
python /app/alphafold/run_alphafold.py "$@"' > /app/run_alphafold.sh \
&& chmod +x /app/run_alphafold.sh
ENTRYPOINT ["/app/run_alphafold.sh"]