Shim package to interact with an **existing** NVCC install #8229

Merged 20 commits on Aug 16, 2019
recipes/nvcc/LICENSE.txt (26 additions, 0 deletions)
@@ -0,0 +1,26 @@
Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of NVIDIA CORPORATION nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
recipes/nvcc/build.sh (34 additions, 0 deletions)
@@ -0,0 +1,34 @@
#!/bin/bash

set -xeuo pipefail

# Default to using `nvcc` to specify `CUDA_HOME`.
if [ -z "${CUDA_HOME+x}" ]
then
    CUDA_HOME="$(dirname "$(dirname "$(which nvcc)")")"
fi

# Set `CUDA_HOME` in an activation script.
Contributor: If `CUDA_HOME` is already set, should we store the value into `BACKUP_CUDA_HOME` or similar and restore it in the deactivate script?

Member Author: We could do that, though it would probably be a little gross to write and read (we would need to escape some things). It also probably won't matter much if users are already using this with our Docker image (the intent here).
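A minimal sketch of that backup/restore idea, assuming the scripts are generated by `build.sh` as above (the `BACKUP_CUDA_HOME` name and the `/usr/local/cuda` value are illustrative, not part of this PR):

#!/bin/bash
# Hypothetical generated activate script: back up any pre-existing
# CUDA_HOME before overriding it with the value baked in at build time.
if [ -n "${CUDA_HOME+x}" ]
then
    export BACKUP_CUDA_HOME="${CUDA_HOME}"
fi
export CUDA_HOME="/usr/local/cuda"

# Hypothetical generated deactivate script: restore the previous
# value if one was backed up, otherwise unset CUDA_HOME entirely.
if [ -n "${BACKUP_CUDA_HOME+x}" ]
then
    export CUDA_HOME="${BACKUP_CUDA_HOME}"
    unset BACKUP_CUDA_HOME
else
    unset CUDA_HOME
fi

As the author notes, generating this from the heredoc below would require escaping the `$` expansions so they are evaluated at activation time rather than at build time.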

mkdir -p "${PREFIX}/etc/conda/activate.d"
cat > "${PREFIX}/etc/conda/activate.d/${PKG_NAME}_activate.sh" <<EOF
#!/bin/bash
export CUDA_HOME="${CUDA_HOME}"
Member: This hardcodes the CUDA install location, right? How about making the user supply it? (This is what we do with the macOS SDK and CONDA_BUILD_SYSROOT.)

Member Author: It does, though the intent is for this to come from the Docker image, where this location is fixed, so it probably doesn't matter much that it is hardcoded.

I suppose we could do that, though I'm not clear on what the intent would be. Do we want users to use this package outside of the Docker image? It's not clear to me how we can do that reasonably yet. I would prefer getting this out in a limited form and seeing what we can accomplish with it before thinking about other ways to use it.

Member:

> Do we want users to use this package outside of the Docker image?

Yes. CONDA_BUILD_SYSROOT is a way for users to use the macOS compilers outside of conda-build.

I think we want to get this right the first time, to ensure that we don't need to keep supporting older conventions.

Member Author: Users are still able to build packages with this convention outside of conda-forge (and have been doing so for a while, in fact). This works the same as it does for other Linux builds; in other words, users run a build script that starts one of our Docker images and performs the build inside it.

No objections to getting things right. I'm just not seeing how this causes issues; could you please clarify the problems you are seeing here?
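For illustration, the Docker-driven workflow described here might look roughly like this (a sketch; the image name and paths are assumptions, not taken from this PR):

# Hypothetical: run a conda-build inside a conda-forge Docker image
# that ships the CUDA toolkit at a fixed location.
docker run --rm \
    -v "$(pwd)":/work -w /work \
    -e CUDA_HOME=/usr/local/cuda \
    condaforge/example-cuda-image:9.2 \
    conda build recipes/nvcc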

Member: What happens to users who don't have access to Docker?

For example, I was building qt on a machine without access to Docker, and it worked, because you really don't need the Docker image. With this package, you are tying the building of conda packages to the Docker image.

Member Author (@jakirkham, Aug 15, 2019): That opens us up to figuring out how people have installed the NVIDIA toolchain, whether it was done correctly, and debugging the various issues they may encounter by doing it incorrectly. I'd prefer to avoid that exposure by limiting our supported case to one that we know behaves well.

Member: Okay, then please add some code to the activation script to error out if ${CUDA_HOME} or ${CUDA_HOME}/lib64/stubs/libcuda.so is not found.

Member Author: Sounds reasonable. I have pushed such a test; please take a look.
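A minimal sketch of what such a check in the generated activation script could look like (illustrative only; the test actually pushed may differ):

#!/bin/bash
# Fail activation early if the expected CUDA install is missing.
# Use `return` rather than `exit`, since activation scripts are sourced.
if [ ! -d "${CUDA_HOME}" ]
then
    echo "CUDA_HOME directory '${CUDA_HOME}' does not exist." >&2
    return 1
fi
if [ ! -f "${CUDA_HOME}/lib64/stubs/libcuda.so" ]
then
    echo "libcuda.so stub not found under '${CUDA_HOME}/lib64/stubs'." >&2
    return 1
fi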

Member: Can you also check that the nvcc version matches the package version?

Member Author: Done.
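One way such a version check could be written (a sketch, assuming the package version is substituted into the generated script at build time; the actual commit may parse the output differently):

#!/bin/bash
# Extract "major.minor" from `nvcc --version` (e.g. "release 9.2, V9.2.148")
# and compare it against the version this package was built for.
pkg_version="9.2"
nvcc_version="$(nvcc --version | sed -n 's/^.*release \([0-9][0-9]*\.[0-9][0-9]*\).*$/\1/p')"
if [ "${nvcc_version}" != "${pkg_version}" ]
then
    echo "nvcc ${nvcc_version} does not match package version ${pkg_version}." >&2
    return 1
fi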

EOF

# Unset `CUDA_HOME` in a deactivation script.
mkdir -p "${PREFIX}/etc/conda/deactivate.d"
cat > "${PREFIX}/etc/conda/deactivate.d/${PKG_NAME}_deactivate.sh" <<EOF
#!/bin/bash
unset CUDA_HOME
EOF

# Symlink `nvcc` into `bin` so it can be easily run.
mkdir -p "${PREFIX}/bin"
ln -s "${CUDA_HOME}/bin/nvcc" "${PREFIX}/bin/nvcc"

# Add `libcuda.so` shared object stub to the compiler sysroot.
# Needed for things that want to link to `libcuda.so`.
# Stub is used to avoid getting driver code linked into binaries.
CONDA_BUILD_SYSROOT="$(${CC} --print-sysroot)"
mkdir -p "${CONDA_BUILD_SYSROOT}/lib"
ln -s "${CUDA_HOME}/lib64/stubs/libcuda.so" "${CONDA_BUILD_SYSROOT}/lib/libcuda.so"
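For context, the stub is what lets downstream builds pass `-lcuda` at link time without a real driver present; at run time the actual `libcuda.so` comes from the NVIDIA driver. A hypothetical downstream compile (the file name is assumed):

# Link a CUDA Driver API program against the stub in the sysroot.
nvcc my_driver_api_app.c -o my_driver_api_app -lcuda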
recipes/nvcc/meta.yaml (51 additions, 0 deletions)
@@ -0,0 +1,51 @@
{% set name = "nvcc" %}
{% set version = "9.2" %}

package:
  name: "{{ name }}_{{ target_platform }}"
  version: {{ version }}

build:
  number: 0
  skip: true  # [not linux]
  script_env:
    - CUDA_HOME
  ignore_run_exports:
    - libgcc-ng
  run_exports:
    strong:
      - cudatoolkit {{ version }}|{{ version }}.*

requirements:
  host:
    - {{ compiler("c") }}
  run:
    - {{ compiler("c") }}
    # Host code is forwarded to a C++ compiler
    - {{ compiler("cxx") }}

test:
  commands:
    # Verify the symlink to the libcuda stub library exists.
    - test -f "$(${CC} --print-sysroot)/lib/libcuda.so"

    # Verify the activation scripts are in place.
    {% for state in ["activate", "deactivate"] %}
    - test -f "${PREFIX}/etc/conda/{{ state }}.d/{{ PKG_NAME }}_{{ state }}.sh"
    {% endfor %}

    # Try using the activation scripts.
    - source ${PREFIX}/etc/conda/activate.d/{{ PKG_NAME }}_activate.sh
    - if [ -z ${CUDA_HOME+x} ]; then echo "CUDA_HOME is unset after activation" && exit 1; else echo "CUDA_HOME is set to '$CUDA_HOME'"; fi
    - source ${PREFIX}/etc/conda/deactivate.d/{{ PKG_NAME }}_deactivate.sh
    - if [ -z ${CUDA_HOME+x} ]; then echo "CUDA_HOME is unset after deactivation"; else echo "CUDA_HOME is set to '$CUDA_HOME' after deactivation" && exit 1; fi

about:
  home: https://github.com/conda-forge/nvcc-feedstock
  license: BSD 3-Clause
  license_file: LICENSE.txt
  summary: A meta-package to enable the right nvcc.

extra:
  recipe-maintainers:
    - jakirkham
Member: Please add me as well.

Member Author: Sure, I'd be happy to have your help here. 🙂 Added you in the last commit.