Mysterious permissions errors when running test_run.sh twice #59
Tried again after a fresh reinstall of my system. I'm on Ubuntu 24.04.1 LTS.
Initial run works. Second run stops after:
Removed the -f argument from chmod. Now we see:
Tried adding my user to the docker group to run docker as non-root and see if that fixes it. Did not fix it… Also tried taking ownership using …
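Forcing ownership back to the current user typically looks something like the snippet below; the exact command used here isn't preserved in the thread, so this is illustrative only (paths assumed to match the repo layout):

```bash
# Illustrative only: hand the test output directory back to the current host user
sudo chown -R "$(id -u):$(id -g)" test/output
```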
Ok. So it fails on this part somewhere:

```bash
if [ -d "$OUTPUT_DIR" ]; then
  # Ensure permissions are setup correctly
  # This allows for the Docker user to write to this location
  rm -rf "${OUTPUT_DIR}"/*
  chmod -f o+rwx "$OUTPUT_DIR"
else
  mkdir --mode=o+rwx "$OUTPUT_DIR"
fi
```

What are the permissions/ownership of the output directory right when you clone the repo, and what are they after the first run?
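For reference, a quick way to inspect that (paths assumed relative to the repo root):

```bash
# Show owner, group (names and numeric IDs) and mode bits of the test output directory
stat -c '%U:%G (%u:%g) %a' test/output
# Or, equivalently, with numeric IDs via ls:
ls -ldn test/output
```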
Oh, and while we are at it, what do `id -u` and `id -g` return?
Both return 1000. Output of the `id` command:
After clone I'm not sure, will check.

After a successful run, the owner is 100999:
Thanks for testing. Huh, that's odd; this part should have handed ownership back to your user:

```bash
docker run --rm \
  --quiet \
  --env HOST_UID=`id -u` \
  --env HOST_GID=`id -g` \
  --volume "$OUTPUT_DIR":/output \
  alpine:latest \
  /bin/sh -c 'chown -R ${HOST_UID}:${HOST_GID} /output'
```
Well... When I remove that section, it actually works just fine?
https://docs.docker.com/desktop/faqs/linuxfaqs/#how-do-i-enable-file-sharing
Current suspect: with …

disabling …
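One way to tell whether the CLI is talking to Docker Desktop (which has its own file-sharing layer, see the FAQ linked above) or to a native Docker Engine is to look at the Docker contexts; the context names below are the usual defaults and may differ per setup:

```bash
# List configured contexts; Docker Desktop on Linux typically registers "desktop-linux"
docker context ls
# Print the currently active context ("default" usually means the native engine socket)
docker context show
```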
@koopmant could you try a test_run.sh with this:

```bash
#!/usr/bin/env bash

# Stop at first error
set -e

SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

DOCKER_TAG="example-algorithm"
DOCKER_NOOP_VOLUME="${DOCKER_TAG}-volume"

INPUT_DIR="${SCRIPT_DIR}/test/input"
OUTPUT_DIR="${SCRIPT_DIR}/test/output"

cleanup() {
  echo "=+= Cleaning up ..."
  # Ensure permissions are set correctly on the output
  # This allows the host user (e.g. you) to access and handle these files
  docker run --rm \
    --quiet \
    --env HOST_UID=`id -u` \
    --env HOST_GID=`id -g` \
    --volume "$OUTPUT_DIR":/output \
    alpine:latest \
    /bin/sh -c 'chown -R ${HOST_UID}:${HOST_GID} /output'
}

trap cleanup EXIT

if [ -d "$OUTPUT_DIR" ]; then
  # Ensure permissions are setup correctly
  # This allows for the Docker user to write to this location
  cleanup
  rm -rf "${OUTPUT_DIR}"/*
  chmod -f o+rwx "$OUTPUT_DIR"
else
  mkdir -m o+rwx "$OUTPUT_DIR"
fi

echo "=+= (Re)build the container"
docker build "$SCRIPT_DIR" \
  --platform=linux/amd64 \
  --tag $DOCKER_TAG 2>&1

echo "=+= Doing a forward pass"
## Note the extra arguments that are passed here:
# '--network none'
#    entails there is no internet connection
# '--volume <NAME>:/tmp'
#    is added because on Grand Challenge this directory cannot be used to store permanent files
docker volume create "$DOCKER_NOOP_VOLUME" > /dev/null

docker run --rm \
  --platform=linux/amd64 \
  --network none \
  --volume "$INPUT_DIR":/input:ro \
  --volume "$OUTPUT_DIR":/output \
  --volume "$DOCKER_NOOP_VOLUME":/tmp \
  $DOCKER_TAG

docker volume rm "$DOCKER_NOOP_VOLUME" > /dev/null

echo "=+= Wrote results to ${OUTPUT_DIR}"
echo "=+= Save this image for uploading via ./save.sh"
```
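To reproduce the report with this version, the invocation is unchanged; running it back to back is what previously triggered the permission error:

```bash
# Run the test script twice in a row to check the second-run behaviour
./test_run.sh && ./test_run.sh
```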
output on second run:
Was that all the output? I can't tell from your comment whether it worked. Sorry.
Sorry, that was not very clear. Yes, that's all the output; so it's not working. It runs cleanup first, then runs into the permission error again during chmod (leading to an immediate exit), and then runs cleanup again on exit.
No problem! @amickan also ran into a similar-looking problem. However, the above solution worked for her. Now I think it's possible that was from a different origin (the …).

Albeit, if you temp-fix it via sudo, forcing ownership to your own user, does it return when running test_run.sh twice?

I am hesitant to add calls like this, because it really kicks the interoperability of the script down a few notches:

```bash
# Set the ACL recursively (-R) for the specified user and group
setfacl -R -m u:$USER:rwx -m g:$GROUP:rwx $DIRECTORY

# Set default ACL for future files and directories (so new files inherit the ACL)
setfacl -R -d -m u:$USER:rwx -m g:$GROUP:rwx $DIRECTORY
```
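If we did go the ACL route, the resulting entries could be checked afterwards with getfacl ($DIRECTORY being the same placeholder as above):

```bash
# Inspect the effective and default ACL entries on the directory
getfacl "$DIRECTORY"
```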
Sorry, I don't quite understand what you mean. FWIW, if you change this line: … the line …
Sorry, let me clarify: if you fix ownership once, does the proposed test_run.sh then keep working, even if run multiple times? The initial chmod is meant to ensure that Docker's internal user can actually write to the output directory in the first place. The second chown is to ensure that the host user has full access to the files and is allowed to change any and all permissions on them.
Yes, that sort of works. The chown attempted in the last docker command then fails with a permission error. So I remain the owner of the directory and the rest of the script is successful. Still, that means the current user does not become owner of the files in the output directory. But at least the script runs and creates output.
@koopmant, let's discuss this during our meeting. =)
Hint for a fix: there seems to be a new mapping in some Linux distros (or a new Docker Engine, dunno) that maps internal Docker UIDs to system ones.
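One way to check whether such a remapping (rootless Docker or userns-remap) is active on the host; the username and range below are illustrative:

```bash
# Rootless Docker / userns-remap map container UIDs onto a subordinate UID
# range on the host, configured in /etc/subuid
grep "^$USER:" /etc/subuid
# Example (illustrative): koopmant:100000:65536
# Under the default rootless mapping, container UID 1000 then appears on the
# host as 100000 + 1000 - 1 = 100999, matching the owner observed earlier.

# "name=rootless" among the security options indicates a rootless daemon
docker info --format '{{.SecurityOptions}}'
```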
See fix in: DIAGNijmegen/rse-workshop-2024@5cae844. Should be replicated to forge.
* Fix some pointers to resources
* Fix permissions setting on test_runs #59
During the workshop run, @koopmant ran into the problem that running the test script twice resulted in permission errors.
The test_run.sh has a secondary docker run, introduced specifically to target these permissions (root user vs. the user created at build time). Hence, it is unclear why this occurred. It's being investigated.