Add ability to give users / groups write access to buckets + multi-zone GPUs #2406
Conversation
I just successfully tested this on one of the Lamont servers! Thanks a lot for making this work @yuvipanda.
I will further test this with LEAP members this week. It would also be great if this could be set up for the m2lines hub, where I collaborate with folks on bringing large datasets to the cloud from HPC. Many thanks.
GCS allows individual Google Users as well as Google Groups to have permission to read / write to GCS buckets (unlike AWS). We can use this to let community leaders manage who can read and write to GCS buckets from outside the cloud, simply by managing membership in a Google Group! In this commit, we set up the persistent buckets of the LEAP hubs with this functionality. Access is managed via a Google Group - I have temporarily created this under the 2i2c org and invited Julius (the community champion) as an administrator, but perhaps it should just be created as a regular Google Group. Using groups here means managing this access does not require any 2i2c engineering work. Future work would probably fold in the separate variable we have for determining whether a bucket is publicly accessible as an attribute as well. Ref https://github.com/2i2c-org/infrastructure/issues/2096
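For context, a minimal sketch of what a group-based bucket grant looks like in terraform with the Google provider. The resource names and group address below are illustrative, not the actual values in the 2i2c config:

```terraform
# Illustrative only: the bucket resource name and group email are made up.
# "roles/storage.objectAdmin" grants read + write on objects in the bucket,
# and membership in the group is managed entirely outside terraform.
resource "google_storage_bucket_iam_member" "extra_admin" {
  bucket = google_storage_bucket.persistent.name
  role   = "roles/storage.objectAdmin"
  member = "group:leap-persistent-access@example.org"
}
```

Because the `member` is a group rather than individual users, adding or removing people never touches terraform state - the community champion just edits the group membership.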
It looks like the previous 2i2c.org one could not be managed by users outside the 2i2c.org org.
@jbusecke has written useful documentation for this too! https://leap-stc.github.io/leap-pangeo/jupyterhub.html#uploading-data-from-an-hpc-system
@yuvipanda, what's needed to move this from draft to review? Asking because it is linked to support tickets.
In https://2i2c.freshdesk.com/a/tickets/764, LEAP users are running out of GPUs in the one zone their notebook nodes are in. This expands scheduling to all possible zones, for GPU nodes only, to maximize the number of GPUs available to them. This comes at the cost of home directory access possibly being slightly slower, but that's ok.
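As a rough sketch of what "all possible zones" means in terraform: a GKE node pool's `node_locations` argument lists the zones nodes may be created in. The zone list, machine type, and accelerator below are examples only, not the exact values used for the LEAP hubs:

```terraform
# Illustrative sketch: all values here are examples, not LEAP's actual config.
resource "google_container_node_pool" "gpu" {
  name    = "gpu-pool"
  cluster = google_container_cluster.cluster.name

  # List every zone in the region so the autoscaler can find GPU capacity
  # wherever it exists, instead of only the cluster's default zone.
  node_locations = [
    "us-central1-a",
    "us-central1-b",
    "us-central1-c",
    "us-central1-f",
  ]

  node_config {
    machine_type = "n1-highmem-4"
    guest_accelerator {
      type  = "nvidia-tesla-t4"
      count = 1
    }
  }
}
```

The trade-off mentioned above follows from this: a node scheduled in a different zone from the home-directory storage pays cross-zone latency on filesystem access.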
@pnasrat documentation! I'll try to get this done over the next week.
While trying to fix #2696, I realized that the terraform config has lagged behind and then I discovered this PR.
@yuvipanda, is there anything I can do to help push this forward? Maybe we can track the documentation bit in another issue + PR?
@GeorgianaElena ok, I've pushed docs and this is ready to go! Sorry for the delay :( This has been apply'd already.
Amazing! Thank you @yuvipanda! 🚀
Thanks @GeorgianaElena!
Ref 2i2c-org/features#22
Additionally, a request to make GPU nodes available in whatever zone possible in
the region we are in came in before this PR could be merged, and so was added on
here. This includes documentation on how to set up GPUs on GCP too!