Running On-Prem Compute Servers on CoCalc (Video) #7465
williamstein
announced in
Announcements
-
This looks really promising. We will look into it. Do you know if it is possible to run the server (multipass, Docker, etc.) rootless? Running these things with root access on the compute nodes is currently a pretty big no-go on our current clusters (until we build and maintain a Kubernetes cluster). In principle, I don't see why the compute server would need root access, but I recall I could not run CoCalc Docker rootless for some reason (https://github.com/sagemathinc/cocalc-docker/issues/118 and #2170).
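For reference, Docker itself does support a rootless mode where the daemon and containers run entirely under an unprivileged user. Whether the CoCalc compute server works under it is exactly the open question here, but a minimal sketch of enabling rootless Docker on Ubuntu (following Docker's rootless-mode documentation; package names and the setup script are Docker's, not CoCalc's) looks like:

```shell
# Prerequisites for rootless Docker on Ubuntu (subordinate UID/GID mapping).
sudo apt-get install -y uidmap dbus-user-session

# Run as the unprivileged user; installs and starts a per-user dockerd.
dockerd-rootless-setuptool.sh install

# Point the docker CLI at the per-user daemon socket.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

# Sanity check: containers now run without root on the host.
docker run --rm hello-world
```

Note that rootless mode has known limitations (e.g. around some storage drivers and low ports), which may be why cocalc-docker did not work under it.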
-
VIDEO: https://youtu.be/NkNx6tx3nu0
LINK: https://github.com/sagemathinc/cocalc-howto/blob/main/onprem.md
We add an on-prem compute server running on my MacBook Pro laptop to a CoCalc (https://cocalc.com) project, and use the compute server via a Jupyter notebook and a terminal. This involves creating an Ubuntu 22.04 virtual machine with multipass and pasting a line of code into the VM to connect it to CoCalc.
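The multipass part of the steps above can be sketched as follows. The VM name and resource sizes are illustrative choices, not values from the video, and the connect command itself must be copied from the compute-server dialog in the CoCalc UI:

```shell
# Create an Ubuntu 22.04 VM (sizes are example values; adjust to taste).
multipass launch 22.04 --name cocalc-compute --cpus 4 --memory 8G --disk 40G

# Open a shell inside the VM...
multipass shell cocalc-compute

# ...then paste the one-line connect command shown by CoCalc for this
# compute server (it is unique per server, so it is not reproduced here).
```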
After using the compute server running on my laptop, I create another compute server on Lambda Cloud (https://lambdalabs.com/). This involves renting a powerful server with an H100 GPU, waiting a few minutes for it to boot, then pasting in a line of code. The compute server gets configured, starts up, and we confirm that the H100 is available. We then run "conda install -y pytorch" to install PyTorch, and use Claude 3 to run a demo involving the GPU and train a toy model.
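A quick way to confirm the GPU is visible, as done in the video after the PyTorch install (the exact commands in the video may differ; this is a standard sketch):

```shell
# Show the GPU the driver sees (should list the H100).
nvidia-smi

# Install PyTorch into the active conda environment, as in the video.
conda install -y pytorch

# Confirm PyTorch can see the CUDA device.
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```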