[Core] Read working_dir zip from private s3 #34708
Comments
Hmm, this seems to be a pretty reasonable request. cc @architkulkarni for thoughts?
I agree, it seems reasonable. It may take some care to get the API right.
This is a similar issue to #31122.
Hello @rkooo567 & @architkulkarni, I can work on this issue!
Requirements:
That sounds great! @rynewang can shepherd the contribution!
The same mechanism is used by … We can either type the …, or we can add a special field in runtime_env and let different runtime env plugins read them. I personally prefer the former, but I'd like to hear from you. cc @jjyao @rkooo567
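For illustration, the "special field in runtime_env" option might look like the sketch below. This is an assumption for discussion only: the `s3_config` key and its contents are hypothetical, not an existing Ray API; only `working_dir` is a real runtime_env field.

```python
# Hypothetical sketch: a dedicated runtime_env field that runtime env
# plugins could read to configure their S3 client. The "s3_config" key
# and its sub-keys are assumptions, not an existing Ray API.
runtime_env = {
    "working_dir": "s3://my-bucket/my-job.zip",  # real runtime_env field
    "s3_config": {                               # hypothetical field
        "endpoint_url": "https://s3.internal.example.com",
        "region_name": "us-east-1",
    },
}
```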
I believe … if we can verify that this workflow can succeed by setting the relevant variables in the …
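If "the relevant variables" refers to boto3's standard credential environment variables, the verification might look like the sketch below; the values are placeholders, and whether these variables alone are enough for a custom endpoint is the open question in this thread.

```shell
# Standard AWS credential variables that boto3 reads from the environment.
# Values are placeholders for illustration only.
export AWS_ACCESS_KEY_ID="EXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="EXAMPLESECRET"
```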
Description
Currently, boto3's endpoint_url cannot be injected from ~/.aws/config or environment variables here.
So I cannot submit a job whose working_dir is an s3:// URI pointing at my private S3-compatible storage.
I see that packaging.py creates the boto3 client very simply:
Can Ray expose kwargs here so I can inject endpoint_url, access key ID, secret access key, or other configuration?
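As a sketch of what exposing those kwargs might look like: the helper name `build_s3_client_kwargs` is hypothetical, while `endpoint_url`, `aws_access_key_id`, and `aws_secret_access_key` are real `boto3.client` options.

```python
# Hypothetical helper: collect only the options the user actually
# supplied, so the dict can be splatted into boto3.client("s3", **kwargs)
# instead of the bare client construction the issue describes.
def build_s3_client_kwargs(endpoint_url=None, aws_access_key_id=None,
                           aws_secret_access_key=None):
    kwargs = {}
    if endpoint_url is not None:
        kwargs["endpoint_url"] = endpoint_url
    if aws_access_key_id is not None:
        kwargs["aws_access_key_id"] = aws_access_key_id
    if aws_secret_access_key is not None:
        kwargs["aws_secret_access_key"] = aws_secret_access_key
    return kwargs

# Usage (requires boto3; endpoint URL is a placeholder):
#   import boto3
#   client = boto3.client("s3", **build_s3_client_kwargs(
#       endpoint_url="https://s3.internal.example.com"))
```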
Use case
No response