Forbidden S3 bucket in the example: amazon-sagemaker-examples/introduction_to_applying_machine_learning/gluon_recommender_system/gluon_recommender_system.ipynb #267
Comments
A temporary solution I found (see this discussion) is running the …
Hi @pilhokim. Could you confirm that you're running on a GPU instance? The MXNet context is currently set to mx.gpu(), which will fail if you're running on a CPU instance. Thanks.
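For reference, a minimal sketch of choosing the context defensively, assuming a recent MXNet 1.x release where mx.context.num_gpus() is available (the notebook itself hard-codes mx.gpu()):

import mxnet as mx

# Fall back to the CPU context when no CUDA-capable GPU is visible,
# so the same code also runs on a CPU-only instance.
ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()
print("Using context:", ctx)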
You are right, @djarpin! Thanks a lot for your advice. I forgot that this code runs on the local machine rather than on deployed training instances. I tested …
In the example notebook "amazon-sagemaker-examples/introduction_to_applying_machine_learning/gluon_recommender_system/gluon_recommender_system.ipynb", I couldn't copy the data file located at "s3://amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Video_Download_v1_00.tsv.gz"; the bucket returns a Forbidden error. Please check the code below in that notebook and replace it with a valid S3 address:
aws s3 cp s3://amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Video_Download_v1_00.tsv.gz /tmp/recsys/
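For anyone reproducing this, here is a minimal boto3 sketch (not part of the notebook) that checks access to the object and surfaces the error instead of failing inside the aws s3 cp call; the bucket and key are the ones the notebook references:

import os
import boto3
from botocore.exceptions import ClientError

bucket = "amazon-reviews-pds"
key = "tsv/amazon_reviews_us_Digital_Video_Download_v1_00.tsv.gz"

os.makedirs("/tmp/recsys", exist_ok=True)
s3 = boto3.client("s3")
try:
    # head_object fails fast with a 403 when the bucket denies access,
    # which reproduces the "Forbidden" error described in this issue.
    s3.head_object(Bucket=bucket, Key=key)
    s3.download_file(bucket, key, "/tmp/recsys/" + os.path.basename(key))
except ClientError as err:
    print("Cannot read s3://%s/%s: %s" % (bucket, key, err.response["Error"]["Code"]))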