suggest stringify method in readme and reflect changes in modal api #6

Open · wants to merge 2 commits into base: main
2 changes: 2 additions & 0 deletions README.md
@@ -224,6 +224,8 @@ save it in the same `.env` file under the name **GCS_SERVICE_ACCOUNT_INFO**.
Unfortunately we can't just copy/paste the contents - we need to "stringify" the
data first. You can do this in Python or your preferred programming language by
reading in the JSON file you saved, serializing it to a string, and printing the
result. One simple way to do this is with the popular CLI program `jq`:
`cat YOUR_JSON | jq '@json'`.
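
As a reference, here is a minimal Python sketch of the same stringification
step; it mirrors the quoted, escaped one-line output that `jq '@json'` produces
(the input file name and the exact `.env` formatting are illustrative
assumptions):

```python
import json

# Read the service-account JSON file you downloaded from GCP
# (the path below is just an example).
with open("service_account.json") as f:
    data = json.load(f)

# Serialize the parsed object back to JSON, then dump that string again so the
# result is a single quoted, escaped line suitable for a .env value.
print("GCS_SERVICE_ACCOUNT_INFO=" + json.dumps(json.dumps(data)))
```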

### Configuring `cdsapi`

2 changes: 1 addition & 1 deletion ai-models-modal/app.py
@@ -115,6 +115,6 @@ def download_model_assets():
# Set up a storage volume for sharing model outputs between processes.
# TODO: Explore adding a modal.Volume to cache model weights since it should be
# much faster for loading them at runtime.
-volume = modal.NetworkFileSystem.persisted("ai-models-cache")
+volume = modal.NetworkFileSystem.from_name("ai-models-cache", create_if_missing=True)

stub = modal.Stub(name="ai-models-for-all", image=inference_image)
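
For context, a minimal sketch of how a named `NetworkFileSystem` created with
`from_name(..., create_if_missing=True)` (the replacement for the deprecated
`persisted()` constructor) is typically attached to a function in recent Modal
releases; the mount path and the function body are illustrative assumptions,
not code from this repo:

```python
import modal

stub = modal.Stub(name="nfs-example")
volume = modal.NetworkFileSystem.from_name("ai-models-cache", create_if_missing=True)

@stub.function(network_file_systems={"/cache": volume})
def write_marker():
    # Files written under /cache persist across separate runs of the app.
    with open("/cache/marker.txt", "w") as f:
        f.write("hello from modal\n")
```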
4 changes: 3 additions & 1 deletion ai-models-modal/main.py
@@ -6,6 +6,7 @@
import shutil

import modal
+from modal import enter
from ai_models import model
from tqdm import tqdm
from tqdm.contrib.logging import logging_redirect_tqdm
@@ -294,7 +295,8 @@ def __init__(

self.use_gfs = use_gfs

-def __enter__(self):
+@enter()
+def __run_on_startup__(self):
logger.info(f" Model: {self.model_name}")
logger.info(f" Run initialization datetime: {self.model_init}")
logger.info(f" Forecast lead time: {self.lead_time}")
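
Since this change tracks Modal's move from context-manager setup (`__enter__`)
to the `@enter()` lifecycle decorator, here is a minimal, self-contained sketch
of that pattern; the class, method names, and printed output are illustrative
assumptions rather than this repo's actual code:

```python
import modal
from modal import enter

stub = modal.Stub(name="lifecycle-example")

@stub.cls()
class Forecaster:
    @enter()
    def run_on_startup(self):
        # Runs once when the container starts, replacing the old __enter__ hook.
        self.model_name = "example-model"
        print(f"Initialized {self.model_name}")

    @modal.method()
    def predict(self, lead_time: int) -> str:
        return f"{self.model_name} forecast out to {lead_time}h"
```

With this pattern, startup work (logging the run configuration, loading assets)
happens in the decorated method rather than in `__enter__`, which is what the
`__run_on_startup__` rename above reflects.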