MAINT: Migrate from EC2 backend to GA backend (#285)
* MAINT: Migrate from EC2 backend to GA backend

* update machine details to github actions
mmcky authored Aug 15, 2023
1 parent 94ee2de commit c47131e
Showing 5 changed files with 55 additions and 92 deletions.
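
In short, every workflow drops the `deploy-runner` job that launched a `p3.2xlarge` EC2 GPU instance via CML, along with the `self-hosted`/`cml-gpu` labels and the CUDA container, and instead runs directly on GitHub-hosted `ubuntu-latest` runners with conda provisioned by `conda-incubator/setup-miniconda@v2`. A condensed sketch of the new job shape (job name and build command are illustrative; the full per-file diffs follow):

jobs:
  build-lectures:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Anaconda
        uses: conda-incubator/setup-miniconda@v2
        with:
          auto-update-conda: true
          miniconda-version: 'latest'
          python-version: "3.10"
          environment-file: environment.yml
          activate-environment: quantecon
      - name: Build HTML
        shell: bash -l {0}                 # login shell so the quantecon env is active
        run: jupyter-book build lectures/  # illustrative; actual build command truncated in the diffs
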
39 changes: 9 additions & 30 deletions .github/workflows/cache.yml
@@ -4,40 +4,19 @@ on:
     branches:
       - main
 jobs:
-  deploy-runner:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: iterative/setup-cml@v1
-      - uses: actions/checkout@v3
-        with:
-          ref: ${{ github.event.pull_request.head.sha }}
-      - name: Deploy runner on EC2
-        env:
-          REPO_TOKEN: ${{ secrets.QUANTECON_SERVICES_PAT }}
-          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
-          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-        run: |
-          cml runner launch \
-            --cloud=aws \
-            --cloud-region=us-west-2 \
-            --cloud-type=p3.2xlarge \
-            --labels=cml-gpu \
-            --cloud-hdd-size=40
   cache:
-    needs: deploy-runner
-    runs-on: [self-hosted, cml-gpu]
-    container:
-      image: docker://mmcky/quantecon-lecture-python:cuda-12.1.0-anaconda-2023-03-py310
-      options: --gpus all
+    runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+      - name: Setup Anaconda
+        uses: conda-incubator/setup-miniconda@v2
         with:
-          ref: ${{ github.event.pull_request.head.sha }}
-      # Install Hardware Dependant Libraries
-      - name: Check nvidia drivers
-        shell: bash -l {0}
-        run: |
-          nvidia-smi
+          auto-update-conda: true
+          auto-activate-base: true
+          miniconda-version: 'latest'
+          python-version: "3.10"
+          environment-file: environment.yml
+          activate-environment: quantecon
       - name: Build HTML
         shell: bash -l {0}
         run: |
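
Note that with `setup-miniconda` the `quantecon` environment is only active for steps that run under a login shell, which is why the build steps keep `shell: bash -l {0}`. A minimal (hypothetical) verification step under that assumption:

      - name: Check conda environment
        shell: bash -l {0}
        run: |
          which python   # should resolve inside the quantecon environment
          conda list     # matches the "Display Conda Environment Versions" step used elsewhere
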
50 changes: 21 additions & 29 deletions .github/workflows/ci.yml
@@ -1,40 +1,32 @@
 name: Build Project [using jupyter-book]
 on: [pull_request]
 jobs:
-  deploy-runner:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: iterative/setup-cml@v1
-      - uses: actions/checkout@v3
-        with:
-          ref: ${{ github.event.pull_request.head.sha }}
-      - name: Deploy runner on EC2
-        env:
-          REPO_TOKEN: ${{ secrets.QUANTECON_SERVICES_PAT }}
-          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
-          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-        run: |
-          cml runner launch \
-            --cloud=aws \
-            --cloud-region=us-west-2 \
-            --cloud-type=p3.2xlarge \
-            --labels=cml-gpu \
-            --cloud-hdd-size=40
   preview:
-    needs: deploy-runner
-    runs-on: [self-hosted, cml-gpu]
-    container:
-      image: docker://mmcky/quantecon-lecture-python:cuda-12.1.0-anaconda-2023-03-py310
-      options: --gpus all
+    runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
+      - name: Setup Anaconda
+        uses: conda-incubator/setup-miniconda@v2
         with:
-          ref: ${{ github.event.pull_request.head.sha }}
-      # Install Hardware Dependant Libraries
-      - name: Check nvidia drivers
-        shell: bash -l {0}
+          auto-update-conda: true
+          auto-activate-base: true
+          miniconda-version: 'latest'
+          python-version: "3.10"
+          environment-file: environment.yml
+          activate-environment: quantecon
+      - name: Install latex dependencies
         run: |
-          nvidia-smi
+          sudo apt-get -qq update
+          sudo apt-get install -y \
+            texlive-latex-recommended \
+            texlive-latex-extra \
+            texlive-fonts-recommended \
+            texlive-fonts-extra \
+            texlive-xetex \
+            latexmk \
+            xindy \
+            dvipng \
+            cm-super
       - name: Display Conda Environment Versions
         shell: bash -l {0}
         run: conda list
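
The TeX packages installed above are what `jupyter-book` needs for LaTeX/PDF output on a plain `ubuntu-latest` runner (the previous CUDA container image presumably bundled them). A hypothetical downstream step, since the actual build commands are truncated from this diff, might be:

      - name: Build PDF via LaTeX
        shell: bash -l {0}
        run: jupyter-book build lectures/ --builder pdflatex  # assumes the lectures/ source directory
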
4 changes: 2 additions & 2 deletions .github/workflows/linkcheck.yml
@@ -13,10 +13,10 @@ jobs:
       fail-fast: false
       matrix:
         os: ["ubuntu-latest"]
-        python-version: ["3.9"]
+        python-version: ["3.10"]
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v3
       - name: Setup Anaconda
         uses: conda-incubator/setup-miniconda@v2
         with:
50 changes: 22 additions & 28 deletions .github/workflows/publish.yml
@@ -4,40 +4,34 @@ on:
     tags:
       - 'publish*'
 jobs:
-  deploy-runner:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: iterative/setup-cml@v1
-      - uses: actions/checkout@v3
-        with:
-          ref: ${{ github.event.pull_request.head.sha }}
-      - name: Deploy runner on EC2
-        env:
-          REPO_TOKEN: ${{ secrets.QUANTECON_SERVICES_PAT }}
-          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
-          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-        run: |
-          cml runner launch \
-            --cloud=aws \
-            --cloud-region=us-west-2 \
-            --cloud-type=p3.2xlarge \
-            --labels=cml-gpu \
-            --cloud-hdd-size=40
   publish:
     if: github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags')
-    needs: deploy-runner
-    runs-on: [self-hosted, cml-gpu]
-    container:
-      image: docker://mmcky/quantecon-lecture-python:cuda-12.1.0-anaconda-2023-03-py310
-      options: --gpus all
+    runs-on: ubuntu-latest
     steps:
       - name: Checkout
         uses: actions/checkout@v3
-      # Install Hardware Dependant Libraries
-      - name: Check nvidia drivers
-        shell: bash -l {0}
+      - name: Setup Anaconda
+        uses: conda-incubator/setup-miniconda@v2
+        with:
+          auto-update-conda: true
+          auto-activate-base: true
+          miniconda-version: 'latest'
+          python-version: "3.10"
+          environment-file: environment.yml
+          activate-environment: quantecon
+      - name: Install latex dependencies
         run: |
-          nvidia-smi
+          sudo apt-get -qq update
+          sudo apt-get install -y \
+            texlive-latex-recommended \
+            texlive-latex-extra \
+            texlive-fonts-recommended \
+            texlive-fonts-extra \
+            texlive-xetex \
+            latexmk \
+            xindy \
+            dvipng \
+            cm-super
       - name: Display Conda Environment Versions
         shell: bash -l {0}
         run: conda list
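
Since `publish` still only fires on pushed tags matching `publish*` (reinforced by the `if:` guard), a release after this migration is triggered the same way as before, e.g. (tag name hypothetical):

      git tag publish-example
      git push origin publish-example
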
4 changes: 1 addition & 3 deletions lectures/status.md
@@ -18,6 +18,4 @@ This table contains the latest execution statistics.
 
 (status:machine-details)=
 
-These lectures are built on `linux` instances through `github actions` and `amazon web services (aws)` to
-enable access to a `gpu`. These lectures are built on a [p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/)
-that has access to `8 vcpu's`, a `V100 NVIDIA Tesla GPU`, and `61 Gb` of memory.
+These lectures are built on `linux` instances through `github actions`.
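
With the AWS-specific hardware wording removed, the specs are simply whatever GitHub-hosted `ubuntu-latest` runners provide. If the machine details ever need to be reported again, a small (hypothetical) workflow step could capture them:

      - name: Report runner hardware
        run: |
          lscpu    # CPU model and core count
          free -h  # memory
          df -h /  # disk space on the root volume
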
