[deployer] Update README and docs #3316

Merged
514 changes: 3 additions & 511 deletions deployer/README.md

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/howto/bill.md
@@ -25,12 +25,12 @@ that has monthly costs for all the clusters that are configured to have
[bigquery export](new-gcp-project:billing-export).

This sheet is currently manually updated. You can update it by running
`deployer generate-cost-table --output 'google-sheet'`. It will by default
`deployer generate cost-table --output 'google-sheet'`. It will by default
update the sheet to provide information for the last 12 months. You can control
the period by passing in the `start_month` and `end_month` parameters.
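
For example, a hedged sketch of an invocation restricted to a custom period (the exact flag spelling and date format are assumptions based on the `start_month`/`end_month` parameters above; check `deployer generate cost-table --help` before running):

```bash
# Assumed flag names derived from the `start_month`/`end_month` parameters;
# verify them with `deployer generate cost-table --help`
deployer generate cost-table --output 'google-sheet' --start-month 2023-01 --end-month 2023-12
```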

If you just want to take a look at the costs in the terminal, you can also run
`deployer generate-cost-table --output 'terminal'` instead.
`deployer generate cost-table --output 'terminal'` instead.

## Caveats

2 changes: 1 addition & 1 deletion docs/howto/upgrade-cluster/aws.md
@@ -52,7 +52,7 @@ cluster is unused or that the maintenance is communicated ahead of time.
git status

# generates a few new files
deployer generate-aws-cluster --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE
deployer generate dedicated-cluster aws --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE

# review the changed files
git status
4 changes: 2 additions & 2 deletions docs/hub-deployment-guide/configure-auth/cilogon.md
@@ -7,14 +7,14 @@
The steps to enable the JupyterHub CILogonOAuthenticator for a hub are similar to the ones for enabling [GitHubOAuthenticator](auth:github-orgs):

### Create a CILogon OAuth client
This can be achieved by using the `deployer cilogon-client-create` command.
This can be achieved by using the `deployer cilogon-client create` command.

The command needs to be passed the cluster and hub name for which a client id and secret will be generated, as well as the hub type and the hub domain, as specified in `cluster.yaml` (e.g. staging.2i2c.cloud).

Example script invocation that creates a CILogon OAuth client for the 2i2c dask-staging hub:

```bash
deployer cilogon-client-create 2i2c dask-staging daskhub dask-staging.2i2c.cloud
deployer cilogon-client create 2i2c dask-staging daskhub dask-staging.2i2c.cloud
```

````{note}
@@ -49,4 +49,4 @@ deployer deploy-support $CLUSTER_NAME

## Link the cluster's Prometheus server to the central Grafana

Run `deployer update-central-grafana-datasources` to register the new prometheus with the default central grafana.
Run `deployer grafana update-central-datasources` to register the new Prometheus server with the default central Grafana.
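
As a minimal sketch of that step, using the command exactly as described above (no extra arguments are assumed):

```bash
# Register the new cluster's Prometheus server as a datasource in the central Grafana
deployer grafana update-central-datasources
```
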
@@ -33,7 +33,7 @@ export CLUSTER_NAME=<cluster-name>
```

```bash
deployer new-grafana-token $CLUSTER_NAME
deployer grafana new-token $CLUSTER_NAME
```

If the command succeeded, it should have created:
@@ -58,7 +58,7 @@ This key will be used by the [`deploy-grafana-dashboards` workflow](https://gith
You can deploy the dashboards locally using the deployer:

```bash
deployer deploy-grafana-dashboards $CLUSTER_NAME
deployer grafana deploy-dashboards $CLUSTER_NAME
```

## Deploying the Grafana Dashboards from CI/CD
12 changes: 6 additions & 6 deletions docs/hub-deployment-guide/hubs/other-hub-ops/delete-hub.md
@@ -22,15 +22,15 @@ Especially if we think that users will want this information in the future (or i

### 1.2. Delete data

Delete user home directories using the [deployer `exec-homes-shell`](https://github.com/2i2c-org/infrastructure/blob/master/deployer/README.md#exec-homes-shell) option.
Delete user home directories using the `deployer exec homes` command.

```bash
export CLUSTER_NAME=<cluster-name>
export HUB_NAME=<hub-name>
```

```bash
deployer exec-homes-shell $CLUSTER_NAME $HUB_NAME
deployer exec homes $CLUSTER_NAME $HUB_NAME
```

This should get you a shell with the home directories of all the users on the given hub. Delete all user home directories with:
@@ -53,19 +53,19 @@ The naming convention followed when creating these apps is: `$CLUSTER_NAME-$HUB_

### CILogon OAuth application

Similarly, for each hub that uses CILogon, we dynamically create an OAuth [client application](https://cilogon.github.io/oa4mp/server/manuals/dynamic-client-registration.html) in CILogon using the `deployer cilogon-client-create` command.
Use the `deployer cilogon-client-delete` command to delete this CILogon client when a hub is removed:
Similarly, for each hub that uses CILogon, we dynamically create an OAuth [client application](https://cilogon.github.io/oa4mp/server/manuals/dynamic-client-registration.html) in CILogon using the `deployer cilogon-client create` command.
Use the `deployer cilogon-client delete` command to delete this CILogon client when a hub is removed:

You'll need to get all clients with:

```bash
deployer cilogon-client-get-all
deployer cilogon-client get-all
```

And then identify the client of the hub and delete it based on its id with:

```bash
deployer cilogon-client-delete --client-id cilogon:/client_id/<id> $CLUSTER_NAME $HUB_NAME
deployer cilogon-client delete --client-id cilogon:/client_id/<id> $CLUSTER_NAME $HUB_NAME
```

This will clean up some of the hub values related to auth and must be done prior to removing the hub files.
2 changes: 1 addition & 1 deletion docs/hub-deployment-guide/new-cluster/aws.md
@@ -59,7 +59,7 @@ export HUB_TYPE=<hub-type-like-basehub>
```

```bash
deployer generate-aws-cluster --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE
deployer generate dedicated-cluster aws --cluster-name=$CLUSTER_NAME --cluster-region=$CLUSTER_REGION --hub-type=$HUB_TYPE
```

This will generate the following files:
6 changes: 3 additions & 3 deletions docs/sre-guide/common-problems-solutions.md
@@ -173,7 +173,7 @@ Read more about [](cicd)

Sometimes we need to inspect the job matrices the deployer generates for correctness.
We can do this either by [inspecting the deployment plan that is posted to PRs](cicd/hub/pr-comment)
or by running the `generate-helm-upgrade-jobs` command of the deployer [locally](tutorials:setup).
or by running the `generate helm-upgrade-jobs` command of the deployer [locally](tutorials:setup).
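
For example, a hypothetical local run against a single changed file (the path is illustrative; pass whichever files your PR touches) looks like:

```bash
# Illustrative changed-file path; the command accepts a comma separated list of changed files
deployer generate helm-upgrade-jobs config/clusters/2i2c/cluster.yaml
```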

This will output the same deployment plan that is used in the PR comment, which is
a table formatted by [`rich`](https://rich.readthedocs.io). However, we sometimes
@@ -186,7 +186,7 @@ export CI=true
```

This will trigger the deployer to behave as if it is running in a CI environment.
Principally, this means that executing `generate-helm-upgrade-jobs` will write
Principally, this means that executing `generate helm-upgrade-jobs` will write
two files to your local environment. The first file is called `pr-number.txt`
and can be ignored (it is used by the workflow that posts the deployment plan
as a comment and therefore requires the PR number). The second file we set the
@@ -203,7 +203,7 @@ our JSON formatted job matrices will be written to.
Now that we're set up, we can run:

```bash
deployer generate-helm-update-jobs {comma separated list of changed files}
deployer generate helm-upgrade-jobs {comma separated list of changed files}
```

where you can either provide the list of changed files yourself or copy-paste
4 changes: 2 additions & 2 deletions docs/sre-guide/support/build-image-remotely.md
@@ -9,10 +9,10 @@ scale upload / download speeds.

## Building images remotely

1. From a clone of the `infrastructure` repository, use the `start-docker-proxy` command.
1. From a clone of the `infrastructure` repository, use the `debug start-docker-proxy` command.

```bash
deployer start-docker-proxy
deployer debug start-docker-proxy
```

This will forward your *local* computer's port `23760` to the port `2376` running
6 changes: 2 additions & 4 deletions docs/sre-guide/support/home-dir.md
@@ -10,17 +10,15 @@ Sample notebook log from non-starting pod due to a dotfile that doesn't have cor
/srv/start: line 23: exec: jupyterhub-singleuser: not found
```

The
[`exec-homes-shell`](https://github.com/2i2c-org/infrastructure/blob/master/deployer/README.md#exec-homes-shell)
subcommand of the deployer can help us here.
The `exec homes` subcommand of the deployer can help us here.

```bash
export CLUSTER_NAME=<cluster-name>
export HUB_NAME=<hub-name>
```

```bash
deployer exec-homes-shell $CLUSTER_NAME $HUB_NAME
deployer exec homes $CLUSTER_NAME $HUB_NAME
```

Will open a bash shell with all the home directories of all the users of `$HUB_NAME`
4 changes: 2 additions & 2 deletions docs/topic/access-creds/cloud-auth.md
@@ -166,13 +166,13 @@ are used to provide access to the AWS account from your terminal.
- `arn-of-the-mfa-device` can be found by visiting the 'Security Credentials' page when you're logged into the web console, after
- `code-from-token` is a 6-digit code generated by your MFA device

Alternatively, the deployer has a convenience command - `exec-aws-shell`
Alternatively, the deployer has a convenience command - `exec aws`
to simplify this, purely implementing the suggestions from
[the AWS docs](https://repost.aws/knowledge-center/authenticate-mfa-cli).
You can execute it like so:

```bash
$ deployer exec-aws-shell <aws-profile-name> <arn-of-mfa-device> <code-from-token>
$ deployer exec aws <aws-profile-name> <arn-of-mfa-device> <code-from-token>
```

where `<aws-profile-name>` must match the name of the profile in `~/.aws/credentials`