chore(release): cherry pick #4350 to 1.0.1 release #4490

Merged · 1 commit · Sep 12, 2020
8 changes: 7 additions & 1 deletion manifests/kustomize/README.md
@@ -8,7 +8,6 @@ To install Kubeflow Pipelines, you have several options.
- Via [GCP AI Platform UI](http://console.cloud.google.com/ai-platform/pipelines).
- Via an upcoming command-line tool.
- Via kubectl with Kustomize, as detailed here.
- Community maintains a repo [here](https://github.com/e2fyi/kubeflow-aws/tree/master/pipelines) for AWS.

## Install via Kustomize

@@ -51,6 +50,13 @@ Its storage is based on CloudSQL & GCS, which makes it better suited than the other options for production usage.

Please follow the [sample](sample/README.md) for a customized installation.

### Option-4 Install it to AWS with S3 and RDS MySQL
Its storage is based on S3 & AWS RDS MySQL, which is the more natural choice for AWS users.

Please follow the [AWS Instructions](env/aws/README.md) for installation.

Note: the community maintains a separate repo, [e2fyi/kubeflow-aws](https://github.com/e2fyi/kubeflow-aws/tree/master/pipelines), for AWS.

## Uninstall

If the installation is based on CloudSQL/GCS, after the uninstall, the data is still there,
3 changes: 3 additions & 0 deletions manifests/kustomize/env/aws/OWNERS
@@ -0,0 +1,3 @@
approvers:
- Jeffwan
- PatrickXYS
82 changes: 82 additions & 0 deletions manifests/kustomize/env/aws/README.md
@@ -0,0 +1,82 @@
# Sample installation

1. Create an EKS cluster and set up the kubectl context

Use a configuration file to simplify the EKS cluster creation process:
```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: kfworkshop
  region: us-west-2
  version: '1.17'
# If your region has multiple availability zones, you can specify 3 of them.
availabilityZones: ["us-west-2b", "us-west-2c", "us-west-2d"]

# NodeGroup holds all configuration attributes that are specific to a nodegroup
# You can have several node groups in your cluster.
nodeGroups:
  - name: cpu-nodegroup
    instanceType: m5.xlarge
    desiredCapacity: 2
    minSize: 0
    maxSize: 4
    volumeSize: 50
    # ssh:
    #   allow: true
    #   publicKeyPath: '~/.ssh/id_rsa.pub'

  # Example of GPU node group
  - name: Tesla-V100
    instanceType: p3.8xlarge
    # Make sure the availability zone here is one of the cluster availability zones.
    availabilityZones: ["us-west-2b"]
    desiredCapacity: 0
    minSize: 0
    maxSize: 4
    volumeSize: 50
    # ssh:
    #   allow: true
    #   publicKeyPath: '~/.ssh/id_rsa.pub'
```
Run this command to create the EKS cluster:
```
eksctl create cluster -f cluster.yaml
```

2. Prepare S3

Create an S3 bucket via the [Console](https://console.aws.amazon.com/s3/home), or run this command, replacing `<YOUR_S3_BUCKET_NAME>` with your preferred S3 bucket name:

```
export S3_BUCKET=<YOUR_S3_BUCKET_NAME>
export AWS_REGION=us-west-2
aws s3 mb s3://$S3_BUCKET --region $AWS_REGION
```
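Not part of the original steps, but a quick way to confirm the bucket was created (this assumes the same shell session with `$S3_BUCKET` still set, and valid AWS credentials):

```
# Should complete without an error; an empty bucket lists no objects
aws s3 ls "s3://$S3_BUCKET"
```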

3. Prepare RDS

Follow this [doc](https://www.kubeflow.org/docs/aws/rds/#deploy-amazon-rds-mysql-in-your-environment) to set up an AWS RDS instance.
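The linked doc walks through the console; as a rough CLI sketch of the same step (the identifier, instance class, and storage size below are placeholder values, not part of this PR), something like:

```
aws rds create-db-instance \
  --db-instance-identifier kubeflow-pipelines-db \
  --db-instance-class db.m5.large \
  --engine mysql \
  --allocated-storage 20 \
  --master-username <YOUR_RDS_USERNAME> \
  --master-user-password <YOUR_RDS_PASSWORD> \
  --region $AWS_REGION
```

The endpoint of the resulting instance is what goes into `dbHost` in `params.env` in the next step.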

4. Customize your values
- Edit [params.env](params.env), [secret.env](secret.env), and [minio-artifact-secret-patch.env](minio-artifact-secret-patch.env)
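For illustration only, a filled-in `params.env` might look like this (the endpoint, bucket name, and region are made-up examples, not real resources):

```
dbHost=kubeflow-pipelines-db.abcdefghijkl.us-west-2.rds.amazonaws.com

bucketName=my-kfp-artifacts
minioServiceHost=s3.amazonaws.com
minioServiceRegion=us-west-2
```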

5. Install

```
kubectl apply -k ../../cluster-scoped-resources

kubectl wait crd/applications.app.k8s.io --for condition=established --timeout=60s

kubectl apply -k ./
# If the apply above failed, e.g. because you used a wrong value, delete, fix the value, and apply again
# kubectl delete -k ./

kubectl wait applications/mypipeline -n kubeflow --for condition=Ready --timeout=1800s

kubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80
```

Now you can access the UI at `localhost:8080`.
61 changes: 61 additions & 0 deletions manifests/kustomize/env/aws/aws-configuration-patch.yaml
@@ -0,0 +1,61 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-pipeline-ui
spec:
  template:
    metadata:
      labels:
        app: ml-pipeline-ui
    spec:
      volumes:
        - name: config-volume
          configMap:
            name: ml-pipeline-ui-configmap
      containers:
        - name: ml-pipeline-ui
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: mlpipeline-minio-artifact
                  key: accesskey
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: mlpipeline-minio-artifact
                  key: secretkey

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-pipeline
spec:
  template:
    metadata:
      labels:
        app: ml-pipeline
    spec:
      containers:
        - env:
            - name: OBJECTSTORECONFIG_SECURE
              value: "true"
            - name: OBJECTSTORECONFIG_BUCKETNAME
              valueFrom:
                configMapKeyRef:
                  name: pipeline-install-config
                  key: bucketName
            - name: OBJECTSTORECONFIG_HOST
              valueFrom:
                configMapKeyRef:
                  name: pipeline-install-config
                  key: minioServiceHost
            - name: OBJECTSTORECONFIG_REGION
              valueFrom:
                configMapKeyRef:
                  name: pipeline-install-config
                  key: minioServiceRegion
            - name: OBJECTSTORECONFIG_PORT
              value: ""
          name: ml-pipeline-api-server
23 changes: 23 additions & 0 deletions manifests/kustomize/env/aws/config
@@ -0,0 +1,23 @@
{
  namespace: $(kfp-namespace),
  executorImage: gcr.io/ml-pipeline/argoexec:v2.7.5-license-compliance,
  containerRuntimeExecutor: $(kfp-container-runtime-executor),
  artifactRepository:
  {
    s3: {
      bucket: $(kfp-artifact-bucket-name),
      keyPrefix: artifacts,
      endpoint: s3.amazonaws.com,
      insecure: true,
      accessKeySecret: {
        name: mlpipeline-minio-artifact,
        key: accesskey
      },
      secretKeySecret: {
        name: mlpipeline-minio-artifact,
        key: secretkey
      }
    },
    archiveLogs: true
  }
}
33 changes: 33 additions & 0 deletions manifests/kustomize/env/aws/kustomization.yaml
@@ -0,0 +1,33 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubeflow
bases:
  - ../../env/platform-agnostic
configMapGenerator:
  - name: pipeline-install-config
    env: params.env
    behavior: merge
  - name: workflow-controller-configmap
    behavior: replace
    files:
      - config
  - name: ml-pipeline-ui-configmap
    behavior: replace
    files:
      - viewer-pod-template.json
secretGenerator:
  - name: mysql-secret
    env: secret.env
    behavior: merge
  - name: mlpipeline-minio-artifact
    env: minio-artifact-secret-patch.env
    behavior: merge
generatorOptions:
  disableNameSuffixHash: true
patchesStrategicMerge:
  - aws-configuration-patch.yaml
# Identifier for application manager to apply ownerReference.
# The ownerReference ensures the resources get garbage collected
# when application is deleted.
commonLabels:
  application-crd-id: kubeflow-pipelines
2 changes: 2 additions & 0 deletions manifests/kustomize/env/aws/minio-artifact-secret-patch.env
@@ -0,0 +1,2 @@
accesskey=YOUR_AWS_ACCESS_ID
secretkey=YOUR_AWS_SECRET_KEY
5 changes: 5 additions & 0 deletions manifests/kustomize/env/aws/params.env
@@ -0,0 +1,5 @@
dbHost=YOUR_RDS_ENDPOINT

bucketName=YOUR_S3_BUCKET_NAME
minioServiceHost=s3.amazonaws.com
minioServiceRegion=YOUR_AWS_REGION
2 changes: 2 additions & 0 deletions manifests/kustomize/env/aws/secret.env
@@ -0,0 +1,2 @@
username=YOUR_RDS_USERNAME
password=YOUR_RDS_PASSWORD
37 changes: 37 additions & 0 deletions manifests/kustomize/env/aws/viewer-pod-template.json
@@ -0,0 +1,37 @@
{
  "spec": {
    "containers": [
      {
        "env": [
          {
            "name": "AWS_ACCESS_KEY_ID",
            "valueFrom": {
              "secretKeyRef": {
                "name": "mlpipeline-minio-artifact",
                "key": "accesskey"
              }
            }
          },
          {
            "name": "AWS_SECRET_ACCESS_KEY",
            "valueFrom": {
              "secretKeyRef": {
                "name": "mlpipeline-minio-artifact",
                "key": "secretkey"
              }
            }
          },
          {
            "name": "AWS_REGION",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "pipeline-install-config",
                "key": "minioServiceRegion"
              }
            }
          }
        ]
      }
    ]
  }
}