diff --git a/f9dab4cf7cc9c72c3013ef6b8b0f7eb2/quicklab.md b/0524a8b6e000303e9d4dd17fc0fb6647/quicklab.md similarity index 74% rename from f9dab4cf7cc9c72c3013ef6b8b0f7eb2/quicklab.md rename to 0524a8b6e000303e9d4dd17fc0fb6647/quicklab.md index 33917d89c5c8..16f8e4071ba5 100644 --- a/f9dab4cf7cc9c72c3013ef6b8b0f7eb2/quicklab.md +++ b/0524a8b6e000303e9d4dd17fc0fb6647/quicklab.md @@ -18,64 +18,48 @@ 6. Now click on **New Bundle** button in **Product information** section -7. Select **openshift4upi** bundle. A new form loads - you can keep all the values as they are (you can ignore the warning on top as well, since this is the first install attempt of Openshift on that cluster): +7. Select **openshift4upi** bundle. A new form loads. **Opt in to the `htpasswd` credentials provider.** (You can ignore the warning on top as well, since this is the first install attempt of OpenShift on that cluster): ![Select a bundle](../assets/images/quicklab/bundle_select.png) 8. Wait for OCP4 to install. After successful installation you should see a cluster history log like this: ![Cluster log after OCP4 install](../assets/images/quicklab/cluster_log_2.png) -9. Use the link and credentials from the **Cluster Information** section to access your cluster. +9. Use the link and credentials from the **Cluster Information** section to access your cluster. Verify it contains login information for both the `kube:admin` and `quicklab` users. ![Cluster information](../assets/images/quicklab/cluster_information.png) -10. Login as the `kubeadmin`, take the value from "Hosts" and port 6443.\ +10. Log in as `kube:admin`, using the value from "Hosts" and port 6443. For example: - ```sh - oc login upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443 - ``` +```sh +oc login upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443 +``` ## Install Argo CD on your cluster -1. kube:admin is not supported in user api, therefore you have to create additional user. Simplest way is to deploy an Oauth via Htpasswd: +1. `kube:admin` is not supported in the user API, which is why we opted in to the `htpasswd` provider during the bundle install. -2. Create a htpasswd config file and deploy it to OpenShift: +2. Log in as the `quicklab` user using the `htpasswd` provider in the web console to create the OpenShift user, then log out. - ```sh - $ htpasswd -nb username password > oc.htpasswd - $ oc create secret generic htpass-secret --from-file=htpasswd=oc.htpasswd -n openshift-config - $ cat <404: Not Found | Operate First
ODH Logo

Not Found

You just hit a route that doesn't exist... the sadness.

\ No newline at end of file + }404: Not Found | Operate First
ODH Logo

Not Found

You just hit a route that doesn't exist... the sadness.

\ No newline at end of file diff --git a/404/index.html b/404/index.html index b92969839b0a..0243f3d08a43 100644 --- a/404/index.html +++ b/404/index.html @@ -14,4 +14,4 @@ - }404: Not Found | Operate First
ODH Logo

Not Found

You just hit a route that doesn't exist... the sadness.

\ No newline at end of file + }404: Not Found | Operate First
ODH Logo

Not Found

You just hit a route that doesn't exist... the sadness.

\ No newline at end of file diff --git a/blueprints/blueprint/README/index.html b/blueprints/blueprint/README/index.html index 2cc4f6333e02..3c9c91e48a23 100644 --- a/blueprints/blueprint/README/index.html +++ b/blueprints/blueprint/README/index.html @@ -14,7 +14,7 @@ - }
ODH Logo

Operate First Blueprint

This repository containers documentation of the Operate First Blueprint, it covers topics like architecture (incl. + }

ODH Logo

Operate First Blueprint

This repository contains documentation of the Operate First Blueprint. It covers topics like architecture (incl. logical diagrams, documents of decisions taken) and deployment (schema or physical diagrams).

Architectural decisions

We keep track of architectural decisions using lightweight architectural decision records. More information on the format used is available at https://adr.github.io/madr/. General information about architectural decision records is available at https://adr.github.io/.

Architectural decisions

  • ADR-0000 - Use Markdown Architectural Decision Records
  • ADR-0001 - Use GNU GPL as license
\ No newline at end of file diff --git a/blueprints/blueprint/docs/adr/0000-use-markdown-architectural-decision-records/index.html b/blueprints/blueprint/docs/adr/0000-use-markdown-architectural-decision-records/index.html index bcc08434d478..400c538cdaec 100644 --- a/blueprints/blueprint/docs/adr/0000-use-markdown-architectural-decision-records/index.html +++ b/blueprints/blueprint/docs/adr/0000-use-markdown-architectural-decision-records/index.html @@ -14,4 +14,4 @@ - }
ODH Logo

Use Markdown Architectural Decision Records

Context and Problem Statement

We want to record architectural decisions made in Operate First. Which format and structure should these records follow?

Considered Options

Decision Outcome

Chosen option: “MADR 2.1.2”, because

  • Implicit assumptions should be made explicit.

    Design documentation is important to enable people understanding the decisions later on.

    See also A rational design process: How and why to fake it.

  • The MADR format is lean and fits our development style.

  • The MADR structure is comprehensible and facilitates usage & maintenance.

  • The MADR project is vivid.

  • Version 2.1.2 is the latest one available when starting to document ADRs.

\ No newline at end of file + }
ODH Logo

Use Markdown Architectural Decision Records

Context and Problem Statement

We want to record architectural decisions made in Operate First. Which format and structure should these records follow?

Considered Options

Decision Outcome

Chosen option: “MADR 2.1.2”, because

  • Implicit assumptions should be made explicit.

Design documentation is important to enable people to understand the decisions later on.

    See also A rational design process: How and why to fake it.

  • The MADR format is lean and fits our development style.

  • The MADR structure is comprehensible and facilitates usage & maintenance.

  • The MADR project is vivid.

  • Version 2.1.2 is the latest one available when starting to document ADRs.

\ No newline at end of file diff --git a/blueprints/blueprint/docs/adr/0001-use-gpl3-as-license/index.html b/blueprints/blueprint/docs/adr/0001-use-gpl3-as-license/index.html index 09181d93ac02..2299e3e31235 100644 --- a/blueprints/blueprint/docs/adr/0001-use-gpl3-as-license/index.html +++ b/blueprints/blueprint/docs/adr/0001-use-gpl3-as-license/index.html @@ -14,7 +14,7 @@ - }
ODH Logo

Use GNU GPL as license

Everything needs to be licensed, otherwise the default copyright laws apply. + }

ODH Logo

Use GNU GPL as license

Everything needs to be licensed; otherwise, the default copyright laws apply. For instance, in Germany that means users may not alter anything without explicitly asking for permission. For more information, see https://help.github.com/articles/licensing-a-repository/.

We want all source code related to Operate First to be usable without any hassle and as freely as possible, so that users can simply run it and enjoy the four freedoms.

Considered Options

Decision Outcome

Chosen option: “GNU GPL”, because this license supports a strong copyleft model.

\ No newline at end of file diff --git a/blueprints/blueprint/docs/adr/0003-feature-selection-policy/index.html b/blueprints/blueprint/docs/adr/0003-feature-selection-policy/index.html index 46ca96af8b25..40fdea3a0cf1 100644 --- a/blueprints/blueprint/docs/adr/0003-feature-selection-policy/index.html +++ b/blueprints/blueprint/docs/adr/0003-feature-selection-policy/index.html @@ -14,7 +14,7 @@ - }
ODH Logo

Users of an Operate First deployment might need different features than provided by upstream project’s release

  • Status: approved
  • Date: 2020-Nov-09

Context and Problem Statement

Open Data Hub has release v0.8.0, some of the Elyra features required by Thoth Station experiments are + }

ODH Logo

Users of an Operate First deployment might need different features than provided by upstream project’s release

  • Status: approved
  • Date: 2020-Nov-09

Context and Problem Statement

Open Data Hub has released v0.8.0, but some of the Elyra features required by Thoth Station experiments are not part of this ODH release. This would require updating certain components to the HEAD of the main branch of the ODH upstream project.

Decision Drivers

  • Operational complexity of an environment diverging from an upstream release
  • Users' need for more current software components

Considered Options

  • stay with upstream release
  • deploy specific versions of components

Decision Outcome

Chosen option: “deploy specific versions of components”, because this will give the most efficient deployment to Operate First operators and users.

Positive Consequences

  • operators can gain a maximum of experience, enabling feedback on component versions that might have not been tested diff --git a/blueprints/blueprint/docs/adr/0004-argocd-apps-of-apps-structure/index.html b/blueprints/blueprint/docs/adr/0004-argocd-apps-of-apps-structure/index.html index fb36a2b69346..4a97416ab1b3 100644 --- a/blueprints/blueprint/docs/adr/0004-argocd-apps-of-apps-structure/index.html +++ b/blueprints/blueprint/docs/adr/0004-argocd-apps-of-apps-structure/index.html @@ -14,6 +14,6 @@ - }
    ODH Logo

    ArgoCD Apps of Apps Structure

    Context and Problem Statement

    ArgoCD Applications manifests are a declarative way to manage ArgoCD Applications in git. Often times these are manifests that are stored alongside ArgoCD deployment manifests.

    This has been fine in the past since we controlled the deployment of ArgoCD and had merge access to the repo where the applications were stored. So if we wanted to onboard a new app, we make a PR with the application manifest and someone on our team would merge it.

    But there can be a situation where another team, like cluster-admins or infra, store the ArgoCD deployments in their own repo.

    If we applied our current practice, we’d store our app manifests in this external repo. The problem is that we may not have merge access to this repo, and it wouldn’t really make much sense for people who manage the infrastructure to also handle PR’s that don’t pertain directly to cluster management.

    Considered Options

    1) Just have All ArgoCD Manifests in one repo and give Operate-First team members access to infra repo so they can review and merge ArgoCD Applications. + }

    ODH Logo

    ArgoCD Apps of Apps Structure

    Context and Problem Statement

ArgoCD Application manifests are a declarative way to manage ArgoCD Applications in git. Oftentimes these manifests are stored alongside ArgoCD deployment manifests.

This has been fine in the past since we controlled the deployment of ArgoCD and had merge access to the repo where the applications were stored. So if we wanted to onboard a new app, we would make a PR with the application manifest and someone on our team would merge it.

But there can be a situation where another team, like cluster-admins or infra, stores the ArgoCD deployments in their own repo.

    If we applied our current practice, we’d store our app manifests in this external repo. The problem is that we may not have merge access to this repo, and it wouldn’t really make much sense for people who manage the infrastructure to also handle PR’s that don’t pertain directly to cluster management.

    Considered Options

1) Just have all ArgoCD Manifests in one repo and give Operate-First team members access to the infra repo so they can review and merge ArgoCD Applications.
2) Have separate teams handle Applications for their Projects in their own repos; in this way tracking Applications is not a concern for Infra/Operate-First, but rather for the individual team belonging to an ArgoCD project.
3) Have a separate repo that Operate-First manages, and have an ArgoCD App of Apps that manages this repo.

    Decision Outcome

Chosen Option (3). Problems with (1) have been outlined above. The issue with (3) is that there is no way to effectively ensure that teams’ Applications belong to their team’s ArgoCD project (this is further described below).

    The Proposed Solution is captured by this diagram:

    image

    The idea here is that all our operate-first/team-1/team-2/…/team-n ArgoCD Applications would go in the opf-argocd-apps repo. Then we’d have an App of Apps i.e. the OPF Parent App that manages all these apps. This way we can add new applications declaratively to ArgoCD without having to make PR’s to the Infra Repo (e.g. moc-cnv-sandbox). Operate-first admins would manage the opf-argocd-app repo. Any other ArgoCD Applications that manage cluster resources like clusterrolebindings or operator subscriptions etc. can remain in the infra repo since that’s a concern for cluster admins. We would direct any user that wants to use ArgoCD to manage their apps to add their ArgoCD Applications to the opf-argocd-apps repo.

    Positive Consequences

    • Infrastructure/cluster-admins are not bombarded with PR’s for ArgoCD App onboarding
    • OperateFirst maintainers can handle the PR’s unhindered
    • The “OPF-ArgoCD-Apps” repo can be leveraged by CRC/Quicklab/Other OCP Clusters to quickly set up ArgoCD ODH/Thoth/etc. Applications.

    Negative Consequences

The biggest concern here is that there is no way to automatically enforce that Applications in the opf-argocd-apps repo belong to the Operate First ArgoCD project (see diagram). Why is this a problem? Because we use ArgoCD projects to restrict what types of resources applications in that project can deploy. For example, ArgoCD apps in the Infra Apps project in the diagram can deploy clusterrolebindings, operators, etc. So while the OPF Parent App cannot deploy clusterrolebindings because it belongs to the Operate First ArgoCD project, it could deploy another ArgoCD application that belongs to Infra Apps, and that ArgoCD app could deploy clusterrolebindings.

    You can read more about this issue here. The individual there used admission hooks to get around this but I don’t think we want to go there just yet. My suggestion is we begin by enforcing this at the PR level, and transition to maybe catching this in CI until there’s a proper solution upstream.

    \ No newline at end of file diff --git a/blueprints/blueprint/docs/adr/template/index.html b/blueprints/blueprint/docs/adr/template/index.html index 7f6bc050ef49..9252ea0e8ab9 100644 --- a/blueprints/blueprint/docs/adr/template/index.html +++ b/blueprints/blueprint/docs/adr/template/index.html @@ -14,4 +14,4 @@ - }
    ODH Logo

    [short title of solved problem and solution]

    • Status: [proposed | rejected | accepted | deprecated | … | superseded by ADR-0005]
    • Deciders: [list everyone involved in the decision]
    • Date: [YYYY-MM-DD when the decision was last updated]

    Technical Story: [description | ticket/issue URL]

    Context and Problem Statement

    [Describe the context and problem statement, e.g., in free form using two to three sentences. You may want to articulate the problem in form of a question.]

    Decision Drivers

    • [driver 1, e.g., a force, facing concern, …]
    • [driver 2, e.g., a force, facing concern, …]

    Considered Options

    • [option 1]
    • [option 2]
    • [option 3]

    Decision Outcome

    Chosen option: ”[option 1]”, because [justification. e.g., only option, which meets k.o. criterion decision driver | which resolves force force | … | comes out best (see below)].

    Positive Consequences

    • [e.g., improvement of quality attribute satisfaction, follow-up decisions required, …]

    Negative Consequences

    • [e.g., compromising quality attribute, follow-up decisions required, …]

    Pros and Cons of the Options

    [option 1]

    [example | description | pointer to more information | …]

    • Good, because [argument a]
    • Good, because [argument b]
    • Bad, because [argument c]

    [option 2]

    [example | description | pointer to more information | …]

    • Good, because [argument a]
    • Good, because [argument b]
    • Bad, because [argument c]

    [option 3]

    [example | description | pointer to more information | …]

    • Good, because [argument a]
    • Good, because [argument b]
    • Bad, because [argument c]

    Links

    • [Link type][Link to ADR]
    \ No newline at end of file + }
    ODH Logo

    [short title of solved problem and solution]

    • Status: [proposed | rejected | accepted | deprecated | … | superseded by ADR-0005]
    • Deciders: [list everyone involved in the decision]
    • Date: [YYYY-MM-DD when the decision was last updated]

    Technical Story: [description | ticket/issue URL]

    Context and Problem Statement

    [Describe the context and problem statement, e.g., in free form using two to three sentences. You may want to articulate the problem in form of a question.]

    Decision Drivers

    • [driver 1, e.g., a force, facing concern, …]
    • [driver 2, e.g., a force, facing concern, …]

    Considered Options

    • [option 1]
    • [option 2]
    • [option 3]

    Decision Outcome

    Chosen option: ”[option 1]”, because [justification. e.g., only option, which meets k.o. criterion decision driver | which resolves force force | … | comes out best (see below)].

    Positive Consequences

    • [e.g., improvement of quality attribute satisfaction, follow-up decisions required, …]

    Negative Consequences

    • [e.g., compromising quality attribute, follow-up decisions required, …]

    Pros and Cons of the Options

    [option 1]

    [example | description | pointer to more information | …]

    • Good, because [argument a]
    • Good, because [argument b]
    • Bad, because [argument c]

    [option 2]

    [example | description | pointer to more information | …]

    • Good, because [argument a]
    • Good, because [argument b]
    • Bad, because [argument c]

    [option 3]

    [example | description | pointer to more information | …]

    • Good, because [argument a]
    • Good, because [argument b]
    • Bad, because [argument c]

    Links

    • [Link type][Link to ADR]
    \ No newline at end of file diff --git a/blueprints/blueprint/environments/operate_first/index.html b/blueprints/blueprint/environments/operate_first/index.html index 11901a76e71b..5093ca7a61a5 100644 --- a/blueprints/blueprint/environments/operate_first/index.html +++ b/blueprints/blueprint/environments/operate_first/index.html @@ -14,7 +14,7 @@ - }
    ODH Logo

    Operate First Environment

    + }

    ODH Logo

    Operate First Environment

    drawing diff --git a/blueprints/continuous-delivery/README/index.html b/blueprints/continuous-delivery/README/index.html index cab9cdb2d6a7..7424228f8d10 100644 --- a/blueprints/continuous-delivery/README/index.html +++ b/blueprints/continuous-delivery/README/index.html @@ -14,7 +14,7 @@ - }

    ODH Logo

    Continous Delivery

    This repository contains an opinionated reference architecture to setup, manage and operate a continous delivery + }

    ODH Logo

Continuous Delivery

This repository contains an opinionated reference architecture to set up, manage and operate a continuous delivery pipeline. The continuous delivery pipeline not only consists of the Tekton/OpenShift Pipelines parts, but also includes supporting Cyborg for maintaining the source code (creating releases/tags, updating dependencies, …).

    Prerequisites

    Kustomize 3.8.1+ SOPS 3.4.0+ diff --git a/blueprints/continuous-delivery/docs/continuous_delivery/index.html b/blueprints/continuous-delivery/docs/continuous_delivery/index.html index 586549c763cc..e8710833a16d 100644 --- a/blueprints/continuous-delivery/docs/continuous_delivery/index.html +++ b/blueprints/continuous-delivery/docs/continuous_delivery/index.html @@ -14,7 +14,7 @@ - }

    ODH Logo

    (Opinionated) Continuous Delivery

    With “Operate First: Continous Delivery” we seek to describe an opinionated continous delivery concept, show its + }

    ODH Logo

    (Opinionated) Continuous Delivery

With “Operate First: Continuous Delivery” we seek to describe an opinionated continuous delivery concept and show its implementation and operation. We want to improve the capacity of cloud native developers and operators to deliver software artifacts faster and with less friction.

We focus on the OpenShift Container Platform and its capabilities to run open hybrid cloud workloads.

We use OpenShift Pipelines (or the corresponding Tekton release) to deploy a Continuous Integration and Continuous Delivery system that:

    • delivers Python Module Artifacts to pypi.org
    • delivers Container Images to quay.io
    • introduces changes to GitOps repositories on github.com

    Any pipeline or task declaration is published as open-source software, and operational documentation is published on diff --git a/blueprints/continuous-delivery/docs/setup_cd_pipeline/index.html b/blueprints/continuous-delivery/docs/setup_cd_pipeline/index.html index a8a0e1f74ec0..7e46dfc6500b 100644 --- a/blueprints/continuous-delivery/docs/setup_cd_pipeline/index.html +++ b/blueprints/continuous-delivery/docs/setup_cd_pipeline/index.html @@ -14,4 +14,4 @@ - }

    \ No newline at end of file + } \ No newline at end of file diff --git a/blueprints/continuous-delivery/docs/setup_ci_pipeline/index.html b/blueprints/continuous-delivery/docs/setup_ci_pipeline/index.html index a4b3cdc58aa5..1405a8f1bf7f 100644 --- a/blueprints/continuous-delivery/docs/setup_ci_pipeline/index.html +++ b/blueprints/continuous-delivery/docs/setup_ci_pipeline/index.html @@ -14,4 +14,4 @@ - } \ No newline at end of file + } \ No newline at end of file diff --git a/blueprints/continuous-delivery/docs/setup_source_operations/index.html b/blueprints/continuous-delivery/docs/setup_source_operations/index.html index 40d6eeadf565..747ab58ad977 100644 --- a/blueprints/continuous-delivery/docs/setup_source_operations/index.html +++ b/blueprints/continuous-delivery/docs/setup_source_operations/index.html @@ -14,4 +14,4 @@ - } \ No newline at end of file + } \ No newline at end of file diff --git a/blueprints/index.html b/blueprints/index.html index da3afed361e8..239acf92b954 100644 --- a/blueprints/index.html +++ b/blueprints/index.html @@ -14,4 +14,4 @@ - }Blueprints | Operate First \ No newline at end of file + }Blueprints | Operate First \ No newline at end of file diff --git a/data-science/categorical-encoding/CHANGELOG/index.html b/data-science/categorical-encoding/CHANGELOG/index.html index 302a99a9658a..f5061fff4174 100644 --- a/data-science/categorical-encoding/CHANGELOG/index.html +++ b/data-science/categorical-encoding/CHANGELOG/index.html @@ -14,4 +14,4 @@ - }
    ODH Logo

    Release 1.0.0 (2020-10-06T12:58:00)

    Features

    • Remove gitmodules
    • Move version to a variable
    • Remove zuul config
    • Pull issue templates
    • :truck: include aicoe-ci configuration file
    • Add relevant files
    \ No newline at end of file + }
    ODH Logo

    Release 1.0.0 (2020-10-06T12:58:00)

    Features

    • Remove gitmodules
    • Move version to a variable
    • Remove zuul config
    • Pull issue templates
    • :truck: include aicoe-ci configuration file
    • Add relevant files
    \ No newline at end of file diff --git a/data-science/categorical-encoding/README/index.html b/data-science/categorical-encoding/README/index.html index 934608c410a4..cebf737f328c 100644 --- a/data-science/categorical-encoding/README/index.html +++ b/data-science/categorical-encoding/README/index.html @@ -14,7 +14,7 @@ - }
    ODH Logo

    Categorical Encoding

    Unsupervised learning problems such as anomaly detection and clustering are challenging due to the lack of labels required for training embeddings and validating the results. Therefore, it becomes essential to use the right encoding schemes, dimensionality reduction methods, and models. In these types of learning problems, manipulating numerical variables is straightforward as they can be easily plugged into statistical methods. For example, it is easy to find mean and standard deviations in the height of a population.

    Categorical variables need to be handled carefully as they have to be converted to numbers. Ordinal categorical variables have an inherent ordering from one extreme to the other, for e.g., sentiment can be very negative, negative, neutral, positive, and very positive. We can use simple integer encoding or contrast encoding for these variables.

    + }

    ODH Logo

    Categorical Encoding

    Unsupervised learning problems such as anomaly detection and clustering are challenging due to the lack of labels required for training embeddings and validating the results. Therefore, it becomes essential to use the right encoding schemes, dimensionality reduction methods, and models. In these types of learning problems, manipulating numerical variables is straightforward as they can be easily plugged into statistical methods. For example, it is easy to find mean and standard deviations in the height of a population.

Categorical variables need to be handled carefully as they have to be converted to numbers. Ordinal categorical variables have an inherent ordering from one extreme to the other, e.g., sentiment can be very negative, negative, neutral, positive, and very positive. We can use simple integer encoding or contrast encoding for these variables.
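To make the distinction concrete, here is a minimal sketch (using pandas, with illustrative column and category names rather than the project's data) that integer-encodes an ordinal sentiment variable and one-hot encodes a nominal weather variable:

```python
import pandas as pd

# Toy data: one ordinal column (sentiment) and one nominal column (weather).
df = pd.DataFrame({
    "sentiment": ["very negative", "neutral", "positive", "very positive"],
    "weather": ["rainy", "sunny", "snowy", "sunny"],
})

# Ordinal variable: integer encoding preserves the inherent ordering.
sentiment_order = {
    "very negative": 0, "negative": 1, "neutral": 2, "positive": 3, "very positive": 4,
}
df["sentiment_encoded"] = df["sentiment"].map(sentiment_order)

# Nominal variable: one-hot encoding avoids imposing a false ordering.
weather_onehot = pd.get_dummies(df["weather"], prefix="weather")

print(df[["sentiment", "sentiment_encoded"]])
print(weather_onehot)
```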

    encoders diff --git a/data-science/categorical-encoding/docs/blog/blog/index.html b/data-science/categorical-encoding/docs/blog/blog/index.html index 22717a3e35ed..b9ad200529de 100644 --- a/data-science/categorical-encoding/docs/blog/blog/index.html +++ b/data-science/categorical-encoding/docs/blog/blog/index.html @@ -14,7 +14,7 @@ - }Categorical Encoding | Operate First

    ODH Logo

    Categorical Encoding

    Authors: Shrey Anand, AICoE

    Date Created: 10th September 2020

    Date Updated: 10th September 2020

    Tags: Categorical encoding, unsupervised learning, word embeddings, tabular data, nominal categorical variables, explainability, decision making

    Introduction

    Unsupervised learning problems such as anomaly detection and clustering are challenging due to the lack of labels required for training embeddings and validating the results. Therefore, it becomes essential to use the right encoding schemes, dimensionality reduction methods, and models. In these types of learning problems, manipulating numerical variables is straightforward as they can be easily plugged into statistical methods. For example, it is easy to find mean and standard deviations in the height of a population.

    Categorical variables need to be handled carefully as they have to be converted to numbers. Ordinal categorical variables have an inherent ordering from one extreme to the other, for e.g., sentiment can be very negative, negative, neutral, positive, and very positive. We can use simple integer encoding or contrast encoding for these variables.

    + }Categorical Encoding | Operate First

    ODH Logo

    Categorical Encoding

    Authors: Shrey Anand, AICoE

    Date Created: 10th September 2020

    Date Updated: 10th September 2020

    Tags: Categorical encoding, unsupervised learning, word embeddings, tabular data, nominal categorical variables, explainability, decision making

    Introduction

    Unsupervised learning problems such as anomaly detection and clustering are challenging due to the lack of labels required for training embeddings and validating the results. Therefore, it becomes essential to use the right encoding schemes, dimensionality reduction methods, and models. In these types of learning problems, manipulating numerical variables is straightforward as they can be easily plugged into statistical methods. For example, it is easy to find mean and standard deviations in the height of a population.

Categorical variables need to be handled carefully as they have to be converted to numbers. Ordinal categorical variables have an inherent ordering from one extreme to the other, e.g., sentiment can be very negative, negative, neutral, positive, and very positive. We can use simple integer encoding or contrast encoding for these variables.

    Approach diff --git a/data-science/categorical-encoding/manifests/README/index.html b/data-science/categorical-encoding/manifests/README/index.html index d84336ced01b..44334ed193c1 100644 --- a/data-science/categorical-encoding/manifests/README/index.html +++ b/data-science/categorical-encoding/manifests/README/index.html @@ -14,7 +14,7 @@ - }

    ODH Logo

    Automated Argo workflows

    If you’d like to automate your Jupyter notebooks using Argo, please use these kustomize manifests. If you follow the steps bellow, your application is fully set and ready to be deployed via Argo CD.

    For a detailed guide on how to adjust your notebooks etc, please consult documentation

    1. Replace all <VARIABLE> mentions with your project name, respective url or any fitting value

    2. Define your automation run structure in the templates section of cron-workflow.yaml

    3. Set up sops:

      1. Install go from your distribution repository

      2. Setup GOPATH

        echo 'export GOPATH="$HOME/.go"' >> ~/.bashrc
        +      }
        ODH Logo

        Automated Argo workflows

If you’d like to automate your Jupyter notebooks using Argo, please use these kustomize manifests. If you follow the steps below, your application is fully set up and ready to be deployed via Argo CD.

For a detailed guide on how to adjust your notebooks etc., please consult the documentation

        1. Replace all <VARIABLE> mentions with your project name, respective url or any fitting value

        2. Define your automation run structure in the templates section of cron-workflow.yaml

        3. Set up sops:

          1. Install go from your distribution repository

          2. Setup GOPATH

            echo 'export GOPATH="$HOME/.go"' >> ~/.bashrc
             echo 'export PATH="${GOPATH//://bin:}/bin:$PATH"' >> ~/.bashrc
             source  ~/.bashrc
          3. Install sops from your distribution repository if possible or use sops GitHub release binaries

          4. Import AICoE-SRE’s public key EFDB9AFBD18936D9AB6B2EECBD2C73FF891FBC7E:

            gpg --keyserver keyserver.ubuntu.com --recv EFDB9AFBD18936D9AB6B2EECBD2C73FF891FBC7E
          5. Import tcoufal’s (A76372D361282028A99F9A47590B857E0288997C) and mhild’s 04DAFCD9470A962A2F272984E5EB0DA32F3372AC keys (so they can help)

            gpg --keyserver keyserver.ubuntu.com --recv A76372D361282028A99F9A47590B857E0288997C  # tcoufal
             gpg --keyserver keyserver.ubuntu.com --recv 04DAFCD9470A962A2F272984E5EB0DA32F3372AC  # mhild
  6. If you’d like to be able to build the manifest on your own as well, please list your GPG key in the .sops.yaml file, pgp section (add it to the comma separated list). With your key present there, you can later generate the full manifests using kustomize yourself (ksops has to be installed; please follow the ksops guide).

        4. Create a secret and encrypt it with sops:

          # If you're not already in the `manifest` folder, cd here
          diff --git a/data-science/categorical-encoding/notebooks/demo/demo/index.html b/data-science/categorical-encoding/notebooks/demo/demo/index.html
          index 4a8bd1264894..51ca1f257b5d 100644
          --- a/data-science/categorical-encoding/notebooks/demo/demo/index.html
          +++ b/data-science/categorical-encoding/notebooks/demo/demo/index.html
          @@ -14,7 +14,7 @@
                 
                 
                 
          -      }
          ODH Logo

          Categorical Encoding

          + }
          ODH Logo

          Categorical Encoding

In this notebook, we focus on encoding schemes for nominal categorical variables. These variables are particularly challenging because there is no inherent ordering in the variables, e.g., weather can be rainy, sunny, snowy, etc. Encoding to numbers is challenging because we don't want to distort the distances between the levels of the variables. In other words, if we encode rainy as 0, sunny as 1, and snowy as 2, then the model will interpret rainy to be closer to sunny than to snowy, which is not true. A common approach is to use a one-hot encoding scheme. The method works well because all the one-hot vectors are orthogonal to each other, preserving the true distances. However, when the cardinality of the variables increases, one-hot encoding explodes the computation. For example, if we have 1000 different types of weather conditions, then one-hot encoding would give a 1000-dimensional vector. To improve performance, we may choose to reduce dimensions using various forms of matrix decomposition techniques. However, since we cannot go back to the original dimensional space, we lose explainability in this process. Therefore, we search for encoders that optimally balance the trade-off between performance and explainability.
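As a rough illustration of this trade-off (a sketch on synthetic data, not the notebook's dataset), one-hot encoding a high-cardinality nominal column and then compressing it with a matrix decomposition such as PCA shrinks the dimensionality, but the resulting components no longer map back to individual categories:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# A nominal column with 1000 distinct levels, e.g. fine-grained weather conditions.
levels = [f"condition_{i}" for i in range(1000)]
samples = pd.Series(rng.choice(levels, size=5000), name="weather")

onehot = pd.get_dummies(samples)  # one indicator column per observed level (close to 1000 columns)
reduced = PCA(n_components=20).fit_transform(onehot)  # compressed to 20 columns

print(onehot.shape, reduced.shape)
# The 20 PCA components are linear mixtures of all the indicator columns,
# so the reduced features are compact but no longer directly explainable.
```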

          [1]
          import warnings
           warnings.filterwarnings('ignore')
           import os, sys
          diff --git a/data-science/configuration-files-analysis/README/index.html b/data-science/configuration-files-analysis/README/index.html
          index 81b1ef5ba39c..36a2c870b834 100644
          --- a/data-science/configuration-files-analysis/README/index.html
          +++ b/data-science/configuration-files-analysis/README/index.html
          @@ -14,7 +14,7 @@
                 
                 
                 
          -      }
          ODH Logo

          Configuration file analysis

          Overview

          Software systems have become more flexible and feature-rich. For example, the configuration file for MySQL has more than 200 configuration entries with different subentries. As a result, configuring these systems is a complicated task and frequently causes configuration errors. Currently, in most cases, misconfigurations are detected by manually specified rules. However, this process is tedious and not scalable. In this project, we propose data-driven methods to detect misconfigurations by discovering frequently occurring patterns in configuration files.

          Misconfiguration detection framework

          The misconfiguration detection framework adopted in this project is inspired by the research paper “Synthesizing Configuration File Specifications with Association Rule Learning”. Association rule learning is a method to discover frequently occurring patterns or associations between variables in a dataset.

          + }

          ODH Logo

          Configuration file analysis

          Overview

          Software systems have become more flexible and feature-rich. For example, the configuration file for MySQL has more than 200 configuration entries with different subentries. As a result, configuring these systems is a complicated task and frequently causes configuration errors. Currently, in most cases, misconfigurations are detected by manually specified rules. However, this process is tedious and not scalable. In this project, we propose data-driven methods to detect misconfigurations by discovering frequently occurring patterns in configuration files.

          Misconfiguration detection framework

          The misconfiguration detection framework adopted in this project is inspired by the research paper “Synthesizing Configuration File Specifications with Association Rule Learning”. Association rule learning is a method to discover frequently occurring patterns or associations between variables in a dataset.

          image alt text diff --git a/data-science/configuration-files-analysis/docs/blog/configuration-file-analysis-blog/index.html b/data-science/configuration-files-analysis/docs/blog/configuration-file-analysis-blog/index.html index b598dc97bceb..9b86bdfce0a4 100644 --- a/data-science/configuration-files-analysis/docs/blog/configuration-file-analysis-blog/index.html +++ b/data-science/configuration-files-analysis/docs/blog/configuration-file-analysis-blog/index.html @@ -14,7 +14,7 @@ - }Configuration file analysis | Operate First

          ODH Logo

          Configuration file analysis

          Author(s): Sanket Badhe, Shrey Anand, Marcel Hild

          Date Created: 10/06/2020

          Date Updated: 10/06/2020

          Tags: configuration files, similarity index, misconfiguration detection, unsupervised learning, association Rule Learning

          Abstract

          Software systems have become more flexible and feature-rich. For example, the configuration file for MySQL has more than 200 configuration entries with different subentries. As a result, configuring these systems is a complicated task and frequently causes configuration errors. Currently, in most cases, misconfigurations are detected by manually specified rules. However, this process is tedious and not scalable. In this project, we propose data-driven methods to detect misconfigurations by discovering frequently occurring patterns in configuration files.

          Introduction

          Configuration errors are one of the major underlying causes of modern software system failures [1]. In 2017, AT&T’s 911 service went down for 5 hours because of a system configuration change [2]. About 12600 unique callers were not able to reach 911 during that period. In another similar incident, Facebook and Instagram went down because of a change that affected facebook’s configuration systems [3]. These critical system failures are ubiquitous - In one empirical study, researchers found that the percentage of system failure caused by configuration errors is higher than the percentage of failure resulting from bugs, 30% and 20% respectively [4].

          Some of the configuration files are written by experts and customized by users such as tuned files, while others are completely configured by end-users. When writing configuration files, users usually take existing files and modify them with little knowledge of the system. The non-expert user can then easily introduce errors. Even worse, the original file may already be corrupted, and the errors are propagated further. In this blog, we explored misconfiguration detection in MySQL configuration files using data-driven methods.

          Misconfiguration detection framework

          The misconfiguration detection framework adopted in this project is inspired by the research paper “Synthesizing Configuration File Specifications with Association Rule Learning” [5]. Association rule learning is a method to discover frequently occurring patterns or associations between variables in a dataset. In association rule learning, support and confidence are two metrics widely used to filter the proposed rules. Support is the percentage of times that the keywords in the proposed rule have been seen in the training configuration files. Confidence is the percentage of times the proposed rule has held true over the training configuration files.

          + }Configuration file analysis | Operate First

          ODH Logo

          Configuration file analysis

          Author(s): Sanket Badhe, Shrey Anand, Marcel Hild

          Date Created: 10/06/2020

          Date Updated: 10/06/2020

          Tags: configuration files, similarity index, misconfiguration detection, unsupervised learning, association Rule Learning

          Abstract

          Software systems have become more flexible and feature-rich. For example, the configuration file for MySQL has more than 200 configuration entries with different subentries. As a result, configuring these systems is a complicated task and frequently causes configuration errors. Currently, in most cases, misconfigurations are detected by manually specified rules. However, this process is tedious and not scalable. In this project, we propose data-driven methods to detect misconfigurations by discovering frequently occurring patterns in configuration files.

          Introduction

Configuration errors are one of the major underlying causes of modern software system failures [1]. In 2017, AT&T’s 911 service went down for 5 hours because of a system configuration change [2]. About 12600 unique callers were not able to reach 911 during that period. In another similar incident, Facebook and Instagram went down because of a change that affected Facebook’s configuration systems [3]. These critical system failures are ubiquitous: in one empirical study, researchers found that the percentage of system failures caused by configuration errors is higher than the percentage of failures resulting from bugs, 30% and 20% respectively [4].

          Some of the configuration files are written by experts and customized by users such as tuned files, while others are completely configured by end-users. When writing configuration files, users usually take existing files and modify them with little knowledge of the system. The non-expert user can then easily introduce errors. Even worse, the original file may already be corrupted, and the errors are propagated further. In this blog, we explored misconfiguration detection in MySQL configuration files using data-driven methods.

          Misconfiguration detection framework

          The misconfiguration detection framework adopted in this project is inspired by the research paper “Synthesizing Configuration File Specifications with Association Rule Learning” [5]. Association rule learning is a method to discover frequently occurring patterns or associations between variables in a dataset. In association rule learning, support and confidence are two metrics widely used to filter the proposed rules. Support is the percentage of times that the keywords in the proposed rule have been seen in the training configuration files. Confidence is the percentage of times the proposed rule has held true over the training configuration files.
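To make the two metrics concrete, here is a minimal sketch (toy MySQL-style keywords and values invented for illustration, not the project's data or code) that counts support and confidence for one candidate rule over a handful of parsed configuration files:

```python
# Each parsed config file is represented as a dict of keyword -> value.
configs = [
    {"query_cache_type": "1", "query_cache_size": "64M", "max_connections": "500"},
    {"query_cache_type": "1", "query_cache_size": "32M"},
    {"query_cache_type": "0"},
    {"max_connections": "100"},
]

# Candidate rule: "if query_cache_type is set, then query_cache_size is also set".
antecedent, consequent = "query_cache_type", "query_cache_size"

with_antecedent = [c for c in configs if antecedent in c]
with_both = [c for c in with_antecedent if consequent in c]

# Support: how often the rule's keywords occur together in the training files.
support = len(with_both) / len(configs)
# Confidence: how often the rule holds in the files where it applies.
confidence = len(with_both) / len(with_antecedent)

print(f"support={support:.2f}, confidence={confidence:.2f}")  # support=0.50, confidence=0.67
```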

          framework diff --git a/data-science/configuration-files-analysis/manifests/README/index.html b/data-science/configuration-files-analysis/manifests/README/index.html index 245a703f6c63..97ad70ca9588 100644 --- a/data-science/configuration-files-analysis/manifests/README/index.html +++ b/data-science/configuration-files-analysis/manifests/README/index.html @@ -14,7 +14,7 @@ - }

          ODH Logo

          Automated Argo workflows

          If you’d like to automate your Jupyter notebooks using Argo, please use these kustomize manifests. If you follow the steps bellow, your application is fully set and ready to be deployed via Argo CD.

          For a detailed guide on how to adjust your notebooks etc, please consult documentation

          1. Replace all <VARIABLE> mentions with your project name, respective url or any fitting value

          2. Define your automation run structure in the templates section of cron-workflow.yaml

          3. Set up sops:

            1. Install go from your distribution repository

            2. Setup GOPATH

              echo 'export GOPATH="$HOME/.go"' >> ~/.bashrc
              +      }
              ODH Logo

              Automated Argo workflows

If you’d like to automate your Jupyter notebooks using Argo, please use these kustomize manifests. If you follow the steps below, your application is fully set up and ready to be deployed via Argo CD.

For a detailed guide on how to adjust your notebooks etc., please consult the documentation

              1. Replace all <VARIABLE> mentions with your project name, respective url or any fitting value

              2. Define your automation run structure in the templates section of cron-workflow.yaml

              3. Set up sops:

                1. Install go from your distribution repository

                2. Setup GOPATH

                  echo 'export GOPATH="$HOME/.go"' >> ~/.bashrc
                   echo 'export PATH="${GOPATH//://bin:}/bin:$PATH"' >> ~/.bashrc
                   source  ~/.bashrc
                3. Install sops from your distribution repository if possible or use sops GitHub release binaries

                4. Import AICoE-SRE’s public key EFDB9AFBD18936D9AB6B2EECBD2C73FF891FBC7E:

                  gpg --keyserver keyserver.ubuntu.com --recv EFDB9AFBD18936D9AB6B2EECBD2C73FF891FBC7E
                5. Import tcoufal’s (A76372D361282028A99F9A47590B857E0288997C) and mhild’s 04DAFCD9470A962A2F272984E5EB0DA32F3372AC keys (so they can help)

                  gpg --keyserver keyserver.ubuntu.com --recv A76372D361282028A99F9A47590B857E0288997C  # tcoufal
                   gpg --keyserver keyserver.ubuntu.com --recv 04DAFCD9470A962A2F272984E5EB0DA32F3372AC  # mhild
  6. If you’d like to be able to build the manifest on your own as well, please list your GPG key in the .sops.yaml file, pgp section (add it to the comma separated list). With your key present there, you can later generate the full manifests using kustomize yourself (ksops has to be installed; please follow the ksops guide).

              4. Create a secret and encrypt it with sops:

                # If you're not already in the `manifest` folder, cd here
                diff --git a/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_data_type_errors/index.html b/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_data_type_errors/index.html
                index f94cd8888f47..b9dd6730c0ed 100644
                --- a/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_data_type_errors/index.html
                +++ b/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_data_type_errors/index.html
                @@ -14,7 +14,7 @@
                       
                       
                       
                -      }
                ODH Logo

                Misconfiguration detection framework

                + }
                ODH Logo

                Misconfiguration detection framework

                The misconfiguration detection framework adopted in this project is inspired by the research paper 'Synthesizing Configuration File Specifications with Association Rule Learning'. Association rule learning is a method to discover frequently occurring patterns or associations between variables in a dataset. In association rule learning, support and confidence are two metrics widely used to filter the proposed rules. Support is the percentage of times that the keywords in the proposed rule have been seen in the training configuration files. Confidence is the percentage of times the proposed rule has held true over the training configuration files.

                [1]
                from IPython.display import Image
                 Image('images/framework.png')

As you can see in the above image, the misconfiguration detection framework has two important modules: translator and learner.

                diff --git a/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_spelling_errors/index.html b/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_spelling_errors/index.html index b36906ae65bf..e4aa49f85409 100644 --- a/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_spelling_errors/index.html +++ b/data-science/configuration-files-analysis/notebooks/Misconfiguration_detection_framework_for_spelling_errors/index.html @@ -14,7 +14,7 @@ - }
                ODH Logo

                Misconfiguration detection framework

                + }
                ODH Logo

                Misconfiguration detection framework

                The misconfiguration detection framework adopted in this project is inspired by the research paper 'Synthesizing Configuration File Specifications with Association Rule Learning'. Association rule learning is a method to discover frequently occurring patterns or associations between variables in a dataset. In association rule learning, support and confidence are two metrics widely used to filter the proposed rules. Support is the percentage of times that the keywords in the proposed rule have been seen in the training configuration files. Confidence is the percentage of times the proposed rule has held true over the training configuration files.

                [1]
                from IPython.display import Image
                 Image('images/framework.png')

The misconfiguration detection framework has two important modules: translator and learner.
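As a rough, illustrative sketch of the translator idea only (the function below and its typing rules are assumptions for illustration, not the notebook's actual implementation), the translator can turn raw keyword/value lines into typed tuples that the learner then mines for rules:

```python
import re

def translate(config_text):
    """Turn raw `keyword = value` lines into (keyword, value, type) tuples."""
    records = []
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";", "[")):  # skip blanks, comments, section headers
            continue
        if "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        if value.lower() in ("on", "off", "true", "false"):
            value_type = "boolean"
        elif re.fullmatch(r"\d+", value):
            value_type = "integer"
        elif re.fullmatch(r"\d+[kKmMgG]", value):
            value_type = "size"
        else:
            value_type = "string"
        records.append((key, value, value_type))
    return records

sample = """
[mysqld]
max_connections = 500
query_cache_size = 64M
skip_name_resolve = ON
"""
print(translate(sample))
# [('max_connections', '500', 'integer'), ('query_cache_size', '64M', 'size'),
#  ('skip_name_resolve', 'ON', 'boolean')]
```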

                diff --git a/data-science/data-science-workflows/README/index.html b/data-science/data-science-workflows/README/index.html index 636986e92047..cb71a8a2d4df 100644 --- a/data-science/data-science-workflows/README/index.html +++ b/data-science/data-science-workflows/README/index.html @@ -14,4 +14,4 @@ - }
                ODH Logo

                data-science-workflows

                Start Here: AI Ops DS Project

                Please use the following outline to get started with a new AI Ops Data Science Project:

                1. Review the project workflow document here

                2. Create a new description using this template doc . See example here .

                3. Put the doc into a subfolder of this shared directory

                4. Copy this project board to a new organization level project board including automation. See Copying a project board

                5. Request a Ceph bucket here and add your data to it

                6. Create a CookieCutter formatted data science project from this template repo. Follow these instructions to create a new repo from this template.

                7. Create a private GitHub repo in this org

                GitHub workflow

                1. Create an issue for every task you plan to work on
                2. Add the issue to the New column of the corresponding project board
                3. Let somebody from the team review the issue and refine until it’s clear what should be done and what defines done
                  1. provide some sort acceptance criteria
                  2. break the issue into smaller pieces if it can’t be done in one sprint
                4. Move the issue to the To Do column
                5. Once you start work on the issue move to the In Progress column and assign it to yourself
                6. Create a PR for the issue and reference the issue in the PR description
                7. Let somebody from the team review the PR - never merge your own PRs
                \ No newline at end of file + }
                ODH Logo

                data-science-workflows

                Start Here: AI Ops DS Project

                Please use the following outline to get started with a new AI Ops Data Science Project:

                1. Review the project workflow document here

2. Create a new description using this template doc. See example here.

                3. Put the doc into a subfolder of this shared directory

                4. Copy this project board to a new organization level project board including automation. See Copying a project board

                5. Request a Ceph bucket here and add your data to it

                6. Create a CookieCutter formatted data science project from this template repo. Follow these instructions to create a new repo from this template.

                7. Create a private GitHub repo in this org

                GitHub workflow

                1. Create an issue for every task you plan to work on
                2. Add the issue to the New column of the corresponding project board
                3. Let somebody from the team review the issue and refine until it’s clear what should be done and what defines done
  1. provide some sort of acceptance criteria
                  2. break the issue into smaller pieces if it can’t be done in one sprint
                4. Move the issue to the To Do column
                5. Once you start work on the issue move to the In Progress column and assign it to yourself
                6. Create a PR for the issue and reference the issue in the PR description
                7. Let somebody from the team review the PR - never merge your own PRs
                \ No newline at end of file diff --git a/data-science/data-science-workflows/Thoth-bots/index.html b/data-science/data-science-workflows/Thoth-bots/index.html new file mode 100644 index 000000000000..25b6426c33e7 --- /dev/null +++ b/data-science/data-science-workflows/Thoth-bots/index.html @@ -0,0 +1,52 @@ +
                ODH Logo

                Instructions on how to set up various Thoth bots in your project

                Kebechet

                • Kebechet is the bot that you can use to automatically update your project dependencies.

                • Kebechet can be configured using a yaml configuration file (.thoth.yaml) in the root of your repo.

                  host: khemenu.thoth-station.ninja
                  +tls_verify: false
                  +requirements_format: pipenv
                  +
                  +runtime_environments:
                  +  - name: rhel:8
                  +    operating_system:
                  +      name: rhel
                  +      version: "8"
                  +    python_version: "3.6"
                  +    recommendation_type: latest
                  +
                  +managers:
                  +  - name: pipfile-requirements
                  +  - name: update
                  +    configuration:
                  +      labels: [bot]
                  +  - name: info
                  +  - name: version
                  +    configuration:
                  +      maintainers:
                  +        - goern   # Update this list of project maintainers
                  +        - fridex
                  +      assignees:
                  +        - sesheta
                  +      labels: [bot]
                  +      changelog_file: true

                Zuul (Sesheta)

                • You can use the zuul bot to set up automatic testing and merging for your PRs.

                • Zuul can be configured using a yaml configuration file (.zuul.yaml)

                  - project:
                  +    check:
                  +      jobs:
                  +        - "noop"
                  +    gate:
                  +      jobs:
                  +        - "noop"
                  +    kebechet-auto-gate:
                  +      jobs:
                  +        - "noop"
                • You can add different types of jobs:

  • thoth-coala job - It uses Coala for code linting; it can be configured using a .coafile in the root of your repo.
                  • thoth-pytest job - It uses the pytest module to run tests in your repo.
                • Zuul will not merge any PRs for which any of the specified jobs have failed.

                • If there are no jobs specified in the zuul config (only noops), zuul will merge any PR as long as it has been approved by an authorized reviewer.

                \ No newline at end of file diff --git a/data-science/data-science-workflows/docs/aiops-projects/index.html b/data-science/data-science-workflows/docs/aiops-projects/index.html index 0edb7309272c..b5be5a518faf 100644 --- a/data-science/data-science-workflows/docs/aiops-projects/index.html +++ b/data-science/data-science-workflows/docs/aiops-projects/index.html @@ -14,4 +14,4 @@ - }
                ODH Logo

                AI Ops Projects

                This document contains a list of active projects within the AI Ops Team

1. AI-CoE SRE : The goal of this project is to outline and practice a common ‘bleeding edge technology’ approach for all AI CoE endeavors to operate their software components and services. All services target OpenShift as the primary platform, but still extend to the full stack, from underlying hardware up to the applications running on top of a service, e.g. supporting a custom AI service on Seldon provided by DataHub on PSI on OpenStack.

                2. AI-Enablement Initiative : At AICoE, we have been actively involved in collaborating with teams within Red Hat to advance different products, services and operations using AI. In order to enable and/or educate teams and get engineers or stakeholders more involved in the projects, and in order to get them to be able to effectively use the solutions, we are working on a way to streamline the process in which we make the engagements and measure the impact of each project.

                3. Auto FAQ : The OpenShift product management team has reached out to the AI CoE regarding developing a tool that could generate and continuously update an FAQ based on the content in our mailing lists. The stated goal of this project is to, “use Machine Learning technique(s) to auto-generate user-visible FAQs and keep them updated.” Which could be framed as a question answering systems or a text generator. This is being done in order to reduce the amount of time Openshift PM’s spend answering similar or duplicate questions.

                4. Ceph Data Drive Failure : Recently, the Ceph team started collecting user data to create their own dataset. Since this data comes from actual Ceph clusters, the model can be made more accurate by re-training on this dataset. But since the data is not labelled, i.e. it does not have information on whether or not a hard drive failed, training our supervised model is not straightforward. The goal of this project is to explore whether, given the unlabeled data collected from Ceph users, it is possible to perform the same analysis as we did for Backblaze data, by implementing a heuristic or an active learning approach to generate labels.

                5. CJA Topic Modeling : The Voice of Customer team has been focused on developing a process to systematically track customer sentiment for a larger portfolio of our customers and products in order to know our customers better. The VoC team collaborated with AICoE to scale this process by using AI techniques to gather meaningful insights from customer data. In order to analyze customer feedback for themes we used topic analysis techniques and were able to detect key themes and topics in customer verbatim that need attention. By tying the key themes and topics to customer satisfaction scores and customer sentiment values, we can get a fair understanding of customer’s pain points and help one better improve one’s products and services.

                6. Cloud Price List Analysis : Most customers of Red Hat use various cloud services like Azure, AWS and many others for different tasks. These cloud providing companies keep changing their prices time to time. It would be really helpful to the customer to understand how the prices are changing and take appropriate measures to best manage the cost. This project aims to come up with solutions that will help the Cost Management team to make wise decisions on how cloud services should be managed with time.

                7. Data Science Workflows : AI Ops team has been working on developing a more structured process around how we manage, execute and deliver on our data science projects, especially those where we collaborate with other Red Hat teams. Having a common framework that we can all start to build from as the team grows and continues to take on more data science projects, it will be hugely beneficial to have an agreed upon and documented process like this in place. And more important than just the existence of some documentation, is that we actually use these tools and find that they provide us with some value. Meaning, that we should keep updating and evolving this process to suit our needs.

                8. Insights Configuration Files Analysis : Red Hat Insights is a tool that provides analytics for Red Hat systems. It collects the system data to find vulnerabilities through manually written rules. The systems data is collected periodically and stored in a warehouse for data analysis. The data comprises hardware and software description, configurations, and logs of the systems. In this project, the focus is on analysis of configuration files. Some of these files are written by experts and customized by users such as the tuned files. Others are completely configured by end users such as the sssd configuration files. This project aims to develop data-driven methods to analyze these configuration files and detect misconfiguration in these files.

                9. Insights Drift Analysis Baselines : Drift Analysis application enables users to compare system configuration of one system to other systems or baselines in the cloud management services inventory. Baselines are configurations (set of name / value facts) that can be defined from scratch, as a copy of an existing system configuration, or as a copy of an existing baseline. We propose a method to recommend baseline configurations automatically to the users. They can then use the recommendation or tweak it further to compare their systems with the baselines. This approach of recommending baselines by utilizing the knowledge of other systems in the account would save time for users by identifying and recommending baselines. This would assist in standardizing RHEL configurations and in improving RHEL configuration management and operations.

                10. Insights Invocation Hints : Given a timestamp of insight archive uploads, deduce how systems checkin and how they were initially registered.

                11. Insights SAP Analysis : In this project, we analyze SAP instances on user systems through data collected from Insights. We create a superset dashboard that illustrates the topology of SAP workloads running on RHEL systems.

                12. OCP Alert Prediction : If a customer’s OpenShift cluster goes down, it can have a significant impact on their business. Since there are a variety of reasons why an OpenShift cluster might fail, finding and fixing the issue that the cluster suffers from is not always trivial. However, if we can predict in advance whether a cluster will run into a given issue, then we may be able to fix it before it fails or before it severely impacts the customer. Issues in a cluster are often defined by, or closely related to, the alerts that it fires. So predicting alerts can be a step towards predicting the underlying issue. Thus, the goal of this project is to predict whether a cluster will fire a given alert within the next hour.

                13. OCP4 Anomaly Detection : OCP4 deployments can suffer from a number of different issues and bugs. It can be tedious for an engineer to inspect and diagnose each deployment individually, which in turn can adversely affect customer experience. In this project, we work on the following two initiatives to address this problem.

                  • Anomaly Detection: In this approach, we try to identify issues before they occur, or before they significantly impact customers. To do so, we find deployments that behave “anomalously” and try to explain this behaviour.
                  • Diagnosis Discovery: In this approach, we try to identify deployments that exhibit similar “symptoms” (issues), and determine exactly what makes these deployments similar to one another. The support engineer can then use this information to determine the “diagnosis” of the issues, and apply the same or similar fix to all the deployments.
                14. Openshift SME Mailing List Analysis : The Openshift-SME mailing list contains many discussions about the issues occurring with OpenShift deployments on a monthly basis and suggestions for how to address the issues. This project aims to help the Openshift product management team bring a more data driven approach to their planning process by performing text analysis on the openshift-sme mailing list and gathering insights into the trends in the email conversations.

                15. Prometheus-api-client python : A python library to make querying prometheus data simpler and also convert metric data into a more Data Science suitable format of a pandas dataframe.

                16. Sentiment Analysis : Red Hat has a variety of text based artifacts coming from sources starting from partner and customer engagements to documentation and communication logs. These text based artifacts are valuable and can be used to generate business insights and inform decisions if appropriately mined. The goal of this project is to allow other teams across Red Hat to have a tool at their disposal allowing them to analyze their text data and make informed decisions based on the insights gained from them.

                17. Sync Pipelines : Data ingress pipelines for DataHub via Argo pipelines.

                \ No newline at end of file + }
                ODH Logo

                AI Ops Projects

                This document contains a list of active projects within the AI Ops Team

                1. AI-CoE SRE : The goal of this project is to outline and practice a common ‘bleeding edge technology’ approach for all AI CoE endeavors to operate their software components and services. All services target OpenShift as the primary platform, but still extending to the full stack - from underlying hardware up to the applications running on top of a service. E.g. supporting custom AI service on Seldon provided by DataHub on PSI on OpenStack.

                2. AI-Enablement Initiative : At AICoE, we have been actively involved in collaborating with teams within Red Hat to advance different products, services and operations using AI. In order to enable and/or educate teams and get engineers or stakeholders more involved in the projects, and in order to get them to be able to effectively use the solutions, we are working on a way to streamline the process in which we make the engagements and measure the impact of each project.

3. Auto FAQ : The OpenShift product management team has reached out to the AI CoE regarding developing a tool that could generate and continuously update an FAQ based on the content in our mailing lists. The stated goal of this project is to “use Machine Learning technique(s) to auto-generate user-visible FAQs and keep them updated”, which could be framed as a question answering system or a text generator. This is being done in order to reduce the amount of time OpenShift PMs spend answering similar or duplicate questions.

                4. Ceph Data Drive Failure : Recently, the Ceph team started collecting user data to create their own dataset. Since this data comes from actual Ceph clusters, the model can be made more accurate by re-training on this dataset. But since the data is not labelled, i.e. it does not have information on whether or not a hard drive failed, training our supervised model is not straightforward. The goal of this project is to explore whether, given the unlabeled data collected from Ceph users, it is possible to perform the same analysis as we did for Backblaze data, by implementing a heuristic or an active learning approach to generate labels.

                5. CJA Topic Modeling : The Voice of Customer team has been focused on developing a process to systematically track customer sentiment for a larger portfolio of our customers and products in order to know our customers better. The VoC team collaborated with AICoE to scale this process by using AI techniques to gather meaningful insights from customer data. In order to analyze customer feedback for themes we used topic analysis techniques and were able to detect key themes and topics in customer verbatim that need attention. By tying the key themes and topics to customer satisfaction scores and customer sentiment values, we can get a fair understanding of customer’s pain points and help one better improve one’s products and services.

6. Cloud Price List Analysis : Most customers of Red Hat use various cloud services like Azure, AWS and many others for different tasks. These cloud providers keep changing their prices from time to time. It would be really helpful for customers to understand how the prices are changing and to take appropriate measures to best manage their costs. This project aims to come up with solutions that will help the Cost Management team make wise decisions on how cloud services should be managed over time.

7. Data Science Workflows : The AI Ops team has been working on developing a more structured process around how we manage, execute and deliver on our data science projects, especially those where we collaborate with other Red Hat teams. As the team grows and continues to take on more data science projects, it will be hugely beneficial to have an agreed-upon and documented process like this in place, along with a common framework that we can all build from. More important than the mere existence of some documentation is that we actually use these tools and find that they provide us with some value, meaning that we should keep updating and evolving this process to suit our needs.

                8. Insights Configuration Files Analysis : Red Hat Insights is a tool that provides analytics for Red Hat systems. It collects the system data to find vulnerabilities through manually written rules. The systems data is collected periodically and stored in a warehouse for data analysis. The data comprises hardware and software description, configurations, and logs of the systems. In this project, the focus is on analysis of configuration files. Some of these files are written by experts and customized by users such as the tuned files. Others are completely configured by end users such as the sssd configuration files. This project aims to develop data-driven methods to analyze these configuration files and detect misconfiguration in these files.

                9. Insights Drift Analysis Baselines : Drift Analysis application enables users to compare system configuration of one system to other systems or baselines in the cloud management services inventory. Baselines are configurations (set of name / value facts) that can be defined from scratch, as a copy of an existing system configuration, or as a copy of an existing baseline. We propose a method to recommend baseline configurations automatically to the users. They can then use the recommendation or tweak it further to compare their systems with the baselines. This approach of recommending baselines by utilizing the knowledge of other systems in the account would save time for users by identifying and recommending baselines. This would assist in standardizing RHEL configurations and in improving RHEL configuration management and operations.

10. Insights Invocation Hints : Given the timestamps of Insights archive uploads, deduce how systems check in and how they were initially registered.

                11. Insights SAP Analysis : In this project, we analyze SAP instances on user systems through data collected from Insights. We create a superset dashboard that illustrates the topology of SAP workloads running on RHEL systems.

                12. OCP Alert Prediction : If a customer’s OpenShift cluster goes down, it can have a significant impact on their business. Since there are a variety of reasons why an OpenShift cluster might fail, finding and fixing the issue that the cluster suffers from is not always trivial. However, if we can predict in advance whether a cluster will run into a given issue, then we may be able to fix it before it fails or before it severely impacts the customer. Issues in a cluster are often defined by, or closely related to, the alerts that it fires. So predicting alerts can be a step towards predicting the underlying issue. Thus, the goal of this project is to predict whether a cluster will fire a given alert within the next hour.

                13. OCP4 Anomaly Detection : OCP4 deployments can suffer from a number of different issues and bugs. It can be tedious for an engineer to inspect and diagnose each deployment individually, which in turn can adversely affect customer experience. In this project, we work on the following two initiatives to address this problem.

                  • Anomaly Detection: In this approach, we try to identify issues before they occur, or before they significantly impact customers. To do so, we find deployments that behave “anomalously” and try to explain this behaviour.
                  • Diagnosis Discovery: In this approach, we try to identify deployments that exhibit similar “symptoms” (issues), and determine exactly what makes these deployments similar to one another. The support engineer can then use this information to determine the “diagnosis” of the issues, and apply the same or similar fix to all the deployments.
                14. Openshift SME Mailing List Analysis : The Openshift-SME mailing list contains many discussions about the issues occurring with OpenShift deployments on a monthly basis and suggestions for how to address the issues. This project aims to help the Openshift product management team bring a more data driven approach to their planning process by performing text analysis on the openshift-sme mailing list and gathering insights into the trends in the email conversations.

15. Prometheus-api-client python : A Python library that makes querying Prometheus data simpler and converts metric data into a pandas DataFrame, a format better suited to data science work (a brief usage sketch follows this list).

                16. Sentiment Analysis : Red Hat has a variety of text based artifacts coming from sources starting from partner and customer engagements to documentation and communication logs. These text based artifacts are valuable and can be used to generate business insights and inform decisions if appropriately mined. The goal of this project is to allow other teams across Red Hat to have a tool at their disposal allowing them to analyze their text data and make informed decisions based on the insights gained from them.

                17. Sync Pipelines : Data ingress pipelines for DataHub via Argo pipelines.
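
The snippet below is a minimal usage sketch for the prometheus-api-client library mentioned in item 15. It is illustrative only: the Prometheus URL and the "up" metric are placeholders, and the class and method names should be double-checked against the library's documentation.

    from prometheus_api_client import PrometheusConnect, MetricSnapshotDataFrame

    # Placeholder Prometheus endpoint; substitute a real instance.
    prom = PrometheusConnect(url="http://localhost:9090", disable_ssl=True)

    # Fetch the current value of a metric and convert the result into a
    # pandas DataFrame for further analysis.
    metric_data = prom.get_current_metric_value(metric_name="up")
    df = MetricSnapshotDataFrame(metric_data)
    print(df.head())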

diff --git a/data-science/data-science-workflows/docs/publish/data-science-workflow-overview/index.html b/data-science/data-science-workflows/docs/publish/data-science-workflow-overview/index.html

                  Data Science Workflow

                  Start Here: AI Ops DS Project

                  Please use the following outline to get started with a new AI Ops Data Science Project:

                  1. Review the project workflow document here

                  2. Create a new description using this template doc

                  3. Copy this project board to a new organization level project board including automation. See Copying a project board

                  4. Request a Ceph bucket here from the operate first team and add your data to it (feature currently unavailable)

5. Create a CookieCutter formatted data science project from this template repo. Follow these instructions

diff --git a/data-science/data-science-workflows/docs/publish/project-document-template/index.html b/data-science/data-science-workflows/docs/publish/project-document-template/index.html

                    Title - Project Description

                    Authors, 20xx-Month-Day vx.x.x-dev

                    Overview

                    A brief summary of the project based on initial research and stakeholder meetings. To the best of your abilities, explain at a high level the stakeholders’ desired outcome for the project as well as the potential business value or impact this project will have if successfully completed.

                    1. Situation and current issues
                    2. Key Questions
                    3. Hypothesis: Overview of how it could be done
                    4. Impact

                    A. Problem Statement:

                    In as direct terms as possible, provide the “Data Science” problem statement version of the overview. Think of this as translating the above into a more technical definition to execute on.

                    B. Checklist for project completion

Provide a bulleted list of the concrete deliverables and artifacts that, when complete, define the completion of the

diff --git a/data-science/data-science-workflows/docs/publish/project-structure/index.html b/data-science/data-science-workflows/docs/publish/project-structure/index.html

                    Data Science Projects Structure for AI Ops -

                    The purpose of this document is to provide Data Scientists in the AI CoE with a template for structuring their projects as well as encouraging a more consistent workflow across projects that promotes an “operate first” mentality; testing and taking advantage of the capabilities the AI CoE has been developing, as well as to ensure that as data scientists, focused on developing intelligent applications for the cloud, we are working from a cloud-native data science mindset from the start.

This should be seen as a living document that gets modified and updated as we continue to learn the best way to structure, execute and operate first AI CoE Data Science Projects. That said, I don’t want that to be misinterpreted to mean that what is outlined below are mere suggestions. Between updates of this document, we should try to follow its outline as closely as possible to ensure consistency across projects and to learn together what is and is not working. Please track all suggestions as comments in this document. : )

                    General Info about project needs and structure:

1. All Data Scientists will request and receive a default JupyterHub instance with 32 GB RAM and a larger PVC from the DH team. Create a tracking doc for all bugs / requests to DataHub.
                    1. All data scientists will use the DH as their primary working environment.

                      1. If they are blocked due to resources or performance issues with the environment, they will bring the issue to the attention of their team lead and the DH team to be resolved together. If and ONLY IF the issue cannot be resolved on a reasonable timeline will the Data Scientist have the option to work on their local machines temporarily.

                      2. Any work done on a local machine that is not immediately reproducible on Openshift/ DH will be considered incomplete.

                      3. This tracking document will be used to collect any bugs/ issues we find while using the datahub.

                    1. Data Scientists will have their own bucket in DH Ceph as their primary storage.
                      1. Although Data Scientists will have a larger pvc on DH, this should primarily be used for notebooks, scripts and smaller POC data sets, whereas the actual data for projects should be stored primarily in Ceph.

                      2. This document provides information on how to get access to the current DH-PLAYPEN bucket in data hub for temporary use

                      3. This document provides information for requesting your own Ceph bucket in Data Hub.

                    1. Every project will be tracked in its parent’s organization level Github project board for task tracking and a shared google drive folder for artifacts.

                      1. Github Project Board

                      2. Google Drive

                    2. Every project will have a “Project Owner” on the AI Ops side, responsible for ensuring the delivery of the items outlined in the “Project Flow” section below. Ideally, this is the Data Scientist(s) responsible for the project.

                    Project Flow:

                    1. Project definition and Business Understanding (~5 days):

                    1. Stakeholders will meet to discuss projects, available data and desired outcomes, use cases and end-users.
                     
 	1. Define concrete checklist of items that define when the project is complete (subject to ongoing review)
                     
diff --git a/data-science/index.html b/data-science/index.html

                    Projects Overview

                    This document contains a list of projects within the AI Ops Team at Red Hat.

                    1. Ceph Drive Failure Prediction : Many large-scale distributed storage systems, such as Ceph, use mirroring or erasure-coded redundancy to provide fault tolerance. Because of this, scaling storage up can be resource-intensive. This project seeks to mitigate this issue using machine learning. The primary goal here is to build a model to predict if a hard drive will fail within a predefined future time interval. These predictions can then be used by Ceph (or other similar systems) to create or destroy replicas accordingly. In addition to making storage more resource-efficient, this may also improve fault tolerance by up to an order of magnitude, since the probability of data loss is generally related to the probability of multiple, concurrent device failures.

                      Github Repo : https://github.com/aicoe-aiops/ceph_drive_failure

                    2. Cloud Price Analysis : Most companies nowadays are paying customers of one of the many cloud vendors in the industry, or are planning to be. These cloud providers keep changing their prices from time to time. However, a lack of information about how and when these prices change results in a lot of uncertainty for customers. Being able to understand price changes would help customers take appropriate measures to best manage their costs. Hence, given a dataset of cloud price lists, we aim to build a Cost-Optimization model that allows the user to make the best decision on how cloud services should be managed over time.

                      Github Repo : https://github.com/aicoe-aiops/cloud-price-analysis-public

                    3. Configuration Files Analysis : Software systems have become more flexible and feature-rich. For example, the configuration file for MySQL has more than 200 configuration entries with different subentries. As a result, configuring these systems is a complicated task and frequently causes configuration errors. Currently, in most cases, misconfigurations are detected by manually specified rules. However, this process is tedious and not scalable. In this project, we propose data-driven methods to detect misconfigurations by discovering frequently occurring patterns in configuration files.

                      Github Repo : https://github.com/aicoe-aiops/configuration-files-analysis

4. Data Science Workflows : The AI Ops team has been working on developing a more structured process around how we manage, execute and deliver on our data science projects, especially those where we collaborate with other Red Hat teams. As the team grows and continues to take on more data science projects, it will be hugely beneficial to have an agreed-upon and documented process like this in place, along with a common framework that we can all build from. More important than the mere existence of some documentation is that we actually use these tools and find that they provide us with some value, meaning that we should keep updating and evolving this process to suit our needs.

                      Github Repo : https://github.com/aicoe-aiops/data-science-workflows

                    5. OCP Alert Prediction : If a customer’s OpenShift cluster goes down, it can have a significant impact on their business. Since there are a variety of reasons why an OpenShift cluster might fail, finding and fixing the issue that the cluster suffers from is not always trivial. However, if we can predict in advance whether a cluster will run into a given issue, then we may be able to fix it before it fails or before it severely impacts the customer. Issues in a cluster are often defined by, or closely related to, the alerts that it fires. So predicting alerts can be a step towards predicting the underlying issue. Thus, the goal of this project is to predict whether a cluster will fire a given alert within the next hour.

                      Github Repo : https://github.com/aicoe-aiops/ocp-alert-prediction-public

6. Prometheus-api-client python : A Python library that makes querying Prometheus data simpler and converts metric data into a pandas DataFrame, a format better suited to data science work.

                      Github Repo : https://github.com/AICoE/prometheus-api-client-python

                    7. Sentiment Analysis : Red Hat has a variety of text based artifacts coming from sources starting from partner and customer engagements to documentation and communication logs. These text based artifacts are valuable and can be used to generate business insights and inform decisions if appropriately mined. The goal of this project is to allow other teams across Red Hat to have a tool at their disposal allowing them to analyze their text data and make informed decisions based on the insights gained from them.

                      Github Repo : https://github.com/aicoe-aiops/sentiment-analysis-public

                    8. Sync Pipelines : Data ingress pipelines for DataHub via Argo pipelines.

                      Github Repo : https://github.com/aicoe-aiops/sync-pipelines

diff --git a/data-science/ocp-ci-analysis/README/index.html b/data-science/ocp-ci-analysis/README/index.html

                    AI Supported Continuous Integration Testing

                    Developing AI tools for developers by leveraging the open data made available by OpenShift and Kubernetes CI platforms.

AI Ops is a critical component of supporting any Open Hybrid Cloud infrastructure. As the systems we operate become larger and more complex, intelligent monitoring tools and response agents will become a necessity. In an effort to accelerate the development, access and reliability of these intelligent operations solutions, our aim here is to

diff --git a/data-science/ocp-ci-analysis/docs/publish/failure-type-classification-with-the-testgrid-data-project-doc/index.html b/data-science/ocp-ci-analysis/docs/publish/failure-type-classification-with-the-testgrid-data-project-doc/index.html

                    Failure type classification with the TestGrid data

                    Sanket Badhe, Michael Clifford and Marcel Hild, 2020-10-27 v0.1.0-dev

                    Overview

In a continuous integration (CI) workflow, developers frequently integrate code into a shared repository. Each integration can then be verified by an automated build and numerous automated tests. Whenever a test fails, developers need to analyze the failure manually. A failure in the build can be legitimate or due to other issues like an infrastructure flake, an install flake, a flaky test, etc. A subject matter expert (SME) can analyze the TestGrid data and determine whether failures are legitimate or not. However, this takes a lot of manual effort and reduces the productivity of a team.

                    In this project, our objective is to automate the failure type classification task with the Testgrid data. As we don’t have labeled data to address this problem, we will focus on unsupervised learning methods and heuristics. Figure 1 shows the TestGrid data with different patterns to analyze the type of failure.

                    image alt text

                    Figure 1: Different type of failures in TestGrid

                    In the following section, we discuss the different patterns in more detail.

• Rows with red interspersed with green: This usually means a flaky test. Flaky tests pass and fail across multiple runs over a certain period of time. We can detect this behavior using the concept of an edge: an edge is the transition of a particular test case from pass to fail on a successive run. We can model edges using different techniques to detect a flaky test.

• Rows with solid red chunks: This behavior is almost always a regression either in the test or the product. We can analyze each row for runs of consecutive red cells to detect install flakes (a small sketch of this row check appears after this list).

• Rows with solid red chunks and white to the right: This behavior usually means a new test was added that is failing when running in the release job. For each cell, we will check whether all test cases to the left failed and all test cases to the right passed. If this pattern exists, we will flag this failure type.

                    • Repeating vertical red bars: This behavior usually means the subsystem has a bug, and we will find a set of rows that all fail together on the same runs. For this failure type we can also analyze each column to check for continuous red cells to detect subsystem bugs.

• Failure waterfall: If there are meandering failures moving from bottom to top and right to left, this almost always means an infra flake. We can generate convolutional filters manually to detect failure waterfall patterns. If it is too tedious to encode all the patterns manually, we can also develop a method to create convolution filters that detect ‘failure waterfall’ patterns automatically.
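
As a small illustration of the row-based check mentioned for solid red chunks, the sketch below assumes a test grid already converted to a 2D numpy array with a simplified encoding (1 = pass, 0 = fail); the real TestGrid status codes differ and would need to be mapped first.

    import numpy as np

    def max_consecutive_failures(row, fail_value=0):
        """Return the length of the longest run of failing cells in a single test row."""
        longest = current = 0
        for cell in row:
            current = current + 1 if cell == fail_value else 0
            longest = max(longest, current)
        return longest

    # Toy grid: rows are tests, columns are runs (1 = pass, 0 = fail in this sketch).
    grid = np.array([
        [1, 0, 1, 0, 1, 1],   # red interspersed with green -> likely flaky test
        [1, 1, 0, 0, 0, 0],   # solid red chunk -> likely regression / install flake
    ])

    for i, row in enumerate(grid):
        if max_consecutive_failures(row) >= 3:
            print(f"test row {i} contains a solid chunk of failures")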

                    If this project is successful, we will develop a tool to automatically analyze the TestGrid data. This tool will perform failure type classification with the Testgrid data to address an existing manual process executed by subject matter experts. Using the tool, the developers can focus on real issues and become more productive. Furthermore, we will provide insights about overall statistics about failures so that test developers can improve on existing test suites.

                    A. Problem Statement

                    Given a TestGrid, we want to classify/detect different failure patterns occurring over a certain period of time. In the later part, we will aggregate the results to conclude about the primary reasons behind failures for each release.

                    B. Checklist for project completion

1. A notebook that shows classification and analysis of different types of test failures on TestGrid data.

                    2. Jupyterhub image to reproduce the results.

                    3. Public blog explaining analysis and results.

                    4. Results hosted for SME to review

                    C. Provide a solution in terms of human actions to confirm if the task is within the scope of automation through AI.

Without AI and automation tooling, an SME needs to go to the TestGrid data of a particular release and look at the failures. The SME determines whether a failure follows any of the patterns discussed in earlier sections and, based on the detected patterns, tries to determine the reason behind the failure.

                    D. Outline a path to operationalization.

                    Once we have notebooks ready, we will build a Notebook-based Pipeline using Elyra. The results will be stored in S3. We can then use Superset as our dashboard and visualization tool, which SME/developers can access and give feedback. If the tool is deemed useful, we could also look into integrating it with the existing TestGrid project.

diff --git a/data-science/ocp-ci-analysis/docs/publish/project-doc/index.html b/data-science/ocp-ci-analysis/docs/publish/project-doc/index.html

                    AI Supported Continuous Integration Testing

                    Developing AI tools for developers by leveraging the open data made available by OpenShift and Kubernetes CI platforms.

AI Ops is a critical component of supporting any Open Hybrid Cloud infrastructure. As the systems we operate become larger and more complex, intelligent monitoring tools and response agents will become a necessity. In an effort to accelerate the development, access and reliability of these intelligent operations solutions, our aim here is to

diff --git a/data-science/ocp-ci-analysis/manifests/README/index.html b/data-science/ocp-ci-analysis/manifests/README/index.html

                        Automated Argo workflows

If you’d like to automate your Jupyter notebooks using Argo, please use these kustomize manifests. If you follow the steps below, your application is fully set up and ready to be deployed via Argo CD.

For a detailed guide on how to adjust your notebooks etc., please consult the documentation

                        1. Replace all <VARIABLE> mentions with your project name, respective url or any fitting value

                        2. Define your automation run structure in the templates section of cron-workflow.yaml

                        3. Set up sops:

                          1. Install go from your distribution repository

                          2. Setup GOPATH

                            echo 'export GOPATH="$HOME/.go"' >> ~/.bashrc
                             echo 'export PATH="${GOPATH//://bin:}/bin:$PATH"' >> ~/.bashrc
                             source  ~/.bashrc
                          3. Install sops from your distribution repository if possible or use sops GitHub release binaries

                          4. Import AICoE-SRE’s public key EFDB9AFBD18936D9AB6B2EECBD2C73FF891FBC7E:

                            gpg --keyserver keyserver.ubuntu.com --recv EFDB9AFBD18936D9AB6B2EECBD2C73FF891FBC7E
                          5. Import tcoufal’s (A76372D361282028A99F9A47590B857E0288997C) and mhild’s 04DAFCD9470A962A2F272984E5EB0DA32F3372AC keys (so they can help)

                            gpg --keyserver keyserver.ubuntu.com --recv A76372D361282028A99F9A47590B857E0288997C  # tcoufal
                             gpg --keyserver keyserver.ubuntu.com --recv 04DAFCD9470A962A2F272984E5EB0DA32F3372AC  # mhild
6. If you’d like to be able to build the manifest on your own as well, please list your GPG key in the pgp section of the .sops.yaml file (add it to the comma-separated list). With your key present there, you can later generate the full manifests using kustomize yourself (ksops has to be installed; please follow the ksops guide).

                        4. Create a secret and encrypt it with sops:

                          # If you're not already in the `manifest` folder, cd here
                          diff --git a/data-science/ocp-ci-analysis/notebooks/EDA/index.html b/data-science/ocp-ci-analysis/notebooks/EDA/index.html
                          index d3a9fecee2f7..59c8e5f8503d 100644
                          --- a/data-science/ocp-ci-analysis/notebooks/EDA/index.html
                          +++ b/data-science/ocp-ci-analysis/notebooks/EDA/index.html
                          @@ -14,7 +14,7 @@
                                 
                                 
                                 
                          Sippy Export of OpenShift Data - EDA

                          In this notebook we will take a look at some of the openshift CI data distilled by the Sippy project with the following goals in mind.

                          1. Uncover the structure and contents of the dataset
2.
diff --git a/data-science/ocp-ci-analysis/notebooks/TestGrid_EDA/index.html b/data-science/ocp-ci-analysis/notebooks/TestGrid_EDA/index.html
index 9ef3e2335121..54096fe9c98c 100644
--- a/data-science/ocp-ci-analysis/notebooks/TestGrid_EDA/index.html
+++ b/data-science/ocp-ci-analysis/notebooks/TestGrid_EDA/index.html
@@ -14,7 +14,7 @@
                            TestGrid EDA: initial EDA and data collection

                            In this notebook we will explore how to access the existing testgrid data at testgrid.k8s.io, giving specific attention to Red Hat's CI dashboards.

                            To start, we will rely on some of the work already established by the sippy team here to access the data aggregated in testgrid and convert it into a format that can be directly analyzed in a notebook.

What is Testgrid? According to the project's readme it is a "highly configurable, interactive dashboard for viewing your test results in a grid!" In other words, it's an aggregation and visualization platform for CI data. Hopefully, this aggregation encodes some of the subject matter experts' knowledge, and will provide better initial features than going straight to the more complex underlying CI data here.

diff --git a/data-science/ocp-ci-analysis/notebooks/TestGrid_indepth_EDA/index.html b/data-science/ocp-ci-analysis/notebooks/TestGrid_indepth_EDA/index.html
index 84209805bd07..134fd348cf21 100644
--- a/data-science/ocp-ci-analysis/notebooks/TestGrid_indepth_EDA/index.html
+++ b/data-science/ocp-ci-analysis/notebooks/TestGrid_indepth_EDA/index.html
@@ -14,7 +14,7 @@
                            TestGrid In-Depth EDA

                            In our previous notebook, TestGrid_EDA, we did some straightforward data access and preprocessing work in order to take a look at what data TestGrid exposes, how to access it and convert the test grids themselves into 2d numpy arrays. While performing that initial data exploration we came up with a few more questions around how to look at this data in aggregate that we want to address here.

                            In this notebook we will address the following questions:

diff --git a/data-science/ocp-ci-analysis/notebooks/initial_EDA/index.html b/data-science/ocp-ci-analysis/notebooks/initial_EDA/index.html
index 0b7252f637c8..2090b585f2af 100644
--- a/data-science/ocp-ci-analysis/notebooks/initial_EDA/index.html
+++ b/data-science/ocp-ci-analysis/notebooks/initial_EDA/index.html
@@ -14,7 +14,7 @@
                              Initial EDA

                              This is a short notebook to look at the available data provided and to see if we can determine which test failures appear to be correlated with each other.

                              In this notebook we will:

diff --git a/data-science/ocp-ci-analysis/notebooks/testgrid_feature_confirmation/index.html b/data-science/ocp-ci-analysis/notebooks/testgrid_feature_confirmation/index.html
index fe2091e89d89..17002b1f850f 100644
--- a/data-science/ocp-ci-analysis/notebooks/testgrid_feature_confirmation/index.html
+++ b/data-science/ocp-ci-analysis/notebooks/testgrid_feature_confirmation/index.html
@@ -14,7 +14,7 @@
                                TestGrid Additional Features - uniform or unique?

As can be seen in an earlier notebook, TestGrids have more metadata (features) than just the test values we've been focused on.

In this notebook we are going to take a closer look at these other metadata fields for both Openshift and Kubernetes and determine whether they are uniform across grids or distinct by grid, and whether they are worth a closer look.

                                [1]
                                import requests
                                 from bs4 import BeautifulSoup
                                [2]
                                # access the testgrid.k8s.io to get the dashboards for Red Hat
                                diff --git a/e1addec5b3564a6a1ac472a0e48a23fd/README.md b/e1addec5b3564a6a1ac472a0e48a23fd/README.md
                                new file mode 100644
                                index 000000000000..9454eca8de46
                                --- /dev/null
                                +++ b/e1addec5b3564a6a1ac472a0e48a23fd/README.md
                                @@ -0,0 +1,245 @@
                                +# Set up on-cluster PersistentVolumes storage using NFS on local node
                                +
                                +Bare Openshift cluster installations, like for example Quicklab's Openshift 4 UPI clusters may lack persistent volume setup. This guide will help you set it up.
                                +
                                +Please verify that your cluster really lacks `pv`s:
                                +
                                +1. Login as a cluster admin
                                +2. Lookup available `PersistentVolume` resources:
                                +
                                +   ```bash
                                +   $ oc get pv
                                +   No resources found
                                +   ```
                                +
+If there are no `PersistentVolume`s available, please continue and follow this guide. We're going to set up an NFS server on the cluster node and show Openshift how to connect to it.
                                +
+Note: This guide will lead you through the process of setting up PVs, which use the deprecated `Recycle` reclaim policy. This makes the `PersistentVolume` available again as soon as the `PersistentVolumeClaim` resource is terminated and removed. However, the data are left untouched on the NFS share. While this is suitable for development purposes, be advised that old data (from previous mounts) will still be available on the volume. Please consult [Kubernetes docs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming) for other options.
                                +
                                +## Manual steps
                                +
+See the automated Ansible playbook below for easier-to-use provisioning
                                +
                                +### Prepare remote host
                                +
                                +1. SSH to the Quicklab node, and become superuser:
                                +
                                +   ```sh
                                +   curl https://gitlab.cee.redhat.com/cee_ops/quicklab/raw/master/docs/quicklab.key --output ~/.ssh/quicklab.key
                                +   chmod 600 ~/.ssh/quicklab.key
                                +   ssh -i ~/.ssh/quicklab.key -o "UserKnownHostsFile /dev/null" -o "StrictHostKeyChecking no" quicklab@HOST
                                +
                                +   # On HOST
                                +   sudo su -
                                +   ```
                                +
                                +2. Install `nfs-utils` package
                                +
                                +   ```sh
                                +   yum install nfs-utils
                                +   ```
                                +
                                +3. Create exported directories (for example in `/mnt/nfs`) and set ownership and permissions
                                +
                                +   ```sh
                                +   mkdir -p /mnt/nfs/A ...
                                +   chown nfsnobody:nfsnobody /mnt/nfs/A
                                +   chmod 0777 /mnt/nfs/A
                                +   ```
                                +
                                +4. Populate `/etc/exports` file referencing directories from previous step to be accessible from your nodes as read,write:
                                +
                                +   ```txt
                                +    /mnt/nfs/A node1(rw) node2(rw) ...
                                +    ...
                                +   ```
                                +
                                +5. Allow NFS in firewall
                                +
                                +   ```sh
                                +   firewall-cmd --permanent --add-service mountd
                                +   firewall-cmd --permanent --add-service rpc-bind
                                +   firewall-cmd --permanent --add-service nfs
                                +   firewall-cmd --reload
                                +   ```
                                +
                                +6. Start and enable NFS service
                                +
                                +   ```sh
                                +   systemctl enable --now nfs-server
                                +   ```
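+
+   To verify the shares are exported, you can optionally run a quick sanity check (output will vary per host):
+
+   ```sh
+   exportfs -v
+   showmount -e localhost
+   ```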
                                +
                                +### Add PersistentVolumes to Openshift cluster
                                +
                                +Login as a cluster admin and create a `PersistentVolume` resource for each network share using this manifest:
                                +
                                +```yaml
                                +apiVersion: v1
                                +kind: PersistentVolume
                                +metadata:
                                +  name: NAME # Unique name
                                +spec:
                                +  capacity:
                                +    storage: CAPACITY # Keep in mind the total max size, the Quicklab host has a disk size of 20Gi total (usually ~15Gi of available and usable space)
                                +  accessModes:
                                +    - ReadWriteOnce
                                +  nfs:
                                +    path: /mnt/nfs/A # Path to the NFS share on the server
                                +    server: HOST_IP # Not a hostname
                                +  persistentVolumeReclaimPolicy: Recycle
                                +```
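+
+Assuming you saved the manifest above as `pv.yaml` (the file name is illustrative), you can then create and check the volume with:
+
+```sh
+oc apply -f pv.yaml
+oc get pv
+```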
                                +
                                +## Using Ansible
                                +
+To avoid all the hassle with manual setup, we can use an Ansible playbook [`playbook.yaml`](playbook.yaml).
                                +
                                +### Setup
                                +
                                +Please install Ansible and some additional collections from Ansible Galaxy needed by this playbook: [ansible.posix](https://galaxy.ansible.com/ansible/posix) for `firewalld` module and [community.kubernetes](https://galaxy.ansible.com/community/kubernetes) for `k8s` module. Also install the underlying python dependency `openshift`.
                                +
                                +```bash
                                +$ ansible-galaxy collection install ansible.posix
                                +Starting galaxy collection install process
                                +Process install dependency map
                                +Starting collection install process
                                +Installing 'ansible.posix:1.1.1' to '/home/tcoufal/.ansible/collections/ansible_collections/ansible/posix'
                                +Downloading https://galaxy.ansible.com/download/ansible-posix-1.1.1.tar.gz to /home/tcoufal/.ansible/tmp/ansible-local-43567u9ge76rl/tmpyttcjmul
                                +ansible.posix (1.1.1) was installed successfully
                                +
                                +$ ansible-galaxy collection install community.kubernetes
                                +Starting galaxy collection install process
                                +Process install dependency map
                                +Starting collection install process
                                +Installing 'community.kubernetes:1.0.0' to '/home/tcoufal/.ansible/collections/ansible_collections/community/kubernetes'
                                +Downloading https://galaxy.ansible.com/download/community-kubernetes-1.0.0.tar.gz to /home/tcoufal/.ansible/tmp/ansible-local-29431yk2zoutk/tmpwgl4xsnb
                                +community.kubernetes (1.0.0) was installed successfully
                                +
                                +$ pip install --user openshift
                                +...
                                +Installing collected packages: kubernetes, openshift
                                +    Running setup.py install for openshift ... done
                                +Successfully installed kubernetes-11.0.0 openshift-0.11.2
                                +```
                                +
                                +Additionally please login to your Quicklab cluster via `oc login` as a cluster admin.
                                +
                                +### Configuration
                                +
                                +Please view and modify the `env.yaml` file (or create additional variable files, and select it before executing playbook via `vars_file` variable)
                                +
                                +Example environment file:
                                +
                                +```yaml
                                +quicklab_host: "upi-0.tcoufaldev.lab.upshift.rdu2.redhat.com"
                                +
                                +pv_count_per_size:
                                +  1Gi: 6
                                +  2Gi: 2
                                +  5Gi: 1
                                +```
                                +
                                +- `quicklab_host` - Points to one of the "Hosts" from your Quicklab Cluster info tab
+- `pv_count_per_size` - Defines a map of PV counts keyed by their maximal allocatable size:
+  - Use the target PV size as a key (follow the Go/Kubernetes quantity notation, e.g. `1Gi`)
+  - Use the volume count for that size as the value
+  - Keep in mind that the total, i.e. the sum of size × count over all entries, must stay below the disk size of the Quicklab instance (usually ~15Gi of available space). The example above allocates 6×1Gi + 2×2Gi + 1×5Gi = 15Gi.
                                +
                                +### Run the playbook
                                +
                                +Run the `playbook.yaml` (if you created a new environment file and you'd like to use other than default `env.yaml`, please specify the file via `-e vars_file=any-filename.yaml`)
                                +
                                +```bash
                                +$ ansible-playbook playbook.yaml
                                +```
                                +
+
+<details>
+<summary>Click to expand output</summary>
+
+```bash
+PLAY [Dynamically create Quicklab host in Ansible] **********************************************************************
+
+TASK [Gathering Facts] **************************************************************************************************
+ok: [localhost]
+
+TASK [Load variables file] **********************************************************************************************
+ok: [localhost]
+
+TASK [Preprocess the PV count per size map to a flat list] **************************************************************
+ok: [localhost]
+
+TASK [Fetch Quicklab certificate] ***************************************************************************************
+ok: [localhost]
+
+TASK [Adding host] ******************************************************************************************************
+changed: [localhost]
+
+TASK [Get available Openshift nodes] ************************************************************************************
+ok: [localhost]
+
+TASK [Preprocess nodes k8s resource response to list of IPs] ************************************************************
+ok: [localhost]
+
+PLAY [Setup NFS on Openshift host] **************************************************************************************
+
+TASK [Gathering Facts] **************************************************************************************************
+ok: [quicklab]
+
+TASK [Copy localhost variables for easier access] ***********************************************************************
+ok: [quicklab]
+
+TASK [Install the NFS server] *******************************************************************************************
+ok: [quicklab]
+
+TASK [Create export dirs] ***********************************************************************************************
+changed: [quicklab] => (item=['1Gi', 0])
+changed: [quicklab] => (item=['1Gi', 1])
+changed: [quicklab] => (item=['1Gi', 2])
+changed: [quicklab] => (item=['1Gi', 3])
+changed: [quicklab] => (item=['1Gi', 4])
+changed: [quicklab] => (item=['1Gi', 5])
+changed: [quicklab] => (item=['2Gi', 0])
+changed: [quicklab] => (item=['2Gi', 1])
+changed: [quicklab] => (item=['5Gi', 0])
+
+TASK [Populate /etc/exports file] ***************************************************************************************
+changed: [quicklab]
+
+TASK [Allow services in firewall] ***************************************************************************************
+changed: [quicklab] => (item=nfs)
+changed: [quicklab] => (item=rpc-bind)
+changed: [quicklab] => (item=mountd)
+
+TASK [Reload firewall] **************************************************************************************************
+changed: [quicklab]
+
+TASK [Enable and start NFS server] **************************************************************************************
+changed: [quicklab]
+
+TASK [Reload exports when the server was already started] ***************************************************************
+skipping: [quicklab]
+
+PLAY [Create PersistentVolumes in OpenShift] ****************************************************************************
+
+TASK [Gathering Facts] **************************************************************************************************
+ok: [localhost]
+
+TASK [Find IPv4 of the host] ********************************************************************************************
+ok: [localhost]
+
+TASK [Create PersistentVolume resource] *********************************************************************************
+changed: [localhost] => (item=['1Gi', 0])
+changed: [localhost] => (item=['1Gi', 1])
+changed: [localhost] => (item=['1Gi', 2])
+changed: [localhost] => (item=['1Gi', 3])
+changed: [localhost] => (item=['1Gi', 4])
+changed: [localhost] => (item=['1Gi', 5])
+changed: [localhost] => (item=['2Gi', 0])
+changed: [localhost] => (item=['2Gi', 1])
+changed: [localhost] => (item=['5Gi', 0])
+
+PLAY RECAP **************************************************************************************************************
+localhost                  : ok=10   changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
+quicklab                   : ok=8    changed=5    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
+```
+
+</details>
diff --git a/index.html b/index.html
index 074b393cb860..b14e57d39d35 100644
--- a/index.html
+++ b/index.html
@@ -14,4 +14,4 @@

                                Operate First for Open Data Hub

The transition from delivering projects to delivering services involves different roles and a different mindset. Features that enable the software to be run at scale need to be built into the project. Operate First means we must also operate the project, involving developers from the beginning.

                                As the AICoE in the Office of the CTO at Red Hat we can lead the way with Open Data Hub: operate it in a transparent open cloud environment, build a community around the deployment and define which services would be consumed in what way.

                                This can act as a blueprint for deploying this service in any environment.

                                With Operate First, we open up our operational knowledge to all users of Open Data Hub and the open source community. This will allow us to bring the learnings of the SRE team into the open source community and is a potential for us to leverage a broad community input into developing software.

As one of the first steps, we have begun operating Open Data Hub on the Mass Open Cloud (MOC) in an open cloud environment, before we ship it to our customers. At the AICoE, we are focused on creating examples of how ODH is operated and deployed in an open cloud environment, how we perform open source data science in an open cloud environment and sharing our learnings with the community.

                                This website acts as a landing site for sharing examples from our experience of operating Open Data Hub in an open cloud environment. It is targeted to serve as an upstream platform where a wider community can participate and leverage our work (and we theirs), ultimately to drive an open source solution for cloud operation.

                                Getting started

                                To learn about Open Data Hub and its architecture, visit opendatahub.io.

                                To get started with using ODH applications deployed and running on an open cloud instance, visit the MOC - ODH Users section.

                                To get started with deploying components on ODH, visit the MOC - ODH Operations section.

To learn more about Operate First (making cloud operations as fundamental as functionality in the upstreams), read the Operate First Community Manifesto.

                                Contribute

                                To contribute to the Operate First initiative, seek support or report bugs on the website, please open an issue here.

                                Phases

                                Crawl

• CI/Continuous Delivery pipeline to build ODH assets
                                • Continuous Deployment pipeline to deploy ODH on MOC
                                • Incident and outage management

                                Walk

                                Get real users on the service - students from universities doing classes, opensource projects, AICoE public examples etc.

                                Work with those users to:

                                • Improve the AI development workflow
                                • Improve the AI deployment workflow (MLOps)

                                Run

                                TBD

                                Roles

                                CI/CD pipeline engineer

                                • Testing of ODH assets
                                • Release and publish assets
                                • Optimize assets for the target platform (e.g. Notebook Images with Intel optimized TF)

                                Data Scientist

                                • Create sample workflows
                                • Inform testing of ODH assets
                                • Write end-user documentation

                                SRE

                                • Deployment of ODH assets
                                • Monitoring / Incident Management

                                Service Owner

                                • Define service interface
                                • Define service level agreements (SLA)

                                Organization

                                • All systems must be available on the internet (no VPN)
                                • All data (tickets, logs, metrics) must be publicly available
                                • Sprint planning and demos are public
\ No newline at end of file

                                Operate First for Open Data Hub

                                Operate First is an initiative to operate software in a production-grade environment - bringing users, developers and operators closer together.

                                The goal is to create an Open Cloud environment, with reproducibility built-in, operated by a Community.

Open means onboarding and getting involved should mimic the process of an Open Source project, where planning, issue tracking and the code are accessible in a read-only fashion.

                                Reproducibility caters towards being a blueprint for other setups. If we don’t want each environment to be a snowflake, we should be able to extract best practices that are easy to apply to new environments.

                                At the Office of the CTO at Red Hat, we can lead the way with Open Data Hub by opening up our operational knowledge to all open source communities to improve the integration and operability from the source.

                                Getting started

                                Data Science

                                Get started with tutorials and examples for data science on Open Data Hub.

                                Users

                                Learn how you can engage with Open Data Hub and access the deployed components.

                                Operators

                                See how we are deploying and operating Open Data Hub

                                Blueprints

                                Apply best practices and tooling to your own projects.

                                Contribute

                                To contribute to the Operate First initiative, seek support or report bugs on the website, please open an issue here.

\ No newline at end of file
diff --git a/operators/continuous-deployment/README/index.html b/operators/continuous-deployment/README/index.html
index 9f761a54ea44..a23e86f52ff1 100644
--- a/operators/continuous-deployment/README/index.html
+++ b/operators/continuous-deployment/README/index.html
@@ -14,6 +14,6 @@
Continuous Deployment

This repository contains an opinionated reference architecture to set up, manage and operate a continuous deployment pipeline.

                                Prerequisites

• Kustomize 3.8.1+
• SOPS 3.4.0+
• KSOPS 2.1.2+

                                Ensure you have the key to decrypt secrets. Reach out to members of the Data Hub team for access.

                                GPG Key access

                                This repo encrypts secrets using a dev test key, you can find the test key in examples/key.asc folder.

                                $ base64 -d < examples/key.asc | gpg --import

                                You will need to import this key to be able to decrypt the contents of the secrets using sops.

                                Do NOT use this gpg key for prod purposes.

                                Howtos

                                See howto index for various howtos.

\ No newline at end of file
diff --git a/operators/continuous-deployment/docs/README/index.html b/operators/continuous-deployment/docs/README/index.html
index b6495586fd22..3f35b1a279f6 100644
--- a/operators/continuous-deployment/docs/README/index.html
+++ b/operators/continuous-deployment/docs/README/index.html
@@ -14,4 +14,4 @@
                                Here you will find a series of docs that outline various procedures and how-tos when interacting with ArgoCD.

                                CRC

                                CRC stands for Code Ready Containers. Download CRC here: https://developers.redhat.com/products/codeready-containers/overview. Follow the guides below for setting up ArgoCD and deploying Open Data Hub (via ArgoCD) in CRC:

                                1. Installation of ArgoCD - Guide with instructions for setting up ArgoCD in CRC.
                                2. Installation of ODH - Guide with instructions on deploying Open Data Hub in CRC.

                                Quicklab

                                Quicklab is a web application where users can automatically provision and install clusters of various Red Hat products into public and private clouds. Follow the guides below for setting up ArgoCD and deploying Open Data Hub (via ArgoCD) in a Quicklab cluster:

                                1. Installation of ArgoCD - Guide with instructions for setting up ArgoCD in a Quicklab cluster.
                                2. Setup Persistent Volumes - Bare Openshift cluster installations, like for example Quicklab’s Openshift 4 UPI clusters may lack persistent volume setup. This guide provides instructions for setting up PVs in your Quicklab cluster.
                                3. Installation of ODH - Guide with instructions on deploying the Open Data Hub in a Quicklab cluster.

                                Next steps

\ No newline at end of file
diff --git a/operators/continuous-deployment/docs/admin/add_namespace_to_cluster/index.html b/operators/continuous-deployment/docs/admin/add_namespace_to_cluster/index.html
index 0d40ad13f82c..ed8aace440e1 100644
--- a/operators/continuous-deployment/docs/admin/add_namespace_to_cluster/index.html
+++ b/operators/continuous-deployment/docs/admin/add_namespace_to_cluster/index.html
@@ -14,7 +14,7 @@
                                Add namespace to cluster

                                Prerequisites

                                • sops 3.6+
                                • sops access

                                Instructions

                                Namespaces are added to ArgoCD by altering the corresponding cluster spec. Cluster specs are defined within the /manifests/overlays/<env>/secrets/clusters folder.

Open the file in the sops editor. For example, if updating the cluster spec dev.cluster.example.enc.yaml in dev, you would execute:

                                # From repo root
                                 $ target_env=dev
                                 $ cd manifests/overlays/$target_env/secrets/clusters
                                 $ sops dev.cluster.example.enc.yaml

This should open the decrypted form of the cluster spec. Update the namespaces field by appending your namespace (comma-separated, no spaces, if there are multiple namespaces), as sketched below.

                                ...
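# A hedged sketch of the field in question (surrounding fields omitted, names illustrative);
# only the comma-separated list changes:
namespaces: existing-namespace,your-new-namespace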
                                diff --git a/operators/continuous-deployment/docs/admin/add_new_cluster_spec/index.html b/operators/continuous-deployment/docs/admin/add_new_cluster_spec/index.html
                                index 22c0dab016ad..324a500befa3 100644
                                --- a/operators/continuous-deployment/docs/admin/add_new_cluster_spec/index.html
                                +++ b/operators/continuous-deployment/docs/admin/add_new_cluster_spec/index.html
                                @@ -14,7 +14,7 @@
                                       
                                       
                                       
                                Adding a new cluster spec

                                Prerequisites

                                • sops 3.6+
                                • sops access

                                Instructions

                                ArgoCD will need a service account present on the cluster for deployments. Where the SA is located is irrelevant, though it’s advised to have it be located in its own independent namespace. For consistency name this service account argocd-manager.

                                This workflow may look like this:

                                oc login <your_cluster>
                                 oc new-project argocd-manager
                                 oc create sa argocd-manager

                                Get the token for this SA

                                SA_TOKEN=`oc sa get-token argocd-manager -n argocd-manager`

                                Store the cluster specs in /manifests/overlays/<env>/secrets/clusters folder.

                                Create the cluster spec:

                                # /manifests/overlays/dev/secrets/clusters/dev.cluster.example.yaml
                                 apiVersion: v1
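# The fields below are a hedged sketch based on Argo CD's declarative cluster
# secret format; adjust names and values to this repository's actual conventions.
kind: Secret
metadata:
  name: dev-cluster-spec
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: dev-cluster
  server: https://api.your-cluster.example.com:6443
  namespaces: argocd-manager
  config: |
    {
      "bearerToken": "<SA_TOKEN from the previous step>",
      "tlsClientConfig": {
        "insecure": true
      }
    }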
                                diff --git a/operators/continuous-deployment/docs/admin/update_gpg_key/index.html b/operators/continuous-deployment/docs/admin/update_gpg_key/index.html
                                index c1f566e3f077..5fa107f2a1db 100644
                                --- a/operators/continuous-deployment/docs/admin/update_gpg_key/index.html
                                +++ b/operators/continuous-deployment/docs/admin/update_gpg_key/index.html
                                @@ -14,7 +14,7 @@
                                       
                                       
                                       
                                Update gpg key

                                Prerequisites

                                • sops 3.6+

                                Instructions

                                Export the key

                                $ gpg --export-secret-keys "${KEY_ID}" | base64 > private.asc
                                # From the repo root
                                 $ target_env=dev
                                 $ cd manifests/overlays/$target_env/secrets/gpg
                                 $ sops secret.enc.yaml

                                Copy the contents of private.asc into the private.key field.

                                Save the file, exit. Commit and make a PR.

\ No newline at end of file
diff --git a/operators/continuous-deployment/docs/create_argocd_application_manifest/index.html b/operators/continuous-deployment/docs/create_argocd_application_manifest/index.html
index 809644858474..0a46ef6f8330 100644
--- a/operators/continuous-deployment/docs/create_argocd_application_manifest/index.html
+++ b/operators/continuous-deployment/docs/create_argocd_application_manifest/index.html
@@ -14,7 +14,7 @@
                                Application Management

                                While ArgoCD allows you to create ArgoCD applications via the UI and CLI, we recommend that all applications be created declaratively.

                                This allows you to easily restore your applications should the need arise.

                                Pre-requisites

                                • Kustomize version 3.8+

                                Steps for creating an application

For your application to show up in ArgoCD you need to do 2 things:

                                1. Create the Application yaml in the appropriate path in a fork
                                2. Submit a PR to the base repository

                                These steps are outlined in detail below:

                                Step 1. Create the Application Yaml

                                Clone the repo and cd into where applications are stored:

                                $ target_env=dev
                                 $ cd /manifests/overlays/$target_env/applications

                                If your team folder does not exist, create it:

                                $ mkdir example_team && cd example_team
                                 $ kustomize create

                                Let’s create a sample application called example-app.

                                # /manifests/overlays/dev/applications/example_team/example_app.yaml
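# What follows is a hedged sketch of a typical Argo CD Application manifest;
# the repoURL, path and namespace values are illustrative, not this repo's actual values.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
spec:
  project: default
  source:
    repoURL: https://github.com/example-team/example-app.git
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app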
                                diff --git a/operators/continuous-deployment/docs/downstream/crc-disk-size/index.html b/operators/continuous-deployment/docs/downstream/crc-disk-size/index.html
                                index b29dfa412078..22f610a8d3be 100644
                                --- a/operators/continuous-deployment/docs/downstream/crc-disk-size/index.html
                                +++ b/operators/continuous-deployment/docs/downstream/crc-disk-size/index.html
                                @@ -14,7 +14,7 @@
                                       
                                       
                                       
                                Increasing CRC disk space

                                Below are steps to give your CRC instance more disk space. E.g. if you run out of ephemeral-space while experimenting with ODH.

Unlike setting the number of CPUs or available memory, increasing the disk image size is not directly supported by CRC.

                                First you can check your current image size. Do it with CRC turned off:

                                qemu-img info ~/.crc/machines/crc/crc | grep 'virtual size'
                                 virtual size: 31 GiB (33285996544 bytes)

                                By default you will see something similar as above.

Then you can grow the qemu image size:

                                qemu-img resize ~/.crc/machines/crc/crc +32G

                                Start CRC again:

                                crc start

Log in and grow the filesystem:

                                ssh -i ~/.crc/machines/crc/id_rsa core@192.168.130.11
                                 df -h /sysroot
                                 sudo xfs_growfs /sysroot
                                diff --git a/operators/continuous-deployment/docs/downstream/crc/index.html b/operators/continuous-deployment/docs/downstream/crc/index.html
                                index b6b8b4cb3e30..ff1ee7dfb929 100644
                                --- a/operators/continuous-deployment/docs/downstream/crc/index.html
                                +++ b/operators/continuous-deployment/docs/downstream/crc/index.html
                                @@ -14,7 +14,7 @@
                                       
                                       
                                       
                                    Deployment on CRC

                                    This is how to deploy ArgoCD on CRC.

                                    Installation Steps

                                    • Setup CRC https://developers.redhat.com/products/codeready-containers/overview

                                      • Do not forget to install the corresponding version of oc tool or some commands might fail.

• add more memory to CRC:
  crc delete
  crc config set memory 16384
  crc start

                                      • Consider adding more disk space to your CRC.

• Use Toolbox to get the command line tools needed: https://github.com/containers/toolbox
diff --git a/operators/continuous-deployment/docs/downstream/odh-install-crc/index.html b/operators/continuous-deployment/docs/downstream/odh-install-crc/index.html
index 459be49fb0d2..43a089caa9f3 100644
--- a/operators/continuous-deployment/docs/downstream/odh-install-crc/index.html
+++ b/operators/continuous-deployment/docs/downstream/odh-install-crc/index.html
@@ -14,7 +14,7 @@

                                      Installing ODH using ArgoCD

                                      Preparation

                                      First make sure that you have the OpenDataHub operator available in your OpenShift.

                                      oc get packagemanifests -n openshift-marketplace | grep opendatahub-operator

                                      If it’s not present, then you need to troubleshoot the marketplace: https://github.com/operator-framework/operator-marketplace/issues/344

Then we need to create projects named odh-operator and opf-*. (The names of the projects are hard-coded in the kustomize files in https://github.com/operate-first/odh and you can change them there using kustomize.)

                                      oc new-project odh-operator
                                       oc new-project opf-{jupyterhub,superset}

Our ArgoCD SA argocd-manager needs to have access to the new projects. On CRC, I suggest giving argocd-manager access to all projects by creating a ClusterRoleBinding to a ClusterRole.

                                      oc apply -f examples/argocd-cluster-binding.yaml
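For reference, a minimal sketch of what such a cluster-wide binding could look like (the actual contents of examples/argocd-cluster-binding.yaml in the repo are authoritative and may differ):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-manager-cluster-admin
subjects:
  - kind: ServiceAccount
    name: argocd-manager
    namespace: argocd-manager
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```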

Then you need to change the dev-cluster to include all projects/namespaces. (This is done by actually removing the value of namespaces.)

                                      oc patch secret dev-cluster-spec -n aicoe-argocd-dev --type='json' -p="[{'op': 'replace', 'path': '/data/namespaces', 'value':''}]"

Be aware that if you change this value in the ArgoCD UI, you might lose the stored credentials for the cluster due to a bug in ArgoCD.

                                      Creating the ArgoCD applications

                                      Now we can proceed with creating the ArgoCD Applications. We will create 2 applications:

                                      1. The ODH operator. (Installs the ODH operator itself odh-operator.)
                                      2. The ODH deployment. (Installs ODH components into opf-*.)

                                      Creating the ODH operator

                                      You can create the Application resource from the command line using

                                      oc apply -f examples/odh-operator-app.yaml

Or you can go to https://argocd-server-aicoe-argocd-dev.apps-crc.testing/applications, click “New App” and enter these values:

Project: Default
Cluster: dev-cluster (https://api.crc.testing:6443)
Namespace: odh-operator
Repo URL: https://github.com/operate-first/odh.git
Target revision: HEAD
Path: operator/base

                                      This creates an app from definition in the repo under the path odh-operator/base and deploys it to the odh-operator namespace on the dev-cluster.

                                      This app is all about deploying the Open Data Hub operator to your cluster.

                                      Please note that the namespace is also hard-coded in the repo so changing it requires changing files in the repo.

Create the app and you will see Argo deploying resources:
diff --git a/operators/continuous-deployment/docs/downstream/odh-install-quicklab/index.html b/operators/continuous-deployment/docs/downstream/odh-install-quicklab/index.html
index 87cc7f8ce4f1..62c96144cc65 100644
--- a/operators/continuous-deployment/docs/downstream/odh-install-quicklab/index.html
+++ b/operators/continuous-deployment/docs/downstream/odh-install-quicklab/index.html
@@ -14,4 +14,4 @@

                                      Installing ODH using ArgoCD in Quicklab

                                      The steps for installing ODH in Quicklab are basically the same as for CRC.

The only difference is that you need to use the correct URL for your cluster and set up sufficient persistent volumes (PVs) in your cluster.

                                      • To setup persistent volumes in your Quicklab cluster, follow the guide here.

                                      • In quicklab guide step 9 there’s a screenshot with the Hosts value and the oc login ... command. Use the value (e.g. upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443) as the value of the Cluster in steps “Creating the ODH operator” and “Creating the ODH deployment” in CRC.

                                      • If you choose to use the command-line to create the Application resources, then edit examples/odh-operator-app.yaml and examples/odh-deployment-app.yaml and put the value of Cluster there.

                                      • Also, please note that if you are installing multiple ODH components, you may need to assign additional worker nodes for your cluster. This is mentioned in quicklab guide step 3.

                                      Except for the Cluster address, the steps are exactly the same.

\ No newline at end of file
diff --git a/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README/index.html b/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README/index.html
new file mode 100644
index 000000000000..4307f0cb3479
--- /dev/null
+++ b/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README/index.html
@@ -0,0 +1,149 @@

                                      Set up on-cluster PersistentVolumes storage using NFS on local node

                                      Bare Openshift cluster installations, like for example Quicklab’s Openshift 4 UPI clusters may lack persistent volume setup. This guide will help you set it up.

                                      Please verify that your cluster really lacks pvs:

                                      1. Login as a cluster admin

                                      2. Lookup available PersistentVolume resources:

  $ oc get pv
  No resources found

If there are no PersistentVolumes available, please continue with this guide. We are going to set up an NFS server on the cluster node and show Openshift how to connect to it.

Note: This guide will lead you through the process of setting up PVs that use the deprecated Recycle reclaim policy. This makes the PersistentVolume available again as soon as the PersistentVolumeClaim resource is terminated and removed; however, the data are left untouched on the NFS share. While this is suitable for development purposes, be advised that old data (from previous mounts) will still be available on the volume. Please consult the Kubernetes docs for other options.

                                      Manual steps

See the automated Ansible playbook below for easier-to-use provisioning.

                                      Prepare remote host

                                      1. SSH to the Quicklab node, and become superuser:

  curl https://gitlab.cee.redhat.com/cee_ops/quicklab/raw/master/docs/quicklab.key --output ~/.ssh/quicklab.key
  chmod 600 ~/.ssh/quicklab.key
  ssh -i ~/.ssh/quicklab.key -o "UserKnownHostsFile /dev/null" -o "StrictHostKeyChecking no" quicklab@HOST

  # On HOST
  sudo su -
                                      2. Install nfs-utils package

                                        yum install nfs-utils
                                      3. Create exported directories (for example in /mnt/nfs) and set ownership and permissions

  mkdir -p /mnt/nfs/A ...
  chown nfsnobody:nfsnobody /mnt/nfs/A
  chmod 0777 /mnt/nfs/A
                                      4. Populate /etc/exports file referencing directories from previous step to be accessible from your nodes as read,write:

   /mnt/nfs/A node1(rw) node2(rw) ...
   ...
                                      5. Allow NFS in firewall

  firewall-cmd --permanent --add-service mountd
  firewall-cmd --permanent --add-service rpc-bind
  firewall-cmd --permanent --add-service nfs
  firewall-cmd --reload
                                      6. Start and enable NFS service

                                        systemctl enable --now nfs-server

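Before moving on, you can optionally double-check the exports from the NFS host itself; both commands ship with the nfs-utils package installed above:

  # Run on the NFS host: list what is currently exported and to whom
  exportfs -v
  showmount -e localhost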
                                      Add PersistentVolumes to Openshift cluster

                                      Login as a cluster admin and create a PersistentVolume resource for each network share using this manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: NAME # Unique name
spec:
  capacity:
    storage: CAPACITY # Keep in mind the total max size, the Quicklab host has a disk size of 20Gi total (usually ~15Gi of available and usable space)
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /mnt/nfs/A # Path to the NFS share on the server
    server: HOST_IP # Not a hostname
  persistentVolumeReclaimPolicy: Recycle

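To confirm that Openshift can actually bind these volumes, you can create a small throwaway PersistentVolumeClaim and check that it reaches the Bound state. This is a minimal sketch; the claim name and requested size are arbitrary:

  # Hypothetical test claim -- any name works, and any size up to your smallest PV
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nfs-test-claim
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi

Apply it with oc apply -f, verify that oc get pvc nfs-test-claim shows Bound, then delete the claim; thanks to the Recycle policy the PV becomes Available again.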
                                      Using Ansible

To avoid all the hassle of the manual setup, we can use the Ansible playbook playbook.yaml.

                                      Setup

Please install Ansible and the additional collections from Ansible Galaxy needed by this playbook: ansible.posix for the firewalld module and community.kubernetes for the k8s module. Also install the underlying Python dependency, openshift.

$ ansible-galaxy collection install ansible.posix
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'ansible.posix:1.1.1' to '/home/tcoufal/.ansible/collections/ansible_collections/ansible/posix'
Downloading https://galaxy.ansible.com/download/ansible-posix-1.1.1.tar.gz to /home/tcoufal/.ansible/tmp/ansible-local-43567u9ge76rl/tmpyttcjmul
ansible.posix (1.1.1) was installed successfully

$ ansible-galaxy collection install community.kubernetes
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.kubernetes:1.0.0' to '/home/tcoufal/.ansible/collections/ansible_collections/community/kubernetes'
Downloading https://galaxy.ansible.com/download/community-kubernetes-1.0.0.tar.gz to /home/tcoufal/.ansible/tmp/ansible-local-29431yk2zoutk/tmpwgl4xsnb
community.kubernetes (1.0.0) was installed successfully

$ pip install --user openshift
...
Installing collected packages: kubernetes, openshift
    Running setup.py install for openshift ... done
Successfully installed kubernetes-11.0.0 openshift-0.11.2

                                      Additionally please login to your Quicklab cluster via oc login as a cluster admin.

                                      Configuration

Please view and modify the env.yaml file (or create additional variable files and select one before executing the playbook via the vars_file variable).

                                      Example environment file:

quicklab_host: "upi-0.tcoufaldev.lab.upshift.rdu2.redhat.com"

pv_count_per_size:
  1Gi: 6
  2Gi: 2
  5Gi: 1
• quicklab_host - Points to one of the "Hosts" from your Quicklab Cluster info tab
• pv_count_per_size - Maps the desired PV size to the number of volumes of that size:
  • Use the target PV size as the key (follow the Go/Kubernetes quantity notation)
  • Use the volume count for that "size" key as the value
  • Keep in mind that the total, sum(key*value for key,value in pv_count_per_size.items()), must stay below the disk size of the Quicklab instance (usually ~15Gi of available space); a quick way to check this is sketched after this list

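The expression above is already Python-like; a tiny, hypothetical helper (not part of the playbook) can compute the total for an env.yaml file, assuming PyYAML is installed and every size key uses the Gi suffix:

  # Hypothetical helper: report the total capacity requested by pv_count_per_size
  import yaml  # pip install pyyaml

  with open("env.yaml") as f:
      env = yaml.safe_load(f)

  total_gi = sum(
      int(size[:-2]) * count   # "5Gi" -> 5 (strip the "Gi" suffix)
      for size, count in env["pv_count_per_size"].items()
  )
  print(f"Total requested capacity: {total_gi}Gi (keep it under ~15Gi)")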
                                      Run the playbook

Run playbook.yaml (if you created a new environment file and you'd like to use one other than the default env.yaml, please specify it via -e vars_file=any-filename.yaml):

                                      $ ansible-playbook playbook.yaml
Example output:
                                      PLAY [Dynamically create Quicklab host in Ansible] **********************************************************************
                                      +
                                      +TASK [Gathering Facts] **************************************************************************************************
                                      +ok: [localhost]
                                      +
                                      +TASK [Load variables file] **********************************************************************************************
                                      +ok: [localhost]
                                      +
                                      +TASK [Preprocess the PV count per size map to a flat list] **************************************************************
                                      +ok: [localhost]
                                      +
                                      +TASK [Fetch Quicklab certificate] ***************************************************************************************
                                      +ok: [localhost]
                                      +
                                      +TASK [Adding host] ******************************************************************************************************
                                      +changed: [localhost]
                                      +
                                      +TASK [Get available Openshift nodes] ************************************************************************************
                                      +ok: [localhost]
                                      +
                                      +TASK [Preprocess nodes k8s resource response to list of IPs] ************************************************************
                                      +ok: [localhost]
                                      +
                                      +PLAY [Setup NFS on Openshift host] **************************************************************************************
                                      +
                                      +TASK [Gathering Facts] **************************************************************************************************
                                      +ok: [quicklab]
                                      +
                                      +TASK [Copy localhost variables for easier access] ***********************************************************************
                                      +ok: [quicklab]
                                      +
                                      +TASK [Install the NFS server] *******************************************************************************************
                                      +ok: [quicklab]
                                      +
                                      +TASK [Create export dirs] ***********************************************************************************************
                                      +changed: [quicklab] => (item=['1Gi', 0])
                                      +changed: [quicklab] => (item=['1Gi', 1])
                                      +changed: [quicklab] => (item=['1Gi', 2])
                                      +changed: [quicklab] => (item=['1Gi', 3])
                                      +changed: [quicklab] => (item=['1Gi', 4])
                                      +changed: [quicklab] => (item=['1Gi', 5])
                                      +changed: [quicklab] => (item=['2Gi', 0])
                                      +changed: [quicklab] => (item=['2Gi', 1])
                                      +changed: [quicklab] => (item=['5Gi', 0])
                                      +
                                      +TASK [Populate /etc/exports file] ***************************************************************************************
                                      +changed: [quicklab]
                                      +
                                      +TASK [Allow services in firewall] ***************************************************************************************
                                      +changed: [quicklab] => (item=nfs)
                                      +changed: [quicklab] => (item=rpc-bind)
                                      +changed: [quicklab] => (item=mountd)
                                      +
                                      +TASK [Reload firewall] **************************************************************************************************
                                      +changed: [quicklab]
                                      +
                                      +TASK [Enable and start NFS server] **************************************************************************************
                                      +changed: [quicklab]
                                      +
                                      +TASK [Reload exports when the server was already started] ***************************************************************
                                      +skipping: [quicklab]
                                      +
                                      +PLAY [Create PersistentVolumes in OpenShift] ****************************************************************************
                                      +
                                      +TASK [Gathering Facts] **************************************************************************************************
                                      +ok: [localhost]
                                      +
                                      +TASK [Find IPv4 of the host] ********************************************************************************************
                                      +ok: [localhost]
                                      +
                                      +TASK [Create PersistentVolume resource] *********************************************************************************
                                      +changed: [localhost] => (item=['1Gi', 0])
                                      +changed: [localhost] => (item=['1Gi', 1])
                                      +changed: [localhost] => (item=['1Gi', 2])
                                      +changed: [localhost] => (item=['1Gi', 3])
                                      +changed: [localhost] => (item=['1Gi', 4])
                                      +changed: [localhost] => (item=['1Gi', 5])
                                      +changed: [localhost] => (item=['2Gi', 0])
                                      +changed: [localhost] => (item=['2Gi', 1])
                                      +changed: [localhost] => (item=['5Gi', 0])
                                      +
                                      +PLAY RECAP **************************************************************************************************************
                                      +localhost                  : ok=10   changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
                                      +quicklab                   : ok=8    changed=5    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
                                        Quicklab

                                        Set up a new Quicklab cluster

                                        1. Go to https://quicklab.upshift.redhat.com/ and log in (top right corner)

                                        2. Click New cluster

  3. Select openshift4upi template and a region you like the most, then select the reservation duration, the rest can be left as is:

     Cluster is active for the first time

                                        4. Now click on New Bundle button in Product information section

                                        5. Select openshift4upi bundle. A new form loads - you can keep all the values as they are (you can ignore the warning on top as well, since this is the first install attempt of Openshift on that cluster): +

                                        6. Now click on New Bundle button in Product information section

  7. Select the openshift4upi bundle. A new form loads. Opt in for the htpasswd credentials provider. (You can ignore the warning on top as well, since this is the first install attempt of Openshift on that cluster):

     Select a bundle

  8. Wait for OCP4 to install. After successful installation you should see a cluster history log like this:

     Cluster log after OCP4 install

                                        9. Use the link and credentials from the Cluster Information section to access your cluster. +

  10. Use the link and credentials from the Cluster Information section to access your cluster. Verify it contains login information for both kube:admin and quicklab user.

      Cluster information

                                        11. Login as the kubeadmin, take the value from “Hosts” and port 6443.\ -For example:

                                          oc login upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443

                                        Install Argo CD on your cluster

                                        1. kube:admin is not supported in user api, therefore you have to create additional user. Simplest way is to deploy an Oauth via Htpasswd:

                                        2. Create a htpasswd config file and deploy it to OpenShift:

                                          $ htpasswd -nb username password > oc.htpasswd
                                          -$ oc create secret generic htpass-secret --from-file=htpasswd=oc.htpasswd -n openshift-config
                                          -$ cat <<EOF | oc apply -f -
                                          -apiVersion: config.openshift.io/v1
                                          -kind: OAuth
                                          -metadata:
                                          -  name: cluster
                                          -spec:
                                          -  identityProviders:
                                          -  - name: my_htpasswd_provider
                                          -    mappingMethod: claim
                                          -    type: HTPasswd
                                          -    htpasswd:
                                          -      fileData:
                                          -        name: htpass-secret
                                          -EOF
                                        3. Grant the new user admin cluster-admin rights

                                          oc adm policy add-cluster-role-to-user cluster-admin username
                                        4. Now log out and log in using the htpasswd provider (the new username). Generate new API token and login via this token on your local CLI

                                        5. Now you can follow the upstream docs. Create the projects:

                                          oc new-project argocd-test
                                          +    

  6. Log in as the kube:admin user: take the value from "Hosts" and use port 6443. For example:

                                        oc login upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443

                                        Install Argo CD on your cluster

  1. kube:admin is not supported in the user API, which is why we opted in for the htpasswd provider during the bundle install.

  2. Log in as the quicklab user using the htpasswd provider in the web console to create the Openshift user, then log out.

  3. Log in as the kube:admin user in the web console and in your local CLI client.

  4. Grant the htpasswd quicklab user cluster-admin rights

                                          oc adm policy add-cluster-role-to-user cluster-admin quicklab
  5. Now log out and log back in using the htpasswd provider (the quicklab user). Generate a new API token and log in with this token on your local CLI (see the example after this list).

                                        6. Now you can follow the upstream docs. Create the projects:

                                          oc new-project argocd-test
                                           oc new-project aicoe-argocd-dev
                                        7. Make sure you have imported the required gpg keys

                                        8. And deploy

                                          $ kustomize build manifests/crds --enable_alpha_plugins | oc apply -f -
                                           customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
                                           customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
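For step 5 above, the local CLI login looks roughly like this; the token comes from the web console's "Copy Login Command" page, and both values below are placeholders:

  oc login --token=<api-token> --server=https://upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443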
                                          ODH Logo

                                          Get ArgoCD to manage your Application

When migrating an application’s deployment to be managed by ArgoCD, use the following checklist to verify your process.

                                          • Ensure your application manifests can be built using Kustomize.
                                          • If using secrets, make sure to include the .sops.yaml file in your repository.
                                            • See here for more info.
                                          • Create the role granting access to namespace.
                                            • See here for more info.
                                            • This role should be tracked in your application manifest repository.

                                          The following items require a PR:

                                          • Ensure the application repository is added in the repository file in /manifests/overlays/<target_env>/configs/argo_cm/repositories.
                                          • Ensure that all OCP resources that will be managed by ArgoCD on this cluster are included in the inclusions list in /manifests/overlays/<target_env>/configs/argo_cm/resource.inclusions.
                                            • See here for more info.
• Create the ArgoCD Application manifest (a rough example is sketched after this list)
                                            • See here for more info.
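For the "Create the ArgoCD Application manifest" item, the sketch below shows roughly what such a manifest looks like. It is a minimal, hypothetical example: the application name, repo URL, path, namespaces, and ArgoCD project are placeholders to replace with your own values.

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: my-app                      # placeholder
    namespace: aicoe-argocd-dev       # assumption: the namespace where ArgoCD runs
  spec:
    project: default                  # placeholder ArgoCD project
    source:
      repoURL: https://github.com/<your-org>/<your-app-manifests>  # placeholder
      targetRevision: master
      path: manifests/overlays/dev    # placeholder path built with Kustomize
    destination:
      server: https://kubernetes.default.svc   # or the target cluster's API URL
      namespace: my-app-namespace     # placeholder target namespace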

                                          The following items require a PR with sops access:

                                          • Ensure your namespace exists in your cluster’s spec see here for details.
                                          • If you are switching between ArgoCD managed namespaces, and that namespace was deleted in OCP, then ensure it’s also removed from your cluster’s credentials found here /manifests/overlays/<target_env>/secrets/clusters.
                                          \ No newline at end of file diff --git a/operators/continuous-deployment/docs/give_argocd_access_to_your_project/index.html b/operators/continuous-deployment/docs/give_argocd_access_to_your_project/index.html index 8cc67665b476..95aaf975c530 100644 --- a/operators/continuous-deployment/docs/give_argocd_access_to_your_project/index.html +++ b/operators/continuous-deployment/docs/give_argocd_access_to_your_project/index.html @@ -14,7 +14,7 @@ - }
                                          ODH Logo

                                          Give ArgoCD access to your project

ArgoCD uses an SA named argocd-manager to deploy resources to another cluster/namespace. These SAs need access to the resources they will be deploying; this is done via roles and rolebindings.

                                          In your namespace, you will need to deploy a rolebinding like the one below:

                                          apiVersion: authorization.openshift.io/v1
                                           kind: RoleBinding
                                           metadata:
                                             name: argocd-manager-rolebinding
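The snippet above is cut off in this rendering. Purely as an illustration (not the repository's actual manifest), a rolebinding of this kind typically continues along the lines below, assuming the admin role is being granted and the argocd-manager service account lives in an ArgoCD namespace such as aicoe-argocd-dev:

  # Hypothetical continuation, for illustration only
  apiVersion: authorization.openshift.io/v1
  kind: RoleBinding
  metadata:
    name: argocd-manager-rolebinding
    namespace: your-project            # placeholder: the namespace ArgoCD should manage
  roleRef:
    name: admin                        # assumption: the standard admin role
  subjects:
    - kind: ServiceAccount
      name: argocd-manager
      namespace: aicoe-argocd-dev      # assumption: where the argocd-manager SA lives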
                                          ODH Logo

                                          Inclusions explained

It is likely that your team does not have get access to all namespace-scoped resources. This can be an issue when deploying apps to a namespace in a cluster, because ArgoCD will attempt to discover all namespace-scoped resources and be denied. To avoid this, we limit ArgoCD to discovering the resources that are available to project admins; these should be added to the resource.inclusions list.
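For illustration only (the exact entries for this cluster live in the linked file), an entry in that inclusions list generally looks like this:

  resource.inclusions: |
    - apiGroups:
        - "apps"
      kinds:
        - Deployment
      clusters:
        - https://your-cluster-api-url:6443   # placeholder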

                                          ODH Logo

                                          Secret Management

Secret management is handled using the KSOPS plugin. Use sops to encrypt your secrets in VCS.

                                          Overview: KSOPs

                                          KSOPS, or kustomize-SOPS, is a kustomize plugin for SOPS encrypted resources. KSOPS can be used to decrypt any Kubernetes resource, but is most commonly used to decrypt encrypted Kubernetes Secrets and ConfigMaps. As a kustomize plugin, KSOPS allows you to manage, build, and apply encrypted manifests the same way you manage the rest of your Kubernetes manifests.

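As a rough illustration of the workflow (the key fingerprint and file name are placeholders; consult the linked docs for the exact setup used here), encrypting a Secret manifest with sops so that KSOPS can decrypt it during kustomize build looks roughly like this:

  # Illustrative only -- fingerprint and file name are placeholders
  sops --encrypt --pgp <YOUR_GPG_FINGERPRINT> --in-place secret.enc.yaml
  # KSOPS later decrypts this file when it is referenced from a ksops generator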
                                          Requirements

                                          See versions to download the appropriate version of SOPS, Kustomize, and KSOPS.

                                          0. Verify Requirements

                                          Before continuing, verify your installation of Go, SOPS, and gpg. Below are a few non-comprehensive commands to quickly check your installations:

                                          # Verify that the latest version of Go is installed i.e. v1.13 and above
                                           go version
                                           
                                           # Verify that your $GOPATH is set

                                          Modify your ODH deployment

                                          Below are the steps to modify your ODH deployment. Add components, content or anything else.

                                          Fork repo

                                          Fork the repo https://github.com/operate-first/odh on GitHub.

                                          Replace repo reference in the ArgoCD app

                                          Modify your odh-deployment Application resource in ArgoCD to point to your own fork.

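If you prefer to edit the resource's YAML directly (instead of the UI shown in the screenshot below), the change is roughly this; field names assume the standard Argo CD Application spec and the fork URL is a placeholder:

  spec:
    source:
      repoURL: https://github.com/<your-username>/odh   # was: https://github.com/operate-first/odh
      targetRevision: master                             # or the branch you work on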
Edit repo in odh-deployment Application resource

                                          ODH Logo

                                          Versions

                                          ArgoCD: 1.7.4

KSOPS: 2.1.4

                                          Kustomize: 3.8.0

                                          SOPS: 3.6.0

                                          The KSOPS and Kustomize versions refer to the ones provisioned with ArgoCD.

                                          Kustomize versions can be adjusted manually using customized versions.

                                          ODH Logo

                                          Deploying a development environment

Prerequisites

                                          • An OCP 4.x Development cluster
                                          • Must have cluster admin (not kube:admin)

                                          Instructions

Create the projects aicoe-argocd-dev and argocd-test. The latter will be used

                                          oc new-project argocd-test
                                           oc new-project aicoe-argocd-dev

                                          Deploy ArgoCD

                                          git clone git@github.com:operate-first/continuous-deployment.git
                                           cd continuous-deployment
                                            ODH Logo

                                            MOC CNV Sandbox

                                            Configuration and documentation for the CNV Sandbox at the Mass Open Cloud (MOC).

                                            Playbooks

                                            • playbook-preinstall.yml

                                              Set up provisioning host and generate the install configuration.

                                            • playbook-postinstall.yml

                                              Fetches authentication credentials from the provisioning host and then uses the OpenShift API to perform post-configuration tasks (installing certificates, configuring SSO, installing CNV, etc).

                                            Encryption

Files with credentials and other secrets are encrypted using ansible-vault. The vault key itself is included in the repository and …

                                            ODH Logo

                                            Mass Open Cloud OpenShift + CNV Baremetal Cluster

                                            The MOC CNV cluster is a small OpenShift 4.x cluster operated as a partnership between the Mass Open Cloud and Red Hat.

                                            The cluster is running on baremetal nodes with support for virtualization via the Container Native Virtualization (CNV) operator. See the hardware summary for more information.

                                            Request access to the MOC CNV cluster

1. File a ticket in our issue tracker
                                              ODH Logo

                                              MOC CNV Cluster Hardware Configuration

                                              The MOC CNV cluster is comprised of the following hardware:

                                              • 3 control nodes
                                              • 3 general purpose worker nodes
                                              • 3 storage worker nodes

                                              Control nodes

                                              • 2 * Intel Xeon E5-2660 @ 2.20Ghz (16 cores total)
                                              • 384 GB RAM
                                              • 372 GB SSD root disk
                                              • 2 * 10 Gb ethernet interfaces

                                              Worker nodes

                                              • 2 * Intel Xeon E5-2660 @ 2.20Ghz (16 cores total)
                                              • 384 GB RAM
                                              • 372 GB SSD root disk
                                              • 2 * 10 Gb ethernet interfaces

                                              Storage nodes

                                              • 2 * Intel Xeon E5-2660 @ 2.20Ghz (16 cores total)
                                              • 384 GB RAM
                                              • 372 GB SSD root disk
                                              • 3 * 558 GB rotational disk for Ceph
                                              • 2 * 10 Gb ethernet interfaces
                                              Manifests in this directory should be applied using the kustomize command, like this:

                                              kustomize build | oc apply -f-

Many of the manifests can be applied using oc apply -k <directory>, but where the syntax of oc apply -k and kustomize has diverged, we prefer the kustomize syntax.

                                              ODH Logo

                                              2020/05/13

                                              • Yesterday we had our initial meeting (kickoff) with CNV HTB PM. We’ll meet again in two weeks after we get the Deployment Kit.
                                              • Repositories and task tracking
                                              • Designs from Israeli Team - Draft on General Dilemmas
                                              • We need a single place to go for cluster monitoring, to combine data from multiple levels and do correlations.
                                              • Hardware wish list
  • Starting with SSDs for our Masters (instead of HDDs)
                                                • HW for our OCS-based storage solution
                                              • ACM team engagement
                                                • Bill Burns: could you pull some people from the ACM team interested in Operate First?
                                                • Team: Use the IRC for Open Infra Labs (#openinfralabs on freenode)
                                              • Others
                                              • Next Meeting:
                                                • Review answers to architecture questions posed today.
                                                • A look into what we will need (ideally) to deploy the services we need.
                                                • Answers to questions on CISCO nodes.

                                              2020/05/12

                                              Initial meeting for CNV HTP on MOC

                                              • Introductions
                                              • Program Intro/Overview
                                                • The purpose is that we get the right feedback in terms of features, bugs, etc as we go forward with our initial GA.
                                                • Expectations: A Deployment Kit to stand up the product in a standard way as quickly and frictionless as possible so that we can execute the test plan with the use cases of interest we would like you to execute as part of this program.
                                                  • This does not mean that we can’t go out and try other things outside of the test plan. We expect the MOC team to stretch this test plan.
                                                  • Our plan would be to start with an install using the prescribed plan and then figure out ways to do it automatically.
                                                • A process to report issues as part of the customer portal.
                                                • Our timing for having the test kit available within the next one or two weeks.
                                                • We’ll have Ian engaged with getting the environment setup (fortunately w/o any hiccups due to the COVID-19 situation).
                                                • Recommended HW and environment requirements for the CNV HTB.
                                                  • We have 6 Dell RX620 servers.
                                                  • Questions on HW support:
                                                    • Networking
                                                      • Cisco nodes for which RHEL doesn’t have the right drivers.
                                                      • Do we have the correct iPXE driver support on the Dell nodes (Lars to clarify…)
                                                        • NIC models
                                                      • Lars: Provide more specific details about our HW configuration.
                                                    • Storage
                                                      • Initially
                                                        • Use Local Storage - 600 GB HDDs
                                                      • Subsequently
                                                        • Use External Storage using OCS 4.4 - Ceph Clusters
                                                    • Memory
                                                      • 128 GBs
                                                  • Info about network configuration
                                                    • We have administrative access to network infrastructure (routers, switches, etc).
                                                  • Info about Workloads
                                                    • We are looking to move our management plane to a single platform.
                                                    • We would love to set up the infrastructure for the New England Research Cloud to run on OpenShift Container Platform + OpenShift virtualization.
                                                  • Info about installation:
                                                    • Initially, we’ll start with the current available OCP and CNV version. This environment is expected to be burned down, however it would be better to do the upgrade.
                                                    • Subsequently, we’ll move up to OCP 4.5 / CNV 2.4.
                                                • Next Steps
                                                  • Get started with the OCP IPI BM installation as soon as possible.
                                                  • Expect the Deployment Kit to be available within two weeks.

                                              [ ] Rick: set another follow up meeting in two weeks.

                                              2020-05-06 PM

                                              UPI vs IPI

UPI is more mature, but IPI is the preferred option, especially if you want to …

                                              ODH Logo

                                              2020/05/13

                                              • Yesterday we had our initial meeting (kickoff) with CNV HTB PM. We’ll meet again in two weeks after we get the Deployment Kit.
                                              • Repositories and task tracking
                                              • Designs from Israeli Team - Draft on General Dilemmas
                                              • We need a single place to go for cluster monitoring, to combine data from multiple levels and do correlations.
                                              • Hardware wish list
  • Starting with SSDs for our Masters (instead of HDDs)
                                                • HW for our OCS-based storage solution
                                              • ACM team engagement
                                                • Bill Burns: could you pull some people from the ACM team interested in Operate First?
                                                • Team: Use the IRC for Open Infra Labs (#openinfralabs on freenode)
                                              • Others
                                              • Next Meeting:
                                                • Review answers to architecture questions posed today.
                                                • A look into what we will need (ideally) to deploy the services we need.
                                                • Answers to questions on CISCO nodes.

                                              2020/05/12

                                              Initial meeting for CNV HTP on MOC

                                              • Introductions
                                              • Program Intro/Overview
                                                • The purpose is that we get the right feedback in terms of features, bugs, etc as we go forward with our initial GA.
                                                • Expectations: a Deployment Kit to stand up the product in a standard way, as quickly and frictionlessly as possible, so that you can execute the test plan with the use cases of interest we would like you to run as part of this program.
                                                  • This does not mean that we can’t go out and try other things outside of the test plan. We expect the MOC team to stretch this test plan.
                                                  • Our plan would be to start with an install using the prescribed plan and then figure out ways to do it automatically.
                                                • A process to report issues as part of the customer portal.
                                                • Our timing for having the test kit available within the next one or two weeks.
                                                • We’ll have Ian engaged with getting the environment setup (fortunately w/o any hiccups due to the COVID-19 situation).
                                                • Recommended HW and environment requirements for the CNV HTB.
                                                  • We have 6 Dell RX620 servers.
                                                  • Questions on HW support:
                                                    • Networking
                                                      • Cisco nodes for which RHEL doesn’t have the right drivers.
                                                      • Do we have the correct iPXE driver support on the Dell nodes (Lars to clarify…)
                                                        • NIC models
                                                      • Lars: Provide more specific details about our HW configuration.
                                                    • Storage
                                                      • Initially
                                                        • Use Local Storage - 600 GB HDDs
                                                      • Subsequently
                                                        • Use External Storage using OCS 4.4 - Ceph Clusters
                                                    • Memory
                                                      • 128 GBs
                                                  • Info about network configuration
                                                    • We have administrative access to network infrastructure (routers, switches, etc).
                                                  • Info about Workloads
                                                    • We are looking to move our management plane to a single platform.
                                                    • We would love to set up the infrastructure for the New England Research Cloud to run on OpenShift Container Platform + OpenShift virtualization.
                                                  • Info about installation:
                                                    • Initially, we’ll start with the currently available OCP and CNV versions. This environment is expected to be burned down; however, it would be better to do the upgrade.
                                                    • Subsequently, we’ll move up to OCP 4.5 / CNV 2.4.
                                                • Next Steps
                                                  • Get started with the OCP IPI BM installation as soon as possible.
                                                  • Expect the Deployment Kit to be available within two weeks.

                                              [ ] Rick: set another follow up meeting in two weeks.

                                              2020-05-06 PM

                                              UPI vs IPI

                                              UPI is more mature, but IPI is the preferred option, especially if you want to add things after deploying the cluster. IPI looks like the way we’ll go.

                                              Requirements

                                              Minimal required configuration:

                                              • 1 Bootstrap VM
                                              • 3 masters (to get the cluster to come up)
                                              • 2 Workers (more can be added later)

                                              The greater the uniformity, the greater the chance of success. Nodes within a role must be identical. NIC names (at least for the provisioning NIC) must be identical across all roles.

                                              There must be existing DHCP and DNS services.
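To make the minimal topology concrete, here is a rough sketch of what a bare-metal IPI install-config.yaml with 3 masters and 2 workers might look like. It is illustrative only: the domain, cluster name, network CIDR, VIPs, MAC addresses, and BMC credentials are placeholders, not values from our environment.

```yaml
apiVersion: v1
baseDomain: example.moc.lab            # placeholder domain
metadata:
  name: moc-cnv                        # placeholder cluster name
controlPlane:
  name: master
  replicas: 3                          # three masters to bring the cluster up
  platform:
    baremetal: {}
compute:
  - name: worker
    replicas: 2                        # two workers to start; more can be added later
    platform:
      baremetal: {}
networking:
  machineNetwork:
    - cidr: 10.0.0.0/24                # placeholder; must match the existing DHCP range
platform:
  baremetal:
    apiVIP: 10.0.0.5                   # placeholder virtual IPs
    ingressVIP: 10.0.0.6
    provisioningNetworkInterface: eno1 # NIC name must be identical across nodes
    hosts:                             # one entry per physical node (masters + workers)
      - name: master-0
        role: master
        bootMACAddress: 52:54:00:00:00:01
        bmc:
          address: ipmi://10.0.0.101   # placeholder BMC details
          username: admin
          password: changeme
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
```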

                                              Architecture

                                              KNI UPI Lab Diagram:

                                              KNI UPI Lab Diagram

                                              The goal is to allow people to reproduce this environment (OCP + CNV on BM) given that you have the “same” hardware.

                                              Ansible playbook for deploying with IPI

                                              HW is defined in an inventory file.

                                              Adding new nodes would involve adding DHCP + DNS entries for them.
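As a sketch of the idea, an inventory for such a playbook could group the lab hardware by role; every hostname, MAC, and BMC address below is a placeholder, and the actual inventory format expected by the playbook may differ.

```yaml
# Hypothetical inventory.yml: hardware grouped by role
all:
  children:
    masters:
      hosts:
        master-0.moc-cnv.example.moc.lab:
          boot_mac: "52:54:00:00:00:01"      # illustrative host variables
          bmc_address: "ipmi://10.0.0.101"
        master-1.moc-cnv.example.moc.lab: {}
        master-2.moc-cnv.example.moc.lab: {}
    workers:
      hosts:
        worker-0.moc-cnv.example.moc.lab: {}
        worker-1.moc-cnv.example.moc.lab: {}
```

Adding a node then means adding a host entry here plus the matching DHCP and DNS records.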

                                              2020-05-06 AM

                                              Vision

                                              • A working example of how you should do this (private cloud) that we can show to customers.
                                              • Put together real use cases for real users.
                                              • Create/destroy OCP clusters on demand.
                                                • Individual Clusters: The Open Data Hub team gives clusters to individual users/tenants (specific configurations, data sets, etc).
                                                • Shared Clusters (like the OpenStack use case)
                                              • Kick the tires in the product.
                                              • Multi Cluster Use Case: OpenShift clusters running on top of CNV 
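As a flavor of the building block behind the multi-cluster use case, a minimal OpenShift Virtualization (KubeVirt) VirtualMachine definition is sketched below; the name and the demo container disk image are placeholders, not part of our plan.

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: demo-vm                      # placeholder name
spec:
  running: false                     # start the VM explicitly later
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo   # small demo image
```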

                                              Action items

                                              • Rick: Work with Lars to create a “living” document for this project.
                                                • Start with an open repository in GitLab. Do the documentation in markdown.
                                              • Rick: Work with Lars to get the task tracking set up.
                                                • Track this effort as part of the Open Infra Labs.
                                                • GitLab is a source repository that contains some project management capabilities.
                                              • Add BU Team Members to our weekly call:
                                                • Kristi Nikola
                                                • Naved Ansari

                                              We need to add some missing pieces (i.e. monitoring, storage & openshift) and the architectural diagram of how we are going to put this together.

                                              We need to identify storage requirements for the initial setup and expansion afterwards.

                                              Initially, we could use an existing Ceph cluster, preferably an OCS environment. (what are the requirements for this?)

                                              Hardware Resources Table (also the recommended HW and environment requirements for the CNV HTB).

                                              Role      | OS           | CPU (min/rec) | RAM (min/rec, GB) | Storage (min/rec, GB)
                                              Bootstrap | RHCOS/RHEL 8 | 4/4           | 16/16             | 120/120
                                              Master    | RHCOS        | 4/16          | 16/64             | 120/240
                                              Worker    | RHCOS        | 2/16          | 8/128             | 120/?

                                              Questions

                                              • What do we lose by using local storage instead of OCS?

                                                Data persistence issues without shared storage.

                                              • What do we gain by using OCS?

                                                Ceph cluster deployed on k8s.
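One way to see the difference: a workload that needs a shared (ReadWriteMany) volume cannot be satisfied by plain local volumes, but can be by an OCS/CephFS storage class. The claim below is illustrative only; the storage class name is the one OCS typically creates and is an assumption here.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                  # placeholder name
spec:
  accessModes:
    - ReadWriteMany                  # requires shared storage such as CephFS
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs   # assumed OCS-provided class
```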

                                              \ No newline at end of file diff --git a/operators/moc-cnv-sandbox/meeting-notes/2020-05-20/index.html b/operators/moc-cnv-sandbox/meeting-notes/2020-05-20/index.html index 3127d9b13ee8..09e6143572e3 100644 --- a/operators/moc-cnv-sandbox/meeting-notes/2020-05-20/index.html +++ b/operators/moc-cnv-sandbox/meeting-notes/2020-05-20/index.html @@ -14,7 +14,7 @@ - }
                                                  ODH Logo

                                                  2020/05/20

                                                  Agenda

                                                  • Announcements
                                                    • Our weekly project meeting will now be Tuesdays at 10:30 Boston / 17:30 Tel Aviv.
                                                  • Reminder
                                                    • We’ll have a meeting at 11:30 AM US Eastern time to bring the ACM team up to speed on Mass Open Cloud so we can all agree on scope of ACM involvement in it.
                                                  • Scope Clarification <Review and expand>
                                                    • Our goal is to create a repeatable deployment including OpenShift + virtualization and Advanced Cluster Management on Bare Metal using the Open Infrastructure Labs as our upstream project and the Mass Open Cloud (MOC) as the environment in which we will develop our downstream project. We have decided to name the output of this downstream project Mass Open Cloud Next Gen.
                                                    • All the work developed as part of MOC Next Gen should land in the Open Infra Labs upstream project, this includes, but is not limited to deployment scripts, reference architecture, requirements, etc.
                                                    • MOC Next Gen consists of the following pillars/components (to be expanded with the expectations from each pillar/component):
                                                      • OpenShift Container Platform (OCP)
                                                      • OpenShift Virtualization
                                                      • Advanced Cluster Management
                                                      • Consulting
                                                      • Infrastructure
                                                      • Operations
                                                  • Review answers to architecture questions posed in our last meeting.
                                                    • Usage of Cisco nodes?
                                                      • We’ve decided to use DELL servers since the Cisco servers have old and unsupported storage controller cards.
                                                    • External monitoring?
                                                      • We’ve decided to start with self-contained monitoring.
                                                      • Monitoring of BM services - Shon found out that there is BM monitoring available; however, it is not yet clear whether it will be enough to monitor components at a lower level (like disks, etc.).
                                                    • CNV monitoring - how can we see the VMs themselves instead of pods or containers?
                                                      • Some of our concerns were addressed in a demo given during [What’s New] OpenShift virtualization [May-2020] (Slides, Recording, Q&A)
                                                    • Data retention period for monitoring data (logs & statistics)
                                                      • We have decided to keep 3 days’ worth of monitoring data on the monitoring nodes, and we’ll export data beyond that to long-term storage at the NorthEast Storage Exchange (NESE). The NESE is a 26 PB data lake adjacent to the MOC. (A configuration sketch follows this list.)
                                                        • Once in place we’ll evaluate against use cases and adjust if needed.
                                                    • Storage for monitoring:
                                                      • Logs:
                                                        • We would like to keep our logs for more than 15 days (see above).
                                                          • This would require increased storage capacity.
                                                          • For all metrics, 15 GB per node?
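A minimal sketch of how the 3-day retention decision could be expressed for the self-contained monitoring stack, assuming the standard cluster-monitoring-config ConfigMap; the storage class and volume size are placeholders, and the export to NESE long-term storage is not shown here.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 3d                        # keep 3 days on the monitoring nodes
      volumeClaimTemplate:
        spec:
          storageClassName: local-sc       # placeholder storage class
          resources:
            requests:
              storage: 40Gi                # placeholder size
```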
                                                      diff --git a/operators/moc-cnv-sandbox/roles/certificate-authority/README/index.html b/operators/moc-cnv-sandbox/roles/certificate-authority/README/index.html
                                                      index ab2ca6f87f02..180f0bd0148b 100644
                                                      --- a/operators/moc-cnv-sandbox/roles/certificate-authority/README/index.html
                                                      +++ b/operators/moc-cnv-sandbox/roles/certificate-authority/README/index.html
                                                      @@ -14,4 +14,4 @@
                                                             
                                                             
                                                             
                                                      -      }
                                                      \ No newline at end of file
                                                      +      }
                                                      \ No newline at end of file
                                                      diff --git a/operators/moc-cnv-sandbox/roles/ocp/README/index.html b/operators/moc-cnv-sandbox/roles/ocp/README/index.html
                                                      index 5dbe4129d29b..f1b9edcb8c38 100644
                                                      --- a/operators/moc-cnv-sandbox/roles/ocp/README/index.html
                                                      +++ b/operators/moc-cnv-sandbox/roles/ocp/README/index.html
                                                      @@ -14,7 +14,7 @@
                                                             
                                                             
                                                             
                                                      -      }

                                                      ODH Logo

                                                      OpenShift API Roles

                                                      The roles in this directory make configuration changes in OpenShift using Ansible’s k8s module.
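For instance, a task in this style (a sketch only, not necessarily how these roles implement it) could ensure that the approved-users group referenced by the authz role exists:

```yaml
- name: Ensure the approved-users group exists   # illustrative task using the k8s module
  k8s:
    state: present
    definition:
      apiVersion: user.openshift.io/v1
      kind: Group
      metadata:
        name: approved-users
      users: []                                  # members are added separately
```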

                                                      Roles

                                                      • api

                                                        Library role used by other roles for common operations.

                                                      • authz

                                                        Ensure that users cannot create resources without being added to an approved-users group.

                                                      • cnv

                                                        Install and configure the CNV operator.

                                                      • default-ingress-certificate

                                                        Update the default ingress certificate.

                                                      • firewall

                                                        Create firewall rules to block traffic from another OpenShift cluster operating on the same network. This caused issues under diff --git a/page-data/data-science/data-science-workflows/Thoth-bots/page-data.json b/page-data/data-science/data-science-workflows/Thoth-bots/page-data.json new file mode 100644 index 000000000000..865844fdacd5 --- /dev/null +++ b/page-data/data-science/data-science-workflows/Thoth-bots/page-data.json @@ -0,0 +1 @@ +{"componentChunkName":"component---src-templates-doc-js","path":"/data-science/data-science-workflows/Thoth-bots/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"e85ff5eb-16fd-5187-99c1-5f7541e5dcd7","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction _objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if (excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"h1\", null, \"Instructions on how to set up various Thoth bots in your project\"), mdx(\"h2\", null, \"Kebechet\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://github.com/thoth-station/kebechet#kebechet\"\n }), \"Kebechet\"), \" is the bot that you can use to automatically update your project dependencies.\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Kebechet can be configured using a yaml configuration file (\", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://github.com/thoth-station/kebechet/blob/master/.thoth.yaml\"\n }), \".thoth.yaml\"), \") in the root of your repo.\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"yaml\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"host\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n 
\"className\": \"token punctuation\"\n }), \":\"), \" khemenu.thoth\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \"station.ninja\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"tls_verify\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token boolean important\"\n }), \"false\"), \"\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"requirements_format\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" pipenv\\n\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"runtime_environments\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"name\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" rhel\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"8\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"operating_system\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"name\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" rhel\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"version\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"\\\"8\\\"\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"python_version\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"\\\"3.6\\\"\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"recommendation_type\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" latest\\n\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"managers\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"name\"), mdx(\"span\", _extends({\n parentName: 
\"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" pipfile\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \"requirements\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"name\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" update\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"configuration\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"labels\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"bot\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"name\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" info\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"name\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" version\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"configuration\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"maintainers\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" goern \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token comment\"\n }), \"# Update this list of project maintainers\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" fridex\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"assignees\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" sesheta\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"labels\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"bot\", mdx(\"span\", 
_extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"changelog_file\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token boolean important\"\n }), \"true\")))))), mdx(\"h2\", null, \"Zuul (Sesheta)\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"You can use the \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://github.com/thoth-station/zuul-config\"\n }), \"zuul\"), \" bot to set up automatic testing and merging for your PRs.\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Zuul can be configured using a yaml configuration file (\", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://github.com/thoth-station/zuul-config#integration-of-zuul-with-github-repos\"\n }), \".zuul.yaml\"), \")\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"yaml\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"project\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"check\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"jobs\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"\\\"noop\\\"\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"gate\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"jobs\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"\\\"noop\\\"\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"kebechet-auto-gate\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token 
key atrule\"\n }), \"jobs\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"\\\"noop\\\"\"))))), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"You can add different types of jobs:\"), mdx(\"ul\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"pf-c-list\"\n }), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"code\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"language-text\"\n }), \"thoth-coala\"), \" job - It uses \", mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"https://coala.io/#/home?lang=Python\"\n }), \"Coala\"), \" for code linting, it can be configured using a \", mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"http://docs.coala.io/en/latest/Users/coafile.html#project-wide-coafile\"\n }), \".coafile\"), \". in the root of your repo.\"), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"code\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"language-text\"\n }), \"thoth-pytest\"), \" job - It uses the pytest module to run tests in your repo.\"))), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Zuul will not merge any PRs for which any of the specified jobs have failed.\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"If there are no jobs specified in the zuul config (only \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"noops\"), \"), zuul will merge any PR as long as it has been approved by an authorized reviewer.\"))));\n}\n;\nMDXContent.isMDXComponent = true;","frontmatter":{"title":"","description":null}}},"pageContext":{"id":"e85ff5eb-16fd-5187-99c1-5f7541e5dcd7","slug":"Thoth-bots"}},"staticQueryHashes":["117426894","3000541721","3753692419"]} \ No newline at end of file diff --git a/page-data/index/page-data.json b/page-data/index/page-data.json index 599ce2952dc4..7dfc667871f0 100644 --- a/page-data/index/page-data.json +++ b/page-data/index/page-data.json @@ -1 +1 @@ -{"componentChunkName":"component---src-templates-doc-js","path":"/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"ed6e1ea7-9833-576c-8dfc-71e5b1a6fe40","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction _objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if 
(excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {\n \"title\": \"Operate First for Open Data Hub\",\n \"description\": \"Operate First for ODH\"\n};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"p\", null, \"The transition from delivering projects to delivering services involves different roles and a different mindset. Features that enable the software to be run at scale need to be built into the project. \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://openinfralabs.org/\"\n }), \"Operate First\"), \" means, we must also operate the project, involving developers from the beginning.\"), mdx(\"p\", null, \"As the AICoE in the Office of the CTO at Red Hat we can lead the way with \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://opendatahub.io/\"\n }), \"Open Data Hub\"), \": operate it in a transparent open cloud environment, build a community around the deployment and define which services would be consumed in what way.\"), mdx(\"p\", null, \"This can act as a blueprint for deploying this service in any environment.\"), mdx(\"p\", null, \"With Operate First, we open up our operational knowledge to all users of Open Data Hub and the open source community. This will allow us to bring the learnings of the SRE team into the open source community and is a potential for us to leverage a broad community input into developing software.\"), mdx(\"p\", null, \"As one of the first steps, we have begun operating Open Data Hub on the Mass Open Cloud(MOC) in an open cloud environment, before we ship it to our customers. At the AICoE, we are focused on creating examples of how ODH is operated and deployed in an open cloud environment, how we perform open source data science in an open cloud environment and sharing our learnings with the community.\"), mdx(\"p\", null, \"This website acts as a landing site for sharing examples from our experience of operating Open Data Hub in an open cloud environment. It is targeted to serve as an upstream platform where a wider community can participate and leverage our work (and we theirs), ultimately to drive an open source solution for cloud operation.\"), mdx(\"h2\", null, \"Getting started\"), mdx(\"p\", null, \"To learn about Open Data Hub and its architecture, visit \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://www.opendatahub.io\"\n }), \"opendatahub.io\"), \".\"), mdx(\"p\", null, \"To get started with using ODH applications deployed and running on an open cloud instance, visit the \", mdx(\"strong\", {\n parentName: \"p\"\n }, \"MOC - ODH Users\"), \" section.\"), mdx(\"p\", null, \"To get started with deploying components on ODH, visit the \", mdx(\"strong\", {\n parentName: \"p\"\n }, \"MOC - ODH Operations\"), \" section.\"), mdx(\"p\", null, \"To learn more about Operate First: making cloud operations as fundamental as functionality in the upstreams. 
Read the \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://openinfralabs.org/operate-first-manifesto/\"\n }), \"Operate First Community Manifesto\")), mdx(\"h2\", null, \"Contribute\"), mdx(\"p\", null, \"To contribute to the Operate First initiative, seek support or report bugs on the website, please open an issue \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://github.com/operate-first/operate-first.github.io/issues\"\n }), \"here\"), \".\"), mdx(\"h2\", null, \"Phases\"), mdx(\"h3\", null, \"Crawl\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, \"CI/Continous Delivery pipeline to build ODH assets\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Continuous Deployment pipeline to deploy ODH on MOC\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Incident and outage management\")), mdx(\"h3\", null, \"Walk\"), mdx(\"p\", null, \"Get real users on the service - students from universities doing classes, opensource projects, AICoE public examples etc.\"), mdx(\"p\", null, \"Work with those users to:\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, \"Improve the AI development workflow\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Improve the AI deployment workflow (MLOps)\")), mdx(\"h3\", null, \"Run\"), mdx(\"p\", null, \"TBD\"), mdx(\"h2\", null, \"Roles\"), mdx(\"h3\", null, \"CI/CD pipeline engineer\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, \"Testing of ODH assets\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Release and publish assets\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Optimize assets for the target platform (e.g. Notebook Images with Intel optimized TF)\")), mdx(\"h3\", null, \"Data Scientist\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, \"Create sample workflows\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Inform testing of ODH assets\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Write end-user documentation\")), mdx(\"h3\", null, \"SRE\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, \"Deployment of ODH assets\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Monitoring / Incident Management\")), mdx(\"h3\", null, \"Service Owner\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, \"Define service interface\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Define service level agreements (SLA)\")), mdx(\"h2\", null, \"Organization\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, \"All systems must be available on the internet (no VPN)\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"All data (tickets, logs, metrics) must be publicly available\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Sprint planning and demos are public\")));\n}\n;\nMDXContent.isMDXComponent = true;","frontmatter":{"title":"Operate First for Open Data Hub","description":"Operate First for ODH"}}},"pageContext":{"id":"ed6e1ea7-9833-576c-8dfc-71e5b1a6fe40","slug":"/"}},"staticQueryHashes":["117426894","3000541721","3753692419"]} \ No newline at end of file +{"componentChunkName":"component---src-templates-doc-js","path":"/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"ed6e1ea7-9833-576c-8dfc-71e5b1a6fe40","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < 
arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction _objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if (excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {\n \"title\": \"Operate First for Open Data Hub\",\n \"description\": \"Operate First for ODH\"\n};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"p\", null, mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://openinfralabs.org/operate-first-manifesto/\"\n }), \"Operate First\"), \" is an initiative to operate software in a production-grade environment - bringing users, developers and operators closer together.\"), mdx(\"p\", null, \"The goal is to create an Open Cloud environment, with reproducibility built-in, operated by a Community.\"), mdx(\"p\", null, \"Open means, onboarding and getting involved should mimic the process of an Open Source project, where planning, issue tracking and the code are accessible in a read-only fashion.\"), mdx(\"p\", null, \"Reproducibility caters towards being a blueprint for other setups. 
If we don\\u2019t want each environment to be a snowflake, we should be able to extract best practices that are easy to apply to new environments.\"), mdx(\"p\", null, \"At the Office of the CTO at Red Hat, we can lead the way with \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://opendatahub.io/\"\n }), \"Open Data Hub\"), \" by opening up our operational knowledge to all open source communities to improve the integration and operability from the source.\"), mdx(\"h2\", null, \"Getting started\"), mdx(\"h2\", null, mdx(\"a\", _extends({\n parentName: \"h2\"\n }, {\n \"href\": \"https://www.operate-first.cloud/data-science/\"\n }), \"Data Science\")), mdx(\"p\", null, \"Get started with tutorials and examples for data science on Open Data Hub.\"), mdx(\"h2\", null, mdx(\"a\", _extends({\n parentName: \"h2\"\n }, {\n \"href\": \"https://www.operate-first.cloud/users/\"\n }), \"Users\")), mdx(\"p\", null, \"Learn how you can engage with Open Data Hub and access the deployed components.\"), mdx(\"h2\", null, mdx(\"a\", _extends({\n parentName: \"h2\"\n }, {\n \"href\": \"https://www.operate-first.cloud/operators/\"\n }), \"Operators\")), mdx(\"p\", null, \"See how we are deploying and operating Open Data Hub\"), mdx(\"h2\", null, mdx(\"a\", _extends({\n parentName: \"h2\"\n }, {\n \"href\": \"https://www.operate-first.cloud/blueprints/\"\n }), \"Blueprints\")), mdx(\"p\", null, \"Apply best practices and tooling to your own projects.\"), mdx(\"h2\", null, \"Contribute\"), mdx(\"p\", null, \"To contribute to the Operate First initiative, seek support or report bugs on the website, please open an issue \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://github.com/operate-first/operate-first.github.io/issues\"\n }), \"here\"), \".\"));\n}\n;\nMDXContent.isMDXComponent = true;","frontmatter":{"title":"Operate First for Open Data Hub","description":"Operate First for ODH"}}},"pageContext":{"id":"ed6e1ea7-9833-576c-8dfc-71e5b1a6fe40","slug":"/"}},"staticQueryHashes":["117426894","3000541721","3753692419"]} \ No newline at end of file diff --git a/page-data/operators/continuous-deployment/docs/README/page-data.json b/page-data/operators/continuous-deployment/docs/README/page-data.json index 9b037073bfc4..fac05000c74b 100644 --- a/page-data/operators/continuous-deployment/docs/README/page-data.json +++ b/page-data/operators/continuous-deployment/docs/README/page-data.json @@ -1 +1 @@ -{"componentChunkName":"component---src-templates-doc-js","path":"/operators/continuous-deployment/docs/README/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"85d2862c-a341-5273-b19c-3f694523f424","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction 
_objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if (excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"p\", null, \"Here you will find a series of docs that outline various procedures and how-tos when interacting with ArgoCD.\"), mdx(\"h1\", null, \"CRC\"), mdx(\"p\", null, \"CRC stands for Code Ready Containers. Download CRC here: \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://developers.redhat.com/products/codeready-containers/overview\"\n }), \"https://developers.redhat.com/products/codeready-containers/overview\"), \". Follow the guides below for setting up ArgoCD and deploying Open Data Hub (via ArgoCD) in CRC:\"), mdx(\"ol\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/cb2ed6ae1d9d559404798d0e4dde4d5e/crc.md\"\n }), \"Installation of ArgoCD\"), \" - Guide with instructions for setting up ArgoCD in CRC.\"), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/5a1465700bda46b26165eebf1a672204/odh-install-crc.md\"\n }), \"Installation of ODH\"), \" - Guide with instructions on deploying Open Data Hub in CRC.\")), mdx(\"h1\", null, \"Quicklab\"), mdx(\"p\", null, mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://quicklab.upshift.redhat.com/clusters\"\n }), \"Quicklab\"), \" is a web application where users can automatically provision and install clusters of various Red Hat products into public and private clouds. Follow the guides below for setting up ArgoCD and deploying Open Data Hub (via ArgoCD) in a Quicklab cluster:\"), mdx(\"ol\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/f9dab4cf7cc9c72c3013ef6b8b0f7eb2/quicklab.md\"\n }), \"Installation of ArgoCD\"), \" - Guide with instructions for setting up ArgoCD in a Quicklab cluster.\"), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"./downstream/on-cluster-persistent-storage/README.md\"\n }), \"Setup Persistent Volumes\"), \" - Bare Openshift cluster installations, like for example Quicklab\\u2019s Openshift 4 UPI clusters may lack persistent volume setup. 
This guide provides instructions for setting up PVs in your Quicklab cluster.\"), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/37a57d275946e2550da13155bf3abc03/odh-install-quicklab.md\"\n }), \"Installation of ODH\"), \" - Guide with instructions on deploying the Open Data Hub in a Quicklab cluster.\")), mdx(\"h1\", null, \"Next steps\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/501ff354b179b38adea33d1ede7e66e4/modify-odh-deployment.md\"\n }), \"Modifying your ODH deployment\"), \" - Guide for customizing your Open Data Hub deployment i.e. adding multiple services/applications.\")));\n}\n;\nMDXContent.isMDXComponent = true;","frontmatter":{"title":"","description":null}}},"pageContext":{"id":"85d2862c-a341-5273-b19c-3f694523f424","slug":"docs/README"}},"staticQueryHashes":["117426894","3000541721","3753692419"]} \ No newline at end of file +{"componentChunkName":"component---src-templates-doc-js","path":"/operators/continuous-deployment/docs/README/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"85d2862c-a341-5273-b19c-3f694523f424","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction _objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if (excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"p\", null, \"Here you will find a series of docs that outline various procedures and how-tos when interacting with ArgoCD.\"), mdx(\"h1\", null, \"CRC\"), mdx(\"p\", null, \"CRC stands for Code Ready Containers. Download CRC here: \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://developers.redhat.com/products/codeready-containers/overview\"\n }), \"https://developers.redhat.com/products/codeready-containers/overview\"), \". 
Follow the guides below for setting up ArgoCD and deploying Open Data Hub (via ArgoCD) in CRC:\"), mdx(\"ol\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/cb2ed6ae1d9d559404798d0e4dde4d5e/crc.md\"\n }), \"Installation of ArgoCD\"), \" - Guide with instructions for setting up ArgoCD in CRC.\"), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/5a1465700bda46b26165eebf1a672204/odh-install-crc.md\"\n }), \"Installation of ODH\"), \" - Guide with instructions on deploying Open Data Hub in CRC.\")), mdx(\"h1\", null, \"Quicklab\"), mdx(\"p\", null, mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://quicklab.upshift.redhat.com/clusters\"\n }), \"Quicklab\"), \" is a web application where users can automatically provision and install clusters of various Red Hat products into public and private clouds. Follow the guides below for setting up ArgoCD and deploying Open Data Hub (via ArgoCD) in a Quicklab cluster:\"), mdx(\"ol\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/0524a8b6e000303e9d4dd17fc0fb6647/quicklab.md\"\n }), \"Installation of ArgoCD\"), \" - Guide with instructions for setting up ArgoCD in a Quicklab cluster.\"), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/e1addec5b3564a6a1ac472a0e48a23fd/README.md\"\n }), \"Setup Persistent Volumes\"), \" - Bare Openshift cluster installations, like for example Quicklab\\u2019s Openshift 4 UPI clusters may lack persistent volume setup. This guide provides instructions for setting up PVs in your Quicklab cluster.\"), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/37a57d275946e2550da13155bf3abc03/odh-install-quicklab.md\"\n }), \"Installation of ODH\"), \" - Guide with instructions on deploying the Open Data Hub in a Quicklab cluster.\")), mdx(\"h1\", null, \"Next steps\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"a\", _extends({\n parentName: \"li\"\n }, {\n \"href\": \"/501ff354b179b38adea33d1ede7e66e4/modify-odh-deployment.md\"\n }), \"Modifying your ODH deployment\"), \" - Guide for customizing your Open Data Hub deployment i.e. 
adding multiple services/applications.\")));\n}\n;\nMDXContent.isMDXComponent = true;","frontmatter":{"title":"","description":null}}},"pageContext":{"id":"85d2862c-a341-5273-b19c-3f694523f424","slug":"docs/README"}},"staticQueryHashes":["117426894","3000541721","3753692419"]} \ No newline at end of file diff --git a/page-data/operators/continuous-deployment/docs/downstream/odh-install-quicklab/page-data.json b/page-data/operators/continuous-deployment/docs/downstream/odh-install-quicklab/page-data.json index 7317c62abe03..d6f2be21ddc8 100644 --- a/page-data/operators/continuous-deployment/docs/downstream/odh-install-quicklab/page-data.json +++ b/page-data/operators/continuous-deployment/docs/downstream/odh-install-quicklab/page-data.json @@ -1 +1 @@ -{"componentChunkName":"component---src-templates-doc-js","path":"/operators/continuous-deployment/docs/downstream/odh-install-quicklab/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"7ecb1b21-99d5-5725-8692-79894320799e","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction _objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if (excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"h1\", null, \"Installing ODH using ArgoCD in Quicklab\"), mdx(\"p\", null, \"The steps for installing ODH in Quicklab are basically the same as for \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/5a1465700bda46b26165eebf1a672204/odh-install-crc.md\"\n }), \"CRC\"), \".\"), mdx(\"p\", null, \"The only difference is that you need to use the correct URL for your cluster and setup sufficient persistent volumes (PVs) in your cluster.\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"To setup persistent volumes in your Quicklab cluster, follow the guide \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"./on-cluster-persistent-storage\"\n }), \"here\"), \".\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"In \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/f9dab4cf7cc9c72c3013ef6b8b0f7eb2/quicklab.md\"\n }), \"quicklab guide\"), \" step 9 
there\\u2019s a screenshot with the Hosts value and the \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"oc login ...\"), \" command. Use the value (e.g. \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443\"), \") as the value of the Cluster in steps \\u201CCreating the ODH operator\\u201D and \\u201CCreating the ODH deployment\\u201D in \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/5a1465700bda46b26165eebf1a672204/odh-install-crc.md\"\n }), \"CRC\"), \".\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"If you choose to use the command-line to create the Application resources, then edit \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"examples/odh-operator-app.yaml\"), \" and \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"examples/odh-deployment-app.yaml\"), \" and put the value of Cluster there.\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Also, please note that if you are installing multiple ODH components, you may need to assign additional worker nodes for your cluster. This is mentioned in \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/f9dab4cf7cc9c72c3013ef6b8b0f7eb2/quicklab.md\"\n }), \"quicklab guide\"), \" step 3.\"))), mdx(\"p\", null, \"Except for the Cluster address, the steps are exactly the same.\"));\n}\n;\nMDXContent.isMDXComponent = true;","frontmatter":{"title":"","description":null}}},"pageContext":{"id":"7ecb1b21-99d5-5725-8692-79894320799e","slug":"docs/downstream/odh-install-quicklab"}},"staticQueryHashes":["117426894","3000541721","3753692419"]} \ No newline at end of file +{"componentChunkName":"component---src-templates-doc-js","path":"/operators/continuous-deployment/docs/downstream/odh-install-quicklab/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"7ecb1b21-99d5-5725-8692-79894320799e","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction _objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if (excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return 
mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"h1\", null, \"Installing ODH using ArgoCD in Quicklab\"), mdx(\"p\", null, \"The steps for installing ODH in Quicklab are basically the same as for \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/5a1465700bda46b26165eebf1a672204/odh-install-crc.md\"\n }), \"CRC\"), \".\"), mdx(\"p\", null, \"The only difference is that you need to use the correct URL for your cluster and setup sufficient persistent volumes (PVs) in your cluster.\"), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"To setup persistent volumes in your Quicklab cluster, follow the guide \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"./on-cluster-persistent-storage\"\n }), \"here\"), \".\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"In \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/0524a8b6e000303e9d4dd17fc0fb6647/quicklab.md\"\n }), \"quicklab guide\"), \" step 9 there\\u2019s a screenshot with the Hosts value and the \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"oc login ...\"), \" command. Use the value (e.g. \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"upi-0.tcoufaltest.lab.upshift.rdu2.redhat.com:6443\"), \") as the value of the Cluster in steps \\u201CCreating the ODH operator\\u201D and \\u201CCreating the ODH deployment\\u201D in \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/5a1465700bda46b26165eebf1a672204/odh-install-crc.md\"\n }), \"CRC\"), \".\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"If you choose to use the command-line to create the Application resources, then edit \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"examples/odh-operator-app.yaml\"), \" and \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"examples/odh-deployment-app.yaml\"), \" and put the value of Cluster there.\")), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Also, please note that if you are installing multiple ODH components, you may need to assign additional worker nodes for your cluster. 
This is mentioned in \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"/0524a8b6e000303e9d4dd17fc0fb6647/quicklab.md\"\n }), \"quicklab guide\"), \" step 3.\"))), mdx(\"p\", null, \"Except for the Cluster address, the steps are exactly the same.\"));\n}\n;\nMDXContent.isMDXComponent = true;","frontmatter":{"title":"","description":null}}},"pageContext":{"id":"7ecb1b21-99d5-5725-8692-79894320799e","slug":"docs/downstream/odh-install-quicklab"}},"staticQueryHashes":["117426894","3000541721","3753692419"]} \ No newline at end of file diff --git a/page-data/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README/page-data.json b/page-data/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README/page-data.json new file mode 100644 index 000000000000..c6694214d563 --- /dev/null +++ b/page-data/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README/page-data.json @@ -0,0 +1 @@ +{"componentChunkName":"component---src-templates-doc-js","path":"/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README/","result":{"data":{"site":{"siteMetadata":{"title":"Operate First"}},"mdx":{"id":"bd3f3c40-5079-5f97-917c-c1d853961c09","body":"function _extends() { _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; }; return _extends.apply(this, arguments); }\n\nfunction _objectWithoutProperties(source, excluded) { if (source == null) return {}; var target = _objectWithoutPropertiesLoose(source, excluded); var key, i; if (Object.getOwnPropertySymbols) { var sourceSymbolKeys = Object.getOwnPropertySymbols(source); for (i = 0; i < sourceSymbolKeys.length; i++) { key = sourceSymbolKeys[i]; if (excluded.indexOf(key) >= 0) continue; if (!Object.prototype.propertyIsEnumerable.call(source, key)) continue; target[key] = source[key]; } } return target; }\n\nfunction _objectWithoutPropertiesLoose(source, excluded) { if (source == null) return {}; var target = {}; var sourceKeys = Object.keys(source); var key, i; for (i = 0; i < sourceKeys.length; i++) { key = sourceKeys[i]; if (excluded.indexOf(key) >= 0) continue; target[key] = source[key]; } return target; }\n\n/* @jsx mdx */\nvar _frontmatter = {};\nvar layoutProps = {\n _frontmatter: _frontmatter\n};\nvar MDXLayout = \"wrapper\";\nreturn function MDXContent(_ref) {\n var components = _ref.components,\n props = _objectWithoutProperties(_ref, [\"components\"]);\n\n return mdx(MDXLayout, _extends({}, layoutProps, props, {\n components: components,\n mdxType: \"MDXLayout\"\n }), mdx(\"h1\", null, \"Set up on-cluster PersistentVolumes storage using NFS on local node\"), mdx(\"p\", null, \"Bare Openshift cluster installations, like for example Quicklab\\u2019s Openshift 4 UPI clusters may lack persistent volume setup. 
This guide will help you set it up.\"), mdx(\"p\", null, \"Please verify that your cluster really lacks \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"pv\"), \"s:\"), mdx(\"ol\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Login as a cluster admin\")), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Lookup available \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"PersistentVolume\"), \" resources:\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"bash\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-bash\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-bash\"\n }), \"$ oc get \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"pv\"), \"\\nNo resources found\"))))), mdx(\"p\", null, \"If there are no \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"PersistentVolume\"), \"s available please continue and follow this guide. We\\u2019re gonna set up NFS server on the cluster node and show Openshift how to connect to it.\"), mdx(\"p\", null, \"Note: This guide will lead you through the process of setting up PVs, which use the deprecated \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"Recycle\"), \" reclaim policy. This makes the \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"PersistentVolume\"), \" available again as soon as the \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"PersistentVolumeClaim\"), \" resource is terminated and removed. However the data are left on the NFS share untouched. While this is suitable for development purposes, be advised that old data (from previous mounts) will be still available on the volume. 
Please consult \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming\"\n }), \"Kubernetes docs\"), \" for other options.\"), mdx(\"h2\", null, \"Manual steps\"), mdx(\"p\", null, \"See automated Ansible playbook bellow for easier-to-use provisioning\"), mdx(\"h3\", null, \"Prepare remote host\"), mdx(\"ol\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"SSH to the Quicklab node, and become superuser:\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"sh\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-sh\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-sh\"\n }), \"curl https://gitlab.cee.redhat.com/cee_ops/quicklab/raw/master/docs/quicklab.key --output ~/.ssh/quicklab.key\\nchmod 600 ~/.ssh/quicklab.key\\nssh -i ~/.ssh/quicklab.key -o \\\"UserKnownHostsFile /dev/null\\\" -o \\\"StrictHostKeyChecking no\\\" quicklab@HOST\\n\\n# On HOST\\nsudo su -\")))), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Install \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"nfs-utils\"), \" package\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"sh\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-sh\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-sh\"\n }), \"yum install nfs-utils\")))), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Create exported directories (for example in \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"/mnt/nfs\"), \") and set ownership and permissions\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"sh\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-sh\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-sh\"\n }), \"mkdir -p /mnt/nfs/A ...\\nchown nfsnobody:nfsnobody /mnt/nfs/A\\nchmod 0777 /mnt/nfs/A\")))), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Populate \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"/etc/exports\"), \" file referencing directories from previous step to be accessible from your nodes as read,write:\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"txt\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-txt\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-txt\"\n }), \" /mnt/nfs/A node1(rw) node2(rw) ...\\n ...\")))), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Allow NFS in firewall\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"sh\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-sh\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-sh\"\n }), \"firewall-cmd --permanent --add-service 
mountd\\nfirewall-cmd --permanent --add-service rpc-bind\\nfirewall-cmd --permanent --add-service nfs\\nfirewall-cmd --reload\")))), mdx(\"li\", {\n parentName: \"ol\"\n }, mdx(\"p\", {\n parentName: \"li\"\n }, \"Start and enable NFS service\"), mdx(\"div\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"sh\"\n }), mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-sh\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-sh\"\n }), \"systemctl enable --now nfs-server\"))))), mdx(\"h3\", null, \"Add PersistentVolumes to Openshift cluster\"), mdx(\"p\", null, \"Login as a cluster admin and create a \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"PersistentVolume\"), \" resource for each network share using this manifest:\"), mdx(\"div\", {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"yaml\"\n }, mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"apiVersion\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" v1\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"kind\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" PersistentVolume\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"metadata\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"name\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" NAME \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token comment\"\n }), \"# Unique name\"), \"\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"spec\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"capacity\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"storage\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" CAPACITY \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token comment\"\n }), \"# Keep in mind the total max size, the Quicklab host has a disk size of 20Gi total (usually ~15Gi of available and usable space)\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"accessModes\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"-\"), \" ReadWriteOnce\\n \", 
mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"nfs\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"path\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" /mnt/nfs/A \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token comment\"\n }), \"# Path to the NFS share on the server\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"server\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" HOST_IP \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token comment\"\n }), \"# Not a hostname\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"persistentVolumeReclaimPolicy\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" Recycle\"))), mdx(\"h2\", null, \"Using Ansible\"), mdx(\"p\", null, \"To avoid all the hustle with manual setup, we can use an Ansible playbook \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"playbook.yaml\"\n }), mdx(\"code\", _extends({\n parentName: \"a\"\n }, {\n \"className\": \"language-text\"\n }), \"playbook.yaml\")), \".\"), mdx(\"h3\", null, \"Setup\"), mdx(\"p\", null, \"Please install Ansible and some additional collections from Ansible Galaxy needed by this playbook: \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://galaxy.ansible.com/ansible/posix\"\n }), \"ansible.posix\"), \" for \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"firewalld\"), \" module and \", mdx(\"a\", _extends({\n parentName: \"p\"\n }, {\n \"href\": \"https://galaxy.ansible.com/community/kubernetes\"\n }), \"community.kubernetes\"), \" for \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"k8s\"), \" module. 
Also install the underlying python dependency \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"openshift\"), \".\"), mdx(\"div\", {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"bash\"\n }, mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-bash\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-bash\"\n }), \"$ ansible-galaxy collection \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" ansible.posix\\nStarting galaxy collection \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" process\\nProcess \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" dependency map\\nStarting collection \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" process\\nInstalling \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'ansible.posix:1.1.1'\"), \" to \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'/home/tcoufal/.ansible/collections/ansible_collections/ansible/posix'\"), \"\\nDownloading https://galaxy.ansible.com/download/ansible-posix-1.1.1.tar.gz to /home/tcoufal/.ansible/tmp/ansible-local-43567u9ge76rl/tmpyttcjmul\\nansible.posix \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"1.1\"), \".1\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \" was installed successfully\\n\\n$ ansible-galaxy collection \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" community.kubernetes\\nStarting galaxy collection \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" process\\nProcess \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" dependency map\\nStarting collection \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" process\\nInstalling \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'community.kubernetes:1.0.0'\"), \" to \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'/home/tcoufal/.ansible/collections/ansible_collections/community/kubernetes'\"), \"\\nDownloading https://galaxy.ansible.com/download/community-kubernetes-1.0.0.tar.gz to /home/tcoufal/.ansible/tmp/ansible-local-29431yk2zoutk/tmpwgl4xsnb\\ncommunity.kubernetes \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"1.0\"), \".0\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \" was installed successfully\\n\\n$ pip \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" --user openshift\\n\", 
mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"..\"), \".\\nInstalling collected packages: kubernetes, openshift\\n Running setup.py \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"install\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token keyword\"\n }), \"for\"), \" openshift \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"..\"), \". \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token keyword\"\n }), \"done\"), \"\\nSuccessfully installed kubernetes-11.0.0 openshift-0.11.2\"))), mdx(\"p\", null, \"Additionally please login to your Quicklab cluster via \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"oc login\"), \" as a cluster admin.\"), mdx(\"h3\", null, \"Configuration\"), mdx(\"p\", null, \"Please view and modify the \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"env.yaml\"), \" file (or create additional variable files, and select it before executing playbook via \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"vars_file\"), \" variable)\"), mdx(\"p\", null, \"Example environment file:\"), mdx(\"div\", {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"yaml\"\n }, mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-yaml\"\n }), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"quicklab_host\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"\\\"upi-0.tcoufaldev.lab.upshift.rdu2.redhat.com\\\"\"), \"\\n\\n\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"pv_count_per_size\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"1Gi\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"6\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"2Gi\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"2\"), \"\\n \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token key atrule\"\n }), \"5Gi\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \":\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"1\")))), mdx(\"ul\", {\n \"className\": \"pf-c-list\"\n }, mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"code\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"language-text\"\n }), \"quicklab_host\"), \" - Points to one of 
the \\u201CHosts\\u201D from your Quicklab Cluster info tab\"), mdx(\"li\", {\n parentName: \"ul\"\n }, mdx(\"code\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"language-text\"\n }), \"pv_count_per_size\"), \" - Defines PV counts in relation to maximal allocable sizes map:\", mdx(\"ul\", _extends({\n parentName: \"li\"\n }, {\n \"className\": \"pf-c-list\"\n }), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Use the target PV size as a key (follow GO/Kubernetes notation)\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Use volume count for that key \\u201Csize\\u201D as the value\"), mdx(\"li\", {\n parentName: \"ul\"\n }, \"Keep in mind the total size sum(key\", \"*\", \"value for key,value in pv_count_per_size.items()) < Disk size of the Quicklab instance (usually ~15Gi of available space)\")))), mdx(\"h3\", null, \"Run the playbook\"), mdx(\"p\", null, \"Run the \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"playbook.yaml\"), \" (if you created a new environment file and you\\u2019d like to use other than default \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"env.yaml\"), \", please specify the file via \", mdx(\"code\", _extends({\n parentName: \"p\"\n }, {\n \"className\": \"language-text\"\n }), \"-e vars_file=any-filename.yaml\"), \")\"), mdx(\"div\", {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"bash\"\n }, mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-bash\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-bash\"\n }), \"$ ansible-playbook playbook.yaml\"))), mdx(\"details\", null, mdx(\"summary\", null, \"Click to expand output\"), mdx(\"div\", {\n \"className\": \"gatsby-highlight\",\n \"data-language\": \"bash\"\n }, mdx(\"pre\", _extends({\n parentName: \"div\"\n }, {\n \"className\": \"language-bash\"\n }), mdx(\"code\", _extends({\n parentName: \"pre\"\n }, {\n \"className\": \"language-bash\"\n }), \"PLAY \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Dynamically create Quicklab \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token function\"\n }), \"host\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token keyword\"\n }), \"in\"), \" Ansible\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" **********************************************************************\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Gathering Facts\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" **************************************************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"localhost\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Load variables file\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" 
**********************************************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"localhost\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Preprocess the PV count per size map to a flat list\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" **************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"localhost\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Fetch Quicklab certificate\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" ***************************************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"localhost\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Adding host\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" ******************************************************************************************************\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"localhost\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Get available Openshift nodes\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" ************************************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"localhost\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Preprocess nodes k8s resource response to list of IPs\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" ************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"localhost\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nPLAY \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Setup NFS on Openshift host\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n 
\"className\": \"token punctuation\"\n }), \"]\"), \" **************************************************************************************\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Gathering Facts\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" **************************************************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Copy localhost variables \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token keyword\"\n }), \"for\"), \" easier access\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" ***********************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Install the NFS server\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" *******************************************************************************************\\nok: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \"\\n\\nTASK \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"Create \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token builtin class-name\"\n }), \"export\"), \" dirs\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" ***********************************************************************************************\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \">\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), \"item\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'1Gi'\"), \", \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"0\"), mdx(\"span\", 
_extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \"\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \">\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), \"item\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'1Gi'\"), \", \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"1\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \"\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \">\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), \"item\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'1Gi'\"), \", \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"2\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \"\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \">\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), \"item\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": 
\"token string\"\n }), \"'1Gi'\"), \", \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"3\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \"\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \">\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), \"item\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'1Gi'\"), \", \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"4\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \"\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \">\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), \"item\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token string\"\n }), \"'1Gi'\"), \", \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token number\"\n }), \"5\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \")\"), \"\\nchanged: \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"[\"), \"quicklab\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"]\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \">\"), \" \", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token punctuation\"\n }), \"(\"), \"item\", mdx(\"span\", _extends({\n parentName: \"code\"\n }, {\n \"className\": \"token operator\"\n }), \"=\"), mdx(\"span\", 
diff --git a/page-data/operators/continuous-deployment/docs/downstream/quicklab/page-data.json b/page-data/operators/continuous-deployment/docs/downstream/quicklab/page-data.json
index 4c27d4ec5569..8d46a255093a 100644
--- a/page-data/operators/continuous-deployment/docs/downstream/quicklab/page-data.json
+++ b/page-data/operators/continuous-deployment/docs/downstream/quicklab/page-data.json
diff --git a/page-data/sq/d/117426894.json b/page-data/sq/d/117426894.json
index 9176acfa1805..435da269f12e 100644
--- a/page-data/sq/d/117426894.json
+++ b/page-data/sq/d/117426894.json
support","href":"/users/odh-moc-support/README","links":null},{"id":"moc-user-docs","label":"Components","href":null,"links":[{"id":"moc-jh","label":"JupyterHub","href":"/users/odh-moc-support/docs/user-docs/jupyterhub"}]},{"id":"argocd","label":"ArgoCD Operations","href":null,"links":[{"id":"argocd-application-manifests","label":"Create ArgoCD Application Manifest","href":"/operators/continuous-deployment/docs/create_argocd_application_manifest"},{"id":"argocd-manage-app","label":"Get ArgoCD to Manage your app","href":"/operators/continuous-deployment/docs/get_argocd_to_manage_your_app"},{"id":"argocd-access-project","label":"Give ArgoCD Access to your Project","href":"/operators/continuous-deployment/docs/give_argocd_access_to_your_project"},{"id":"argocd-inclusions","label":"Inclusions Explained","href":"/operators/continuous-deployment/docs/inclusions_explained"},{"id":"argocd-setup","label":"ArgoCD Setup","href":"/operators/continuous-deployment/docs/setup_argocd_dev_environment"}]},{"id":"crc","label":"CRC","href":null,"links":[{"id":"crc-setup","label":"CRC Setup","href":"/operators/continuous-deployment/docs/downstream/crc"},{"id":"crc-odh","label":"Installing ODH on CRC","href":"/operators/continuous-deployment/docs/downstream/odh-install-crc"},{"id":"crc-disk","label":"CRC Disk Size","href":"/operators/continuous-deployment/docs/downstream/crc-disk-size"}]},{"id":"quicklab","label":"Quicklab","href":null,"links":[{"id":"quicklab-setup","label":"Quicklab Setup","href":"/operators/continuous-deployment/docs/downstream/quicklab"},{"id":"quicklab-pv","label":"Create persistent volumes","href":"/operators/continuous-deployment/docs/downstream/on-cluster-persistent-storage/README"},{"id":"quicklab-odh","label":"Installing ODH on Quicklab","href":"/operators/continuous-deployment/docs/downstream/odh-install-quicklab"}]},{"id":"moc-ops-docs","label":"MOC - ODH Operations","href":null,"links":[{"id":"moc-getting-access","label":"Getting Access","href":"/operators/moc-cnv-sandbox/docs/about-the-cluster"}]},{"id":"blueprint-adr","label":"Architecture Decision Records","href":null,"links":[{"id":"blueprint-adr-0000","label":"Use Markdown Architectural Decision Records","href":"/blueprints/blueprint/docs/adr/0000-use-markdown-architectural-decision-records"},{"id":"blueprint-adr-0001","label":"Use GNU GPL as license","href":"/blueprints/blueprint/docs/adr/0001-use-gpl3-as-license"},{"id":"blueprint-adr-0003","label":"Operate First deployment feature selection Policy","href":"/blueprints/blueprint/docs/adr/0003-feature-selection-policy"}]},{"id":"continuous-delivery","label":"Continuous Delivery","href":null,"links":[{"id":"cicd_intro","label":"(Opinionated) Continuous Delivery","href":"/blueprints/continuous-delivery/docs/continuous_delivery"},{"id":"continuous-delivery-setup-source-operations","label":"Setting up Source Code Operations","href":"/blueprints/continuous-delivery/docs/setup_source_operations"},{"id":"continuous-delivery-setup-ci","label":"Setting up a Continuous Integration Pipeline","href":"/blueprints/continuous-delivery/docs/setup_ci_pipeline"},{"id":"continuous-delivery-setup-cd","label":"Setting up a Continuous Delivery Pipeline","href":"/blueprints/continuous-delivery/docs/setup_cd_pipeline"}]}]}}} \ No newline at end of file diff --git a/page-data/using-typescript/page-data.json b/page-data/using-typescript/page-data.json index 0305580a500a..1e62a69335df 100644 --- a/page-data/using-typescript/page-data.json +++ b/page-data/using-typescript/page-data.json @@ -1 +1 @@ 
-{"componentChunkName":"component---src-pages-using-typescript-tsx","path":"/using-typescript/","result":{"data":{"site":{"buildTime":"2020-11-25 12:53 pm UTC"}},"pageContext":{}},"staticQueryHashes":["117426894","3000541721"]} \ No newline at end of file +{"componentChunkName":"component---src-pages-using-typescript-tsx","path":"/using-typescript/","result":{"data":{"site":{"buildTime":"2020-11-25 02:49 pm UTC"}},"pageContext":{}},"staticQueryHashes":["117426894","3000541721"]} \ No newline at end of file diff --git a/static/20c0bcce54f2809ef0fa4c2da296a706/12f09/bundle_select.png b/static/20c0bcce54f2809ef0fa4c2da296a706/12f09/bundle_select.png deleted file mode 100644 index e25dec83d196..000000000000 Binary files a/static/20c0bcce54f2809ef0fa4c2da296a706/12f09/bundle_select.png and /dev/null differ diff --git a/static/20c0bcce54f2809ef0fa4c2da296a706/1d553/bundle_select.png b/static/20c0bcce54f2809ef0fa4c2da296a706/1d553/bundle_select.png deleted file mode 100644 index 9d2e4595021b..000000000000 Binary files a/static/20c0bcce54f2809ef0fa4c2da296a706/1d553/bundle_select.png and /dev/null differ diff --git a/static/20c0bcce54f2809ef0fa4c2da296a706/e4a3f/bundle_select.png b/static/20c0bcce54f2809ef0fa4c2da296a706/e4a3f/bundle_select.png deleted file mode 100644 index 387c29bf9761..000000000000 Binary files a/static/20c0bcce54f2809ef0fa4c2da296a706/e4a3f/bundle_select.png and /dev/null differ diff --git a/static/20c0bcce54f2809ef0fa4c2da296a706/efc66/bundle_select.png b/static/20c0bcce54f2809ef0fa4c2da296a706/efc66/bundle_select.png deleted file mode 100644 index 0744a30fa0a6..000000000000 Binary files a/static/20c0bcce54f2809ef0fa4c2da296a706/efc66/bundle_select.png and /dev/null differ diff --git a/static/20c0bcce54f2809ef0fa4c2da296a706/fcda8/bundle_select.png b/static/20c0bcce54f2809ef0fa4c2da296a706/fcda8/bundle_select.png deleted file mode 100644 index dde6f502988e..000000000000 Binary files a/static/20c0bcce54f2809ef0fa4c2da296a706/fcda8/bundle_select.png and /dev/null differ diff --git a/static/8a70b0e82af8b5d4ad86a372c4429cdf/12f09/cluster_information.png b/static/8a70b0e82af8b5d4ad86a372c4429cdf/12f09/cluster_information.png new file mode 100644 index 000000000000..a7a9dbde9399 Binary files /dev/null and b/static/8a70b0e82af8b5d4ad86a372c4429cdf/12f09/cluster_information.png differ diff --git a/static/8a70b0e82af8b5d4ad86a372c4429cdf/5a6dd/cluster_information.png b/static/8a70b0e82af8b5d4ad86a372c4429cdf/5a6dd/cluster_information.png new file mode 100644 index 000000000000..efc4cd61fbb9 Binary files /dev/null and b/static/8a70b0e82af8b5d4ad86a372c4429cdf/5a6dd/cluster_information.png differ diff --git a/static/8a70b0e82af8b5d4ad86a372c4429cdf/e4a3f/cluster_information.png b/static/8a70b0e82af8b5d4ad86a372c4429cdf/e4a3f/cluster_information.png new file mode 100644 index 000000000000..7124ce1d3791 Binary files /dev/null and b/static/8a70b0e82af8b5d4ad86a372c4429cdf/e4a3f/cluster_information.png differ diff --git a/static/8a70b0e82af8b5d4ad86a372c4429cdf/fcda8/cluster_information.png b/static/8a70b0e82af8b5d4ad86a372c4429cdf/fcda8/cluster_information.png new file mode 100644 index 000000000000..fec64812993e Binary files /dev/null and b/static/8a70b0e82af8b5d4ad86a372c4429cdf/fcda8/cluster_information.png differ diff --git a/static/f65f739bcfef148b35b3b2f0d6a6e550/12f09/cluster_information.png b/static/f65f739bcfef148b35b3b2f0d6a6e550/12f09/cluster_information.png deleted file mode 100644 index 59b5ebeabffd..000000000000 Binary files 
a/static/f65f739bcfef148b35b3b2f0d6a6e550/12f09/cluster_information.png and /dev/null differ diff --git a/static/f65f739bcfef148b35b3b2f0d6a6e550/d30ee/cluster_information.png b/static/f65f739bcfef148b35b3b2f0d6a6e550/d30ee/cluster_information.png deleted file mode 100644 index ac51405c9e2a..000000000000 Binary files a/static/f65f739bcfef148b35b3b2f0d6a6e550/d30ee/cluster_information.png and /dev/null differ diff --git a/static/f65f739bcfef148b35b3b2f0d6a6e550/e4a3f/cluster_information.png b/static/f65f739bcfef148b35b3b2f0d6a6e550/e4a3f/cluster_information.png deleted file mode 100644 index 808569b3bcd1..000000000000 Binary files a/static/f65f739bcfef148b35b3b2f0d6a6e550/e4a3f/cluster_information.png and /dev/null differ diff --git a/static/f65f739bcfef148b35b3b2f0d6a6e550/efc66/cluster_information.png b/static/f65f739bcfef148b35b3b2f0d6a6e550/efc66/cluster_information.png deleted file mode 100644 index c9272230e4d4..000000000000 Binary files a/static/f65f739bcfef148b35b3b2f0d6a6e550/efc66/cluster_information.png and /dev/null differ diff --git a/static/f65f739bcfef148b35b3b2f0d6a6e550/fcda8/cluster_information.png b/static/f65f739bcfef148b35b3b2f0d6a6e550/fcda8/cluster_information.png deleted file mode 100644 index 639de25b1e38..000000000000 Binary files a/static/f65f739bcfef148b35b3b2f0d6a6e550/fcda8/cluster_information.png and /dev/null differ diff --git a/static/f905d13c6365a059954fa79df820437c/1132d/bundle_select.png b/static/f905d13c6365a059954fa79df820437c/1132d/bundle_select.png new file mode 100644 index 000000000000..a9b56ce62c14 Binary files /dev/null and b/static/f905d13c6365a059954fa79df820437c/1132d/bundle_select.png differ diff --git a/static/f905d13c6365a059954fa79df820437c/12f09/bundle_select.png b/static/f905d13c6365a059954fa79df820437c/12f09/bundle_select.png new file mode 100644 index 000000000000..462c592d986f Binary files /dev/null and b/static/f905d13c6365a059954fa79df820437c/12f09/bundle_select.png differ diff --git a/static/f905d13c6365a059954fa79df820437c/e4a3f/bundle_select.png b/static/f905d13c6365a059954fa79df820437c/e4a3f/bundle_select.png new file mode 100644 index 000000000000..5693a2651fa9 Binary files /dev/null and b/static/f905d13c6365a059954fa79df820437c/e4a3f/bundle_select.png differ diff --git a/static/f905d13c6365a059954fa79df820437c/efc66/bundle_select.png b/static/f905d13c6365a059954fa79df820437c/efc66/bundle_select.png new file mode 100644 index 000000000000..fc77b6e4779b Binary files /dev/null and b/static/f905d13c6365a059954fa79df820437c/efc66/bundle_select.png differ diff --git a/static/f905d13c6365a059954fa79df820437c/fcda8/bundle_select.png b/static/f905d13c6365a059954fa79df820437c/fcda8/bundle_select.png new file mode 100644 index 000000000000..1d92cb231df8 Binary files /dev/null and b/static/f905d13c6365a059954fa79df820437c/fcda8/bundle_select.png differ diff --git a/users/index.html b/users/index.html index fc6dadb7a501..237a85184e7c 100644 --- a/users/index.html +++ b/users/index.html @@ -14,4 +14,4 @@ - }Users | Operate First

                                                        \ No newline at end of file + }Users | Operate First \ No newline at end of file diff --git a/users/odh-moc-support/README/index.html b/users/odh-moc-support/README/index.html index b36143cfbeb8..ace68553f511 100644 --- a/users/odh-moc-support/README/index.html +++ b/users/odh-moc-support/README/index.html @@ -14,4 +14,4 @@ - }
                                                        ODH Logo

                                                        Open Data Hub on MOC

                                                        This repository contains all the operational and user documentation for running the Open Data Hub in MOC.

                                                        Getting Started

                                                        We have the Open Data Hub applications deployed and running in a MOC (Mass Open Cloud) cluster. All user documentation such as the login process for each of the applications deployed can be found here.

                                                        End User Support

                                                        If you have any problems, questions, or feature requests, please report them by opening an issue in this repo here and we will be happy to assist!

                                                        Community

                                                        \ No newline at end of file + }
                                                        ODH Logo

                                                        Open Data Hub on MOC

                                                        This repository contains all the operational and user documentation for running the Open Data Hub in MOC.

                                                        Getting Started

                                                        We have the Open Data Hub applications deployed and running in a MOC (Mass Open Cloud) cluster. All user documentation such as the login process for each of the applications deployed can be found here.

                                                        End User Support

                                                        If you have any problems, questions, or feature requests, please report them by opening an issue in this repo here and we will be happy to assist!

                                                        Community

                                                        \ No newline at end of file diff --git a/users/odh-moc-support/docs/user-docs/jupyterhub/index.html b/users/odh-moc-support/docs/user-docs/jupyterhub/index.html index 772b6bc8511e..b3dc50ff84ab 100644 --- a/users/odh-moc-support/docs/user-docs/jupyterhub/index.html +++ b/users/odh-moc-support/docs/user-docs/jupyterhub/index.html @@ -14,4 +14,4 @@ - }
                                                        ODH Logo

                                                        JupyterHub

                                                        JupyterHub is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.

                                                        1. The JupyterHub application can be accessed at: https://jupyterhub-opf-jupyterhub.apps.cnv.massopen.cloud/hub/login
                                                        2. You can log in using your Google account.
                                                        3. If you face any problems, please report them by opening an issue here.
                                                        \ No newline at end of file + }
                                                        ODH Logo

                                                        JupyterHub

                                                        JupyterHub is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.

                                                        1. The JupyterHub application can be accessed at: https://jupyterhub-opf-jupyterhub.apps.cnv.massopen.cloud/hub/login
                                                        2. You can log in using your Google account.
                                                        3. If you face any problems, please report them by opening an issue here.
                                                        \ No newline at end of file diff --git a/using-typescript/index.html b/using-typescript/index.html index 564ab0215c89..d84eaa4d1c55 100644 --- a/using-typescript/index.html +++ b/using-typescript/index.html @@ -14,4 +14,4 @@ - }Using TypeScript | Operate First
                                                        ODH Logo

                                                        Gatsby supports TypeScript by default!

                                                        This means that you can create and write .ts/.tsx files for your pages, components etc. Please note that the gatsby-*.js files (like gatsby-node.js) currently don't support TypeScript yet.

                                                        For type checking you'll want to install typescript via npm and run tsc --init to create a tsconfig.json file.

                                                        You're currently on the page "/*" which was built on 2020-11-25 12:53 pm UTC.

                                                        To learn more, head over to our documentation about TypeScript.

                                                        Go back to the homepage
                                                        \ No newline at end of file + }Using TypeScript | Operate First
                                                        ODH Logo

                                                        Gatsby supports TypeScript by default!

                                                        This means that you can create and write .ts/.tsx files for your pages, components etc. Please note that the gatsby-*.js files (like gatsby-node.js) currently don't support TypeScript yet.

                                                        For type checking you'll want to install typescript via npm and run tsc --init to create a tsconfig.json file.

                                                        You're currently on the page "/*" which was built on 2020-11-25 02:49 pm UTC.

                                                        To learn more, head over to our documentation about TypeScript.

                                                        Go back to the homepage
                                                        \ No newline at end of file