The OpenShift architecture builds upon the flexibility and scalability of Docker and Kubernetes to deliver a powerful new Platform-as-a-Service system. This article explains how to set up a development environment and get involved with this latest version of OpenShift. Kubernetes is included in this repo for ease of development, and the version we include is periodically updated.
To get started, you can either download a pre-built release, or, if you are interested in development, build and run OpenShift from source as described below.
The OpenShift team periodically publishes binaries to GitHub on the Releases page. These are 64-bit Linux, Windows, and Mac OS X binaries (note that the Mac and Windows binaries are client only). You'll need Docker installed on your local system (see the installation page if you've never installed Docker before and you're not on RHEL/CentOS/Fedora).
The tar file for each platform contains a single binary, `openshift`, which is the all-in-one OpenShift installation.

- Use `oc cluster up` to launch the server.
- Use `oc login <server> ...` to connect to an OpenShift server.
- Use `openshift help` to see more about the commands in the binary.
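For example, a first session with a downloaded release might look like the following sketch. The archive name is a placeholder (it varies by release and platform), and it assumes the `oc` client binary is also available on your `$PATH`:

$ tar -xzf openshift-origin-*.tar.gz  # placeholder archive name; varies by release
$ sudo mv openshift /usr/local/bin/   # put the binary on your PATH
$ oc cluster up                       # launch an all-in-one server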
To get started, fork the origin repo.
You can develop OpenShift on Windows, Mac, or Linux, but you'll need Docker installed on Linux to actually launch containers. Client and server binaries can be built locally or in the `openshift/release` container environment. The Go programming language is only necessary for building on the local host.

Currently, OpenShift is built with Go 1.8 and uses Docker 1.12. The exact requirement for Docker is documented here.
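Before building, it can help to confirm that your local toolchain matches those versions. A quick check (the version numbers shown are expectations, not guarantees):

$ go version      # expect go1.8.x
$ docker version  # expect 1.12.x; see the linked requirement for exact constraints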
Follow the installation steps to install Homebrew, which will allow you to install `git`:

$ brew install git

Then, follow the instructions to install `docker`.
You will need to build `linux/amd64` binaries for the OpenShift server; if you want to do the builds locally, you will need to follow the instructions to install the `go` programming language.
Follow the installation steps to install `git` for Windows and `docker`.

You will need to build `linux/amd64` binaries for the OpenShift server; if you want to do the builds locally, you will need to follow the instructions to install the `go` programming language.
Install `git` and `docker` with:
$ sudo dnf install git docker-latest
In order to do builds locally, install the following build dependencies:
$ sudo dnf install golang golang-race make gcc zip mercurial krb5-devel bsdtar bc rsync bind-utils file jq tito createrepo openssl gpgme gpgme-devel libassuan libassuan-devel
Install `git` and `docker` with:
$ sudo yum install git docker
In order to do builds locally, install the following build dependencies:
$ sudo yum install golang make gcc zip mercurial krb5-devel bsdtar bc rsync bind-utils file jq tito createrepo openssl gpgme gpgme-devel libassuan libassuan-devel
- Create a Go workspace directory:

  $ mkdir $HOME/go

- In your `.bashrc` or `.bash_profile` file, set a GOPATH and update your PATH:

  export GOPATH=$HOME/go
  export PATH=$PATH:$GOPATH/bin
  export OS_OUTPUT_GOPATH=1

- Open up a new terminal or source the changes in your current terminal, then clone your forked repo:

  $ mkdir -p $GOPATH/src/github.com/openshift
  $ cd $GOPATH/src/github.com/openshift
  $ git clone git://github.com/<forkid>/origin  # Replace <forkid> with your GitHub id
  $ cd origin
  $ git remote add upstream git://github.com/openshift/origin

- You are now ready to edit the source, build, and restart OpenShift to test your changes (a quick sanity check of this setup follows this list).
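As a quick sanity check, assuming the steps above were followed verbatim, confirm that the workspace and remotes are wired up correctly:

$ go env GOPATH  # should print the path to $HOME/go
$ cd $GOPATH/src/github.com/openshift/origin
$ git remote -v  # should list your fork as origin and openshift/origin as upstream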
In order to build a full release of Origin, containing binaries, RPMs, and container images, run:

$ hack/env make release

In order to make use of the binaries from your shell, add the build output directory to your `$PATH`:

$ export PATH="${PATH}:$( source hack/lib/init.sh; echo "${OS_OUTPUT_BINPATH}/$( os::util::host_platform )/" )"

See HACKING.md for a more in-depth approach to building releases and incremental artifacts.
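As a quick check that your shell now picks up the freshly built binaries (assuming the release build above succeeded):

$ which oc    # should resolve to the build output directory added above
$ oc version  # should report the version you just built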
Next, follow the set-up steps in cluster_up_down.md to start a cluster with `oc cluster up`. When starting the cluster, you will need to use container images. Images built locally with the `make release` and `hack/build-images.sh` scripts are tagged with the `git` commit you're working off of as well as `:latest`. If you have not built all of the images locally, ask `oc cluster up` for the `:latest` version and any missing images will be pulled down:
$ oc cluster up --version=latest
If you have built a full suite of images and want to ensure that only the images you just built are going to be used, ask `oc cluster up` for the version that corresponds to your `git` commit:
$ oc cluster up --version="$(git log -1 --pretty=%h )"
It’s possible to run an OpenShift multinode cluster on a single host thanks to docker-in-docker (dind). Cluster creation is cheaper since each node is a container instead of a VM. This was initially implemented to support multinode network testing, but has proven useful for development as well.
Prerequisites:

- A host running docker, with SELinux disabled.
- It must be acceptable to load some kernel modules (overlay and openvswitch) on the docker host.
- An environment with the tools necessary to build origin.
- A clone of the origin repo.
From the root of the origin repo, run the following command to launch a new cluster:
# -b to build origin, -i to build images
$ hack/dind-cluster.sh start -b -i
Once the cluster is up, source the cluster’s rc file to configure the environment to use it:
$ . dind-openshift.rc
Now the `oc` command can be used to interact with the cluster:
$ oc get nodes
It’s also possible to log in to the participating containers (openshift-master, openshift-node-1, openshift-node-2, etc.) via docker exec:
$ docker exec -ti openshift-master bash
While it is possible to manage the OpenShift daemon in the containers, dind cluster management is fast enough that the suggested approach is to manage at the cluster level instead.
Invoking the dind-cluster.sh script without arguments will provide a usage message:
Usage: hack/dind-cluster.sh {start|stop|restart|...}
Additional documentation of how a dind cluster is managed can be found at the top of the dind-cluster.sh script.
Attempting to start a cluster when one is already running will result in an error message from docker indicating that the named containers already exist. To redeploy a cluster, use the `start` command with the `-r` flag to remove the existing cluster first.
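For example, to remove the existing cluster and redeploy with freshly built binaries and images in a single invocation (assuming the flags combine as in the earlier example):

$ hack/dind-cluster.sh start -r -b -i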
It is possible to run networking tests against a running docker-in-docker cluster (i.e. after 'hack/dind-cluster.sh start' has been invoked):
$ OPENSHIFT_CONFIG_ROOT=dind test/extended/networking.sh
Since a cluster can only be configured with a single network plugin at a time, this method of invoking the networking tests will only validate the active plugin. It is possible to target all plugins by invoking the same script in 'ci mode', which is enabled by not setting a config root:
$ test/extended/networking.sh
In ci mode, for each networking plugin, networking.sh will create a new dind cluster, run the tests against that cluster, and tear down the cluster. The test dind clusters are isolated from any user-created clusters, and test output and artifacts of the most recent test run are retained in /tmp/openshift-extended-tests/networking.
It’s possible to override the default test regexes via the `NETWORKING_E2E_FOCUS` and `NETWORKING_E2E_SKIP` environment variables. These variables set the `-focus` and `-skip` arguments supplied to the ginkgo test runner.
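For example, to run only tests whose names match a given regex (the regex below is purely illustrative):

$ NETWORKING_E2E_FOCUS='isolation' test/extended/networking.sh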
To debug a test run with delve, make sure the `dlv` executable is installed in your path and run the tests with `DLV_DEBUG` set:
$ DLV_DEBUG=1 test/extended/networking.sh
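If you don't have `dlv` yet, one way to install it at the time of writing was via `go get` (the import path below is the upstream repository as of delve's pre-1.0 releases; verify it against delve's own documentation):

$ go get -u github.com/derekparker/delve/cmd/dlv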
It’s possible to run networking tests against any cluster. To target the default vm dev cluster:
$ OPENSHIFT_CONFIG_ROOT=dev test/extended/networking.sh
To target an arbitrary cluster, the config root (parent of openshift.local.config) can be supplied instead:
$ OPENSHIFT_CONFIG_ROOT=[cluster config root] test/extended/networking.sh
It’s also possible to supply the path to a kubeconfig file:
$ OPENSHIFT_TEST_KUBECONFIG=./admin.kubeconfig test/extended/networking.sh
See the script’s inline documentation for further details.
It’s possible to target the Kubernetes e2e tests against a running OpenShift cluster. From the root of an origin repo:
$ pushd ..
$ git clone http://github.com/kubernetes/kubernetes/
$ pushd kubernetes/build
$ ./run hack/build-go.sh
$ popd && popd
$ export KUBE_ROOT=../kubernetes
$ hack/test-kube-e2e.sh --ginkgo.focus="[regex]"
The previous sequence of commands will target a vagrant-based OpenShift cluster whose configuration is stored in the default location in the origin repo. To target a dind cluster, an additional environment variable needs to be set before invoking test-kube-e2e.sh:
$ export OS_CONF_ROOT=/tmp/openshift-dind-cluster/openshift
Right now you can see what’s happening with OpenShift development at:
Ready to play with some code? Hop down and read up on our roadmap for ideas on where you can contribute. You can also take a stab at any issue tagged with the help-wanted label.
If you are interested in contributing to Kubernetes directly:
Join the Kubernetes community and check out the contributing guide.
If you run into difficulties running OpenShift, start by reading through the troubleshooting guide.
Reach out to the OpenShift team and other community contributors through IRC and our mailing list:
- IRC: Hop onto the #openshift-dev channel on FreeNode.
- E-mail: Join the OpenShift developers' mailing list.