Docker and Kubernetes DevOps artifacts for the ForgeRock platform.
These samples are provided on an “as is” basis, without warranty of any kind, to the fullest extent permitted by law. ForgeRock does not warrant or guarantee the individual success developers may have in implementing the code on their development platforms or in production configurations. ForgeRock does not warrant, guarantee or make any representations regarding the use, results of use, accuracy, timeliness or completeness of any data or information relating to these samples. ForgeRock disclaims all warranties, expressed or implied, and in particular, disclaims all warranties of merchantability, and warranties related to the code, or any service or software related thereto. ForgeRock shall not be liable for any direct, indirect or consequential damages or costs of any type arising out of any action taken by you or others related to the samples.
The master branch targets features that are still in development and may not be stable. Please check out the branch that matches the targeted release.
For example, if you have the source checked out from git:
git checkout release/x.y.0
- docker/ - contains the Dockerfiles for the various containers.
- helm/ - contains Kubernetes helm charts to deploy those containers. See the helm/README.md
- etc/ - contains various scripts and utilities
- bin/ - contains utility shell scripts to deploy the helm charts
See the docker/README.md for instructions on how to build your own docker images.
The Draft ForgeRock DevOps Guide tracks the master branch.
The documentation for the current release can be found on ForgeRock Backstage.
- Knowledge of Kubernetes and Helm is assumed. Please read the helm documentation before proceeding.
- These instructions assume that minikube is running with at least 8 GB of RAM, and that helm and kubectl are installed.
- See bin/setup.sh for a sample setup script.
# Make sure the ingress controller add-on is enabled
minikube addons enable ingress
helm init --upgrade --service-account default
cd helm/
# Or, deploy from the local helm charts:
helm install -f my-custom.yaml frconfig
helm install amster
helm install --set instance=configstore ds
helm install openam
# Get your minikube IP
minikube ip
# You can add DNS entries to /etc/hosts pointing at the minikube IP. For example:
# 192.168.99.100 openam.default.example.com openidm.default.example.com openig.default.example.com
open http://openam.default.example.com
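The /etc/hosts step above can be scripted. A minimal sketch, assuming minikube and the sample default-namespace FQDNs shown above (the `make_hosts_line` helper is not part of this repository):

```shell
# Build an /etc/hosts line for the sample FQDNs from a cluster IP.
make_hosts_line() {
  echo "$1 openam.default.example.com openidm.default.example.com openig.default.example.com"
}

# Usage (requires sudo to modify /etc/hosts):
#   make_hosts_line "$(minikube ip)" | sudo tee -a /etc/hosts
```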
The individual charts all have parameters that you can override to control the deployment - for example, setting the domain FQDN.
Please refer to the chart settings.
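Overrides can be collected in a values file and passed with -f, as in the frconfig example above. The key below is illustrative only - check each chart's values.yaml for the real parameter names:

```shell
# Write a hypothetical overrides file (the key name is an assumption,
# not necessarily what the charts actually use).
cat > my-custom.yaml <<'EOF'
domain: .example.com
EOF

# Deploy a chart with the overrides (run from the helm/ directory):
#   helm install -f my-custom.yaml openam
```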
If you do not want to use the 'default' namespace, set your namespace using:
kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
The kubectx and kubens utilities are recommended.
Refer to the troubleshooting chapter in the DevOps Guide.
Troubleshooting Suggestions:
- The script bin/debug-log.sh generates an HTML file with log output, which is useful for troubleshooting.
- Simplify. Deploy a single helm chart at a time (for example, opendj), and make sure that chart is working correctly before deploying the next one. The `bin/deploy.sh` script and the cmp-platform composite charts are provided as a convenience, but they can make it more difficult to narrow down an issue in a single chart.
- Describe a failing pod using `kubectl get pods; kubectl describe pod pod-xxx`.
- Look at the event log for failures. For example, the image can't be pulled.
- Examine all the init containers. Did each init container complete with a zero (success) exit code? If not, examine the logs from the failed init container using `kubectl logs pod-xxx -c init-container-name`.
- Did the main container enter a crash loop? Retrieve the logs using `kubectl logs pod-xxx`.
- Did a docker image fail to be pulled? Check for the correct docker image name and tag. If you are using a private registry, verify that your image pull secret is correct.
- You can use `kubectl logs -p pod-xxx` to examine the logs from the previous (exited) instance of a pod's container.
- A common problem with the 6.0 charts is that the `git-ssh-secret` has not been properly created, or an existing secret is present and the helm chart is attempting to recreate it. Look at the init logs where git is used (amster, openidm, openig); you may find errors when attempting to clone the forgeops configuration repo. Even if you are cloning the public read-only forgeops-init repo, you still need a "dummy" git-ssh-key. (This process is being simplified for 6.5.)
- If the pods are coming up successfully but you can't reach the service, you likely have ingress issues:
  - Use `kubectl describe ing` and `kubectl get ing ingress-name -o yaml` to view the ingress object.
  - Describe the service using `kubectl get svc; kubectl describe svc xxx`. Does the service have an `Endpoints:` binding? If the endpoint binding is not present, the service did not match any running pods.
- Determine if your cluster is having issues (not enough memory, failing nodes). Watch for pods killed with OOM (Out of Memory). Commands to check: `kubectl describe node` and `kubectl get events -w`.
- Most images provide the ability to exec into the pod using bash to examine processes and logs. Use `kubectl exec pod-name -it bash`.
- For 6.5, the Kubernetes cluster must support a read-write-many (RWX) volume type, such as NFS, or minikube's hostPath provisioner. You can describe persistent volume claims using `kubectl describe pvc`. If a PVC is stuck in the Pending state, your cluster may not support the required storage class.