Otomi adds developer- and operations-centric tools, automation, and self-service on top of Kubernetes in any infrastructure or cloud, so you can code, build, and run containerized applications.
Developers - With easy self-service to let them focus on their apps only
- Build OCI compliant images from application code
- Deploy containerized workloads the GitOps way using built-in or custom golden path templates
- Automatically update container images of workloads
- Publicly expose applications
- Get instant access to logs, metrics and traces
- Store charts and images in a private registry
- Configure network policies, response headers and CNAMEs
- Manage secrets
- Create private Git repositories and custom pipelines
Platform engineers - To set up a Kubernetes-based platform and provide a paved road to production
- Create your platform profile and deploy to any K8s
- Onboard development teams in a comprehensive multi-tenant setup and make them self-serving
- Get all the required capabilities in an integrated and automated way
- Ensure governance with security policies
- Implement zero-trust networking
- Change the desired state of the platform based on Configuration-as-Code
- Support multi- and hybrid cloud scenarios
- Prevent cloud provider lock-in
- Implement full observability
To install Otomi, make sure to have a K8s cluster running with at least:
- Version 1.25, 1.26, or 1.27
- A node pool with at least 8 vCPU and 16GB+ RAM (more resources might be required based on the activated capabilities)
- Calico CNI installed (or any other CNI that supports K8s network policies)
- A default storage class configured
- When using the custom provider, make sure the K8s LoadBalancer Service created by Otomi can obtain an external IP (using a cloud load balancer or MetalLB)
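For bare-metal or on-prem clusters without a cloud load balancer, MetalLB is a common way to hand out that external IP. A minimal layer-2 sketch (the address range and resource names are assumptions; substitute a free range from your own network):

```yaml
# Hypothetical MetalLB layer-2 configuration; the IP range is an example only.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: otomi-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: otomi-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - otomi-pool
```

With this in place, the LoadBalancer Service created by Otomi should receive an address from the pool automatically.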
NOTE: Install Otomi with DNS to unlock its full potential. Check otomi.io for more info.
Add the Helm repository:
```shell
helm repo add otomi https://otomi.io/otomi-core
helm repo update
```
and then install the Helm chart:
```shell
helm install otomi otomi/otomi \
  --set cluster.name=$CLUSTERNAME \
  --set cluster.provider=$PROVIDER
# use 'azure', 'aws', 'google', 'digitalocean', 'ovh', 'vultr', 'scaleway',
# 'civo', or 'custom' for any other cloud or on-prem K8s
```
Once the installer job has completed, follow the activation steps.
The self-service portal (Otomi Console) offers a seamless user experience for developers and platform administrators. Platform administrators can use Otomi Console to enable and configure platform capabilities and onboard development teams. Developers can use Otomi Console to build images, deploy applications, expose services, configure CNAMEs, configure network policies, and manage secrets. Otomi Console also provides direct and context-aware access to platform capabilities like code repositories, registries, logs, metrics, traces, and dashboards. Next to the web-based self-service, both developers and admins can start a Cloud Shell and run CLI commands.
When Otomi is installed, the desired state of the platform is stored in the Desired State Store (the otomi/values repo in the local Git repository). Changes made through the Console will be reflected in the repo.
The otomi/charts Git repo includes a set of built-in Helm charts that are used to create workloads in the Console. You can also add your own charts and offer them to the users of the platform.
All changes made through the Console are validated by the control plane (otomi-api) and then committed to the state store. This automatically triggers the platform to synchronize the desired state with the actual state of the platform.
The automation is used to synchronize desired state with the state of applications like Keycloak, Harbor and Gitea.
The platform offers a set of Kubernetes applications for all the required capabilities. Core applications are always installed, optional applications can be activated. When an application is activated, the application will be installed based on default configuration. Default configuration can be adjusted using the Console.
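As a sketch of what this Configuration-as-Code looks like, activating an optional application and adjusting one of its defaults might be recorded in the values repo roughly as follows (the key names here are illustrative assumptions, not the authoritative schema; see otomi.io for the real values reference):

```yaml
# Hypothetical excerpt from the otomi/values Desired State Store.
# Key names are illustrative only.
apps:
  prometheus:
    enabled: true        # activate an optional application
  grafana:
    enabled: true
    resources:
      requests:
        memory: 256Mi    # adjust a default after activation
```

Committing such a change (directly or via the Console) triggers the synchronization described above.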
Core Applications (that are always installed):
- Istio: The service mesh framework with end-to-end transit encryption
- Keycloak: Identity and access management for modern applications and services
- Cert Manager: Bring your own wildcard certificate or request one from Let's Encrypt
- Nginx Ingress Controller: Ingress controller for Kubernetes
- External DNS: Synchronize exposed ingresses with DNS providers
- Drone: Continuous integration platform built on Docker
- Gitea: Self-hosted Git service
Optional Applications (that you can activate to compose your ideal platform):
- Velero: Back up and restore your Kubernetes cluster resources and persistent volumes
- Argo CD: Declarative continuous deployment
- Knative: Deploy and manage serverless workloads
- Kaniko: Build container images from a Dockerfile
- Prometheus: Collecting container application metrics
- Grafana: Visualize metrics, logs, and traces from multiple sources
- Grafana Loki: Collecting container application logs
- Harbor: Container image registry with role-based access control, image scanning, and image signing
- HashiCorp Vault: Manage Secrets and Protect Sensitive Data
- OPA/Gatekeeper: Policy-based control for cloud-native environments
- Jaeger: End-to-end distributed tracing and monitor for complex distributed systems
- Kiali: Observe Istio service mesh relations and connections
- Minio: High performance Object Storage compatible with Amazon S3 cloud storage service
- Trivy: Kubernetes-native security toolkit
- Thanos: HA Prometheus setup with long term storage capabilities
- Falco: Cloud Native Runtime Security
- Opencost: Cost monitoring for Kubernetes
- Tekton Pipeline: K8s-style resources for declaring CI/CD pipelines
- Tekton Triggers: Trigger pipelines from event payloads
- Tekton dashboard: Web-based UI for Tekton Pipelines and Tekton Triggers
- Paketo build packs: Cloud Native Buildpack implementations for popular programming language ecosystems
- KubeClarity: Detect vulnerabilities of container images
- Cloudnative-pg: Open source operator designed to manage PostgreSQL workloads
- Grafana Tempo: High-scale distributed tracing backend
- OpenTelemetry: Instrument, generate, collect, and export telemetry data to help you analyze your software’s performance and behavior
Otomi can be installed on any Kubernetes cluster. At this time, the following providers are supported:
- aws for AWS Elastic Kubernetes Service
- azure for Azure Kubernetes Service
- google for Google Kubernetes Engine
- linode for Linode Kubernetes Engine
- ovh for OVH Cloud
- vultr for Vultr Kubernetes Engine
- scaleway for Scaleway Kapsule
- civo for Civo Cloud K3s
- custom for any other cloud/infrastructure
- Activate capabilities to compose your ideal platform
- Generate resources for ArgoCD, Tekton, Istio and Ingress based on built-in golden templates
- BYO golden templates and deploy them the GitOps way using ArgoCD
- Scan container images for vulnerabilities (at the gate and at runtime)
- Apply security policies (at the gate and at runtime)
- Advanced ingress architecture using Istio, Nginx and Oauth2
- Configure network policies for internal ingress and external egress
- Deploy workloads the GitOps way without writing any YAML
- Create secrets and use them in workloads
- Role-based access to all integrated applications
- Comprehensive multi-tenant setup
- Automation tasks for Harbor, Keycloak, ArgoCD, Vault, Velero, Gitea and Drone
- Expose services on multiple (public/private) networks
- Automated Istio resource configuration
- SOPS/KMS for encryption of sensitive configuration values
- BYO IdP, DNS and/or CA
- Full observability (logs, metrics, traces, rules, alerts)
- Cloud shell with integrated cli tools like velero and k9s
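To illustrate the zero-trust posture these features build on, a plain Kubernetes NetworkPolicy that denies all ingress to a namespace by default looks like this (Otomi configures such policies for you through the Console; this standalone example, with a hypothetical team namespace, is only for orientation):

```yaml
# Default-deny ingress for every pod in the namespace: with an empty
# podSelector and Ingress listed in policyTypes but no ingress rules,
# all incoming traffic is blocked until explicit allow policies are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-demo   # hypothetical team namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Per-service allow rules can then be layered on top, which is the pattern behind the internal ingress policies listed above.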
Otomi open source consists of the following projects:
- Otomi Core (this project): The heart of Otomi
- Otomi Tasks: Autonomous jobs orchestrated by Otomi Core
- Otomi Clients: Factory to build and publish OpenAPI clients used by otomi-tasks
Check out the dev docs index for developer documentation or go to otomi.io for more detailed documentation.
If you wish to contribute please read our Contributor Code of Conduct and Contribution Guidelines.
If you want to say thank you and/or support the active development of Otomi:
- Star the Otomi project on GitHub
- Feel free to write articles about the project on dev.to, Medium, or your personal blog and share your experiences
This project exists thanks to all the people who have contributed.
Otomi is licensed under the Apache 2.0 License.