origoss/tigera-enterprise-on-sks

Table of Contents

  1. SKS Cluster Deployment
    1. Deploy the demo cluster
    2. Download kubeconfig
    3. Install Longhorn
      1. Update the StorageClass name
  2. Deploying Calico Enterprise
    1. Deploy the Calico operator
    2. Deploy the Prometheus Operator
    3. Install the pull secrets
      1. Calico pull secrets
      2. Prometheus pull secrets
    4. Install Calico custom resources
    5. Create SecurityGroup
    6. Create nodepool
    7. Deploy the License
    8. Calico Operator Workaround
      1. The SKS Kubernetes API server
      2. Bypass network policy
  3. Accessing the system
    1. Create a user
    2. Create an authentication token
    3. Port forward to Tigera Manager

SKS Cluster Deployment

Deploy the demo cluster

exo compute sks create ceosks-poc \
    --no-cni                      \
    --nodepool-size 0             \
    --zone at-vie-1

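To confirm the control plane is up before continuing, the cluster can be inspected with the exo CLI (a minimal check; the exact subcommand may differ between CLI versions):

exo compute sks show ceosks-poc --zone at-vie-1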

Download kubeconfig

rm -f "$KUBECONFIG"
exo compute sks kubeconfig ceosks-poc admin \
    --zone at-vie-1                         \
    -g system:masters                       \
    -t $((86400 * 7)) > "$KUBECONFIG"
chmod 0600 "$KUBECONFIG"

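These commands assume the KUBECONFIG environment variable points at the file to write. Since the cluster has no worker nodes yet, a quick sanity check is to query the control plane directly:

kubectl cluster-info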

Install Longhorn

helm install my-longhorn longhorn \
     --version 1.5.1              \
     --repo https://charts.longhorn.io

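Longhorn is installed into the default namespace here, and its pods stay Pending until worker nodes join later in this walkthrough. The Helm release itself can be checked with:

helm status my-longhorn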

Update the StorageClass name

Calico Enterprise expects a StorageClass named tigera-elasticsearch for its Elasticsearch log storage, so the default Longhorn StorageClass is renamed.

The file storageclass-config.yaml has the content:

apiVersion: v1
data:
  storageclass.yaml: |
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: tigera-elasticsearch
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    reclaimPolicy: "Delete"
    volumeBindingMode: Immediate
    parameters:
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"
      fromBackup: ""
      fsType: "ext4"
      dataLocality: "disabled"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: my-longhorn
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/instance: my-longhorn
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: longhorn
    app.kubernetes.io/version: v1.5.1
    helm.sh/chart: longhorn-1.5.1
  name: longhorn-storageclass
  namespace: default

kubectl apply -f storageclass-config.yaml \
              --server-side --force-conflicts

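Once the Longhorn components are running they recreate their StorageClass from this ConfigMap, so the renamed class should eventually show up as the cluster default:

kubectl get storageclass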

Deploying Calico Enterprise

Deploy the Calico operator

kubectl create -f https://downloads.tigera.io/ee/v3.17.2/manifests/tigera-operator.yaml

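The operator runs in the tigera-operator namespace; its pod stays Pending until worker nodes exist:

kubectl get pods -n tigera-operator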

Deploy the Prometheus Operator

kubectl create -f https://downloads.tigera.io/ee/v3.17.2/manifests/tigera-prometheus-operator.yaml

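This manifest also ships the Prometheus operator CRDs, so the monitoring.coreos.com API group should now be registered:

kubectl api-resources --api-group=monitoring.coreos.com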

Install the pull secrets

Pulling the Calico Enterprise container images requires authenticating against the Tigera container registry, so the pull secret has to be created in both the tigera-operator and tigera-prometheus namespaces.

Calico pull secrets

kubectl create secret generic tigera-pull-secret \
        --type=kubernetes.io/dockerconfigjson    \
	-n tigera-operator                       \
	--from-file=.dockerconfigjson=tigera-partners-origoss-auth.json

Prometheus pull secrets

kubectl create secret generic tigera-pull-secret \
        --type=kubernetes.io/dockerconfigjson    \
	-n tigera-prometheus                     \
        --from-file=.dockerconfigjson=tigera-partners-origoss-auth.json

kubectl patch deployment -n tigera-prometheus calico-prometheus-operator \
        -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name": "tigera-pull-secret"}]}}}}'

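Both pull secrets can be verified before moving on:

kubectl get secret tigera-pull-secret -n tigera-operator
kubectl get secret tigera-pull-secret -n tigera-prometheus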

Install Calico custom resources

kubectl create -f https://downloads.tigera.io/ee/v3.17.2/manifests/custom-resources.yaml

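The manifest creates the operator.tigera.io resources (Installation, LogStorage, Manager, and so on) that the operator reconciles, for example:

kubectl get installations.operator.tigera.io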

Create SecurityGroup

This security group opens the ports Calico Enterprise needs between the worker nodes: TCP 179 (BGP), UDP 4789 (VXLAN), TCP 5473 (Typha), TCP 10250 (kubelet) and UDP 51820-51821 (WireGuard), plus TCP/UDP 30000-32767 (NodePort services) from any source.

exo compute security-group create ceosks-poc

exo compute security-group rule add ceosks-poc \
                --security-group ceosks-poc    \
		--protocol tcp                 \
		--port 179
exo compute security-group rule add ceosks-poc \
                --security-group ceosks-poc    \
		--protocol udp                 \
		--port 4789
exo compute security-group rule add ceosks-poc \
                --security-group ceosks-poc    \
		--protocol tcp                 \
		--port 5473
exo compute security-group rule add ceosks-poc \
                --security-group ceosks-poc    \
		--protocol tcp                 \
		--port 10250
exo compute security-group rule add ceosks-poc \
                --network 0.0.0.0/0            \
		--protocol tcp                 \
		--port 30000-32767
exo compute security-group rule add ceosks-poc \
                --network 0.0.0.0/0            \
		--protocol udp                 \
		--port 30000-32767
exo compute security-group rule add ceosks-poc \
                --security-group ceosks-poc    \
		--protocol udp                 \
		--port 51820-51821

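The resulting rules can be reviewed with the exo CLI (assuming the security-group show subcommand of your CLI version):

exo compute security-group show ceosks-poc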

Create nodepool

exo compute sks nodepool add \
    --zone at-vie-1 ceosks-poc ceosks-poc-worker \
    --size=2 \
    --instance-type c6f99499-7f59-4138-9427-a09db13af2bc \
    --security-group ceosks-poc

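Once the instances boot they register with the control plane; the workers should become Ready as soon as Calico is running on them:

kubectl get nodes -o wide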

Deploy the License

license.yml contains the Calico Enterprise license key provided by Tigera.

kubectl create -f license.yml

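Once the Calico Enterprise API server is ready, the license can be inspected through the projectcalico.org API (sketch):

kubectl get licensekeys.projectcalico.org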

Calico Operator Workaround

The Calico NetworkPolicies generated by the Calico operator prevent some components from reaching the SKS Kubernetes API server.

The SKS Kubernetes API server

kubectl describe endpoints/kubernetes -n default

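The externally reachable address and port can also be extracted from this Endpoints object with a jsonpath query (sketch):

kubectl get endpoints kubernetes -n default \
        -o jsonpath='{.subsets[0].addresses[0].ip}:{.subsets[0].ports[0].port}'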

The API server can be reached at https://194.182.185.29:30876. This endpoint is not allowed by the default Calico network policies.

Bypass network policy

The file bypass-networkpolicy.yaml has the content:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-tigera.allow-sks-apiserver
spec:
  order: 0
  tier: allow-tigera
  types:
    - Egress
  egress:
    - action: Allow
      protocol: TCP
      destination:
        ports:
          - 30876

kubectl apply -f bypass-networkpolicy.yaml -n calico-system
kubectl apply -f bypass-networkpolicy.yaml -n tigera-eck-operator

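With egress to the API server allowed, the remaining components should converge; overall progress can be followed with:

watch kubectl get tigerastatus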

Accessing the system

Create a user

kubectl create sa tigera-admin -n default
kubectl create clusterrolebinding tigera-admin \
        --clusterrole tigera-network-admin     \
        --serviceaccount default:tigera-admin

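The binding can be sanity-checked with an impersonated access review (the exact resources granted depend on the tigera-network-admin role shipped with your version):

kubectl auth can-i create networkpolicies.projectcalico.org \
        --as=system:serviceaccount:default:tigera-admin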

Create an authentication token

kubectl create token tigera-admin -n default

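If needed, the token can be captured in a shell variable and pasted into the Tigera Manager login screen later:

TOKEN=$(kubectl create token tigera-admin -n default)
echo "$TOKEN"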

Port forward to Tigera Manager

kubectl port-forward -n tigera-manager service/tigera-manager 9443:9443

Access the Tigera Manager dashboard at https://localhost:9443 and log in with the token created in the previous step.
