docs: fork docs for Sidero 0.6
Revert changes for deployment strategy, as they're not actually in 0.5.

Signed-off-by: Andrey Smirnov <[email protected]>
smira committed Apr 13, 2022
1 parent 061ee8e commit 6c81518
Showing 42 changed files with 2,951 additions and 9 deletions.
10 changes: 7 additions & 3 deletions website/config.toml
@@ -85,13 +85,13 @@ copyright = "Sidero Labs, Inc."
# This menu appears only if you have at least one [params.versions] set.
version_menu = "Releases"

# Flag used in the "version-banner" partial to decide whether to display a
# banner on every page indicating that this is an archived version of the docs.
# Set this flag to "true" if you want to display the banner.
# archived_version = false

# The version number for the version of the docs represented in this doc set.
# Used in the "version-banner" partial to display a version number for the
# current doc set.
# version = "0.6"

@@ -124,6 +124,10 @@ offlineSearch = false
# Enable syntax highlighting and copy buttons on code blocks with Prism
prism_syntax_highlighting = false

[[params.versions]]
url = "/v0.6"
version = "v0.6 (pre-release)"

[[params.versions]]
url = "/v0.5"
version = "v0.5 (latest)"
@@ -170,7 +174,7 @@ no = 'Sorry to hear that. Please <a href="https://github.com/USERNAME/REPOSITORY
yes = 'Glad to hear it! Please <a href="https://github.com/USERNAME/REPOSITORY/issues/new">tell us how we can improve</a>.'

# Adds a reading time to the top of each doc.
# If you want this feature, but occasionally need to remove the Reading time from a single page,
# add "hide_readingtime: true" to the page's front matter
[params.ui.readingtime]
enable = false
1 change: 0 additions & 1 deletion website/content/v0.5/Getting Started/install-clusterapi.md
@@ -24,7 +24,6 @@ options.

```bash
export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true
export SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY=Recreate
export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=192.168.1.150
export SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT=192.168.1.150

1 change: 0 additions & 1 deletion website/content/v0.5/Guides/bootstrapping.md
@@ -147,7 +147,6 @@ To install Sidero and the other Talos providers, simply issue:
```bash
SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true \
SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY=Recreate \
SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=$PUBLIC_IP \
clusterctl init -b talos -c talos -i sidero
```
2 changes: 1 addition & 1 deletion website/content/v0.5/Guides/sidero-on-rpi4.md
@@ -104,7 +104,7 @@ kubectl get nodes
Install Sidero with host network mode, exposing the endpoints on the node's address:

```bash
SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY=Recreate SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=${SIDERO_IP} clusterctl init -i sidero -b talos -c talos
SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=${SIDERO_IP} clusterctl init -i sidero -b talos -c talos
```

Watch the progress of installation with:
2 changes: 1 addition & 1 deletion website/content/v0.5/Overview/installation.md
@@ -14,7 +14,7 @@ Sidero supports several variables to configure the installation, these variables
variables or as variables in the `clusterctl` configuration:

- `SIDERO_CONTROLLER_MANAGER_HOST_NETWORK` (`false`): run `sidero-controller-manager` on host network
- `SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY` (`RollingUpdate`): strategy to use when updating `sidero-controller-manager`, use `Recreate` when using a single node and `SIDERO_CONTROLLER_MANAGER_HOST_NETWORK` is `true`
`SIDERO_CONTROLLER_MANAGER_HOST_NETWORK` is `true`
- `SIDERO_CONTROLLER_MANAGER_API_ENDPOINT` (empty): specifies the IP address controller manager API service can be reached on, defaults to the node IP (TCP)
- `SIDERO_CONTROLLER_MANAGER_API_PORT` (8081): specifies the port controller manager can be reached on
- `SIDERO_CONTROLLER_MANAGER_CONTAINER_API_PORT` (8081): specifies the controller manager internal container port
4 changes: 4 additions & 0 deletions website/content/v0.6/Getting Started/_index.md
@@ -0,0 +1,4 @@
---
title: "Getting Started"
weight: 20
---
125 changes: 125 additions & 0 deletions website/content/v0.6/Getting Started/create-workload.md
@@ -0,0 +1,125 @@
---
description: "Create a Workload Cluster"
weight: 8
title: "Create a Workload Cluster"
---

Once created and accepted, you should see the servers that make up your ServerClasses appear as "available":

```bash
$ kubectl get serverclass
NAME AVAILABLE IN USE
any ["00000000-0000-0000-0000-d05099d33360"] []
```

## Generate Cluster Manifests

We are now ready to generate the configuration manifest templates for our first workload
cluster.

There are several configuration parameters that should be set in order for the templating to work properly:

- `CONTROL_PLANE_ENDPOINT`: The endpoint used for the Kubernetes API server (e.g. `https://1.2.3.4:6443`).
This is the equivalent of the `endpoint` you would specify in `talosctl gen config`.
There are a variety of ways to configure a control plane endpoint.
Some common ways for an HA setup are to use DNS, a load balancer, or BGP.
A simpler method is to use the IP of a single node.
This has the disadvantage of being a single point of failure, but it can be a simple way to get running.
- `CONTROL_PLANE_SERVERCLASS`: The server class to use for control plane nodes.
- `WORKER_SERVERCLASS`: The server class to use for worker nodes.
- `KUBERNETES_VERSION`: The version of Kubernetes to deploy (e.g. `v1.21.1`).
- `TALOS_VERSION`: The version of Talos to deploy (e.g. `v0.14.0`).
- `CONTROL_PLANE_PORT`: The port used for the Kubernetes API server (typically port 6443).

For instance:

```bash
export CONTROL_PLANE_SERVERCLASS=any
export WORKER_SERVERCLASS=any
export TALOS_VERSION=v0.14.0
export KUBERNETES_VERSION=v1.22.2
export CONTROL_PLANE_PORT=6443
export CONTROL_PLANE_ENDPOINT=1.2.3.4

clusterctl generate cluster cluster-0 -i sidero > cluster-0.yaml
```

Take a look at this new `cluster-0.yaml` manifest and make any changes as you
see fit.
Feel free to adjust the `replicas` field of the `TalosControlPlane` and `MachineDeployment` objects to match the number of machines you want in your controlplane and worker sets, respectively.
`MachineDeployment` (worker) count is allowed to be 0.

Of course, these may also be scaled up or down _after_ they have been created,
as shown below.
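
Since `MachineDeployment` supports the standard scale subresource, a later
scale-up of the workers can be a one-liner (a sketch; the deployment name
`cluster-0-workers` is hypothetical, so check `kubectl get machinedeployments`
for the real one):

```bash
# Scale the workers of cluster-0 to three replicas via the management cluster.
kubectl --context=sidero-demo scale machinedeployment cluster-0-workers --replicas=3
```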

## Create the Cluster

When you are satisfied with your configuration, go ahead and apply it to Sidero:

```bash
kubectl apply -f cluster-0.yaml
```

At this point, Sidero will allocate Servers according to the requests in the
cluster manifest.
Once allocated, each of those machines will be installed with Talos, given their
configuration, and form a cluster.

You can watch the progress of the Servers being selected:

```bash
watch kubectl --context=sidero-demo \
get servers,machines,clusters
```

First, you should see the Cluster created in the `Provisioning` phase.
Once the Cluster is `Provisioned`, a Machine will be created in the
`Provisioning` phase.

![machine provisioning](/images/sidero-cluster-start.png)

During the `Provisioning` phase, a Server will become allocated, the hardware
will be powered up, Talos will be installed onto it, and it will be rebooted
into Talos.
Depending on the hardware involved, this may take several minutes.

Eventually, the Machine should reach the `Running` phase.

![machine_running](/images/sidero-cluster-up.png)

The initial controlplane Machine will always be started first.
Any additional nodes will be started after that and will join the cluster when
they are ready.

## Retrieve the Talosconfig

In order to interact with the new machines (outside of Kubernetes), you will
need to obtain the `talosctl` client configuration, or `talosconfig`.
You can do this by retrieving the secret from the Sidero
management cluster:

```bash
kubectl --context=sidero-demo \
get secret \
cluster-0-talosconfig \
-o jsonpath='{.data.talosconfig}' \
| base64 -d \
> cluster-0-talosconfig
```
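
As a quick sanity check, the retrieved `talosconfig` can be used to query one
of the new machines directly (substitute a real control plane IP for the
placeholder):

```bash
# Ask a control plane node for its Talos version over the Talos API.
talosctl --talosconfig cluster-0-talosconfig --nodes <CONTROL_PLANE_IP> version
```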

## Retrieve the Kubeconfig

With the talosconfig obtained, the workload cluster's kubeconfig can be retrieved in the normal Talos way:

```bash
talosctl --talosconfig cluster-0-talosconfig --nodes <CONTROL_PLANE_IP> kubeconfig
```

## Check access

Now, you should have two clusters available: your management cluster
(`sidero-demo`) and your workload cluster (`cluster-0`).

```bash
kubectl --context=sidero-demo get nodes
kubectl --context=cluster-0 get nodes
```
41 changes: 41 additions & 0 deletions website/content/v0.6/Getting Started/expose-services.md
@@ -0,0 +1,41 @@
---
description: "A guide for bootstrapping Sidero management plane"
weight: 6
title: "Expose Sidero Services"
---

> If you built your cluster as specified in the [Prerequisite: Kubernetes] section in this tutorial, your services are already exposed and you can skip this section.

There are three external Services which Sidero serves and which must be made
reachable by the servers which it will be driving.

For most servers, TFTP (port 69/udp) will be needed.
This is used for PXE booting, both BIOS and UEFI.
Being a primitive UDP protocol, many load balancers do not support TFTP.
Instead, solutions such as [MetalLB](https://metallb.universe.tf) may be used to expose TFTP over a known IP address.
For servers which support UEFI HTTP Network Boot, TFTP need not be used.
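
To verify TFTP reachability from another machine on the network, a plain TFTP
client can fetch the iPXE binary (a sketch assuming the `tftp-hpa` client and
the example endpoint IP used later in this guide):

```bash
# Fetch the iPXE loader over TFTP; success means port 69/udp is reachable.
tftp 192.168.1.150 -c get ipxe.efi
```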

The kernel, initrd, and all configuration assets are served from the HTTP service
(port 8081/tcp).
It is needed for all servers, but since it is HTTP-based, it
can be easily proxied, load balanced, or run through an ingress controller.

The SideroLink overlay WireGuard network requires UDP port 51821 to be open.
As with TFTP, many load balancers do not support WireGuard's UDP traffic, so a
solution such as MetalLB may be used to expose it instead.
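
To see how all three endpoints are currently exposed, list the Services in
Sidero's namespace (assuming the default `sidero-system` namespace):

```bash
# Show the Services backing TFTP, HTTP, and SideroLink, with their ports.
kubectl --namespace sidero-system get services
```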

The main thing to keep in mind is that the services **MUST** match the IP or
hostname specified by the `SIDERO_CONTROLLER_MANAGER_API_ENDPOINT` and
`SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT` environment
variables (or configuration parameters) when you installed Sidero.

It is a good idea to verify that the services are exposed as you think they
should be.

```bash
$ curl -I http://192.168.1.150:8081/tftp/ipxe.efi
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 1020416
Content-Type: application/octet-stream
```
73 changes: 73 additions & 0 deletions website/content/v0.6/Getting Started/import-machines.md
@@ -0,0 +1,73 @@
---
description: "A guide for bootstrapping Sidero management plane"
weight: 7
title: "Import Workload Machines"
---

At this point, any servers on the same network as Sidero should network boot from Sidero.
To register a server with Sidero, simply turn it on and Sidero will do the rest.
Once the registration is complete, you should see the servers registered with `kubectl get servers`:

```bash
$ kubectl get servers -o wide
NAME HOSTNAME ACCEPTED ALLOCATED CLEAN
00000000-0000-0000-0000-d05099d33360 192.168.1.201 false false false
```

## Accept the Servers

Note in the output above that the newly registered servers are not `accepted`.
In order for a server to be eligible for consideration, it _must_ be marked as `accepted`.
Before a `Server` is accepted, no write action will be performed against it.
This default is for safety (don't accidentally delete something just because it
was plugged in) and security (make sure you know the machine before it is given
credentials to communicate).

> Note: if you are running in a safe environment, you can configure Sidero to
> automatically accept new machines.
For more information on server acceptance, see the [server docs](../../resource-configuration/servers/#server-acceptance).
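
Accepting a server amounts to setting `spec.accepted` to `true` on its
`Server` resource; one way to do that for the example server above:

```bash
# Mark the server as accepted so Sidero may allocate and manage it.
kubectl patch server 00000000-0000-0000-0000-d05099d33360 \
  --type='json' -p '[{"op": "replace", "path": "/spec/accepted", "value": true}]'
```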

## Create ServerClasses

By default, Sidero comes with a single ServerClass `any` which matches any
(accepted) server.
This is sufficient for this demo, but you may wish to have
more flexibility by defining your own ServerClasses.

ServerClasses allow you to group machines which are sufficiently similar to
allow for unnamed allocation.
This is analogous to cloud providers using such classes as `m3.large` or
`c2.small`, but the names are free-form and only need to make sense to you.

For more information on ServerClasses, see the [ServerClass
docs](../../resource-configuration/serverclasses/).
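
As a sketch of what a custom class can look like (the qualifier shown is
illustrative; see the ServerClass docs for the full schema):

```bash
# Create a hypothetical ServerClass matching accepted servers with Intel CPUs.
cat <<EOF | kubectl apply -f -
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: intel
spec:
  qualifiers:
    cpu:
      - manufacturer: Intel(R) Corporation
EOF
```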

## Hardware differences

In baremetal systems, there are commonly certain small features and
configurations which are unique to the hardware.
In many cases, such small variations do not require special configuration, but
some do.

If hardware-specific differences do mandate configuration changes, we need a way
to keep those changes local to the hardware specification so that at the higher
level, a Server is just a Server (or a server in a ServerClass is just a Server
like all the others in that Class).

The most common variations seem to be the installation disk and the console
serial port.

Some machines have NVMe drives, which show up as something like `/dev/nvme0n1`.
Others may be SATA or SCSI, which show up as something like `/dev/sda`.
Some machines use `/dev/ttyS0` for the serial console; others `/dev/ttyS1`.

Configuration patches can be applied to either Servers or ServerClasses, and
those patches will be applied to the final machine configuration for those
nodes without having to know anything about those nodes at the allocation level.
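
For instance, pinning the install disk on a single `Server` can be done with a
JSON patch against its `configPatches` field (the server ID and disk path here
are illustrative):

```bash
# Pin the Talos install disk for one server via its configPatches field.
kubectl patch server 00000000-0000-0000-0000-d05099d33360 --type='json' \
  -p '[{"op": "add", "path": "/spec/configPatches", "value": [{"op": "replace", "path": "/machine/install/disk", "value": "/dev/nvme0n1"}]}]'
```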

For examples of install disk patching, see the [Installation Disk
doc](../../resource-configuration/servers/#installation-disk).

For more information about patching in general, see the [Patching
Guide](../../guides/patching).
61 changes: 61 additions & 0 deletions website/content/v0.6/Getting Started/index.md
@@ -0,0 +1,61 @@
---
description: "Overview"
weight: 1
title: "Overview"
---

This tutorial will walk you through a complete Sidero setup and the formation,
scaling, and destruction of a workload cluster.

To complete this tutorial, you will need a few things:

- ISC DHCP server.
While any DHCP server will do, we will be presenting the
configuration syntax for ISC DHCP.
This is the standard DHCP server available on most Linux distributions (NOT
dnsmasq) as well as on the Ubiquiti EdgeRouter line of products.
- Machine or Virtual Machine on which to run Sidero itself.
The requirements for this machine are very low: it can be x86 or arm64,
and it should have at least 4GB of RAM.
- Machines on which to run Kubernetes clusters.
These have the same minimum specifications as the Sidero machine.
- Workstation on which `talosctl`, `kubectl`, and `clusterctl` can be run.

## Steps

1. Prerequisite: CLI tools
1. Prerequisite: DHCP server
1. Prerequisite: Kubernetes
1. Install Sidero
1. Expose services
1. Import workload machines
1. Create a workload cluster
1. Scale the workload cluster
1. Destroy the workload cluster
1. Optional: Pivot management cluster

## Useful Terms

**ClusterAPI** or **CAPI** is the common system for managing Kubernetes clusters
in a declarative fashion.

**Management Cluster** is the cluster on which Sidero itself runs.
It is generally a special-purpose Kubernetes cluster whose sole responsibility
is maintaining the CRD database of Sidero and providing the services necessary
to manage your workload Kubernetes clusters.

**Sidero** is the ClusterAPI-powered system which manages baremetal
infrastructure for Kubernetes.

**Talos** is the Kubernetes-focused Linux operating system built by the same
people who bring you Sidero.
It is a very small, entirely API-driven OS which is meant to provide a reliable
and self-maintaining base on which Kubernetes clusters may run.
More information about Talos can be found at
[https://talos.dev](https://talos.dev).

**Workload Cluster** is a cluster, managed by Sidero, on which your Kubernetes
workloads may be run.
The workload clusters are where you run your own applications and infrastructure.
Sidero creates them from your available resources, maintains them over time as
your needs and resources change, and removes them whenever it is told to do so.