{{- /*gotype: main.TemplateData*/ -}}
# CloudControl ☁️ 🧰
The cloud engineer's toolbox.
## Introduction
*CloudControl* is a [Docker](https://docker.com)-based configuration environment that bundles the tools required to manage modern cloud infrastructures, preconfigured and ready to use.
The toolbox comes in different "flavours" depending on what cloud you are working in.
Currently supported cloud flavours are:
{{- range .Flavours }}
* {{ .Title }} [{{ $sep := "" }}{{ range .Platforms }}{{ $sep }}{{ . }}{{ $sep = ", " }}{{ end }}]
{{- end }}
The following features and tools are supported:
{{- range .Features}}
* {{ if .Icon }}{{.Icon}}{{ end }} {{ .Title -}}{{- if .Deprecation }} ⚠️ Deprecated: {{ .Deprecation }}{{- end -}}
{{ end }}
## Table of contents
* [Usage](#usage)
* [Using Kubernetes](#using-kubernetes-preview)
* [FAQ](#faq)
* [Flavours](#flavours)
{{- range $name, $flavour := .Flavours }}
* [{{ $name }}](#{{ $name }})
{{- end }}
* [Features](#features)
{{- range $name, $feature := .Features }}
* [{{ $feature.Title }}](#{{ $name }})
{{- end }}
* [Development](#development)
* [Building](#building)
## Usage
*CloudControl* works best with docker-compose. Check out the `sample` directory of a flavour for a sample
compose file and convenience scripts. The toolbox includes a small web server written in Go with a Vue.js client, dubbed
"CloudControlCenter", which serves as a status screen. It listens on port 8080 inside the container.
Copy the compose file and configure it to your needs. Check below for configuration options per flavour and feature.
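As an illustration, a stripped-down compose file might look like this sketch (modeled on the snippets later in this document; the files in the `sample` directories are more complete and authoritative):

```
version: "3"
services:
  cli:
    image: "dodevops/cloudcontrol-azure:latest"
    ports:
      # CloudControlCenter status screen inside the container
      - "8080:8080"
    environment:
      - "FEATURES=terraform:1.1.9"
```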
Run `init.sh`. This script basically just runs `docker-compose up -d` and tells you the URL for CloudControlCenter.
Open it and wait for *CloudControl* to finish initializing.
The initialization process downloads and configures the additional tools and completes with a message when it's done.
It runs each time the stack is recreated.
After the initialization process you can simply run `docker-compose exec cli /usr/local/bin/cloudcontrol run` to jump
into the running container and work with the installed features.
If you need to change any of the configuration environment variables, rerun the init script afterwards to apply
the changes. Remember that *CloudControl* needs to reinitialize for this.
## Configuring features
There are two ways to configure a feature and the version it should use. The first way is to use the given
`USE_[feature name]=yes` environment variable and specifying the version with `[FEATURE NAME]_VERSION=[version]`.
If there are multiple features configured, this can get a bit messy. Another approach is to use the `FEATURES`
environment variable and list the features and optionally the version like this:
```
FEATURES=kubernetes helm:3.5.1 terraform:1.1.9
```
This would install version 3.5.1 of Helm and version 1.1.9 of Terraform. (Kubernetes uses the flavour's provided
version of kubectl, e.g. installed using `az aks install-cli`.)
**Note**: Please see the feature documentation below to check whether a feature supports specifying a version string. All version
strings need to be provided in semver format (i.e. 1.2.3); the feature installers take care of prefixes for
download URLs, if required.
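Both styles can be expressed in the `environment` section of the compose file, for example (feature names taken from the example above):

```
environment:
  # style 1: one variable pair per feature
  - "USE_helm=yes"
  - "HELM_VERSION=3.5.1"
  # style 2: one FEATURES variable for everything
  - "FEATURES=kubernetes helm:3.5.1 terraform:1.1.9"
```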
## Using Kubernetes (Preview)
*CloudControl* is designed to run on a local machine. It requires the following Kubernetes features to work:
* host path volumes
* host based networking
Some Kubernetes distributions such as [Rancher Desktop](https://rancherdesktop.io/) support this and can be used to
run *CloudControl*.
The `sample` directories of each flavour provide an example Kubernetes configuration based on a deployment and a
service. They were preliminarily tested on Rancher Desktop.
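A rough structural sketch of such a configuration follows (namespace, deployment and service; the image name, mount path and service type here are illustrative assumptions, the actual files in the `sample` directories are authoritative):

```
apiVersion: v1
kind: Namespace
metadata:
  name: myproject
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cli
  namespace: myproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cli
  template:
    metadata:
      labels:
        app: cli
    spec:
      containers:
        - name: cli
          image: dodevops/cloudcontrol-azure:latest
          ports:
            # CloudControlCenter
            - containerPort: 8080
          volumeMounts:
            - name: project
              mountPath: /home/cloudcontrol/project
      volumes:
        # host path volume as required by CloudControl
        - name: project
          hostPath:
            path: /path/to/your/project
---
apiVersion: v1
kind: Service
metadata:
  name: cli
  namespace: myproject
spec:
  type: NodePort
  selector:
    app: cli
  ports:
    - port: 8080
      targetPort: 8080
```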
Modify them to your local requirements and then run
```
kubectl apply -f k8s.yaml
```
to apply them.
This will create a new namespace for your project, with a deployment and a service in it. Check
`kubectl get -n [project] pod` to watch the progress until a cli pod has been created.
Use `kubectl get -n [project] svc cli` to see the bound ports for the cli service and use your browser to connect
to the *CloudControlCenter* instance.
After the initialization is done, use `kubectl -n [project] exec -it deployment/cli -- /usr/local/bin/cloudcontrol run`
to enter *CloudControl*.
*Warning*: This implementation is currently a preview feature and hasn't been tested thoroughly. It depends heavily on
the Kubernetes distribution's proper support for host-based volumes and networking. Please refer to the
documentation and support channels of your Kubernetes distribution if something isn't working.
## FAQ
### What does an error message like "no matching manifest for linux/arm64/v8 in the manifest list entries" mean?
Apparently you're using *CloudControl* on a system for which no specific image exists. Some cloud providers have not
yet provided base images for all architectures (e.g. the Apple ARM-based processors). See the list of flavours above
for the available platforms per flavour.
As a workaround, you can use the [`platform`](https://docs.docker.com/compose/compose-file/compose-file-v2/#platform)
parameter for docker-compose or the `--platform` parameter for `docker run` to specify a compatible architecture
(e.g. linux/amd64 on Apple ARM-based machines).
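For example, in a compose file (using the image name that appears elsewhere in this document):

```
services:
  cli:
    image: "dodevops/cloudcontrol-azure:latest"
    platform: "linux/amd64"
```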
### How can I add an informational text for users of *CloudControl*?
If you want to display a *custom login message* when users enter the container, set the environment variable `MOTD`
to that message. If you want to display the default login message as well, also
set the environment variable `MOTD_DISPLAY_DEFAULT` to *yes*.
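For example (the message text is just an illustration):

```
environment:
  - "MOTD=Scheduled maintenance on Friday!"
  - "MOTD_DISPLAY_DEFAULT=yes"
```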
### How can I forward ports to the host?
If you'd like to forward traffic into a cluster using `kubectl port-forward` you can do the following:
* Add a `ports` key to the cli service in your docker-compose file to forward a free port on your host to a defined
  port in your container (see the snippet after this list). The docker-compose files in the sample directories already use port 8081 for this.
* Inside *CloudControl*, check the IP of the container:
```
bash-5.0$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:02
inet addr:172.21.0.2 Bcast:172.21.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:53813 errors:0 dropped:0 overruns:0 frame:0
TX packets:20900 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:75260363 (71.7 MiB) TX bytes:2691219 (2.5 MiB)
```
* Use the IP address of the container as the bind address for `kubectl port-forward` to forward traffic
  from the previously defined container port to a service on its port (e.g. port 8081 to the
  service my-service listening on port 8080):
```
kubectl port-forward --address 172.21.0.2 svc/my-service 8081:8080
```
* Check which host port Docker bound to the container port you set up (e.g. 8081):
```
docker-compose port cli 8081
```
* Connect to localhost:[host port] on your host
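The `ports` entry mentioned in the first step could look like this; publishing just the container port lets Docker pick a free host port, which you then look up with `docker-compose port`:

```
services:
  cli:
    ports:
      - "8081"
```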
### How to set up command aliases
If you'd like to set up aliases to save some typing, you can use the *run* feature. Run your container with these
environment variables:
* `USE_run=yes`: Set up the run feature
* `RUN_COMMANDS=alias firstalias=command;alias secondalias=command`: Set up some aliases
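In a compose file this could look like the following (the aliases themselves are just illustrations):

```
environment:
  - "USE_run=yes"
  - "RUN_COMMANDS=alias tf=terraform;alias k=kubectl"
```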
### How can I share my SSH-keys with the *CloudControl* container
First, mount your .ssh directory into the container at /home/cloudcontrol/.ssh.
Also, to avoid entering your passphrase every time you use the key, you should mount the SSH agent socket into the
container and set the environment variable SSH_AUTH_SOCK to that path. *CloudControl* will automatically fix the
permissions of that file so the *CloudControl* user can use it.
Here are snippets for your docker-compose file for convenience:
```
(...)
volumes:
  - "[Path to .ssh directory]:/home/cloudcontrol/.ssh"
  # for Linux:
  - "${SSH_AUTH_SOCK}:/ssh-agent"
  # for macOS:
  - "/run/host-services/ssh-auth.sock:/ssh-agent"
environment:
  - "SSH_AUTH_SOCK=/ssh-agent"
```
### How to identify Terraform state locks by other *CloudControl*-users
By design, *CloudControl* uses a fixed user named "cloudcontrol", so Terraform state lock
messages look like this:
```
Error: Error locking state: Error acquiring the state lock: storage: service returned error: StatusCode=409, ErrorCode=LeaseAlreadyPresent, ErrorMessage=There is already a lease present.
RequestId:56c21b95-501e-0096-7082-41fa0d000000
Time:2021-05-05T07:41:25.9164547Z, RequestInitiated=Wed, 05 May 2021 07:41:25 GMT, RequestId=56c21b95-501e-0096-7082-41fa0d000000, API Version=2018-03-28, QueryParameterName=, QueryParameterValue=

Lock Info:
  ID:        a1cef2cc-fec4-1765-4da8-d068a729ba7e
  Path:      path/terraform.tfstate
  Operation: OperationTypeApply
  Who:       cloudcontrol@5c47a37f920b
  Version:   0.12.17
  Created:   2021-05-05 07:38:01.188897776 +0000 UTC
  Info:
```
From this output it's hard to identify which other *CloudControl* user may have acquired the lock. The system
user can't be changed, but it's possible to set a better hostname than the one Docker autogenerated.
This docker-compose snippet shows how to set one:
version: "3"
services:
cli:
image: "dodevops/cloudcontrol-azure:latest"
hostname: "[TODO yourname]"
volumes:
(...)
If you set the hostname in that snippet to "alice", the state lock will now look like this:
```
Error: Error locking state: Error acquiring the state lock: storage: service returned error: StatusCode=409, ErrorCode=LeaseAlreadyPresent, ErrorMessage=There is already a lease present.
RequestId:56c21b95-501e-0096-7082-41fa0d000000
Time:2021-05-05T07:41:25.9164547Z, RequestInitiated=Wed, 05 May 2021 07:41:25 GMT, RequestId=56c21b95-501e-0096-7082-41fa0d000000, API Version=2018-03-28, QueryParameterName=, QueryParameterValue=

Lock Info:
  ID:        a1cef2cc-fec4-1765-4da8-d068a729ba7e
  Path:      path/terraform.tfstate
  Operation: OperationTypeApply
  Who:       cloudcontrol@alice
  Version:   0.12.17
  Created:   2021-05-05 07:38:01.188897776 +0000 UTC
  Info:
```
### I get an error "repomd.xml signature could not be verified for kubernetes" when using Kubernetes in the AWS flavour
*CloudControl* uses the
[official guide to install kubectl on an RPM-based system](https://kubernetes.io/de/docs/tasks/tools/install-kubectl/).
However, Google seems to
[regularly have problems with its key-signing in the used repository](https://github.com/kubernetes/kubernetes/issues/60134),
so we added a workaround for this problem. If you add the environment variable `AWS_SKIP_GPG=1` to your
docker-compose.yaml, the installation will ignore an invalid GPG key.
*Please note*, though, that this weakens the security of the system and should not be enabled permanently.
### When I try to start *CloudControl*, it keeps exiting with "./ccc: exit status 1". What can I do?
Use the `docker logs` command with the failed container to see the complete log output. You can get more detail by using the
`DEBUG_[feature]` options or by adding the environment variable `DEBUG_FLAVOUR` to turn on the debug log for the flavour
installation.
If you are really stuck, you can make the container keep running by setting `CONTINUE_ON_ERROR=yes` as an
environment variable in the docker-compose file. Then you can debug inside the running container.
## Flavours
{{- range $name, $flavour := .Flavours }}
### <a id="{{ $name }}"></a> {{ $name }}
{{ $flavour.Description }}
{{ if $flavour.Configuration }}#### Configuration
{{ range $flavour.Configuration }}
* {{ indent 2 . | trim }}
{{- end }}
{{- end }}
{{- end -}}
## Features
{{- range $name, $feature := .Features }}
### <a id="{{ $name }}"></a> {{ $feature.Title }}{{- if $feature.Deprecation }} ⚠️ Deprecated: {{ $feature.Deprecation }}{{- end }}
{{ $feature.Description }}
#### Configuration
* USE_{{ trimPrefix "_" $name }}: Enable this feature (or use the FEATURES variable instead)
{{- if $feature.RequiresVersion }}
* {{ $name | upper }}_VERSION (required): Version to install (or use the FEATURES variable instead)
{{- end }}
* DEBUG_{{ trimPrefix "_" $name }}: Debug this feature
{{- range $feature.Configuration }}
* {{ indent 2 . | trim }}
{{- end }}
{{ end }}
## Development
*CloudControl* supports a decoupled development of features and flavours.
### Features
If you're missing a feature, just fork this repository, copy the feature template from features/.template into a
new subfolder, check out the comments in the example files, and modify them to your needs.
These files make up a feature:
* `feature.yaml`: A descriptor for your feature with a title, a description and configuration notes
* `install.sh`: A shell script that is run by CloudControlCenter and should install everything you need
for your new feature
* `motd.sh`: (optional) If you want to show some information to the users upon login, put it here.
And optionally, but recommended: [integration tests](#integration-testing) in a `goss` folder.
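As a rough orientation, a `feature.yaml` might look like the following sketch (field names inferred from the configuration notes in this document; the template in features/.template is authoritative):

```
title: "My feature"
description: |
  Installs mytool, a hypothetical example tool.
requiresVersion: true
configuration:
  - "MYTOOL_SOMEOPTION (required): An option mytool needs"
```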
### Flavours
If you need another flavour (aka cloud provider), add a new subdirectory under "flavour" and add a flavour.yaml describing
your flavour the same way as a feature. For the rest of the files, please check out existing flavours for details. Please
include a sample configuration for your flavour to make it easier for other people to work with it.
### Configuration best practices
The `feature.yaml` is a descriptor file used to automatically create this documentation. It includes a "configuration"
key that should be used to inform the user of ways to configure the feature. Usually, this is done using
environment variables.
It is recommended to use prefixed variables for the feature. For example, when creating a feature called "myfeature",
use environment variables prefixed with "MYFEATURE_" to circumvent the problem of accidentally sharing configuration
variables with another feature or a flavour.
Additionally, please add enough information to the configuration array of your feature so the user knows what values
to set for each environment variable. If a configuration option is required, state this in the
environment declaration using `(required)` after its name, check in your installation script whether the variable is set,
and abort if it is not (see the sketch below).
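A minimal sketch of such a check in an install script (the variable name is hypothetical):

```
if [ -z "${MYFEATURE_SOMEOPTION}" ]; then
  echo "MYFEATURE_SOMEOPTION is required"
  exit 1
fi
```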
If your feature needs a version specification, set "requiresVersion" to true in the feature descriptor. This will
enable the use of an environment variable `[FEATURE NAME]_VERSION`. This variable is also filled if the *CloudControl*
user uses the FEATURES-variable approach to enable features.
### Installer utilities
In your install script, you can source the utils library:

```
. /feature-installer-utils.sh
```
#### execHandle
Installation scripts usually echo some kind of progress, execute something, and have to check for errors. The command
`execHandle` does all this in a one-liner:

```
execHandle "Progress message" command
```
This will print "Progress message...", run the command and, if it exits with a non-zero status code, print
the output of the command and exit with status code 1.
Using this makes installer scripts much shorter and easier to maintain.
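A hypothetical excerpt from an install script using it (tool name, version variable and download URL are illustrative only):

```
. /feature-installer-utils.sh

# download a hypothetical tool and make it executable
execHandle "Downloading mytool" curl -fsSL -o /usr/local/bin/mytool "https://example.com/mytool-${MYTOOL_VERSION}"
execHandle "Making mytool executable" chmod +x /usr/local/bin/mytool
```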
### Integration testing
To validate that your feature installs correctly on a target flavour, [goss](https://github.com/goss-org/goss) is used.
goss can test various things you specify in a YAML-formatted file. The *CloudControl* test runner expects a subdirectory
`goss` in your feature, which can hold these files:
* goss.yaml: A goss file containing tests specific to your feature (a minimal sketch follows this list). See the
  [goss manual](https://github.com/goss-org/goss/blob/master/docs/manual.md) for the available tests. You can also
  use [template](https://github.com/goss-org/goss/blob/master/docs/manual.md#templates) directives to filter
  specific tests only for specific flavours by using e.g. `\{\{ if eq .Env.FLAVOUR "aws" }}(... aws tests)\{\{ end }}`
* .env: (optional) A file containing environment variables for the automatic tests. Can be empty. **Please do not put
  any sensitive information in this file!**
* .env.[flavour]: (optional) An .env file that is specifically used for the given flavour
* .ignore-integration: (optional) Ignore this test for the final integration test (for example if it tests
  a different shell than the one used in the integration test)
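A minimal sketch of a goss.yaml for a hypothetical feature that installs a single binary could look like this:

```
file:
  /usr/local/bin/mytool:
    exists: true
    filetype: file
command:
  mytool --version:
    exit-status: 0
```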
This subdirectory is mounted as /goss-sup into your container. Additionally, another directory which contains
flavour-specific supplemental data (such as access keys) is mounted as /flavour. You can use these two directories
in environment variables and tests to build the required environment for your test.
**Warning**: When doing an integration test of all features, the test runner copies the contents of all supplemental
paths into one directory. Make sure to provide unique filenames for supplemental files for your feature, so they're not
overwritten by a file from another feature in that stage.
The test runner runs tests from all subdirectories starting with "goss", so you can add multiple directories to test
multiple variations of your feature.
### Features for specific flavours ###
If your feature only supports specific flavours, add a `test` key to your `feature.yaml` and, under that, a `flavours`
key listing the supported flavours. In this example, the feature "some flavour" will only be tested with the gcloud
flavour:
title: "some flavour"
test:
flavours:
- gcloud
## Building
Build a flavour container image with the base of the repository as the build context like this:

```
build.sh [tag] [flavour]
```

To build all flavours with the same tag, use

```
build.sh [tag]
```
## Testing
To run the test suite for a specific flavour, you need to create a local directory that holds flavour-specific data
(e.g. keys for authentication) and optionally a `.env` file with flavour-specific environment variables. This is called
a "testbed" directory.
First, you need to compile the test runner:
```
docker run --rm -e GOOS=[os, e.g. darwin, linux, windows] -e GOARCH=[architecture, e.g. arm64, amd64] -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.19-alpine go build -o test-features cmd/tests/test-features
```
After that, download the latest goss binary for the target architecture you will test (linux/amd64 or linux/arm64) from
the [Goss site](https://github.com/goss-org/goss) and put it somewhere local.
Once that is done, run the tests like this:

```
./test-features -f [flavour] -i [image:tag] -t [path to testbed directory] -p [test architecture, e.g. linux/amd64] -g [path to the goss binary]
```
This will run the tests of all features that supply a test suite one by one and, if all succeed, will test all
features together for integration testing. Check out `test-features --help` for other options.
### Testing for failing configurations
If you'd like to test whether a specific configuration fails to install, create a goss subdirectory and only put a file
named `.will-fail` into it.
When the test runner encounters such a file, it checks that CloudControl fails to complete initialization.
You can add a regular expression pattern to `.will-fail` to test whether the container or command output matches it.
### Unstable tests
As we're dealing with a lot of moving targets in the features, a test might sometimes be unreliable. For these
situations we support a `.might-fail` file. Just add it as a text file to the test suite subdirectory and put some text
into it describing the problem. A failed test then won't fail the test suite; instead, the description will be shown.
### Test debugging
To check why a test failed, use the `-l` parameter to enable debug logging. Additionally, you can use the `-n` parameter
to select a specific feature to test and the `-x` parameter to stop testing when one test fails.
When a test fails, the test container will not be removed automatically (unless you specified the `-c` parameter), so
you can inspect the failing container as well.
## Building documentation ##
To rebuild this documentation, first compile the documentation maker:
```
docker run --rm -e GOOS=[os, e.g. darwin, linux, windows] -e GOARCH=[architecture, e.g. arm64, amd64] -v "$PWD":/usr/src/myapp -w /usr/src/myapp golang:1.19-alpine go build cmd/doc/mkdoc.go
```
Then run it to rebuild README.md based on README.md.gotmpl:
```
./mkdoc
```
## Workflows
This repository includes different workflows to test and automate PRs and pushes. The following workflows are used:
```mermaid
flowchart TD
A[Every Push] --> B[Update documentation]
D[Every PR] --> E[Check commits]
D --> F[Run Testsuite]
G[Push to Main] --> H[Generate Changelog and Release]
G --> C
I[Push to Develop] --> C[Build images]
    click B "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/docs.yml" "Docs workflow"
    click C "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/image.yml" "Image workflow"
    click E "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/check_commits.yml" "Check workflow"
    click F "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/test.yml" "Test workflow"
    click H "https://github.com/dodevops/cloudcontrol/blob/develop/.github/workflows/release.yml" "Release workflow"
```