spread: add LXD VM backend #185

Open · wants to merge 12 commits into master
30 changes: 30 additions & 0 deletions README.md
@@ -23,6 +23,7 @@ Spread
[Disabling unless manually selected](#manual)
[Fetching artifacts](#artifacts)
[LXD backend](#lxd)
[LXD VM backend](#lxd-vm)
[QEMU backend](#qemu)
[Google backend](#google)
[Linode backend](#linode)
@@ -805,6 +806,35 @@ backends:

That's it. Have fun with your self-contained multi-system task runner.

The LXD backend supports setting a memory limit for the containers like so:

```
backends:
    lxd:
        memory: 1024M
        systems:
            - ubuntu-16.04:
```

<a name="lxd-vm"/>

## LXD VM backend

The LXD VM backend works in much the same way as the LXD backend, but instead
of containers it spins up LXD virtual machines.

Assuming LXD was successfully installed and configured, setting up the backend
in your project file is as trivial as:

```
backends:
    lxd-vm:
        systems:
            - ubuntu-22.04
```
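
Under the hood the backend drives the regular `lxc` client and adds the `--vm`
flag when creating instances (see the `spread/lxd.go` changes below), so a
launch is roughly equivalent to something like the following, where the image
and instance names are purely illustrative:

```
lxc launch --vm --ephemeral ubuntu:22.04 illustrative-name
```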

The image naming and resource limit rules are identical to those of the LXD
backend. By default, each VM is created with a single CPU and a 1GB memory limit.
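
For example, assuming the `memory` setting shown for the LXD backend above
applies here in exactly the same way (the value below is only illustrative),
the default can be raised like so:

```
backends:
    lxd-vm:
        memory: 2048M
        systems:
            - ubuntu-22.04
```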

<a name="qemu"/>

51 changes: 47 additions & 4 deletions spread/lxd.go
@@ -19,13 +19,29 @@ import (
)

func LXD(p *Project, b *Backend, o *Options) Provider {
-	return &lxdProvider{p, b, o}
+	return &lxdProvider{
+		project: p,
+		backend: b,
+		options: o,
+		vm:      false,
+	}
+}
+
+func LXDVM(p *Project, b *Backend, o *Options) Provider {
+	return &lxdProvider{
+		project: p,
+		backend: b,
+		options: o,
+		vm:      true,
+	}
}

type lxdProvider struct {
	project *Project
	backend *Backend
	options *Options
+
+	vm bool
}

type lxdServer struct {
@@ -111,6 +127,16 @@ func (p *lxdProvider) Allocate(ctx context.Context, system *System) (Server, err
	if !p.options.Reuse {
		args = append(args, "--ephemeral")
	}
+	if p.vm {
+		args = append(args, "--vm")
+	}
+	if p.backend.Memory > 0 {
+		mem := int(p.backend.Memory / mb)
+		args = append(args, "-c", fmt.Sprintf("limits.memory=%dMiB", mem))
+	}
+	if system.Storage != Size(0) {
+		args = append(args, "-d", fmt.Sprintf("root,size=%d", system.Storage))
+	}
	output, err := exec.Command("lxc", args...).CombinedOutput()
	if err != nil {
		err = outputErr(output, err)
@@ -128,8 +154,18 @@ func (p *lxdProvider) Allocate(ctx context.Context, system *System) (Server, err
		system: system,
	}

-	printf("Waiting for lxd container %s to have an address...", name)
-	timeout := time.After(60 * time.Second)
+	what := "container"
+	if p.vm {
+		what = "VM"
+	}
+	printf("Waiting for lxd %s %s to have an address...", what, name)
+	maxTimeout := 60 * time.Second
+	if p.vm {
+		// VM may take considerably longer to start
+		// TODO: should this be configurable?
+		maxTimeout = 180 * time.Second
+	}
+	timeout := time.After(maxTimeout)
	retry := time.NewTicker(1 * time.Second)
	defer retry.Stop()
	for {
@@ -292,8 +328,15 @@ func (p *lxdProvider) lxdLocalImage(system *System) (string, error) {
return "", err
}

// TODO use lxd image list --format=json to get all of this in one call
var stderr bytes.Buffer
cmd := exec.Command("lxc", "image", "list")
args := []string{"image", "list"}
if p.vm {
args = append(args, "type=virtual-machine")
} else {
args = append(args, "type=container")
}
cmd := exec.Command("lxc", args...)
cmd.Stderr = &stderr

output, err := cmd.Output()
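As an aside, the TODO comment above hints at gathering all of this in a single
call. A minimal sketch of that approach, assuming `lxc image list --format=json`
emits objects with `fingerprint`, `type`, and `aliases` fields (as recent LXD
releases do; the helper and type names are hypothetical, not part of this change):

```
package spread

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// lxdImageInfo mirrors just the fields this sketch needs from
// `lxc image list --format=json` output.
type lxdImageInfo struct {
	Fingerprint string `json:"fingerprint"`
	Type        string `json:"type"` // "container" or "virtual-machine"
	Aliases     []struct {
		Name string `json:"name"`
	} `json:"aliases"`
}

// lxdListImagesJSON fetches all local images of the requested kind
// (container or VM) with a single lxc invocation.
func lxdListImagesJSON(vm bool) ([]lxdImageInfo, error) {
	output, err := exec.Command("lxc", "image", "list", "--format=json").Output()
	if err != nil {
		return nil, fmt.Errorf("cannot list lxd images: %v", err)
	}
	var all []lxdImageInfo
	if err := json.Unmarshal(output, &all); err != nil {
		return nil, fmt.Errorf("cannot parse lxd image list: %v", err)
	}
	want := "container"
	if vm {
		want = "virtual-machine"
	}
	var images []lxdImageInfo
	for _, img := range all {
		if img.Type == want {
			images = append(images, img)
		}
	}
	return images, nil
}
```
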
2 changes: 1 addition & 1 deletion spread/project.go
@@ -529,7 +529,7 @@ func Load(path string) (*Project, error) {
		backend.Type = bname
	}
	switch backend.Type {
-	case "google", "linode", "lxd", "qemu", "adhoc", "humbox":
+	case "google", "linode", "lxd", "lxd-vm", "qemu", "adhoc", "humbox":
	default:
		return nil, fmt.Errorf("%s has unsupported type %q", backend, backend.Type)
	}
2 changes: 2 additions & 0 deletions spread/runner.go
@@ -81,6 +81,8 @@ func Start(project *Project, options *Options) (*Runner, error) {
		r.providers[bname] = Linode(project, backend, options)
	case "lxd":
		r.providers[bname] = LXD(project, backend, options)
+	case "lxd-vm":
+		r.providers[bname] = LXDVM(project, backend, options)
	case "qemu":
		r.providers[bname] = QEMU(project, backend, options)
	case "adhoc":
4 changes: 4 additions & 0 deletions tests/lxd-vm/checks/main/task.yaml
@@ -0,0 +1,4 @@
summary: Ensure it works.

execute: |
    echo WORKS
19 changes: 19 additions & 0 deletions tests/lxd-vm/spread.yaml
@@ -0,0 +1,19 @@
project: spread

backends:
    lxd-vm:
        systems:
            # list systems which are known to work as VMs; neither 16.04 nor
            # 18.04 works at all
            - ubuntu-20.04
            - ubuntu-22.04
            - ubuntu-24.04

path: /home/test

suites:
    checks/:
        summary: Verification tasks.


# vim:ts=4:sw=4:et
14 changes: 14 additions & 0 deletions tests/lxd-vm/task.yaml
@@ -0,0 +1,14 @@
summary: Test the lxd-vm backend.

execute: |
    # we'll run out of space on GCP systems, so run VM tests one by one
    for t in $(spread -list); do
        spread -vv "$t" &> task.out
        MATCH '^WORKS$' < task.out
        # delete all VM images (there's just one test per system, so this is
        # fine); note lxd 4.x does not support filtering by image type
        lxc image delete $(lxc image list -f csv -c 'ft' | grep VIRTUAL | cut -f1 -d,)
    done

debug: |
    cat task.out || true