Move to yaml resources everywhere #612

Open
2 of 4 tasks
martinpitt opened this issue Apr 2, 2024 · 9 comments
Comments

@martinpitt
Member

martinpitt commented Apr 2, 2024

podman kube play can now create podman secrets from k8s YAML Secret resources. With that, both our OpenShift and systemd deployments can use the same input.
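For reference, a minimal sketch of the intended flow (hypothetical file name):

$ podman kube play s3-keys.yaml   # yaml contains a kind: Secret resource
$ podman secret ls                # it now shows up as a regular podman secret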

While at it, split the input secrets further:

@martinpitt martinpitt moved this to improvement in Pilot tasks Apr 2, 2024
@martinpitt
Member Author

#617 does the first part of flattening the s3-keys.

@martinpitt
Member Author

martinpitt commented Apr 8, 2024

Unfortunately, podman secrets don't understand k8s-style secrets at all. If I have a /tmp/s.yaml with

---
apiVersion: v1
kind: Secret
metadata:
  name: foo-tokens
stringData:
  github-token: "foo bar 123"
  supi: |
    first line
    second line

Then podman play kube /tmp/s.yaml works, but the secret doesn't get mounted as a directory (with the keys as files) as in k8s; it shows up as a single flat YAML file:

$ podman run -it --rm --secret=foo-tokens,target=/run/secrets/foo  quay.io/cockpit/tasks cat /run/secrets/foo
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: foo-tokens
stringData:
  github-token: foo bar 123
  supi: |
    first line
    second line

In order for this to work, you have to pick out every single key individually with env.valueFrom.secretKeyRef.{name,key}, which is awkward.
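For illustration, the per-key workaround looks roughly like this in the pod spec (hypothetical container and env variable names, keys from the example above):

spec:
  containers:
    - name: test1
      image: quay.io/cockpit/tasks
      env:
        - name: GITHUB_TOKEN
          valueFrom:
            secretKeyRef:
              name: foo-tokens
              key: github-token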

See https://docs.podman.io/en/latest/markdown/podman-kube-play.1.html

ConfigMaps have the same problem, BTW. So there goes the dream of uniform handling...

@martinpitt
Member Author

martinpitt commented Apr 8, 2024

It actually does work fine when using podman play kube for creating the pod as well:

---
apiVersion: v1
kind: Secret
metadata:
  name: foo-tokens
stringData:
  github-token: "foo bar 123"
  supi: |
    first line
    second line

---

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: test1
      image: quay.io/libpod/alpine_nginx:latest
      volumeMounts:
        - name: foo
          mountPath: /etc/foo
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: foo-tokens
        optional: false
❱❱❱ podman exec -it mypod-test1 ls -l /etc/foo
total 8
-rw-r--r--    1 root     root            11 Apr  8 15:27 github-token
-rw-r--r--    1 root     root            23 Apr  8 15:27 supi

So we could go all-in on YAML and use that everywhere. Quadlets even support .kube files.

https://www.redhat.com/sysadmin/multi-container-application-podman-quadlet
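A .kube file is just a small systemd unit pointing at the YAML; a minimal sketch (hypothetical paths and names):

# /etc/containers/systemd/mypod.kube
[Unit]
Description=mypod from Kubernetes YAML

[Kube]
Yaml=/etc/containers/mypod.yaml

[Install]
WantedBy=default.target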

@martinpitt martinpitt changed the title Move to yaml secrets everywhere Move to yaml resources everywhere Apr 8, 2024
@martinpitt
Member Author

In other words, we should kube-ify the containers first (as that works with host directory secrets) and then convert the secrets -- that way it can be broken down into multiple smaller steps.

@allisonkarlitskaya
Member

So one possibility for doing secrets, which falls short of putting each individual secret into bitwarden, would be to use Ansible Vault. We'd check the encrypted secret vault into this repository, put the encryption passphrase into bitwarden, and manually enter it on each deployment. That should be very easy to get going.
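A rough sketch of that workflow (hypothetical file and playbook names):

$ ansible-vault encrypt secrets.yml           # passphrase stored in bitwarden
$ git add secrets.yml && git commit -m 'add encrypted vault'
$ ansible-playbook deploy.yml --ask-vault-pass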

The only downside of that is that we introduce a giant encrypted blob with all of our secrets inside of it into source control. That sucks for two reasons:

  • diffs probably aren't going to be nice to look at
  • somehow, even with encryption I guess it seems "weird" to have that data in a public repo

@allisonkarlitskaya
Member

I played with this a bit more and found out a couple of things:

  • you don't actually need to create the container. Telling podman to create a pod with a volume in it is sufficient. The volume ends up in the global namespace, with the name you give it, which can then be hit by the container being started via the usual commandline interface. Unfortunately, you get the extra pod as a side-effect.
  • You can also request creation of the volume this way as part of a "deployment" but that also gets a pod created.
  • You can create volumes directly, but then there's no support for creating a volume from secrets.
  • I took a look into the code (VolumeSource) and it looks like this is only supported for volumes created as parts of pods (or deployments).
  • This general approach to initializing the contents of a volume from some file which isn't a tarball seems useful, and we could probably ask podman to add a feature for that. Failing that, we could do it ourselves via tar and a small script -or- just use the pod/deployment approach and delete the useless pod afterwards (see the sketch below).
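A sketch of that last option, reusing the combined Secret + Pod yaml from the earlier comment (volume/pod names from that example):

$ podman kube play /tmp/s.yaml    # creates the secret, the volume, and a pod
$ podman volume ls                # the volume now exists under its own name
$ podman pod rm -f mypod          # throw away the useless pod; the volume stays
$ podman run --rm -v foo:/etc/foo:ro quay.io/cockpit/tasks ls /etc/foo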

On the Bitwarden side, it is very much possible to do this with bw, but the performance situation there is pretty awful: each command-line invocation takes on the order of ~1s, which adds up pretty quickly for the way ansible wants to interact with it. There's a Rust version which is a lot faster, but it's sort of sad that we can't use the official one. We might solve that by putting all secrets into a tarball which we put into bitwarden as an attachment, but this approach doesn't seem a whole lot better than having an Ansible Vault archive in git...
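For reference, the bw side would look something like this (hypothetical item and attachment names; each call paying the ~1s startup cost):

$ bw get item ci-secrets
$ bw get attachment secrets.tar.gz --itemid <item-id> --output /tmp/secrets.tar.gz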

@martinpitt
Member Author

> you don't actually need to create the container. Telling podman to create a pod with a volume in it is sufficient.

If you mean the yaml resource, then yes.

> The volume ends up in the global namespace, with the name you give it,

Right, it turns into a standard podman volume.

> which can then be hit by the container being started via the usual commandline interface.

Do you generally not like to start the containers via a .kube file, or do you mean to do this just as an intermediate step to yaml-ify the secrets first, and the containers later?

> Unfortunately, you get the extra pod as a side-effect.

🤷‍♂️ that overhead is tiny.

> diffs probably aren't going to be nice to look at

Yes, I don't like this either -- this can't be the primary source of truth, just a transport format. So we still need to keep the secrets someplace else. That's also why I am not a 💯 fan of bitwarden -- there's no history, no commit logs with explanations, etc. (I'm not against it; this is just something I really like about having them in git.)

> somehow, even with encryption I guess it seems "weird" to have that data in a public repo

Yeah, I share the feeling. It's entirely emotional, though -- if someone can break that encryption, they can also break pretty much everything else that holds the internet together.

Thanks for your bw investigations! The 1s delay per step doesn't sound so bad, really -- we only refresh secrets once in a blue moon, and I usually run ansible with -f20 so that it parallelizes heavily. Or does that only allow one serial access at a time?

@allisonkarlitskaya
Member

> Do you generally not like to start the containers via a .kube file, or do you mean to do this just as an intermediate step to yaml-ify the secrets first, and the containers later?

For the "monitor" containers, this is fine by me, but I'm trying to imagine how this will interact with job-runner...

Is your idea to kube the monitors (providing them with the secrets) and keep the imperative podman approach, which manually mounts the secrets via --volume, using the names that it assumes are present because the monitor container is running? (Sketch at the end of this comment.)

If so, then I agree that this would be reasonable.

If you want job-runner to somehow use podman kube play to start the job containers, we're going to need some more thinking...
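To spell out that imperative approach (hypothetical volume and image names):

$ podman run --rm --volume foo-tokens:/run/secrets/foo:ro quay.io/cockpit/tasks ls /run/secrets/foo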

@martinpitt
Member Author

Yes, I only wanted the "static" deployments in yaml, at least for now. The job-runner instances are fine with podman run --volume. We only need to re-think this if/when we ever get an OpenShift with KVM support; then job-runner will want to kubectl create an actual Job object. But that's not in sight anytime soon.
