Helm chart redesign for Quarkus-based runtimes #626
### Uninstalling the chart

```bash
helm uninstall --namespace polaris polaris
```

## Values
Note: this is automatically generated by helm-docs.
This is now ready for review. @MonkeyCanCode I would love to get your review on this, please :-) I know we could add more unit tests, but I wanted to get your general feeling first. We can complete test coverage little by little, imho.

Great. Nice work. I will review those later this weekend.
```yaml
# -- The root credentials to create during the bootstrap. If you don't provide credentials for the
# root principal of each realm to bootstrap, random credentials will be generated.
# Each entry in the array must be of the form: realm,clientId,clientSecret
credentials: []
```
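For illustration, an override with explicit credentials could look like the following sketch (the realm and credential values are placeholders, and `bootstrap` as the parent key is an assumption based on the bootstrap section described in this PR):

```yaml
# Hypothetical values override; all values below are placeholders
bootstrap:
  credentials:
    - "my-realm,my-client-id,my-client-secret"
```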
This is admittedly not great. I opened #878 to improve this and use secrets instead. But for now we need to stick with credentials in clear text.
helm/polaris/templates/_helpers.tpl (outdated)
```yaml
{{- if has $portNumber (values $ports) -}}
{{- fail (printf "service.ports[%d]: port number already taken: %v" $i $portNumber) -}}
{{- end -}}
{{- $_ := set $ports $port.name $portNumber }}
```
nits: lines 261-264 and 276-278 may each want 2 more spaces of indentation (they are still valid YAML either way).
```diff
-args: ['{{ include "polaris.fullname" . }}:{{ index .Values.service.ports "polaris-metrics" }}/q/health']
+args:
+  - --spider
+  - '{{ include "polaris.fullnameWithSuffix" (list . "mgmt") }}:{{ get (first .Values.managementService.ports) "port" }}/q/health/ready'
 restartPolicy: Never
```
Seems like the pod now takes some time to start up. From my local testing, this will always fail. Should we consider adding an init container that sleeps for X seconds before running the connection test? Here is a sample change:

```yaml
initContainers:
  - name: sleep
    image: busybox
    command: ['sleep', '30']
```
Hmm, it wasn't failing for me; the test pod always comes up after the Polaris pod reaches the ready state.
I don't mind changing it, but waiting 30 seconds seems like a lot. How about we poll the health endpoint X times until it succeeds?
Yes, that would be good. On my setup it was somehow taking 6-7 seconds. But yeah, good to add some retries etc.
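One way to implement such polling in the test pod could look like the sketch below (the container name, image, hostname, port, and retry counts are illustrative assumptions, not the chart's actual values):

```yaml
# Hypothetical retrying connection test; all names and values are assumptions
containers:
  - name: connection-test
    image: busybox
    command:
      - sh
      - -c
      - |
        # Poll the readiness endpoint up to 10 times, 3 seconds apart
        for i in $(seq 1 10); do
          wget --spider -q http://polaris-mgmt:8182/q/health/ready && exit 0
          sleep 3
        done
        exit 1
restartPolicy: Never
```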
helm/polaris/templates/_helpers.tpl (outdated)
```yaml
name: {{ tpl .Values.authentication.tokenBroker.secret.name . }}
items:
{{- if eq .Values.authentication.tokenBroker.type "rsa-key-pair" }}
- key: {{ tpl .Values.authentication.tokenBroker.secret.publicKey . }}
```
nits: lines 173-175 and 179-180 may each want 2 more spaces of indentation (they are still valid YAML either way).
.github/workflows/helm.yml (outdated)
```diff
@@ -99,7 +99,7 @@ jobs:
   if: steps.list-changed.outputs.changed == 'true'
   run: |
     eval $(minikube -p minikube docker-env)
-    ./gradlew :polaris-quarkus-server:assemble \
+    ./gradlew :polaris-quarkus-server:assemble :polaris-quarkus-admin:assemble \
       -Dquarkus.container-image.build=true \
       -PeclipseLinkDeps=com.h2database:h2:2.3.232
```
While testing the Helm chart, I noticed the bootstrap pod shows that the bootstrap completed. However, when trying to use the Polaris CLI against the server, it says the instance is not yet bootstrapped. Not sure if this is a known issue?
Are you using in-memory persistence? With in-memory persistence it indeed won't work; the bootstrap job can only be used with EclipseLink.
Also, H2 will not work either because it's in-memory.
@MonkeyCanCode I think we should switch our tests to use Postgres, wdyt? H2 won't give us a "real" experience and bootstrapping will be broken.
I just tested with Postgres: the bootstrap job worked and I was able to connect to Polaris afterwards.
The only annoying thing is that we will need support for purge as well. I will try to add that too.
Yes, I was using this branch for testing, and bootstrap is probably hitting the in-memory persistence for the H2 database, I think? I also agree we should use a somewhat realistic database that people may actually be using (e.g. Postgres) instead of H2 (it kind of leaves people thinking this project is not close to ready for use).
The latest main branch seems to be pretty stable (I only did some basic testing... planning to do more later this weekend). So maybe it's a good time to introduce those changes now?
> is probably hitting in-memory persistency for h2database i think?

Yup, it was using H2 in-memory.

> So maybe good to introduce those changes now?

I just pushed a commit that switches our tests to Postgres. It's working well in CI.
When doing local testing, though, if you install the chart more than once, you need to either kill the Postgres pod or pass `--set bootstrap.enabled=false` to avoid an error when bootstrapping the realm, since it was already bootstrapped previously.
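For local testing, the second option could look like the following sketch (the release name, namespace, and chart path are assumptions):

```bash
# Hypothetical reinstall that skips bootstrapping an already-bootstrapped realm
helm upgrade --install polaris helm/polaris \
  --namespace polaris \
  --set bootstrap.enabled=false
```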
```yaml
env:
{{- if .Values.storage.secret.name }}
```
Maybe use a helper function for secret to env?
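For illustration, such a helper could look like the sketch below (the template name, parameter shape, and calling convention are assumptions, not the chart's actual code):

```yaml
{{/* Hypothetical named template: renders env vars from secret keys.
     Expects a dict with "secret" (the secret name) and "vars"
     (a map of secret key -> env var name). */}}
{{- define "polaris.secretToEnv" -}}
{{- range $key, $envName := .vars }}
- name: {{ $envName }}
  valueFrom:
    secretKeyRef:
      name: {{ $.secret }}
      key: {{ $key }}
{{- end }}
{{- end -}}
```

It could then be invoked from the deployment template via `include "polaris.secretToEnv"` with a dict holding the secret name and the key-to-env mappings.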
If necessary, load the Docker images into Minikube:

```bash
eval $(minikube -p minikube docker-env)
```
nits: I use kind a lot more as opposed to minikube (both are great... kind is very lightweight and super fast). I am okay with switching to minikube; we may want to update https://github.com/apache/polaris/blob/main/run.sh as well to match.
Ah that's true. Well, let's mention both! I will add a note about Kind.
This is a major redesign of the Helm chart to make it compliant with Quarkus, while also improving many aspects of the existing chart, such as persistence secret management and the bootstrap jobs, to name a few.
## Summary of changes

### Persistence

Thanks to the improvement brought by #613, it is no longer necessary to have an init container create a `conf.jar`. From now on, the user provides a secret with their `persistence.xml`, and that file is mounted on each Polaris pod directly.
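For example, such a secret could be created like the sketch below (the secret name, namespace, and file path are assumptions):

```bash
# Hypothetical: package a local persistence.xml into a Kubernetes secret
kubectl create secret generic polaris-persistence \
  --namespace polaris \
  --from-file=persistence.xml=./persistence.xml
```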
### Enhanced services

The services for Polaris are now separated into 3 categories. Each service section has the same structure and allows configuring one or more ports.
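As a sketch, a service section following this structure might look like the following (all names and port numbers here are illustrative assumptions; only the `managementService.ports[].port` shape is visible elsewhere in this PR):

```yaml
# Hypothetical values; section and field names follow the described pattern
service:
  type: ClusterIP
  ports:
    - name: polaris-http
      port: 8181
managementService:
  ports:
    - name: polaris-mgmt
      port: 8182
```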
### Observability

New sections were added for logging, tracing, and metrics, along with the ability to create a `ServiceMonitor`.

### Bootstrap Job

The `bootstrap` section now configures the bootstrap job. Thanks to the Polaris Admin Tool introduced in #605, the jobs now use the tool to bootstrap realms. Personally, I am not convinced that this bootstrap job has huge value compared to just running the admin tool directly, but I didn't want to remove it.
### Advanced Config

A new `advancedConfig` section can be used to customize Polaris deployments in any possible way, even when the Helm chart does not expose the desired setting.
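As an illustration, an `advancedConfig` override might look like the sketch below (the exact pass-through mechanism and whether these particular Quarkus properties are honored are assumptions):

```yaml
advancedConfig:
  # Hypothetical Quarkus properties passed through to the runtime config
  quarkus.log.level: DEBUG
  quarkus.http.limits.max-body-size: 100M
```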