InfluxDB persistence after helm install #191

Open
pfeodrippe opened this issue Apr 27, 2017 · 4 comments

pfeodrippe commented Apr 27, 2017

Can I enable InfluxDB persistence after running helm install deis/workflow?
I read the docs, but I'm not sure what would happen to the existing deployment.

@jchauncey (Member)

Yes, you can do that. It would create a persistent volume and bind it to the InfluxDB pod. All data currently stored in the pod would be lost, though, since InfluxDB would need to restart to bind the volume.
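
A quick way to confirm that the claim and volume actually get created and bound after the change (standard kubectl commands; the deis namespace is taken from the rest of this thread):

$ kubectl -n deis get pvc    # the new InfluxDB claim should show up as Bound
$ kubectl get pv             # with a matching persistent volume listed here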

jchauncey self-assigned this Apr 27, 2017
@jchauncey (Member)

I think you should be able to do something like helm upgrade <release name> <chart path> --set influxdb.persistence=true

@pfeodrippe (Author) commented Apr 27, 2017

I did what you said, but it's taking a long time to terminate the old pods and Grafana is not showing anything. Maybe it would be better to delete the release and reinstall it? The influxapi IP should stay the same, right? It's a Service, so it should discover its pods.

helm upgrade amber-bobcat deis/workflow -f ./values.yaml \
  --set global.storage=s3,s3.accesskey=$AWS_ACCESS_KEY_ID,s3.secretkey=$AWS_SECRET_ACCESS_KEY,s3.region=us-east-1,s3.registry_bucket=registry-deis-staging,s3.database_bucket=database-deis-staging,s3.builder_bucket=builder-deis-staging,monitor.influxdb.persistence.enabled=true,monitor.grafana.persistence.enabled=true


$ kubectl -n deis get po -w
...
deis-monitor-grafana-1790189660-jsgg8    1/1       Running       0          10m
deis-monitor-grafana-541875647-nj7vm     1/1       Terminating   0          3d
deis-monitor-influxdb-1032455162-26jft   1/1       Running       0          10m
deis-monitor-influxdb-2881701064-h821m   1/1       Terminating   0          3d
...


$ kubectl -n deis get services
...
deis-monitor-influxapi   100.64.181.59    <none>             80/TCP                                                     10d
deis-monitor-influxui    100.64.188.135   <none>             80/TCP
...
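
For reference, one way to confirm that the deis-monitor-influxapi Service has picked up the new InfluxDB pod once the old one finishes terminating (standard kubectl commands; the service name and namespace are taken from the output above):

$ kubectl -n deis get endpoints deis-monitor-influxapi    # should list the IP of the new influxdb pod
$ kubectl -n deis describe svc deis-monitor-influxapi     # shows the selector and current endpoints together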

@jchauncey (Member)

If you are OK with deleting the release and reinstalling, then yeah, I would try that. I've recently been working on the charts and have done several no-persistence-to-persistence upgrades without noticing that issue. But that was on a branch of some stuff I was testing, so it may not work that way on master.
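
A minimal sketch of the delete-and-reinstall path, assuming Helm 2 syntax (current at the time of this thread) and reusing the release name, namespace, and persistence flags from the upgrade command above; the remaining s3.* --set flags are omitted here for brevity:

$ helm delete amber-bobcat --purge
$ helm install deis/workflow --namespace deis -f ./values.yaml \
    --set global.storage=s3,monitor.influxdb.persistence.enabled=true,monitor.grafana.persistence.enabled=true
  # plus the same s3.* --set flags used in the earlier upgrade command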
