install questions #415
Comments
We are looking into this issue; we are tracking it in #365.

@marsty339 could you check if this is still the case when using tobs?

We removed the Kubernetes version constraint in tobs.
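For context, a Helm chart can restrict which Kubernetes versions it installs on via the kubeVersion field in its Chart.yaml; helm refuses to install when the cluster falls outside that range. A quick way to confirm the published chart no longer carries such a constraint (a sketch using standard helm commands; the repo name matches the commands below):

# print the chart metadata and look for a kubeVersion field
helm repo update
helm show chart timescale/tobs | grep -i kubeversion || echo "no kubeVersion constraint found"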
I cannot reproduce this. When doing the following:

helm repo add timescale https://charts.timescale.com/
helm repo update
helm install --wait --timeout 10m test timescale/tobs

I get the following, correct, output from the installation:

W0704 11:55:15.004749 673465 warnings.go:70] spec.template.spec.containers[0].env[2].name: duplicate name "TOBS_TELEMETRY_INSTALLED_BY"
W0704 11:55:15.004768 673465 warnings.go:70] spec.template.spec.containers[0].env[3].name: duplicate name "TOBS_TELEMETRY_VERSION"
NAME: test
LAST DEPLOYED: Mon Jul 4 11:54:55 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
#######################################################################################################################
👋🏽 Welcome to tobs, The Observability Stack for Kubernetes
✨ Auto-configured and deployed:
🔥 Kube-Prometheus
🐯 In-cluster TimescaleDB
🤝 Promscale
📈 Grafana
🚀 OpenTelemetry
#######################################################################################################################
🔥 PROMETHEUS NOTES:
#######################################################################################################################
Prometheus can be accessed via port 9090 on the following DNS name from within your cluster:
tobs-kube-prometheus-prometheus.default.svc
Get the Prometheus server URL by running these commands in the same shell:
kubectl --namespace default port-forward service/tobs-kube-prometheus-prometheus 9090:9090
#######################################################################################################################
🔥 ALERTMANAGER NOTES:
#######################################################################################################################
The Alertmanager can be accessed via port 9093 on the following DNS name
from within your cluster:
tobs-kube-prometheus-alertmanager.default.svc
Get the Alertmanager URL by running these commands in the same shell:
kubectl --namespace default port-forward service/tobs-kube-prometheus-alertmanager 9093:9093
WARNING! Persistence is disabled on AlertManager. You will lose your data when
the AlertManager pod is terminated.
#######################################################################################################################
🐯 TIMESCALEDB NOTES:
#######################################################################################################################
TimescaleDB can be accessed via port 5432 on the following DNS name
from within your cluster:
test.default.svc
To get your password for superuser run:
# superuser password
PGPASSWORD_POSTGRES=$(
kubectl get secret --namespace default \
test-credentials \
-o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" |\
base64 --decode \
)
echo $PGPASSWORD_POSTGRES
# admin password
PGPASSWORD_ADMIN=$(\
kubectl get secret --namespace default \
test-credentials \
-o jsonpath="{.data.PATRONI_admin_PASSWORD}" |\
base64 --decode \
)
echo $PGPASSWORD_ADMIN
To connect to your database, choose one of these options:
1. Run a postgres pod and connect using the psql cli:
# login as superuser
kubectl run -it --rm psql --image=postgres --env "PGPASSWORD=$PGPASSWORD_POSTGRES" --command --\
psql -U postgres -h test.default.svc postgres
# login as admin
kubectl run -it --rm psql --image=postgres --env "PGPASSWORD=$PGPASSWORD_ADMIN" --command --\
psql -U admin -h test.default.svc postgres
2. Directly execute a psql session on the master node
MASTER_POD=$(\
kubectl get pod -o name --namespace default -l release=test,role=master \
)
kubectl exec -it --namespace default ${MASTER_POD} -- psql -U postgres
#######################################################################################################################
🚀 OPENTELEMETRY NOTES:
#######################################################################################################################
The OpenTelemetry collector is deployed to collect traces.
OpenTelemetry collector can be accessed with the following DNS name from within your cluster:
test-opentelemetry-collector.default.svc
#######################################################################################################################
📈 GRAFANA NOTES:
#######################################################################################################################
The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
test-grafana.default.svc
You can access grafana locally by executing:
kubectl --namespace default port-forward service/test-grafana 8080:80
Then you can point your browser to http://127.0.0.1:8080/.
Grafana persistence is enabled, and you did an upgrade. If you don't have the password
for 'admin', it cannot be retrieved again; you need to reset it (see the next paragraph).
To reset the admin user password you can use grafana-cli from inside the pod by executing:
GRAFANA_POD="$(kubectl get pod -o name --namespace default -l app.kubernetes.io/name=grafana)"
kubectl exec -it ${GRAFANA_POD} -c grafana -- grafana-cli admin reset-admin-password <password-you-want-to-set>
🚀 Happy observing!

If possible, I would recommend removing the namespace in which tobs is installed and starting from scratch. However, if this is not possible, I recommend manually removing the leftover resources.
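A minimal sketch of that clean reinstall, assuming the release is named tobs and was originally installed into the default namespace (adjust both to your setup):

# remove the release
helm uninstall tobs --namespace default

# list anything the uninstall left behind (never delete the default namespace itself)
kubectl get all --namespace default | grep tobs

# reinstall into a dedicated namespace, so a later cleanup can delete the whole namespace
kubectl create namespace tobs
helm install --wait --timeout 10m tobs timescale/tobs --namespace tobs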
This issue went stale because it was not updated in a month. Please consider updating it to improve the quality of the project.

This issue was closed because it has been stalled for 30 days with no activity.
When I run helm install --wait --timeout 10m tobs timescale/tobs, I get:

Error: no Secret with the name "tobs-certificate" found

There is also no opentelemetrycollectors CR. How can I fix this?

When I delete the helm release tobs, the pod tobs-opentelemetry-collector-5f5889956d-8nbb2 is still being created...
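Some diagnostics that may help narrow this down (a sketch, assuming the release was installed into the default namespace; resource names are taken from the error messages above):

# does the secret the chart complains about exist?
kubectl get secret tobs-certificate --namespace default

# is the OpenTelemetryCollector custom resource (and its CRD) present?
kubectl get crd opentelemetrycollectors.opentelemetry.io
kubectl get opentelemetrycollectors --all-namespaces

# what still owns the collector pod after the release was deleted?
kubectl get deployment,replicaset --namespace default | grep opentelemetry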