AtlasMigration k8s deployment issues #3232
Hi Tal, let's start with a couple of turn-off/on kinds of things and work from there ;-). Can you check the following?

Issue 1:
Hello @talsuk5, which version of the atlas-operator do you have? We already fixed the issue with the devdb's role in v0.6.3, which will use the default PG role.
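If it helps, the installed operator version can usually be read off the deployment's image tag. A sketch, assuming the default Helm install (the `atlas-operator` namespace and deployment names are assumptions; adjust to your cluster):

```shell
# Print the image (and hence the version tag) of the running operator.
# Namespace and deployment name come from the default Helm chart and
# may differ in your install.
kubectl -n atlas-operator get deployment atlas-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```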
Hi @ariga-peretz, thanks for the reply. As for querying my db, I ran `select * from pg_user;` and `select * from pg_roles;` (screenshots attached), and you can see which roles exist. But like I said, I think it's related to your inner deployment:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '2'
    deployment.kubernetes.io/revision: '2'
  creationTimestamp: '2024-11-25T13:48:17Z'
  generation: 1
  labels:
    app.kubernetes.io/created-by: controller-manager
    app.kubernetes.io/instance: migration-atlas-dev-db
    app.kubernetes.io/name: atlas-dev-db
    app.kubernetes.io/part-of: atlas-operator
    atlasgo.io/engine: postgres
    pod-template-hash: 7dfd65997c
  name: migration-atlas-dev-db-7dfd65997c
  namespace: postgres-migrator
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: migration-atlas-dev-db
      uid: 2e1c7c24-07dc-4590-91e6-e2a2af0cecb2
  resourceVersion: '38134509'
  uid: 73c76563-bdd9-4b92-a666-f4d539499dcc
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/created-by: controller-manager
      app.kubernetes.io/instance: migration-atlas-dev-db
      app.kubernetes.io/name: atlas-dev-db
      app.kubernetes.io/part-of: atlas-operator
      atlasgo.io/engine: postgres
      pod-template-hash: 7dfd65997c
  template:
    metadata:
      annotations:
        atlasgo.io/conntmpl: postgres://root:pass@localhost:5432/postgres?sslmode=disable
        kubectl.kubernetes.io/restartedAt: '2024-11-25T13:48:17Z'
      creationTimestamp: null
      labels:
        app.kubernetes.io/created-by: controller-manager
        app.kubernetes.io/instance: migration-atlas-dev-db
        app.kubernetes.io/name: atlas-dev-db
        app.kubernetes.io/part-of: atlas-operator
        atlasgo.io/engine: postgres
        pod-template-hash: 7dfd65997c
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: postgres
            - name: POSTGRES_USER
              value: root
            - name: POSTGRES_PASSWORD
              value: pass
          image: postgres:latest
          imagePullPolicy: Always
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
              protocol: TCP
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            runAsUser: 999
          startupProbe:
            exec:
              command:
                - pg_isready
            failureThreshold: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
```

Also, is there any way I can change the inner deployment image to not be `latest`, or change the pull policy?
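For completeness, the `pg_roles` check above can also be run from inside the dev-db pod itself. A sketch, using the deployment and namespace names from the manifest above (`root` is the user the operator configured via `POSTGRES_USER`, so we connect as it):

```shell
# List the roles the dev database actually has, connecting as the
# user the operator created (POSTGRES_USER=root in the pod spec).
kubectl -n postgres-migrator exec deploy/migration-atlas-dev-db -- \
  psql -U root -d postgres -c 'select rolname from pg_roles;'
```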
Hi @giautm, I upgraded the operator to the version you mentioned.
Hi Atlas team!

I'm trying to set up an `AtlasMigration` in my k8s cluster as per this guide. My yaml definition looks like this:

The resource then creates a deployment of the dev db, but it fails inside with:

Looking at the operator logs:

I already talked to your support and they lifted the run-limit limitation for the trial period.

As for the error `FATAL: role "postgres" does not exist` that is coming from the dev db pod, I would love to get some guidance on how to resolve this issue.

Thanks,
Tal
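For what it's worth, the `FATAL: role "postgres" does not exist` message is consistent with how the official `postgres` image bootstraps: when `POSTGRES_USER` is set (here, to `root`), only that role is created, and no `postgres` superuser role exists. A minimal local repro, assuming Docker is available (the container name is made up):

```shell
# Start a throwaway postgres with the same env as the dev-db pod.
docker run -d --name devdb-repro \
  -e POSTGRES_DB=postgres -e POSTGRES_USER=root -e POSTGRES_PASSWORD=pass \
  postgres:16
sleep 5  # give initdb time to finish

# Connecting as "postgres" fails, because that role was never created:
docker exec devdb-repro psql -U postgres -d postgres -c 'select 1;' || true
# -> FATAL:  role "postgres" does not exist

# Connecting as the configured user works:
docker exec devdb-repro psql -U root -d postgres -c 'select 1;'

docker rm -f devdb-repro
```

So any client in the cluster that defaults to the `postgres` role (rather than reading the operator's connection template) would trip over this.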