Argo CD can't delete an app if it cannot generate manifests. You need to either:

- Reinstate/fix your repo.
- Delete the app using `--cascade=false` and then manually delete the resources.
See Diffing documentation for reasons resources can be OutOfSync, and ways to configure Argo CD to ignore fields when differences are expected.
Argo CD provides health checks for several standard Kubernetes types. The `Ingress`, `StatefulSet` and `SealedSecret` types have known issues which might cause the health check to return `Progressing` instead of `Healthy`.
- `Ingress` is considered healthy if the `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`. Some ingress controllers (contour, traefik) don't update the `status.loadBalancer.ingress` field, which causes the `Ingress` to be stuck in the `Progressing` state forever.
- `StatefulSet` is considered healthy if the value of the `status.updatedReplicas` field matches the `spec.replicas` field. Due to Kubernetes bug kubernetes/kubernetes#68573, `status.updatedReplicas` is not populated. So unless you run a Kubernetes version which includes the fix kubernetes/kubernetes#67570, the `StatefulSet` might stay in the `Progressing` state.
- Your `StatefulSet` or `DaemonSet` is using the `OnDelete` strategy instead of `RollingUpdate`. See #1881.
- For `SealedSecret`, see "Why are resources of type `SealedSecret` stuck in the `Progressing` state?"
As a workaround, Argo CD allows providing a health check customization which overrides the default behavior.
If you are using Traefik for your Ingress, you can update the Traefik config to publish the load balancer IP using `publishedService`, which will resolve this issue:

```yaml
providers:
  kubernetesIngress:
    publishedService:
      enabled: true
```
For Argo CD v1.8 and earlier, the initial password is set to the name of the server pod, as per the getting started guide. For Argo CD v1.9 and later, the initial password is available from a secret named `argocd-initial-admin-secret`.

To change the password, edit the `argocd-secret` secret and update the `admin.password` field with a new bcrypt hash.
!!! note "Generating a bcrypt hash"
    Use the following command to generate a bcrypt hash for `admin.password`:

        argocd account bcrypt --password <YOUR-PASSWORD-HERE>
To apply the new password hash, use the following command (replacing the hash with your own):

```shell
# bcrypt(password)=$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa",
    "admin.passwordMtime": "'$(date +%FT%T%Z)'"
  }}'
```
Another option is to delete both the `admin.password` and `admin.passwordMtime` keys and restart `argocd-server`. This will generate a new password as per the getting started guide: either the name of the pod (Argo CD 1.8 and earlier) or a randomly generated password stored in a secret (Argo CD 1.9 and later).
Add `admin.enabled: "false"` to the `argocd-cm` ConfigMap (see user management).
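For reference, the relevant fragment of the ConfigMap would look like this (a sketch; only the `data` key is shown, other existing keys must be preserved):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  admin.enabled: "false"
```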
Argo CD might fail to generate Helm chart manifests if the chart has dependencies located in external repositories. To solve the problem you need to make sure that `requirements.yaml` uses only internally available Helm repositories. Even if the chart uses only dependencies from internal repos, Helm might decide to refresh the `stable` repo. As a workaround, override the `stable` repo URL in the `argocd-cm` config map:

```yaml
data:
  repositories: |
    - type: helm
      url: http://<internal-helm-repo-host>:8080
      name: stable
```
After deploying my Helm application with Argo CD I cannot see it with `helm ls` and other Helm commands

When deploying a Helm application, Argo CD uses Helm only as a template mechanism. It runs `helm template` and then deploys the resulting manifests on the cluster instead of doing `helm install`. This means that you cannot use any Helm command to view/verify the application. It is fully managed by Argo CD.
Note that Argo CD natively supports some capabilities that you might miss in Helm (such as the history and rollback commands).
This decision was made so that Argo CD is neutral to all manifest generators.
I've configured cluster secret but it does not show up in CLI/UI, how do I fix it?

Check if the cluster secret has the `argocd.argoproj.io/secret-type: cluster` label. If the secret has the label but the cluster is still not visible, it might be a permission issue. Try to list clusters using the `admin` user (e.g. `argocd login --username admin && argocd cluster list`).
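For reference, a minimal cluster secret carrying the required label might look like this (names and values are illustrative; see the declarative setup docs for the full `config` format):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster
  server: https://mycluster.example.com
  config: |
    {
      "bearerToken": "<authentication token>"
    }
```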
Use the following steps to reconstruct the configured cluster config and connect to your cluster manually using kubectl:

```shell
kubectl exec -it <argocd-pod-name> bash # ssh into any argocd server pod
argocd admin cluster kubeconfig https://<cluster-url> /tmp/config --namespace argocd # generate your cluster config
KUBECONFIG=/tmp/config kubectl get pods # test connection manually
```

Now you can manually verify that the cluster is accessible from the Argo CD pod.
To terminate the sync, click on the "synchronization" then "terminate":
In some cases, the tool you use may conflict with Argo CD by adding the `app.kubernetes.io/instance` label, e.g. using the Kustomize common labels feature.
Argo CD automatically sets the `app.kubernetes.io/instance` label and uses it to determine which resources form the app.
If the tool does this too, this causes confusion. You can change this label by setting the `application.instanceLabelKey` value in `argocd-cm`. We recommend that you use `argocd.argoproj.io/instance`.
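A sketch of the corresponding `argocd-cm` entry (only the relevant key shown):

```yaml
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
```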
!!! note
    When you make this change your applications will become out of sync and will need re-syncing.
See #1482.
The default polling interval is 3 minutes (180 seconds) with a configurable jitter.
You can change the setting by updating the `timeout.reconciliation` value and the `timeout.reconciliation.jitter` value in the `argocd-cm` config map. If there are any Git changes, Argo CD will only update applications with the auto-sync setting enabled. If you set it to `0`, then Argo CD will stop polling Git repositories automatically, and you can only use alternative methods such as webhooks and/or manual syncs for deploying applications.
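As a sketch, the relevant keys in the `argocd-cm` ConfigMap would look like this (the `300s`/`60s` values are illustrative):

```yaml
data:
  timeout.reconciliation: 300s
  timeout.reconciliation.jitter: 60s
```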
Why is my Argo CD application `OutOfSync` when there are no actual changes to the resource limits (or other fields with unit values)?

Kubernetes normalized your resource limits when they were applied, and Argo CD then compares the version in your generated manifests from Git to the normalized ones in the Kubernetes cluster - they may not match.

E.g.

- `'1000m'` normalized to `'1'`
- `'0.1'` normalized to `'100m'`
- `'3072Mi'` normalized to `'3Gi'`
- `3072` normalized to `'3072'` (quotes added)
- `8760h` normalized to `8760h0m0s`
To fix this use diffing customizations.
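To see why such diffs appear, here is a minimal sketch (not Argo CD code) of how a CPU quantity expressed in millicores collapses to the same canonical value as its plain-cores spelling:

```shell
# Normalize a CPU quantity roughly the way the API server canonicalizes it:
# a value with an 'm' suffix (millicores) divided by 1000 equals the
# plain-cores form, so '1000m' and '1' denote the same quantity.
normalize_cpu() {
  case "$1" in
    *m) awk -v v="${1%m}" 'BEGIN { printf "%g\n", v / 1000 }' ;;
    *)  printf '%s\n' "$1" ;;
  esac
}

normalize_cpu 1000m   # -> 1
normalize_cpu 500m    # -> 0.5
```

A string comparison of the Git manifest (`1000m`) against the live object (`1`) then reports a difference even though the quantities are equal.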
Argo CD uses a JWT as the auth token. You are likely part of many groups and have exceeded the 4KB limit which is set for cookies. You can get the list of groups by opening "developer tools -> network":

- Click log in
- Find the call to `<argocd_instance>/auth/callback?code=<random_string>`

Decode the token at https://jwt.io/. That will provide the list of teams that you can remove yourself from.
See #2165.
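If you'd rather not paste a token into a third-party site, the payload can be decoded locally. A minimal sketch (standard base64url handling; the token here is a throwaway built on the spot, not a real Argo CD token):

```shell
# Decode the payload (2nd dot-separated segment) of a JWT:
# translate base64url to base64, restore '=' padding, then decode.
jwt_payload() {
  seg="$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')"
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# Example with a synthetic token (header.payload.signature):
payload="$(printf '%s' '{"groups":["team-a","team-b"]}' | base64 | tr -d '=' | tr '/+' '_-')"
jwt_payload "header.$payload.signature"   # -> {"groups":["team-a","team-b"]}
```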
Maybe you're behind a proxy that does not support HTTP/2? Try the `--grpc-web` flag:

    argocd ... --grpc-web
The certificate created by default by Argo CD is not automatically recognized by the Argo CD CLI. To create a secure system, you must follow the instructions to install a certificate and configure your client OS to trust that certificate.

If you're not running a production system (e.g. you're testing Argo CD out), try the `--insecure` flag:

    argocd ... --insecure

!!! warning "Do not use `--insecure` in production"
Most likely you forgot to set the `url` in `argocd-cm` to point to your Argo CD as well. See also the docs.
Versions of `SealedSecret` up to and including `v0.15.0` (especially through helm chart `1.15.0-r3`) don't include a modern CRD, and thus the status field will not be exposed (on k8s `1.16+`). If your Kubernetes deployment is modern, ensure you're using a fixed CRD if you want this feature to work at all.

The controller of the `SealedSecret` resource may expose the status condition on the resource it provisioned. Since version `v2.0.0` Argo CD picks up that status condition to derive a health status for the `SealedSecret`.
Versions before `v0.15.0` of the `SealedSecret` controller are affected by an issue regarding these status condition updates, which is why this feature is disabled by default in those versions. Status condition updates may be enabled by starting the `SealedSecret` controller with the `--update-status` command line parameter or by setting the `SEALED_SECRETS_UPDATE_STATUS` environment variable.

To disable Argo CD from checking the status condition on `SealedSecret` resources, add the following resource customization in your `argocd-cm` ConfigMap via the `resource.customizations.health.<group_kind>` key.
```yaml
resource.customizations.health.bitnami.com_SealedSecret: |
  hs = {}
  hs.status = "Healthy"
  hs.message = "Controller doesn't report resource status"
  return hs
```
An application may trigger a sync error labeled a `ComparisonError`, with a message like:

    The order in patch list: [map[name:KEY_BC value:150] map[name:KEY_BC value:500] map[name:KEY_BD value:250] map[name:KEY_BD value:500] map[name:KEY_BI value:something]] doesn't match $setElementOrder list: [map[name:KEY_AA] map[name:KEY_AB] map[name:KEY_AC] map[name:KEY_AD] map[name:KEY_AE] map[name:KEY_AF] map[name:KEY_AG] map[name:KEY_AH] map[name:KEY_AI] map[name:KEY_AJ] map[name:KEY_AK] map[name:KEY_AL] map[name:KEY_AM] map[name:KEY_AN] map[name:KEY_AO] map[name:KEY_AP] map[name:KEY_AQ] map[name:KEY_AR] map[name:KEY_AS] map[name:KEY_AT] map[name:KEY_AU] map[name:KEY_AV] map[name:KEY_AW] map[name:KEY_AX] map[name:KEY_AY] map[name:KEY_AZ] map[name:KEY_BA] map[name:KEY_BB] map[name:KEY_BC] map[name:KEY_BD] map[name:KEY_BE] map[name:KEY_BF] map[name:KEY_BG] map[name:KEY_BH] map[name:KEY_BI] map[name:KEY_BC] map[name:KEY_BD]]

There are two parts to the message:

- `The order in patch list: [`

    This identifies the values for items, especially items that appear multiple times:

        map[name:KEY_BC value:150] map[name:KEY_BC value:500] map[name:KEY_BD value:250] map[name:KEY_BD value:500] map[name:KEY_BI value:something]

    You'll want to identify the keys that are duplicated -- you can focus on the first part, as each duplicated key appears in the first list once for each of its values. The rest of this part is really just

    `]`

- `doesn't match $setElementOrder list: [`

    This includes all of the keys. It's included for debugging purposes -- you don't need to pay much attention to it. It will give you a hint about the precise location in the list of the duplicated keys:

        map[name:KEY_AA] map[name:KEY_AB] map[name:KEY_AC] map[name:KEY_AD] map[name:KEY_AE] map[name:KEY_AF] map[name:KEY_AG] map[name:KEY_AH] map[name:KEY_AI] map[name:KEY_AJ] map[name:KEY_AK] map[name:KEY_AL] map[name:KEY_AM] map[name:KEY_AN] map[name:KEY_AO] map[name:KEY_AP] map[name:KEY_AQ] map[name:KEY_AR] map[name:KEY_AS] map[name:KEY_AT] map[name:KEY_AU] map[name:KEY_AV] map[name:KEY_AW] map[name:KEY_AX] map[name:KEY_AY] map[name:KEY_AZ] map[name:KEY_BA] map[name:KEY_BB] map[name:KEY_BC] map[name:KEY_BD] map[name:KEY_BE] map[name:KEY_BF] map[name:KEY_BG] map[name:KEY_BH] map[name:KEY_BI] map[name:KEY_BC] map[name:KEY_BD]

    `]`

In this case, the duplicated keys have been emphasized to help you identify the problematic keys. Many editors have the ability to highlight all instances of a string; using such an editor can help with such problems.
The most common instance of this error is with `env:` fields for `containers`.
!!! note "Dynamic applications"
    It's possible that your application is being generated by a tool, in which case the duplication might not be evident within the scope of a single file. If you have trouble debugging this problem, consider filing a ticket to the owner of the generator tool asking them to improve its validation and error reporting.
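As an illustration (hypothetical values, patterned on the keys in the error message above), a container spec that would trigger this error repeats an `env` entry name:

```yaml
containers:
  - name: app
    env:
      - name: KEY_BC
        value: "150"
      - name: KEY_BC    # duplicate entry name triggers the patch conflict
        value: "500"
      - name: KEY_BI
        value: "something"
```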
- Delete the `argocd-redis` secret in the namespace where Argo CD is installed.

        kubectl delete secret argocd-redis -n <argocd namespace>

- If you are running Redis in HA mode, restart Redis in HA.

        kubectl rollout restart deployment argocd-redis-ha-haproxy
        kubectl rollout restart statefulset argocd-redis-ha-server

- If you are running Redis in non-HA mode, restart Redis.

        kubectl rollout restart deployment argocd-redis

- Restart the other components.

        kubectl rollout restart deployment argocd-server argocd-repo-server
        kubectl rollout restart statefulset argocd-application-controller
The Argo CD default installation is now configured to automatically enable Redis authentication. If for some reason authenticated Redis does not work for you and you want to use non-authenticated Redis, here are the steps:

- You need to have your own Redis installation.
- Configure Argo CD to use your own Redis instance. See this doc for the Argo CD configuration.
- If you already installed the Redis shipped with Argo CD, you also need to clean up the existing components:

    - When HA Redis is used:
        - `kubectl delete deployment argocd-redis-ha-haproxy`
        - `kubectl delete statefulset argocd-redis-ha-server`
    - When non-HA Redis is used:
        - `kubectl delete deployment argocd-redis`

- Remove the environment variable `REDIS_PASSWORD` from the following manifests:

    - Deployment: `argocd-repo-server`
    - Deployment: `argocd-server`
    - StatefulSet: `argocd-application-controller`
The Redis password is stored in the Kubernetes secret `argocd-redis` with key `auth` in the namespace where Argo CD is installed.
You can configure your secret provider to generate the Kubernetes secret accordingly.
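A sketch of the secret shape such a provider would need to produce (the password value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-redis
  namespace: argocd  # the namespace where Argo CD is installed
type: Opaque
stringData:
  auth: <redis-password>
```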
`Manifest generation error (cached)` means that there was an error when generating manifests, and that the error message has been cached to avoid runaway retries.
Doing a hard refresh (ignoring the cached error) can overcome transient issues. But if there's an ongoing reason manifest generation is failing, a hard refresh will not help.
Instead, try searching the repo-server logs for the app name in order to identify the error that is causing manifest generation to fail.
For certain features, Argo CD relies on a static (hard-coded) set of schemas for built-in Kubernetes resource types. If your manifests use fields which are not present in the hard-coded schemas, you may get an error like `field not declared in schema`.

The schema version is based on the Kubernetes libraries version that Argo CD is built against. To find the Kubernetes version for a given Argo CD version, navigate to this page, where `X.Y.Z` is the Argo CD version:

    https://github.com/argoproj/argo-cd/blob/vX.Y.Z/go.mod

Then find the Kubernetes version in the `go.mod` file. For example, for Argo CD v2.11.4, the Kubernetes libraries version is `v0.26.11`:

    k8s.io/api => k8s.io/api v0.26.11

To completely resolve the issue, upgrade to an Argo CD version which contains a static schema supporting all the needed fields.
As mentioned above, only certain Argo CD features rely on the static schema: 1) `ignoreDifferences` with `managedFieldsManagers`, 2) server-side apply without server-side diff, and 3) server-side diff with mutation webhooks.

If you can avoid using these features, you can avoid triggering the error. The options are as follows:

- Disable `ignoreDifferences` entries which have `managedFieldsManagers`: see the diffing docs for details about that feature. Removing this config could cause undesired diffing behavior.
- Disable server-side apply: see the server-side apply docs for details about that feature. Disabling server-side apply may have undesired effects on sync behavior. Note that you can bypass this issue if you use server-side diff and exclude mutation webhooks from the diff. Excluding mutation webhooks from the diff could cause undesired diffing behavior.
- Disable mutation webhooks when using server-side diff: see the server-side diff docs for details about that feature. Disabling mutation webhooks may have undesired effects on sync behavior.
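For context, the first option refers to configuration of this shape in an Application spec (a sketch; the group/kind and manager name are illustrative):

```yaml
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      managedFieldsManagers:
        - kube-controller-manager
```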