[CI] TokenBackwardsCompatibilityIT failure in 6.x #37379
Pinging @elastic/es-security
After looking more at these, especially https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/966/console, I'm inclined to say that there might be another problem here. I'm going to leave this open as this test failure may be valid, but I'm not sure it's the only problem - it looks like at least one of the nodes just died in the middle of some of these tests.
And there's another failure just now in https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+release-tests/335/console
These two reproduce locally, but this doesn't seem to be related to the token refresh; rather, no master can be elected in the mixed cluster.
The token refresh error is seen in the relevant logs (e.g. from build 335), but the suppressed exception doesn't make sense to me:
Locally, these tests fail before the test even tries to refresh the token, and I see something similar. I'll keep looking, but it could be that the refresh token error is a red herring.
Happened again today: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/1002/console
👀
The tests are muted in |
I reverted the mute on master, as this seems to fail only in 6.x.
The local reproduction doesn't have to do with the original error (failed to refresh token) that we encounter on CI (see the explanation here). I think the way to try and reproduce these locally is to
which doesn't reproduce for me. Looking into the rest of the failures, I came to realize that this might fail more consistently than we think. The
Now this makes the failure even more strange. I'll keep looking.
I unmuted the test in 05b2c80, hoping to get some additional debugging information, as this never fails locally. I will keep a close eye on 6.x and mute this test again so that I don't add too much noise.
This was muted again right after 05b2c80 so as not to cause any more CI issues. I will unmute it with proper logging in the next few days (once CI is stabilized) to see if this fails and to get enough information to debug it.
We unfortunately never got to the bottom of this. It has never failed in master and keeps on not failing, so I will close it for now and reopen it if such a failure is ever encountered again.
This change:
- Fixes TokenBackwardsCompatibilityIT: the existing tests seemed to make the assumption that in the oneThirdUpgraded stage the master node will be on the old version, and that in the twoThirdsUpgraded stage the master node will be one of the upgraded ones. However, there is no guarantee that the master node in any of the stages will or will not be one of the upgraded ones. This class now tests:
  - That we can generate and consume tokens before we start the rolling upgrade.
  - That we can consume tokens generated in the old cluster during all stages of the rolling upgrade.
  - That while on a mixed cluster, when/if the master node is upgraded, we can generate, consume, and refresh a token.
  - That after the rolling upgrade, we can consume a token generated in an old cluster and can invalidate it so that it can't be used any more.
- Ensures that during the rolling upgrade, the upgraded nodes have the same configuration as the old nodes. Specifically, the file realm we use is explicitly named `file1`. This is needed because while attempting to refresh a token in a mixed cluster we might create the token against an old node and attempt to refresh it against a new node. If the file realm name is not the same, the refresh will be seen as being made by a "different" client and will thus fail (see the sketch below).
- Renames the Authentication variable we check while refreshing a token to clientAuth, to make the code more readable.

Some of the above were possibly causing the flakiness of #37379.
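To illustrate the file realm point above, here is a minimal, self-contained sketch with hypothetical names (RealmIdentity, sameClient - this is not the actual TokenService code) of why the realm name has to match across old and upgraded nodes: a refresh is only accepted when the refreshing client matches the client that originally created the token, and the realm name is part of that comparison.

```java
// Hypothetical sketch of a "same client" check during token refresh. The real check lives in
// the X-Pack TokenService; the class and method names here are made up for illustration.
final class TokenRefreshRealmCheckSketch {

    /** Who created or refreshes a token: the authenticated user plus the realm that authenticated them. */
    static final class RealmIdentity {
        final String principal;
        final String realmName;

        RealmIdentity(String principal, String realmName) {
            this.principal = principal;
            this.realmName = realmName;
        }
    }

    /** Refresh is only allowed when the refreshing client matches the token's creator. */
    static boolean sameClient(RealmIdentity creator, RealmIdentity refresher) {
        return creator.principal.equals(refresher.principal)
            && creator.realmName.equals(refresher.realmName);
    }

    public static void main(String[] args) {
        // Illustrative realm names: a token created against an old node whose file realm has a
        // different name than the upgraded node's "file1" realm cannot be refreshed there.
        RealmIdentity creator = new RealmIdentity("test_user", "default_file");
        RealmIdentity refresher = new RealmIdentity("test_user", "file1");
        System.out.println(sameClient(creator, refresher)); // false -> refresh rejected
    }
}
```

With the fix, both old and upgraded nodes explicitly name the file realm `file1`, so the comparison succeeds regardless of which node the create and refresh requests happen to hit.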
This change is a backport of #39252; see the description above.
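For context, here is a rough sketch of the create-then-refresh round trip that the test exercises through the low-level REST client. The 6.x endpoint (`/_xpack/security/oauth2/token`) is what I'd expect here; the host, user name, password, and response parsing are illustrative assumptions, not the test's actual fixtures.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import org.apache.http.HttpHost;
import org.apache.http.message.BasicHeader;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class TokenRollingUpgradeSketch {
    public static void main(String[] args) throws Exception {
        // Credentials are illustrative; the real test uses the cluster's configured test user.
        String basicAuth = "Basic " + Base64.getEncoder()
            .encodeToString("test_user:x-pack-test-password".getBytes(StandardCharsets.UTF_8));

        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http"))
                .setDefaultHeaders(new BasicHeader[] { new BasicHeader("Authorization", basicAuth) })
                .build()) {

            // Create a token; in a mixed cluster this request may land on an old or an upgraded node.
            Request create = new Request("POST", "/_xpack/security/oauth2/token");
            create.setJsonEntity("{\"grant_type\":\"password\","
                + "\"username\":\"test_user\",\"password\":\"x-pack-test-password\"}");
            Response createResponse = client.performRequest(create);
            // ... parse access_token and refresh_token out of createResponse's JSON body ...
            String refreshToken = "<refresh_token from the create response>";

            // Refresh the token; this request may hit a different node than the create did,
            // which is why the realm configuration must match across old and upgraded nodes.
            Request refresh = new Request("POST", "/_xpack/security/oauth2/token");
            refresh.setJsonEntity("{\"grant_type\":\"refresh_token\","
                + "\"refresh_token\":\"" + refreshToken + "\"}");
            client.performRequest(refresh);
        }
    }
}
```

Because the two requests can land on different nodes of the mixed cluster, any per-node configuration that feeds into the token's client identity (such as the realm name) has to be identical on old and new nodes.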
This failure appears to be pretty rare, but happened twice in 6.x intake this morning:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/960/console
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+6.x+intake/966/console (maybe? This one might be caused by something else) [edit: I don't think this one is connected, see below]
Reproduce line (I'm having trouble with my local environment at the moment, so I can't confirm whether this reproduces, but I don't think so):
Stack trace:
This also seems to cause some trouble in the cluster, which makes other tests fail.
Possibly related to #33197 and/or #31195