Detect and bypass cycles during token revocation #5364
Conversation
This is nearly the same as when it was first committed, but with two changes:
Looks good!
Old:
// If we make it here, there are children and they must
// be prepended.
dfs = append(children, dfs...)

New:
// If we make it here, there are children and they must be appended.
This appears to change from dfs to bfs, which was specifically what we got away from due to problems customers had with very large token trees. Is there a reason for this change?
It's not a functional change; it's still DFS. The previous version had all operations working at the front of the slice, i.e., pulling from the head (0th element) and prepending the children to the slice. That pattern causes a new allocation and a full copy of the dfs slice on every prepend. I've simply flipped the access pattern to use the end of the slice instead: the next element is pulled off the end (see 1434), and deletions and child appends are made to the end. This way the underlying dfs array is only reallocated when it grows to a new maximum capacity.
Before the change I did a micro benchmark of prepending vs. appending and it was a big (260x) difference in speed. Admittedly the macro operation here is likely gated by I/O, but I felt that the reduced GC churn made it worthwhile.
Yep, I see what you did now, looks fine.
This should not be merged without figuring out the dfs/bfs thing
LGTM!
if _, seen := seenIDs[child]; !seen {
	children = append(children, child)
} else {
	if err = ts.parentView(saltedNS).Delete(saltedCtx, path+child); err != nil {
ts.revokeInternal down below should be making this exact Delete call on the child token, so I wonder if the cycle is due to the token not being properly cleaned up in there. IMO revokeTreeInternal should only be in charge of tree traversal and delegate deletion to revokeInternal to ensure that all related entries are properly removed.
ts.revokeInternal does make this delete for the parent reference in the token, but if we're here, there are extraneous parent references that will never get deleted. E.g., assume the state is:
parent/a/c
parent/b/c
If c's correct parent is a, that info is stored in c and will cause parent/a/c to be deleted in revokeInternal. But nothing would trigger deletion of parent/b/c.
A prior version of the PR didn't have the delete step. It worked fine and was resilient to cycles, but @briankassouf noted that it would leave stray parent references.
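The stray-reference scenario above can be sketched with a toy storage map. This is a hypothetical simplification: `storage`, `parentOf`, and this `revokeInternal` are illustrative stand-ins, not Vault's actual types or function.

```go
package main

import "fmt"

// Hypothetical flat storage keyed like Vault's parent index; the
// second entry is a stray reference that should not exist.
var storage = map[string]bool{
	"parent/a/c": true, // c's recorded parent is a
	"parent/b/c": true, // stray edge left behind by the cycle
}

// parentOf stands in for the parent pointer stored in the token entry.
var parentOf = map[string]string{"c": "a"}

// revokeInternal deletes only the reference derived from the token's
// own parent pointer, so a stray entry under another parent survives
// unless the traversal deletes it when the cycle is detected.
func revokeInternal(id string) {
	delete(storage, "parent/"+parentOf[id]+"/"+id)
}

func main() {
	revokeInternal("c")
	fmt.Println(storage["parent/a/c"], storage["parent/b/c"])
}
```

This is why the extra Delete in the traversal is needed: revocation driven purely by the token's own parent pointer can never reach parent/b/c.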
I don't think it can ever get to that state though, right? c can only ever have one parent, unless entry creation was borked at some point.
The root cause for how a cycle could be created is still unknown, but we did see it in user data (see: https://hashicorp.slack.com/archives/C04L34UGZ/p1536860851000100), and if one is present, the revocation will just consume memory until Vault crashes.
410fd9d
Fixes #4803