Detect and bypass cycles during token revocation #5364
@@ -1404,8 +1404,8 @@ func (ts *TokenStore) revokeTree(ctx context.Context, le *leaseEntry) error {
 // Updated to be non-recursive and revoke child tokens
 // before parent tokens(DFS).
 func (ts *TokenStore) revokeTreeInternal(ctx context.Context, id string) error {
-	var dfs []string
-	dfs = append(dfs, id)
+	dfs := []string{id}
+	seenIDs := make(map[string]struct{})

 	var ns *namespace.Namespace

@@ -1429,7 +1429,8 @@ func (ts *TokenStore) revokeTreeInternal(ctx context.Context, id string) error {
 	}

 	for l := len(dfs); l > 0; l = len(dfs) {
-		id := dfs[0]
+		id := dfs[len(dfs)-1]
+		seenIDs[id] = struct{}{}

 		saltedCtx := ctx
 		saltedNS := ns
@@ -1444,11 +1445,26 @@ func (ts *TokenStore) revokeTreeInternal(ctx context.Context, id string) error {
 		}

 		path := saltedID + "/"
-		children, err := ts.parentView(saltedNS).List(saltedCtx, path)
+		childrenRaw, err := ts.parentView(saltedNS).List(saltedCtx, path)
 		if err != nil {
 			return errwrap.Wrapf("failed to scan for children: {{err}}", err)
 		}

+		// Filter the child list to remove any items that have ever been in the dfs stack.
+		// This is a robustness check, as a parent/child cycle can lead to an OOM crash.
+		children := make([]string, 0, len(childrenRaw))
+		for _, child := range childrenRaw {
+			if _, seen := seenIDs[child]; !seen {
+				children = append(children, child)
+			} else {
+				if err = ts.parentView(saltedNS).Delete(saltedCtx, path+child); err != nil {
+					return errwrap.Wrapf("failed to delete entry: {{err}}", err)
+				}
+
+				ts.Logger().Warn("token cycle found", "token", child)
+			}
+		}
+
 		// If the length of the children array is zero,
 		// then we are at a leaf node.
 		if len(children) == 0 {
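As an aside, the cycle guard added in this hunk can be read in isolation. The following is a minimal, self-contained sketch of the same traversal pattern: an iterative DFS that marks every ID pulled off the stack and drops (rather than re-visits) any child that has already been seen. The store type and its listChildren, deleteEdge, and revoke methods are hypothetical in-memory stand-ins for ts.parentView(...).List, ts.parentView(...).Delete, and ts.revokeInternal; this is an illustration of the technique, not the actual token store code.

	// Hypothetical sketch, not Vault code.
	package main

	import "fmt"

	// store is a minimal in-memory stand-in for the token store's parent index
	// ("parent/<parentID>/<childID>" entries) plus each token's recorded parent.
	type store struct {
		childIndex map[string]map[string]bool // parent ID -> set of child IDs
		parentOf   map[string]string          // token ID  -> recorded parent ID
	}

	// listChildren stands in for ts.parentView(ns).List(ctx, saltedID+"/").
	func (s *store) listChildren(id string) []string {
		var out []string
		for child := range s.childIndex[id] {
			out = append(out, child)
		}
		return out
	}

	// deleteEdge stands in for ts.parentView(ns).Delete(ctx, path+child).
	func (s *store) deleteEdge(parent, child string) {
		delete(s.childIndex[parent], child)
	}

	// revoke stands in for the part of ts.revokeInternal that removes the token
	// and the parent-index entry recorded inside the token itself.
	func (s *store) revoke(id string) {
		s.deleteEdge(s.parentOf[id], id)
		delete(s.parentOf, id)
		fmt.Println("revoked:", id)
	}

	// revokeTree is the cycle-guarded, iterative DFS pattern in isolation:
	// children are revoked before parents, and any child that has ever been on
	// the stack is treated as a corrupt cycle edge and dropped.
	func (s *store) revokeTree(rootID string) {
		dfs := []string{rootID}
		seenIDs := make(map[string]struct{})

		for len(dfs) > 0 {
			id := dfs[len(dfs)-1]
			seenIDs[id] = struct{}{}

			// Keep only children that have never been on the stack.
			var children []string
			for _, child := range s.listChildren(id) {
				if _, seen := seenIDs[child]; !seen {
					children = append(children, child)
				} else {
					s.deleteEdge(id, child)
					fmt.Println("token cycle found:", child)
				}
			}

			if len(children) == 0 {
				// Leaf (or fully drained) node: revoke it and pop the stack.
				s.revoke(id)
				dfs = dfs[:len(dfs)-1]
			} else {
				// Children must be revoked before their parent: push them on top.
				dfs = append(dfs, children...)
			}
		}
	}

	func main() {
		// a -> b -> c, plus a corrupt extra index entry c -> a forming a cycle.
		s := &store{
			childIndex: map[string]map[string]bool{
				"a": {"b": true},
				"b": {"c": true},
				"c": {"a": true}, // bogus edge
			},
			parentOf: map[string]string{"a": "", "b": "a", "c": "b"},
		}
		s.revokeTree("a")
	}

Running this against the three-token cycle in main prints a single "token cycle found" warning and then revokes c, b, and a in that order, instead of looping forever.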
@@ -1464,11 +1480,10 @@ func (ts *TokenStore) revokeTreeInternal(ctx context.Context, id string) error {
 			if l == 1 {
 				return nil
 			}
-			dfs = dfs[1:]
+			dfs = dfs[:len(dfs)-1]
 		} else {
-			// If we make it here, there are children and they must
-			// be prepended.
-			dfs = append(children, dfs...)
+			// If we make it here, there are children and they must be appended.
+			dfs = append(dfs, children...)
 		}
 	}

Review comment:

This appears to change from dfs to bfs, which was specifically what we got away from due to problems customers had with very large token trees. Is there a reason for this change?

Reply:

It's not a functional change and is still dfs. The previous version had all operations working at the front of the slice, i.e. pulling from the head (0th element) and prepending the children to the slice. This pattern causes a new allocation and full copy of the dfs slice on every append. I've simply flipped the access pattern to use the end of the slice instead. So the next element is pulled off the end (see line 1434), and deletions and child appends are made to the end. This way the underlying dfs array is only reallocated when it grows to a new max capacity.

Before the change I did a micro benchmark of prepending vs. appending and it was a big (260x) difference in speed. Admittedly the macro operation here is likely gated by I/O, but I felt that the reduced GC churn made it worthwhile.

Reply:

Yep, I see what you did now, looks fine.
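The prepend-vs-append point in the reply above can be sanity-checked with a small, hypothetical microbenchmark (not part of this PR): prepending via append(children, dfs...) allocates a fresh backing array and copies the whole stack on every iteration, while appending to the tail only reallocates when the current capacity is exhausted. The 260x figure quoted above is the author's own measurement; the sketch below, with made-up sizes and names, is just one way to run a similar comparison.

	// Hypothetical benchmark sketch; names and sizes are illustrative only.
	package dfsbench

	import "testing"

	// BenchmarkPrepend mirrors the old pattern: children are prepended, so each
	// iteration allocates a new backing array and copies the whole stack.
	func BenchmarkPrepend(b *testing.B) {
		for i := 0; i < b.N; i++ {
			var dfs []string
			for j := 0; j < 1024; j++ {
				children := []string{"child"}
				dfs = append(children, dfs...)
			}
		}
	}

	// BenchmarkAppend mirrors the new pattern: children go on the tail, so the
	// backing array only grows when it runs out of capacity.
	func BenchmarkAppend(b *testing.B) {
		for i := 0; i < b.N; i++ {
			var dfs []string
			for j := 0; j < 1024; j++ {
				children := []string{"child"}
				dfs = append(dfs, children...)
			}
		}
	}

Run with go test -bench . -benchmem to compare both time and allocation counts between the two access patterns.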
Review comment:

ts.revokeInternal down below should be making this exact Delete call on the child token, so I wonder if the cycle is due to the token not being properly cleaned up in there. IMO revokeTreeInternal should be only in charge of tree traversal and delegate deletion to revokeInternal to ensure that all related entries are properly removed.
Reply:

ts.revokeInternal does make this delete for the parent reference in the token, but if we're here, there are extraneous parent references that will never get deleted, e.g. assume a state where both parent/a/c and parent/b/c exist. If c's correct parent is a, that info is stored in c and will cause parent/a/c to be deleted in revokeInternal. But nothing would trigger deletion of parent/b/c.

A prior version of the PR didn't have the delete step. It worked fine and was resilient to cycles, but @briankassouf noted that it would leave stray parent references.
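To make the stray-reference argument concrete, here is a small hypothetical illustration using plain Go maps (not Vault's storage API): if the parent index holds both parent/a/c and parent/b/c but c's own entry records a as its parent, deleting only the key derived from that recorded parent leaves the bogus entry behind.

	// Hypothetical illustration, not Vault code.
	package main

	import "fmt"

	func main() {
		// Simulated parent index entries ("parent/<parentID>/<childID>").
		parentIndex := map[string]bool{
			"parent/a/c": true, // legitimate reference
			"parent/b/c": true, // extraneous reference
		}

		// Each token entry records a single parent ID; c's recorded parent is a.
		recordedParent := map[string]string{"c": "a"}

		// A revocation modeled on revokeInternal only deletes the index key built
		// from the parent recorded inside the child's own entry.
		delete(parentIndex, "parent/"+recordedParent["c"]+"/c")

		fmt.Println(parentIndex) // map[parent/b/c:true] -- the stray entry survives
	}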
Reply:

I don't think it can ever get to that state though, right? c can only ever have one parent, unless entry creation was borked at some point.
Reply:

Root cause for how a cycle could be created is still unknown, but we did see it in user data (see: https://hashicorp.slack.com/archives/C04L34UGZ/p1536860851000100) and if present, the revocation will just consume memory until Vault crashes.