podresources: add Watch endpoint #1926
Closed
Commits (5)
0c6948a podresources: add ListAndWatch function (ffromani)
86596df podresources: watch: split endpoint, add action (ffromani)
c4b38e3 misc minor fixes (ffromani)
45dd5d9 podresources: watch: expose resourceVersion (ffromani)
1346c2a podresources: watch: address review comments (ffromani)
Conversations
dashpole:
Should the stream be of individual pods, rather than the entire list when it changes? We would need to do something to signal deletion... But I would worry that a high rate of pod churn could make this very expensive.
Reviewer:
It's reasonable to introduce a threshold on the kubelet side, configurable via KubeletConfiguration, so that notifications are not sent too often; one notification would then contain a batch of pod resources (this is already described in this KEP). I think it's worth mentioning in this KEP, but for the implementation I think it's step 2.
ffromani:
@dashpole the intention is totally to stream only individual pod changes; the API will return one message per affected pod.
Regarding deletion, I was thinking of just sending a message with all the resources (currently devices only) cleared.
So the monitoring app first observes a message for P1 carrying the resources currently allocated to C1 and C2; then (some time passes, the pod gets deleted) it receives a message for P1 with those resources cleared. So the monitoring app can unambiguously learn that "all the resources previously allocated to C1 and C2 in P1 can now be cleared".
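For illustration only, here is what such a pair of messages could look like in protobuf text format, reusing the existing v1alpha1 PodResources/ContainerResources shapes; the namespace, resource name and device IDs below are made up:

```
# Illustrative sketch, not the final watch message format.
# While P1 is running, a message carries its current allocation:
pod_resources {
  name: "P1"
  namespace: "default"    # made-up namespace
  containers {
    name: "C1"
    devices { resource_name: "vendor.com/gpu" device_ids: "dev0" }   # made-up device
  }
  containers {
    name: "C2"
    devices { resource_name: "vendor.com/gpu" device_ids: "dev1" }   # made-up device
  }
}

# After P1 is deleted, a final message for P1 arrives with all resources cleared:
pod_resources {
  name: "P1"
  namespace: "default"
  containers { name: "C1" }
  containers { name: "C2" }
}
```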
Makes sense?
Also, I believe this should be documented. Process-wise, is it ok to add this in the KEP, or should it be added as a comment in the .proto file? Or both?
ffromani:
Also: yes, we can totally add configurable thresholds; I'll add that to the KEP.
dashpole:
Thanks, that addresses my primary concern. I'm not entirely sold on "pod without resources" indicating deletion being the best way to represent it, but as long as we consider some alternatives and still prefer it, it is reasonable.
ffromani:
No problem regarding better ways to represent deletions: I'm very open to alternatives.
To elaborate on my rationale: I considered this approach because it required no extra changes to the API; the diff is minimal and the semantics seemed clear enough.
Reviewer:
I would like to avoid it.
Maybe just add an action field to ListPodResourcesResponse with the following possible values: ADDED, UPDATED, DELETED.
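As a rough sketch of that suggestion (the enum name and field numbers are illustrative, layered on the existing v1alpha1 ListPodResourcesResponse, not the KEP's final API):

```proto
// Illustrative sketch only, not the KEP's final API.
enum PodAction {
    ADDED = 0;
    UPDATED = 1;
    DELETED = 2;
}

message ListPodResourcesResponse {
    repeated PodResources pod_resources = 1;
    // What happened to the pod(s) carried in this message.
    PodAction action = 2;
}
```

With an explicit DELETED action, the "pod with all resources cleared" convention would no longer be needed to signal deletion.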
ffromani:
Which issues do you see with this approach?
If we send the complete allocation with each message, then aside from the DELETED case we are still discussing, I don't really see the benefit of a separate action field: could you please elaborate?
I do see some benefit if each message carried only the resource allocation delta (the changes since the previous message), but I'd like to avoid sending deltas: sending the total allocation each time seems more robust and not much less efficient.
Reviewer:
Re serving the initial state as part of "watch": kubernetes/kubernetes#13969
This is causing us a bunch of burden and is misleading to users.
I would really prefer these to be separate calls for list vs watch (stream). There is the question of how you ensure consistency, i.e. that nothing that happened between the list and the watch calls is missed because it was not yet in the list and is not reflected in the watch.
That would be consistent with the k8s watch, so I would really support this one.
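A minimal sketch of what separate list and watch calls tied together by a resource version could look like (service, message and field names here are illustrative assumptions building on the existing v1alpha1 types, not the KEP's final API):

```proto
// Illustrative sketch only: split List and Watch, k8s list-then-watch style.
service PodResourcesLister {
    rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
    rpc Watch(WatchPodResourcesRequest) returns (stream WatchPodResourcesResponse) {}
}

message ListPodResourcesResponse {
    repeated PodResources pod_resources = 1;
    // Version of the node-local state this snapshot was taken at (illustrative).
    string resource_version = 2;
}

message WatchPodResourcesRequest {
    // Stream changes that happened after this version, typically the one
    // returned by the preceding List call (illustrative).
    string resource_version = 1;
}

message WatchPodResourcesResponse {
    // Per-pod payload as in List, plus the version of this change (illustrative).
    repeated PodResources pod_resources = 1;
    string resource_version = 2;
}
```

A client would then List, record the returned resource_version, and pass it to Watch, so that changes happening between the two calls are not lost.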
ffromani:
Ok, all of those are good points, and I was not aware of the issue you mentioned. I will update the KEP.