Maintaining the OCI hook becomes expensive: the hook is tightly coupled to the specific OCI runtime chosen for the node. Moreover, the hook cannot be adjusted on a per-pod basis when a pod uses a custom runtime class.
In addition, the OCI hook is detached from the wasp-agent lifecycle, which means extra effort is needed to clean up the hook when the wasp-agent is unresponsive or when it is deleted from the cluster.
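For context, this is roughly what a runtime-specific hook registration looks like; a hedged sketch of a CRI-O/Podman `hooks.d` entry, where the path and stage are illustrative rather than wasp-agent's actual hook:

```json
{
  "version": "1.0.0",
  "hook": {
    "path": "/usr/local/bin/swap-hook"
  },
  "when": {
    "always": true
  },
  "stages": ["poststart"]
}
```

Other runtimes discover and register hooks differently (or not at all), which is exactly the per-runtime coupling described above.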
If we consider removing the hook, we should compare two scenarios, with and without the hook. The difference is as follows:
(1) with the hook, a container transitions from unlimited to limited swap usage; (2) without the hook, it transitions from zero to limited swap usage.
Setting the swap limit is done by the limited-swap controller, which runs inside the wasp-agent DaemonSet. The time it takes to apply the limit depends on API latency (this design can itself be improved).
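For illustration, a minimal sketch of what that controller has to do, assuming the proportional formula from the upstream Kubernetes swap KEP (whether wasp-agent computes the limit exactly this way is an assumption) and a cgroup v2 node:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// swapLimit mirrors the proportional LimitedSwap formula from the
// upstream Kubernetes swap KEP: a container's share of node swap
// equals its share of node memory, based on its memory request.
func swapLimit(memRequest, nodeMemCapacity, nodeSwapCapacity int64) int64 {
	return int64(float64(memRequest) / float64(nodeMemCapacity) * float64(nodeSwapCapacity))
}

// applySwapLimit writes the limit to the container's cgroup v2
// memory.swap.max file. Resolving cgroupPath from the container
// runtime is omitted; the path handling here is illustrative.
func applySwapLimit(cgroupPath string, limit int64) error {
	file := filepath.Join(cgroupPath, "memory.swap.max")
	return os.WriteFile(file, []byte(fmt.Sprintf("%d", limit)), 0o644)
}

func main() {
	// Example: a 2Gi memory request on a 16Gi node with 8Gi of swap
	// yields a 1Gi swap limit for the container.
	fmt.Println(swapLimit(2<<30, 16<<30, 8<<30)) // 1073741824
}
```

Because the controller reacts to containers it observes through the API, this write lands some time after the container has already started; that window is what the two scenarios above differ on.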
By switching from (1) to (2) we don't actually introduce a regression from a workload-stability perspective, because in both scenarios, if the workload exceeds its allowed swap limit, it will be OOM-killed.
From a node-stability perspective, switching to (2) is even safer: scenario (2) puts at risk only the container itself, which in the worst case is OOM-killed, while in scenario (1) unlimited swap consumption can put the whole node at risk.
Regarding API latency, the following steps can be taken:
(*) We don't actually need the API server; we can work directly with the kubelet server.
(**) We can utilize NRI and opt in to limited swap from inside the CRI lifecycle (see the sketch below).
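To make the NRI option concrete, here is a minimal sketch of an NRI plugin that adjusts a container's swap limit during creation, before the workload starts, which removes the window entirely. It assumes the github.com/containerd/nri stub API; the exact CreateContainer signature and the SetLinuxMemorySwap helper are from recent NRI releases and may differ between versions, and the fixed limit is a placeholder for the proportional computation above:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/nri/pkg/api"
	"github.com/containerd/nri/pkg/stub"
)

type plugin struct{}

// CreateContainer runs inside the CRI container-creation flow, so the
// swap limit is in place before the container starts: no unlimited or
// zero-to-limited transition window at all.
func (p *plugin) CreateContainer(ctx context.Context, pod *api.PodSandbox, ctr *api.Container) (*api.ContainerAdjustment, []*api.ContainerUpdate, error) {
	adjust := &api.ContainerAdjustment{}

	// Placeholder: a fixed 1Gi memory+swap limit (the OCI "swap" field
	// is the combined memory+swap total). A real plugin would derive
	// this from the container's memory request.
	adjust.SetLinuxMemorySwap(1 << 30)

	return adjust, nil, nil
}

func main() {
	p := &plugin{}
	s, err := stub.New(p, stub.WithPluginName("swap-limiter"), stub.WithPluginIdx("10"))
	if err != nil {
		log.Fatalf("failed to create NRI stub: %v", err)
	}
	if err := s.Run(context.Background()); err != nil {
		log.Fatalf("NRI plugin exited: %v", err)
	}
}
```

Since the plugin hooks the CRI lifecycle directly, it is also decoupled from the specific OCI runtime, which addresses the maintenance concern above.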