Audit the cgroups in Talos and resource reservation #7081

Closed · Tracked by #9249
smira opened this issue Apr 12, 2023 · 3 comments · Fixed by #9341

smira commented Apr 12, 2023

- Do we need to reserve some CPU for the /init cgroup (machined)?
- Do we want to have some default reservation for the kubelet cgroup?
- Can we easily gather some resource consumption data on the dashboard?
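
For the first two questions: on cgroup v2, a CPU reservation corresponds to the cpu.weight control file and a guaranteed memory floor to memory.min. Below is a minimal, purely illustrative Go sketch of writing such reservations directly to the cgroup filesystem; it is not Talos code, and the cgroup paths and numbers are placeholder assumptions.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeCgroupFile writes a single cgroup v2 control file under /sys/fs/cgroup.
func writeCgroupFile(cgroup, file, value string) error {
	return os.WriteFile(filepath.Join("/sys/fs/cgroup", cgroup, file), []byte(value), 0o644)
}

func main() {
	// Hypothetical reservations: a higher CPU weight and a guaranteed memory
	// floor for the cgroups named in the questions above. The numbers are
	// placeholders, not recommended values.
	reservations := map[string]map[string]string{
		"init":    {"cpu.weight": "200", "memory.min": "96M"},  // machined
		"kubelet": {"cpu.weight": "100", "memory.min": "192M"}, // kubelet
	}

	for cgroup, files := range reservations {
		for file, value := range files {
			if err := writeCgroupFile(cgroup, file, value); err != nil {
				fmt.Fprintf(os.Stderr, "setting %s/%s: %v\n", cgroup, file, err)
			}
		}
	}
}
```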

github-actions bot commented Jul 4, 2024

This issue is stale because it has been open for 180 days with no activity. Remove the Stale label or comment, or this will be closed in 7 days.

github-actions bot added the Stale label on Jul 4, 2024

github-actions bot commented Jul 9, 2024

This issue was closed because it has been stalled for 7 days with no activity.

github-actions bot closed this as not planned on Jul 9, 2024
smira reopened this on Jul 9, 2024

smira commented Jul 9, 2024

Still valid, need a stress test

smira removed the Stale label on Jul 9, 2024
smira self-assigned this on Sep 13, 2024
smira added a commit to smira/talos that referenced this issue Sep 19, 2024
Fixes: siderolabs#7081

Review all reservations and limits set, test under stress load (using
both memory and CPU).

The goal: system components (Talos itself) and runtime (kubelet, CRI)
should survive under extreme resource starvation (workloads consuming
all CPU/memory).

Uses siderolabs#9337 to visualize changes, but doesn't depend on it.

Signed-off-by: Andrey Smirnov <[email protected]>
smira added four more commits to smira/talos referencing this issue between Sep 19 and Sep 21, 2024, all with the same commit message; the last was cherry picked from commit 6b15ca1.
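
For reference, here is a minimal sketch of the kind of combined CPU and memory stress load the commit message describes (goroutines burning every CPU plus a steadily growing memory hog). It is not the actual test harness, and the chunk size and pacing are arbitrary assumptions.

```go
package main

import (
	"runtime"
	"time"
)

func main() {
	// Saturate every CPU with busy-looping goroutines.
	for i := 0; i < runtime.NumCPU(); i++ {
		go func() {
			for {
				// spin
			}
		}()
	}

	// Allocate memory in 64 MiB chunks, touching each page so the memory is
	// actually backed, to push the node toward memory pressure and OOM.
	var hog [][]byte
	for {
		chunk := make([]byte, 64<<20)
		for off := 0; off < len(chunk); off += 4096 {
			chunk[off] = 1
		}
		hog = append(hog, chunk)
		time.Sleep(100 * time.Millisecond)
	}
}
```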
github-actions bot locked this issue as resolved and limited the conversation to collaborators on Nov 20, 2024