reflector going OOM caused by spiky behaviour #187
Comments
Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@winromulus can you have a look into this? What can I do about it?
Removed stale label.
Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/nostale
Removed stale label.
@aeimer Are you using the reflector to reflect certificates from LetsEncrypt?
Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@klimisa is there any chance that you can have a look into this issue?
Removed stale label.
Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Is the issue just the memory limit? Or is there some larger issue at play here causing a problem?
Removed stale label.
@brokenjacobs AFAIK everything seems to work. We have cert-manager with LE running.
Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
No stale. The problem seems to persist.
Removed stale label.
Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Automatically closed stale item.
- New multi-arch pipeline with proper tagging convention
- Removed cert-manager extension (deprecated due to new support from cert-manager). Fixes: #191
- Fixed healthchecks. Fixes: #208
- Removed Slack support links (GitHub issues only). Fixes: #199
- Simplified startup and improved performance. Fixes: #194
- Huge improvements in performance and stability. Fixes: #187 #182 #166 #150 #138 #121 #108
Hi guys,
my reflector pod just exited hard due to an out-of-memory event.
There are also regular errors popping up in the log.
Instana event:
Instana pod details:
Logs:
These kinds of log entries repeat every few hours.
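For anyone trying to reproduce this, here is a rough sketch of how the OOM kill and the memory spikes can be checked from the cluster side (the namespace and label selector below are placeholders, not necessarily what the chart actually sets):

```sh
# Confirm the container was OOM-killed: look for
# "Last State: Terminated" with "Reason: OOMKilled".
kubectl -n reflector describe pod -l app.kubernetes.io/name=reflector

# Watch live memory usage to catch the spikes
# (requires metrics-server in the cluster).
kubectl -n reflector top pod -l app.kubernetes.io/name=reflector
```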
One thing that came up: maybe the pod just needs more RAM than you specified here: https://github.com/emberstack/kubernetes-reflector/blob/master/src/helm/reflector/values.yaml#L58
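As a minimal sketch, the limit could be raised at upgrade time, assuming the chart's top-level resources block (the one linked above) is what ends up on the deployment; the release name, chart reference, and the 256Mi/128Mi figures are just examples:

```sh
# Bump the memory limit and request for the reflector release
# (release name, chart reference and values are assumptions; adjust to your setup).
helm upgrade reflector emberstack/reflector \
  --namespace reflector \
  --reuse-values \
  --set resources.limits.memory=256Mi \
  --set resources.requests.memory=128Mi
```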
In general the reflector works quite well, but I don't know where the memory spikes come from.
I have a second, smaller cluster running the reflector; the RAM fills up there as well, just not as fast, and I haven't seen any OOMs there so far.
Thank you for helping.
BR
Alex