This repository has been archived by the owner on Apr 11, 2022. It is now read-only.

feat: set cpu/mem requests and limits for prisma and cloudsql #298

Merged

merged 4 commits from yj-memreq into master on Apr 12, 2020

Conversation

@pmespresso (Contributor) commented Apr 11, 2020

Addresses #271, particularly the `exceeds its request of 0` warning mentioned in #271 (comment).

I had previously mistaken the mem/cpu request fields as relating only to HPA (which we agreed was unnecessary for our app), which was why that earlier PR was closed.

This PR sets the resource requests as follows:

prisma:

| | mem | cpu |
| --- | --- | --- |
| request | 1.28GiB (failed at 1278848Ki ≈ 1.219GiB) | 250m default (wasn't the cause of failure) |
| limit | 2.56GiB | 500m |

cloudsql:

| | mem | cpu |
| --- | --- | --- |
| request | 16Mi (failed at 9364Ki ≈ 9.145Mi) | 250m default (wasn't the cause of failure) |
| limit | 32Mi | 500m |
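
For reference, a minimal sketch of what the corresponding `resources` stanza could look like in the deployment manifests. The container names and surrounding structure are illustrative assumptions, not copied from this repo; only the request/limit values come from the tables above.

```yaml
# Illustrative sketch only: container names and layout are assumptions,
# the request/limit values mirror the tables above.
spec:
  containers:
    - name: prisma
      resources:
        requests:
          memory: "1.28Gi"   # observed failure at 1278848Ki (~1.22Gi), so the request sits above that
          cpu: "250m"
        limits:
          memory: "2.56Gi"
          cpu: "500m"
    - name: cloudsql
      resources:
        requests:
          memory: "16Mi"     # observed failure at 9364Ki (~9.1Mi)
          cpu: "250m"
        limits:
          memory: "32Mi"
          cpu: "500m"
```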

see also:

@pmespresso pmespresso requested review from Tbaut and fevo1971 April 11, 2020 14:15
@pmespresso pmespresso removed request for fevo1971 and Tbaut April 11, 2020 14:17
@Tbaut (Contributor) left a comment


Thank you so much for digging into that and sharing the resources; I think I understood most of it. One question: where did you get the value at which it failed (to then guess the request), and how did you decide on the limit?

@pmespresso (Contributor, Author) commented Apr 12, 2020

> where did you get the value for the failing (to then guess the request)

Actually, you did :) #271 (comment), i.e. `kubectl describe <svc>`

@pmespresso pmespresso merged commit 9ab8811 into master Apr 12, 2020
@pmespresso pmespresso deleted the yj-memreq branch April 12, 2020 10:17
@Tbaut (Contributor) commented Apr 12, 2020

Hmm, I think I didn't write this clearly. The pod describe was on the last 50k. I don't think the nodewatcher ever got evicted, but I can't remember exactly, tbh.

@pmespresso (Contributor, Author)

Hmm, but the containers that ran out of resources according to that log are in the nodewatcher deployment.
