Fastify under-pressure: how to find out which routes are the cause #1027

Open
meotimdihia opened this issue May 7, 2024 · 12 comments

@meotimdihia commented May 7, 2024

Do we have a way to find out which routes are the problem?

const underPressure = require('@fastify/under-pressure')

fastify.register(underPressure, {
  maxEventLoopDelay: 1000,
  exposeStatusRoute: true,
  pressureHandler: (req, rep, type, value) => {
    if (type === underPressure.TYPE_HEAP_USED_BYTES) {
      fastify.log.warn(`too many heap bytes used: ${value}`)
    } else if (type === underPressure.TYPE_RSS_BYTES) {
      fastify.log.warn(`too many rss bytes used: ${value}`)
    } else if (type === underPressure.TYPE_EVENT_LOOP_DELAY) {
      fastify.log.warn(`reached max event loop delay: ${value}`)
      fastify.Sentry.captureMessage('reached max event loop delay')
    }

    rep.send('out of memory') // if you omit this line, the request will be handled normally
  }
})

[screenshot: under-pressure log output, with the reported value showing as Infinity]

Please give me a tip/guide. I don't know how to do it.

@meotimdihia added the help wanted (Extra attention is needed) label on May 7, 2024
@mcollina (Member) commented May 7, 2024

req should have all the information regarding the route. I'm a bit concerned about that Infinity; can you add a full reproduction?
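
For example, the route can be read off the request object right inside the handler. A minimal sketch, assuming Fastify v4 where the matched route is exposed as request.routeOptions.url (older versions expose request.routerPath instead):

const fastify = require('fastify')({ logger: true })
const underPressure = require('@fastify/under-pressure')

fastify.register(underPressure, {
  maxEventLoopDelay: 1000,
  pressureHandler: (req, rep, type, value) => {
    // Log the route and method of the request that tripped the pressure check.
    const route = req.routeOptions ? req.routeOptions.url : req.url
    fastify.log.warn({ route, method: req.method, type, value }, 'under pressure')
    rep.send('out of memory')
  }
})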

@meotimdihia (Author)

@mcollina thanks. This project's source code is big, so I need to find the cause first. I'll add a demo if I can reproduce it.

@meotimdihia (Author)

@mcollina I still can't fix this problem. My server only uses 33% CPU and 50% memory.

Do we have any way to utilize 100% CPU?

@mcollina (Member)

I'm sorry, but I don't understand the question. The whole point of under-pressure is to prevent the server from overloading, keeping it responsive at all times.

The fact that your server sits at 33% CPU can mean all sorts of things. Usually you are exhausting some other limited resource, such as a connection pool, an OS buffer, or the file descriptor limit.

If you remove under-pressure, can you get it to reach 100%?

@deepaknf

@meotimdihia Did you figure out the exact issue or the external resource that is causing this? I am looking into the same issue and hoping for a hint to reproduce and fix it.

@meotimdihia (Author)

@deepaknf it was as mcollina said: "under-pressure is to prevent the server from overloading, keeping it responsive at all times." If the problem happens, you should optimize your code or upgrade your server.

@simoneb commented Sep 10, 2024

I guess the original question was more about how to identify the routes that are causing issues, right?

@deepaknf

@meotimdihia I was wondering how to find the route; req has all the information about it, as mcollina mentioned. Do you expect anything else from this issue?

@meotimdihia (Author) commented Sep 11, 2024

@meotimdihia I was wondering how to find the route; req has all the information about it, as mcollina mentioned. Do you expect anything else from this issue?

I think fastify-under-pressure doesn't give you enough information to debug this.
You have to log the response time for each route (a minimal sketch follows the list below). From my experience, the options are:

  • Use APM tools like Datadog, New Relic, etc. They can be quite expensive, so I don't personally use them.
  • Build a custom solution yourself. Platformatic supports this: https://platformatic.dev/docs/guides/telemetry (though I don't use Platformatic currently, so it is a future solution for me).
  • The usual culprit can be addressed by optimizing database queries, so log slow queries in your database. I am using PostgreSQL with https://github.com/darold/pgbadger, which works fine for me for the common problems.
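
A custom solution can be as small as a pair of Fastify hooks that time every request and log the slow ones. A minimal sketch; the startTime decoration and the 500 ms threshold are arbitrary choices, and recent Fastify versions also expose the elapsed time on the reply object:

const fastify = require('fastify')({ logger: true })

// Stamp each request with a start time...
fastify.decorateRequest('startTime', 0)
fastify.addHook('onRequest', async (req) => {
  req.startTime = Number(process.hrtime.bigint())
})

// ...and log any route that takes longer than the threshold to respond.
fastify.addHook('onResponse', async (req, reply) => {
  const elapsedMs = (Number(process.hrtime.bigint()) - req.startTime) / 1e6
  if (elapsedMs > 500) {
    req.log.warn({ method: req.method, url: req.url, statusCode: reply.statusCode, elapsedMs }, 'slow route')
  }
})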

@marcoturi

@meotimdihia did you try using the req param to get the info on the current route under pressure, as @mcollina suggested?

@meotimdihia (Author)

@marcoturi I tried it, but it is not helpful: once the server is under pressure, any incoming request triggers the handler, so the logged route is not necessarily the one causing the load.

@JoseArciniegasDev

Use cluster mode on your server so Node can make use of all CPU cores (a sketch follows).
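
As a sketch of that idea, Node's built-in cluster module can fork one Fastify worker per CPU core; the port and route here are placeholders, and a process manager such as PM2 achieves the same effect. Note that clustering adds cores but does not unblock an event loop stalled by a single slow route:

const cluster = require('node:cluster')
const os = require('node:os')

if (cluster.isPrimary) {
  // Fork one worker per core so the machine's full CPU capacity can be used.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork()
  // Replace workers that die so capacity is not silently lost.
  cluster.on('exit', () => cluster.fork())
} else {
  const fastify = require('fastify')({ logger: true })
  fastify.get('/', async () => ({ ok: true }))
  // All workers share port 3000; the cluster module load-balances connections.
  fastify.listen({ port: 3000, host: '0.0.0.0' })
}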
