
[Spike] Performance benchmark #534

Closed
bajtos opened this issue Aug 23, 2017 · 8 comments
bajtos commented Aug 23, 2017

We should run at least a basic performance benchmark to understand how we are doing in comparison to LoopBack 3.x, and also against competitors like Hapi, Express and Fastify.

Depending on the outcome, we may want to plan some time to fix low-hanging fruit.

UPDATE (2018-06-21)

The scope for 4.0 GA:

A high-level benchmark to know where our performance stands. Use our Todo application with in-memory storage and measure throughput for the following requests: get a list of todos (a simple read-only query) and create a new todo without geo location (this exercises validations!).
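As a rough illustration of what such a throughput measurement computes, here is a minimal synchronous harness. This is a sketch only: `handler` stands in for one request through the framework, and the real benchmark used HTTP load-testing tools with concurrent connections, not a sequential loop.

```javascript
// Minimal throughput/latency harness (illustrative; names are not from
// the actual benchmark code).
function measure(handler, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) handler();
  const seconds = Number(process.hrtime.bigint() - start) / 1e9;
  return {
    requestsPerSecond: iterations / seconds,
    latencyMs: (seconds / iterations) * 1000,
  };
}

// "get a list of todos" simulated as JSON serialization of an
// in-memory array (stand-in for the real request handling).
const todos = Array.from({length: 100}, (_, i) => ({id: i, title: `todo ${i}`}));
const result = measure(() => JSON.stringify(todos), 10000);
console.log(result);
```

The two derived numbers mirror the metrics reported later in this thread: requests per second (throughput) and average time per request (latency).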

UPDATE (2018-02-22)
Timeboxed to 2 weeks

Based on discussion with @raymondfeng @kjdelisle :

  • look into metrics, e.g. how the system performs: how a request hops through different components and how much time is spent in each
  • need a spike on how to instrument our codebase to use third-party modules
  • can ask the community for help to build extensions

Acceptance Criteria (no longer relevant)

  • measure performance in LB4. Examples include:
    • database connections (memory connector writing to a file, a SQL database, a NoSQL database such as MongoDB)
    • basic controllers (internal logic)
  • figure out what to measure and how to perform these measurements
    • identify whether additional scenarios need to be created in order to measure a certain part of the framework
    • cover just the golden path

Follow-up stories (spike outcome)

virkt25 commented Aug 28, 2017

I'd also like to see Sails and Strapi included in the benchmark.

@bajtos bajtos added the non-MVP label Nov 2, 2017

bajtos commented Nov 2, 2017

We can leverage https://github.com/bajtos/async-frameworks which I built for my NodeConf.eu talk.

@dhmlau dhmlau added the p1 label Jun 26, 2018
@shimks shimks changed the title Performance benchmark [Spike] Performance benchmark Jul 12, 2018
@bajtos bajtos added this to the August Milestone milestone Jul 31, 2018
@bajtos bajtos self-assigned this Aug 1, 2018

bajtos commented Aug 1, 2018

I added LoopBack 3.x and 4.x to my async-frameworks benchmark, see bajtos/async-frameworks#1.

The results are encouraging. Express can handle ~5.7k requests per second, LoopBack 3.x ~3.0k and LoopBack 4.x ~2.8k requests per second. Speaking about latency (time to handle a request), Express needs 1.2ms, LB3 needs 2.7ms and LB4 needs 3.2ms.

In other words, LoopBack 4.x has about 10% lower throughput and adds about 20% more latency compared to LoopBack 3.x. I find this acceptable for the first GA release.

To put this into perspective: if we were refreshing a screen on every request, we would be limited to 312 fps with LoopBack 4.x. That's way more than most client apps (native or HTML5) can achieve.


bajtos commented Aug 1, 2018

In #1583, I wrote a benchmark comparing the performance of createTodo and findTodos against an in-memory datasource.

The results surprised me: the create operation is ~10x slower than reading. To investigate this further, I am thinking about writing another quick benchmark to measure how fast juggler can find and create records. I hope it will show that juggler is the slow part, not loopback-next.
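An isolation benchmark along these lines could look as follows. Note this is only a sketch: `MemoryStore` is a plain stand-in for juggler's memory connector, not its actual implementation, and the names are illustrative.

```javascript
// Hypothetical micro-benchmark isolating the data layer: time `create`
// vs `find` against a plain in-memory store.
class MemoryStore {
  constructor() {
    this.records = [];
  }
  create(data) {
    // simulate per-write work: clone the payload and assign an id
    const rec = Object.assign({id: this.records.length + 1}, data);
    this.records.push(rec);
    return rec;
  }
  find() {
    // return a shallow copy, as a real connector would not hand out
    // its internal array
    return this.records.slice();
  }
}

function opsPerSecond(fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn(i);
  return iterations / (Number(process.hrtime.bigint() - start) / 1e9);
}

const store = new MemoryStore();
const createOps = opsPerSecond(i => store.create({title: `todo ${i}`}), 10000);
const findOps = opsPerSecond(() => store.find(), 1000);
console.log({createOps: Math.round(createOps), findOps: Math.round(findOps)});
```

Comparing these numbers against the end-to-end HTTP results would show how much of the gap comes from the data layer versus the rest of the request pipeline.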


virkt25 commented Aug 1, 2018

That's interesting ... I'm curious to see the results of the isolation tests for juggler vs. loopback-next. A possible overhead I can think of is the request/response context creation?

While it's acceptable for LB4 to be slower, I think it's important for us and our users to know where the bottlenecks lie, and this may be a factor for anyone looking to move from 3.x to 4.x.

Maybe you can use node clinic to see where the slowdown comes from?


bajtos commented Aug 2, 2018

Actually, it was pretty easy to find the biggest source of the slowdown: our request-body validation implementation does not cache AJV validators. When I modify buildOperationArguments() and comment out the call to validateRequestBody on line 126, I get much better results:

find all todos: { requestsPerSecond: 4564.7, latency: 1.78 }
create a new todo: { requestsPerSecond: 3517, latency: 2.43 }

My conclusion is that for 4.0 GA, we need to improve our request-body validation to cache pre-compiled AJV validators. With that change in place, the performance will become acceptable.
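The caching pattern being proposed could be sketched like this. This is an assumption about the shape of the fix, not the actual LB4 code; a stub compiler stands in for AJV's expensive `compile()` step, and all names are illustrative.

```javascript
// Cache compiled validators per schema object so the expensive compile
// step runs once per schema instead of once per request. A WeakMap lets
// cached validators be garbage-collected together with their schemas.
const validatorCache = new WeakMap();

function getValidator(schema, compileFn) {
  let validate = validatorCache.get(schema);
  if (!validate) {
    validate = compileFn(schema); // expensive, e.g. new Ajv().compile(schema)
    validatorCache.set(schema, validate);
  }
  return validate;
}

// Demo with a stub "compiler" that counts how often it is invoked.
let compiles = 0;
const stubCompile = schema => {
  compiles++;
  return data => typeof data === 'object' && data !== null;
};

const schema = {type: 'object'};
const v1 = getValidator(schema, stubCompile);
const v2 = getValidator(schema, stubCompile);
console.log(compiles, v1 === v2); // compiled once, same validator reused
```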

Thoughts?


bajtos commented Aug 2, 2018

I have created a follow-up issue to optimize AJV validations: #1590


bajtos commented Aug 13, 2018

Closing as done.

@bajtos bajtos closed this as completed Aug 13, 2018