Parse Server much slower than Parse.com with same database #2654
@zingano Thanks for the detailed report. What kind of dyno are you running on? When running parse-server on your local development machine, do you see similar times? I'm asking because I want to rule out deployment-related problems before any further investigation. The number of includes you ask for is probably the source of the issue, as I believe we resolve them sequentially instead of in parallel.
@zingano Do you feel like working on an improvement for that feature, running all handleIncludes in parallel instead of sequentially like we do?
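The difference being discussed can be sketched in plain Node. This is not parse-server's actual code; `fetchInclude` is a stand-in for one database round-trip per include path, and the 20ms latency is invented for illustration:

```javascript
// Sketch: resolving N include paths sequentially vs. in parallel.
// `fetchInclude` simulates one database round-trip per include path.
const fetchInclude = (path) =>
  new Promise((resolve) => setTimeout(() => resolve(`${path}:resolved`), 20));

async function resolveSequentially(paths) {
  const results = [];
  for (const path of paths) {
    results.push(await fetchInclude(path)); // each waits for the previous one
  }
  return results;
}

function resolveInParallel(paths) {
  return Promise.all(paths.map(fetchInclude)); // all round-trips in flight at once
}

async function main() {
  const paths = ['tasks', 'tasks.notes', 'jobOwner', 'contact', 'contact.address'];

  let t = Date.now();
  await resolveSequentially(paths);
  console.log(`sequential: ~${Date.now() - t}ms`); // roughly N * latency

  t = Date.now();
  await resolveInParallel(paths);
  console.log(`parallel:   ~${Date.now() - t}ms`); // roughly 1 * latency
}

main();
```

With many include paths, the sequential version's cost grows linearly with the number of includes, which matches the slowdown reported on include-heavy queries.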
@flovilmart Dynos, it says above :-) - Heroku, US East. 6 of 1x web instances, 1 of 1x worker instance. I have tried other combinations, even up to Performance-M, but the difference is not close to being eradicated. Note the worker dyno is not used in either of the tests.
@zingano sorry, I misread. So I can definitely say that the includes occur sequentially. Do you want to improve that?
However, I disagree with changing this to be only about running the includes in parallel. As I said, my findTasks is also slow (and this is a more complex cloud function with more at play than just include keys - there are multiple finds over various classes). I also have a minimal findJobs, the same as the code above but without any includeKeys or whereKey, this time with a limit of 1000. In this case parse.com takes around 3.5 seconds, and parse server a much more variable 6-8 seconds.
And if I drop the limit to 200, parse.com falls to 1.1s pretty consistently, but parse server is around 3s, varying wildly up to around 9s.
For cloud functions, they take time as they use the HTTP API instead of direct access. This PR will enable direct interaction with the parse-server APIs instead of HTTP requests. You can try that PR by setting the parse-server version to
Thanks, I'll do that. The minimal findJobs mentioned above, with no includeKeys, is OS X client code with no cloud code, like so:
On your dynos, do you run with NODE_ENV=production?
@flovilmart - yes, I do. I'd like to return to one of my main points, which is how do I instrument parse server to discover what is slow, before I jump to fix something that may not be the cause at all? Please can you share how you do it, whether with New Relic or some other way. |
I deployed it to Elastic Beanstalk, and Parse Server is consistently much faster than Parse.com.
@GuichiZhao what database are you using?
I followed this guide http://docs.aws.amazon.com/quickstart/latest/mongodb/step2a.html for the Mongo server. I didn't check which engine it uses, but it's not the RocksDB that Parse recommends; I think it should be MMAPv1, which is the default. I'm using Elastic Beanstalk as the application server. Also, the two servers are in the same subnet, i.e. mongodb://10.0.1.123
My database was set up through Heroku, so much the same setup I think.
Looking at your trace, it originates from lib/response.js, which is the actual expressjs response handler that serializes the JSON response.
@flovilmart I didn't show those particular traces. I'll attach one of them.
MongoTransform seems to be most consistently to blame, but I'm still confused about the difference between the entire time (in the tab) and the execution time (left, top). MongoTransform can be a large percentage of the latter, but not usually of the former. Any AppDynamics people help me out? Or anyone use another trace tool?
For anyone else who's interested in instrumenting Parse Server, you'll be glad to know that New Relic do think that something is amiss, and the support guys have added a request to add support for Parse Server (possibly just on Heroku, depending on whether the issue is seen more widely). I should be updated when that support arrives.
@zingano, your first screenshot, originating from PromiseRouter, is actually the request serialization. In your tests, what's the size of the response you're sending?
@zingano I see in the stack trace that the stringify also comes from the es6 polyfill.
@flovilmart - not sure, but some responses are pretty large. I feel that if the issues we've referenced were all that's causing my problems, someone else would have come across them too, rather than saying Parse Server is much faster. But after I have completed my current work (removing stale data from my database to speed things up that way, as that is a certainty for getting me ready for production), I will look at these defects and also your direct access, which I think may have more impact. Unless it is the latter, I don't think we've found the issue completely yet.
Can you try that branch and see if the response times are better? Or at least make sure that stringify is no longer the cause of problems?
I seem to also be hitting an issue with the JSON parser, at least from what I can tell from the New Relic dashboard. #2711 .. My AppDynamics trial expired so I can't really drill down deeper :(
I have tried enabling the direct access and I see no clear difference in my test times. I'm trying to find out from AppDynamics why there's a massive difference between the call time and the sum of the execution times - https://community.appdynamics.com/t5/AppDynamics-Discussions/Difference-in-times-on-top-tab-entire-execution-time-of-a/m-p/22075/highlight/false#M7725 I'll attach a particularly egregious example - in this call the summed execution time is 671ms but the total execution time (in the tab) is 4,662ms. It seems to me that it's not worth chasing small improvements in the execution time until I find what is responsible for this 4 second "wait time". My AppDynamics trial runs out tomorrow, I think.
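One thing that can make wall-clock "wait time" dwarf summed execution time in a Node process is event-loop blockage: synchronous work done for other requests (a large stringify, a CPU-bound transform) delays this request without appearing in its own trace segments. A minimal probe, a sketch using only timer drift:

```javascript
// Event-loop lag probe: schedule a timer every 100ms and measure how late
// it fires. Sustained lag means something synchronous is hogging the loop,
// which inflates every request's wall-clock time.
const INTERVAL = 100;
let last = Date.now();
let maxLag = 0;

const probe = setInterval(() => {
  const now = Date.now();
  const lag = now - last - INTERVAL; // ~0ms when the loop is idle
  if (lag > maxLag) maxLag = lag;
  last = now;
}, INTERVAL);

// Simulate a blocking operation, e.g. stringifying a huge response:
setTimeout(() => {
  const t = Date.now();
  while (Date.now() - t < 300) {} // busy-wait blocks the event loop for 300ms
}, 250);

setTimeout(() => {
  clearInterval(probe);
  console.log(`worst event-loop lag observed: ${maxLag}ms`);
}, 1000);
```

Dropping a probe like this into the running server (without the simulated block, of course) would show whether requests are queueing behind synchronous work rather than being slow themselves.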
@zingano the direct access only affects cloud code, by removing all HTTP calls and replacing them with direct JS calls. What are you running? Still the same function?
My findTasks test calls cloud code. It's coming in pretty much the same with and without direct access.
@flovilmart - do you have any ideas about what could be causing the massive "wait time" above?
Could the total time include waiting for a response from the database, etc.?
I haven't seen any slow responses from the database - at least, only one, not involved in that test, and all under 150ms.
Not really. Can you time it manually, running the function itself with pure Node.js?
DIRECT ACCESS
OK, I have a few stats back from running the direct access and it does seem faster. I had to run it on real production so there was enough data to play with, and didn't see any glitches in the matrix. However, I don't want to leave it on there until I'm confident (i.e. it's no longer experimental in a release), so those few tests are all I'm going to do.
Heroku - 3 2x dynos @1024mb, 8GB variable-RAM shared-tier database, session affinity, NO experimental direct access
Heroku - 3 2x dynos @1024mb, 8GB variable-RAM shared-tier database, session affinity, experimental direct access
For comparison, here's Parse.com - 9.8, 6.9, 7.7
So, without more major testing, I'd say that's now getting pretty similar results to Parse.com, which pleases me, as the main bottlenecks for my mobile clients will be the three cloud code calls they make, which I can also work on optimising more.
OTHER TIMINGS
Here for comparison are a few tests across some of the configurations I've tried:
Parse.com
Heroku - 3 2x dynos @1024mb, 8GB variable-RAM shared-tier database, session affinity
NodeChef - 3 servers @1024mb, 10GB 1GB RAM database
Conclusions
NodeChef is better than Parse.com at queries with many includes (which makes me suspect they did some optimisation in this area, as Parse Server itself is slower).
NodeChef is slower than Parse.com on simple queries with no includes. (Parse Server on node.js is slower than Parse.com's Go code.)
NodeChef is slower than Parse.com on massive database counts (Dashboard stats).
My Heroku setup is in every way worse than NodeChef, although not by an order of magnitude and therefore probably manageable if I try to optimise my own code. That makes me suspect either:
a - NodeChef don't contribute optimisations back to Parse Server, or
b - it could be my database, as I was running on a shared cluster with variable RAM. I am currently stepping mine up to the M1 dedicated plan with 1.7GB RAM for comparison.
Finally, something else that may be true (especially over other tests I've not shown here) is that my Heroku setup is more variable in timings than either Parse.com or NodeChef.
Just a thought, but Heroku 2x dynos are also shared, so it may be worth trying with Performance-M to see if you get a more consistent test time with dedicated dynos. I believe those have 2 CPUs so can safely use a node cluster option of 2. NodeChef uses the RocksDB storage engine, which Parse.com also uses, I think. RocksDB or WiredTiger should give significant performance gains over MMAPv1. mLab offers WiredTiger on some of their dedicated cluster plans, so you could try that for a test. These are all hosting details though. Profiling is definitely useful if you can narrow the problem down to a specific part of the code. Database profiling is a great help too as, for example, we have previously seen parse-server make extraneous queries before caching was introduced.
One other quick question - I'm seeing three authentications a second on my database. Is that to be expected?
Thanks Steven - I thought that myself and tested Performance-M briefly and didn't see what I hoped for. However, that was a while ago and things have changed since. I'm not set up to scale within a dyno, but I could at least test with one parse-server per dyno. I think as my mLab comes via Heroku, I can't go to WiredTiger. However, that might be different once I'm on M1 dedicated; I'll see.
And while this thread is active, what do people think the following do for performance?
You should hopefully see a performance increase with 2.2.24 as the number of _SCHEMA queries should be reduced per request.
Worth updating as we revamped the inclusion strategies in 2.2.23.
Why not use MongoDB Bulk Operations? var bulk = db.users.initializeUnorderedBulkOp(); https://docs.mongodb.com/manual/reference/method/Bulk/
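For reference, the shell-level `initializeUnorderedBulkOp` above corresponds to the Node driver's `collection.bulkWrite`, which takes an array of operation documents. A sketch of building such an array; the collection and field names are invented, and actually executing it needs a live MongoDB connection (not shown):

```javascript
// Build a bulkWrite payload: one operation document per change.
// Passing { ordered: false } lets the server apply them without
// stopping at the first error, and often faster.
function buildTaskUpdates(tasks) {
  return tasks.map((task) => ({
    updateOne: {
      filter: { _id: task.id },
      update: { $set: { status: task.status } },
      upsert: false,
    },
  }));
}

const ops = buildTaskUpdates([
  { id: 'task1', status: 'done' },
  { id: 'task2', status: 'pending' },
]);

console.log(ops.length, 'operations prepared');

// With a connected driver client, this would run as:
// await db.collection('tasks').bulkWrite(ops, { ordered: false });
```

One round-trip carrying many operations replaces many single-document writes, which is the gain the comment is pointing at.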
Moving conversation to #6193
Do you still recall which file you added the code below to, to get the AppDynamics metrics working?
I tried in
I have migrated (and finalised) my database from Parse.com to mLab. My (production) client applications are still using the parse.com API, but in test I am using both that and my own Parse Server running on Heroku for comparison.
Steps to reproduce
Please include a detailed list of steps that reproduce the issue. Include curl commands when applicable.
Expected Results
The times for the tests should be roughly the same.
Actual Outcome
Parse Server tests are 2 - 3x the time of the Parse.com tests.
Find jobs is around 4.8 seconds on parse.com, 14 seconds on parse server.
Find tasks is around 7.7 seconds on parse.com, 16 seconds on parse server.
This does not happen just in these tests, but permeates the app, with each "page" being 2-3x as slow to load on parse server.
Environment Setup
Heroku, US East. 6 of 1x web instances, 1 of 1x worker instance. I have tried other combinations, even up to Performance-M, but the difference is not close to being eradicated. Note the worker dyno is not used in either of the tests.
There are around 250K Task objects in the database, and around 900 job objects.
I have added indices for any queries that mLab thinks are slow, except one that cannot be automatically indexed, although none of them have been anywhere near as slow as the actual transaction time - which is often in seconds. The slowest have been on the order of 100 milliseconds.
I have added individual column indices, as that helped so much here, but it made no difference for me.
#2537
Logs/Trace
I am using New Relic to trace/analyse, but I am getting nowhere, as all New Relic shows is that a high percentage (almost 100% in my most recent test) is taken up by Middleware: anonymous, which I know from similar defects on here is Parse Server.
Since the database is the same in both cases, I assume the blame must be within Parse Server, but would like to find out more. Could it be that this would help me? #2316
Or am I seeing something like this with authentication? #2556
Basically without more instrumentation I won't find out.
The New Relic people are saying that perhaps I'm not getting more out of this analysis because Parse Server is passing through async boundaries in which the transaction is lost. They say that using custom instrumentation could help - trouble is I would have no idea where to put it. If anyone could help with that - or add commented out lines to Parse server to help all of us who use New Relic to instrument, that would be great.
https://docs.newrelic.com/docs/agents/nodejs-agent/supported-features/nodejs-custom-instrumentation#expanding-instrumentation
The code for my tests, run from an OS X app, is below.