Experiencing High Latency on AWS Elastic Beanstalk When Many Reads/Writes Are Done Through the Parse REST API #2061
Comments
@drew-gross @hramos This feels like a very fundamental issue. Is there something we are missing in the integration between the REST API and parse-server?
Can you update the issue with more information about these external processes?
@hramos Thank you for following up, Hector. Here are some characteristics of the external processes writing to our mLab database through the Parse REST API (the same processes run against both parse.com and our AWS Elastic Beanstalk hosted parse-server):
Please let me know if you need any additional information. Thank you so much for your help! We have been struggling with this for weeks and could really use help understanding the differences between how parse-server and parse.com handle our requests through the REST API, to see what we may be missing here.
@hramos I have migrated to the open-source Parse Server running on AWS Elastic Beanstalk and am consistently experiencing high CPU utilization (100%) and high latency (7+ seconds).
I modified the above script to use the /classes endpoint with no difference; I still see 100% CPU and high latency. When running my curl script against api.parse.com with exactly the same scenario as above, I have no such issues: the RPS goes up to around 30, but I see no serious degradation in latency. Can you point me to some documents that can help resolve this issue? It is quite critical for my company to migrate well before the deadline, and this is a showstopper. Please let me know if I need to provide any more information on this. I'm happy to share the curl scripts if it helps.
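In the meantime, here is a minimal sketch of the kind of load I am generating. This is not my exact script; the server URL, app ID, REST key, and class name are placeholders:

```bash
#!/bin/bash
# Minimal sketch of the load test (placeholders throughout, not the exact
# script): fire concurrent writes at the REST API /classes endpoint.
SERVER_URL="http://my-env.elasticbeanstalk.com/1"   # placeholder host + mount path
APP_ID="myAppId"                                    # placeholder
REST_KEY="myRestKey"                                # placeholder

for i in $(seq 1 30); do
  curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" -X POST \
    -H "X-Parse-Application-Id: ${APP_ID}" \
    -H "X-Parse-REST-API-Key: ${REST_KEY}" \
    -H "Content-Type: application/json" \
    -d '{"playerName":"loadtest","score":1337}' \
    "${SERVER_URL}/classes/GameScore" &
done
wait   # watch instance CPU and the reported time_total while this runs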
I have a similar issue, but with smaller instances. About 10-20% of health checks fail even after only a couple of requests, and that is not supposed to happen. If someone finds a possible hint on how to enhance the efficiency of parse-server, I will be very grateful.
@BrainDDump Out of curiosity, did you figure out a cause or workaround?
I am getting the same error when I try to do a POST from the app after a series of GETs; the error surfaces in the client only as [object Object], and the next POST fails as well. I changed the AWS setup to m3.medium as suggested in parse-community/parse-server-example#177, but there was no change.
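For reference, the instance-type change itself is just an EB option setting; I did it along these lines (the environment name here is a placeholder):

```bash
# Sketch of the instance-type change via the AWS CLI
# (environment name is a placeholder for your EB environment):
aws elasticbeanstalk update-environment \
  --environment-name my-parse-env \
  --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType,Value=m3.medium
```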
@sprabs Have you figured out a solution/workaround for this? We have been facing the same issue with our setup. We have a few processes (4-5) hitting the parse-server REST API, and the CPU shoots up. This has become a blocker for us to move our jobs to our parse-server.
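One workaround we are evaluating (unverified): a single parse-server instance is one Node process and can only saturate one core, so running it under a process manager in cluster mode may spread the load. A sketch with PM2, assuming index.js is your parse-server entry point:

```bash
# Unverified workaround sketch: run one parse-server worker per core with
# PM2's cluster mode (assumes index.js is your parse-server entry point).
npm install -g pm2
pm2 start index.js --name parse-server -i max   # -i max = one worker per CPU core
pm2 monit                                       # watch per-worker CPU usage
```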
I switched to Heroku and deleted my AWS account.
This is not an issue with parse-server itself but with your Elastic Beanstalk deployment; you may find a better community on Server Fault or Stack Overflow for this kind of issue.
We are using an AWS Elastic Beanstalk environment with the latest parse-server code on our development tier, and we have pointed our iOS and Android clients to this environment. When there are no external processes running (writing to our mLab database through the REST API in high volume), the clients respond just fine and we have no issues. All our Cloud Code methods work well, too. However, as soon as we enable these external processes that write to our mLab database through the Parse REST API, latency goes up and the iOS and Android clients become unusable (calls either don't complete or take over a minute, which is obviously nowhere near production quality).
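For concreteness, each write from those processes is an ordinary REST call of roughly this shape (the class name, fields, and credentials below are placeholders, not our real schema):

```bash
# Shape of a single write from the external processes (placeholder class,
# fields, and credentials; the real processes issue these in high volume):
curl -X POST \
  -H "X-Parse-Application-Id: ${APP_ID}" \
  -H "X-Parse-REST-API-Key: ${REST_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"deviceId":"abc123","reading":42}' \
  "https://our-env.elasticbeanstalk.com/1/classes/Measurement"
```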
On parse.com, we have an identical setup (except the obvious... the API endpoint is parse.com, not our EB environment). All else is equal (i.e., the Cloud Code is parse-server compatible, and the iOS and Android clients use the latest SDKs). For context: looking at the Performance tab on parse.com, the equivalent external processes that seem to be causing the high latency on parse-server never exceed about 4 RPS there, and we have no issues. The expected result is no difference in behavior between our hosted Parse Server and parse.com, since the external processes piping requests through the REST API are IDENTICAL.
Environment Setup
Logs/Trace
We consistently see errors like this:
```
[error] 3307#0: *312 upstream prematurely closed connection while reading response header from upstream, client: , server: , request: "GET /1/classes/? HTTP/1.1", upstream: , host:
```
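From what we can tell, that nginx error means the upstream (the Node process running parse-server) closed the connection before returning a response, typically because it crashed, restarted, or hung. Here is a sketch of the checks we run on the instance when it happens (log paths assume the default EB Node.js platform; adjust for your setup):

```bash
# Checks on the EB instance when the upstream error appears (log paths
# assume the default EB Node.js platform; adjust for your setup):
eb ssh                                    # from the workstation; then, on the instance:
tail -n 200 /var/log/nodejs/nodejs.log    # Node stdout/stderr: crashes, stack traces
dmesg | grep -i 'killed process'          # evidence of the kernel OOM killer
tail -n 200 /var/log/nginx/error.log      # the upstream errors themselves
top                                       # press P to sort by CPU; is node pinned at 100%?
```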
Is anyone else having similar issues with AWS Elastic Beanstalk? This seems like a fundamental issue we are hitting with the REST API, and it is preventing us from achieving an otherwise stable environment and transitioning our production environment away from parse.com.