
Experiencing High Latency on AWS Elastic Beanstalk When Many Reads/Writes Done Through Parse REST API #2061

Closed
sprabs opened this issue Jun 14, 2016 · 10 comments

Comments

@sprabs

sprabs commented Jun 14, 2016

We are using an AWS Elastic Beanstalk environment with the latest parse-server code on our development tier and have pointed our iOS and Android clients to this environment. When no external processes are running (i.e., nothing is writing to our mLab database through the REST API in high volume), the clients respond just fine and we have no issues. All our Cloud Code methods work well, too. However, as soon as we enable these external processes that write to our mLab database through the Parse REST API, latency goes up and the iOS and Android clients become unusable (calls either don't complete or take over a minute, which is obviously nowhere near production quality).

On parse.com, we have an identical setup (except for the obvious: the API endpoint is parse.com, not our EB environment). All else is equal (i.e., the Cloud Code is parse-server compatible and the iOS and Android clients use the latest SDKs). For context: on the Performance tab on parse.com, the equivalent external processes (the ones that seem to be causing the high latency on parse-server) never exceed 4 RPS (requests per second). We have no issues there. The expected result is no difference in behavior between our hosted parse-server and parse.com, since the external processes piping requests through the REST API are IDENTICAL.

Environment Setup

  • Server
    • parse-server version: 2.2.11
    • Operating System: 64bit Amazon Linux 2016.03 v2.1.1 running Node.js (v4.4.3)
    • Hardware: m4.large single instance hosted in Virginia with nginx proxy server
    • Localhost or remote server? AWS Elastic Beanstalk
    • MongoDB driver (v2.1.18)
  • Database
    • mLab dedicated cluster (v3.0.10)

Logs/Trace

We consistently see errors like this:

[error] 3307#0: *312 upstream prematurely closed connection while reading response header from upstream, client: , server: , request: "GET /1/classes/? HTTP/1.1", upstream: , host:

Is anyone else seeing similar issues with AWS Elastic Beanstalk? This seems like a fundamental issue with the REST API that is keeping us from an otherwise stable environment and from transitioning our production environment away from parse.com.

@sprabs
Author

sprabs commented Jun 14, 2016

@drew-gross @hramos This feels like a very fundamental issue. Is there something we are missing in the REST API / parse-server integration?

@hramos
Contributor

hramos commented Jun 16, 2016

Can you update the issue with more information about these external processes?

@sprabs
Author

sprabs commented Jun 16, 2016

@hramos Thank you for following up, Hector.

Here are some characteristics of the external processes writing to our mLab database through the Parse REST API (the same processes run against both parse.com and our AWS Elastic Beanstalk hosted parse-server); a sketch of the write pattern follows the list:

  • written in Python, using the REST API "/batch" endpoint
  • there are up to 8 processes
  • processes communicate with the Parse server continuously (long-running)
  • processes run on an EC2 c4.xlarge instance
  • each process does parallel writes to the same class at random times
  • writes are batched; batch size varies from 1-50 rows
  • the class being written to has ~100K rows and ~35 columns
  • there is a 2 minute sleep between writes
  • on startup, the processes read several classes, including medium-sized ones (~6500 rows)
  • every 10 minutes, there's a read of a small class (~30 rows)
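
A minimal sketch of this write pattern, assuming the standard Parse REST API "/batch" endpoint; the server URL, keys, class name, and row contents below are placeholders, not our actual values:

import time

import requests  # third-party HTTP client (pip install requests)

SERVER_URL = "https://example.elasticbeanstalk.com/1"  # placeholder endpoint
HEADERS = {
    "X-Parse-Application-Id": "APP_ID",    # placeholder
    "X-Parse-REST-API-Key": "REST_KEY",    # placeholder
}

def batch_write(rows):
    # Each /batch request carries 1-50 creates against the same class.
    payload = {
        "requests": [
            {"method": "POST", "path": "/1/classes/Reading", "body": row}
            for row in rows
        ]
    }
    resp = requests.post(SERVER_URL + "/batch", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

while True:  # long-running process
    batch_write([{"value": i} for i in range(50)])  # up to 50 rows per batch
    time.sleep(120)  # 2 minute sleep between writes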

Please let me know if you need any additional information. Thank you so much for your help! We have been struggling with this for weeks and could really use help understanding the differences between how parse-server and parse.com handle our REST API requests, to see what we may be missing here.

@sohagfan

sohagfan commented Jun 21, 2016

@hramos I have migrated to the open-source parse-server running on AWS Elastic Beanstalk and am consistently seeing high CPU utilization (100%) and high latency (7+ seconds).
I created a test curl script to trigger the problem. Here are the salient aspects (a reconstruction in Python follows the list):

  • Runs on a single m3.medium instance (disabled autoscaling to do this test)
  • Uses parse-server version: 2.2.11
  • Uses mLab dedicated cluster (v3.0.10)
  • Uses REST API /batch endpoint
  • Ran 6 parallel instances of the script writing to the same Parse class
  • Writes about 2 KB of data per request
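
A hypothetical Python reconstruction of this load pattern (not the actual curl script; the server URL, keys, and class name are placeholders):

from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client (pip install requests)

SERVER_URL = "https://example.elasticbeanstalk.com/1"  # placeholder endpoint
HEADERS = {
    "X-Parse-Application-Id": "APP_ID",    # placeholder
    "X-Parse-REST-API-Key": "REST_KEY",    # placeholder
}

def worker(n_requests):
    # One simulated instance of the curl script: repeated ~2 KB writes
    # to the same class through the /batch endpoint.
    payload = {
        "requests": [{
            "method": "POST",
            "path": "/1/classes/LoadTest",
            "body": {"data": "x" * 2048},  # ~2 KB per write
        }]
    }
    for _ in range(n_requests):
        requests.post(SERVER_URL + "/batch", json=payload, headers=HEADERS)

# 6 parallel instances of the script writing to the same Parse class
with ThreadPoolExecutor(max_workers=6) as pool:
    for _ in range(6):
        pool.submit(worker, 100)

Swapping the /batch POST for a direct POST to /1/classes/LoadTest reproduces the /classes variant mentioned below.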

I modified the above script to use the /classes endpoint, with no difference: I still see 100% CPU and high latency.

When running my curl script on api.parse.com with exactly the same scenario as above, I have no such issues - the RPS goes up to around 30, but I see no serious degradation in latency.

Can you point me to some documentation that might help resolve this issue? It is critical for my company to migrate well before the deadline, and this is a showstopper.

Please let me know if I need to provide any more information on this. I'm happy to share the curl scripts if it helps.

@bra1nDump

I have a similar issue, but with smaller instances. About 10-20% of health checks fail even after just a couple of requests, which is not supposed to happen. If someone finds a possible hint on how to improve the efficiency of parse-server, I will be very grateful.

@sprabs
Author

sprabs commented Jul 25, 2016

@bra1nDump Out of curiosity, did you figure out a cause or workaround?

@vladicabg

I am getting the same error when I try to do a POST from the app after a series of GETs.
My app runs some verification on startup, which generates a series of GETs, each of which logs only:

[object Object]
[object Object]
[object Object]
[object Object]
[object Object]
[object Object]
[object Object]
[object Object]
[object Object]
[object Object]

The next POST gives:
*84 upstream prematurely closed connection while reading response header from upstream

I changed the AWS setup to m3.medium, as suggested in parse-community/parse-server-example#177, but there was no change.

@shivangagarwal

@sprabs have you figured out a solution or workaround for this? We have been facing the same issue with our setup: a few processes (4-5) hitting the parse-server REST API cause the CPU to shoot up. This has become a blocker for moving our jobs to our parse-server.

@vladicabg

I switched to Heroku and deleted my AWS account.

@flovilmart
Contributor

This is not an issue with parse-server itself but with your Elastic Beanstalk deployment; you may find a better community for this kind of issue on Server Fault or Stack Overflow.
