[Dashboard] Performance Review #14615
Comments
Randomly came across this which should help: #14621
Some more info that might be relevant: #13953
Another place where
@jesusbotella sent me this issue about the (poor) performance of the datasets view:
@ethervoid do you have any insights regarding this so far? Thanks!
@alonsogarciapablo I'm reading all the issues, understanding the measures taken and so on. Yesterday I finished the Likes cleanup, so I'm on it :)
Perfect! Thanks for the update! 💪
Update: I've been uploading and trying to reproduce the problem in my local environment, but it's not the same and I don't have the latency problems we see in production. Why?
Then I tested it in the staging cloud with similar results; it could be due to the same factors as the local environment. The next step is to provision a new RUI, redirect @andy-esch's user to that RUI and start profiling, for example with this request:
Thanks for the update @ethervoid! Keep 🕵️ and you will find something! 💪
Update: I've been doing benchmarking and here are my insights so far. TL;DR: it looks like either the connection to the database or the size of the database is hurting performance. We have to take into account that the dbd-team pg data disk is 2.7 TB. I've added benchmarking to every visualization object to check where the time goes:
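Roughly, the instrumentation looks like this (a minimal sketch with placeholder method names, not the actual presenter code):

```ruby
require 'benchmark'

# Hypothetical instrumentation (method names are placeholders, not real CARTO
# code): time each chunk of work that goes into presenting a visualization and
# log the breakdown.
def present_with_timings(visualization)
  timings = {}
  result  = {}

  timings[:permission] = Benchmark.realtime { result[:permission] = visualization.permission }
  timings[:likes]      = Benchmark.realtime { result[:likes]      = visualization.likes }
  timings[:table]      = Benchmark.realtime { result[:table]      = visualization.table }

  Rails.logger.info("[dashboard] visualization timings: #{timings.inspect}")
  result
end
```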
As you can see, a huge amount, 3.5s, goes to the permission part. Why? Because permissions is the first place where the user presenter is loaded, without cache, aaaaaand there we have it. So it looks like we still have the same problem. Measuring the time spent for @andy-esch's user to retrieve the db size, you can see the amount of time spent here (2.96s):
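The measurement was along these lines (a sketch; `db_size_in_bytes` is an assumed name for whatever method actually computes the database size):

```ruby
require 'benchmark'

# Assumed names: a Sequel-backed ::User model and a db_size_in_bytes helper
# that computes the size of the user's database.
user = ::User.where(username: 'a-heavy-user').first
seconds = Benchmark.realtime { user.db_size_in_bytes }
puts "db size lookup took #{seconds.round(2)}s"  # ~2.96s against the 2.7 TB disk
```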
Interestingly, @javitonino was taking a look too and he noticed that, from the console, the connection to the db takes ~3s:
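Something like this reproduces it from the console (a sketch):

```ruby
require 'benchmark'

# Drop any existing connection so the next checkout has to open a fresh one,
# then time how long establishing it takes.
ActiveRecord::Base.connection_pool.disconnect!
puts Benchmark.realtime { ActiveRecord::Base.connection }
# ~3s against the team database, milliseconds against a small one
```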
Also, @javitonino did another test. Tomorrow I'll keep investigating :)
Update: Well, we finally have a winner here, and a patch for it. ActiveRecord in Rails 4.0 loads all the types from the database when a connection is made, and in our team database this takes ~2.5 seconds because it loads 145k types. I did a test using two different user models, one from Sequel and the other from AR, to do the same operation:
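The shape of the comparison was roughly this (a sketch; I'm assuming the usual split of a Sequel-backed ::User model and an ActiveRecord-backed Carto::User model, and the timed lookup is just a placeholder for the real operation):

```ruby
require 'benchmark'

user_id = '...' # a team user id

Benchmark.bm(8) do |bm|
  bm.report('sequel') { ::User.where(id: user_id).first }       # Sequel-backed model
  bm.report('ar')     { Carto::User.where(id: user_id).first }  # ActiveRecord-backed model
end
# The AR report pays the full pg_type load the first time a connection is opened.
```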
You can see the problematic method is this one:
As you can see, the AR model takes 2.6 seconds to do the same operation as Sequel. Why? Basically, when AR opens a connection to a database it fetches all the types from pg_type. We can divide the problem into data gathering and processing: executing the query and retrieving all the data takes 1.3 seconds.
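You can reproduce the data-gathering half against the team database with something like this (a sketch; the exact column list AR selects varies between Rails versions):

```ruby
require 'benchmark'

conn = ActiveRecord::Base.connection
puts conn.select_value('SELECT count(*) FROM pg_type')  # ~145k rows on the team db

seconds = Benchmark.realtime do
  conn.select_all('SELECT oid, typname, typelem, typdelim, typinput FROM pg_type')
end
puts "pg_type fetch: #{seconds.round(2)}s"  # ~1.3s just to run the query and pull the rows
```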
Processing the data takes another 0.5s.
We can improve the data processing a bit, but that doesn't deal with the real problem: the huge number of types. Why do we have that many types? Taking a look at the table, we can see that most of the types are table recordsets and arrays of those recordsets. So we have decided to filter those out of this query; if our app ever needs one of them there is no problem, because AR will ask for it with a query. But... are you saying that every time AR asks for a type that we don't have in our cache it's going to run a query? Yes, but those types are not a common thing to ask for. Basically our app deals with the metadata database, which has only a few custom types (207), so we're probably not going to notice that overhead. It's not a perfect solution, but it's going to save us about ~2 seconds on operations against huge user databases.
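The patch boils down to narrowing the initial pg_type query. A rough sketch of the idea (the column list and the adapter hook it plugs into depend on the Rails version, so don't read this as the exact deployed code):

```ruby
# Keep base types, enums, ranges, domains and standalone composites, but skip
# the recordset type PostgreSQL creates for every table, and arrays of those.
# AR can still fetch any skipped type later with a one-off query.
FILTERED_PG_TYPE_QUERY = <<-SQL
  SELECT t.oid, t.typname, t.typelem, t.typdelim, t.typinput
  FROM pg_type t
  LEFT JOIN pg_class c   ON c.oid   = t.typrelid
  LEFT JOIN pg_type el   ON el.oid  = t.typelem
  LEFT JOIN pg_class elc ON elc.oid = el.typrelid
  WHERE (t.typrelid = 0 OR c.relkind = 'c')                      -- not a table row type
    AND (t.typelem = 0 OR el.typrelid = 0 OR elc.relkind = 'c')  -- not an array of one
SQL
```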
Applying this patch, we're now doing it in 0.6s.
I've run some tests, and when a new type is needed it's requested without problems.
Update: We've re-routed team users to our test platform in production and here are the results using the patch, comparing the dashboard without the patch activated and with the patch activated. As you can see, we have improved the time for the dashboard:
In summary, we have gone from ~26s to load the dashboard to ~12.8s. I'll leave the patch active, with the team users routed through it, in order to test for undesired side effects. If we don't have any issues in a couple of days (2-3), I'll put the patch into production. Hope this improvement is good enough :)
Now I'm going to start checking the
Great news @ethervoid! It's a great improvement!
@alonsogarciapablo asked about the performance with the new dashboard. I've changed @andy-esch's user to the new dashboard and here you can see the results, without the new patch and with the new patch: more or less the same pattern, from ~12s to ~6s.
The ordering issue is going to continue here
* Patch for load times with pg_type loading in AR - See #14615 - See rails/rails#19578
Deployed this part. I'll continue here
Great @ethervoid 👏👏👏 |
We have to make sure the New Dashboard performs 👌 in accounts with loads of maps and datasets. @andy-esch recently explained how the performance of the current Dashboard is pretty bad in his account (~1000 datasets).
The goal of this issue is to assess the performance of the New Dashboard in accounts like @andy-esch's and do a little investigation into what's causing the UI to be slow, so that we can decide how to fix some or all of the causes in the near future.
A more performant Dashboard will add a lot of value for heavy CARTO users 💪
cc: @javitonino @ethervoid