Importing planet.pbf: Postgres optimisations and specs #38
In my experience, with a pretty beefy AWS compute-optimized instance (I don't recall how big, we should document this) and a high-IOPS drive (the process is disk-write intensive), we were able to get the data from the planet.pbf file into PostGIS, with the generalized tables and the indexes, in about 12-14 hours. It does depend on your config file, but that's using the one we have documented in this repo.
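For anyone landing here, a minimal sketch of what that import invocation looks like with imposm3 (the file names are placeholders; the connection string and table mapping come from the config file documented in this repo):

```sh
# Sketch only — config.json and the PBF filename are placeholders.
# -read parses the PBF into imposm's cache, -write loads it into PostGIS,
# -optimize clusters/indexes the tables, and -deployproduction swaps the
# import schema into production.
imposm import \
  -config config.json \
  -read planet-latest.osm.pbf \
  -write \
  -optimize \
  -deployproduction
```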
Thank you @ARolek for this info.
Hey @ARolek, is there any Redis support for better caching of tiles? Or any workaround to implement one?
@peldhose nice! Thanks for the performance numbers. I'm going to add those to the README. tegola does not currently have Redis support. For caching, tegola currently supports file and S3 backends.
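For reference, enabling tegola's file-backed cache is a small config change. A minimal sketch, assuming the config file is `config.toml` and using a placeholder cache directory:

```sh
# Sketch only — the config path and cache directory are placeholders.
# Appends a file-backed cache section (TOML) to a tegola config.
cat >> config.toml <<'EOF'

[cache]
type = "file"                   # "s3" is the other supported backend
basepath = "/tmp/tegola-cache"
EOF
```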
WOW... Thanks bro, awesome.
@peldhose thanks for the positive feedback! We have plans to make tegola even faster. v0.6.0 is close to release, which also comes with several rendering improvements. You can watch the tegola repo as new versions are released. We're going to need some help testing the pre-release if you're interested. ;-) Thanks for chiming in with your import results.
Hi @ARolek, as part of my PR to enhance the docs (#60) I wanted to expand on the "How long does it take to import" section. I found this issue and noticed your comment above about using a high-IOPS drive.
What do you consider to be high IOPS on AWS? Can you give a figure? If you can remember, I'll update the PR.
@adamakhtar I don't recall how high the IOPS were, I just remember provisioning them. Ideally, you would have a high-IOPS volume on your database as well.
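To make "provisioned IOPS" concrete, here's a hedged sketch with the AWS CLI. All of the values and IDs below are placeholders, not recommendations for specific numbers:

```sh
# Hypothetical values throughout — size, IOPS, AZ, and IDs are placeholders.
# Create a provisioned-IOPS (io2) EBS volume, then attach it to the import host.
aws ec2 create-volume \
  --volume-type io2 \
  --size 1000 \
  --iops 10000 \
  --availability-zone us-east-1a

aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf
```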
I just tried to do an import with the suggestions made in the above comments and it went really slowly. I'm troubleshooting now and will share what I've found so far in case anybody else is looking for ideal specs. Going by the memory guidance in Imposm3's README, it seems that if you choose a server with too little RAM relative to your PBF's file size, you are going to be bottlenecked by IO. A full planet PBF is now around 53 GB, about 60-70% bigger than it was at the time the above comments were made, so 100 GB of RAM now seems to be the recommendation. I'll try again with more memory in a few days.
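One knob that may help when RAM is short, sketched below with placeholder paths: imposm keeps its node cache on disk, and its location is configurable, so pointing `-cachedir` at fast local storage (e.g. instance-attached NVMe rather than a network volume) can ease IO pressure during the read phase:

```sh
# Sketch only — paths and file names are placeholders.
# Put imposm's on-disk node cache on fast local storage for the read phase.
imposm import \
  -config config.json \
  -cachedir /mnt/nvme/imposm_cache \
  -read planet-latest.osm.pbf
```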
@adamakhtar how slow was the planet import for you? |
@ARolek 12 hours later I was only 2% into the read phase. I assumed at that rate I was looking at at least 48 hours to complete, so I aborted. You can see my full server spec, htop, and imposm output here: https://gis.stackexchange.com/questions/427821/steps-to-troubleshoot-slow-imposm-performance Unfortunately I didn't consider IO to be the bottleneck at the time, so I never checked it, but I'm assuming it's the problem. I'll try again closer to the weekend, but this time go for a 16 CPU, 124 GB server.
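For future readers hitting the same thing, a quick way to confirm (or rule out) an IO bottleneck while the import runs, using standard Linux tools in a second terminal:

```sh
# Run alongside imposm to see whether the import is IO-bound:
iostat -x 5   # high %util and await on the data volume suggest disk-bound
vmstat 5      # high 'wa' (iowait) with mostly idle CPU means waiting on IO
free -h       # little available memory means the page cache can't hold the node cache
```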
Wow, that's insanely slow. Is this on the M1 Mac? I wonder if Rosetta is being used. In my experience the x86 virtualization on the M1 is very slow.
@ARolek no, this was on an EC2 Intel Xeon instance (c6i.4xlarge) with 16 vCPUs, 32 GB of memory, and 1000 GB of SSD storage. I can only assume the 32 GB of RAM was not enough for the 53 GB planet data size, and so IO became the bottleneck. I'll try again in a couple of days and will let you know how the rerun goes.
@adamakhtar ok wow, that seems really odd. I will give this a run soon too. What version of imposm are you using? |
@ARolek I'm using version 0.11.1. |
Hi,
can somebody explain the performance of tegola-osm when importing planet.pbf into a Postgres database?
How much time might it take to push the planet data into Postgres?