Too large memory usage in last version #103
@asmyasnikov thanks for reporting this. I haven't seen that to be the case in my own usage so far. Can you help me understand some of the factors involved here? Are you building your own Docker container for mbtileserver? If so, which version of Go? Does it run out of memory immediately while searching for tiles, or does the server run for a while before running out of memory? How many tiles are in internal vs external folders?
I build mbtileserver with go1.14.
The architecture with this bug is amd64 (qemu x86_64).
No. The familiar message "Use Ctrl-C to exit the server" is missing.
Without a memory limit I have the following:
And after that, memory usage decreased from 100% to normal.
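For reference, a quick way to watch a container's memory settle like this (the container name below is a placeholder, not from the thread):

```sh
# Print the container's current memory usage/limit once and exit.
docker stats --no-stream mbtileserver
```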
Thanks for the additional info. We are allocating a number of objects at startup, but most of those are proportional to the number of tilesets rather than their size. I don't have very large tilesets available for testing at the moment, but I could rig up a test setup where memory is similarly limited during startup. I can't promise I can get to this immediately. You might also be able to do some minimal testing here that could help:
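As one hedged sketch of such a test, the cgroups cap from the compose file can be reproduced with docker run flags (the image name, port, and paths below are placeholders; `-d` is assumed to be mbtileserver's tile-directory flag):

```sh
# Hard-cap the container at 30M (memory + swap) so the kernel OOM-kills it
# if startup allocates more; mount the tilesets read-only as in the report.
docker run --rm \
  --memory=30m \
  --memory-swap=30m \
  -v /data/tilesets:/tilesets:ro \
  -p 8000:8000 \
  my-mbtileserver-image \
  /mbtileserver -d /tilesets
```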
No excessive memory usage.
Sorry, I couldn't understand what you meant by:
Do you mean it started correctly?
No memory excess. Yes, it starts correctly.
But I cannot view the map from the vector (pbf) mbtiles file.
I usually see it. Are you able to test with the prior commit (f2305b5) just against your internal directory? It would be interesting to know if the refactor introduced a major memory regression.
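If useful, a rough sketch of running that test against a local clone of the repository (assuming the main package sits at the repository root, and the same placeholder `-d` flag as above):

```sh
# Check out the commit under test, rebuild, and point the server at the
# internal tilesets directory.
git checkout f2305b5
go build -o mbtileserver .
./mbtileserver -d /path/to/internal
```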
Memory usage is also too high in f2305b5. With the limit, the process is always killed by the cgroups policy. Without the limit I have the following picture:
I don't understand anything now.
Thanks @asmyasnikov for running these tests! It looks like the memory on startup is stable between the prior and latest commits, so we didn't introduce a memory regression as far as I can tell. That's good news, at least. I wonder if Go is not respecting the cgroups memory policy? Are you indeed seeing OOM errors in the log, as described here: https://fabianlee.org/2020/01/18/docker-placing-limits-on-container-memory-using-cgroups/
It could also be another issue:
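Two quick checks for that (the container name is a placeholder):

```sh
# Prints "true" if the kernel OOM-killed the container's main process.
docker inspect --format '{{.State.OOMKilled}}' mbtileserver
# Kernel-side evidence of OOM kills.
dmesg | grep -i "out of memory"
```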
I found a small explanation for not being able to view the map from the vector (pbf) mbtiles file:
The file was corrupted. Maybe because the storage is a USB device.
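Since mbtiles files are plain SQLite databases, this kind of corruption can be verified directly (the file path below is a placeholder):

```sh
# Prints "ok" for a healthy database; otherwise lists the corruption found.
sqlite3 /tilesets/map.mbtiles "PRAGMA integrity_check;"
```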
This is the latest stable version ( https://github.com/consbio/mbtileserver/commit/8c21f5ac4c1a6fa3e3a0d6115f86881fba359ddc ) of mbtileserver, which I have used for a long time. I re-built this version (a few minutes ago) with my Dockerfile (same golang, sqlite, and other dependencies). docker-compose.yml is not modified. It loads my `/internal` path with two mbtiles files (9MB and 18GB). When I drag the map actively, there are no big CPU or memory peaks.
Interesting! And confusing! There really aren't substantive differences between that commit and the previous one you tested above. It seems that f2305b5 should have had the same memory profile as that branch, because it was before the substantive changes were merged in. Would you mind trying with the common ancestor (a613fe5) between that branch and master?
It is a stable version (in terms of memory and CPU at startup and while moving the map).
ca4a94e is also OK.
446c5e4 has some peaks when moving the map, and you can see some differences between memory and cache.
Mysterious...
ca4a94e is the tip of master. Is that correct? This is the one that started all the issues at the top of the thread.
ca4a94e looked like this:
f2305b5 was also rebuilt and tested, with this result:
3 hours ago I saw very large memory usage. Only mbtileserver (out of 24 containers) was using that much memory.
Oh! I remembered: I ran all of the last tests with a read-only volume.
f2305b5 looks wrong.
This is my stable version (8c21f5a). Sorry, this bug is only partly about mbtileserver; most of the cause is Docker mounting ro/rw volumes. Sorry again.
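For anyone hitting the same behavior, the ro/rw difference can be reproduced by toggling the volume mount mode (paths and image name are placeholders):

```sh
# Read-only mount, as used in the later tests above.
docker run -v /data/tilesets:/tilesets:ro my-mbtileserver-image
# Read-write mount, as in the earlier runs.
docker run -v /data/tilesets:/tilesets:rw my-mbtileserver-image
```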
Thanks for your detailed investigation @asmyasnikov ! I appreciate all the effort you invested in this and in reporting back on it here. I'm glad that our recent changes didn't introduce a memory regression, and that in general our memory profile is very low.
Hi!
I use cgroups memory limits from docker-compose.yml.
The 30M limit was not exceeded in the older version, but the latest version (ca4a94e) wants about 1G of memory without a limit =(
This is critical for single-board computers...
Log:
The internal and external paths contain big mbtiles files: