
How can I make osrm-datastore to use swap memory? #5182

Closed
dolsup opened this issue Aug 28, 2018 · 5 comments

dolsup commented Aug 28, 2018

OSRM version: 5.18.0

I successfully ran osrm-extract and osrm-contract on planet-latest with only 122 GiB of RAM and 100 GB of swap, and I finally have a complete set of .osrm files, although it took a very long time. I could also run osrm-routed on just 1 GB of RAM and 100 GB of swap; it started listening after about two hours, but it works well and the API responses are fast enough for me.

Now I'm trying to replace osrm-routed with my own Node.js server using the osrm node module, and to use shared memory via osrm-datastore for fast deployment and clustering of the server app. But osrm-datastore does not seem to use swap memory at all.

I searched and found a comment on an issue (#2123), but I can't follow the code link in that comment because it's broken, and I'm not sure it still applies.

How can I make osrm-datastore use swap memory? Should I edit some code and build it myself? Is there another solution?


dolsup commented Aug 29, 2018

I changed SHM_LOCK to SHM_UNLOCK in /include/storage/shared_memory.hpp#L78, rebuilt, and ran osrm-datastore.

[info] Data layout has a size of 1488 bytes
[info] Allocating shared memory of 21607569821 bytes
[info] Data layout has a size of 1139 bytes
[info] Allocating shared memory of 54435648317 bytes
[info] All data loaded. Notify all client about new data in:
[info] /static	0
[info] /updatable	1
[info] All clients switched.

It worked successfully.

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x00000000 0          root       644        80         2                       
0x00000000 32769      root       644        16384      2                       
0x00000000 65538      root       644        280        2                       
0x0001079b 589827     root       644        21607569821 0                       
0x0101079b 622596     root       644        54435648317 0    

This is the output of ipcs -m.

But when I run osrm-routed --shared-memory or osrm-datastore --list...

terminate called after throwing an instance of 'osrm::util::exception'
  what():  No shared memory block 'osrm-region' found, have you forgotten to run osrm-datastore?include/storage/shared_monitor.hpp:83

I get an error like the above.

IMHO, it makes sense for osrm-datastore to be usable with swap, unless there is some important technical issue with shared memory and swap, since we can already run osrm-routed on swap.

RAM costs a lot! (; - ;) Please give me some help!

daniel-j-h (Member) commented

Check out

https://github.com/Project-OSRM/osrm-backend/wiki/Running-OSRM

If you fallocate, mkswap, and swapon, the swap should get used automatically if it's large enough. That said, you are throwing away all the benefits of the datastore if the data is laid out on disk. You don't need the datastore at all in this use case.
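Those three steps might look like the following sketch. The path and size here are made up for illustration (a real swap file would be something like 100G), and actually enabling the swap area requires root:

```shell
# Hypothetical path; tiny size for illustration only -- use e.g. -l 100G for real use.
SWAPFILE=/tmp/osrm-swapfile
fallocate -l 1M "$SWAPFILE" || dd if=/dev/zero of="$SWAPFILE" bs=1M count=1
chmod 600 "$SWAPFILE"   # swapon refuses files readable by other users
mkswap "$SWAPFILE"      # write the swap signature to the file
# swapon "$SWAPFILE"    # requires root: actually enable the swap area
swapon --show           # list currently active swap areas
```

Once enabled, the kernel decides what gets paged out; nothing OSRM-specific is needed beyond having enough swap.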


dolsup commented Oct 10, 2018

Thank you for your answer!

But that's not my point. What I want is for the OSRM data to always remain loaded, regardless of the state of osrm-routed or of the alternative server using the data.
Yes, osrm-datastore can solve this problem, but it doesn't use swap and I don't have enough RAM. Because of that, it takes far too long for the data to load into swap again whenever the server goes down. So I want osrm-datastore to use swap memory.

Am I misunderstanding something?


danpat commented Oct 10, 2018

@dolsup In #4881, @TheMarex added a feature to allow using mmap instead of loading files into memory.

Use it by doing the following:

$ osrm-routed --memory-file /tmp/scratchfile yourfile.osrm

What this will do is allocate a large file, copy the routing data into it, then mmap that large file as virtual memory. This has the effect you're looking for - persistent data between osrm-routed launches, and essentially no RAM usage (filesystem cache will be used, and the more you have the better things will be). If you re-launch osrm-routed after a crash, it will re-use the scratchfile without re-loading data into it, so startup is fast. Performance will depend on your data access patterns and the size of your filesystem buffer cache.

Over in https://github.com/Project-OSRM/osrm-backend/tree/ghoshkaj_mmaperize, we're doing some small refactors to make the use of the scratchfile unnecessary (the .osrm.* files can be mmaped directly).

I'm honestly not sure why osrm-datastore isn't working as-is. The line you adjusted:

if (-1 == shmctl(shm.get_shmid(), SHM_LOCK, nullptr))

only logs a warning if SHM_LOCK doesn't work - it doesn't cause anything to abort. Do you have sufficient swap space?

Keep your eye on the ghoshkaj_mmaperize branch (and PR to come) - I think it will do what you want.


dolsup commented Oct 10, 2018

@danpat Thank you very much for your kind comment.
I'll try using mmap; I think it will be very helpful.

FYI, I'm sure I had enough swap space when I ran osrm-datastore after the modification (SHM_LOCK to SHM_UNLOCK) and rebuild. On that first try, osrm-datastore ran without aborting, but I then had trouble with osrm-routed --shared-memory and osrm-datastore --list. Nevertheless, I'll try again and share more detailed results when I have time.

And I'll keep following the ghoshkaj_mmaperize branch. Thank you!
