Configuring and using Shared Memory

Note: This applies to version 0.3.7+ only.

Usually, when an application allocates a block of memory, that memory is freed once the application terminates, and it is accessible to that single process only. Shared memory is different: it lets a number of processes share data, and the shared memory we use is persistent. It stays in the system until it is explicitly removed.

By default, there is a restriction on the size of a shared memory segment. Depending on your distribution and its version, it may be as little as 64 kilobytes. This is of course not enough for serious applications.

The following gives a brief description of how to set the limits in a way that you (most probably) won't ever run into them in the future. Please read up on suitable settings for your production environment in the shmctl manpage or in further documentation.

First, we are going to raise the system limits. Second, we are going to raise the user limits.

System Limits

Append the following lines to /etc/sysctl.conf:

kernel.shmall = 1152921504606846720
kernel.shmmax = 18446744073709551615

and then run sysctl -p with super-user privileges. Then check whether the settings were accepted:

$ ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 18014398509481983
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1
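
For reference, kernel.shmmax is the maximum size of a single segment in bytes, while kernel.shmall is the total amount of shared memory in pages. If you want to try the values before (or without) editing /etc/sysctl.conf, the same keys can also be set at runtime with sysctl -w; this is standard sysctl behaviour, not anything specific to OSRM, and settings applied this way do not survive a reboot:

$ sudo sysctl -w kernel.shmall=1152921504606846720
$ sudo sysctl -w kernel.shmmax=18446744073709551615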

User Limits

This is only half of the story. On Linux, only the super user is allowed to lock arbitrary amounts of shared memory into RAM. To fix this, we need to set the user limits properly. Let's have a look at what Ubuntu 12.10 sets by default:

$ ulimit -a|grep max
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited

So, as a regular user we are allowed to lock at most 64 KiB into RAM. This is obviously not enough. The settings can be changed by editing /etc/security/limits.conf. Add the following lines to the file to raise the user limits to 64 GiB. At the time of writing, this is enough for planet-wide car routing.

<user>           hard    memlock         unlimited
<user>           soft    memlock         68719476736

Note that <user> is the user name under which the routing process is running, and you need to re-login to activate these changes. If the user does not have a login, you can use sudo -i -u <user> to simulate an initial login.
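
To verify that the new limit is active, log in as that user (or simulate the login as described above) and check the locked-memory limit again. ulimit -l reports the soft limit, so you should see the value you configured above (the exact number depends on your configuration):

$ sudo -i -u <user>
$ ulimit -l
68719476736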

Using Shared Memory

With all these changes done, shared memory can now be locked directly into RAM. Loading data into shared memory is as easy as

$ ./osrm-datastore /path/to/data.osrm
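
To check that the data actually ended up in shared memory, you can list the segments with ipcs. The keys, sizes and owners in the output depend on your system and data set; the header below is just what the listing looks like:

$ ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status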

If there is insufficient available RAM (or not enough space configured), you will receive the following warning when loading data with osrm-datastore:

[warning] could not lock shared memory to RAM

In this case, data will be swapped to a cache on disk, and you will still be able to run queries. But note that caching comes at the price of disk latency.
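
If you see this warning, double-check the locked-memory limit of the user running osrm-datastore and the amount of free physical memory; both are common causes. ulimit and free are standard Linux tools, shown here purely as an example:

$ ulimit -l
$ free -m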

Starting the routing process and pointing it to shared memory is also very, very easy:

$ ./osrm-routed --sharedmemory=yes
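
Once osrm-routed is running against shared memory, you can send it a test query to confirm everything works. The example below assumes the default listening port 5000 and the viaroute endpoint of the 0.3.x HTTP API; adjust host, port and coordinates to your setup:

$ curl "http://127.0.0.1:5000/viaroute?loc=52.519930,13.438640&loc=52.513191,13.415852"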