Configuring and using Shared Memory
Note: This applies to version 0.3.7+ only.
Usually, when you start an application and it allocates a block of memory, that memory is freed once your application terminates, and it is accessible to that single process only. Shared memory is different: it allows you to share data among a number of processes, and the shared memory we use is persistent. It stays in the system until it is explicitly removed.
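Since the segments are persistent, it can be useful to inspect or remove them by hand. As a rough sketch using the standard System V IPC tools (the segment id is a placeholder you would take from the listing):
$ ipcs -m
$ ipcrm -m <shmid>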
By default, there is a restriction on the size of a shared memory segment. Depending on your distribution and its version, it may be as little as 64 kilobytes. This is of course not enough for serious applications.
The following gives a brief description of how to set the limits in a way that you (most probably) won't ever run into them in the future. Please read up on the actual settings for your production environment in the shmctl manpage, or consult further documentation.
First, we are going to raise the system limits. Second, we are going to raise the user limits.
Append the following lines to /etc/sysctl.conf:
kernel.shmall = 1152921504606846720
kernel.shmmax = 18446744073709551615
and then run sysctl -p with super-user privileges. Then check if the settings were accepted:
$ ipcs -lm
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 18014398509481983
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1
On Mac OS X, add the following to /etc/sysctl.conf:
kern.sysv.shmmax=1073741824
kern.sysv.shmall=262144
Then reboot.
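If you want to confirm the values on Mac OS X after the reboot, a quick check with sysctl should suffice; this is just a sketch of the verification step:
$ sysctl kern.sysv.shmmax kern.sysv.shmall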
This is only half of the story. On Linux, only the super user is allowed to lock arbitrary amounts of shared memory into RAM. To fix this, we need to set the user limits properly. Let's have a look at what Ubuntu 12.10 sets by default:
$ ulimit -a|grep max
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
So, as a user we are only allowed to lock at most 64 KiB into RAM. This is obviously not enough. The settings can be changed by editing /etc/security/limits.conf. Add the following lines to the file to raise the user limits to 64 GiB. At the time of writing, this is enough to do planet-wide car routing.
<user> hard memlock unlimited
<user> soft memlock 68719476736
Note that <user> is the user name under which the routing process is running, and that you need to re-login to activate these changes. If the user does not have a login, you can use sudo -i -u <user> to simulate an initial login.
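Once re-logged in (or inside the sudo -i -u <user> shell), you can verify that the new limit is active; the value shown should match the soft limit configured above:
$ ulimit -l
68719476736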
Note that on Ubuntu 12.04 LTS it is also necessary to edit /etc/pam.d/su (and /etc/pam.d/common-session) and remove the comment from the following line in order to activate /etc/security/limits.conf:
session required pam_limits.so
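To double-check that the line is present and uncommented in both files, a simple grep works; this is only a convenience check:
$ grep pam_limits.so /etc/pam.d/su /etc/pam.d/common-session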
With all these changes done, you should now be able to load all shared memory directly into RAM. Loading data into shared memory is as easy as
$ ./osrm-datastore /path/to/data.osrm
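If you are curious which segments osrm-datastore created, the standard IPC tools can list them; this is an optional sanity check only:
$ ipcs -m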
If there is insufficient available RAM (or not enough space configured), you will receive the following warning when loading data with osrm-datastore:
[warning] could not lock shared memory to RAM
In this case, data will be swapped to a cache on disk, and you will still be able to run queries. But note that caching comes at the price of disk latency.
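To get a rough idea of how much RAM is actually free before loading, the usual memory statistics help; the exact requirement depends on your dataset, so treat this only as a guide:
$ free -m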
You will also see this warning if you are lacking the CAP_IPC_LOCK capability for system-wide memory locking. In this case, granting the capability manually helps:
$ sudo setcap "cap_ipc_lock=ep" osrm-routed
$ getcap osrm-routed
osrm-routed = cap_ipc_lock+ep
Starting the routing process and pointing it to shared memory is also very, very easy:
$ ./osrm-routed --shared-memory=yes
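To verify end-to-end that the server answers requests, you can send a test query. This sketch assumes the default port 5000 and the viaroute endpoint used by this OSRM version; the coordinates are placeholders only:
$ curl "http://127.0.0.1:5000/viaroute?loc=52.5170,13.3889&loc=52.5206,13.3862"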