Linux Out-Of-Memory Killer
What is this?
The “OOM Killer” or “Out of Memory Killer” is a mechanism that the Linux kernel employs when the system is critically low on memory. This situation occurs because processes on the server are consuming a large amount of memory while the system needs more memory for its own processes and to allocate to other processes. When a process starts, it requests a block of memory from the kernel. This initial request is usually larger than the process will immediately, or indeed ever, use. The kernel, aware of this tendency for processes to request more memory than they need, overcommits the system memory. This means that when the system has, for example, 8GB of RAM, the kernel may allocate 8.5GB to processes. This maximises the use of system memory by ensuring that the memory allocated to processes is memory that is actively being used.
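If you want to see how far a system is overcommitted, you can compare the memory that has been promised to processes with what is physically available. These counters are exposed by the kernel in /proc/meminfo; when Committed_AS exceeds MemTotal, the kernel has handed out more memory than the machine actually has:
$ grep -E 'MemTotal|CommitLimit|Committed_AS' /proc/meminfo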
Normally, this situation does not cause a problem. However, if enough processes begin to use all of their requested memory blocks then there will not be enough physical memory to support them all. This means that the running processes require more memory than is physically available. This situation is critical and must be resolved immediately.
The solution that the Linux kernel employs is to invoke the OOM Killer to review all running processes and kill one or more of them in order to free up system memory and keep the system running.
Process Selection
Whenever an out-of-memory failure occurs, the out_of_memory() function is called. Within it, the select_bad_process() function is used, which obtains a score from the badness() function. The “worst” process is the one that will be sacrificed. The badness() function follows some rules when selecting the process:
- The kernel needs to obtain a minimum amount of memory for itself
- Try to reclaim a large amount of memory
- Don’t kill a process using a small amount of memory
- Try to kill the minimum number of processes
- Apply any adjustments the user has made to raise the sacrifice priority of processes they would prefer to see killed
After working through this checklist, the OOM Killer examines the score (oom_score). The OOM Killer sets an oom_score for each process and weights it by the process’s memory usage. Processes with higher values have a higher probability of being terminated by the OOM Killer, while processes owned by privileged users have lower score values and less chance of being killed. For example, you can look at the score of a PostgreSQL backend. First find its process id from psql:
postgres=# SELECT pg_backend_pid();
pg_backend_pid
----------------
3813
(1 row)
The Postgres backend process id is 3813, so in another shell you can read its score from the oom_score file under /proc:
$ sudo cat /proc/3813/oom_score
2
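If you would rather see which processes the OOM Killer currently regards as the worst offenders across the whole system, a small shell loop over /proc works. This is only an illustrative sketch, not a standard tool; processes that exit while it runs are silently skipped:
for p in /proc/[0-9]*; do
  # print "score name" for every live process, ignoring ones that vanish mid-loop
  printf '%s %s\n' "$(cat "$p/oom_score" 2>/dev/null)" "$(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head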
If you really want your process not to be killed by the OOM-Killer, there is another kernel parameter, oom_score_adj, which accepts values from -1000 to 1000. Writing a large negative value reduces the chance that your process gets killed:
echo -100 | sudo tee /proc/3813/oom_score_adj
To set oom_score_adj persistently for a service, you can set OOMScoreAdjust in its systemd service unit:
[Service]
OOMScoreAdjust=-1000
or the rcctl command’s oomprotect setting can be used:
rcctl set servicename oomprotect -1000
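After restarting the service, you can confirm that the adjustment was applied by reading oom_score_adj back from /proc for the service’s main process (the postgresql unit name here is just an example; use whatever your distribution calls it):
pid=$(systemctl show -p MainPID --value postgresql)
cat /proc/"$pid"/oom_score_adj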
Killing a Process
When one or more processes have been selected, the OOM-Killer calls the oom_kill_task() function. This function is responsible for sending the terminate/kill signal to the process. On out of memory, oom_kill() calls this function so that it can send the SIGKILL signal to the process, and a kernel log message is generated:
Out of Memory: Killed process [pid] [name].
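If you suspect a process was killed this way, the kernel log is the place to look. Either of the following should surface recent OOM events, depending on whether the machine uses systemd’s journal:
$ sudo dmesg -T | grep -i 'killed process'
$ sudo journalctl -k | grep -iE 'out of memory|killed process'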
How to control OOM-Killer
Linux provides a way to enable and disable the OOM-Killer, although disabling it is not recommended. The kernel parameter vm.oom-kill is used to enable and disable the OOM-Killer. If you want to enable the OOM-Killer at runtime, use the sysctl command:
sudo sysctl -w vm.oom-kill=1
To disable the OOM-Killer, use the same command with the value 0:
sudo sysctl -w vm.oom-kill=0
This command does not make the change permanent, and a machine reboot resets it. To set it permanently, add the setting to the /etc/sysctl.conf file:
echo 'vm.oom-kill = 1' | sudo tee -a /etc/sysctl.conf
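Entries in /etc/sysctl.conf are only read at boot, so after editing the file you can apply it immediately with:
sudo sysctl -p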
Another related setting is the panic_on_oom variable; you can always check its current value in /proc:
# cat /proc/sys/vm/panic_on_oom
0
Setting the value to 0 means the kernel will not panic when an out-of-memory error occurs:
echo 0 | sudo tee /proc/sys/vm/panic_on_oom
Setting the value to 1 means the kernel will panic when an out-of-memory error occurs:
echo 1 | sudo tee /proc/sys/vm/panic_on_oom
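Writes to /proc last only until the next reboot. The same knob is exposed as the vm.panic_on_oom sysctl, so it can be set and persisted the same way as the other settings above:
sudo sysctl -w vm.panic_on_oom=0
echo 'vm.panic_on_oom = 0' | sudo tee -a /etc/sysctl.conf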
There are some more settings for the OOM-Killer beyond enabling and disabling it. As already mentioned, Linux can overcommit memory when allocating it to processes, and this behavior can be controlled through a Linux kernel setting: the vm.overcommit_memory variable.
The vm.overcommit_memory variable can be set to the following values:
0: The kernel decides whether to overcommit or not. This is the default value on most versions of Linux.
1: The kernel will always overcommit. This is a risky setting, because the kernel will always overcommit memory to processes, and this can lead to the kernel running out of memory, since there is a good chance that processes will end up using the memory committed to them.
2: The kernel will not overcommit memory beyond the overcommit_ratio. This overcommit_ratio is another kernel setting in which you specify the percentage of memory the kernel is allowed to commit. If there is no room for the allocation, the memory allocation function fails and the overcommit is denied. This is the safest option and the recommended value for PostgreSQL.
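To adopt the recommended configuration for a PostgreSQL server, you would set vm.overcommit_memory to 2 and choose a suitable vm.overcommit_ratio. The ratio of 80 below is only an illustrative value; the right number depends on your RAM and swap sizes:
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=80
echo 'vm.overcommit_memory = 2' | sudo tee -a /etc/sysctl.conf
echo 'vm.overcommit_ratio = 80' | sudo tee -a /etc/sysctl.conf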
The second thing that can affect the OOM-Killer is swappiness. This behavior is controlled by the vm.swappiness variable (exposed as /proc/sys/vm/swappiness), which specifies how aggressively the kernel swaps pages out. The bigger the value, the lower the chance that the OOM-Killer kills a process, but it hurts database efficiency because of the extra I/O. A smaller value means there is a higher chance of the OOM-Killer kicking in, but it also improves database performance. The default value is 60, but if your entire database fits in memory then it is recommended to set this value to 1.
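Checking and changing swappiness follows the same pattern as the other kernel settings; the value of 1 below matches the recommendation above for a database that fits entirely in memory:
$ cat /proc/sys/vm/swappiness
60
$ sudo sysctl -w vm.swappiness=1
$ echo 'vm.swappiness = 1' | sudo tee -a /etc/sysctl.conf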
So why is Apache / MySQL / Postgres always killed?
The criteria listed above mean that when selecting a process to kill, the OOM Killer will choose a process that uses a lot of memory, has a lot of child processes, and is not a system process. An application such as Apache, MySQL, Nginx, Clamd (ClamAV), or a mail server makes a good candidate. And since this situation usually occurs on a busy web server, Apache or MySQL will be the largest in-memory, non-system process and consequently gets killed.
It must be remembered that although the web server or database server is very important to you, by the time the kernel calls the OOM Killer the situation is critical. If memory is not freed by killing a process, the server will crash very shortly afterwards; continuing normal operations at this juncture is impossible.
Common Cause
One of the common causes of Apache, Nginx, or MySQL being killed by the OOM Killer is the site receiving a large amount of traffic. This could be genuine traffic from a new promotion, media attention, or similar; it could be a bot crawling the site; or in some cases it can be a botnet trying to attack or brute force your site. Reviewing the Apache or Nginx logs is a good place to start to see if this is the case.
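A quick way to spot this kind of traffic is to count requests per client IP in the access log. The path below assumes a default Nginx layout with the standard combined log format, where the client IP is the first field; adjust it for Apache or a custom format:
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20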
Summary
You don’t need to be alarmed by the name Killer (OOM-Killer). The killer is not always harmful; it is a savior for your system. It kills the worst-offending process and saves your system from crashing. To reduce the need for the OOM-Killer to kill PostgreSQL, it is recommended to set the vm.overcommit_memory value to 2. This will not avoid the OOM-Killer entirely, but it will reduce the chance of the PostgreSQL process being killed.
Hope you like the tutorial. Please let me know your feedback in the response section.
Happy learning!