An Apache web server running on an Amazon EC2 Linux instance intermittently becomes unresponsive, and the system log for the instance contains messages about "oom-killer," "failure to fork process," or other insufficient-memory conditions.

Note
To view the system log for an EC2 instance from the EC2 console, on the Actions menu, choose Instance Settings, Get System Log.
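If you prefer the command line, you can retrieve the same output with the AWS CLI; for example (the instance ID shown is a placeholder):

         aws ec2 get-console-output --instance-id i-1234567890abcdef0 --output text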

If you establish a terminal session to the instance, you might see stack traces in the log location appropriate for the Linux distribution that you are running, such as /var/log/syslog for Debian/Ubuntu, /var/log/messages for CentOS/RHEL, or the journalctl output (for systems using systemd).

This usually means that memory for the instance has been exhausted.
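If you have a terminal session to the instance, you can confirm OOM killer activity by searching the kernel log. The following commands are a minimal sketch; the first reads the kernel ring buffer, and the second applies to systems that use systemd:

         dmesg | grep -iE "out of memory|oom-killer"
         journalctl -k | grep -iE "out of memory|oom-killer"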

You can set limits on the number of connections that the server accepts and the number of processes that it starts. To derive the limit, calculate the typical memory use of an Apache process and divide the total memory that you want to allocate to Apache by that average.
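As a rough sketch of that calculation, the following command averages the %MEM values of the running Apache processes and divides 90% by that average. It assumes a dedicated web server with more than 4 GB of RAM and Apache processes named httpd (apache2 on Debian/Ubuntu); the steps below walk through the same calculation and the cases it depends on:

         ps -C httpd -o %mem= | awk '{ sum += $1; n++ } END { if (n) printf "average %%MEM: %.2f  suggested limit: %d\n", sum/n, int(90 / (sum/n)) }'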

  1. Initiate a terminal session to the instance. If you are unable to connect, you might need to restart the instance.
  2. From the terminal session, run the top command to display a list of memory-resident processes on the instance. Sort the list in descending order by percentage of memory used. On an RPM-based instance, press Shift+O and then press n to select the sort field. On most other Linux distributions, pressing Shift+M sorts processes by memory usage.
  3. Scan the column of %MEM values returned for the Apache processes and estimate an average value.
  4. If one or more Apache processes have an unusually large %MEM value compared to the %MEM value of other Apache processes, there could be a memory leak in a web application running on the server. To mitigate the impact of a potential memory leak, you can change the default value for the configuration variable MaxRequestsPerChild (or MaxConnectionsPerChild on Apache 2.4) from 4000 to 1000. This configuration value is set in the httpd.conf file for the instance. This change should provide some relief for the problem until the source of the memory leak can be identified and addressed. If you suspect a memory leak, you can update the httpd.conf file with the new configuration value, save your changes, and skip to step 7.
  5. Calculate a value for the ServerLimit and MaxClients (or MaxRequestWorkers on Apache 2.4) configuration variables as follows:
          a. If your instance has more than 4 GB of RAM, divide 90% by the average %MEM value for Apache processes. For example, given an average %MEM value of 0.8%, divide 90% (0.9) by 0.8% (0.008) for a result of 112.5, and round down to the nearest whole number, 112 in this case.
          b. If your instance has 4 GB of RAM or less, divide 80% by the average %MEM value for Apache processes. For example, given an average %MEM value of 0.8%, divide 80% (0.8) by 0.8% (0.008) for a result of 100.
    Note
    These values are calculated with the assumption that the instance is a dedicated web server. If you are hosting other applications on the server, subtract the total percentage memory use of these applications from either 90% or 80% before doing the calculation. Performance might decrease if you run other applications in addition to Apache on an instance with 4 GB of RAM or less.
  6. Update the MaxClients (or MaxRequestWorkers) and ServerLimit configuration variables in the httpd.conf file for the instance with the new value, and save your changes (an Apache 2.4 example follows this procedure). For example:
         MaxClients  112
         ServerLimit  112
  7. Restart the web server by running the following command from a terminal session:
         service httpd graceful
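
On Apache 2.4, the equivalent directives are typically set in the prefork MPM section of the configuration (the exact file varies by distribution). The following is a minimal sketch using the values calculated above; the commented MaxConnectionsPerChild line applies only if you are mitigating a suspected memory leak as described in step 4:

         <IfModule mpm_prefork_module>
             ServerLimit            112
             MaxRequestWorkers      112
             # MaxConnectionsPerChild 1000
         </IfModule>

On systemd-based distributions where the service command is not available, either of the following typically performs the same graceful restart (assuming the service is named httpd, as on RPM-based distributions):

         systemctl reload httpd
         apachectl graceful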


Published: 2016-01-07