WordPress: Cannot connect to database – MySQL killed

The MySQL server on the SysCrunch machine kept being killed due to a lack of resources (RAM). Even after increasing the memory to more than 3 GB, the problem persisted.

This blog post describes how to set up the monitoring tool Monit under CentOS 6 to watch processes and restart them when necessary. It also explains how to prevent the Apache server from exhausting the memory.

The problem

Essentially, the Apache server was consuming most of the RAM, leaving little for other processes. As a result, MySQL was eventually killed, and so was the ElasticSearch service. This happened as the number of web services and websites on the server increased, and it would manifest when bots like Googlebot started scanning the server intensively. Yes, Apache and MySQL run on the same server with the current configuration.

Monit

The steps to install the monitoring tool Monit under CentOS 6 are similar to those for other services like Apache. You can also follow this tutorial, or this one (Monit part) if you are under CentOS 7, and this might help for Ubuntu.

Install Monit
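On CentOS 6, Monit comes from the EPEL repository; assuming EPEL is not yet enabled, the installation is roughly:

    # Enable the EPEL repository (Monit is not in the base repositories)
    yum install epel-release

    # Install Monit itself
    yum install monit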

Monit Web Interface

First, open the configuration file (you can also use the nano editor):
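Assuming the default package layout on CentOS 6, the main configuration file is /etc/monit.conf:

    vi /etc/monit.conf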

Find the set httpd port line, then uncomment and update the configuration. Here is a sample configuration:
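The values below are placeholders to adapt: the port matches the 8912 used later in this post, and the credentials and allowed hosts are examples only.

    set httpd port 8912 and
        allow 0.0.0.0/0.0.0.0        # accept connections from any host; restrict this in production
        allow admin:your-password    # web interface username and password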

Add monit to the system startup:
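    # CentOS 6 (SysV init)
    chkconfig monit on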

Start the monit service:
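    service monit start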

You can test the web interface in a browser using the provided IP address or hostname (e.g. http://192.192.192.192:8912 or http://hostname.com:8912). It will ask you for the username and password you defined in the config file.

This is a sample capture of our server web interface:

[Screenshot: Monit web interface for the SysCrunch server]

Monit Monitoring

Monit can monitor multiple processes and execute actions based on some conditions like CPU and memory usage.

Apache Web Server

The paths might be different on your system, so make sure to check them first. Find and uncomment the process httpd line; this is a sample config:
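A sketch of what such a block can look like, assuming the stock CentOS 6 locations for the Apache PID file and init script; the memory and worker limits match the values discussed below.

    check process httpd with pidfile /var/run/httpd/httpd.pid
        start program = "/etc/init.d/httpd start" with timeout 60 seconds
        stop program  = "/etc/init.d/httpd stop"
        # Restart Apache if it uses more than 1.5 GB for 5 consecutive cycles
        if totalmem > 1500.0 MB for 5 cycles then restart
        # Restart Apache if it spawns too many workers
        if children > 250 then restart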

Essentially, the Apache server cannot use more than 1.5 GB; if it stays above that limit for at least 5 cycles, it gets restarted to free resources. The same happens if too many workers (250) are created. These values depend heavily on your system resources and the number of processes you are running.

MySQL Server

The default Monit config file does not include a sample for MySQL. Following the Apache configuration and some samples from Stack Overflow, this is a proposed config that you can add after the Apache config:
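A minimal sketch, assuming the stock CentOS 6 locations for the mysqld PID file and init script:

    check process mysqld with pidfile /var/run/mysqld/mysqld.pid
        start program = "/etc/init.d/mysqld start"
        stop program  = "/etc/init.d/mysqld stop"
        # Alert first, then restart, if MySQL stops answering on port 3306
        if failed host 127.0.0.1 port 3306 then alert
        if failed host 127.0.0.1 port 3306 then restart
        # Give up (timeout) if there are too many restarts in a short period
        if 5 restarts within 5 cycles then timeout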

If the server does not respond on port 3306, Monit raises an alert first and then restarts the service. If too many restarts (5) happen within a short period, the check times out.

ElasticSearch

Similar to the MySQL config, this is a proposed configuration that you can put after the Apache or MySQL configs:
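A sketch along the same lines, assuming the default ElasticSearch HTTP port (9200) and the stock init script and PID file locations:

    check process elasticsearch with pidfile /var/run/elasticsearch/elasticsearch.pid
        start program = "/etc/init.d/elasticsearch start"
        stop program  = "/etc/init.d/elasticsearch stop"
        # ElasticSearch answers HTTP on port 9200, hence the protocol http test
        if failed host 127.0.0.1 port 9200 protocol http then restart
        if 5 restarts within 5 cycles then timeout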

I haven’t seen any alerts or restarts of this service yet. Notice the protocol http condition for the restart; it might be necessary since ElasticSearch runs as an HTTP application on that port.

Configuring Alerts

Monit lets you configure alerts by e-mail. In this case we use Gmail as the SMTP server, but you can also default to localhost to deliver the e-mails. You can refer to the Monit documentation for more details.

Find and uncomment the set mailserver line so that it resembles the following:
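Something along these lines, where the Gmail address and password are placeholders:

    set mailserver smtp.gmail.com port 587
        username "your.address@gmail.com" password "your-password"
        using tlsv1
        with timeout 30 seconds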

You have to replace the username and password. You can then find the set alert line to define the recipient:
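For example, with a placeholder recipient address:

    set alert your.address@example.com but not on { instance, action }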

This alert skips events caused by Monit itself starting or stopping, as well as user-performed actions, in order to avoid trivial notifications.

IMPORTANT: Gmail will not send e-mails with this configuration unless you enable the “Less secure apps” option for your account at http://www.google.com/settings/security/lesssecureapps.

Apache Server RAM Solution

After monitoring the processes with Monit, it turned out the problem was the Apache server running too many worker instances, which exhausted the system memory and led to other processes being killed. The issue was that most of the web services (virtual hosts) were configured with an HTTP Keep-Alive header; this allows faster response times, but it consumes more resources because the connection is kept open.

The solution was to remove the Header set Connection keep-alive directive from the virtual hosts. It can also be set explicitly to close with Header set Connection close. Alternatively, there are other options to tune persistent connections, like MaxKeepAliveRequests and KeepAliveTimeout; it’s recommended to understand them well, since for HTTP/1.1 clients the default is to keep connections alive. You can read more in the Apache docs for Keep-Alive.
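As a sketch, in the virtual host configuration this can look like the following (the tuning values are illustrative, not necessarily the ones used on this server):

    # Explicitly tell clients to close the connection after each request
    # (requires mod_headers)
    Header set Connection close

    # Or keep persistent connections but limit how long they are held open
    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 5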

In the end, with this configuration it was possible to launch 3,000 requests with 60 concurrent requests using Apache Benchmark (3 parallel calls), and the server handled it without any problems. Here is a sample output of one of the tests:

I wrote this article to document the solution to this problem and it might be updated over time.

Hope it’s useful for some other people.
