The Charleston Software Associates website has been getting slower every week. For the past month I’ve been spending an hour or two trying to figure out what was going on. I’ve used several tools to analyze the site but it seems no matter what I do the site gets marginally faster at best. I still receive daily alerts of “your site took more than 15 seconds to respond” from the Windows Azure monitoring service that pings my hosted Linux virtual machines and alerts me to issues.
Last night I decided to switch some of my tables from MyISAM to InnoDB. The theory: the MySQL InnoDB engine does row-level locking, and since the bulk of my site is order transactions and forum posts, insert and update operations are at least as frequent as the read-only operations. Row-level locking should be better.
Wow, did I SCREW THAT ONE UP.
Actually I only partly screwed it up.
I made the change around 11 PM last evening and everything looked OK. No notable performance gains, but I wanted to see what happened over 24 hours with the “your site is slow as hell” notices I get every day. By noon today it was clear I had moved in the wrong direction. I was now getting “your site sucks” warnings from my monitoring service every half-hour. Ouch!
It turns out you should not just convert a table from MyISAM to InnoDB without first “checking under the hood”. The conversion itself is easy — a simple MySQL ALTER TABLE <x> ENGINE = InnoDB command (first make sure the table has no Full Text indexes by running SHOW INDEXES IN <x>) — but doing it blindly may not be prudent.
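As a concrete sketch of that check-then-convert sequence (the table name wp_posts here is just a placeholder — substitute your own):

```sql
-- Look for any Index_type = FULLTEXT rows; InnoDB of this era
-- did not support Full Text indexes, so those tables must stay MyISAM.
SHOW INDEXES IN wp_posts;

-- If no FULLTEXT indexes turned up, convert the table engine:
ALTER TABLE wp_posts ENGINE = InnoDB;

-- Confirm the engine actually changed:
SHOW TABLE STATUS WHERE Name = 'wp_posts';
```

Note that ALTER TABLE rebuilds the whole table, so on a large table this can lock things up for a while — another reason not to do it casually at 11 PM.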
When I got on my server today I noticed the CPU would spike to 90% or more, then drop back to normal. Every time it was MySQL spinning the CPU as hard as it could and always when accessing the half-dozen InnoDB tables I created.
It turns out that InnoDB needs some serious tuning before you turn it on. InnoDB gets a real helping hand from allowing multiple CPU threads and from a properly sized buffer pool. There are also some general MySQL buffers and MyISAM settings that can help performance as well.
For MySQL on a Red Hat based distribution, and many other Linux distros, you can tweak the initial startup settings by editing the my.cnf file. On CentOS it lives in /etc/my.cnf.
Here is what I added to the file in the [mysqld] section:
# default: 1M
innodb_additional_mem_pool_size=32M
# default: 8M
innodb_buffer_pool_size=256M
# default: 0
innodb_thread_concurrency=8
# default: 128K (131072)
join_buffer_size=64M
# default: 8M
myisam_sort_buffer_size=32M
# default: 0 (disabled)
query_cache_size=128M
# default: 2M
sort_buffer_size=32M
The MySQL InnoDB settings page will tell you more about each setting. Google can help you find out what the other settings mean.
You can use the SHOW VARIABLES MySQL command to see what your current settings are. Most of them can be added to the my.cnf file and tweaked. Just make sure you read about a setting before blindly guessing at correct values for your server. Learn what they do, tweak, and monitor. Hopefully you can avoid doing this to your CPU and response times:
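For example, to see just the InnoDB-related settings rather than the full (very long) variable list, you can filter with a LIKE pattern:

```sql
-- All InnoDB settings currently in effect:
SHOW VARIABLES LIKE 'innodb%';

-- Or check a single setting at a time:
SHOW VARIABLES LIKE 'query_cache_size';
```

Comparing this output before and after editing my.cnf (and restarting MySQL) is a quick way to confirm your changes actually took effect.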
By increasing the buffer sizes and turning on multiple threads, my CPU is no longer spinning up to 90% or more whenever an SQL query comes in from WordPress. With 4GB of RAM on my server I am certain I can tune this even further, and there are other settings that will help, but this is a great start to getting MySQL to perform better than it has been.
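One way to keep monitoring whether the buffer pool is actually big enough is to watch the InnoDB status counters:

```sql
-- Innodb_buffer_pool_read_requests counts logical reads;
-- Innodb_buffer_pool_reads counts reads that missed the buffer
-- pool and had to go to disk. A high requests-to-reads ratio
-- means most lookups are being served from memory.
SHOW STATUS LIKE 'Innodb_buffer_pool_read%';
```

If the disk-read counter keeps climbing quickly relative to the request counter, the buffer pool is likely still undersized for the working set.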
The lesson here, especially if you are on a dedicated server, is to look at your MySQL configuration. The “out of the box” setup is not usually ready for production in ANY environment. I should know better, but when running a business solo it is one of the many “I’ll do it later” tasks that don’t get addressed until it bites you in the ass. Like it did for me today.
My guess is that my quad-core, 4GB-RAM server can more than handle the load my website brings to it. If I turn all of the right dials in the right direction I should be able to get page response times under 1 second. As one of the most impatient web surfers you’ll meet, I find my current 4-second average page response time simply far too long to wait.