
Locator Patch Finds Missing Locations


Store Locator Plus 4.2.38 was released today addressing an issue where some sites suddenly “lost” locations.

After a week of research it was discovered that some MySQL and MariaDB installations set boolean data fields to NULL rather than true/false or blank. Any location with a NULL value in its privacy flag was then considered private by the Store Locator Plus search mechanism. The new patch treats locations with a NULL privacy value as not private, which has “recovered” missing locations on the front-end search interface.
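The underlying gotcha is standard SQL three-valued logic: NULL never compares equal (or unequal) to anything, so a NULL flag silently drops out of a boolean filter. A minimal sketch of the idea (table and column names are illustrative, not the actual SLP schema):

-- A row with a NULL flag matches NEITHER of these filters:
SELECT * FROM locations WHERE is_private = 1;
SELECT * FROM locations WHERE is_private <> 1;

-- The patch-style fix: treat NULL explicitly as "not private":
SELECT * FROM locations WHERE (is_private IS NULL OR is_private <> 1);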

Enhancements

In addition to the patch, several updates that were already underway have been included as part of the effort to launch a Widget Pack update and start providing more advanced user interface features and WordPress theme support.

Version 4.2.38 with Widget Pack, Tagalong, and Pro Pack options activated.

A new plugin theme has been bundled with Store Locator Plus that has been tested for layout compatibility with the iThemes Herschel WordPress theme.

SLP 4.2.38 iThemes Herschel plugin theme with Pro Pack, Enhanced Search, and Enhanced Results.

The Simple White Four Column theme has been updated to use the newer Sass-based CSS rules engine, patching some quirks that are addressed by the core Store Locator Plus CSS ruleset included in all Sass-based plugin themes.

WordPress iThemes Herschel theme with the Store Locator Plus Simple White Four Column plugin theme. Uses Pro Pack, Enhanced Search, and Enhanced Results for layout control.

A new general layout option for Pro Pack users gives site designers access not only to the location data but also to Store Locator Plus plugin option values. The new [[slp_option nojs="<option_name>"]] and [[slp_option js="<option_name>"]] shortcodes let site designers do things like place a header on the map with a dynamic message that changes when site options change, such as “All distances shown are in <miles>”, where miles changes to kilometers if you change the default measurement in the UX settings panel. Many other options, such as a fixed default radius and various labels, can be displayed using this feature. It can be used in conjunction with Enhanced Search to display headings on search box labels that can be changed via the admin panel (coming in the next Enhanced Search release).

The language translation system was extended, providing better internationalization (i18n) and localization (l10n) support for all add-on packs.  This has been built into the SLP 4.2 add-on framework.

The state filter processor was refined to eliminate redundant code in Enhanced Search and the upcoming Widget Pack update.

The state and country SQL processors were updated to address various null data issues and to provide slightly faster SQL query processing.

Map Center and Zoom Level have been migrated to the newer options system in Store Locator Plus. This also grants access to these values via the slp_option shortcode in Pro Pack, for displaying things such as “Distances calculated from <map center address>.” when a map is first loaded.



WordPress Performance Tuning


The Charleston Software Associates website has been getting slower every week.   For the past month I’ve been spending an hour or two trying to figure out what was going on.    I’ve used several tools to analyze the site but it seems no matter what I do the site gets marginally faster at best.   I still receive daily alerts of “your site took more than 15 seconds to respond” from the Windows Azure monitoring service that pings my hosted Linux virtual machines and alerts me to issues.

Last night I decided to switch some of my tables from MyISAM to InnoDB. The theory: the MySQL InnoDB engine does row-level locking, and since the bulk of my site is order transactions and forum posts, insert and update operations are at least as frequent as read-only operations. Row-level locking should be better.

Wow, did I SCREW THAT ONE UP.

Actually I only partly screwed it up.

I made the change around 11 PM last evening and everything looked OK. No notable performance gains, but I wanted to see what happened over 24 hours with the “your site is slow as hell” notices I get every day. By noon today it was clear I had moved in the wrong direction. I was now getting “your site sucks” warnings from my monitoring service every half-hour. Ouch!

Fixing InnoDB

It turns out you should not just convert a table from MyISAM to InnoDB without first “checking under the hood”. While the conversion itself is as easy as a simple MySQL ALTER TABLE <x> ENGINE = InnoDB command (make sure the table does not have full-text indexes with SHOW INDEXES IN <x> first), doing it blindly may not be prudent.
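For reference, here is roughly what that pre-flight check and conversion look like from the mysql prompt (the table name is just an example):

mysql> SHOW INDEXES IN wp_comments;             -- look for FULLTEXT in the Index_type column
mysql> ALTER TABLE wp_comments ENGINE = InnoDB;
mysql> SHOW TABLE STATUS LIKE 'wp_comments';    -- the Engine column should now read InnoDB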

When I got on my server today I noticed the CPU would spike to 90% or more, then drop back to normal.  Every time it was MySQL spinning the CPU as hard as it could and always when accessing the half-dozen InnoDB tables I created.

It turns out that InnoDB needs some serious tuning before you turn it on. InnoDB really benefits from being allowed multiple CPU threads and from a properly sized buffer pool. There are also some general MySQL buffers and MyISAM settings that can help performance.

For MySQL on a Red Hat based distribution, and many other Linux distros, you can tweak the initial startup settings by editing the my.cnf file.  On CentOS it lives in /etc/my.cnf.

Here is what I added to the file in the [mysqld] section:

# default: 1M
innodb_additional_mem_pool_size=32M
# default: 8M
innodb_buffer_pool_size=256M
# default: 0
innodb_thread_concurrency=8
# default: 128K (131072)
join_buffer_size=64M
# default:8M
myisam_sort_buffer_size=32M
# default: 0 (disabled)
query_cache_size=128M
# default: 2M
sort_buffer_size=32M
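MySQL only reads my.cnf at startup, so restart the service for the new values to take effect. On CentOS:

service mysqld restart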

The MySQL InnoDB settings page will tell you more about each setting. Google can help you find out what the other settings mean.
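You can use the SHOW VARIABLES MySQL command to see what your current settings are; a LIKE filter keeps the output manageable:

mysql> SHOW VARIABLES LIKE 'innodb%';
mysql> SHOW VARIABLES LIKE '%buffer%';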

Most of the settings can be added to the my.cnf file and tweaked. Just make sure you read about a setting before blindly guessing at the correct values for your server. Learn what they do, tweak, and monitor. Hopefully you can avoid doing this to your CPU and response times:

InnoDB Tuning Fail

By increasing the buffer sizes and turning on multiple threads, my CPU is no longer spinning up to 90% or more when an SQL query comes along from WordPress. With 4GB of RAM on my server I am certain I can tune this even further, and there are other settings that will help, but this is a great start to getting MySQL to perform better than it has been.

Lesson Learned

The lesson here, especially if you are on a dedicated server, is to look at your MySQL configuration. The “out of the box” setup is not usually ready for production in ANY environment. I should know better, but when running a business solo it is one of the many “I’ll do it later” tasks that don’t get addressed until it bites you in the ass. Like it did for me today.

My guess is my quad-core 4GB RAM server can more than handle the load my website brings to it.    If I turn all of the right dials in the right direction I should be able to get the page response times to under 1 second.   As one of the most impatient web surfers you’ll meet I find my current 4-second average page response time is simply far too long to wait.

Some good articles on WordPress DB Tuning

http://wordpresswebmaster.net/mysql-database-optimization-on-wordpress/
http://www.prelovac.com/vladimir/wordpress-optimization-guide


WordPress dbDelta Bug Affects SLP

There is a bug that remains open in the WordPress Core tracking system that impacts the Store Locator Plus database tables. The bug degrades database performance after each successive update to the Store Locator Plus base plugin. The problem and resolution are described below.

The Problem

There is a glitch in a core WordPress function, dbDelta(), that creates duplicate table indexes. Store Locator Plus uses 3 indexes on the locations table. Indexes significantly increase the performance of database searches when used properly. For Store Locator Plus an index exists on the store name, the latitude, and the longitude. The latter two mean that whenever someone does a location search on your website, matching rows in the locations table are found much more quickly than they would be without an index. On large installs, those with hundreds or thousands of locations, the difference is notable.

To put the index and any other data table elements in place, SLP uses the built-in dbDelta function from WordPress.  This is the recommended method for managing table and index creation as well as structure updates.  It works great.  Most of the time.

The problem is that dbDelta() has a glitch that creates a new index every time the Store Locator Plus version changes. It should look for the store, latitude, and longitude indexes and create them only if they are missing. Instead, when it finds an existing index it creates a new one with a numeric suffix. After a number of upgrades to SLP you end up with a database that has multiple indexes like this:

  • sl_store
  • sl_store_1
  • sl_store_2
  • sl_latitude
  • sl_latitude_1
  • sl_latitude_2
  • sl_longitude
  • sl_longitude_1
  • sl_longitude_2

Those _# indexes are IDENTICAL to the non-suffixed version.

This is bad.   Any time a record is updated or added in the locations table it is MUCH slower as every index needs to be updated.  After just 3 updates MySQL is performing 9 index updates on every record operation versus 3.

It can also confuse and SLOW DOWN record searches, as the MySQL engine takes longer to determine the best index to use when it has to load the entire index directory. I assume MySQL is smart enough to stop at the “first perfect match”, but that is purely conjecture. If that is true the search impact would be minimal. If MySQL is not that intelligent and it looks at ALL INDEXES before selecting the “best match” during a search, the impact can be notable as well, though not nearly as dramatic as updating or inserting new data.
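If you want more than conjecture on your own server, EXPLAIN shows which index the optimizer actually picks for a given search (the query shape below is illustrative, not SLP’s actual query):

mysql> EXPLAIN SELECT * FROM wp_store_locator
    ->  WHERE sl_latitude BETWEEN 32.0 AND 33.0
    ->    AND sl_longitude BETWEEN -80.5 AND -79.5;

The key column of the output names the index MySQL chose; possible_keys lists every candidate it had to consider.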

Do You Have A Problem?

Unfortunately there are only 2 ways to tell.

First – your WordPress site starts throwing errors (usually hidden in the error log) coming from MySQL telling you the limit of 64 indexes has been exceeded. You shouldn’t need to increase this limit; it is per-table, and there are very few (if any) cases where 64 indexes on a single table are helpful.

Second – use the MySQL command line tool or phpMyAdmin and look at the wp_store_locator table. If it has _# indexes, they need to go.
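From the command line, listing every index on the table takes one statement:

mysql> SHOW INDEX FROM wp_store_locator;

Check the Key_name column of the output for entries such as sl_store_1 or sl_latitude_2.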

The Manual Fix

The way to fix installations by hand is to use the MySQL command line tool, or phpMyAdmin, and drop the _# indexes from the wp_store_locator table. Since Store Locator Plus is not the only plugin affected, you may want to check any other tables created by plugins and see if they suffer from the same issue. Contact the plugin author before randomly deleting indexes.

From the MySQL command line you can execute the DROP INDEX command as often as necessary based on how many duplicate indexes appear:

mysql> DROP INDEX sl_store_2 ON wp_store_locator;
mysql> DROP INDEX sl_latitude_2 ON wp_store_locator;
mysql> DROP INDEX sl_longitude_2 ON wp_store_locator;

You should leave the indexes without the _# intact.

The Automated Fix

I am looking into ways to detect and auto-delete the secondary indexes in Store Locator Plus in the 3.9.X+ updates; however, there is a danger of bloating the update/install process, which I want to avoid. I am also looking for ways to prevent the problem from happening in the first place and have been active on the reported bug to see if there is a way to get the WordPress Core developers to patch the problem.

If a workaround is found it will be noted in the updates road map for SLP.



MySQL – I Am a Dummy


Today I learned about an interesting command-line option for MySQL:
$ mysql --i-am-a-dummy
This is an alias for --safe-updates, which imposes three restrictions:

  1. Any UPDATE or DELETE that does not have a WHERE or LIMIT is rejected.
  2. Any SELECT without a LIMIT will be automatically limited to 1,000 rows in the result.
  3. Any join that would result in examining more than a million row combinations is rejected.

I thought it was funny at first, but the first restriction alone makes this useful for working on the command line on live servers. Have you ever meant to run something like DELETE FROM users WHERE id = … and forgotten that tiny little WHERE clause? Because I have, and all you want to do afterward is hide under your desk, on the assumption that if no one can see you then you must be invisible.
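Here is roughly what the safety net looks like in practice (the table is hypothetical, and the exact error text varies a bit between MySQL versions):

mysql> DELETE FROM users;
ERROR 1175 (HY000): You are using safe update mode and you tried to
update a table without a WHERE that uses a KEY column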


Upgrading Redmine From 0.8.6 to 0.9.3

After more than a year of using Redmine to help manage our projects, it was time to upgrade. Redmine helps us manage our bug lists, wish lists, and to-do lists. It helps us communicate with our clients effectively and efficiently through a web-based medium, in a consistent format that is easy to use for both our developers and our clients. During the past year there have been several changes, including the significant upgrades that arrived in v0.9.x some months back. Our busy schedule kept us from upgrading as each new release came out, and sadly we had fallen far behind. This past weekend we decided it was time to upgrade. The notes below record some of the problems we ran into and outline how we resolved them. If you are using Redmine for your own projects, we hope this guide will help walk you through a major version update of your own.

These are Cyber Sprocket’s notes from our upgrade.  For more information you may want to visit the official documentation site.

Our Environment

The environment we were running before upgrading to Redmine 0.9.3:

  • Redmine 0.8.6
  • Apache 2.2.7

Preparation

The first thing we ALWAYS do before upgrading a system is to store a local copy of the database and the source code. In order to make the archives as small as possible, we post a note on the system that Redmine will be offline and, at the posted time, remove all the session “crud” that has built up. The process includes a MySQL data dump, a session-file cleanup, and a tar/gzip archive.

  • Go to the directory ABOVE the redmine root directory:
    cd /<redmine-root-dir>; cd ..;
  • Dump MySQL Redmine data:
    mysqldump --user=<your-redmine-db-username> -p <your-redmine-databasename> > redmine_backup.sql
  • Remove the session files:
    rm -rf <redmine-directory>/tmp/sessions/*
  • Tarzip:
    tar cvfz redmine-backup.tgz redmine_backup.sql ./<redmine-directory-name>
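If you ever need to roll back, restoring is the mirror image of the dump (same credential placeholders as above):

mysql --user=<your-redmine-db-username> -p <your-redmine-databasename> < redmine_backup.sql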

Issues

Updating Rails

We realized after some back and forth that our RoR installation needed to be upgraded. Redmine 0.9.3 requires Ruby 1.8.6 or 1.8.7 (we had 1.8.6, luckily) with Rails 2.3.5 (which we needed to upgrade) and Rack 1.0.1 (which we never touched).

gem install rails -v=2.3.5

Fetching 0.9.3

We could not perform a simple svn update since we were on a 0.8.x branch. A new svn checkout was necessary. We opted to move our old Redmine install to a different path and do the checkout in our original location:

svn checkout  /redmine

Generating session_store.rb

Later versions of Redmine (even 0.8.x versions beyond 0.8.6) require a secret key in order for the session system to work. If you don’t have this you can’t log in. After much trial and error we found that the following command WILL WORK if you have the latest Redmine source (see Fetching 0.9.3) and the latest version of Rails (see Updating Rails). There is no file named config/initializers/session_store.rb in the code repository; it is created by the following rake command:

rake config/initializers/session_store.rb

Updating The Database

The database then needed to be migrated:

rake db:migrate RAILS_ENV=production

Database Upgrade Errors : Migrating Member_Roles and Groups

While performing the database update we immediately ran into a couple of errors about a table already existing. It turns out a simple renaming of the tables fixed the problem, with no apparent harm done.

The error message was:

Mysql::Error: Table 'member_roles' already exists:

The fix was as simple as logging into MySQL from the command line and renaming the table:

mysql> rename table member_roles to member_roles_saved;
mysql> rename table groups_users to groups_users_saved;

Switching from CGI to FCGID

It turns out that RoR does not play well with plain ol’ CGI processing via Apache when running Rails v2.3.5. We ended up having to upgrade our Apache server to enable mod_fcgid and tweak our new Redmine install to use it. We started by following an excellent guide to running Redmine on Apache. Below are our notes on this process to help save you some time:

  • Do not install fcgi, instead use Apache’s mod_fcgid
  • chmod 755 /var/log/httpd so fcgid can run from Apache and access the sockets directory it creates there
  • Modify <redmine-directory>/public/.htaccess to prevent looping with mod_rewrite

Installing FCGID

The official Apache mod_fcgid (http://httpd.apache.org/mod_fcgid/) is the Apache version; it seems newer, and we had more luck with it than with the Coremail-hosted version below.

Fetch the code

cd /usr/local/src/
wget 
tar zxvf mod_fcgid.2.3.5.tgz
cd mod_fcgid.2.3.5

Configure and Install

./configure.apxs
make
make install

Permissions

chmod 755 /var/log/httpd
service httpd restart

Install Ruby Gem fcgi

You will need to tell Ruby to work with fcgi for this to work:

gem install fcgi

Errors Installing fcgi gem

If you see this error:

Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

You probably need the fcgi development kit. Get it from here, build it & install it… THEN do the gem install fcgi again.

http://www.fastcgi.com/drupal/node/5

Prevent Redirects

You may end up with mod_rewrite redirect loops if you had a CGI version installed first. We commented out the non-fcgid lines and that kept things running smoothly.

Edit <redmine-directory>/public/.htaccess

Comment out all of the dispatcher rewrite rules except the FCGI rule for fcgid:

#<IfModule mod_fastcgi.c>
#       RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
#</IfModule>
#<IfModule mod_fcgid.c>
       RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
#</IfModule>
#<IfModule mod_cgi.c>
#       RewriteRule ^(.*)$ dispatch.cgi [QSA,L]
#</IfModule>

Getting Errors With FCGID?

This is a very common error.  For some reason Ruby + mod_fcgid do not always play well with each other.  We have two near-identical servers running CentOS 5, Apache 2.2.x, and the same exact versions of Ruby + Rails + gems installed.   Yet on one server Redmine works fine.  On the other we get this:

undefined method `env_table’ for nil:NilClass

The “magic pill” seems to be running Passenger.  While we didn’t believe this at first since we got it to work fine on our development server, it turns out that there are some gremlins buried deep within the bowels of Ruby & mod_fcgid.    These few steps fixed the problem on our production server:

gem install passenger
passenger-install-apache2-module

Edit the httpd.conf file and add these lines (check your paths that Passenger gives you during the install – they may be different on your server):

LoadModule passenger_module /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11
PassengerRuby /usr/local/bin/ruby

Restart httpd…

service httpd restart

Test your Redmine install.

Checking Logs

If you have problems check the log files in your Redmine installation directory, such as ./log/production.log. You may also want to check your Apache log files, assuming you’ve set those up. To log Apache messages you need to have an ErrorLog statement in your httpd.conf file that tells Apache where you want your log file written (normally /usr/local/apache/logs/redmine-error.log).


Easy Documentation for Git, MySQL, PHP, et cetera

This is what I do on my box to quickly find documentation, which you guys may find helpful.  Especially those of you on Linux—although you could do this on Windows too.

Most package managers make available ‘-doc’ packages, like php-doc, mysql-doc, and so on.  Install these for all the major software you use.

Next, install ‘screen’.

Now put this in your Bash config:

# Displays various types of documentation.

function doc() {
    case "$1" in
    'llvm')
        screen -t 'LLVM Documentation' w3m /usr/share/doc/llvm-doc/html/index.html ;;
    'erlang')
        screen -t 'Erlang Documentation' firefox /usr/share/doc/erlang-doc-html/html/doc/index.html ;;
    'python')
        screen -t 'Python Documentation' w3m /usr/share/doc/python3-doc/html/index.html ;;
    'php')
        screen -t 'PHP Documentation' w3m /usr/share/doc/php-doc/html/index.html ;;
    'ghc')
        firefox /usr/share/doc/ghc6-doc/index.html & ;;
    'postgresql')
        screen -t 'PostgreSQL Documentation' w3m /usr/share/doc/postgresql-doc-8.4/html/index.html ;;
    'mysql')
        screen -t 'MySQL Documentation' w3m /usr/share/doc/mysql-doc-5.0/refman-5.0-en.html-chapter/index.html ;;
    'apache')
        screen -t 'Apache Documentation' w3m /usr/share/doc/apache2-doc/manual/index.html ;;
    'j')
        screen -t 'J Documentation' w3m ~/Software/j602/help/index.htm ;;
    'lua')
        screen -t 'Lua Documentation' w3m /usr/share/doc/lua5.1-doc/doc/index.html ;;
    'git')
        screen -t 'Git Documentation' w3m /usr/local/share/doc/git-doc/index.html ;;
    'lighttpd')
        screen -t 'Lighttpd Documentation' w3m /usr/share/doc/lighttpd-doc/ ;;
    'plt-scheme')
        screen -t 'PLT Scheme Documentation' w3m /usr/share/plt/doc/index.html ;;
    'gambit')
        screen -t 'Gambit Documentation' w3m /usr/share/doc/gambit-doc/html/index.html ;;
    'tintin++')
        screen -t 'TinTin++ Documentation' zless /usr/share/doc/tintin++/tintin19.txt.gz ;;
    'sqlite')
        screen -t 'SQLite Documentation' w3m /usr/share/doc/sqlite3-doc/index.html ;;
    'django')
        screen -t 'Django Documentation' w3m /usr/share/doc/python-django-doc/html/index.html ;;
    'sbcl')
        screen -t 'SBCL Documentation' w3m /usr/share/doc/sbcl-doc/html/index.html ;;
    'boost')
        screen -t 'Boost Documentation' w3m /usr/share/doc/libboost-doc/HTML/index.htm ;;
    'smalltalk')
        screen -t 'GNU Smalltalk Documentation' info Smalltalk  ;;
    'haskell-tutorial')
        screen -t 'Haskell 98 Tutorial' w3m /usr/share/doc/haskell98-tutorial/html/index.html ;;
    'haskell-report')
        screen -t 'Haskell 98 Report' w3m /usr/share/doc/haskell98-report/html/index.html ;;
    'java')
        firefox "/home/eric/Documents/Books/Programming/Java SDK/index.html" & ;;
    esac
}

Replace ‘w3m’ with the browser you want to use.  And make sure the paths are correct.  If you’re on a Debian-based box, that’s where those doc packages will end up.

Now whenever you’re at the terminal you can simply run stuff like

$ doc git
$ doc postgresql

to browse through the official docs.

For Git in particular you will have to build the HTML docs.  In the Git source directory:

$ make html
$ sudo make install-html

There ya go, easy way to look up docs quickly.


Powerful Data Insertion Features in MySQL

There are several powerful techniques for getting your data into a MySQL database. MySQL supports non-standard extensions to SQL, giving you more flexibility in certain situations. If you want to take full advantage of MySQL, keep these tips in mind.

The Problem

Say you need to shove some new data into a table. Naturally, you want to INSERT. But it so happens that if the data is already there, you want to UPDATE the existing data. You don’t care whether it takes an INSERT or an UPDATE – you just want the new data in the table. You decide to do one of two things. You might try:

DELETE FROM yourtable WHERE `key`='yourkey';
INSERT INTO yourtable (`key`, data1, data2) VALUES ('yourkey', 1, 2);

This is simple and effective. (Note the backticks around key – KEY is a reserved word in MySQL, so it needs quoting when used as a column name.) You could also check to see if the record already exists, and if so, UPDATE. What you really wish you had is an “INSERT OR UPDATE” command.

Allow me to introduce two features specific to MySQL: REPLACE and ON DUPLICATE KEY UPDATE.

The Solutions

REPLACE

The following query is equivalent to the DELETE / INSERT pair above.

REPLACE INTO yourtable (`key`, data1, data2) VALUES ('yourkey', 1, 2);

The MySQL manual states: “REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE  index, the old row is deleted before the new row is inserted.”

So it simply deletes any row that gets in the way if there is one, and then inserts the new data. This can be quite handy. Do not confuse it with the “Replace” function, however, which is part of standard SQL. The “Replace” function is for replacing parts of strings. REPLACE, as shown above, just blazes a trail, inserting right over existing data if it is there. I love it.

ON DUPLICATE KEY UPDATE

A more powerful but potentially slower method is ON DUPLICATE KEY UPDATE.  The following query would accomplish the same thing as the previous two examples:

INSERT INTO yourtable (`key`, data1, data2) VALUES ('yourkey', 1, 2)
ON DUPLICATE KEY UPDATE data1=VALUES(data1), data2=VALUES(data2);

This will detect the duplicate key and, instead of performing the insert, perform the given update, referencing the new values with the VALUES() function. This is much faster than going back to the calling application with that logic, and also faster than other SQL-only means to the same end. Essentially, every INSERT may potentially trigger a duplicate key constraint violation; the ON DUPLICATE KEY UPDATE query traps that error and tries the update instead of throwing it.

You can also do fancier stuff like the following, with “item_id” as a unique key:

INSERT INTO yourtable (item_id, quantity)
VALUES ('foo', 2), ('bar', 4), ('baz', 5)
ON DUPLICATE KEY UPDATE quantity=quantity+VALUES(quantity);

This will try to insert the values given. If it encounters a duplicate key constraint violation, it will update the row, incrementing the quantity by the given amount. This is very powerful and quite a nice feature.

Caveats

Be careful using these techniques on tables with multiple unique keys or compound keys. Remember that unique constraint violations may occur on any constrained column – not just on your primary key.

Also be careful with REPLACE if you are using InnoDB as your database engine. If you have any ON DELETE CASCADE relationships defined for the table you are working with, remember that REPLACE may delete records during its execution, in preparation for INSERTs. That DELETE will trigger a cascade, deleting from related tables. This is one downside of the REPLACE query as compared to the ON DUPLICATE KEY UPDATE query.
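A minimal sketch of that cascade hazard (the schema is invented for illustration):

CREATE TABLE parents (
  id   INT PRIMARY KEY,
  name VARCHAR(50)
) ENGINE=InnoDB;

CREATE TABLE children (
  id        INT PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parents(id) ON DELETE CASCADE
) ENGINE=InnoDB;

INSERT INTO parents VALUES (1, 'foo');
INSERT INTO children VALUES (10, 1);

-- REPLACE deletes the old parent row before inserting the new one,
-- so the delete cascades and the child row is gone afterward:
REPLACE INTO parents VALUES (1, 'bar');
SELECT COUNT(*) FROM children;   -- returns 0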

All in all, these are quite useful techniques.

Further reading can be found at MySQL’s REPLACE documentation and ON DUPLICATE KEY UPDATE documentation pages.


Issues With MySQL

If I had my choice I’d ALWAYS choose PostgreSQL over MySQL. It is a stronger, more reliable, more robust database engine, without question. So why do I even think about MySQL, ever? Because I have to. It is my job to know technology and, unfortunately for the industry at large, MySQL is the de facto standard for nearly every open source web application in existence. The very predominance of MySQL does not make it better, just a necessary evil that we have to deal with every day. Unfortunately the pure power of numbers often forces us to create new systems in MySQL just for the sake of simplicity on our clients’ systems.

The Complaints List

So why do I dislike MySQL so much? There are a number of reasons. Granted, MySQL has done a good job in the past 2 years of finally putting decades-old technology into play. Things like stored procedures, though still quite a bit wonky in their early iterations, are no longer a “can’t do it” mark against MySQL. Same goes for views.

However, there are still many nagging issues out there that get to me every time I dig into MySQL.  Here is my short list as of the v5.1 release:

  • Want full text search? Can’t have foreign keys!
    This is because the MyISAM engine supports full text search but NOT foreign keys.
  • Want foreign keys?  Can’t have full text search!
    Same thing as above, but in reverse. InnoDB supports foreign keys (hey, who needs ’em, it’s only a relational DB we want) but alas does not provide any method for full text search.
  • Want foreign keys?  Can’t have static length records.
    Another give and take between MyISAM and InnoDB. MyISAM provides static-length records, which give MUCH faster access times on large lookup sets. InnoDB does not have this. Unfortunately the heaviest use of lookup tables happens to be in relational databases, which by definition require the foreign keys of InnoDB.
  • Default NOW()
    This staple of many database engines like Oracle and PostgreSQL allows you to give datetime fields a default that records the time a record was inserted or updated. Not available in MySQL. Yes, you can use TIMESTAMP, but you are only allowed ONE default CURRENT_TIMESTAMP per table (see the sketch after this list). So much for created/modified date + time fields.
  • Default <any function here>
    Expanding on the above concept… you can’t use ANY function as a default value. Yeah, it’s difficult to code in a DB engine, but not impossible. Nearly all the other engines allow for this. That means no default USER() either.
  • Stored Procedure Delimiters
    While stored procedures are a welcome addition, why do we need to redefine the standard end-of-command delimiter when creating the functions? Every other DB engine is smart enough to know you are in a stored procedure block and allow for nested semi-colons. Not MySQL; you need to change your delimiter to something funky that you’ll never use, like maybe //, and then remember to change it back when you’re done or all hell breaks loose.
  • Default Engine = MyISAM
    Yes, it is faster. Yes, it has full text search. BUT it is less stable than InnoDB AND it does not support foreign keys. A properly designed database application should really be using foreign keys. They ensure data integrity at the database versus application level and, until the advent of stored procedures in MySQL, were a good way to get deletes and updates to cascade. Yes, you can change the default, but shouldn’t a good database be relational and inherently more stable by default? How about making speed, instability, and lack of relations secondary? Even more fun is the fact that if you did create a table with the default MyISAM engine and then need to refer to it via a foreign key in InnoDB, you need to convert the entire table. Hope there’s not much data in there!
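To make the Default NOW() gripe concrete, here is a quick sketch of what MySQL 5.1 rejects (table names invented for illustration):

-- Only one TIMESTAMP column per table may default to CURRENT_TIMESTAMP,
-- so this created/modified pair fails with error 1293:
CREATE TABLE gripes (
  id       INT PRIMARY KEY,
  created  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  modified TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- And DATETIME columns cannot take a function as a default at all:
CREATE TABLE more_gripes (
  created DATETIME DEFAULT NOW()   -- rejected by MySQL 5.1
);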

Summary

While there are some good features behind MySQL, most notably the strong online community and user documentation, for true heavy lifting I’ll always argue that PostgreSQL can do the same thing better 99% of the time. Until I can convince enough people to make the conversion, I’ll keep adding to my “dislike list” for MySQL.


Custom Site & Store Builder with Energy Inc.

The Energy Detective (TED) is a consumer product that helps home users track their energy usage at the per-device or whole-household level. When Energy Inc., the makers of TED, needed to upgrade their site with an easy-to-update content management system (CMS) and a custom storefront, they came to Cyber Sprocket Labs.

Within months we had ported their old static-page driven site to our new custom site builder. They could now easily update their own content without getting developers involved, and better yet – the system protected them from inadvertently breaking their site design. The staff at Energy Inc. soon became experts at the system and added new content as well as new product models to the site.

The site also started with a simple storefront module that let Energy Inc. upload new products and track inventory levels, so customers knew when an item was put on backorder. The storefront allowed Energy Inc. to easily show and sell their wares while automating part of the order process on the back end.

Soon the orders started to roll in and Energy Inc. needed more sophisticated order tracking and management. Updates were made to add automated interfaces with FedEx for real time shipping quotes anywhere in the US and it’s territories. New order search and tracking features where added so that Energy Inc. knew what shipped, what was backordered, and what was being returned under their return merchandise authorization policy.

Energy Inc’s TED product was doing well, and the media started to notice. So did Google. As one of the first partners in Google’s new energy management program, Energy Inc. realized that their shared Linux server was not going to be able to handle the new influx of traffic. Luckily, Cyber Sprocket Labs had already been working on the Amazon Web Services cloud for more than 18 months. We knew our way around the system and helped Energy Inc. navigate the maze of cloud computing and served as a guide to the new platform. Energy Inc. decided to make the move to the nearly infinite scalability and on-demand compute environment of cloud computing.

Cyber Sprocket Labs helped migrate Energy Inc. over to the Amazon cloud in less than a week, with no downtime and a significant boost in processing power… just in time for Google’s big announcement.

Congratulations on your success, Dolph! Glad we could be there to help get your web services off the ground!

Technical Overview

Services Provided

  • Custom website builder
  • Custom shopping cart
  • Custom order processing and management system
  • Porting to Amazon Web Services Cloud

Platform Details