Posted on

Configuring Apache 2.4 Connections For WordPress Sites

Recently I upgraded my web server to PHP 5.6.14. Along the way the process managed to obliterate my Apache web server configuration files. Luckily the upgrade process saves copies of them, but one thing I forgot to restore was the settings that help Apache manage memory. Friday night around midnight, because this stuff ALWAYS happens when you’re asleep… the server crashed. Since it was out of memory with a bazillion people trying to surf the site, every time I restarted the server I could not log in fast enough to get a connection and fix the problem.

Eventually I had to disconnect my AWS public IP address, connect to a private address with SSH, and build the proper Apache configuration file to ensure Apache didn’t go rogue and try to take over the Internet from my little AWS web server.

Here are my cheat-sheet notes about configuring Apache 2.4 so that it starts asking site visitors to “hold on a second” when memory starts getting low. That is much nicer than grabbing more memory than it should and just crashing EVERYTHING.

My Configuration File

I put this new configuration file in the /etc/httpd/conf.d directory and named it mpm_prefork.conf. That should help prevent it from going away on a future Apache upgrade. This configuration is for an m3.large server running with 7.4GB of RAM with a typical WordPress 4.4 install with WooCommerce and other plugins installed.

# prefork MPM for Apache 2.4
#
# use httpd -V to determine which MPM module is in use.
#
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxRequestWorkers for the lifetime of the server
#
# MaxRequestWorkers: maximum number of server processes allowed to start
#
#
# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
#
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
#
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
#
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
#
# ServerLimit = sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process
# MaxRequestWorkers = number of simultaneous child processes to serve requests; must also increase ServerLimit
#
# If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle,
# Apache httpd may not start or the system may become unstable.
#
# MaxConnectionsPerChild = how many requests are served before the child process dies and is restarted
# find your average requests served per day and divide by average servers run per day
# a good starting default for most servers is 1000 requests
#
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80
#
#

ServerLimit 64
MaxRequestWorkers 64
MaxConnectionsPerChild 2400

The Directives

With Apache 2.4 you only need to adjust 3 directives: ServerLimit, MaxRequestWorkers (renamed from MaxClients in earlier versions), and MaxConnectionsPerChild (renamed from MaxRequestsPerChild).
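Before changing anything it is worth confirming that prefork really is the active MPM, and reloading gracefully afterward so you don’t drop in-flight requests. A quick sketch, assuming a stock Amazon Linux / CentOS install where Apache runs as httpd:

# httpd -V | grep -i mpm

# apachectl graceful

The first command should report “Server MPM: prefork”; the second picks up the new limits without a hard restart.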

ServerLimit / MaxRequestWorkers

ServerLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. MaxRequestWorkers is the number of simultaneous child processes used to serve requests. This seems a bit redundant, but it is a side effect of using the prefork MPM module, which is a threadless design. That means it runs a bit faster but eats up a bit more memory. It is the default mode for Apache running on Amazon Linux. I prefer it as I value stability over performance, and some older web technologies don’t play well with multi-threaded designs. If I were going to go with a multi-threaded (well, event-driven) environment I’d switch to nginx. For this setup, setting ServerLimit and MaxRequestWorkers to the same value is fine. This says “don’t ever run more than this many web server processes at one time”.

In essence this is the total number of simultaneous web connections you can serve at one time. What does that mean? With the older HTTP/1.x protocols, every element of your page that comes from your server is a connection. The page text, any images, scripts, and CSS files are each a separate request. Luckily most of this comes out of the server quickly, so a page with 20 web objects on it will use up 20 of your 64 connections but will spit them out in less than 2 seconds, leaving those connections ready for the next site visitor while the first guy (or gal) reads your content. With newer HTTP/2 (and SPDY) connections a single process (worker) may handle multiple content requests from the same user, so you may well end up using only 1 or 2 connections even on a page with multiple objects loading. While that is an over-simplification, the general premise shows why you should move your site to https and get on services that support HTTP/2.
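If you are curious whether a given site actually negotiates HTTP/2, a reasonably recent curl build can tell you; the URL below is just a placeholder:

curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://www.example.com

A result of 2 means the multiplexing described above is in play; 1.1 means every object is still paying for its own request.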

Calculating A Value

# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80

There you go, easy, right? Figuring out RAM resources can be complicated, but to simplify the process start with the built-in Linux free command, and I suggest installing htop, which provides a simpler interface for seeing what is running on your server. You will want to do this on your live server under normal load if possible.

Using free -m from the Linux command line will tell you the general high-level overview of your server’s memory status. You want to know how much is installed and how much is in use. In my case I have 7400MB of RAM and 2300MB was in use.

Next you want to figure out how much is in use by Apache and how much an average web connection is using per request. Use htop, filter to show only the httpd processes, and do math. My server was using 1900MB for the httpd processes. The average RAM per process was 87MB.
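If you would rather not eyeball htop, ps can do the same math for you. A rough one-liner, assuming the Apache processes are named httpd as they are on Amazon Linux (RSS is reported in KB, hence the divisions by 1024):

ps -C httpd -o rss= | awk '{ sum+=$1; n++ } END { printf "%d procs, %dMB total, %dMB avg\n", n, sum/1024, sum/(n*1024) }'

The totals should land close to what the RES column in htop shows.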

You can now figure out how much RAM is used by “non-web stuff” on your server. Of the 2300MB of used RAM, Apache was using up 1900MB. That means my server uses about 400MB for general system overhead and various background processes like my system-level backup service. That means on a “clean start” my server should show about 7000MB available for web work. I can verify that by stopping Apache and running free -m after the system “rests” for a few minutes to clear caches and other stuff.

Since I will have 7000MB available for web stuff I can determine that my current WordPress configuration, PHP setup, and other variables will come out to about 87MB being used for each web session. That means I can fit about 80 web processes into memory at one time before all hell breaks loose.

Since I don’t like to exhaust memory and I’m a big fan of the 80/20 rule, I set my maximum web processes to 64: 7000MB / 87MB = 80, and 80 * 0.8 = 64.

That is where you want to set your ServerLimit and MaxRequestWorkers.

MaxConnectionsPerChild

This determines how long those workers are going to “live” before they die off. Any worker will accept a request to send something out to your site visitor. When it is done it doesn’t go away. Instead it tells Apache “hey, I’m ready for more work”. However, every so often one of the things that is requested breaks. A bad PHP script may be leaking memory, for example. As a safety valve Apache provides the MaxConnectionsPerChild directive. This tells Apache to kill the child process after it has served this many requests; Apache will start a new process to replace it. This ensures any memory “cruft” that has built up is cleared out should something go wrong.

Set this number too low and your server spends valuable time killing and creating Apache processes. You don’t want that. Set it too high and you run the risk of “memory cruft” building up and eventually having Apache kill your server with out-of-memory issues. Most system admins try to set this to a value that has each process reset about once every 24 hours. That is hard to calculate unless you know your average objects requested every day, how many processes served those objects, and other factors; HTTP versus HTTP/2 can come into play, not to mention fluctuations like weekend versus weekday load. Most system admins target 1000 requests. For my server load I am guessing 2400 requests is a good value, especially since I’ve left some extra room for memory “cruft”.
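To put rough numbers on that once-a-day target, pull a daily request count from your access log and divide by the number of workers serving them. The log path and the 150,000 figure below are hypothetical; substitute your own:

wc -l /var/log/httpd/access_log

150,000 requests / 64 workers ≈ 2,300 requests per child per day, which is how I landed in the neighborhood of 2400.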

Posted on

Boosting WordPress Site Performance : Upgrade PHP

As with every single WordCamp I’ve attended, there was something new to be learned no matter how much of a veteran you are. My 5th WordCamp, WordCamp US 2015, was no different. There are a lot of things I will be adding to my system admin and development tool belts after the past 48 hours in Philadelphia.

Today’s update that was just employed on the Store Locator Plus website:   Upgrading PHP.

Turns out that many web hosting packages and server images, including the Amazon Linux image, run VERY OLD versions of PHP. I knew that. What I didn’t know was the PERFORMANCE GAIN of upgrading even a minor version of PHP. PHP 5.6 is about 25% faster than PHP 5.3. PHP 5.3 was the version I was running on this site until midnight.

WP Performance on PHP. Source: http://talks.php.net/fluent15#/wpbench

The upgrade process? A few dozen command-line commands, testing the site, and restoring the server name configurations from the Apache config file, which the upgrade process auto-saved for me. EASY.

What about PHP 7?   That is 2-3x faster.  Not 2%.  100 – 200%.   WOW!    As soon as Amazon releases the install packages for their RHEL derivative OS it will be time to upgrade.

 

If you are not sure which version your web server is running (it can be different than the command line version on your server) you can find that info in the Store Locator Plus info tab.
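From a shell you can compare the two directly. The first command reports the command-line PHP; the second, assuming Apache with mod_php as on a stock Amazon Linux setup, shows which PHP module the web server actually loaded:

php -v

httpd -M | grep -i php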

SLP PHP Info

The take-away? If you are not running PHP 5.6, the latest release of PHP prior to PHP 7, get on it. One of the main components of your WordPress stack will run a lot faster and pick up bug fixes and security patches along the way.

Posted on

Setting Up AWS Elastic Beanstalk Tools On Linux

AWS Beanstalk WordPress

AWS provides an “officially unsupported” set of scripts for Windows, OSX, and Linux that help with managing and deploying your AWS Elastic Beanstalk applications. This can be useful as I could not find a simple way to SSH into my Elastic Beanstalk-managed EC2 instance using standard methodologies. I’m sure I missed something, but deploying and updating via git commands is going to be easier and is my preferred production method; might as well go there now.

Download and install AWS Elastic Beanstalk Command Line Tool.

Unzip the file.

You will now have a directory that contains three command sets. In the appropriately-named eb subdirectory is a series of OS command-line scripts driven by “eb” commands. In the api directory is a full-fledged ruby-based implementation of very long command names that requires ruby, ruby-devel, and the JSON gem to function. In AWSDevTools is an extension of git commands that adds new AWS-specific scripts to the git command.

 

Activating “eb” Command Line

Edit your OS PATH variable to point to your unzipped download directory. I changed my unzipped directory to something shorter and put it in my home directory. To activate the eb command:

Add the path to the proper Linux Python directory (I am running 2.7.X).  My CentOS .bash_profile:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/

export PATH

export AWS_CREDENTIAL_FILE=$HOME/.ssh/aws.credentials

Save and reload .bash_profile into my current environment (next time you log out / in this will not be necessary… and yes, dot-space-dot is correct):

# . .bash_profile
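A quick smoke test that the PATH edit took, using the old 2.6.x eb tool (exact output varies by version):

# which eb

# eb status

which should print the python2.7 script location; eb status will complain until you run eb init inside a project, which is expected at this point.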

Activating Extended Command Line

The “extended” command line consists of the ruby-based scripts that give you some very long command names that do a lot of different things.

First make sure ruby, ruby-devel, and the JSON gem are installed. For CentOS:

# yum install ruby ruby-devel

# gem install json

Go create an AWS credentials file.

I put mine in my .ssh directory.  It looks like this (use your key IDs):

AWSAccessKeyId=<your-access-key>
AWSSecretKey=<your-secret-key>
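Since this file holds live credentials, lock its permissions down the same way you would an SSH private key (the path matches the AWS_CREDENTIAL_FILE set earlier):

# chmod 600 $HOME/.ssh/aws.credentials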

Read the article on Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1 and setup a unique IAM account for this.  Using your main AWS login credential is not recommended.  If they get compromised…   well… just don’t do that.

Then edit your PATH using the same methodology as noted above.  

This time adding the api directory to your path:

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/:$HOME/aws-elb-2.6.4/api/bin/

export PATH

OK, now add this to your current running Linux environment:

# . .bash_profile

Test.

elastic-beanstalk-describe-applications

It will likely come back with “no applications found”.

Setup git Tools For AWS

Yup, same idea as above. Edit your PATH to include the git tool kit, but with a slight twist: once you do that you will need to run the setup command noted below in each repository where you want the AWS tools.

Edit your PATH and invoke it with the dot-space-dot trick noted above.

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/:$HOME/aws-elb-2.6.4/api/bin/:$HOME/aws-elb-2.6.4/AWSDevTools/Linux

export PATH

New tricks… go set this up in your project directory.

Your project directory is where your WordPress PHP application resides and where you’ve created a git repository to manage it. You’ve already done your git init and committed stuff to the repository. Dig around this site or the Internet to find out how to do that if you’re not sure. Again, I recommend the Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1 article as it has some special Elastic Beanstalk config files in it that will be used by Elastic Beanstalk to connect RDS dynamically and set your WP Salt values.

For this to work you are going to need Python (same as with “eb” above) and the Python Boto library installed.

If you don’t have boto yet, you install it on CentOS with:

# sudo yum install python-boto

Assuming you already have your WordPress stuff in a git repo, go to that directory.

In my case /var/www/wpslp holds my WordPress install that has been put into a git repo.

# cd /var/www/wpslp/

Now setup the git extensions using this command:

# AWSDevTools-RepositorySetup.sh

Test.

If everything is setup correctly you can check the git commands with something like:

# git aws.push

It will likely come back with an “Updating the AWS Elastic Beanstalk environment None…” message.

Either that or it will update the entire Internet, or at least the Amazon store, with your WordPress code.

 

Combined with the Elastic Beanstalk environment you set up from the previous article on the subject, you are ready to go conquer the world with your new git-deployed WordPress installation.

You can learn more about setting up the AWS-specific git parameters and how to use git with AWS and this toolkit in the git Develop, Test, and Deploy article.

Next I will figure out how to marry the two and will share my crib notes here.

 

Posted on

Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1

AWS Beanstalk WordPress

I spent a good part of the past 24 hours trying to get a basic WordPress 4.2.2 deployment up-and-running on Elastic Beanstalk. It is part of the “homework” in preparing for the next generation of store location and directory technology I am working on. I must say that even for a tech geek that loves this sort of thing, it was a chore. This article is my “crib sheet” for the next time around. Hopefully I don’t miss anything important, as I wasted hours chasing my own rear-end trying to get some things to work.

I used the Deploying WordPress with AWS Elastic Beanstalk guide fairly extensively for this process. It is easy to miss steps and it is not completely up-to-date with its screen shots and information, which makes some of it hard to follow the first time through. I will try to highlight the differences here when I catch them.

The steps here will get a BASIC non-scalable WordPress installation onto AWS. Part 2 will make this a scalable instance. If my assumptions are correct, which happens from time-to-time, I can later use command-line tools with git on my local dev box to push updated applications out to the server stack. If that works it will be Part 3 of the series on WP Elastic Beanstalk Deployment.

Getting Started

The “shopping list” for getting started using my methodology. Some of these you can change to suit your needs, especially the “local dev” parts. Don’t go setting all of this up yet; some things need to be set up in a specific order. This is just the general list of what you will be getting into. In addition to this list you will need lots and lots of patience. It may help to be bald; if not, you will lose some hair during the process.

 

Part 1 : Installation

  • A local virtual machine.  I use VirtualBox.
  • A clean install of the latest WordPress code on that box, no need to run the setup, just the software install.
  • An AWS account.
  • A “WP Deployment” specific AWS user that has IAM rules to secure your deployment.
  • AWS Elastic Beanstalk to manage the AWS Elastic Load Balancer and EC2 instances.

Part 2 : Scalability

  • AWS S3 bucket for storing static shared content (CSS rules, images, etc.)
  • AWS Elasticache for setting up Memcache for improved database performance.
  • AWS Cloudfront to improve the delivery of content across your front-end WordPress nodes.
  • AWS RDS to share the main WordPress data between your Elastic Beanstalk nodes.

Creating The “Application”

The first step is to create the web application.  In this case, WordPress.

I recommend creating a self-contained environment versus installing locally on your machine, but use whatever you’re comfortable with. I like to use VirtualBox, sometimes paired with Vagrant if I want to distribute the box to others, with a CentOS GUI development environment. Any flavor of OS will work as the application building is really just hacking some of the WordPress config files and creating an “environment variables” directory for AWS inside a standard WP install.

Got your box booted?  Great!

Fetch the latest download of WordPress.

Install it locally.

Remove wp-config-sample.php.

Create a new wp-config.php that looks like this:

<?php

// An AWS ELB friendly config file.

/** Detect if SSL is used. This is required since we are
terminating SSL either on CloudFront or on ELB */
// isset() guards avoid PHP notices when the headers are absent
if ( ( isset($_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO']) AND ($_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO'] == 'https') )
     OR ( isset($_SERVER['HTTP_X_FORWARDED_PROTO']) AND ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') ) )
{ $_SERVER['HTTPS'] = 'on'; }

/** The name of the database for WordPress */
define('DB_NAME', $_SERVER["RDS_DB_NAME"]);

/** MySQL database username */
define('DB_USER', $_SERVER["RDS_USERNAME"]);

/** MySQL database password */
define('DB_PASSWORD', $_SERVER["RDS_PASSWORD"]);

/** MySQL hostname */
define('DB_HOST', $_SERVER["RDS_HOSTNAME"]);

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

/**#@+
 * Authentication Unique Keys and Salts.
 * Change these to different unique phrases!
 */
define('AUTH_KEY',        $_SERVER["AUTH_KEY"]);
define('SECURE_AUTH_KEY', $_SERVER["SECURE_AUTH_KEY"]);
define('LOGGED_IN_KEY',$_SERVER["LOGGED_IN_KEY"]);
define('NONCE_KEY',$_SERVER["NONCE_KEY"]);
define('AUTH_SALT',$_SERVER["AUTH_SALT"]);
define('SECURE_AUTH_SALT', $_SERVER["SECURE_AUTH_SALT"]);
define('LOGGED_IN_SALT', $_SERVER["LOGGED_IN_SALT"]);
define('NONCE_SALT', $_SERVER["NONCE_SALT"]);

/**#@-*/

/**
 * WordPress Database Table prefix.
 *
 * You can have multiple installations in one database if you give each a unique
 * prefix. Only numbers, letters, and underscores please!
 */
$table_prefix  = 'wp_';

/**
 * For developers: WordPress debugging mode.
 *
 * Change this to true to enable the display of notices during development.
 * It is strongly recommended that plugin and theme developers use WP_DEBUG
 * in their development environments.
 */
define('WP_DEBUG', false);
/* Multisite */
//define( 'WP_ALLOW_MULTISITE', true );

/* That's all, stop editing! Happy blogging. */

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
        define('ABSPATH', dirname(__FILE__) . '/');

/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');

Do not move this wp-config.php file out of the root directory. Keeping it a level up is a common security practice, but then it will be missing from your AWS deployment. There are probably ways to secure this by changing your target destination when setting up AWS CloudFront, but that is beyond the scope of this article.

Settings like $_SERVER['RDS_USERNAME'] will come from the AWS Elastic Beanstalk environment you will create later. These are set dynamically by AWS when you attach the RDS instance to the application environment. This ensures the persistent data for WordPress, things like your dynamic site content including pages, posts, users, order information, etc., is shared on a single highly-reliable database server, and each new node in your scalable app pulls from the same data set.

Settings for the “Salt” come from a YAML-style config file you will add next.     This is bundled with the WordPress “source” for the application to ensure the salts are the same across each node of your WordPress deployment.     This ensures consistency when your web app scales, firing up server number 3, 4, and 5 while under load.

Create a directory in the root WordPress folder named .ebextensions.

Fetch new salts from WordPress.

Create a new file named keys.config in the .ebextensions directory (Elastic Beanstalk only processes .ebextensions files ending in .config) that looks like this, but using YOUR salts:

option_settings:
- option_name: AUTH_KEY
  value: '0VghKxxxxxn?%H$}jc5.-y1U%L)*&Ha/?)To<E>vTB9ukbd-FNoq^+.4A+I1Y/zp'
- option_name: SECURE_AUTH_KEY
  value: 'z 7)&E~NjioIREE@g+TKs-~yO-P)uq2Zm&98Zw>GK_rYb_}a,C#HD[K98ALxxxxx'
- option_name: LOGGED_IN_KEY
  value: 'yq@K{i=z(xxxxxm1VOi80~.H?[,h+F+_wua]I:z-YZF|a-vEV[n/6pRBlw+qAe^q'
- option_name: NONCE_KEY
  value: 'Bq=kbD|H#iMt5#[d[qURMP8C}xxxxxf[WaI6.oF5=r1h#:E?BZ-L28,7x~@oZw#7'
- option_name: AUTH_SALT
  value: 'O;4uq817 CSs3-ZAUY>e%#xxxxx<:u~=Is4d6:CI3io;aL<h]+x~;S_fc3E oEB1_'
- option_name: SECURE_AUTH_SALT
  value: 'nF94Rasp-0iaxxxxxm:|e82*M9R!y>% b68[oN|?_&4MRbl.)n8uB-ph|*qIPq|e'
- option_name: LOGGED_IN_SALT
  value: '&Ah^OIb<`xxxxx+lKV=zFER_^`+gA%.UWCIy|fJ+RfKiYKBP^&,[|%6K<%C[eU]n'
- option_name: NONCE_SALT
  value: 'ZiKejG|xxxxx k3>nr)~AN5?*hd!aO-)E^fR^^!_PR1n[oq{??F`,NQmdfE2Mj:`'

Zip up your application to make it ready for deployment.

Do NOT start from the parent directory. The zip should be built from inside the WordPress root directory. On Linux I used this command from the main WordPress directory where wp-config.php lives:
zip -r ../wordpress-site-for-elb.zip .
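It is worth a quick sanity check that the hidden .ebextensions directory made it into the archive; a zip built from the wrong directory silently drops it:

unzip -l ../wordpress-site-for-elb.zip | grep ebextensions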

Create The Elastic Beanstalk Environment

Login to the AWS Console.

Go to Elastic Beanstalk.

Select Create New Application.

 

AWS ELB application info

Select Create Web Server.

AWS ELB Web Server Environment

Select the default permissions (I didn’t have a choice here).

AWS ELB Permissions

Set the Environment to PHP and Load Balancing, auto scaling.

AWS ELB Environment Type

Upload the .zip file you created above as the source for the application.
Leave Deployment Limits at their defaults.
As a side note, this creates an application that you can later use for other environments, making it easy to launch new sites with their own RDS and CloudFront settings but using the same WordPress setup.

AWS ELB Application Version

Set your new Environment Name.
If your application name was unique you can use the default.
If your application name is “WordPress” it is likely already in use on Elastic Beanstalk; try something more unique.

AWS ELB Environment Name

Tell Elastic Beanstalk to create an RDS instance for you.
I chose not to put this in a VPC, which is the default.
The guide I linked to above shows a non-VPC setup but then gives instructions for a VPC deployment. This caused issues.
Some instance sizes for both RDS and the EC2 instance Elastic Beanstalk creates will ONLY run in a VPC (anything at a “t” level).
You will need to choose the larger “m-size” instances for RDS and EC2, otherwise the setup will fail after 15-20 minutes of “spinning its wheels”.

AWS ELB Create RDS not in VPC

Set your configuration details.

Choose an instance type of m*. I chose m3.medium the first time around, but m1.small should suffice for a small WP site.

Select an EC2 key pair to be able to connect with SSH. If you did not create one on your MAIN AWS login, go to the EC2 console and create one now. Save the private key on your local box and make a backup of it.

The email address is not required; I like to know if the environment changed, especially if I did not change it.

Set the application health check URL to
HTTP:80/readme.html

Uncheck rolling updates.

Defaults for the rest will work.

AWS ELB Configuration Details

You can set some tags for the environment, but it is not necessary. Supposedly they help in reporting on account usage, but I’m not that far along yet.

AWS ELB Tags

Setup your RDS instance.
Again, choose an m* instance as the t* instances will not boot unless you are in a VPC.
If you choose the wrong instance Elastic Beanstalk will “sit and spin” for what seems like a decade before booting to “gray state”, which is AWS terminology for half-ass and useless.
If you cannot tell, this was the most frustrating part of the setup as I tried SEVERAL different instance classes. Each time the environment would hang and then take forever to delete.

Enter your DB username and password.
They will be auto-configured by the wp-config.php hack you made earlier. I do recommend, however, saving these somewhere in case you need to connect to MySQL remotely. I hosed my home and siteurl values and needed to go to my local dev box, fire up the MySQL command line, and update the wp_options table after I booted my application in Elastic Beanstalk. Having the username/password for the DB is helpful for that type of thing.
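If you ever need to make that same repair, something along these lines works from any box that can reach the database; the endpoint, credentials, database name, and URL here are all placeholders:

mysql -h your-rds-endpoint.rds.amazonaws.com -u dbuser -p wordpress -e "UPDATE wp_options SET option_value='http://your-env.elasticbeanstalk.com' WHERE option_name IN ('siteurl','home');"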

AWS ELB RDS Config

Review your settings, launch and wait.

Reviewing ELB Settings

When you are done your Elastic Beanstalk should look something like this:

AWS ELB Web Tier Final Config
AWS ELB Data and Network Final Config

Useful Resources

Deploying WordPress with AWS Elastic Beanstalk – single or multiple zone, fully scalable, cached.

Deploying a WP install with git on ELB – single zone and may not scale.

 

Posted on

The Net Neutrality Debate

I’ve been seeing a lot of articles and commentary from people regarding Net Neutrality and the ongoing debate about the FCC’s involvement.   It is easy to boil down the issue to a simple “for” or “against” Net Neutrality stance.    Today that generally means “for the FCC involvement” or “against the FCC involvement”.   Like most things in life, the decision is not that simple.

What Is Net Neutrality

In simple terms, I’ll use my version of what Tim Wu said: Net Neutrality declares that Internet Service Providers (ISPs) should not be limiting service to or from Internet content. In other words, Comcast should not make your connection to NetFlix run far slower than a connection to their competing online video services. Comcast, and others, have used their control of the “Internet pipes” (which Al Gore may or may not have invented) to go to companies like NetFlix and demand a “ransom”. “Pay us a bazillion dollars or we will make your movies so slow people would rather read books than use your service” were the general terms of the agreement. Of course I’m generalizing here and am only using a completely fictional account of past historic events, but something similar to this is already going on.

In general most people, including nearly every single Internet-related service in the United States, are 100% for Net Neutrality. Except the ISPs. Comcast HATES it, mostly because they hate anything that makes their customers happy. A bunch of other companies including AT&T, Verizon, and many of the other biggest established price-fixing corporations also hate Net Neutrality. It takes away a big part of the future revenue that continues to bolster their profits by billions of dollars. They absolutely do not want to rework budget forecasts that make for huge dividends and stock price increases over the next 20 years.

Why Net Neutrality Now Sucks

As seems to be the case any time someone comes up with a good idea, the Federal Government has gotten involved and completely screwed things up.    Leave it to the current administration to make matters even worse.    In a brilliant play, the politicians that want the US Government to control-and-tax every single aspect of your life have commandeered the previously-sane concept of “Net Neutrality” and applied it to THEIR VERSION of Net Neutrality which means “The FCC rules the Internet”.    They do this by reclassifying all Internet Service Providers under Title II of the US Telecommunications Act of 1934 (Title 47, Chapter 5, Subchapter II to be specific).   This is what is generally known in the Net Neutrality discussion as “Title II”.

The short layperson’s version of what this means is that the FCC will make all the rules about Internet access.    They will decide how it operates, what taxes, fees, and related income-grabbing levies to assess.    They will decide WHO gets to provide those services by refusing to grant licenses when a new ISP comes in to threaten established ISPs.     While the incumbent ISPs have fought hard against Net Neutrality in its original form, they are now secretly “doing the happy dance” while the new incumbent-friendly softer version of Net Neutrality managed by the FCC is on the table.

Yes, the incumbent ISPs will be required to adhere to yet more government regulations, but at the end of the day the Title II version of Net Neutrality kills any innovation in “last mile” Internet service (last mile is the part you, as a consumer, care about).     Title II Net Neutrality means you will likely NEVER see a new ISP offering to come onto the market.   Like your electric utility, what you have now is what you’ve got FOREVER.    It also means that you are VERY LIKELY to be paying higher fees to your ISP as they claim the new regulations increase their costs due to compliance.  You will end up paying MORE for the same sucky service you already have.

The amazing part about Title II Net Neutrality is that it does NOT explicitly prevent the ISPs from creating those “Internet fast lanes” you keep hearing about.    It just means that if they do something the FCC doesn’t like they pay a fine.    There are a thousand different ways to craft those fast lanes that are perfectly FINE under Title II which means the ISPs aren’t breaking the rules and won’t have to worry about the FCC stopping them.

Some interesting notes about Title II Net Neutrality: Tom Wheeler is pushing for it, and he was a BIG TIME cable company lobbyist. The current administration, which gave us unprecedented domestic surveillance, is all for it. Most Internet content providers, the companies whose services you like to visit most on the Internet, are AGAINST this flavor of Net Neutrality.

Bottom line:   Title II Net Neutrality is a BAD idea.

Net Neutrality That Doesn’t Suck

What is now largely ignored is a proposal for a NEW BILL, crafted by John Thune and Fred Upton, that provides the version of Net Neutrality that everyone wanted when they said they supported “Net Neutrality”, myself included. This bill is designed SPECIFICALLY to take control out of the FCC’s hands. Everyone knows the FCC is a regulatory quagmire with a penchant for generally “screwing things up” and making it nearly impossible for a small business to operate in any industry they touch. The new bill AMENDS the Communications Act of 1934 and specifically prevents section 706 from being used to take control of Internet services.

This is truly the best approach. However the Obama Administration, the FCC, and the incumbent ISPs are STRONGLY OPPOSED to this measure. In fact it is largely rumored that the current administration has been pushing the FCC to take immediate and decisive action to pass the Title II version of Net Neutrality before the proposed bill can get any traction. Various pundits have been employed to bury the discussion about the proposed bill.

Most small businesses that live-and-die on the Internet support this type of bill. Personally I don’t think this bill is perfect, but it is a whole-hell-of-a-lot better than Title II Net Neutrality and it is better than “let the ISPs do whatever the hell they want”.

Thune Upton Net Neutrality is a GOOD idea.

Summary

The sad part of all this is that if government control over the “last mile” was not already screwing things up we’d have more than one or two true broadband ISPs to choose from in a majority of cities and towns in America. If that were the case there would be TRUE COMPETITION in the market and Net Neutrality would be a non-issue. However we live in a state of Internet access where a quasi-government-regulated market means the only person getting screwed is the consumer and the ISPs hold all the cards.

If there is any argument that more clearly highlights why Title II Net Neutrality should NOT be passed, it is that lack of innovation and service selection over the last mile. You cannot start a company that provides high speed Internet to homes and businesses without incurring major obstacles. Why? Government fees and regulations. This will only get FAR WORSE if Title II Net Neutrality is passed.

Thune Upton Net Neutrality is not perfect.    It is, however, a necessary evil given the current state of affairs with Internet access in America.   Sadly, Thune/Upton SUCK at marketing.   They needed to take over the term Net Neutrality for their own, much like the FCC did.    Now they need to come up with a new “sexy” label they can apply to their bill that the general population can get stuck in their brain.     Maybe something as simple as “Unsucky Net Neutrality”.    OK, I’m not a marketing guy either.

Whatever they call it, they need to get traction NOW before Obama and his FCC cronies do something horrible that permanently damages Internet-centric innovation in America.

Net Neutrality sucks except when it doesn’t.

 

 

Posted on

Google Wallet versus PayPal

Google Thats An Error Banner

PayPal went offline for over an hour last night, marking the second time in a month and the third time in the past 2 months that the service was unavailable. PayPal services have become increasingly unstable over the past year, with numerous technical issues and down time that has impacted hundreds-of-thousands, if not millions, of users. My business was impacted last night during one of the busiest days of the past 2 years, as the long-awaited Store Locator Plus 4 release was launched yesterday morning.

Once again I set out to find a suitable replacement. After some research into Amazon Payments, which has the same fee schedule as PayPal yet also has a “reserve” clause that holds your funds for 7-to-14 days, I opted not to use them. Same with Elavon and their ridiculous 3.5% + $0.40 per transaction fee, a monthly processing fee, batch processing fees, and another myriad of add-on fees and costs that I had completely forgotten about after leaving the hard goods retail world a decade ago. Same for almost all “merchant services” (talk about a misnomer) credit card processors out there, charging more fees for less service. That left only two choices on my short list: PayPal and Google Checkout, which is soon to be known as Google Wallet for Digital Goods.

As you can probably surmise by the name of the service, Google Wallet for Digital Goods will ONLY be useful for merchants that sell and ship digital goods.   If you are shipping physical goods you can stop reading now, suck it up and go with PayPal.    For those selling digital goods online or via mobile platforms you may want to keep reading.  Maybe.   As I learned along the way, the re-branding of Google Checkout to Google Wallet remains half-ass.    Clearly Google has not assigned their “A-Team” to this project and it appears to be the red-headed stepchild of the Google Business offerings.    As such I decided, like those with physical goods online stores, to just “suck it up” and stick with PayPal and all the warts that come with it.     Yes, PayPal rates are higher than Google’s rates.  Yes, PayPal SUCKS at helping merchants fight fraudulent chargebacks and actually turns a profit processing those chargebacks.    But PayPal clearly thinks of merchant services as their primary business and not a “give these college kids something to work on”-back-room project like Google does.    Pretty harsh review about Google Wallet for Business, I agree, but I also feel it is warranted.

The Google Wallet Search Form – looks pretty. You’d think Google, of all people, would have this working.
What happens when you use search on the Google Wallet pages.

I’ve checked out Google merchant services many times in the past. Despite some cleaned-up modern graphics to help sell the service on the front-end, the back-end is a virtually unchanged hack job of an interface. Not only is the interface very pedestrian, it is rife with links to outdated help documents, is completely lacking the tools any serious online business needs to research and report on their sales, and is over-simplified to the point of being utterly useless for any real accounting such as importing and processing transactions in QuickBooks. It is no wonder the Google Checkout service failed to ever gain ground against PayPal or the relatively-new-to-market Amazon Payments services.

The Fee Schedule

As with any merchant service, one of the first things I look at is processing fees. Many credit card processors, places like Elavon and Authorize.net, eat you alive in processing fees. Ten cents here, a quarter there. Before you know it you’ve shelled out $900 for $10,000 worth of sales. It is death by a million paper cuts. It was true 15 years ago and is true today: traditional credit card processing companies suck at dealing with new-economy merchants. On the other hand, places like PayPal, Amazon Payments, and Google Wallet for Digital Goods are all tooled specifically to help new economy merchants and have fee structures that are friendly to small businesses. As such, the first stop at Google Wallet is the fee schedule.

At Google Wallet you will find a very simple web page that states the fees are the LOWER of 5% of the sale OR 1.9% + $0.30.    For anything that is selling at $10 or more the rate is 1.9% + $0.30 regardless of volume.    Both PayPal and Amazon Payments require merchants to sell $100,000 PER MONTH before you qualify for that rate.    If you sell $100k PER YEAR this lowers the fees you pay to the merchant processor by $600 when compared to PayPal.

However, when you sign up for the Google Wallet for Digital Goods service you are required to agree to the Terms of Service agreement. Within that document they have a myriad of links to various addendum pages, including the Rate Schedule (list of fees). That link goes to an old Google Checkout transaction processing fees page that states the fees are IDENTICAL to the Amazon Payments/PayPal structure with one critical exception: Google charges 1% more per transaction if the buyer and seller are in different countries. So much for competitive rates.

The now defunct, maybe-who-knows, Google Checkout fee schedule as linked in the Google Wallet Terms of Service.

Customer Service

When I discovered the discrepancy in published fees I decided I had better get an answer as to which fee schedule is the REAL schedule. If it is the original 1.9% deal then making a switch may be worth the effort. If, on the other hand, it is the tiered schedule that starts at 2.9% AND has a 1% “different country” penalty, the switch would be a bad decision, as I would lose money in fees AND a week of productivity during the transition. Thus I wrote Google an email via the customer support link at Google Wallet.

Kudos to Google Customer Service, they did respond quickly and gave me an example of a $9.67 transaction and a table cut from the web page I had already read that states the fee as 1.9% plus $0.30. However they completely ignored the fact that the Terms of Service were wrong. Nor do I think that, if this guy pulled up the WRONG information, Google would stand behind the rate schedule some customer support dude sent me via email. I can already see Google’s response when my first 3.9% processing fee for an order from the UK comes in… “Sorry Mr. Cleveland, the rate IS 3.9%, the customer service dude gave you bad information. You did read and agree to the Terms of Service, didn’t you?”. Customer Service also completely skirted the “buyer and seller in a different country” portion of my question and whether or not the 1% add-on fee applies. Though he did hint at the answer in a very subtle way, saying, just before his $9.67 example, “for transactions in the US”. If my read-between-the-lines skills are what I think they are, then his answer was “yeah man, we’re gonna nail you for an extra 1% for selling anything to those dang non-Americans”, which is EXACTLY what I don’t want as I try to expand my sales into an international customer base.

Google Customer Service cut-and-paste answers.

A Collection of Fail

I’m not sure why Google even has the Google Wallet for Business / Google Wallet for Digital Goods / Google Checkout That Is Almost Dead service online in its current state. How do they expect anyone to have any confidence that their transactions will be processed properly when Google, king of the World of Internet Searches, cannot even build a half-functional website? It doesn’t bode well for the service if the production manager on this site doesn’t take the time to hire one of their bazillion interns to surf the site and make sure it works. Fixing basic things like broken links or conflicting information would be nice.

Google’s inline help links are not very helpful. Did anyone at Google even try to use this site?

Final Decision

Final decision? Not really. These kinds of services change frequently, and if Google ever decides to stop putting merchant services in the back room and “letting the kids play with it” I think they can be a competitor. Especially as it is tied to their prolific mobile platform’s payment engine that handles all of those Android app sales. However someone needs to be put in charge of the non-app-store side of that business and try to actually compete with PayPal. Until they do so, Google Checkout will continue to be a second-rate service that does not instill enough confidence in business owners such as myself to start putting all their online sales into the “Google Wallet Basket”.

Maybe next time around I’ll choose Google. For now I’ve decided to keep dating the wart-nosed older sister with more experience and stability than the younger less-refined and very schizophrenic cheaper date.   That old girl may trip over her walker and make us late to our next dinner date, but at least I know she’ll be there.  Warts and all.   As for that younger sister, maybe she’ll grow up some day.

Posted on

PayPal Is Currently Down

PayPal Down

PayPal is currently down with no ETA on when the service will be back. Since PayPal is currently the only payment service I use at the Charleston Software Associates site that means no orders, which means no way for users to order the Store Locator Plus 4 upgrades or add-on packs.

Nice going PayPal.

I will look into alternatives, but by the time I get something wired in PayPal will likely be back online. I also do not want to create a nightmare for my accountant that keeps my QuickBooks stuff straight. However this is the second time in a month that PayPal has impacted the user experience on my site and that is NOT acceptable. Combined with ridiculous chargeback fees I think it is time to locate a new payment provider.

 

PayPal Down Oct 7 2013
Posted on

Website Email Issues at CharlestonSW.com

Spamhaus Banner

I’ve been getting a lot of reports from users who are not getting their email notifications when resetting a password. Or they never get their license key email. Or forum notifications. Not everyone has this problem, but enough users do that I decided to take a break from coding Store Locator Plus 4.0 and figure out what was going on.

Spamhaus PBL

Turns out Spamhaus PBL was the culprit. While some site administrators may be familiar with spam block lists (or blacklists), the lesser known sibling can be just as much of an issue. Unlike the spam block list (SBL), the policy block list (PBL) lists millions of IP addresses as potential sources of spam. Unless you are a large company with a static IP block that is known to be well controlled there is a good chance your IP address is on the PBL. Especially if you are using shared hosting or virtual hosting.

It turns out the IP address for the charlestonsw.com web server is on the PBL. In fact the ENTIRE set of Microsoft Azure services is on the PBL. The general consensus is that the IP addresses are far too dynamic and cloud hosting is a prime breeding ground for the festering wound of the Internet known as spam houses.

Being on the PBL is not an indication that a site is in any way related to spam or that the server on which it resides may host a spamming company. It simply means that the propensity for spam to originate from some server within the IP block is high, mostly because the IP address may be shared at some future date with other companies that are spam houses.
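You can check any outbound mail IP against the Spamhaus zones yourself with a plain DNS query; reverse the octets of the address (the IP below is an example, not mine):

host 4.3.2.1.zen.spamhaus.org

An answer of 127.0.0.10 or 127.0.0.11 means the address is on the PBL; an NXDOMAIN response means it is not listed at all.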

Many companies have expanded their email black list services beyond the typical “block any site/server on the SBL” to “block any site on ANY block list, or *BL” including the PBL. This is a good policy for strict email controls over spam, but it certainly drops a lot of “good email” originating from cloud hosted sites and services. Like Charleston Software Associates.

User Side Fixes

If you are not getting email from charlestonsw.com you should add the server www.charlestonsw.com and charlestonsw.com to your email whitelist. You can also add info at charlestonsw period com if you have an email-specific whitelist. However there are emails coming from the web system that are not reading the info at charlestonsw period com header and may originate from other sources.

Business Side Fixes

What I am working on from the server side is getting the web server to push email out through a specific Google Apps web mail account. This required setting up a specific email account at Google Apps, enabling the account for relay, configuring the mail server to connect securely to the Google account and push email messages out on that server.
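For anyone attempting the same thing, the relay boils down to a few lines of MTA configuration. A minimal sketch for Postfix, assuming a Google Apps account already enabled for relay and a sasl_passwd file holding that account’s credentials:

relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

After editing /etc/postfix/sasl_passwd, run postmap against it and reload Postfix so the new relay settings take effect.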

This is required because the Windows Azure cloud hosting does not support static IP addresses. While the IP address is persistent it can change. As such the Charleston Software Associates server cannot be put on the PBL “good neighbors” list. This requires more work, more expense, and more drastic measures. This is one more feather in the Amazon Web Services (AWS) cloud hosting cap. AWS still proves to be far ahead of the competition when it comes to the cloud hosting space. When it is time for an upgrade the CSA services will be moved to Amazon.

Maybe after the Store Locator Plus 4.0 release is done.

Posted on

Bruce Bedrick : Kind Clinics on Wall of Shame

Chargeback Kind Clinics

Yet another customer had their credit card stolen or their PayPal account hacked. The amazing part is how many of these identity thieves not only steal the credit card information but also hack into the person’s website. Then they go so far as to do the most diabolical thing ever! No, not grab the customer list. No, not insert redirect links to cheap Nike shoes. No. Far worse. They INSTALL MY SOFTWARE on the site. These thieves are ruthless.

Trust me, I really want to catch these guys.    Every time they do this sort of thing PayPal sends me a customer initiated chargeback message like this one:

Hello Lance Cleveland,

We were recently notified that one of your buyers filed a chargeback and
asked the credit card issuer to reverse a payment made to you on Jun 10,
2013.

The buyer claims that this purchase was made without authorization to use
the credit card. Their credit card issuer needs additional information from
you about this transaction.

———————————–
Transaction Details
———————————–
Buyer’s Name: Bruce Bedrick
Buyer’s Email: drbruce@kindclinics.com
Buyer’s Transaction ID: 2NT293502W259540N
Transaction Date: Jun 10, 2013
Transaction Amount: -$230.00 USD
Invoice ID: WC-10870
Case #: PP-002-432-640-244
Your Transaction ID: 2G720304L9657970L

———————————–
What to Do Next
———————————–

Please respond within 10 days so that we can help resolve this chargeback.
To respond, log in to your PayPal account and go to the Resolution Center
to provide information about this transaction.

The credit card issuer decides if the buyer’s claim is legitimate. Once the
credit card issuer receives your information, it may take up to 75 days to
make a final decision.

Because the credit card issuer has reversed the charge for this
transaction, we’ve placed a temporary hold on the funds associated with
this transaction until the case is resolved. Our user agreement explains
our policies on holding funds.

You can learn more about chargebacks in the Resolution Center tutorials.

———————————–
Other Details
———————————–
There are no other details regarding this transaction at the moment.

Sincerely,
PayPal
Chargeback Department
CB:PP-002-432-640-244:USD230.00:6/21/2013:2G720304L9657970L
PPID PP767

Big deal, right? Well, sort of. Not only do I have to burn 15 minutes responding to PayPal so they don’t close my account, at the end of it all the poor victim (Dr. Bruce Bedrick in this case) gets his money back and I end up not only with nothing for the sale but also an additional $50 in fees assessed by PayPal and the credit card company for letting identity thieves buy something from me. Wonderful, isn’t it. The best part is PayPal and the credit card company actually make a PROFIT on this transaction. No wonder they don’t really give a damn that this happens so frequently.

Kind Clinics

Here is Bruce’s website at KindClinics.com along with the installed hacked software:

Kind Clinics Bruce Bedrick
Kind Clinics website with find locations feature.
Kind Clinics Store Locator Plus
Kind Clinics Store Pages
Kind Clinics Enhanced Maps
Kind Clinics Enhanced Results
Kind Clinics Enhanced Search

 

Kind Clinics Tagalong

 

Kind Clinics Pro Pack

The Victim : Bruce Bedrick

Poor Bruce Bedrick.  I really feel bad for him and his stolen credit card.   I’m posting the information the identity thieves used during the transaction here so you can track him down and let him know I care.

Address:
Bruce Bedrick
Medbox
7047 E Greenway Parkway
Scottsdale, Arizona 85254

Email: drbruce@kindclinics.com

Phone: 800-762-1452

Customer IP: 98.167.201.90

Posted on

Wall of Shame : Point Immatriculation, Soissons France

Sadly I have another addition to make to my customer “Wall of Shame”.

Newest Wall of Shame Addition

Point Immatriculation

Marteau claims that their PayPal account, which is still active, was used fraudulently and that the purchases they made at CSA were not authorized.

Interesting that they claim fraud yet their site is using Store Locator Plus.

Guess the thief that stole their card also hacked their website and installed the Store Locator Plus application!

Wonder how this got on their live site if the charge was unauthorized?

http://www.cartegrise-pointimmatriculation.fr/wp-content/plugins/store-locator-le/readme.txt

http://www.cartegrise-pointimmatriculation.fr/wp-content/plugins/slp-enhanced-results/readme.txt

http://www.cartegrise-pointimmatriculation.fr/wp-content/plugins/slp-pages/readme.txt

Contact:

Point Immatriculation
Marteau Jacques-Antoine
6 rue du beffroi
02200 SOISSONS
France

Email: jacquesantoinevia@gmail.com

Phone: 0323727272

http://www.cartegrise-pointimmatriculation.fr/ou-faire-carte-grise/

Prior Wall of Shame Inductees

Honfeng Dong (iphonex@hotmail.co.uk)

iPhonex claims their credit card was stolen and this charge was not authorized. Their $15 purchase of the MoneyPress : eBay Pro Pack will now cost CSA more than $23. Was this user too lazy to contact us and request a refund? Or didn’t want to go to PayPal and start a charge dispute? Or was the card truly stolen and this a fraudulent credit card charge? Who knows. Send them an email and find out!

Ryan (Mark) Chesney (chesneyryan@gmail.com)

Mark decided that the best route for getting a refund on a product that was purchased 3 months ago was to issue a chargeback through his credit card company rather than contacting me directly.    Nice move, now he gets his refund and his “purchase” costs me an additional $22.   What a tool.

Want to ask Ryan what he’s thinking?  You can reach him at:

Attn: Ryan Chesney
K2S
516 W 860 N
American Fork, UT 84003
USA
(801) 369-3635

Stacy Kaufman

Stacy claims that the purchase she made here is an unauthorized charge, yet her PayPal account remains open and active.  Odd, why would PayPal keep an account open and active when someone claims the account has been compromised?   I can’t quite figure that one out.

Oddly, Stacy is using the full version of Store Locator Plus on the Pure Brazilian website. It is kind of foolish to claim fraud when people can actually VERIFY use of the product on a live website. Not only is Stacy using the product, she is using a version that was just published a few weeks ago.

You can get in touch with Stacy here:

Email: purebrazilianhair@gmail.com

Tel: 9542171980

Billing address

Stacy Kaufman
Pure Brazilian
905 Shotgun Road
Sunrise, Florida 33326

Posted on

Hosting WordPress

I get a lot of questions about where to host a WordPress site. While I’ve not found the “perfect host for all people”, I have learned a few things about who NOT to use, who I use, and who I *think* will be good to use based on your needs.

Let’s start with who to stay away from:

GoDaddy

DO NOT host with GoDaddy.

Besides my personal issues with their support of national policies that hamper an open Internet, they also have notable technical issues. Just last fall they mis-configured a router and took tens-of-thousands of businesses offline for several days. No, it was not Anonymous as first reported. It was incompetence. Even if you were not hosting at GoDaddy but had names served by the GoDaddy DNS service, your site could have been impacted. My site was offline for several days.

The bad part was not that the sites went offline. That happens. It shouldn’t, but it does. The thing that made GoDaddy suck beyond normal suck-itude was the fact that after several attempts to contact them they ignored ALL communication. No offer of a credit for the down time. Nothing other than a blanket generic email saying “our stuff broke, we fixed it”. Thanks GoDaddy. My site, as well as thousands of others, lost hundreds, if not thousands, of dollars in revenue, and your only response was a generic bulk email saying “my bad”.

Even more troublesome is the fact that I’ve been doing business with GoDaddy for over a decade, was a reseller for years, and brought them hundreds of name service and hosting clients over the years.  They can’t even take 2 seconds to respond with a personal email.  Sad.

Enough about the stories of how bad their service is.  The big issue, and the main reason I do not recommend them for hosting, is that in 8-of-8 paid support requests where the client was having issues and was hosted at GoDaddy, we traced the problem back to the GoDaddy hosting in EVERY CASE.   Permissions are configured differently on different servers.  IP addresses are shared en-masse, which makes geocoding lookups essentially useless.  Servers time out when overloaded, breaking the AJAX listener.

In short, if you want your WordPress stuff to work, do not host on GoDaddy.

LiquidWeb

Do not host with LiquidWeb.

I used them for years.  I rented, and still do, a dedicated server there.   I have used their virtual private server and have brought many clients to Liquidweb.  For years their service and prices were above par.    In the past 4 years it has been getting worse every year.

3 years ago, they crashed my dedicated server with a hard fault.  It took them 5 days to get it back online for a multi-million-dollar software consulting firm.  They had a team working on it, which was good, but it was obvious their claims of “warm server” and “4 hour maximum down time” were false.   They had to order new hardware, wait for it to arrive, configure it, then move our stuff.   After all that the new server was NOT configured the same way, which incurred weeks of “oh, that’s broke too”.

This past fall they crashed a new VPS server that was hosting my account.   It also crashed several client accounts.    All the sites on that server were offline for days.   They eventually got it fixed and I was given access to a top-level support rep, but they never did offer any form of compensation for the down time.    Again, the newly configured server was not set up the same way as the old server and stuff never worked right after that.    When I finally showed them that their server was not properly limiting or allocating resources they told me “your site is too big for the server”.  Really?   I moved it and the new server, which is smaller, runs at less than 10% maximum CPU usage, 25% peak memory usage, and 1% disk I/O usage.
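
For reference, the utilization numbers above come from the standard Linux tools, roughly like this (iostat ships in the sysstat package):

top             # CPU: user + system time stayed under 10% at peak
free -m         # memory: roughly 25% of RAM in use at peak
iostat -x 5     # disk: %util hovering around 1%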

They also made access to any real support basically impossible.   They put tickets in a generic pool and let any tech resolve them. Sometimes you get a guy with a clue, most times not.    I should not be educating my server admin on how to admin a server.

Microsoft Azure

www.windowsazure.com

This is who I use today.    I have several virtual machines running there.    I like the simple interface much more than the Amazon Web Services interface.  It is also slightly less expensive than Amazon’s services.   However, you must be a tech geek (or know one) to use these services.  It is much like running your own server.   If you are not a server admin this is not for you.

If you ARE a server admin, or have one on staff, then you may qualify for Microsoft BizSpark.  This will give you free (or near-free) Azure services for several years.   You can also scale the server up or down as needed with relative ease.    If you are comfortable configuring your base operating system (I use CentOS), installing PHP, MySQL, WordPress and the other components, and managing security then Azure is a fully flexible and expandable platform for a WordPress site.
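
To give a sense of what “configuring the base operating system” means, here is a minimal sketch of the core stack install on a CentOS 6 box using yum.  This is only the skeleton; package names vary by release and all of the security hardening is left out:

sudo yum install -y httpd php php-mysql mysql-server
sudo service httpd start && sudo chkconfig httpd on
sudo service mysqld start && sudo chkconfig mysqld on
cd /var/www/html && curl -O https://wordpress.org/latest.tar.gz && tar -xzf latest.tar.gz

That gets you to the WordPress 5-minute installer; locking the box down afterward is the part that actually requires the experience.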

This type of setup is only for uber geeks or companies that employ them.

ClickHost

www.clickhost.com

I have not used ClickHost myself; however, I spoke to many people at WordCamp Atlanta and the general word about ClickHost was that they get WordPress hosting.   They seem like nice people and do seem to go the extra mile to make sure you will be taken care of.    They give you a pre-configured hosting account with the WordPress goodies installed.  Even better, they are very affordable.  A basic setup can cost you as little as $50/year.

For my clients that are cost-aware I will be recommending ClickHost.

RackSpace

www.rackspace.com

If you want a site that never crashes, use RackSpace.  You will pay top-dollar, but they have very responsive support and know how to manage servers.  I’ve not used them personally, but I know several clients that have used them in the past.  Their support is top-notch and they know their stuff (or have access to someone that does).   They are not cheap, but if you want high performance and high reliability this is a good option.    I’m not familiar with their newer virtualized offerings, which cost less, but I have to imagine they are good enough to carry the RackSpace name and reliability image.

Posted on

Google Drive – Did They Hear Me?

A few weeks ago I was on my Google Drive organizing stuff from several projects, prospective business ventures, and my WordPress plugins.  I have a half-dozen “things” going on these days and need to keep my notes, spreadsheets, flow diagrams, and other materials organized.    I created folders and moved stuff around.    To make it even easier to find things I assigned a color code to each folder, as visual cues make for faster navigation once you train yourself on things like colors and shapes.  This is why good icon design is paramount, given the mobile and desktop UX that proliferates our lives these days.

However, at Google Drive I noticed that the colors I assigned to various project folders only show up as a colored box on the drop-down menus and in a few other discrete places.   I decided to drop Google a note via the web feedback form.    “Hey Google.  Why don’t you color code the FOLDERS themselves instead of keeping them all gray?   Should be easy enough to do.  You obviously are passing color code data to the UX already.”   Something to that effect.

I never got a response, but today when I went to my Google Drive I saw EXACTLY what I had requested as part of the updated UX.

[Image: Google Drive colored folders]

Unfortunately Google never responded to my suggestion other than with their typical bot responder.   Did they look at my suggestion and send it to a developer who said “Great idea, that will take 2 seconds to put in place” and baked it into the experience?  Or did they already have this planned for months?  Approved by committee after a band of meetings, a UX review analysis, and full UX studies?    I’d like to think Google is still able to operate in an Agile fashion, fast & nimble and responding to input quickly.    Or have they become a typical corporate giant where it takes a year to get even a single pixel moved after design, analysis, re-design, and several board meetings before anything happens?

I’m not sure if my request and the 3 weeks it took to see it go live were just a coincidence.  Probably.   But I’ll fool myself into thinking that maybe I was the 10th request they got this month for that feature and some dev just “threw it in there”.   If only Google would communicate with the user base, or even just the paid business apps accounts (yeah, I pay for gMail… I know, right?), and give us some clue that someone is listening.  Whenever I communicate with Google I hear the “on hold music” playing in my head… “Is anybody out there?   Just nod if you can hear me…” – Pink Floyd.

Regardless, I’m happy that my Google Drive is no longer color blind.   Thank you Google!

Posted on

Charleston High Speed Internet Grade: D+

I am really getting frustrated with high speed Internet here in Charleston.   Most of Charleston, whether a business or a residential address, does NOT have access to fiber.    In the few business locations that do have access, the fiber is at least $100/Mb, and you can only get pricing that low if you sign a 3 year contract with steep disconnect penalties.   Then you get to pay all those fun telecommunications fees on top, which makes it more like $140/month.    Days like today make me miss the REAL Internet available in Boston and Los Angeles.

Why am I writing about this on a blog about software development?  Because access to RELIABLE high speed Internet is critical to productivity.   Case in point.   My main workstation was acting up this morning.  Time for a reboot, and when I do that I install all the Windows Security Updates.    First issue: could not connect to the Internet at all.    Reboot the modem, my router, and my PC.   Finally get connected but it is S-L-O-W.    Windows is pulling a piddly little 58MB of update files.    On my 20Mb/2Mb service (the fastest Knology provides) it should take seconds.    20 minutes later the file is downloaded.    Not good.   So I connect to Speedtest (not a “tweaked ISP version”) and find this:

[Image: Knology speed test, 2013-02-19]

As suspected, my throughput with Knology is less than 1/10th of the promised speed on the download side.    The report says the download from a test server less than 100 miles away is worse than 61% of other US connections, which earns a grade of D+.
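
For anyone who wants to check my math, the back-of-the-envelope numbers work out like this:

# FILE SIZE: 58 MB x 8 bits/byte = 464 Mb
# AT THE PROMISED 20 Mb/s: 464 Mb / 20 Mb/s = ~23 seconds
# AT THE OBSERVED ~20 minutes: 464 Mb / 1200 s = ~0.4 Mb/s actual throughput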

This means pushing my updates, pulling new repos, pulling test data, and more all take 10x longer than normal.   In an average development day that sucks.   It means drinking 5x as much coffee because a 20-second operation now takes 3 minutes.  Literally.

Soooo frustrating.

Alternatives

What are the other choices here at my home office in Mount Pleasant?

[Image: Mount Pleasant providers]

AT&T/Bellsouth DSL

They have THE WORST customer service in the industry.  By far.  Hands down.   They are also expensive, and while I’ve not tried their new “highly touted” (meaning 3 sales people/month coming to my door) AT&T Uverse service, the speeds are horrific.    They also cannot keep a network online.    Last time I used them, over 5 years ago, the service was down multiple times per week and average throughput was < 1Mbps/256k.   Welcome back to dial-up days.   I guess that should be expected from an old-fashioned telco.   At least they finally upgraded from the telegraph this year, as I was having a hard time typing in my passwords in Morse Code!

Grade: F

Comcast

High speed Internet, yup they have it.   I can get 50Mbps/10Mbps service at my house.  It actually is almost that fast most of the time.  Until 10 of my neighbors get online and start streaming Honey Boo Boo on Hulu.   Any evening the actual throughput drops to more like 5Mbps/2Mbps.   Comcast swears up & down that this does not happen and that they don’t pool resources on a business class line.   It is a lie.    Real world tests prove otherwise and like clockwork the network comes back up to full speed when the “normal people” go back to work.

Also like clockwork… the network goes offline.   Just about every week on Friday and Saturday evening after midnight the network goes offline.  Sometimes for 20 minutes, sometimes for several hours.      You will also find that many Saturday mornings it is still not online.    That sucks when you are trying to push out a new development release.

Comcast swears they do not do routine maintenance at that time.   I find that hard to believe.  Literally nearly every single Friday or Saturday this happens.  The Internet is completely offline and I can see in the modem log files that it is the head end router (their end) not responding.
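
If you want to run the same check yourself, here is a rough sketch from a wired machine (192.168.100.1 is the usual DOCSIS modem diagnostic address; yours may differ):

ping -c 4 192.168.100.1    # the modem answers = my side of the wire is fine
traceroute 8.8.8.8         # hop 1 (my router) answers, hop 2 (their head end) times out

When the first hop past your own gear is the one dropping packets, the problem is on the ISP side no matter what the phone rep says.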

So, true high speed that is too often offline.  Also kind of pricey at $150/month for the Internet service alone.

Grade: C-

Knology

Finally Knology dropped some new lines in the ground 2 summers ago.  I could now choose something besides the OMFG This Is HORRIBLE service of Bellsouth and the now-you-see-it-now-you-don’t service of Comcast.    Knology would save the day, or so I thought.     Last year I decided to finally “make the switch”.   20M/2M service for Internet plus phone plus TV for just $130/month after taxes.   Great!  Not as fast as Comcast, but I’ll take it.

The honeymoon was perfect.   I always got 20M/2M or BETTER speeds.   The network connection NEVER EVER NEVER went down.   I think I had 3 minutes of down time in 6 months.

Then the “in laws” arrived.  And by “in laws” I mean Knology got bought out by a company that was trying to become the next big conglomerate to follow in the footsteps of Comcast.    It didn’t take long.   Within a month my new in laws trashed the place.    Someone decided to reconfigure the network, and while Knology claims “backhoe in Idaho” (Atlanta, actually) the network was 100% completely dead for most of the southeast for more than 3 days.

First of all, if a SINGLE network fiber can shut down the entire southeast segment of the network and you are an ISP then you’re doing it wrong.  Fire the CTO.  Now.   The guy is an idiot.  And while you are at it fire the network engineer.    Secondly, it is obvious from the network routes (thanks traceroute) that the configuration of the network was notably changed.

After the “backhoe was backed up” and the problem was fixed, nothing has ever been the same.    The in laws may have left but the damage to the relationship at home has been done.   And just to remind you of the experience, the “in laws” send a friendly post card in the mail every now & then… OK, about every DAY… by giving you 1/10th of your contracted service speed.    The New Knology (post merger) has completely FUBARed their network.    Now days of 1Mb downloads are common and it is killing my productivity.

Not to mention I can no longer stream Honey Boo Boo without waiting 12 hours… I might as well watch it on… GASP… regular ol’ television.     I’ll have plenty of time while I wait for my Windows Update to download.

Grade: A (last summer) to D+ (today)

High Tech Yes, High Speed No

Apparently, even with all the high tech businesses in the neighborhood, NONE of the perks that come with them have trickled out into the local economy.    Benefit Focus and Blackbaud are miles from my home and they have high speed, but they only pulled private fiber to their buildings.    Google is up in Goose Creek about 20 miles away.   Again, nothing for us down here in Charleston County as the major trunks all stop right outside of town at their facility.  Amazon is in the upstate and again has a private fiber network that nobody else up there can tap.  People Matter is moving in downtown; I’m not sure what they are using, but I’m guessing they will provision with AT&T or SCANA directly to get dark fiber lit.   Again, the network here is not improving.

Part of the problem is that SCANA owns 80% of the fiber in the ground.  More than half of it is not lit.   But I’ve tried working with them in the past and they want an exorbitant rate to utilize their lines.   You also can’t get it very far from the main Route 26 corridor, which means tens-of-thousands of dollars to run it to most “outlying” locations like the “boonies of Mount Pleasant” (a significant metro area adjacent to downtown Charleston).

[Image: Charleston population]

I’m not sure what “powers that be” are stopping new companies from getting into the ISP business, but some unusual market factors are at work.   The Charleston Metro Area hosts over 300,000 residents and is growing by more than 10% annually.   It is an up-and-coming major metro market, yet there is near-zero investment in high speed infrastructure ANYWHERE in the state.   You can see from the $7.2 BILLION the US Government offered to improve high speed access just how little South Carolina received in comparison to our neighbors.

[Image: South Carolina’s share of the Broadband Act funding]

So here I wait for some new savior to come along.    Google Fiber… Verizon FiOS… maybe some company down the street on a private fiber line will install WiMAX.    I can dream, can’t I?

Looks like my 10MB download is finally done… back to work…. for now.

Posted on

A Review of WiMAX Technology

I have been teaching myself about WiMAX technology and have started a “living document” over on my Google Docs Drive.   I am making it a public “research paper” so anyone else interested in learning about this can review it.

The Highlights

WiMAX is NOT the same as “4G”, at least not as it is thought of in the common American vernacular.   Most people mean “4G/LTE” not “4G WiMAX” when they say 4G.

Think of WiMAX as WiFi on steroids.

The specification is officially called 802.16, much like the WiFi 802.11 standard.   It too has suffixes, like 802.16d/e, similar to the WiFi 802.11b/g/n designations many people are familiar with.

Many countries, like Korea, have a sophisticated WiMAX network.  Many consumer devices, such as cell phones and tablets, that are sold in these countries have WiMAX built in.   Very much like American devices having WiFi built in.    Many devices that have WiMAX have WiFi and LTE or CDMA built in (wow, that is a lot of antennas and signal processors!)

As last mile services (the piece from the hub on the street to your house) fall behind the demand curve in America, more & more people will be looking at WiMAX solutions as they become available.  Clear Communications has Clear WiMAX in a number of cities, as does Sprint.  As cable & legacy telephone companies continue to fail at meeting customer needs (I’m looking at YOU, AT&T, Comcast, and Knology!) this will become far more prevalent.

Building A WiMAX Network

Unfortunately the starting installation costs run nearly $15k for a single cell (not a cell phone, though that is where the common name comes from; a cell is a radio signal “footprint”, the range in which you can “see” it).     However, do it right and you can get an initial cell that covers 8 miles or MORE with 4Mbps to 30Mbps throughput.      Then you need to pay someone for a connection to the Internet backbone, just like putting WiFi in your home.    In Charleston a solid fiber connection runs $100/Mb, with price drops not coming until the 10Mbps level.
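
Back-of-the-envelope on the recurring costs, using the Charleston pricing above:

# CELL HARDWARE: ~$15,000 one-time for an 8+ mile cell at 4-30 Mbps shared
# BACKHAUL: $100/Mb x 10 Mb = ~$1,000/month, before any volume discount kicks in

The hardware is a one-time hit; it is the monthly backhaul that any donation or crowdfunding model would really need to cover.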

However, I still hope to either find funding or get enough cash on hand to build my first experimental Free Public WiMAX cell in Mount Pleasant.    Who knows, maybe Kickstarter or some other crowdfunding site can help.

I’d love to see a donation-based system where high speed Internet is ubiquitous and DISRUPTIVE to the incumbent communication carriers.  In Charleston, at least, home Internet is still exceedingly expensive and the quality of service sucks.

Let’s do something about that!

The Document

My more formal notes will go in here.   This is far from complete and I will be adding to it as I learn more and consider building out my own network.

Can’t see it in an iframe?   View my “A Review of WiMAX Technology” here.

Posted on

Choosing A Wireless Router

Last week the network dropped.  Again.   This was the 5th time in about a month that I lost all connectivity mid-session.  I was in the middle of pushing some web updates and, as usual, Comcast left me hanging.   When I made my 10PM call to customer service I was met with one of the rudest know-it-all “customer disservice” people I have ever encountered.   She argued with me about everything and told me I had no idea what I was talking about when I told her that rebooting my laptop would not get my cable modem to sync up with their head end router.   (I had checked the logs on the modem; it had lost sync and the signal level was out of spec.)

Even though the Comcast Business Class service rep, who came out the next morning instead of THREE DAYS later as the phone rep insisted was the ONLY option, was very helpful and knowledgeable, the damage had been done.  I was sick of sudden drops, lag, and the network throttling that Comcast insists they do not do.    It was time for a change.

What does this have to do with wireless routers?  We will get there in a minute… just bear with me.

Knology To The Rescue

Fast forward three weeks.  The Knology installation guy showed up at my house EARLY (take THAT, Comcast), was courteous, professional, and *gasp* actually knowledgeable about his trade.     He tested the lines, replaced several faulty splitters that Comcast had installed, and eventually got a perfectly clean signal at the modem connection point.    We connected the modem and had a great connection.   The 20M/2M service was actually pulling 27M/2M consistently with 0.0001% rate fluctuation.    This guy actually tested things after he installed them (take THAT TOO, Comcast).   Everything looked great.   Then all hell broke loose.

I HAD NO WIRELESS ROUTER!

My old Comcast modem had wireless.   The new Knology modem did not.

Setting Up My Wireless

I left the install connected to my wired hub and went to work.  While at the office I picked up a couple of pieces of wireless network equipment we had lying around that were no longer being used.   In the mix I had an old Netopia Wireless DSL modem, which can be used as a wireless access point if you disable the DSL port, and a 2-year-old Belkin Wireless N router that was a $200 top-of-the-line unit back in the day.

When I got home the first thing I did was hook up the Belkin Wireless N.  I was connected within minutes.  However, I did notice the network was lagging.    I attributed it to being on wireless and having several devices on the wireless network as well as the TiVo and DVD player connected.     Then I started getting dropped connections.  However, this time the modem logs looked perfect.  NO errors, no sync problems, no dropped connections there.     Eventually I narrowed down the problem.  It was the Belkin router.    It was getting all kinds of packet loss and transmission errors and was dropping a TON of packets with .190-.199 in the last octet of the IP address.  Very odd.
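
For the curious, here is the low-tech way I isolated it, assuming your router lives at the common 192.168.1.1 default:

ping -c 100 192.168.1.1    # packet loss on the LAN side = the router itself
ping -c 100 8.8.8.8        # loss only past the router = the modem or the ISP

Loss on the first test combined with a clean modem log is what pointed the finger squarely at the Belkin.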

I temporarily tried the Netopia Wireless, but that is a simple A/B series wireless router.  It worked, but was very quickly saturated as soon as other devices came online.  It simply did not have the bandwidth over the wireless channels to get the job done with a tablet, 2 wireless phones, the VOIP hard line phone, 2 laptops, the TiVo, and the DVD player.    It was slow as heck at peak load.

I needed something better.

The Netgear Utopia

[Image: Netgear N600]

I did some homework and found several glowing reviews for the Netgear N600 series wireless N routers.   Since it was now Sunday and neither my Netopia DSL router nor my Belkin N router was up to the task for a big marketing and site update project, I decided to shop local.   Turns out Walmart had the very router I was looking at AND at a fair price.   Even with taxes it was within $5 of the Amazon pricing and was near or below most online competitors.

40 minutes later I had returned from Wally World with my new router (and a big bag of M&Ms, a new garden hose, and 3 coloring books for my son… this is WHY you don’t go to Walmart to shop for “just a router”… dang impulse buys).      Within 15 minutes my new router was installed, fully configured to my liking with a new SSID and passwords, and online.

HOLY SMOKES WAS THIS THING FAST!!!

I mean LIGHTNING FAST compared to ANYTHING I was using before.     I immediately saw my laptop speed tests pulling the full 27M/2M speeds we had seen with the wired test unit at the router.  This was with all the other network equipment still online.

Bad Communication = Slow Networks

After doing a good bit of testing, re-trying the Belkin, re-connecting the Comcast service (it was not turned off yet), and doing a bunch of general cross-checking and sanity tests, it had become clear:  choosing the right networking equipment is paramount to maintaining solid throughput to your desktop (or tablet) computers.  If any link in the chain is weak you will suffer.

The technical reason for highly variable network performance has a lot to do with packet re-transmission.   To keep it somewhat less technical, think of it as a simple phone conversation where you MUST get every word right.   To do this you ask the other party to repeat every word they hear.   If they say a word incorrectly you repeat that word until they say it back correctly.    On a poor connection this may happen 3 or 4 times on every other word.   That can make for a VERRRRRYYYY long conversation.
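
In numbers, the toll looks something like this (illustrative figures, assuming every other packet needs 3 sends):

# EFFECTIVE THROUGHPUT = RAW THROUGHPUT / AVG SENDS PER PACKET
# AVG SENDS: (1 + 3) / 2 = 2 sends per packet
# 27 Mb/s / 2 sends = ~13.5 Mb/s … half the bandwidth gone to repeated words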

In today’s networks a lot of things can go wrong to make your surfing destination and your computer “repeat the words” over & over again.   A wireless network often adds a lot more possibilities for interference.   For example, turning on the microwave oven, or a neighbor turning on their TV.    You don’t HEAR the interference, but your wireless network does.  Think of it like someone turning on a vacuum cleaner right next to you while you are doing the “repeat every word” conversation with your long distance friend.  You are likely not going to hear very well and will be repeating a lot of words.

Eradicating Slow

In my case several things were causing problems.   The Comcast connection to my house is not very good, which means the “volume” of the conversation is very inconsistent, too loud some moments, too soft at others.   Then the modem Comcast had supplied was an old, very slow model; think of it as a sluggish phone operator in the middle trying to keep up with the “repeat the word” conversation and simply skipping words when they fall behind.    The Belkin router refused to repeat any word with the “ch” sound in it, leaving you guessing at what was really said.     The Netopia DSL router was underpowered and easily distracted, barely able to keep up with a slow, deliberate conversation.

In the end I eliminated all of the speed, mistranslation, and volume related issues.    A tested, solid, clean connection with a modern high-speed modem from Knology connected directly to the Netgear N600 Wireless N router keeps everything humming along.  The conversations are crystal clear and the Netgear N600 + Knology modem rarely, if ever, repeat a word.   A 2-minute conversation takes 2 minutes, not 20.    That translates into getting the full 20M (27M) /2M service all the way from “the Internet” straight into my wireless network.

Get The Best

In your network, choose the best equipment you can afford.   Read online reviews and select the RIGHT solution.   Higher price does not always mean better performance.    In my case the reviews proved to be well founded, and I too give the Netgear N600 (WNDR3400v2) 5 stars.

[Image: Netgear N900]

I liked the Netgear N600 so much I bought the “big brother” N900 (WNDR4500) for the office and I like that one EVEN better.  It too was quick to set up and improved network performance.  It also gave us the ability to quickly and easily turn a USB drive into a network share and turn my old Brother MFC-4800 laser printer/scanner (another great piece of equipment, by the way) into a network printer/scanner within minutes, with one quick/simple applet install on our Windows and Mac computers.
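
As a for-instance, mounting the N900’s USB share from a Linux box is a two-liner, assuming Netgear’s ReadySHARE defaults of //readyshare/USB_Storage (check the router’s admin page for the actual share name on your unit):

sudo mkdir -p /mnt/router-usb
sudo mount -t cifs //readyshare/USB_Storage /mnt/router-usb -o guest

Windows and Mac machines can browse to the same share directly if you would rather skip the applet.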

If you are in the market for a wireless router I highly recommend the Netgear N600 and N900 routers.