
Configuring Apache 2.4 Connections For WordPress Sites

Recently I upgraded my web server to PHP 5.6.14. Along the way the process managed to obliterate my Apache web server configuration files. Luckily it saves them during the upgrade process, but one thing I forgot to restore was the settings that help Apache manage memory. Friday night around midnight, because this stuff ALWAYS happens when you're asleep… the server crashed. And since it was out of memory with a bazillion people trying to surf the site, every time I restarted the server I could not log in fast enough to get a connection and fix the problem.

Eventually I had to disconnect my AWS public IP address, connect to a private address with SSH, and build the proper Apache configuration file to ensure Apache didn’t go rogue and try to take over the Internet from my little AWS web server.

Here are my cheat-sheet notes about configuring Apache 2.4 so that it starts asking site visitors to “hold on a second” when memory starts getting low. That is much nicer than grabbing more memory than it should and just crashing EVERYTHING.

My Configuration File

I put this new configuration file in the /etc/httpd/conf.d directory and named it mpm_prefork.conf. That should help prevent it from going away on a future Apache upgrade. This configuration is for an m3.large server running with 7.4GB of RAM with a typical WordPress 4.4 install with WooCommerce and other plugins installed.

# prefork MPM for Apache 2.4
#
# use httpd -V to determine which MPM module is in use.
#
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxRequestWorkers for the lifetime of the server
#
# MaxRequestWorkers: maximum number of server processes allowed to start
#
#
# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
#
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
#
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
#
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
#
# ServerLimit = sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process
# MaxRequestWorkers = number of simultaneous child processes to serve requests; to raise it you must also increase ServerLimit
#
# If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle,
# Apache httpd may not start or the system may become unstable.
#
# MaxConnectionsPerChild = how many requests are served before the child process dies and is restarted
# find your average requests served per day and divide by average servers run per day
# a good starting default for most servers is 1000 requests
#
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80
#
#

ServerLimit 64
MaxRequestWorkers 64
MaxConnectionsPerChild 2400

The Directives

With Apache 2.4 you only need to adjust three directives: ServerLimit, MaxRequestWorkers (renamed from MaxClients in earlier versions), and MaxConnectionsPerChild (renamed from MaxRequestsPerChild).

ServerLimit / MaxRequestWorkers

ServerLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. MaxRequestWorkers is the number of simultaneous child processes used to serve requests. This seems a bit redundant, but it is an effect of using the prefork MPM module, which is a threadless design. That means it runs a bit faster but eats up a bit more memory. It is the default mode for Apache running on Amazon Linux. I prefer it as I like stability over performance, and some older web technologies don't play well with multi-threaded designs. If I were going to go with a more stable multi-threaded environment I'd switch to nginx. For this setup, setting ServerLimit and MaxRequestWorkers to the same value is fine. This says "don't ever run more than this many web servers at one time".

In essence this is the total number of simultaneous web connections you can serve at one time. What does that mean? With the older HTTP/1.x protocols, every element of your page that comes from your server is a separate connection. The page text, any images, scripts, and CSS files are all separate requests. Luckily most of this comes out of the server quickly, so a page with 20 web objects on it will use up 20 of your 64 connections but will spit them out in less than 2 seconds, leaving those connections ready for the next site visitor while the first guy (or gal) reads your content. With newer HTTP/2 (and SPDY) connections a single process (worker) may handle multiple content requests from the same user, so you may well end up using only 1 or 2 connections even on a page with multiple objects loading. While that is an over-simplification, the general premise shows why you should update your site to HTTPS and get on services that support HTTP/2.

Calculating A Value

# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80

There you go, easy, right? Figuring out RAM resources can be complicated, but to simplify the process start with the built-in Linux free command, and I suggest installing htop, which provides a simpler interface to see what is running on your server. You will want to do this on your live server under normal load if possible.

Using free -m from the Linux command line will tell you the general high-level overview of your server’s memory status. You want to know how much is installed and how much is in use. In my case I have 7400MB of RAM and 2300MB was in use.

Next you want to figure out how much is in use by Apache and how much an average web connection is using per request. Use htop, filter to show only the httpd processes, and do math. My server was using 1900MB for the httpd processes. The average RAM per process was 87MB.
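If you would rather not eyeball htop, a quick one-liner can do the same math. This is just a sketch that assumes your Apache processes are named httpd, as they are on Amazon Linux and CentOS:

# average and total resident memory used by the httpd processes (RSS is reported in KB)
ps -C httpd -o rss= | awk '{sum+=$1; n++} END {printf "processes: %d  avg: %.0f MB  total: %.0f MB\n", n, sum/n/1024, sum/1024}'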

You can now figure out how much RAM is used by “non-web stuff” on your server. Of the 2300MB of used RAM, Apache was using up 1900MB. That means my server uses about 400MB for general system overhead and various background processes like my system-level backup service. That means on a “clean start” my server should show about 7000MB available for web work. I can verify that by stopping Apache and running free -m after the system “rests” for a few minutes to clear caches and other stuff.
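On Amazon Linux that sanity check looks something like this (assuming the stock httpd service name):

# service httpd stop
# free -m
# service httpd start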

Since I will have 7000MB available for web stuff I can determine that my current WordPress configuration, PHP setup, and other variables will come out to about 87MB being used for each web session. That means I can fit about 80 web processes into memory at one time before all hell breaks loose.

Since I don't like to exhaust memory and I'm a big fan of the 80/20 rule, I set my maximum web processes to 64: 7000MB / 87MB ≈ 80, and 80 * 0.8 = 64.

That is where you want to set your ServerLimit and MaxRequestWorkers.

MaxConnectionsPerChild

This determines how long those workers are going to "live" before they die off. Any worker will accept a request to send something out to your site visitor. When it is done it doesn't go away. Instead it tells Apache "hey, I'm ready for more work". However, every so often one of the things that is requested breaks. A bad PHP script may be leaking memory, for example. As a safety valve Apache provides the MaxConnectionsPerChild directive. This tells Apache to kill the child process after it has served this many requests. Apache will start a new process to replace it. This ensures any memory "cruft" that has built up is cleared out should something go wrong.

Set this number too low and your server spends valuable time killing and creating Apache processes. You don't want that. Set it too high and you run the risk of "memory cruft" building up and eventually having Apache kill your server with out-of-memory issues. Most system admins try to set this to a value that has each child reset about once every 24 hours. This is hard to calculate unless you know your average objects requested every day, how many processes served those objects, and other factors; things like HTTP versus HTTP/2 can come into play. Not to mention fluctuations like weekend versus weekday load. Most system admins target 1000 requests. For my server load I am guessing 2400 requests is a good value, especially since I've left some extra room for memory "cruft".
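If you want to replace the guess with real numbers, the access log is a decent starting point. A rough sketch, assuming the default Amazon Linux log location and that the log still contains yesterday's entries:

# count requests served yesterday
grep -c "$(date -d yesterday +%d/%b/%Y)" /var/log/httpd/access_log
# divide that count by the number of httpd processes you typically see running
# to get a per-child value in the same ballpark as the 1000-2400 discussed above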


Setting Up AWS Elastic Beanstalk Tools On Linux


AWS provides an "officially unsupported" set of scripts for Windows, OSX, and Linux that will help with managing and deploying your AWS Elastic Beanstalk applications. This can be useful as I could not find a simple way to SSH into my ELB-based EC2 instance using standard methodologies. I'm sure I missed something, but deploying and updating via git commands is going to be easier and is my preferred production method; might as well go there now.

Download and install AWS Elastic Beanstalk Command Line Tool.

Unzip the file.

You will now have a directory that contains three types of command sets. In the appropriately-named eb subdirectory is a series of OS command-line scripts invoked via "eb" commands. In the api directory is a full-fledged ruby-based implementation of very long command names that requires ruby, ruby-devel, and the JSON gem to function. In AWSDevTools is an extension of git commands that adds new AWS-specific scripts to the git command.

 

Activating “eb” Command Line

Edit your OS PATH variable to point to your unzipped download directory.    I changed my unzipped directory to be something shorter and put it in my Linux root directory.   To activate the eb command:

Add the path to the proper Linux Python directory (I am running 2.7.X).  My CentOS .bash_profile:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/

export PATH

export AWS_CREDENTIAL_FILE=$HOME/.ssh/aws.credentials

Save and reload .bash_profile into my current environment (next time you log out / in this will not be necessary… and yes, dot-space-dot is correct):

# . .bash_profile
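A quick sanity check that the PATH change took; running eb with no arguments should print its usage text instead of "command not found":

# which eb
# eb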

Activating Extended Command Line

The "extended" command line is the set of ruby-based scripts that give you some very long command names that do a lot of different things.

First make sure ruby, ruby-devel, and the JSON gem are installed. For CentOS:

# yum install ruby ruby-devel

# gem install json

Go create an AWS credentials file.

I put mine in my .ssh directory.  It looks like this (use your key IDs):

AWSAccessKeyId=<your-access-key>
AWSSecretKey=<your-secret-key>

Read the article on Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1 and setup a unique IAM account for this.  Using your main AWS login credential is not recommended.  If they get compromised…   well… just don’t do that.

Then edit your PATH using the same methodology as noted above, this time adding the api directory to your path:

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/:$HOME/aws-elb-2.6.4/api/bin/

export PATH

OK, now add this to your current running Linux environment:

# . .bash_profile

Test.

elastic-beanstalk-describe-applications

It will likely come back with “no applications found”.

Setup git Tools For AWS

Yup, same idea as above. Edit your PATH to include the git tool kit, but there is a slight twist here. Once you do that you will need to run the setup command noted below in each repository where you want AWS tools.

Edit your PATH and reload it with the dot-space-dot trick noted above.

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/:$HOME/aws-elb-2.6.4/api/bin/:$HOME/aws-elb-2.6.4/AWSDevTools/Linux

export PATH

New tricks… go set this up in your project directory.

Your project directory is where your WordPress PHP application resides and where you've created a git repository to manage it. You've already done your git init and committed stuff to the repository. Dig around this site or the Internet to find out how to do that if you're not sure. Again, I recommend the Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1 article as it has some special Elastic Beanstalk config files in it that will be used by ELB to connect RDS dynamically and set your WP Salt values.

For this to work you are going to need to have Python (same as with "eb" above) and the Python Boto library installed.

If you don’t have boto yet, you install it on CentOS with:

# sudo yum install python-boto

Assuming you already have your WordPress stuff in a git repo, go to that directory.

In my case /var/www/html holds my WordPress install that has been put into a git repo.

# cd /var/www/wpslp/

Now setup the git extensions using this command:

# AWSDevTools-RepositorySetup.sh

Test.

If everything is setup correctly you can check the git commands with something like:

# git aws.push

It will likely come back with an “Updating the AWS Elastic Beanstalk environment None…” message.

Either that or it will update the entire Internet, or at least the Amazon store, with your WordPress code.
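If you get the "environment None" message it usually means the repository does not yet know which application and environment to push to. As I recall the toolkit adds a git aws.config command for exactly that; it prompts for your access keys, region, application name, and environment name (the exact prompts may differ by toolkit version, so treat this as a sketch):

# git aws.config
# git aws.push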

 

Combined with the ELB Environment you set up from the previous article on the subject, you are ready to go conquer the world with your new git-deployed WordPress installation on ELB.

You can learn more about setting up the AWS-specific git parameters and how to use git with AWS and this toolkit in the AWS "Develop, Test, and Deploy" article.

Next I will figure out how to marry the two and will share my crib notes here.

 


Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1


I spent a good part of the past 24 hours trying to get a basic WordPress 4.2.2 deployment up-and-running on Elastic Beanstalk. It is part of the "homework" in preparing for the next generation of store location and directory technology I am working on. I must say that even for a tech geek that loves this sort of thing, it was a chore. This article is my "crib sheet" for the next time around. Hopefully I don't miss anything important as I wasted hours chasing my own rear-end trying to get some things to work.

I used the Deploying WordPress with AWS Elastic Beanstalk guide fairly extensively for this process. It is easy to miss steps, and it is not completely up-to-date with the screen shots and information, which makes some of it hard to follow the first time through. I will try to highlight the differences here when I catch them.

The steps here will get a BASIC non-scalable WordPress installation onto AWS. Part 2 will make this a scalable instance. If my assumptions are correct, which happens from time-to-time, I can later use command-line tools with git on my local dev box to push updated applications out to the server stack. If that works it will be Part 3 of the series on WP ELB Deployment.

Getting Started

The "shopping list" for getting started using my methodology. Some of these you can change to suit your needs, especially the "local dev" parts. Don't go setting all of this up yet; some things need to be set up in a specific way. This is just the general list of what you will be getting into. In addition to this list you will need lots and lots of patience. It may help to be bald; if not you will lose some hair during the process.

 

Part 1 : Installation

  • A local virtual machine.  I use VirtualBox.
  • A clean install of the latest WordPress code on that box, no need to run the setup, just the software install.
  • An AWS account.
  • A “WP Deployment” specific AWS user that has IAM rules to secure your deployment.
  • AWS Elastic  Beanstalk to manage the AWS Elastic Load Balancer and EC2 instances.

Part 2 : Scalability

  • AWS S3 bucket for storing static shared content (CSS rules, images, etc.)
  • AWS Elasticache for setting up Memcache for improved database performance.
  • AWS Cloudfront to improve the delivery of content across your front-end WordPress nodes.
  • AWS RDS to share the main WordPress data between your Elastic Beanstalk nodes.

Creating The “Application”

The first step is to create the web application.  In this case, WordPress.

I recommend creating a self-contained environment versus installing locally on your machine, but whatever you're comfortable with. I like to use VirtualBox, sometimes paired with Vagrant if I want to distribute the box to others, with a CentOS GUI development environment. Any flavor of OS will work as the application building is really just hacking some of the WordPress config files and creating an "environment variables" directory for AWS inside a standard WP install.

Got your box booted?  Great!

Fetch the latest download of WordPress.

Install it locally.

Remove wp-config-sample.php.

Create a new wp-config.php that looks like this:

<?php

// An AWS ELB friendly config file.

/** Detect if SSL is used. This is required since we are
terminating SSL either on CloudFront or on ELB */
if ((isset($_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO']) && $_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO'] == 'https')
    OR (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https'))
{$_SERVER['HTTPS']='on';}

/** The name of the database for WordPress */
define('DB_NAME', $_SERVER["RDS_DB_NAME"]);

/** MySQL database username */
define('DB_USER', $_SERVER["RDS_USERNAME"]);

/** MySQL database password */
define('DB_PASSWORD', $_SERVER["RDS_PASSWORD"]);

/** MySQL hostname */
define('DB_HOST', $_SERVER["RDS_HOSTNAME"]);

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

/**#@+
 * Authentication Unique Keys and Salts.
 * Change these to different unique phrases!
 */
define('AUTH_KEY',$_SERVER["AUTH_KEY"]);
define('SECURE_AUTH_KEY',$_SERVER["SECURE_AUTH_KEY"]);
define('LOGGED_IN_KEY',$_SERVER["LOGGED_IN_KEY"]);
define('NONCE_KEY',$_SERVER["NONCE_KEY"]);
define('AUTH_SALT',$_SERVER["AUTH_SALT"]);
define('SECURE_AUTH_SALT', $_SERVER["SECURE_AUTH_SALT"]);
define('LOGGED_IN_SALT', $_SERVER["LOGGED_IN_SALT"]);
define('NONCE_SALT', $_SERVER["NONCE_SALT"]);

/**#@-*/

/**
 * WordPress Database Table prefix.
 *
 * You can have multiple installations in one database if you give each a unique
 * prefix. Only numbers, letters, and underscores please!
 */
$table_prefix  = 'wp_';

/**
 * For developers: WordPress debugging mode.
 *
 * Change this to true to enable the display of notices during development.
 * It is strongly recommended that plugin and theme developers use WP_DEBUG
 * in their development environments.
 */
define('WP_DEBUG', false);
/* Multisite */
//define( 'WP_ALLOW_MULTISITE', true );

/* That's all, stop editing! Happy blogging. */

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
        define('ABSPATH', dirname(__FILE__) . '/');

/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');

Do not move this wp-config.php file out of the root directory. Moving it is a common security practice, but then it will be missing from your AWS deployment. There are probably ways to secure this by changing your target destination when setting up AWS CloudFront, but that is beyond the scope of this article.

Settings like $_SERVER["RDS_USERNAME"] will come from the AWS Elastic Beanstalk environment you will create later. This is set dynamically by AWS when you attach the RDS instance to the application environment. This ensures the persistent data for WordPress, things like your dynamic site content including pages, posts, users, order information, etc., is shared on a single highly-reliable database server and each new node in your scalable app pulls from the same data set.

Settings for the “Salt” come from a YAML-style config file you will add next.     This is bundled with the WordPress “source” for the application to ensure the salts are the same across each node of your WordPress deployment.     This ensures consistency when your web app scales, firing up server number 3, 4, and 5 while under load.

Create a directory in the root WordPress folder named .ebextensions.

Fetch new salts from WordPress.

Create a new file named keys.conf in the .ebextensions directory that looks like this, but using YOUR salts:

option_settings:
- option_name: AUTH_KEY
  value: '0VghKxxxxxn?%H$}jc5.-y1U%L)*&Ha/?)To<E>vTB9ukbd-FNoq^+.4A+I1Y/zp'
- option_name: SECURE_AUTH_KEY
  value: 'z 7)&E~NjioIREE@g+TKs-~yO-P)uq2Zm&98Zw>GK_rYb_}a,C#HD[K98ALxxxxx'
- option_name: LOGGED_IN_KEY
  value: 'yq@K{i=z(xxxxxm1VOi80~.H?[,h+F+_wua]I:z-YZF|a-vEV[n/6pRBlw+qAe^q'
- option_name: NONCE_KEY
  value: 'Bq=kbD|H#iMt5#[d[qURMP8C}xxxxxf[WaI6.oF5=r1h#:E?BZ-L28,7x~@oZw#7'
- option_name: AUTH_SALT
  value: 'O;4uq817 CSs3-ZAUY>e%#xxxxx<:u~=Is4d6:CI3io;aL<h]+x~;S_fc3E oEB1_'
- option_name: SECURE_AUTH_SALT
  value: 'nF94Rasp-0iaxxxxxm:|e82*M9R!y>% b68[oN|?_&4MRbl.)n8uB-ph|*qIPq|e'
- option_name: LOGGED_IN_SALT
  value: '&Ah^OIb<`xxxxx+lKV=zFER_^`+gA%.UWCIy|fJ+RfKiYKBP^&,[|%6K<%C[eU]n'
- option_name: NONCE_SALT
  value: 'ZiKejG|xxxxx k3>nr)~AN5?*hd!aO-)E^fR^^!_PR1n[oq{??F`,NQmdfE2Mj:`'

Zip up your application to make it ready for deployment.

Do NOT start from the parent directory. The zip should start from the WordPress root directory. On Linux I used this command from the main WordPress directory where wp-config.php lives:
zip -r ../wordpress-site-for-elb.zip .

Create The Elastic Beanstalk Environment

Login to the AWS Console.

Go to Elastic Beanstalk.

Go to ELB Create New Application.

 

AWS ELB application info

Select Create Web Server.

AWS ELB Web Server Environment

Select the default permissions (I didn’t have a choice here).

AWS ELB Permissions

Set the Environment to PHP and Load Balancing, auto scaling.

AWS ELB Environment Type

Upload your .zip file you created above as the source for the application.
Leave Deployment Limits at their defaults.
As a side note, this will create an application that you can later use for other environments, making it easy to launch new sites with their own RDS and Cloudfront settings but using the same WordPress setup.

AWS ELB Application Version

Set your new Environment Name.
If your application name was unique you can use the default.
If your application name is “WordPress” it is likely in use on ELB, try something more unique.

AWS ELB Environment Name

Tell ELB to create an RDS instance for you.
I chose not to put this in a VPC, which is the default.
The guide I linked to above shows a non-VPC setup, but then gives instructions on a VPC deployment. This caused issues.
Some instance sizes for both RDS and the EC2 instance ELB creates will ONLY run in a VPC (anything in the "t" class).
You will need to choose the larger "m-size" instances for RDS and EC2, otherwise the ELB setup will fail after 15-20 minutes of "spinning its wheels".

AWS ELB Create RDS not in VPC

Set your configuration details.

Choose an instance type of m*. I chose m3.medium the first time around, but m1.small should suffice for a small WP site.

Select an EC2 key pair to be able to connect with SSH. If you did not create one on your MAIN AWS login, go to the IAM panel and do that now. Save the private key on your local box and make a backup of it.

The email address is not required, but I like to know if the environment changed, especially if I did not change it.

Set the application health check URL to
HTTP:80/readme.html

Uncheck rolling updates.

Defaults for the rest will work.

AWS ELB Configuration Details

You can set some tags for the environment, but it is not necessary. Supposedly they help in reporting on account usage, but I’m not that far along yet.

AWS ELB Tags

Setup your RDS instance.
Again, choose an m* instance as the t* instances will not boot unless you are in a VPC.
If you choose the wrong instance ELB will "sit and spin" for what seems like a decade before booting to a "gray state", which is AWS terminology for half-ass and useless.
If you cannot tell, this was the most frustrating part of the setup as I tried SEVERAL different instance classes.    Each time the ELB would hang and then take forever to delete.

Enter your DB username and password.
They will be auto-configured by the wp-config.php hack you made earlier. I do recommend, however, saving these somewhere in case you need to connect to MySQL remotely. I hosed my home and siteurl values and needed to go to my local dev box, fire up the MySQL command line, and update the wp_options table after I booted my application in ELB. Having the username/password for the DB is helpful for that type of thing.
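For reference, fixing those values remotely looks something like this; the endpoint, user, and database names are placeholders you will substitute from your own RDS setup:

mysql -h <your-rds-endpoint> -u <your-db-user> -p <your-db-name> \
 -e "UPDATE wp_options SET option_value='http://your-elb-url' WHERE option_name IN ('siteurl','home');"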

AWS ELB RDS Config

Review your settings, launch and wait.

Reviewing ELB Settings

When you are done your Elastic Beanstalk should look something like this:

AWS ELB Web Tier Final Config
AWS ELB Data and Network Final Config

Useful Resources

Deploying WordPress with AWS Elastic Beanstalk – single or multiple zone, fully scalable, cached.

Deploying a WP install with git on ELB – single zone and may not scale.

 


Google Drive – Did They Hear Me?

A few weeks ago I was on my Google Drive organizing stuff from several projects, prospective business ventures, and my WordPress plugins. I have a half-dozen "things" going on these days and need to keep my notes, spreadsheets, flow diagrams, and other materials organized. I created folders and moved stuff around. To make it even easier to find things I assigned a color code to each folder, as visual clues make for faster navigation once you train yourself on things like colors and shapes. This is why good icon design is paramount, given the mobile and desktop UX that proliferates our lives these days.

However, at Google Drive I noticed that the colors I assigned to various project folders only show up as a colored box on the drop down menus as well as a few other discrete places. I decided to drop Google a note via the web feedback form. "Hey Google. Why don't you color code the FOLDERS themselves instead of keeping them all gray? Should be easy enough to do. You obviously are passing color code data to the UX already." Something to that effect.

I never got a response, but today when I went to my Google Drive I saw EXACTLY what I had requested as part of the updated UX.

google drive colored folders

Unfortunately Google never responded to my suggestion other than their typical bot responder. Did they look at my suggestion and send it to a developer who said "Great idea, that will take 2 seconds to put in place" and baked it into the experience? Or did they already have this planned for months? Approved by committee after a band of meetings to do a UX review analysis and full UX studies? I'd like to think Google is still able to operate in an Agile fashion, fast & nimble and responding to input quickly. Or have they become a typical corporate giant where it takes a year to get even a single pixel moved after design, analysis, re-design, and several board meetings before anything happens?

I'm not sure if my request and the 3 weeks to seeing it go live was just a coincidence. Probably. But I'll fool myself into thinking that maybe I was the 10th request they got this month for that feature and some dev just "threw it in there". If only Google would communicate with the user base, or even just the paid business apps accounts (yeah, I pay for gMail… I know, right?), and give us some clue that someone is listening. Whenever I communicate with Google I hear the "on hold music" playing in my head… "Is anybody out there? Just nod if you can hear me…" – Pink Floyd.

Regardless, I’m happy that my Google Drive is no longer color blind.   Thank you Google!

 

 

 


Backing Up A Linux Directory To The Cloud

We use Amazon S3 to backup a myriad of directories and data dumps from our local development and public live servers.  The storage is cheap, easily accessible, and is in a remote third party location with decent resilience.  The storage is secure unless you share your bucket information and key files with a third party.

In this article we explore the task of backing up a Linux directory via the command line to an S3 bucket.   This article assumes you’ve signed up for Amazon Web Services (AWS) and have S3 capabilities enabled on your account.  That can all be done via the simple web interface at Amazon.

Step 1 : Get s3tools Installed

The easiest way to interface with Amazon from the command line is to install the open source s3tools application toolkit from the web. You can get the toolkit from http://www.s3tools.org/. If you are on a Redhat based distribution you can create the yum repo file and simply do a yum install. For all other distributions you'll need to fetch and build from source (actually running python setup.py install) after you download.

Once you have s3cmd installed you will need to configure it. Run the following command (note you will need your access key and secret key from your Amazon AWS account):
s3cmd --configure
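If the keys are good, listing your buckets is an easy smoke test:

s3cmd ls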

Step 2 : Create A Simple Backup Script

Go to the directory you wish to backup and create the following script named backthisup.sh:

#!/bin/sh
SITENAME='mysite'
# Create a tarzip of the directory
echo 'Making tarzip of this directory...'
tar cvz --exclude backup.tgz -f backup.tgz ./*
# Make the s3 bucket (ignored if already there)
echo 'Create bucket if it is not there...'
s3cmd mb s3://backup.$SITENAME
# Put that tarzip we just made on s3
echo 'Storing files on s3...'
s3cmd put backup.tgz s3://backup.$SITENAME

Note that this is a simple backup script.  It tarzips the current directory and then pushes it to the s3 bucket.  This is good for a quick backup but not the best solution for ongoing repeated backups.  The reason is that most of the time you will want to perform a differential backup, only putting the stuff that is changed or newly created into the s3 bucket. AWS charges you for every put and get operation and for bandwidth.  Granted the fees are low, but every penny counts.

Next Steps : Differential Backups

If you don't want to always push all your files to the server every time you run the script you can do a differential backup. This is easily accomplished with S3Tools by using the sync command instead of put. We leave that to a future article.
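That said, a minimal sketch of a differential version of the same backup, reusing the bucket naming from the script above, would look something like this:

#!/bin/sh
SITENAME='mysite'
# sync only new or changed files to the bucket; skip the tarzip from the old script if present
s3cmd sync --exclude 'backup.tgz' ./ s3://backup.$SITENAME/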


Custom Site & Store Builder with Energy Inc.

The Energy Detective (TED) is a consumer-based product that helps home users track their energy usage on a per-device or cross-household level. When Energy Inc., the makers of TED, needed to upgrade their site with an easy-to-update content management system (CMS) and the addition of a custom storefront, they came to Cyber Sprocket Labs.

Within months we had ported their old static-page driven site to our new custom site builder. They could now easily update their own content without getting developers involved, and better yet – the system protected them from inadvertently breaking their site design. The staff at Energy Inc. soon became experts at the system and added new content as well as new product models to the site.

The site also started with a simple storefront module. It allowed Energy Inc. to upload new products and track inventory levels to ensure customers knew when an item was put on backorder. The new storefront module allowed Energy Inc. to easily show and sell their wares while automating part of the order process on the back end.

Soon the orders started to roll in and Energy Inc. needed more sophisticated order tracking and management. Updates were made to add automated interfaces with FedEx for real-time shipping quotes anywhere in the US and its territories. New order search and tracking features were added so that Energy Inc. knew what shipped, what was backordered, and what was being returned under their return merchandise authorization policy.

Energy Inc’s TED product was doing well, and the media started to notice. So did Google. As one of the first partners in Google’s new energy management program, Energy Inc. realized that their shared Linux server was not going to be able to handle the new influx of traffic. Luckily, Cyber Sprocket Labs had already been working on the Amazon Web Services cloud for more than 18 months. We knew our way around the system and helped Energy Inc. navigate the maze of cloud computing and served as a guide to the new platform. Energy Inc. decided to make the move to the nearly infinite scalability and on-demand compute environment of cloud computing.

Cyber Sprocket Labs helped migrate Energy Inc. over to the Amazon Cloud in less than a week. There was no downtime, while at the same time the move provided a significant boost in processing power… just in time for Google's big announcement.

Congratulations on your success, Dolph! Glad we could be there to help get your web services off the ground!

Technical Overview

Services Provided

  • Custom website builder
  • Custom shopping cart
  • Custom order processing and management system
  • Porting to Amazon Web Services Cloud

Platform Details