
Setting Up AWS Elastic Beanstalk Tools On Linux


AWS provides an “officially unsupported” set of scripts for Windows, OS X, and Linux that help with managing and deploying your AWS Elastic Beanstalk applications. This can be useful, as I could not find a simple way to SSH into my ELB-based EC2 instance using standard methodologies. I’m sure I missed something, but deploying and updating via git commands is going to be easier and is my preferred production method; might as well go there now.

Download and install the AWS Elastic Beanstalk Command Line Tool.

Unzip the file.

You will now have a directory that contains three types of command sets. In the appropriately-named eb subdirectory is a series of OS command-line scripts driven by the “eb” command. In the api directory is a full-fledged Ruby-based implementation with very long command names that requires ruby, ruby-devel, and the JSON gem to function. In AWSDevTools is an extension of git commands that adds new AWS-specific scripts to the git command.
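For reference, the top level of the unzipped toolkit contains the three directories referenced by the PATH entries used below (the version number is from my download and will vary):

ls aws-elb-2.6.4/
api/  AWSDevTools/  eb/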


Activating “eb” Command Line

Edit your OS PATH variable to point to your unzipped download directory. I renamed my unzipped directory to something shorter and put it in my home directory (note the $HOME-based paths below). To activate the eb command:

Add the path to the proper Linux Python directory (I am running 2.7.X).  My CentOS .bash_profile:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/
export PATH

export AWS_CREDENTIAL_FILE=$HOME/.ssh/aws.credentials

Save and reload .bash_profile into my current environment (next time you log out / in this will not be necessary… and yes, dot-space-dot is correct):

# . .bash_profile
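With the PATH reloaded, the eb command should resolve from anywhere. A minimal first session with this 2.x-era toolkit looks something like the sketch below; eb init prompts interactively for your credentials, region, and environment settings, and command names may differ slightly in other versions:

eb init      # one-time interactive setup: keys, region, solution stack
eb start     # create and launch the Elastic Beanstalk environment
eb status    # report the environment's current health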

Activating Extended Command Line

The “extended” command line consists of the Ruby-based scripts, which give you some very long command names that do a lot of different things.

First make sure ruby, ruby-devel, and the JSON gem are installed. For CentOS:

# yum install ruby ruby-devel

# gem install json

Go create an AWS credentials file.

I put mine in my .ssh directory. It looks like this (use your own keys):

AWSAccessKeyId=<your-access-key>
AWSSecretKey=<your-secret-key>
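Since this file holds your secret key, lock the permissions down so only your user can read it:

chmod 600 $HOME/.ssh/aws.credentials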

Read the article on Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1 and set up a unique IAM account for this. Using your main AWS login credentials is not recommended. If they get compromised… well… just don’t do that.

Then edit your PATH using the same methodology as noted above, this time adding the api directory to your path:

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/:$HOME/aws-elb-2.6.4/api/bin/

export PATH

OK, now add this to your current running Linux environment:

# . .bash_profile

Test.

elastic-beanstalk-describe-applications

It will likely come back with “no applications found”.

Setup git Tools For AWS

Yup, same idea as above. Edit your PATH to include the git toolkit, but there is a slight twist here: once you do that, you will need to run the setup command noted below in each repository where you want the AWS tools.

Edit your PATH and reload it with the dot-space-dot trick noted above.

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/aws-elb-2.6.4/eb/linux/python2.7/:$HOME/aws-elb-2.6.4/api/bin/:$HOME/aws-elb-2.6.4/AWSDevTools/Linux

export PATH

New tricks… go set this up in your project directory.

Your project directory is where your WordPress PHP application resides and where you’ve created a git repository to manage it. You’ve already done your git init and committed stuff to the repository. Dig around this site or the Internet to find out how to do that if you’re not sure. Again, I recommend the Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1 article, as it has some special Elastic Beanstalk config files in it that will be used by ELB to connect RDS dynamically and set your WP salt values.

For this to work you are going to need to have Python (same as with “eb” above) and the Python Boto library installed.

If you don’t have boto yet, you can install it on CentOS with:

# sudo yum install python-boto

Assuming you already have your WordPress stuff in a git repo, go to that directory.

In my case /var/www/wpslp holds my WordPress install that has been put into a git repo.

# cd /var/www/wpslp/

Now set up the git extensions using this command:

# AWSDevTools-RepositorySetup.sh
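That script wires the aws.* subcommands into the current repository. The toolkit also provides a git aws.config command that interactively stores your keys, region, application name, and environment name so git aws.push knows where to deploy. A rough sketch of that session, with the prompts paraphrased:

git aws.config
# prompts for: AWS Access Key, AWS Secret Key,
# AWS Region, Elastic Beanstalk application name,
# and Elastic Beanstalk environment name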

Test.

If everything is set up correctly you can check the git commands with something like:

# git aws.push

It will likely come back with an “Updating the AWS Elastic Beanstalk environment None…” message.

Either that or it will update the entire Internet, or at least the Amazon store, with your WordPress code.


Combined with the ELB environment you set up in the previous article on the subject, you are ready to go conquer the world with your new git-deployed WordPress installation on ELB.

You can learn more about setting up the AWS-specific git parameters and how to use git with AWS and this toolkit in the Develop, Test, and Deploy article.

Next I will figure out how to marry the two and will share my crib notes here.



Deploying WordPress 4.2.2 On Elastic Beanstalk, Part 1


I spent a good part of the past 24 hours trying to get a basic WordPress 4.2.2 deployment up-and-running on Elastic Beanstalk. It is part of the “homework” in preparing for the next generation of store location and directory technology I am working on. I must say that even for a tech geek that loves this sort of thing, it was a chore. This article is my “crib sheet” for the next time around. Hopefully I don’t miss anything important, as I wasted hours chasing my own rear-end trying to get some things to work.

I used the Deploying WordPress with AWS Elastic Beanstalk guide fairly extensively for this process. It is easy to miss steps, and it is not completely up-to-date with its screen shots and information, which makes some of it hard to follow the first time through. I will try to highlight the differences here when I catch them.

The steps here will get a BASIC non-scalable WordPress installation onto AWS. Part 2 will make this a scalable instance. If my assumptions are correct, which happens from time-to-time, I can later use command-line tools with git on my local dev box to push updated applications out to the server stack. If that works it will be Part 3 of the series on WP ELB deployment.

Getting Started

The “shopping list” for getting started using my methodology. Some of these you can change to suit your needs, especially the “local dev” parts. Don’t go setting all of this up yet; some things need to be set up a specific way. This is just the general list of what you will be getting into. In addition to this list you will need lots and lots of patience. It may help to be bald; if not, you will lose some hair during the process.


Part 1 : Installation

  • A local virtual machine.  I use VirtualBox.
  • A clean install of the latest WordPress code on that box, no need to run the setup, just the software install.
  • An AWS account.
  • A “WP Deployment” specific AWS user that has IAM rules to secure your deployment.
  • AWS Elastic Beanstalk to manage the AWS Elastic Load Balancer and EC2 instances.

Part 2 : Scalability

  • AWS S3 bucket for storing static shared content (CSS rules, images, etc.)
  • AWS ElastiCache for setting up Memcache for improved database performance.
  • AWS CloudFront to improve the delivery of content across your front-end WordPress nodes.
  • AWS RDS to share the main WordPress data between your Elastic Beanstalk nodes.

Creating The “Application”

The first step is to create the web application.  In this case, WordPress.

I recommend creating a self-contained environment versus installing locally on your machine, but use whatever you’re comfortable with. I like to use VirtualBox, sometimes paired with Vagrant if I want to distribute the box to others, with a CentOS GUI development environment. Any flavor of OS will work, as the application building is really just hacking some of the WordPress config files and creating an “environment variables” directory for AWS inside a standard WP install.

Got your box booted?  Great!

Fetch the latest download of WordPress.

Install it locally.
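On a Linux dev box, fetching and unpacking the standard wordpress.org tarball looks like this:

cd /var/www
wget https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz    # unpacks into ./wordpress/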

Remove wp-config-sample.php.

Create a new wp-config.php that looks like this:

<?php

// An AWS ELB friendly config file.

/** Detect if SSL is used. This is required since we are
 *  terminating SSL either on CloudFront or on ELB. */
if ((isset($_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO']) && $_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO'] === 'https')
    || (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https')) {
    $_SERVER['HTTPS'] = 'on';
}

/** The name of the database for WordPress */
define('DB_NAME', $_SERVER["RDS_DB_NAME"]);

/** MySQL database username */
define('DB_USER', $_SERVER["RDS_USERNAME"]);

/** MySQL database password */
define('DB_PASSWORD', $_SERVER["RDS_PASSWORD"]);

/** MySQL hostname */
define('DB_HOST', $_SERVER["RDS_HOSTNAME"]);

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

/**#@+
 * Authentication Unique Keys and Salts.
 * Change these to different unique phrases!
 */
define('AUTH_KEY',         $_SERVER["AUTH_KEY"]);
define('SECURE_AUTH_KEY',  $_SERVER["SECURE_AUTH_KEY"]);
define('LOGGED_IN_KEY',    $_SERVER["LOGGED_IN_KEY"]);
define('NONCE_KEY',        $_SERVER["NONCE_KEY"]);
define('AUTH_SALT',        $_SERVER["AUTH_SALT"]);
define('SECURE_AUTH_SALT', $_SERVER["SECURE_AUTH_SALT"]);
define('LOGGED_IN_SALT',   $_SERVER["LOGGED_IN_SALT"]);
define('NONCE_SALT',       $_SERVER["NONCE_SALT"]);

/**#@-*/

/**
 * WordPress Database Table prefix.
 *
 * You can have multiple installations in one database if you give each a unique
 * prefix. Only numbers, letters, and underscores please!
 */
$table_prefix  = 'wp_';

/**
 * For developers: WordPress debugging mode.
 *
 * Change this to true to enable the display of notices during development.
 * It is strongly recommended that plugin and theme developers use WP_DEBUG
 * in their development environments.
 */
define('WP_DEBUG', false);

/* Multisite */
//define( 'WP_ALLOW_MULTISITE', true );

/* That's all, stop editing! Happy blogging. */

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
        define('ABSPATH', dirname(__FILE__) . '/');

/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');

Do not move this wp-config.php file out of the root directory. Relocating it is a common security practice, but then it will be missing from your AWS deployment. There are probably ways to secure this by changing your target destination when setting up AWS CloudFront, but that is beyond the scope of this article.

Settings like $_SERVER['RDS_USERNAME'] will come from the AWS Elastic Beanstalk environment you will create later. These are set dynamically by AWS when you attach the RDS instance to the application environment. This ensures the persistent data for WordPress, things like your dynamic site content including pages, posts, users, and order information, is shared on a single highly-reliable database server, and each new node in your scalable app pulls from the same data set.

Settings for the salts come from a YAML-style config file you will add next. This is bundled with the WordPress “source” for the application to ensure the salts are the same across each node of your WordPress deployment. This ensures consistency when your web app scales, firing up server number 3, 4, and 5 while under load.

Create a directory in the root WordPress folder named .ebextensions.

Fetch new salts from WordPress.
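WordPress.org provides a generator endpoint for fresh salts. Note that it returns PHP define() lines, so you will need to transcribe the values into the YAML format shown below:

curl https://api.wordpress.org/secret-key/1.1/salt/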

Create a new file named keys.conf in the .ebextensions directory that looks like this, but using YOUR salts:

option_settings:
- option_name: AUTH_KEY
  value: '0VghKxxxxxn?%H$}jc5.-y1U%L)*&Ha/?)To<E>vTB9ukbd-FNoq^+.4A+I1Y/zp'
- option_name: SECURE_AUTH_KEY
  value: 'z 7)&E~NjioIREE@g+TKs-~yO-P)uq2Zm&98Zw>GK_rYb_}a,C#HD[K98ALxxxxx'
- option_name: LOGGED_IN_KEY
  value: 'yq@K{i=z(xxxxxm1VOi80~.H?[,h+F+_wua]I:z-YZF|a-vEV[n/6pRBlw+qAe^q'
- option_name: NONCE_KEY
  value: 'Bq=kbD|H#iMt5#[d[qURMP8C}xxxxxf[WaI6.oF5=r1h#:E?BZ-L28,7x~@oZw#7'
- option_name: AUTH_SALT
  value: 'O;4uq817 CSs3-ZAUY>e%#xxxxx<:u~=Is4d6:CI3io;aL<h]+x~;S_fc3E oEB1_'
- option_name: SECURE_AUTH_SALT
  value: 'nF94Rasp-0iaxxxxxm:|e82*M9R!y>% b68[oN|?_&4MRbl.)n8uB-ph|*qIPq|e'
- option_name: LOGGED_IN_SALT
  value: '&Ah^OIb<`xxxxx+lKV=zFER_^`+gA%.UWCIy|fJ+RfKiYKBP^&,[|%6K<%C[eU]n'
- option_name: NONCE_SALT
  value: 'ZiKejG|xxxxx k3>nr)~AN5?*hd!aO-)E^fR^^!_PR1n[oq{??F`,NQmdfE2Mj:`'

Zip up your application to make it ready for deployment.

Do NOT start from the parent directory. The zip should start from the WordPress root directory. On Linux I used this command from the main WordPress directory where wp-config.php lives:
zip -r ../wordpress-site-for-elb.zip .
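A quick sanity check that the archive is rooted correctly, with wp-config.php and .ebextensions at the top level rather than nested under a folder:

unzip -l ../wordpress-site-for-elb.zip | head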

Create The Elastic Beanstalk Environment

Login to the AWS Console.

Go to Elastic Beanstalk.

Go to ELB Create New Application.

 

AWS ELB application info

Select Create Web Server.

AWS ELB Web Server Environment

Select the default permissions (I didn’t have a choice here).

AWS ELB Permissions

Set the Environment to PHP and Load Balancing, auto scaling.

AWS ELB Environment Type

Upload the .zip file you created above as the source for the application.
Leave Deployment Limits at their defaults.
As a side note, this will create an application that you can later reuse for other environments, making it easy to launch new sites with their own RDS and CloudFront settings but using the same WordPress setup.

AWS ELB Application Version

Set your new Environment Name.
If your application name was unique you can use the default.
If your application name is “WordPress” it is likely already in use on ELB; try something more unique.

AWS ELB Environment Name

Tell ELB to create an RDS instance for you.
I chose not to put this in a VPC, which is the default.
The guide I linked to above shows a non-VPC setup but then gives instructions for a VPC deployment. This caused issues.
Some instance sizes for both RDS and the EC2 instance ELB creates will ONLY run in a VPC (anything at the “t” level).
You will need to choose the larger “m-size” instances for RDS and EC2, otherwise the ELB setup will fail after 15-20 minutes of “spinning its wheels”.

AWS ELB Create RDS not in VPC

Set your configuration details.

Choose an instance type of m*. I chose m3.medium the first time around, but m1.small should suffice for a small WP site.

Select an EC2 key pair to be able to connect with SSH. If you did not create one on your MAIN AWS login, go to the EC2 console and do that now. Save the private key on your local box and make a backup of it.

The email address is not required, but I like to know if the environment changed, especially if I did not change it.

Set the application health check URL to
HTTP:80/readme.html

Uncheck rolling updates.

Defaults for the rest will work.

AWS ELB Configuration Details

You can set some tags for the environment, but it is not necessary. Supposedly they help in reporting on account usage, but I’m not that far along yet.

AWS ELB Tags

Set up your RDS instance.
Again, choose an m* instance, as the t* instances will not boot unless you are in a VPC.
If you choose the wrong instance, ELB will “sit and spin” for what seems like a decade before booting to “gray state”, which is AWS terminology for half-ass and useless.
If you cannot tell, this was the most frustrating part of the setup, as I tried SEVERAL different instance classes. Each time the ELB would hang and then take forever to delete.

Enter your DB username and password.
They will be auto-configured by the wp-config.php hack you made earlier. I do recommend, however, saving these somewhere in case you need to connect to MySQL remotely. I hosed my siteurl and home values and needed to go to my local dev box, fire up the MySQL command line, and update the wp_options table after I booted my application in ELB. Having the username/password for the DB is helpful for that type of thing.
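For the record, the siteurl/home repair looks something like this from any box that can reach the RDS endpoint (hostname, user, database name, and URL are all placeholders here; check your own RDS settings):

mysql -h <your-rds-endpoint> -u <your-db-user> -p <your-db-name> <<'SQL'
UPDATE wp_options
   SET option_value = 'http://your-env.elasticbeanstalk.com'
 WHERE option_name IN ('siteurl', 'home');
SQL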

AWS ELB RDS Config

Review your settings, launch and wait.

Reviewing ELB Settings

When you are done your Elastic Beanstalk should look something like this:

AWS ELB Web Tier Final Config
AWS ELB Data and Network Final Config

Useful Resources

Deploying WordPress with AWS Elastic Beanstalk – single or multiple zone, fully scalable, cached.

Deploying a WP install with git on ELB – single zone and may not scale.



Backing Up A Linux Directory To The Cloud

We use Amazon S3 to back up a myriad of directories and data dumps from our local development and public live servers. The storage is cheap, easily accessible, and in a remote third-party location with decent resilience. The storage is secure unless you share your bucket information and key files with a third party.

In this article we explore the task of backing up a Linux directory via the command line to an S3 bucket.   This article assumes you’ve signed up for Amazon Web Services (AWS) and have S3 capabilities enabled on your account.  That can all be done via the simple web interface at Amazon.

Step 1 : Get s3tools Installed

The easiest way to interface with Amazon from the command line is to install the open source s3tools application toolkit. You can get the toolkit from http://www.s3tools.org/. If you are on a Red Hat based distribution you can create the yum repo file and simply do a yum install. For all other distributions you’ll need to fetch the source and build it (actually just running python setup.py install) after you download.
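On CentOS the repo-file approach boils down to something like this (the repo URL follows the pattern published on s3tools.org; verify it against your distribution and release):

cd /etc/yum.repos.d
wget http://s3tools.org/repo/RHEL_6/s3tools.repo
yum install s3cmd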

Once you have s3cmd installed you will need to configure it. Run the following command (note: you will need your access key and secret key from your Amazon AWS account):
s3cmd --configure

Step 2 : Create A Simple Backup Script

Go to the directory you wish to back up and create the following script named backthisup.sh:

#!/bin/sh
SITENAME='mysite'
# Create a tarzip of the directory
echo 'Making tarzip of this directory...'
tar cvz --exclude backup.tgz -f backup.tgz ./*
# Make the s3 bucket (ignored if already there)
echo 'Create bucket if it is not there...'
s3cmd mb s3://backup.$SITENAME
# Put that tarzip we just made on s3
echo 'Storing files on s3...'
s3cmd put backup.tgz s3://backup.$SITENAME

Note that this is a simple backup script.  It tarzips the current directory and then pushes it to the s3 bucket.  This is good for a quick backup but not the best solution for ongoing repeated backups.  The reason is that most of the time you will want to perform a differential backup, only putting the stuff that is changed or newly created into the s3 bucket. AWS charges you for every put and get operation and for bandwidth.  Granted the fees are low, but every penny counts.

Next Steps : Differential Backups

If you don’t want to push all of your files to the server every time you run the script, you can do a differential backup. This is easily accomplished with S3Tools by using the sync command instead of the put command. We will leave the details to a future article, but a minimal sketch follows.
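As a teaser, a minimal sync-based version of the upload step might look like this; s3cmd sync compares local files against the bucket and transfers only what has changed:

s3cmd sync ./ s3://backup.mysite/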


Custom Site & Store Builder with Energy Inc.

The Energy Detective (TED) is a consumer-based product that helps home users track their energy usage on a per-device or whole-household level. When Energy Inc, the makers of TED, needed to upgrade their site with an easy-to-update content management system (CMS) and the addition of a custom storefront, they came to Cyber Sprocket Labs.

Within months we had ported their old static-page driven site to our new custom site builder. They could now easily update their own content without getting developers involved, and better yet – the system protected them from inadvertently breaking their site design. The staff at Energy Inc. soon became experts at the system and added new content as well as new product models to the site.

The site also started with a simple storefront module. It allowed Energy Inc. to upload new products and track inventory levels to ensure customers knew when an item was put on backorder. The new storefront module allowed Energy Inc. to easily show and sell their wares while automating part of the order process on the back end.

Soon the orders started to roll in and Energy Inc. needed more sophisticated order tracking and management. Updates were made to add automated interfaces with FedEx for real-time shipping quotes anywhere in the US and its territories. New order search and tracking features were added so that Energy Inc. knew what shipped, what was backordered, and what was being returned under their return merchandise authorization policy.

Energy Inc’s TED product was doing well, and the media started to notice. So did Google. As one of the first partners in Google’s new energy management program, Energy Inc. realized that their shared Linux server was not going to be able to handle the new influx of traffic. Luckily, Cyber Sprocket Labs had already been working on the Amazon Web Services cloud for more than 18 months. We knew our way around the system and helped Energy Inc. navigate the maze of cloud computing and served as a guide to the new platform. Energy Inc. decided to make the move to the nearly infinite scalability and on-demand compute environment of cloud computing.

Cyber Sprocket Labs helped migrate Energy Inc. over to the Amazon cloud in less than a week, with no downtime, while at the same time providing a significant boost in processing power… just in time for Google’s big announcement.

Congratulations on your success, Dolph! Glad we could be there to help get your web services off the ground!

Technical Overview

Services Provided

  • Custom website builder
  • Custom shopping cart
  • Custom order processing and management system
  • Porting to Amazon Web Services Cloud

Platform Details