Posted on

SLP versions: the Power and Experience add-ons at version 4.6.1 update the import functions


Store Locator Plus 4.6.1 Highlights

• Allow add-ons to load JS on a per-admin-tab basis. Reduces browser overhead and memory footprint on admin pages.

• Simplify and improve the new option manager. More consistent option handling, more secure option management, better performance.

• Checkbox on/off validation for all add-ons – fixes instant checkbox saving on the admin panel for some add-on options

• All language files are now pulled from the MySLP website.

• Missing translations? See MySLP and add your translation updates there.

Change Log for SLP Power Add-on


Configuring Apache 2.4 Connections For WordPress Sites

Recently I upgraded my web server to PHP 5.6.14. Along the way the process managed to obliterate my Apache web server configuration files. Luckily it saves them during the upgrade process, but one thing I forgot to restore was the settings that help Apache manage memory. Friday night around midnight, because this stuff ALWAYS happens when you’re asleep… the server crashed. Since it was out of memory, with a bazillion people trying to surf the site, every time I restarted the server I could not log in fast enough to get a connection and fix the problem.

Eventually I had to disconnect my AWS public IP address, connect to a private address with SSH, and build the proper Apache configuration file to ensure Apache didn’t go rogue and try to take over the Internet from my little AWS web server.

Here are my cheat-sheet notes about configuring Apache 2.4 so that it starts asking site visitors to “hold on a second” when memory starts getting low. That is much nicer than grabbing more memory than it should and just crashing EVERYTHING.

My Configuration File

I put this new configuration file in the /etc/httpd/conf.d directory and named it mpm_prefork.conf. That should help prevent it from going away on a future Apache upgrade. This configuration is for an m3.large server running with 7.4GB of RAM with a typical WordPress 4.4 install with WooCommerce and other plugins installed.

# prefork MPM for Apache 2.4
#
# use httpd -V to determine which MPM module is in use.
#
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxRequestWorkers for the lifetime of the server
#
# MaxRequestWorkers: maximum number of server processes allowed to start
#
#
# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
#
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
#
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
#
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
#
# ServerLimit = sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process
# MaxRequestWorkers = number of simultaneous child processes to serve requests; to increase it you must also increase ServerLimit
#
# If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle,
# Apache httpd may not start or the system may become unstable.
#
# MaxConnectionsPerChild = how many requests are served before the child process dies and is restarted
# find your average requests served per day and divide by average servers run per day
# a good starting default for most servers is 1000 requests
#
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80
#
#

ServerLimit 64
MaxRequestWorkers 64
MaxConnectionsPerChild 2400

The Directives

With Apache 2.4 you only need to adjust three directives: ServerLimit, MaxRequestWorkers (renamed from MaxClients in earlier versions), and MaxConnectionsPerChild (renamed from MaxRequestsPerChild).

ServerLimit / MaxRequestWorkers

ServerLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. MaxRequestWorkers is the number of simultaneous child processes to serve requests. This seems a bit redundant, but it is an effect of using the prefork MPM module which is a threadless design. That means it runs a bit faster but eats up a bit more memory. It is the default mode for Apache running on Amazon Linux. I prefer it as I like stability over performance, and some older web technologies don’t play well with multi-threaded designs. If I were going to go with a multi-threaded environment I’d switch to nginx. For this setup, setting ServerLimit and MaxRequestWorkers to the same value is fine. This says “don’t ever run more than this many web server processes at one time”.

In essence this is the total number of simultaneous web connections you can serve at one time. What does that mean? With the older HTTP/1.x protocols every element of your page that comes from your server is a connection. The page text, any images, scripts, and CSS files are all separate requests. Luckily most of this comes out of the server quickly, so a page with 20 web objects on it will use up 20 of your 64 connections but will spit them out in less than 2 seconds, leaving those connections ready for the next site visitor while the first guy (or gal) reads your content. With newer HTTP/2 (and SPDY) connections a single process (worker) may handle multiple content requests from the same user, so you may well end up using 1 or 2 connections even on a page with multiple objects loading. While that is an over-simplification, the general premise shows why you should update your site to HTTPS and get on services that support HTTP/2.

Calculating A Value

# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80

There you go, easy, right? Figuring out RAM resources can be complicated, but to simplify the process start with the built-in Linux free command; I also suggest installing htop, which provides a simpler interface for seeing what is running on your server. You will want to do this on your live server under normal load if possible.

Using free -m from the Linux command line will tell you the general high-level overview of your server’s memory status. You want to know how much is installed and how much is in use. In my case I have 7400MB of RAM and 2300MB was in use.

Next you want to figure out how much is in use by Apache and how much an average web connection is using per request. Use htop, filter to show only the httpd processes, and do math. My server was using 1900MB for the httpd processes. The average RAM per process was 87MB.

You can now figure out how much RAM is used by “non-web stuff” on your server. Of the 2300MB of used RAM, Apache was using up 1900MB. That means my server uses about 400MB for general system overhead and various background processes like my system-level backup service. That means on a “clean start” my server should show about 7000MB available for web work. I can verify that by stopping Apache and running free -m after the system “rests” for a few minutes to clear caches and other stuff.

Since I will have 7000MB available for web stuff I can determine that my current WordPress configuration, PHP setup, and other variables will come out to about 87MB being used for each web session. That means I can fit about 80 web processes into memory at one time before all hell breaks loose.

Since I don’t like to exhaust memory and I’m a big fan of the 80/20 rule, I set my maximum web processes to 64: 7000MB / 87MB = 80, and 80 * .8 = 64.

That is where you want to set your ServerLimit and MaxRequestWorkers.
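The whole calculation can be sketched as a few lines of shell arithmetic. The three inputs are the measurements from this article; substitute your own:

```shell
# Back-of-the-envelope prefork sizing from the numbers above.
TOTAL_RAM=7400      # free -m, "total" column, in MB
BASE_LOAD=400       # RAM used by everything except Apache, in MB
AVG_APACHE=87       # average httpd RES from htop, in MB

AVAILABLE=$(( TOTAL_RAM - BASE_LOAD ))   # MB left for Apache: 7000
LIMIT=$(( AVAILABLE / AVG_APACHE ))      # processes that fit: 80
SAFE=$(( LIMIT * 8 / 10 ))               # 80/20 safety margin: 64
echo "ServerLimit        $SAFE"
echo "MaxRequestWorkers  $SAFE"
```

Integer division already rounds down for you, which is the conservative direction you want here.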

MaxConnectionsPerChild

This determines how long those workers are going to “live” before they die off. Any worker will accept a request to send something out to your site visitor. When it is done it doesn’t go away. Instead it tells Apache “hey, I’m ready for more work”. However, every so often one of the things that is requested breaks. A bad PHP script may be leaking memory, for example. As a safety valve Apache provides the MaxConnectionsPerChild directive. It tells Apache to kill the child process after it has served this many requests; Apache will start a new process to replace it. This ensures any memory “cruft” that builds up is cleared out should something go wrong.

Set this number too low and your server spends valuable time killing and creating Apache processes. You don’t want that. Set it too high and you run the risk of “memory cruft” building up and eventually having Apache kill your server with out-of-memory issues. Most system admins try to set this to a value that has each process reset about once every 24 hours. This is hard to calculate unless you know your average objects requested every day, how many processes served those objects, and other factors; HTTP versus HTTP/2 can come into play, not to mention fluctuations like weekend versus weekday load. Most system admins target 1000 requests. For my server load I am guessing 2400 requests is a good value, especially since I’ve left some extra room for memory “cruft”.
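If you do want to aim for the once-a-day recycle, the math is just daily requests divided by typical child count. Both inputs below are made-up examples, not measurements from this server; pull yours from the access logs:

```shell
# Hypothetical MaxConnectionsPerChild sizing: recycle each child
# roughly once a day.
REQUESTS_PER_DAY=150000   # objects served per day (from access logs)
AVG_CHILDREN=64           # httpd children typically running

PER_CHILD=$(( REQUESTS_PER_DAY / AVG_CHILDREN ))
echo "MaxConnectionsPerChild $PER_CHILD"
```

Round the result to a comfortable number; a figure like 2343 would round naturally to 2400.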


HTTPS On Amazon Linux With LetsEncrypt


In order to provide faster and more secure connections to the Store Locator Web service we have added https support through Sucuri.   Adding https will allow us to take advantage of SPDY and HTTP2 which are the latest improvements to web connection technology.   There are many reasons to get your servers onto full https support.   As we learned it isn’t a one-click operation, but without too much additional effort you can get your servers running on Amazon Linux with a secured connection.   Here are the cheat sheet notes based on our experience.

EC2 Server Rules

With EC2 you will want to make sure you set your security group rules to allow incoming connections on port 443. By default no inbound ports are open; you already added port 80 for web support. Make sure you go back and add port 443 as an open inbound rule.
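If you manage security groups from the command line instead of the AWS console, the same rule can be added with the AWS CLI. The group ID below is a placeholder; substitute your own:

```
# aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
#     --protocol tcp --port 443 --cidr 0.0.0.0/0
```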

Apache SSL Support

Next you need to configure the Apache web server to handle SSL connections.   The easiest way to get started is to install the mod_ssl library which will create the necessary ssl.conf file in /etc/httpd/conf.d/ssl.conf and turn on the port 443 listener.


# sudo service httpd stop
# sudo yum update -y
# sudo yum install -y mod24_ssl

Get Your Let’s Encrypt Certificate

This is more of a challenge if you don’t know where to start. Part of the issue is Amazon Linux runs Python 2.6 and Let’s Encrypt likes Python 2.7. Luckily there has been progress on getting this working so you can cheat a bit.

# git clone https://github.com/letsencrypt/letsencrypt
# cd letsencrypt
# git checkout amazonlinux
# sudo ./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory certonly -d yourdomain.name -d www.yourdomain.name -v --debug

You may get some warnings and other messages but eventually you will get an ANSI-mode dialogue screen (welcome to 1985) that walks you through accepting the terms and requesting the certificate. Answer the questions and accept your way to a new cert.

Your certs will be placed in /etc/letsencrypt/live/ , remember this path as you will need it later.

Update SSL.conf

Go to the /etc/httpd/conf.d directory and edit the ssl.conf file.

Look for these 3 directives and change them to point to the cert.pem, privkey.pem, and chain.pem files.

SSLCertificateFile
SSLCertificateKeyFile
SSLCertificateChainFile
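For example, assuming your certs landed in /etc/letsencrypt/live/yourdomain.name (substitute your own domain from the step above), the three directives would end up looking like this:

```
SSLCertificateFile /etc/letsencrypt/live/yourdomain.name/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.name/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.name/chain.pem
```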

Restart Apache & Get Secure

Now restart Apache and check by surfing to the https:// version of your site.

# service httpd start

You may need to update various settings on your web apps, especially if you use .htaccess to rewrite URLs with http or https.
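If you want to push all visitors over to the secure site, a typical .htaccess rewrite (placed before your other rules) looks like this; it redirects any plain-http request to the same path on https:

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```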


Fixing VVV svn cleanup Invalid cross-device link messages

Ran into a unique situation while updating my VVV box after a weekend of WordPress Core and plugin development at WordCamp US. Today, after the formal release of WordPress 4.4, I needed to update the code in my WordPress trunk directory on the VVV box. Since I have other things in progress I didn’t want to take the time to reprovision the entire box. Though, as it turns out, that would have been faster.

The issue appeared when I tried to run the svn up command to update the /srv/www/wordpress-trunk directory and make sure I was on the latest code. The command failed, insisting that a previous operation was incomplete. Not surprising, since the connectivity at the conference was less-than-consistent. svn kindly suggested I run svn cleanup. Which I did. And was promptly met with an “Invalid cross-device link” error when it tried to restore hello.php to the plugin directory.

The problem is that I develop plugins for a living.   As such I have followed the typical VVV setup and have linked my local plugin source code directory to the vvv plugin directory for each of the different source directories on that box.    I created the suggested Customfile on my host system and mapped the different directory paths.     On the guest box, however, the system sees this mapping as a separate drive.  Which it is.  And, quite honestly I’m glad they have some security in place to protect this.  Otherwise a rogue app brought in via the Vagrant guest could start writing stuff to your host drive.   I can think of more than one way to do really bad things if that was left wide-open as a two-way read-write channel.

VVV Customfile Cross Device Maker

The solution?

Comment out the mapping in Customfile on the host server.  Go to your vvv directory and find that Customfile.  Throw a hashtag (or pound sign for us old guys) in front of the directory paths you are trying to update with svn.  In my case wordpress-trunk.

Run the vagrant reload command so you don’t pull down and provision a whole new box, but DO break the linkage to the host directory and guest directory.

Go run your svn cleanup and update on the guest to fetch the latest WP code.

Go back to the host, kill the hashtag, and reload.
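The whole round trip, assuming wordpress-trunk is the mapped directory (yours may differ), looks something like this:

```
# vim Customfile       <- comment out the wordpress-trunk mapping
# vagrant reload
# vagrant ssh
$ cd /srv/www/wordpress-trunk
$ svn cleanup
$ svn up
$ exit
# vim Customfile       <- remove the hashtag to restore the mapping
# vagrant reload
```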

 

Hope that saves you an extra 20 minutes surfing Google, or your favorite search service, for the answer.

 


Boosting WordPress Site Performance : Upgrade PHP

As with every single WordCamp I’ve attended there is something new to be learned no matter how much of a veteran you are.   My 5th WordCamp at WordCamp US 2015 was no different.    There are a lot of things I will be adding to my system admin and my development tool belt after the past 48 hours in Philadelphia.

Today’s update that was just employed on the Store Locator Plus website:   Upgrading PHP.

Turns out that many web hosting packages and server images, including the Amazon Linux Image, run VERY OLD versions of PHP.    I knew that.   What I didn’t know was the PERFORMANCE GAINS of upgrading even a minor version of PHP.    PHP 5.6 is about 25% faster than PHP 5.3.    PHP 5.3 was the version I was running on this site until midnight.

WP Performance On PHP
WP Performance on PHP. Source: http://talks.php.net/fluent15#/wpbench

The upgrade process?  A few dozen command-line commands, testing the site, and restoring the name server configurations from the Apache config file which the upgrade process auto-saved for me.  EASY.
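For the record, on Amazon Linux the core of the upgrade is swapping the PHP packages. This is a sketch, not the exact commands I ran; your package list will differ depending on which PHP modules your site needs:

```
# sudo service httpd stop
# sudo yum remove php php-*
# sudo yum install php56 php56-mysqlnd php56-gd php56-mbstring
# sudo service httpd start
# php -v
```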

What about PHP 7?   That is 2-3x faster.  Not 2%.  100 – 200%.   WOW!    As soon as Amazon releases the install packages for their RHEL derivative OS it will be time to upgrade.

 

If you are not sure which version your web server is running (it can be different from the command-line version on your server) you can find that info in the Store Locator Plus info tab.

SLP PHP Info

The take-away?   If you are not running PHP 5.6, the latest release of PHP prior to PHP 7, get on it.  One of the main components of your WordPress stack will run a lot faster and pick up more bug fixes, security patches, and more.


Critical Persistent XSS 0day in WordPress | Sucuri Blog

If you have comments enabled on your WordPress site you may want to disable them until a patch is issued.    Hackers can overload the comments and inject JavaScript-based code into your comment stream.  While this will not likely allow access into your WordPress site, the hackers can use this method to make your website the distribution point for JavaScript code that attacks your site visitors’ devices.  The most vulnerable users will be those visiting your site using desktops or laptops.

Read about the security issue at the Sucuri blog.

Who’s affected If your WordPress site allows users to post comments via the WordPress commenting system, you’re at risk. An attacker could leverage a bug in the way comments are stored in the site’s database to insert malicious scripts on your site, thus potentially allowing them to infect your visitors with malware, inject SEO spam or even insert a backdoor in the site’s code if the code runs in a logged-in administrator browser. You should definitely disable comments on your site until a patch is made available.

Source: Critical Persistent XSS 0day in WordPress | Sucuri Blog


OSX Mapping Control F To Find

After using Windows for more than 20 years I have found the switch to OS/X to have been a move that should have happened years ago. I cannot count the thousands of hours of lost productivity. To be fair, it is likely a few hundred hours as OS/X was not a true alternative to the power of Windows until the latest OS/X iterations over the past 5 years. After spending more than EIGHT HOURS trying to get Minecraft working on a 3-year-old PC (windows updates, driver updates, incompatible graphics drivers for Java… the usual Windows debacle) I decided I will rarely-if-ever recommend ANYONE ever buy a Windows PC from this point forward.

However the move to OS/X has not been completely pain free. I am heavily trained on Windows and Linux keyboard shortcuts. One of the BIGGEST FAILINGS of OS/X was their decision to create a proprietary keyboard system for OS/X. They introduced things like the command key and the “apple key” along the way when the rest of the OS world standardized on keyboard mappings using the more-than-adequate 100+ keys including control, alt, shift, and the thousands of combinations therein.

After 6 months of using OS/X I still find myself pressing things like control-SOMETHING to perform an action. Control (^) C for copy. ^f for find. ^x for cut. ^v for paste. To perpetuate the keyboard training, I use a Linux virtual machine in GUI mode for my daily WordPress development. Linux adopted the de facto standards of the industry in the early 80s and uses the same key presses defined not by Microsoft, but by IBM.

Today I find the control-key training to slow me down significantly when using native OS/X apps. I found a “fix” for mapping the edit operations fairly quickly. You can do so without using Karabiner (which causes some odd side effects). You can edit the DefaultKeyBinding.dict file in your Users directory on OS/X. Create or edit the file:
~/Library/KeyBindings/DefaultKeyBinding.dict

You can assign keys like this:

{
/* Remap Home / End to PC Edition */
"\UF729"  = "moveToBeginningOfLine:";                   /* Home         */
"\UF72B"  = "moveToEndOfLine:";                         /* End          */
"$\UF729" = "moveToBeginningOfLineAndModifySelection:"; /* Shift + Home */
"$\UF72B" = "moveToEndOfLineAndModifySelection:";       /* Shift + End  */

    "^x" = cut:;
    "^c" = copy:;
    "^v" = paste:;
}

That remaps the oft-used ^c, ^x, and ^v commands to their OS/X Command (@) equivalents @-c, @-x, and @-v respectively.

But mapping ^f is a whole other kettle of fish. Mapping the action for find is NOT quite as simple. I tried mapping ^f to @f, which executes a “find action” in most OS/X apps. I tried using the action code “find:”, to no avail.

However there is an easy way to map ^f to find in most apps. MOST apps. Not OS/X Firefox, which appears to not like changing the default keyboard mapping utility. More on that later.

For most apps, let’s take Google Chrome as an example, you can change the ^f to be the find key by using OS/X System Preferences.

OSX Keyboard Shortcuts interface.
  • Go to System Preferences
  • Select Keyboard
  • Select Shortcuts
  • Select App Shortcuts
  • Then click the + to add a new app shortcut.
  • Pick your app, Google Chrome in this case.
  • Now you need to find the EXACT menu text for the find command.  On Google Chrome it is “Find…”.  The … is important.
  • Press the Control-F key combination to set that as the new find command.
  • Exit the app, i.e. Google Chrome, if you had it running (which you did because you had to look at  the Find menu entry text).
  • Restart the app.
  • Your menu should now show the Find shortcut code is ^f
Chrome ^f as Find
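If you prefer the command line over clicking through System Preferences, the same per-app shortcut can be written with defaults. The menu title string must match the menu exactly, ellipsis included, and you will need to restart the app afterward:

```
# defaults write com.google.Chrome NSUserKeyEquivalents -dict-add "Find…" "^f"
```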

What about Firefox?   It turns out the mapping shows up but does not work.   The menu shows ^f is the new find key, but it does not activate.  I had to install the Customizable Shortcuts add-on for Firefox and set the find key to be ^f instead of modifier-F (the system-set default).     For some reason Firefox has hard-bound to the modifier key internally instead of using standard OS/X keyboard mapping system calls.   The menu rendering is apparently a separate piece of code.

 


AWS gMail Relay Setup


After moving to a new AWS server I discovered that my mail configuration files were not part of my backup service on the old server. In addition my new server is using sendmail instead of postfix for mail services. That meant re-learning and re-discovering how to set up mail relay through gmail.

Why Relay?

Cloud servers tend to be blacklisted. Sure enough, my IP address on the new server is on the Spamhaus PBL list. Amazon allows for elastic IP addresses, quasi-permanent IP addresses that act like static IPs and can be whitelisted on the Spamhaus PBL, but that is not the best option. Servers change, especially in the cloud. I find the best option is to route email through a trusted email service. I use Google Business Apps email accounts and have one set up just for this purpose. Now to configure sendmail to re-route all outbound mail from my server to my gmail account.

Configuring Amazon Linux

Here are my cheat-sheet notes about getting an Amazon Linux (RHEL flavor of Linux) box to use the default sendmail to push content through gmail.

Install packages needed.

# sudo su -
# yum install cyrus-sasl ca-certificates sendmail make

Create your certificates

This is needed for the TLS authentication.

# cd /etc/pki/tls/certs
# make sendmail.pem
# cd /etc/mail
# mkdir certs
# chmod 700 certs
# cd certs
# cp /etc/pki/tls/certs/ca-bundle.crt /etc/mail/certs/ca-bundle.crt
# cp /etc/pki/tls/certs/sendmail.pem /etc/mail/certs/sendmail.pem

Setup your authinfo file

The AuthInfo entries start with the relay server host name and port.

U = the AWS server user that will be the source of the email.

I = your gmail user name, if using business apps it is likely @yourdomain.com not @gmail.com

P = your gmail email password

M = the method of authentication, PLAIN will suffice

# cd /etc/mail
# vim gmail-auth

AuthInfo:smtp-relay.gmail.com "U:ec2-user" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com "U:apache" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com:587 "U:ec2-user" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com:587 "U:apache" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"

# chmod 600 gmail-auth
# makemap -r hash gmail-auth < gmail-auth

Configure Sendmail

Edit the sendmail.mc file and run make to turn it into a sendmail.cf configuration file.  Look for each of the entries noted in the sendmail.mc comments.  Uncomment the entries and/or change them as noted.    A couple of new lines will need to be added to the sendmail.mc file.   I add the new lines just before the MAILER(smtp)dnl line at the end of the file.

Most of these exist throughout the file and are commented out.   I uncommented the lines and modified them as needed so they appear near the comment blocks that explain what is going on:

# vim /etc/mail/sendmail.mc
define(`SMART_HOST', `smtp-relay.gmail.com')dnl
define(`confAUTH_OPTIONS', `A p')dnl
TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confCACERT_PATH', `/etc/mail/certs')dnl
define(`confCACERT', `/etc/mail/certs/ca-bundle.crt')dnl
define(`confSERVER_CERT', `/etc/mail/certs/sendmail.pem')dnl
define(`confSERVER_KEY', `/etc/mail/certs/sendmail.pem')dnl

Add these lines to the end of sendmail.mc just above the first MAILER()dnl entries:

define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
FEATURE(`authinfo',`hash -o /etc/mail/gmail-auth.db')dnl

If you are using business apps you may need these settings to make the email come from your domain and to pass authentication based on your Gmail relay settings.    These are also in sendmail.mc:

MASQUERADE_AS(`charlestonsw.com')dnl
FEATURE(masquerade_envelope)dnl
FEATURE(masquerade_entire_domain)dnl
MASQUERADE_DOMAIN(localhost)dnl
MASQUERADE_DOMAIN(localhost.localdomain)dnl
MASQUERADE_DOMAIN(charlestonsw.com)dnl

Build the sendmail.cf configuration from the sendmail.mc helper and restart sendmail:

# make
# service sendmail restart
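With sendmail restarted, push a quick test message through in verbose mode and watch the output for the smtp-relay.gmail.com handshake. Substitute your own destination address:

```
# echo "Relay test" | /usr/sbin/sendmail -v you@yourdomain.com
```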

Configure Gmail Services

This is for business apps users: you need to turn on relaying.

  • Go to “manage this domain” for your business apps account.
  • Go to “Google Apps” and click on “Gmail”.
  • Click “advanced settings”.
  • Find the “SMTP relay service” entry and add a new entry.
  • Select “Only addresses in my domain”, require SMTP, and require TLS.
  • Give it a name and save (you will need to save twice).


Configuring Apache Connections


In preparation for WordCamp Charleston I updated my server to add more RAM.  The upgrade was the perfect opportunity to check my Apache connections configuration.  Here is the background on my calculations and how I configure my Apache 2.2 server on CentOS for both a 7GB and a 14GB dedicated server.

Apache 2.2 Configuration Directives

from: http://httpd.apache.org/docs/2.2/mod/prefork.html

The StartServers, MinSpareServers, MaxSpareServers, and MaxClients regulate how the parent process creates children to serve requests. In general, Apache is very self-regulating, so most sites do not need to adjust these directives from their default values. Sites which need to serve more than 256 simultaneous requests may need to increase MaxClients, while sites with limited memory may need to decrease MaxClients to keep the server from thrashing (swapping memory to disk and back). More information about tuning process creation is provided in the performance hints documentation.

While the parent process is usually started as root under Unix in order to bind to port 80, the child processes are launched by Apache as a less-privileged user. The User and Group directives are used to set the privileges of the Apache child processes. The child processes must be able to read all the content that will be served, but should have as few privileges beyond that as possible.

MaxRequestsPerChild controls how frequently the server recycles processes by killing old ones and launching new ones.

Apache 2.2. Prefork Settings Summary

StartServers – how many connectors to start with (waiting for a HTTP request)

MinSpareServers – how many IDLE connectors to keep online ALWAYS

MaxSpareServers – how many IDLE connectors to keep online at one time

MaxClients – how many active connectors to spin up, total

ServerLimit – absolute maximum for MaxClients with runtime config tools

MaxRequestsPerChild – how many times to allow a connector to serve requests before dying to free resources

Determining Your Server RAM

 

  • TOTAL RAM – total RAM available to the OS.
    1. free -m
    2. TOTAL_RAM = “total” (first column): 14,254 MB
  • USED RAM – total RAM in use for all applications.
    1. free -m
    2. USED_RAM = all running processes (second column): 1,717 MB
  • APACHE RAM TOTAL / AVG – the RAM used by Apache. Since Apache uses more RAM “under load” it is good to get this average both after startup and during/shortly-after an hour with peak user connections.
    1. ps aux | grep 'httpd' | awk '{print $6}'  // gets RSS memory
  • APACHE TOTAL RAM
    1. sum of the RSS column: 1,715 MB
  • APACHE AVG RAM
    1. average of the RSS column: 8GMB

Calculating MaxClients

The maximum number of simultaneous requests that can be served.

Max Clients = floor((TOTAL_RAM –  USED_RAM + APACHE TOTAL RAM) / AVG_RAM_LOAD) – 1

Let’s break that down:

  • floor( … blah … ) – 1
    the “Be Conservative” section. This rounds DOWN always and takes away 1 connection for a safety buffer. This is a semi-conservative approach to avoid maxing out the memory resources and causing connection issues.
  • (TOTAL_RAM – USED_RAM + APACHE_TOTAL_RAM)
    the “RAM For Apache” section. The first part inside the floor function call determines how much RAM is available to Apache. It takes the total RAM available to the system, takes away the amount of total RAM in use and adds back any RAM already in use by Apache.
  • (… blah …) / AVG_RAM_LOAD
    the “Precise Possible Connections” section. This calculates the exact number of possible connections that can fit into the RAM available for Apache based on your average per-connection RAM load.

You may need to adjust the USED_RAM to accommodate more memory use of things like MySQL when the system is under load.  I find it best to run these calculations on a system that is running under load, adjust the Apache configuration and re-run numbers during peak load after each adjustment.

Operating system updates, web application updates including WordPress core, and other factors will change this number over time.  Re-run this calculation and update your configuration on a regular basis.
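The Apache total and average numbers can be pulled with a small helper built around the ps aux line above. It reads ps output on stdin and reports RSS (column 6, reported in KB) for httpd processes, converted to MB:

```shell
# Helper: read "ps aux" output on stdin, report total and average
# resident memory (RSS, column 6, in KB) for httpd processes in MB.
apache_ram() {
    grep '[h]ttpd' | awk '{ sum += $6; n++ }
        END { if (n) printf "total=%dMB avg=%dMB\n", sum / 1024, sum / (n * 1024) }'
}

# Typical use on a live box:
#   ps aux | apache_ram
```

The `[h]ttpd` pattern is the usual trick to keep the grep process itself out of the results.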

My Apache Server Calculations

My 7GB Server Calculation

TOTAL RAM: 6853MB

USED RAM: 894MB

APACHE TOTAL RAM: 572MB

APACHE AVG RAM: 72MB

Max Clients = floor( (6853 – 894 + 572) /  72) – 1 = floor(6531 / 72) – 1 = 89

<IfModule prefork.c>
StartServers       20
MinSpareServers    15
MaxSpareServers   30
ServerLimit      61
MaxClients       60
MaxRequestsPerChild  300
</IfModule>

My 14GB Server Configuration

TOTAL RAM: 13,920MB

USED RAM: 3,495MB

APACHE TOTAL RAM: 3,044MB

APACHE AVG RAM: 105MB

Max Clients = floor ( (13920 – 3495 + 3044) / 105 ) -1 = 127

<IfModule prefork.c>
StartServers       20
MinSpareServers    15
MaxSpareServers   30
ServerLimit      128
MaxClients       127
MaxRequestsPerChild  300
</IfModule>

 


Creating A CentOS GUI Vagrant Base Box


While playing with PuPHPet and Vagrant I realized my needs are specific enough to warrant building my own Vagrant Base Box.    My process is outlined below.

Setup VirtualBox Hardware

Start VirtualBox and build a new guest “hardware” profile:

  • Base Memory: 2048MB
  • Processors: 2
  • Boot Order: CD/DVD , Hard Disk
  • Acceleration: VT-x/AMD-V , Nested Paging , PAE/NX
  • Display: 32MB Video Memory , 3D Acceleration
  • Network: Intel PRO/1000 MT Desktop (NAT)
  • Drive: SATA with 20GB pre-allocated fixed disk
  • CD/DVD : IDE Secondary Master Empty
  • No USB, Audio, or Shared Folders
VirtualBox CentOS 6.5 GUI Base Box

Base Box “Unbest” Practice

These base settings do not fall within the Vagrant Base Box best practices, however I need something a bit different than the typical Vagrant box configuration, which is why I am building my own.   I build my boxes with a full GUI which enables me to spin up the virtual environment, log in to the GUI, and have my entire development environment in a self-contained portable setting.    There are “lightweight” ways to accomplish this, but I do have my reasons for building out my WordPress development environment this way, which have been outlined in previous posts.

Adding the Operating System

Now that I have the base box setup it is time to layer on the CentOS 6.5 operating system.   I set up my box for the English language with a time zone of New York (United States EST, UTC-5), no kernel dump abilities, and the full drive allocated to the operating system.     It is built as a “Desktop” server, which gives me the full GUI login and makes it easier to set up my GUI dev environment further on down the road.  It does add some GUI apps I don’t need very often, but it is nice to have things like a simple GUI text editor and GUI system management tools for the rare cases when I want them and am too lazy to jump out to my host box to do the work.

Per Vagrant standards the box profile is set up with the root password of “vagrant” and with a base user for daily use with a username and password also set to “vagrant”.

After a couple of reboots the system is ready for a GUI login, but not quite ready for full production.

CentOS 6.5 Login Screen

Adding VirtualBox Guest Additions

One of the first things to do with a VirtualBox install running a GUI is to get VirtualBox Guest Additions installed.  It helps the guest communicate with the host in a more efficient manner which greatly improves the display and the mouse tracking.  Without it the mouse lag in the guest is horrid and is likely responsible for at least 300 of the 3,000 missing hair follicles on my big bald head.

While this SHOULD be a simple operation, the CentOS desktop installation makes it a multi-step process.   Selecting “Insert Guest Additions CD” from the VirtualBox server menu after starting up the new box will mount the disk.   It will prompt to autorun the disk and then ask for the root user credentials.    The shell script starts running through the Guest Additions setup but it always fails while building the main Guest Additions module.     The reason is that kernel build kits are needed and they are not installed by default.    I will outline the typical user process here as a point of reference, though most often the first commands I run to fix the issue are those listed at the end of this section.  I’ve done this enough times to know what happens and don’t usually execute the autorun until AFTER I set up the kernel build kit.  You may want to do the same.

Here is what the output looks like after a default CentOS desktop install followed by an autorun of the Guest Additions CD:

Guest Additions Fail on CentOS
This is what happens when you don’t have Kernel build tools setup and try to run Guest Additions on VirtualBox.

[box type=”info” style=”rounded”]Mouse tracking driving you crazy? Toggle to a command line screen on any Linux box with CTRL-ALT-F2. Toggle back to the GUI with CTRL-ALT-F1.[/box]

With the mouse tracking driving me nuts I toggle over to the text console with ctrl-alt-F2 and log in as root there.   You can learn what broke the Guest Additions install by going to the log files:

more /var/log/vboxadd-install.log

The typical CentOS desktop build fails the Guest Additions install with this log:

/tmp/vbox.0/Makefile.include.header:97: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop.
Creating user for the Guest Additions.
Creating udev rule for the Guest Additions kernel module.

With Guest Additions disabled and the VirtualBox not fully configured it is time to do some basic maintenance and get the kernel build environment in place for Guest Additions.  Since I am logged in as root via the console I can start by getting yum updated; however, the network connection is not up by default after a desktop install.    The steps for getting the kernel dev tools in place:

Turn on the network interface eth0 (zero, not oh) by running:

ifup eth0

Make sure all of the installed software is updated to the latest revision:

yum update

Install the Linux kernel development files which are needed for the Guest Additions installation:

yum install kernel-devel

Install the development tool kit, including compilers and other items needed for Guest Additions to hook into the kernel:

yum groupinstall "Development Tools"

Once you have the updates installed reboot the system with a shutdown -r now command while logged in as root.

The Guest Additions CD can now be mounted and autorun without error.

After running Guest Additions, reboot the server.
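For reference, the whole preparation sequence can be pasted in one go from the root console; this is just the steps above consolidated, with -y added to skip yum’s confirmation prompts.

```shell
# Prepare a fresh CentOS 6.x desktop install for Guest Additions (run as root).
ifup eth0                                # bring up the network interface
yum -y update                            # update installed packages
yum -y install kernel-devel              # kernel headers for the module build
yum -y groupinstall "Development Tools"  # compilers and build tools
shutdown -r now                          # reboot, then run the Guest Additions CD
```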

Turn On The Network At Boot

Now that the GUI is running and the mouse is tracking I can log in as the vagrant user and turn on the network connections.   Log in, then go to System / Preferences / Network Connections on the main menu.    Check “Connect Automatically” on the System eth0 connection.

Now the network will be enabled on boot.   That’s useful.

CentOS 6.5 Turn On Network At Boot
CentOS 6.5 turning on the network at boot.

Provide SSH Insecure Keypair To Vagrant

Best practice for Vagrant base boxes is to add the insecure keypair to the vagrant user.   While logged in as vagrant go to Applications / System Tools / Terminal to get to the command line.   Go to the .ssh subdirectory and create the authorized_keys file by copying the public key from the Vagrant keypair repository into the authorized_keys file.

I use vim and copy the keypair content and paste it into the file.  You can use cat or other tools as well to get the content into the file.  Make sure not to introduce new whitespace in the middle of the key or it will not work.

Change the permissions of the authorized_keys file using chmod; SSH is strict about the permission settings on this file:

chmod 0600 authorized_keys 
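Put together, the key setup looks like the sketch below, run as the vagrant user; the actual key text is pasted in from the Vagrant insecure-keypair repository (not reproduced here).

```shell
# Create the .ssh directory and authorized_keys file with strict permissions.
mkdir -p "$HOME/.ssh"
chmod 0700 "$HOME/.ssh"
# Paste the Vagrant insecure public key into authorized_keys (one line, no breaks).
touch "$HOME/.ssh/authorized_keys"
chmod 0600 "$HOME/.ssh/authorized_keys"
```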

Give Unrestricted Super Powers To Vagrant

Most users expect the vagrant login to have unrestricted access to all system commands. This is handled via the sudo application. CentOS restricts access by default and requires some updates to get it working per Vagrant best practices. Log back in to the command line console as root and edit the sudoers file.

visudo

This brings up the vim editor with the sudoers config file. Find the requiretty line and comment it out by adding a # before it. Then add the following line to the bottom of the file:

vagrant ALL=(ALL) NOPASSWD: ALL

Log out of the vagrant and root sessions and log back in as vagrant from the GUI. You should be able to open a terminal and run any sudo commands without a password prompt. You should also be able to run sudo commands “remotely” via the ssh connection to the system.

Make SSH Faster When DNS Is Not Available

If the host and/or virtual box cannot connect to the Internet the SSH access into the Vagrant virtual box will be slow.   Editing the sshd_config file and turning off DNS lookups will fix that.   Now that you have “vagrant super powers” you can do this by logging in as the vagrant user and opening the terminal:

sudo vim /etc/ssh/sshd_config

Add this line to the bottom of the file, then restart sshd (service sshd restart) so the change takes effect:

UseDNS no

Host To Guest SSH Access

Connecting from the host system to the guest system WITHOUT using the graphical login or console takes a couple of extra steps. To test the SSH connection I go back to my favorite SSH tool, PuTTY.     Before testing the connection the port forwarding needs to be set up in VirtualBox Manager.

  • Go to the new system listed on the VirtualBox Manager.
  • Right-click and select Settings.
  • Select Network.
  • Click the Port Forwarding button.
  • Add the following rule:
    • Name: SSH Local To Guest
    • Protocol: TCP
    • Host IP: 127.0.0.1
    • Host Port: 4567
    • Guest IP: leave this blank
    • Guest Port: 22

Save the settings.   Open PuTTY, connect to hostname 127.0.0.1, and change the port to 4567.   You should get a login prompt.   Log in with user vagrant.

VirtualBox SSH Port Forwarding
VirtualBox SSH port forwarding for Vagrant.
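If you prefer the command line over the GUI, the same forwarding rule can be added with VBoxManage; a sketch, assuming the box name used later in this article:

```shell
# Add the NAT port-forwarding rule from the host command line.
# Rule format: name,protocol,host IP,host port,guest IP (blank = any),guest port
VBoxManage modifyvm "CentOS6.5 GUI Base Box" \
  --natpf1 "SSH Local To Guest,tcp,127.0.0.1,4567,,22"
```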

The issue with logging in with the vagrant private key file is that PuTTY only supports the proprietary PuTTY Private Key format.    You can download puttygen to convert the Vagrant private key file to the PuTTY Private Key file format (click to download the converted OpenSSH key in PPK format).

To use SSH keys in PuTTY, start a new session, enter 127.0.0.1 as the host and 4567 as the port, then set the PuTTY Private Key:

  • Click on “Connection / SSH” in the left side menu to expand that selection.
  • Click on “Auth”.
  • Under Authentication parameters browse to your saved PPK file in the “Private key file for authentication” box.
Setting PuTTY Vagrant PPK
Setting PuTTY Vagrant PPK files.

Now you can connect with PuTTY and login by simply supplying a username.   This tells us that the remote vagrant command line should be able to execute all of the scripted setup commands without any issues.

Building A Box

Now that the basic system is in place it is time to “build the box”.   Vagrant has a command for doing this and if you’ve read my previous articles on setting up Vagrant you will know that I have a Windows command line shortcut that runs in my WP Development Kit folder.   With Vagrant already installed building a box is a one-line command.   I only need my machine name, which I’ve shortened to “CentOS6.5 GUI Base Box”.  Start up the Windows command line and run this:

vagrant package --base "CentOS6.5 GUI Base Box"

It will run for a while and eventually create a packaged Vagrant box ready for distribution.    By default the file will be named package.box.    I’ve renamed mine to centos6_5-gui-base.box for distribution purposes.   You can find it on my Vagrant Cloud account.

You can learn more about the box-building process via the Vagrant Creating A Base Box page.

Launching The Box

To launch the new box hosted on Vagrant Cloud I go to my local folder and execute these commands:

Download the image (stored on my Google Drive account) using Vagrant Cloud as a proxy:

vagrant box add charlestonsw/centos6.5-gui-base-box 

Create the vagrantfile that assists in the box startup command sequence:

vagrant init charlestonsw/centos6.5-gui-base-box

Start the box on VirtualBox:

vagrant up

By default, Vagrant starts boxes in headless mode, meaning no active console.   I want the GUI login so I shut down the box and find the vagrantfile to add the GUI startup line.    The command is already in the file and only needs a few lines to be uncommented to allow a GUI startup with a console.    Edit the vagrantfile and look for these lines:

config.vm.provider "virtualbox" do |v|
v.gui = true
end

There are a few other comments in the default vagrantfile; you can leave the limits tweaks commented.  You will end up with a vagrantfile section that looks like this:


# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |vb|
  # Don't boot with headless mode
  vb.gui = true

  # # Use VBoxManage to customize the VM. For example to change memory:
  # vb.customize ["modifyvm", :id, "--memory", "1024"]
end

Save the file and restart the box with the vagrant up command.

That’s it… a new Vagrant box.   Now on to the system tweaks to get my WP Dev Kit setup.

Posted on

Automated Virtual Box Creation V1.0 Notes

PuPHPet Banner

If you read my previous article,  WordPress Workflow : Automated Virtual Box Creation , you have an idea of what I am trying to accomplish with improving my WordPress development work flow.    The short version: I want to be able to create a fresh install of a virtual machine that has my entire development system intact with minimal input on my part.    The idea is to run a few commands, wait for the installs and updates, and be coding on a “clean” machine shortly after.    Once I get my own work flow updated I will also be able to share my scripts and tools via a git repository with the remote developers that are now working on Store Locator Plus add-on packs and hopefully simplify their development efforts, or at least get all of us on a similar baseline of tools to improve efficiency in our efforts.

Here are my notes from the first virtual development box efforts via PuPHPet, Vagrant, and Puppet.    This build was done with recent “off-the-shelf” versions of each of these tools and using a base configuration with a handful of options from the PuPHPet site.

Headless Configuration

The VirtualBox machine appears to be created as a “headless” box, meaning no monitor or other display device is active.   I will need to tweak that as I work “on the box” with GUI development tools.    I know that I can install all of my development tools on my host system and read/write from a shared directory to get all of my work onto the virtual machine, but that is not my methodology.    Having worked with a team of developers I know all too well that eventually the host hardware will die.   A laptop will need to be sent off for repair.   Guess what happens?   You lose half-a-day, or more, setting up a new host with a whole new install of development tools.

The better solution, for my work flow, is to keep as much of the development environment “self contained” within the virtual box as possible.   This way when I backup my virtual disk image I get EVERYTHING I need in an all-in-one restore point.   I can also replicate and share my EXACT environment to any location in the world and be fully  “up and running” in the time it takes to pull down a 20GB install file.  In today’s world of super-fast Internet that is less of an issue than individually pulling down and installing a half-dozen working tools and hoping they are all configured properly.

What does this all mean?    I need to figure out how to get the PuPHPet base configuration tweaked so I can start up right from the VirtualBox console with a full Linux console available.  I’ll likely need to update Puppet as well to make sure it pulls down the Desktop package on CentOS.

I wonder if I can submit a build profile via a git pull request to PuPHPet.

Out-Of-Box Video Memory Too Low

The first hurdle with configuring a “login box” with monitor support will be adjusting the video RAM.   My laptop has 4GB of dedicated video RAM on a Quadro K3100M GPU.   It can handle a few virtual monitors and has PLENTY of room for more video RAM.   Tweaking the default video configuration is in order.

Since Vagrant “spins up” the box when running the vagrant up command the initial fix starts by sending an ACPI shutdown request to the system.     Testing the video RAM concept is easy.   Get to the VirtualBox GUI, right-click the box and select properties.   Adjust the video RAM to 32MB and turn on 3D accelerator (it makes the GUI desktop happy) and restart.

Looks like I can now get direct console login.  Nice!

PuPHPet Virtual Box with Active Console

Access Credentials

The second issue, which I realized after seeing the login prompt, is that I have NO IDEA what the login credentials are for the system.   This doesn’t matter much when you read/write the shared folders on your host to update the server and only “surf to” the box on port 8080 or SSH in with a pre-shared key, but for console login a username and password are kind of important.   And I have no clue what the default is configured as.  Time for some research.   First stop?  The vagrantfile that built the beast.

Buried within that vagrantfile, which looks just like Ruby syntax (I’m fairly certain it is Ruby code), is a user name “vagrant”.    My first guess?  Username: vagrant, password: vagrant.     Looks like that worked just fine.    Now I have a console login that “gets me around”, but it is not an elevated permissions user level such as root.   However, a simple sudo su – resolves that issue granting me full “keys to the kingdom”.

[box type=”info” size=”large” style=”rounded”]Vagrant Boxes Credentials are username vagrant, password vagrant[/box]

A good start.   Now to wreak some havoc to see what is on this box and where so I can start crafting some Puppet rule changes.   Before I get started I want to get a GUI desktop on here.

GUI Desktop

To get a GUI desktop on CentOS you typically run the yum package installer with yum groupinstall Desktop.    Switching to root with sudo su – and executing that command gets yum going and pulling down the full X11/Gnome desktop environment.

A quick reboot with shutdown -r now from the root command line should bring up the desktop this time around… but clearly I missed a step as I still have a console login.  Most likely a missing startx command or something similar in the boot sequence of init.d.

A basic startx & from the command line after logging back in as vagrant/vagrant brings up my GUI desktop, so clearly I need to turn on the GUI login at boot (on CentOS 6 that means setting the default runlevel to 5 in /etc/inittab).

Tweaking PuPHPet Box Parameters

Now that I know what needs to change I need to go and create that environment via the PuPHPet/Vagrant/Puppet files so I can skip the manual tweaking process.   After some digging I found the config.yaml file.    When you use PuPHPet this file will be put in the .zip download you receive at the end of the PuPHPet process.   It is in the <boxid>/puphpet/ directory.

PuPHPet config.yaml

While some of the box parameters can be adjusted in these files, it appears much of the hardware cannot be manipulated.  There is a site called “Vagrant Cloud” that has multiple boxes that can be configured.   To switch boxes you can edit the config.yaml file and replace the box_url line to point to one of the other variants that may be closer to your configuration.  Since I don’t see one that is close to my needs it looks like I will have to build my own box profile to be hosted in the cloud.   That is content for another article.

 

Posted on

WordPress Workflow : Automated Virtual Box Creation

PuPHPet Vagrant Puppet Banner

I am into my first full day back after WordCamp Atlanta (#wcatl) and have caught up on most of my inbox, Twitter, and Facebook communications.   As I head into a new week of WordPress plugin production I decided now is as good a time as any to update my work flow.

I learned a lot of new things at WordCamp and if there is one thing I’ve learned from past experience it is DO NOT WAIT.   I find the longer I take to start implementing an idea the less chance I have of executing.

My first WordCamp Atlanta 2014 work flow improvement starts right at the base level.   Setting up a clean local development box.   I had started this process last week by manually configuring a baseline CentOS box and was about to setup MySQL, PHP, and all the other goodies by hand.  That was before I learned more about exactly what Vagrant can do.   I had heard of Vagrant but did not fully internalize how it can help me.  Not until this past weekend, that is.

My Work Environment

Before I outline my experience with the process I will share my plugin development work environment.

  • Host System: Windows 8.1 64-bit on an HP Zbook laptop with 16GB of RAM with a 600GB SATA drive
  • Guest System: CentOS 6.5 (latest build) with 8GB RAM on an Oracle VirtualBox virtual machine
    • Linux Kernel 2.6.32-431
    • PHP v5.4.23
    • MySQL v 14.14 dist 5.5.35
  • Dev Tool Kit: NetBeans, SmartGit, Apigen and phpDoc, MySQL command line, vim
HP Zbook Windows 411
My Development System laptop config.

While that is my TYPICAL development environment, every-so-often I swap something out such as the MySQL version or PHP version and it is a HUGE PAIN.    This is where Vagrant should help.  I can spin up different virtual boxes such as a single-monitor versus three-monitor configuration when I am on the road or a box with a different version of PHP.     At least that is the theory anyway.   For now I want to focus on getting a “clean” CentOS 6.5 build with my core applications running so I can get back to releasing the Store Locator Plus Enhanced Results add-on pack this week.

Getting Started With Vagrant

The Rockin’ Local Development With Vagrant talk that Russel Fair gave on Saturday had me a bit worried as he was clearly on the OS/X host and the examples looked great from a command line standpoint.  Being a Linux geek I love command line, but I am not about to run virtual development boxes in a VirtualBox guest.   Seems like a Pandora’s box to me… or at least a Russian doll that will surely slow down performance.   Instead I want to make sure I have Vagrant running on my Windows 8.1 bare metal host.    That is very much against my “full dev environment in a self-contained and portable virtual environment” standard, but one “helper tool” with configurations backed up to my remote Bitbucket repository shouldn’t be too bad, as long as I don’t make it a habit to put dev workflow tools on my host box. Yes, Vagrant does have a Windows installer and I’m fairly certain I won’t need to be running command-line windows to make stuff work.   If I’m running Windows I expect native apps to be fully configurable via the GUI.  Worst case I may need to open a text editor to tweak some files, but no command line please.

Here is the process for a Windows 8.1 install.

  • Download Vagrant.
  • Install needs to be run as admin and requires a system reboot.
  • Ok… it did something… but what?   No icons on the desktop or task bar or … well… anywhere that I can find!

Well… sadly it turns out that Vagrant appears to be a command line only port of the Linux/OSX variants.    No desktop icons, no GUI interface.   I get it.  Doing that is the fast and easy process, but to engage people on the Microsoft desktop you really do need a GUI.    Yes, I’m geek enough to do this and figure it out.   I can also run git command line with no problem but I am FAR more efficient with things like the SmartGit GUI interface.

Maybe I’m not a real geek, but I don’t think using command line and keyboard interaction as the ONLY method for interacting with a computer makes you a real techie.    There is a reason I use a graphical IDE instead of vim these days.    I can do a majority of my work with vim, but it is FAR more efficient to use the GUI elements of my code editor.

Note to Vagrant: if you are doing a windows port at least drop a shortcut icon on the desktop and/or task bar and setup a Windows installer.   Phase 2: consider building a GUI interface on top of the command line system.

It looks like Vagrant is a lower-level command line tool.   It will definitely still have its place, but much like git, this is a tool onto which other “helpers” need to be added to make my workflow truly efficient.  Time to see what other tools are out there.

Kinda GUI Vagrant : PuPHPet

Luckily some other code geeks seem to like the idea of a GUI configuration system and guess what?   Someone created a tool called PuPHPet (which I also saw referenced at WordCamp so it must be cool)  and even wrote an article about Vagrant and Puppet.   Puppet is an “add-on”, called a provisioner, used to set up the guest software environment.

PuPHPet is an online form-based system that builds the text-file configuration scripts that are needed by Vagrant to build and configure your VirtualBox (or VMware) servers.   It is fairly solid for building a WordPress development environment, but it does mean reverting back to CentOS 6.4 as CentOS 6.5 build scripts are not online.     I am sure I can tweak that line of the config files and fix that, but it takes me one step away from the “point and click” operation I am looking for.

Either way, PuPHPet, is very cool and definitely worth playing with if you are going to be doing any WordPress-centric Vagrant work.

PuPHPet Intro Page
The PuPHPet online configuration tool for creating Vagrant + Puppet config files.

 

Puppet Makes Vagrant and PuPHPet Smarter

Now that I have Vagrant installed and I discovered PuPHPet I feel like I am getting closer to a “spin me up a new virtual dev box, destroy-as-desired, repeat” configuration.  The first part of my workflow improvement process.   BUT…. I need one more thing to take care of it seems… get Puppet installed.   I managed to wade through the documentation (and a few videos) to find the Windows installers.

Based on what is coming up in the install window it looks like the installer will roll out some Apache libs, ruby, and the windows kits that help ruby run on a windows box.

Puppet Install Licenses
The Puppet installer on Windows.

Again, much like Vagrant, Puppet completes the installation with little hint of what it has done.    Puppet is another command line utility that runs at a lower-level to configure the server environments.   It will need some of the “special sauce” to facilitate its use.     A little bit of digging has shown that the Puppet files are all installed under the C:\Program Files (x86)\Puppet Labs folder.    On Windows 8.1 the “Start Menu” is MIA, so the documentation about finding shortcuts there won’t help you.    Apparently those shortcuts are links to HTML doc pages and some basic Windows shell scripts (aka Batch Files) so nothing critical appears to have gone missing.

The two files that are referenced most often are the puppet and facter scripts, so we’ll want to keep track of those.   I’ll create a new folder under My Documents called “WP Development Kit” where I can start dumping things that will help me manage my Windows hosted virtual development environment for WordPress. While I’m at it I will put some links in there for Vagrant and get my PuPHPet files all into a single reference point.

WP Dev Kit Directory
The start of my WP Dev Kit directory. Makes finding my PuPHPet, Vagrant, and Puppet files easier.

Now to get all these command line programs to do my bidding.

Getting It Up

After a few hours of reading, downloading, installing, reading some more, and chasing my son around the house as the “brain eating dad-zombie”, I am ready to try to make it all do something for me.    Apparently I need to use something called a “command line”.  On Windows 8.1.

I’m giving in with the hopes that this small foray into the 1980’s world of command line system administration will yield great benefits that will soon make me forget that DOS still exists under all these fancy icons and windows.   Off to the “black screen of despair”, one of the lesser-known Windows brethren of the “blue screen of death”.     Though Windows 8 tries very hard to hide the underpinnings of the operating system, a recent Windows 8 patch and part of Windows 8.1 since “birth” is the ever-useful Windows-x keyboard shortcut.   If you don’t know this one, you should.   Hold down the Windows key and press x.   You will get a Windows pop-up menu that will allow you to select, among many other things, the Command Prompt application.

If you right-click on the “do you really want to go down this rabbit hole” confirmation box that comes up with the Command Prompt (admin) program you will see that it is running C:\Windows\system32\cmd.exe.     This will be useful for creating a shortcut link that will allow me to not only be in command mode but also to be in the “source” directory of my PuPHPet file set.    I’m going to create a shortcut to that application in my new WP Development Kit directory along with some new parameters:

  • Search for cmd.exe and find the one in the Windows\system32 directory.
  • Right-click and drag the file over to my WP Development Kit folder, selecting “create shortcuts here” when I drop it.
  • My shortcut to cmd.exe is put in place, but needs tweaking…
  • Right-click the shortcut and set the “Start in” to my full WP Development Kit folder.

Now I can double-click the command prompt shortcut in my WP Development Kit folder and not need to change directory to a full path or “up and down the directory tree” to get to my configuration environment.

Running Vagrant and Puppet via PuPHPet Scripts

A few key presses later and I’ve managed to change to my downloaded PuPHPet directory and execute the “vagrant up” command.   Gears started whirring, download counters started ticking, and it appears the PuPHPet/Vagrant/Puppet trio are working together to make something happen.  At the very least it is downloading a bunch of stuff from far away lands and filling up my hard drive.   Hopefully with useful VirtualBox disk images and applications required to get things fired up for my new WordPress dev box.

We’ll see…


Posted on

Forcing Display Resolution on VirtualBox and CentOS 6.5

VirtualBox Display Resolution

Last evening my Oracle VM VirtualBox development system stopped auto-detecting my guest display resolution when I re-connected my laptop to the docking station.   The maximum resolution I could get was 1600 x 1200 instead of the native display resolution of 1920 x 1200.   After literally hours of research this morning with many dead-ends I found the proper solution.  Here is my “cheat sheet” on how I got it working in my dev environment.

For CentOS 6.x systems the system-config-display command is obsolete.  The replacement, for today anyway, is xrandr.

VBoxManage is useless unless you are running the virtual box management service, which is not a typical default setup for VirtualBox on a Windows host.

Updating VirtualBox guest additions does not help if you already have a current version.  You WILL need VirtualBox guest additions for the display driver interface on the guest operating system to function properly.   If you don’t have that installed you can install it from the GUI by finding the “Machine / Install Guest Additions” option.  It should drop a CD image on your CentOS 6.5 desktop that you can run from the autorun prompt.  Run it as a priv’ed user such as root.

Once you have VirtualBox guest additions installed login to your system and get to the command prompt.    Switch to a priv’ed user.  I login as my standard account and execute the command:

$ sudo su -

To set up xrandr and add a manual resolution to your list you need to get the configuration setting line.   Use the cvt utility to get the right command line.  Here is the command to find the xrandr mode for a 1920 x 1200 resolution:

# cvt 1920 1200

It returns the line:

Modeline "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync

Those are the parameters for my particular monitor configuration.  It is a basic reference label, a configuration tag, and monitor timing, resolution, and sync timings.  This will be specific to your monitor so run the cvt command, don’t just copy the line here.

For xrandr you will need everything AFTER the Modeline portion.
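Before a mode can be attached to an output it must first be registered with the X server; xrandr --newmode takes the reference label plus the timing values copied from the cvt output above:

```shell
# Register the new mode using the cvt timings (everything after "Modeline").
xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync
```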

Find out what monitors your system thinks it has.  I have 3 monitors so this is my output:

# xrandr
Screen 0: minimum 64 x 64, current 4800 x 1200, maximum 16384 x 16384
VBOX0 connected 1600x1200+0+0 0mm x 0mm
   1600x1200      60.0*+
   1440x1050      60.0  
   1280x960       60.0  
   1024x768       60.0  
   800x600        60.0  
   640x480        60.0  
VBOX1 connected 1600x1200+1600+0 0mm x 0mm
   1600x1200      60.0*+
   1440x1050      60.0  
   1280x960       60.0  
   1024x768       60.0  
   800x600        60.0  
   640x480        60.0  
VBOX2 connected 1600x1200+3200+0 0mm x 0mm
   1600x1200      60.0*+
   1440x1050      60.0  
   1280x960       60.0  
   1024x768       60.0  
   800x600        60.0  
   640x480        60.0  
  1920x1200_60.00 (0x10c)  193.2MHz
        h: width  1920 start 2056 end 2256 total 2592 skew    0 clock   74.6KHz
        v: height 1200 start 1203 end 1209 total 1245           clock   59.9Hz

First register the new mode with xrandr, passing everything after "Modeline" from the cvt output, then add it to each monitor so I can later use the CentOS 6.5 GUI display manager to set the resolution:

# xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync
# xrandr --addmode VBOX0 "1920x1200_60.00"
# xrandr --addmode VBOX1 "1920x1200_60.00"
# xrandr --addmode VBOX2 "1920x1200_60.00"

Now I can go to System / Preferences / Display on the system admin menu.
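If you do this often, the per-monitor commands can be scripted. This is a minimal sketch, not a polished tool: it assumes your output names (VBOX0 and so on) are whatever xrandr reported on your own system, and that cvt emits a "Modeline" line in the format shown above.

```shell
# Sketch: given a cvt "Modeline ..." line on stdin, print the xrandr
# commands that register the mode and attach it to each named output.
gen_mode_cmds() {
  # Drop the leading "Modeline" word and the quotes around the mode name.
  args=$(awk '/Modeline/ { $1=""; sub(/^ /, ""); print }' | tr -d '"')
  name=$(printf '%s\n' "$args" | awk '{ print $1 }')
  printf 'xrandr --newmode %s\n' "$args"
  for output in "$@"; do
    printf 'xrandr --addmode %s "%s"\n' "$output" "$name"
  done
}
```

Run `cvt 1920 1200 | gen_mode_cmds VBOX0 VBOX1 VBOX2` to review the generated commands, then pipe the result through `sh` to apply them.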

CentOS 6.5 Forced Display Resolution
Posted on

Skip Obtaining Drivers From Windows Update

Windows 7 Banner

For the past month I’ve been annoyed by the amount of time it takes for my new Bluetooth headset to pair itself with Windows.  The problem is not the Bluetooth pairing but the device “coming online” and Windows 7 subsequently deciding it needs to locate and install a new driver EVERY SINGLE TIME the device is connected.    Windows 7 always searches the online directory first for any new drivers or updates, and that takes FOREVER.

Slight diversion into my theory of Windows 7 Driver Search technology.  I swear they are sending the driver request to a computer that has electrodes hooked up to trained baby squirrels.  The squirrels then go out into the forest looking for a nut they buried at some point in the past.  This process takes approximately 10 minutes 37.23 seconds to complete.   The special Microsoft Squirrel Automatron (the MSA) then retrieves said nut from the trained baby squirrel and scans it for a bar code that indicates which driver should be downloaded to your PC.    Most of the time the squirrels eat the nut on the way and you get “no driver found”.   Every now-and-then a squirrel gets hit by a car which is the real reason why your PC locks up randomly or throws a blue screen of death.     Either way the process is slow and it sucks.

So in an effort to save the lives of baby squirrels I finally decided to take 3 minutes to turn off that damned “Searching Windows Update For Drivers” process that happens EVERY SINGLE TIME I turn on my Bluetooth headset.   It only took about 33 rounds of either waiting for Windows to finally get a nut from the squirrel or aborting the process early and clicking the “skip obtaining drivers from windows update” link before I did this. Hopefully you found this post after only the 3rd or 4th day of dealing with that driver installation delay.

Here is how you turn off the automatic “Searching Windows Update for drivers” step.   This is especially useful for devices you’ve installed previously, where you know a working driver is already on your system.   It also keeps the Windows Update driver search intact for those times when you install a new piece of hardware and do not have a driver available locally.

  • Go to the Windows Start Menu.
  • Right-click on Computer.
  • Select Properties.
Windows 7 Computer Properties

  • Click Advanced System Settings.
Windows 7 Advanced System Settings

  • Click on the Hardware tab.
  • Click the Device Installation Settings button.
Windows 7 Device Installation Settings
  • Click the No, let me choose what to do radio button.
  • Click the Install driver software from Windows Update if it is not found on my computer radio button (the Save The Squirrels option).
  • Click Save Changes.
Windows 7 No Let Me

Click OK or close each window in the stack until you are back to your starting point.
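If you prefer the command line, the same dialog is backed by a registry value. Treat this as a hedged sketch: the DriverSearching key exists on Windows 7, but the exact SearchOrderConfig value behind each radio button is an assumption on my part, so open the Device Installation Settings dialog afterwards to confirm the option you intended is selected. Run from an elevated Command Prompt:

```shell
:: Assumed mapping (verify in the dialog afterwards):
:: 0 = never search Windows Update, 1 = always search (the default).
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\DriverSearching" /v SearchOrderConfig /t REG_DWORD /d 0 /f
```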

Posted on

Windows Azure Virtual Machines, Not Ready For Prime Time

Just last month, Microsoft announced that their Windows Azure Virtual Machines were no longer considered a pre-release service.  In other words, that was the official notification from Microsoft that they feel their Virtual Machines offering is ready for enterprise class deployments.   In fact they even offer uptime guarantees if you employ certain round-robin and/or load balancing deployments that help mitigate the downtime in your cloud environment.

Essentially the Virtual Machines offering on Windows Azure equates to the virtual dedicated server you would get from most hosting companies.  The only difference with the Windows Azure platform, like most cloud-based offerings, is that you need to serve as your own system admin.   This is not web hosting for business owners but for tech geeks.    In other words, it works perfectly for guys like me.

Or so I thought.

Different Shades of White

As I learned tonight, there are differences between the various cloud offerings that are not easy to tease out of the hundreds of pages of online documentation touting how awesome a service provider’s cloud services are.   Sure, there are the metrics.  You can compare instance sizes in terms of disk space, CPU, and bandwidth.   You can compare pricing and the relative costs of operating your server on each of the cloud platforms.    You can even get background information on the company providing the virtualized environment, getting some clue (though never a clear picture) of where the servers are physically located, how many servers they have, how secure the environment is, and more.

At the end of the day they all look very similar.  Sure, there are discrete elements you can point to on each comparison spreadsheet you throw together, but in the end the differences are relatively minor.   The pricing is similar.   The network and server room build-outs are similar.   The support offerings look similar.     When all is said and done you end up making a choice based on price, the reputation of the company, the quality of the online documentation, and the overall user experience (UX) presented during your research.

After a lot of research, and with quite a bit of experience with Amazon Web Services, all the cloud-based offerings looked very similar.   Different shades of white.     In the end I decided to try the Microsoft Windows Azure offering.    Microsoft has a good reputation in the tech world, they are not going anywhere, and as a Microsoft Bizspark member I also have preview access and discounted services.

My decision to go against the recommendations I’ve been making to my clients for years, “Amazon was one of the first, constantly innovates, and is the leader in the space”, was flawed.    Yes, I tested and evaluated the options for months before making the move.   But it takes an unusual event to truly test the mettle of any service provider.

Breaking A Server

After following the advice a Microsoft employee posted in a Windows Azure forum about Linux servers, I managed to reset the Windows Azure Linux Agent (WALinuxAgent) application.    No, I did not do this on a whim.   I needed to install a GUI application on the server and followed the instructions presented.  It turns out that Microsoft has deployed a custom application that allows their Azure management interface to “talk” to the Linux server.  That same application DISABLES the basic NetworkManager package on CentOS.  To install any kind of GUI application or interface you must disable WALinuxAgent, enable NetworkManager, install, disable NetworkManager, then re-enable WALinuxAgent.  The only problem with the instructions published in several places is that they omit a very important step: while connected with elevated privileges (sudo or su) you must DISABLE the WALinuxAgent (waagent) provisioning so that it does not re-apply the Windows Azure proprietary security model on top of your installation.  If you do not do this and you log out of that elevated-privileges session, you will NEVER have access to an elevated-privileges account again.
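For reference, the dance described above looks roughly like this on a CentOS 6 image. This is a sketch of the sequence, not Microsoft’s documented procedure: the service names and the waagent.conf setting are assumptions from my reading, so verify them on your own image before running anything.

```shell
# Rough sketch of the GUI-install dance (CentOS 6 service names and the
# waagent.conf key are assumptions; verify on your own image first).
# 0. CRITICAL: before logging out of your elevated session, set
#    Provisioning.Enabled=n in /etc/waagent.conf so provisioning does
#    not re-apply the Azure security model over your changes.
service waagent stop
service NetworkManager start
# ... install your GUI packages here ...
service NetworkManager stop
service waagent start
```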

Needless to say, you cannot keep an enterprise level server running in this state.  Eventually you need to install updates and patches for security or other reasons.

As I would learn, there is ZERO support on recovering from this situation.

Support versus support

In the years of working with Amazon Web Services and hosting a number of cloud deployments on their platform, I had become accustomed to being able to reach support personnel who actually TRY to help you out.   They often go above and beyond what is required by contract and either get you back on track through their own efforts or at least provide you with enough research and information that you can recover from any issue with limited effort.    Amazon support services can be pricey, but having access to not just level one but also higher-level techs is an invaluable resource.

The bottom line is that Microsoft offers NO support services for their Linux images, even those they provide as “sanctioned images”, beyond making sure the ORIGINAL image is stable and that the virtual machine did not crash.    Not only is there no apparent means to escalate support tickets; as it turns out there is NO SUPPORT at all if you are running a Linux image.

Clearly Microsoft does not put this “front and center” on ANY of their Windows Azure literature.  In fact, just the opposite.  Microsoft has made an extended effort in all their “before the purchase” propaganda to try and make it sound like they EMBRACE Linux.   They go out of their way to make you feel like Linux is a welcome member of their family and that they work closely with multiple vendors to ensure a top-quality experience.

Until you have a problem.   At which point they wash their hands of it, as is evident in this support response, along with a link to the knowledge base article that effectively says “Linux.  Not our problem.”:

Hello Lance, I understand your concerns and frustration, but Microsoft does not offer technical support for CentOS or any other Linux OS at this time.

 Please, review guidelines for the Linux support on Windows Azure Virtual Machines: http://support.microsoft.com/kb/2805216

No Azure Support

Other Issues

While the lack of support and the inability to regain privileged user access to my server is the primary concern that has me on the path of choosing a new hosting provider, there have been other issues as well.

A few times in the past several months the WordPress application has put Apache in a tailspin, consuming all the memory on the server.   While that is not necessarily a Windows Azure issue, the fact that the “restart virtual image” process DOES NOT WORK at least 50% of the time IS a big issue.   Windows Azure is apparently overly reliant on that dreaded WALinuxAgent on the server.   If the agent does not respond, because memory is over-allocated for example, the server will not reboot.   The only thing you can do is press the restart button, wait 15 minutes to see if it happened to get enough memory to catch the restart command, and try again.  Ouch.

The Azure interface is also not as nice as I first thought.   While better than the original UX at Amazon Web Services, it is overly simplistic in some places and downright confusing in others.  Try looking at your bill.  Or your subscription status.   You end up jumping between seemingly disjointed sites.    Forget about online support forums; somehow you end up in the MSDN network, far removed from your cloud portal.    I often find myself with a dozen windows open so I can keep track of where I was or what I need to reference, lest I lose my original navigation path and have to start over.   Not to mention the number of times that this site-to-site hand-off fails and your login is suddenly deemed “invalid” mid-session.

Azure Session Amnesia

Moving Servers

So once again, I find myself looking for a new hosting provider. Luckily I only recently made the move to Windows Azure, and I have not only VaultPress available to make it easy to relocate the WordPress site but also Crash Plan Pro to move all the “auxiliary” installation “cruft” along with it.

Where will I go?

In my mind there are only two choices for an expandable cloud deployment running Linux boxes. Amazon Web Services or Rackspace. I’ll likely end up with Amazon again, but who knows… maybe it is time to try the legendary support at Rackspace once again. We’ll see. Stay tuned.