
Configuring Apache 2.4 Connections For WordPress Sites

Recently I upgraded my web server to PHP 5.6.14. Along the way the process managed to obliterate my Apache web server configuration files. Luckily it saves copies of them during the upgrade, but one thing I forgot to restore was the settings that help Apache manage memory. Friday night around midnight, because this stuff ALWAYS happens when you’re asleep… the server crashed. And since it was out of memory with a bazillion people trying to surf the site, every time I restarted the server I could not log in fast enough to get a connection and fix the problem.

Eventually I had to disconnect my AWS public IP address, connect to a private address with SSH, and build the proper Apache configuration file to ensure Apache didn’t go rogue and try to take over the Internet from my little AWS web server.

Here are my cheat-sheet notes about configuring Apache 2.4 so that it starts asking site visitors to “hold on a second” when memory starts getting low. That is much nicer than grabbing more memory than it should and just crashing EVERYTHING.

My Configuration File

I put this new configuration file in the /etc/httpd/conf.d directory and named it mpm_prefork.conf. That should help prevent it from going away on a future Apache upgrade. This configuration is for an m3.large server running with 7.4GB of RAM with a typical WordPress 4.4 install with WooCommerce and other plugins installed.

# prefork MPM for Apache 2.4
#
# use httpd -V to determine which MPM module is in use.
#
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxRequestWorkers for the lifetime of the server
#
# MaxRequestWorkers: maximum number of server processes allowed to start
#
#
# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
#
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
#
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
#
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
#
# ServerLimit = sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process
# MaxRequestWorkers = number of simultaneous child processes to serve requests; to raise it you must also increase ServerLimit
#
# If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle,
# Apache httpd may not start or the system may become unstable.
#
# MaxConnectionsPerChild = how many requests are served before the child process dies and is restarted
# find your average requests served per day and divide by average servers run per day
# a good starting default for most servers is 1000 requests
#
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80
#
#

ServerLimit 64
MaxRequestWorkers 64
MaxConnectionsPerChild 2400

The Directives

With Apache 2.4 you only need to adjust three directives: ServerLimit, MaxRequestWorkers (renamed from MaxClients in earlier versions), and MaxConnectionsPerChild (renamed from MaxRequestsPerChild).

ServerLimit / MaxRequestWorkers

ServerLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. MaxRequestWorkers is the number of simultaneous child processes allowed to serve requests. This seems a bit redundant, but it is an effect of using the prefork MPM module, which is a threadless design. That means it runs a bit faster but eats up a bit more memory. It is the default mode for Apache running on Amazon Linux. I prefer it as I like stability over performance, and some older web technologies don’t play well with multi-threaded designs. If I were going to move to a multi-threaded environment I’d switch to nginx. For this setup, setting ServerLimit and MaxRequestWorkers to the same value is fine. This says “don’t ever run more than this many web server processes at one time”.
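
If you are not sure which MPM your Apache build is actually using, a quick check from the shell looks something like this. This is a minimal sketch; it assumes the binary is named httpd, as it is on Amazon Linux.

# Show which MPM this Apache build is using; prefork builds report
# a line like "Server MPM: prefork".
httpd -V | grep -i mpm
# On Apache 2.4 the MPM may be a loadable module, so listing the
# loaded modules is another way to confirm it.
httpd -M 2>/dev/null | grep -i mpm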

In essence this is the total number of simultaneous web connections you can serve at one time. What does that mean? With the older HTTP and HTTPS protocols, every element of your page that comes from your server is a connection. The page text, any images, scripts, and CSS files are all separate requests. Luckily most of this comes out of the server quickly, so a page with 20 web objects on it will use up 20 of your 64 connections but will spit them out in less than 2 seconds, leaving those connections ready for the next site visitor while the first guy (or gal) reads your content. With newer HTTP/2 (and SPDY) connections a single process (worker) may handle multiple content requests from the same user, so you may well end up using only 1 or 2 connections even on a page with multiple objects loading. While that is an over-simplification, the general premise shows why you should update your site to HTTPS and get on services that support HTTP/2.

Calculating A Value

# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80

There you go, easy, right? Figuring out RAM resources can be complicated, but to simplify the process start with the built-in Linux free command, and I suggest installing htop, which provides a simpler interface to see what is running on your server. You will want to do this on your live server under normal load if possible.

Using free -m from the Linux command line will tell you the general high-level overview of your server’s memory status. You want to know how much is installed and how much is in use. In my case I have 7400MB of RAM and 2300MB was in use.

Next you want to figure out how much is in use by Apache and how much an average web connection is using per request. Use htop, filter to show only the httpd processes, and do math. My server was using 1900MB for the httpd processes. The average RAM per process was 87MB.
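
If you would rather script that measurement than eyeball htop, something like this gets the same numbers. It is a rough sketch using ps instead of htop; ps reports resident memory (RSS) in kilobytes, so it is converted to MB.

# Sum and average resident memory (RSS) for all httpd processes.
# ps reports RSS in KB, so divide by 1024 to convert to MB.
ps -C httpd -o rss= | awk '{ total += $1; count++ } END { printf "processes: %d  total: %d MB  avg: %d MB\n", count, total/1024, (total/1024)/count }'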

You can now figure out how much RAM is used by “non-web stuff” on your server. Of the 2300MB of used RAM, Apache was using up 1900MB. That means my server uses about 400MB for general system overhead and various background processes like my system-level backup service. That means on a “clean start” my server should show about 7000MB available for web work. I can verify that by stopping Apache and running free -m after the system “rests” for a few minutes to clear caches and other stuff.

Since I will have 7000MB available for web stuff I can determine that my current WordPress configuration, PHP setup, and other variables will come out to about 87MB being used for each web session. That means I can fit about 80 web processes into memory at one time before all hell breaks loose.

Since I don’t like to exhaust memory and I’m a big fan of the 80/20 rule, I set my maximum web processes to 64: 7000MB / 87MB = 80, and 80 * 0.8 = 64.

That is where you want to set your ServerLimit and MaxRequestWorkers.

MaxConnectionsPerChild

This determines how long those workers are going to “live” before they die off. Any worker will accept a request to send something out to your site visitor. When it is done it doesn’t go away. Instead it tells Apache “hey, I’m ready for more work”. However, every so often one of the things that is requested breaks. A bad PHP script may be leaking memory, for example. As a safety valve Apache provides the MaxConnectionsPerChild directive. It tells Apache that after a child has served this many requests it should die. Apache will start a new process to replace it. This ensures any memory “cruft” that has built up is cleared out should something go wrong.

Set this number too low and your server spends valuable time killing and creating Apache processes. You don’t want that. Set it too high and you run the risk of “memory cruft” building up and eventually having Apache kill your server with out-of-memory issues. Most system admins try to set this to a value that has it reset once every 24 hours. This is hard to calculate unless you know your average objects requested every day and how many processes served those objects, and other factors like HTTP versus HTTP/2 come into play. Not to mention fluctuations like weekend versus weekday load. Most system admins target 1000 requests. For my server load I am guessing 2400 requests is a good value, especially since I’ve left some extra room for memory “cruft”.
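
If you want something better than a guess, a rough way to ballpark your own number is to count a day’s worth of requests in the access log and divide by the number of child processes that served them. The log path below is an assumption (the stock Amazon Linux location); adjust it for your virtual host setup.

# Count today's requests in the Apache access log (date format matches
# the common/combined log format, e.g. 10/Oct/2015).
grep -c "$(date +%d/%b/%Y)" /var/log/httpd/access_log
# Divide that count by roughly how many child processes ran over the day
# to get a starting point for MaxConnectionsPerChild.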


Automated Virtual Box Creation V1.0 Notes


If you read my previous article, WordPress Workflow : Automated Virtual Box Creation, you have an idea of what I am trying to accomplish with improving my WordPress development work flow. The short version: I want to be able to create a fresh install of a virtual machine that has my entire development system intact with minimal input on my part. The idea is to run a few commands, wait for the installs and updates, and be coding on a “clean” machine shortly after. Once I get my own work flow updated I will also be able to share my scripts and tools via a git repository with the remote developers that are now working on Store Locator Plus add-on packs and hopefully simplify their development efforts, or at least get all of us on a similar baseline of tools to improve efficiency in our efforts.

Here are my notes from the first virtual development box efforts via PuPHPet, Vagrant, and Puppet.    This build was done with recent “off-the-shelf” versions of each of these tools and using a base configuration with a handful of options from the PuPHPet site.

Headless Configuration

The VirtualBox machine appears to be created as a “headless” box, meaning no monitor or other display device is active.   I will need to tweak that as I work “on the box” with GUI development tools.    I know that I can install all of my development tools on my host system and read/write from a shared directory to get all of my work onto the virtual machine, but that is not my methodology.    Having worked with a team of developers I know all too well that eventually the host hardware will die.   A laptop will need to be sent off for repair.   Guess what happens?   You lose half-a-day, or more, setting up a new host with a whole new install of development tools.

The better solution, for my work flow, is to keep as much of the development environment “self contained” within the virtual box as possible.   This way when I backup my virtual disk image I get EVERYTHING I need in an all-in-one restore point.   I can also replicate and share my EXACT environment to any location in the world and be fully  “up and running” in the time it takes to pull down a 20GB install file.  In today’s world of super-fast Internet that is less of an issue than individually pulling down and installing a half-dozen working tools and hoping they are all configured properly.

What does this all mean?    I need to figure out how to get the PuPHPet base configuration tweaked so I can start up right from the VirtualBox console with a full Linux console available.  I’ll likely need to update Puppet as well to make sure it pulls down the Desktop package on CentOS.

I wonder if I can submit a build profile via a git pull request to PuPHPet.

Out-Of-Box Video Memory Too Low

The first hurdle with configuring a “login box” with monitor support will be adjusting the video RAM.   My laptop has 4GB of dedicated video RAM on a Quadro K3100M GPU.   It can handle a few virtual monitors and has PLENTY of room for more video RAM.   Tweaking the default video configuration is in order.

Since Vagrant “spins up” the box when you run the vagrant up command, the first step of the fix is to send an ACPI shutdown request to the system. Testing the video RAM concept is easy. Get to the VirtualBox GUI, right-click the box, and select properties. Adjust the video RAM to 32MB, turn on 3D acceleration (it makes the GUI desktop happy), and restart.

Looks like I can now get direct console login.  Nice!

PuPHPet Virtual Box with Active Console

Access Credentials

The second issue, which I realized after seeing the login prompt, is that I have NO IDEA what the login credentials are for the system.   This doesn’t matter much when you read/write the shared folders on your host to update the server and only “surf to” the box on port 8080 or SSH in with a pre-shared key, but for console login a username and password are kind of important.   And I have no clue what the default is configured as.  Time for some research.   First stop?  The vagrantfile that built the beast.

Buried within that vagrantfile, which looks just like Ruby syntax (I’m fairly certain it is Ruby code), is a user name “vagrant”. My first guess? Username: vagrant, password: vagrant. Looks like that worked just fine. Now I have a console login that “gets me around”, but it is not an elevated permissions user level such as root. However, a simple sudo su - resolves that issue, granting me full “keys to the kingdom”.

Note: Vagrant box credentials are username vagrant, password vagrant.

A good start.   Now to wreak some havoc to see what is on this box and where so I can start crafting some Puppet rule changes.   Before I get started I want to get a GUI desktop on here.

GUI Desktop

To get a GUI desktop on CentOS you typically run the yum package installer with yum groupinstall Desktop.    A visit under sudo su and executing that command gets yum going and pulling down the full X11/Gnome desktop environment.

A quick reboot with shutdown -r now from the root command line should bring up the desktop this time around… but clearly I missed a step as I still have a console login.  Most likely a missing startx command or something similar in the boot sequence of init.d.

A basic startx & from the command line after logging back in as vagrant/vagrant brings my GUI desktop up, so clearly I need to turn on the GUI login at boot.
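
For the record, the manual way to turn on the GUI at boot on CentOS 6 is to install the desktop group and switch the default runlevel to 5. This is only a sketch of the commands I mean, run as root; the group names are the stock CentOS 6 ones and may differ on other builds.

# Install the desktop environment and X if they are not already present.
yum -y groupinstall "X Window System" Desktop
# Boot to runlevel 5 (graphical login) instead of runlevel 3 (console).
sed -i 's/^id:3:initdefault:/id:5:initdefault:/' /etc/inittab
# Reboot to pick up the new default runlevel.
shutdown -r now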

Tweaking PuPHPet Box Parameters

Now that I know what needs to change I need to go and create that environment via the PuPHPet/Vagrant/Puppet files so I can skip the manual tweaking process.   After some digging I found the config.yaml file.    When you use PuPHPet this file will be put in the .zip download you receive at the end of the PuPHPet process.   It is in the <boxid>/puphpet/ directory.

PuPHPet config.yaml

While some of the box parameters can be adjusted in these files, it appears much of the hardware cannot be manipulated.  There is a site called “Vagrant Cloud” that has multiple boxes that can be configured.   To switch boxes you can edit the config.yaml file and replace the box_url line to point to one of the other variants that may be closer to your configuration.  Since I don’t see one that is close to my needs it looks like I will have to build my own box profile to be hosted in the cloud.   That is content for another article.
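
For what it is worth, the swap itself is a one-line edit to config.yaml. Something like this would do it without opening an editor; the box URL here is just a placeholder, substitute whichever Vagrant Cloud box matches your target OS.

# Point the build at a different base box; <your-box-url> is a placeholder.
sed -i 's|box_url: .*|box_url: <your-box-url>|' puphpet/config.yaml
# Rebuild the machine so the new base box is used.
vagrant destroy -f && vagrant up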

 


WordPress Workflow : Automated Virtual Box Creation


I am into my first full day back after WordCamp Atlanta (#wcatl) and have caught up on most of my inbox, Twitter, and Facebook communications.   As I head into a new week of WordPress plugin production I decided now is as good a time as any to update my work flow.

I learned a lot of new things at WordCamp and if there is one thing I’ve learned from past experience it is DO NOT WAIT.   I find the longer I take to start implementing an idea the less chance I have of executing.

My first WordCamp Atlanta 2014 work flow improvement starts right at the base level. Setting up a clean local development box. I had started this process last week by manually configuring a baseline CentOS box and was about to set up MySQL, PHP, and all the other goodies by hand. That was before I learned more about exactly what Vagrant can do. I had heard of Vagrant but did not fully internalize how it could help me. Not until this past weekend, that is.

My Work Environment

Before I outline my experience with the process I will share my plugin development work environment.

  • Host System: Windows 8.1 64-bit on an HP Zbook laptop with 16GB of RAM with a 600GB SATA drive
  • Guest System: CentOS 6.5 (latest build) with 8GB RAM on an Oracle VirtualBox virtual machine
    • Linux Kernel 2.6.32-431
    • PHP v5.4.23
    • MySQL v 14.14 dist 5.5.35
  • Dev Tool Kit: NetBeans, SmartGit, Apigen and phpDoc, MySQL command line, vim
My Development System laptop config.

While that is my TYPICAL development environment, every-so-often I swap something out such as the MySQL version or PHP version and it is a HUGE PAIN.    This is where Vagrant should help.  I can spin up different virtual boxes such as a single-monitor versus three-monitor configuration when I am on the road or a box with a different version of PHP.     At least that is the theory anyway.   For now I want to focus on getting a “clean” CentOS 6.5 build with my core applications running so I can get back to releasing the Store Locator Plus Enhanced Results add-on pack this week.

Getting Started With Vagrant

The Rockin’ Local Development With Vagrant talk that Russel Fair gave on Saturday had me a bit worried as he was clearly on an OS X host and the examples looked great from a command line standpoint. Being a Linux geek I love command line, but I am not about to run virtual development boxes in a VirtualBox guest. Seems like a Pandora’s box to me… or at least a Russian doll that will surely slow down performance. Instead I want to make sure I have Vagrant running on my Windows 8.1 bare metal host. That is very much against my “full dev environment in a self-contained and portable virtual environment” standard, but one “helper tool” with configurations backed up to my remote Bitbucket repository shouldn’t be too bad, as long as I don’t make it a habit to put dev workflow tools on my host box. Yes, Vagrant does have a Windows installer and I’m fairly certain I won’t need to be running command-line windows to make stuff work. If I’m running Windows I expect native apps to be fully configurable via the GUI. Worst case I may need to open a text editor to tweak some files, but no command line please.

Here is the process for a Windows 8.1 install.

  • Download Vagrant.
  • Install needs to be run as admin and requires a system reboot.
  • Ok… it did something… but what?   No icons on the desktop or task bar or … well… anywhere that I can find!

Well… sadly it turns out that Vagrant appears to be a command line only port of the Linux/OSX variants.    No desktop icons, no GUI interface.   I get it.  Doing that is the fast and easy process, but to engage people on the Microsoft desktop you really do need a GUI.    Yes, I’m geek enough to do this and figure it out.   I can also run git command line with no problem but I am FAR more efficient with things like the SmartGit GUI interface.

Maybe I’m not a real geek, but I don’t think using command line and keyboard interaction as the ONLY method for interacting with a computer makes you a real techie.    There is a reason I use a graphical IDE instead of vim these days.    I can do a majority of my work with vim, but it is FAR more efficient to use the GUI elements of my code editor.

Note to Vagrant: if you are doing a Windows port, at least have the installer drop a shortcut icon on the desktop and/or task bar. Phase 2: consider building a GUI interface on top of the command line system.

It looks like Vagrant is a lower-level command line tool. It will definitely still have its place, but much like git, it is a tool onto which other “helpers” need to be layered to make my workflow truly efficient. Time to see what other tools are out there.

Kinda GUI Vagrant : PuPHPet

Luckily some other code geeks seem to like the idea of a GUI configuration system, and guess what? Someone created a tool called PuPHPet (which I also saw referenced at WordCamp, so it must be cool) and even wrote an article about Vagrant and Puppet. Puppet is an “add-on”, called a provisioner, used to set up the guest software environment.

PuPHPet is an online form-based system that builds the text-file configuration scripts that are needed by Vagrant to build and configure your VirtualBox (or VMware) servers. It is fairly solid for building a WordPress development environment, but it does mean reverting back to CentOS 6.4 as CentOS 6.5 build scripts are not online. I am sure I can tweak that line of the config files and fix that, but it takes me one step away from the “point and click” operation I am looking for.

Either way, PuPHPet, is very cool and definitely worth playing with if you are going to be doing any WordPress-centric Vagrant work.

The PuPHPet online configuration tool for creating Vagrant + Puppet config files.

 

Puppet Makes Vagrant and PuPHPet Smarter

Now that I have Vagrant installed and I discovered PuPHPet I feel like I am getting closer to a “spin me up a new virtual dev box, destroy-as-desired, repeat” configuration.  The first part of my workflow improvement process.   BUT…. I need one more thing to take care of it seems… get Puppet installed.   I managed to wade through the documentation (and a few videos) to find the Windows installers.

Based on what is coming up in the install window it looks like the installer will roll out some Apache libs, ruby, and the windows kits that help ruby run on a windows box.

The Puppet installer on Windows.

Again, much like Vagrant, Puppet completes the installation with little hint of what it has done.    Puppet is another command line utility that runs at a lower-level to configure the server environments.   It will need some of the “special sauce” to facilitate its use.     A little bit of digging has shown that the Puppet files are all installed under the C:\Program Files (x86)\Puppet Labs folder.    On Windows 8.1 the “Start Menu” is MIA, so the documentation about finding shortcuts there won’t help you.    Apparently those shortcuts are links to HTML doc pages and some basic Windows shell scripts (aka Batch Files) so nothing critical appears to have gone missing.

The two files that are referenced most often are the puppet and facter scripts, so we’ll want to keep track of those. I’ll create a new folder under My Documents called “WP Development Kit” where I can start dumping things that will help me manage my Windows-hosted virtual development environment for WordPress. While I’m at it I will put some links in there for Vagrant and get my PuPHPet files all into a single reference point.

The start of my WP Dev Kit directory. Makes finding my PuPHPet, Vagrant, and Puppet files easier.

Now to get all these command line programs to do my bidding.

Getting It Up

After a few hours of reading, downloading, installing, reading some more, and chasing my son around the house as the “brain eating dad-zombie”, I am ready to try to make it all do something for me. Apparently I need to use something called a “command line”. On Windows 8.1.

I’m giving in with the hope that this small foray into the 1980’s world of command line system administration will yield great benefits that will soon make me forget that DOS still exists under all these fancy icons and windows. Off to the “black screen of despair”, one of the lesser-known Windows brethren of the “blue screen of death”. Though Windows 8 tries very hard to hide the underpinnings of the operating system, the ever-useful Windows-X keyboard shortcut arrived in a recent Windows 8 patch and has been part of Windows 8.1 since “birth”. If you don’t know this one, you should. Hold down the Windows key and press X. You will get a Windows pop-up menu that will allow you to select, among many other things, the Command Prompt application.

If you right-click on the “do you really want to go down this rabbit hole” confirmation box that comes up with the Command Prompt (admin) program you will see that it is running C:\Windows\system32\cmd.exe.     This will be useful for creating a shortcut link that will allow me to not only be in command mode but also to be in the “source” directory of my PuPHPet file set.    I’m going to create a shortcut to that application in my new WP Development Kit directory along with some new parameters:

  • Search for cmd.exe and find the one in the Windows\system32 directory.
  • Right-click and drag the file over to my WP Development Kit folder, selecting “create shortcuts here” when I drop it.
  • My shortcut to cmd.exe is put in place, but needs tweaking…
  • Right-click the shortcut and set the “Start in” to my full WP Development Kit folder.

Now I can double-click the command prompt shortcut in my WP Development Kit folder and not need to change directory to a full path or “up and down the directory tree” to get to my configuration environment.

Running Vagrant and Puppet via PuPHPet Scripts

A few key presses later and I’ve managed to change to my downloaded PuPHPet directory and execute the “vagrant up” command. Gears started whirring, download counters started ticking, and it appears the PuPHPet/Vagrant/Puppet trio are working together to make something happen. At the very least it is downloading a bunch of stuff from far away lands and filling up my hard drive. Hopefully with useful VirtualBox disk images and the applications required to get things fired up for my new WordPress dev box.

We’ll see…



VMWare Mounting Windows Host Folder On CentOS Guest

After a failed upgrade of VMWare-Tools on VMWare Workstation 7.1.5, I ended up with a shared folder that would not mount automatically. After a bit of digging on the Internet I found the solution.  Here are my brief notes on how to manually mount a shared Windows host folder.

  1. Make sure you have the host folder shared and always enabled or enabled until next power off.

     

    VMWare Share Folder
  2. Run the vmware mount client:
    /usr/bin/vmware-hgfsclient

    It will return the name of the shared host folder, it was named “Documents” in my case.

  3. Mount the host folder:
    mount -t vmhgfs .host:/Documents /mnt/hgfs

That’s it.  Hope that helps others that may have lost their auto-mounted windows host folders.
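
If you would rather have the share come back on every boot instead of re-running the mount by hand, one option is an /etc/fstab entry. This is only a sketch: it assumes the same “Documents” share name and /mnt/hgfs mount point used above, and that VMware Tools’ vmhgfs driver is installed and working.

# Add an fstab entry so the Documents share mounts automatically at boot.
echo '.host:/Documents /mnt/hgfs vmhgfs defaults 0 0' >> /etc/fstab
# Mount everything listed in fstab now to verify the new entry works.
mount -a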


WordPress – Sharing A Base Class Amongst Plugins

Introduction

The new series of MoneyPress plugins that is coming out in the next month is going to be based on a common foundation.  This allows us to maintain consistency, share new features across the product line, and provide an improved quality product that gets out to the consumer.

However, during the migration to this new shared platform we uncovered some problem areas deep within the bowels of WordPress.  Yes, even with the recently released 3.0 version.   However we don’t blame this on WordPress.   Far from it.  WordPress is a well engineered application; its only fault is being tied to archaic versions of PHP… which means anything prior to PHP 5.3, when namespaces were finally introduced.   There is a reason many languages have had namespaces for years, but that is a discussion for another post.

One of the more nagging problems was an issue with adding the settings pages to the admin panel menus.  As soon as we activated a 2nd (or 3rd) plugin using the same base classes, the program broke.  Only the latest plugin to be loaded would show up.  We ruled out basic syntax and logic errors fairly quickly.

The problem, it turns out, has to do with how WordPress builds the internal names for all of these objects we are creating based on the same class.   It was a lot of debugging of our code as well as the WordPress code that resulted in a simple solution.

Assumptions – We All Know What They Say About That!

WordPress uses the *class name*, not the *object* itself, when passing an array as the function to the add_options_page or add_action functions.  This is a very important distinction.  The way the docs are written one would ASSUME WordPress is using the object to fire off the functions.

Direct quote from the WordPress Docs…

The function must be referenced in one of two ways:

  1. if the function is a member of a class within the plugin it should be referenced as array( $this, ‘function_name’ )
  2. in all other cases, using the function name itself is sufficient

Since $this would be the instantiated class, I jumped to the same conclusion as I believe Chris (and/or Eric) would have… “you pass the object, therefore it must call that object’s function_name function”.  Not true.

Why The Problem Occurs

The code, buried at the very end of wp-includes/plugin.php, does something LIKE THIS (I’ve put in the debugging statement versus the actual code, it gets the point across in 10 fewer lines) on PHP versions BEFORE 5.2:

 echo 'returning ' . $tag .' ID for ' . get_class($function[0]) . ' as ' . $obj_idx . '<br/>';

That means they are getting the CLASS NAME for the object, attaching the 2nd string which is the function name and using THAT to come up with a unique key.  Guess what happens if the class name + function name are already IN the list?  That’s right, NOTHING.    Since all of our plugins are now based on the WPCSL-GENERIC class name, only one menu item is added and only one render-the-settings page is put into the queue.

In case you’re wondering they do a function_exists(‘spl_object_hash’) to test for 5.2+ functionality in PHP.   If that function exists they do this instead, which yields similar unwanted results:

 return spl_object_hash($function[0]) . $function[1];

As a side note – I like how they do their 5.2 “upgrades on the fly” by checking for a function from 5.2, using that if they can, otherwise use their own code.

All told, this is yet ANOTHER great argument for PHP based applications to all upgrade to 5.3 and start using namespaces.   Also, IMO this is a bad way to manage this.  Every plugin on every site must have unique class names or this will fail.  As we found out when doing the “right thing” and basing all of our WordPress plugins off the same base classes.

The Fix

So after all of this, there is a somewhat simple fix.   Set the wp_filter_id property on your class.

In our class we simply need to set the property wp_filter_id = a unique int.   The only thing we need to make sure of, since we are now our own ID number managers, is to ensure EVERY PLUGIN we create has a unique wp_filter_id property.  My shortlist for wp_filter_ids (which we need to record in our internal docs):

  • MP : CafePress Edition = 1
  • MP: Commission Junction Edition = 2
  • MP: eBay Edition = 3
  • MP: BuyAt Master Edition = 4
  • MP: Ticketmaster Edition = 5
  • MP: NY Times Store Edition = 6

Not a GREAT fix because it bypasses the WordPress ID generator for the filter system.   It is very much like turning off auto-increment on a database primary key for a FEW records.   Luckily the IDs in WordPress are compound keys made up of class name + function name + autoID.    Since our class name + function name is somewhat long & complex there is little chance that setting a manual ID will cause problems.

The reason this works?   In that deep dark function called _wp_filter_build_unique_id the guys on the WordPress development team left an “out”.   If your class has wp_filter_id set as a property it skips their auto-generation of the ID, assuming you know what the heck you’re doing.   This means it doesn’t find the class name + function name already in the table and “skip it” because it thinks it’s doing the same thing twice.   You told it use ID # X so it will do that… and thus create a manually generated unique ID for each plugin even though they share the same base class.  When it comes time to render the page both new filters are on the render stack & will get popped off, drawing each menu item where it belongs.

What a way to get a crash course on WP3.0 internals.  Now on to launching some new products…


HTTP Errors When Uploading/Connecting in WordPress

Having problems browsing themes, uploading plug-ins, or doing just about anything that “talks” to the outside world via WordPress? We have had a development server buried deep in our network behind several routers and firewalls that had a similar problem. Whenever we’d log into the dashboard we’d get various timeout error messages on each of the news sections. We’d not get our automatic update messages whenever there was a plugin update or a WordPress update (3.0 is coming soon!).
Well it turns out that we needed to fix 2 things to help speed up the network connection.

Fix #1 – DNS Resolution

We run this particular development box on Linux.   That meant updating our /etc/resolv.conf file to talk directly to the DNS servers. If you use DHCP configuration or go through a router this file is often empty.   Force-feeding our Internet Service Provider’s (ISP’s) DNS server IP addresses into this file sped up domain name lookups significantly.  This meant looking up things on wordpress.org took 1-2 seconds versus the previous 10-20 second lookup times.   Here is what we put in our file for our BellSouth/AT&T DNS in Charleston, South Carolina:

search cybersprocket.com
nameserver 205.152.37.23
nameserver 205.152.132.23
nameserver 192.168.3.254

Fix #2 – Adjust PHP Timeout

This seemed to help with the problem, though we’re not sure why.  WordPress should be overriding the default PHP.ini settings but maybe something was missed deep in the bowels of the WordPress codebase… either that or this was pure coincidence.  Either way, we’re listing it here because as soon as we did these two things our timeout issues went away.

Update php.ini, on our linux server this is /etc/php.ini, and change the default_socket_timeout setting to 120.  That section of our php.ini now looks like this:

; Default timeout for socket based streams (seconds)
;default_socket_timeout = 60
default_socket_timeout = 120

Hopefully these notes will help you resolve any timeout issues you’re having with your WordPress site.


Upgrading Logwatch on CentOS 5

Introduction

I finally got tired of looking at the thousand-plus line daily reports coming to my inbox from Logwatch every evening.  Don’t get me wrong, I love logwatch.  It helps me keep an eye on my servers without having to scrutinize every log file.  If you aren’t using logwatch on your Linux boxes I strongly suggest you look into it and turn on this very valuable service.  Most Linux distros come with it pre-installed.

The problem is that on CentOS the version of logwatch that comes with the system was last updated in 2006.   The logwatch project itself, however, was updated just a few months ago.  As of this writing the version running on CentOS 5 is 7.3 (released 03/24/06) and the version on the logwatch SourceForge site is 7.3.6 (updated March 2010).   In this latest version there are a lot of nice updates to the scripts that monitor your log files for you.

The one I’m after, consolidating brute force hacking attempt reports, is a BIG thing.  We see thousands of entries in our daily log files from hackers in China trying to get into our servers.   This is typical of most servers these days, though in many cases ignorance is bliss.  Many site owners and IPPs don’t have logging turned on because they get sick of all the reports of hacking attempts.  Luckily we block these attempts on our server, but our Fireant labs project is configured to have iptables tell us whenever an attempt is blocked at the kernel level (we like to monitor what our labs scripts are doing while they are still in alpha testing).   This creates THOUSANDS of lines of output in our daily email.   Logwatch 7.3.6 helps mitigate this.

Logwatch 7.3.6 has a lot of new reports that default to “summary mode”.  You see a single line entry for each notable event, versus a line for each time the event occurred.  For instance, we now see a report more like this for IMAPd:

 [IMAPd] Logout stats:
 ====================
 User                                    | Logouts | Downloaded |  Mbox Size
 --------------------------------------- | ------- | ---------- | ----------
 cpanel@localhost                        |     287 |          0 |          0
 xyz@cybersprocket.com                   |       4 |          0 |          0
 ---------------------------------------------------------------------------
 Total                                   |     291 |          0 |          0

Versus the older output like this:

--------------------- IMAP Begin ------------------------
 **Unmatched Entries**
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32811], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32826], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32981], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32988], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33040], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33245], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33294], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33310], protocol=IMAP: 1 Time(s)
 repeat 280 more times...

So as you can imagine, with 10 sections to our logwatch report, the new summary reports make our email a LOT easier to scan for potential problems in our log files.

Upgrading Logwatch

In order to get these cool new features you need to spend 10 minutes, 5 if you’re good with command line Linux, and install the latest version of logwatch. In essence you are downloading a tarzip that is full of new shell and Perl script files.  The install does not compile anything; it simply copies script files to the proper directory on your server.

Our examples here are all based on the default CentOS 5 paths.

  • Go to a temp install or source directory on your server.
    # cd /usr/local/src
  • Get the source for logwatch
    # wget http://downloads.sourceforge.net/project/logwatch/logwatch-7.3.6.tar.gz?use_mirror=iweb
  • Extract the files
    # tar xvfz logwatch-7.3.6.tar.gz
  • Make the install script executable
    # cd logwatch-7.3.6
    # chmod a+x install_logwatch.sh
  • Run the script & enter the correct paths for logwatch:
    # ./install_logwatch.sh
    ...Logwatch Basedir [/usr/share/logwatch]  : /etc/log.d
    ...Logwatch ConfigDir [/etc/logwatch] : /etc/log.d
    ...temp files [/var/cache/logwatch] : <enter>
    ...perl [/usr/bin/perl] : <enter>
    ...manpage [/usr/share/man] : <enter>

Conclusion

That’s it.  You should now be on the latest version of logwatch.

You can tweak a lot of the settings by editing the files in /etc/log.d/default.conf/services/<service-name>. For example, we ask logwatch to only tell us when someone’s attempts to connect to our server have been dropped more than 10 times by our Fireant scripts (we do this via the iptables service setting).

Hope you find this latest update useful.   We certainly did!


Upgrading Redmine From 8.6 to 9.3

After more than a year of using Redmine to help us manage our projects it was time to upgrade.  Redmine helps us manage our bug lists, wish lists, and to do lists.  It helps us communicate with our clients effectively and efficiently using a web based medium in a consistent format that is easy to use for both our developers and our clients.  However, during the past year there have been several changes, including the significant upgrades that came out in v9.x some months back.   Our busy schedule kept us from upgrading as each new release came out, and sadly we had fallen far behind.   This past weekend we decided it was time to upgrade.   The notes below record some of the problems we ran into and outline how we resolved them.  If you are using Redmine for your own projects we hope this guide will help walk you through a major version update of your own.

These are Cyber Sprocket’s notes from our upgrade.  For more information you may want to visit the official documentation site.

Our Environment

The environment we were running before upgrading to Redmine 9.3:

  • Redmine 8.6
  • Apache 2.2.7

Preparation

The first thing we ALWAYS do before upgrading a system is to store a local copy of the database and the source code.  In order to make the archives as small as possible we post a note on the system that Redmine will be offline and at the posted time remove all the session “crud” that has built up.   The process includes a mysql data dump, a file removal, and a tarzip.

  • Go to the directory ABOVE the redmine root directory:
    cd /<redmine-root-dir>; cd ..;
  • Dump MySQL Redmine data:
    mysqldump --user=<your-redmine-db-username> -p <your-redmine-databasename> > redmine_backup.sql
  • Remove the session files:
    rm -rf <redmine-directory>/tmp/sessions/*
  • Tarzip:
    tar cvfz redmine-backup.tgz redmine_backup.sql ./<redmine-directory-name>

Issues

Updating Rails

We realized after some back & forth that our RoR installation needed to be upgraded.  Redmine 9.3 requires Ruby 1.8.6 or 1.8.7 (we had 1.8.6, luckily) with Rails 2.3.5 (which we needed to upgrade) and Rack 1.0.1 (which we never touched).

gem install rails -v=2.3.5

Fetching 9.3

We could not perform a simple svn update since we are on an 8.X branch.  A new svn checkout was necessary.  We opted to move our old Redmine install to a different path and do the checkout in our original location:

svn checkout  /redmine

Generating session_store.rb

Later versions of Redmine (even 8.X versions beyond 8.6) require a secret key in order for the session system to work.  If you don’t have this you can’t log in.  After much trial & error we found that the following command WILL WORK if you have the latest Redmine source (Fetching 9.3) and the latest version of Rails (Updating Rails).   There is no file named config/initializers/session_store.rb in the code repository; it is created by the following rake command:

rake config/initializers/session_store.rb

Updating The Database

The database then needed to be migrated:

rake db:migrate RAILS_ENV=production

Database Upgrade Errors : Migrating Member_Roles and Groups

While performing the database update we immediately ran into a couple of errors about a table already existing. Turns out a simple renaming of the tables fixed the problem, no apparent harm done.

The error message was:

Mysql::Error: Table 'member_roles' already exists:

The fix was as simple as logging into MySQL from the command line and renaming the table:

mysql> rename table member_roles to member_roles_saved
mysql> rename table groups_users to groups_users_saved

Switching from CGI to FCGID

It turns out that RoR does not play well with plain ol’ CGI processing via Apache when running Rails v2.3.5.   We ended up having to upgrade our Apache server to enable mod_fcgid and tweak our new Redmine install to use that.  We started by following an excellent guide to running Redmine on Apache.  Below are our notes about this process to help save you some time:

  • Do not install fcgi, instead use Apache’s mod_fcgid
  • chmod 755 /var/log/httpd so fgcid can run from Apache and access the socks directory it creates there
  • Modify <redmine-directory>/public/.htaccess to prevent looping with mod_rewrite

Installing FCGID

Official Apache mod_fcgid (http://httpd.apache.org/mod_fcgid/): this is the Apache version; it seems newer and we had more luck with it than the Coremail hosted version below.

Fetch the code

cd /usr/local/src/
wget 
tar zxvf mod_fcgid.2.3.5.tgz
cd mod_fcgid.2.3.5

Configure and Install

./configure.apxs
make
make install

Permissions

chmod 755 /var/log/httpd
service httpd restart

Install Ruby Gem fcgi

You will need to tell Ruby to work with fcgi for this to work:

gem install fcgi

Errors Installing fcgi gem

If you see this error:

Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

You probably need the fcgi development kit. Get it from here, build it & install it… THEN do the gem install fcgi again.

http://www.fastcgi.com/drupal/node/5

Prevent Redirects

You may end up with mod_rewrite looping if you had a CGI version installed first.   We commented out the non-fcgid lines and that kept things running smoothly.

Edit <redmine-directory>/public/.htaccess

Comment all the lines for the Rewrite rules for the dispatcher except the FCGI rule for fcgid

#<IfModule mod_fastcgi.c>
#       RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
#</IfModule>
#<IfModule mod_fcgid.c>
       RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]
#</IfModule>
#<IfModule mod_cgi.c>
#       RewriteRule ^(.*)$ dispatch.cgi [QSA,L]
#</IfModule>

Getting Errors With FCGID?

This is a very common error.  For some reason Ruby + mod_fcgid do not always play well with each other.  We have two near-identical servers running CentOS 5, Apache 2.2.x, and the same exact versions of Ruby + Rails + gems installed.   Yet on one server Redmine works fine.  On the other we get this:

undefined method `env_table’ for nil:NilClass

The “magic pill” seems to be running Passenger.  While we didn’t believe this at first since we got it to work fine on our development server, it turns out that there are some gremlins buried deep within the bowels of Ruby & mod_fcgid.    These few steps fixed the problem on our production server:

gem install passenger
passenger-install-apache2-module

Edit the httpd.conf file and add these lines (check your paths that Passenger gives you during the install – they may be different on your server):

LoadModule passenger_module /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11
PassengerRuby /usr/local/bin/ruby

Restart httpd…

service httpd restart

Test your Redmine install.

Checking Logs

If you have problems check the log files in your Redmine installation directory, such as ./log/production.log. You may also want to check your Apache log files, assuming you’ve set those up. To log Apache messages you need to have an ErrorLog statement in your httpd.conf file that tells Apache where you want your log file written (normally /usr/local/apache/logs/redmine-error.log).
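
When testing, it can help to watch both logs at the same time. The paths below are only the defaults mentioned above; adjust them to your install.

# Watch the Redmine application log and the Apache error log together
# (replace /path/to/redmine with your actual Redmine directory).
tail -f /path/to/redmine/log/production.log /usr/local/apache/logs/redmine-error.log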


Setting Up Stunnel On Linux

We need your help!


Cyber Sprocket is looking to qualify for a small business grant so we can continue our development efforts. We are working on a custom application builder platform so you can build custom mobile apps for your business. If we reach our 250-person goal we have a better chance of being selected.

It is free and takes less than 2 minutes!

Go to www.missionsmallbusiness.com.
Click on the “Login and Vote” button.
Put “Cyber Sprocket” in the search box and click search.
When our name comes up click on the vote button.

 

And now on to our article…

 

Intro

This article was written while getting SMTP authentication working with AT&T Business Class DSL services.   The SMTP service requires authentication via a secure connection on port 465.   Other articles will get into further details; this article’s focus is on the stunnel part of the equation, which we use to wrap the standard sendmail/SMTP configuration.

In This Article

  • An example stunnel config file for talking to AT&T SMTP servers on port 465 (SMTPS)
  • Testing that the connection to AT&T SMTPS is working via telnet
  • Getting stunnel running on system boot.

Our Environment

  • CentOS release 5.2
  • stunnel 4.15-2

We assume you have stunnel and telnet installed.  If not, research the yum install commands for CentOS.  You will also need superuser access to update the running services on your box.

Setting up stunnel

Stunnel will allow you to listen for data connections on a local port and redirect that traffic through an SSL wrapper to another system.  In our case we are using stunnel to listen on port 2525 on our local server, wrap the communication in SSL, and send it along to the AT&T SMTP server at smtp.att.yahoo.com on port 465 (aka SMTPS).

Install

To do this you will need stunnel installed.   If yum is configured properly and the remote yum servers are online you can try this:

# yum install stunnel

Configure

You will then need to create or edit the stunnel configuration file and setup the AT&T SMTPS redirect.  Your config file should look like this (your remote SMTPS server may have a different URL, check with your ISP):

client=yes
[rev-smtps]
accept=127.0.0.1:2525
connect=smtp.att.yahoo.com:smtps

Test

Run stunnel in a detached daemon mode:

# stunnel &

Then telnet to localhost port 2525, which should SSL-wrap the connection to the AT&T SMTP server:

# telnet 127.0.0.1 2525

You should see something like this:

[root@dev xinetd.d]# telnet localhost 2525
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
220 smtp104.sbc.mail.re3.yahoo.com ESMTP
EHLO
250-smtp104.sbc.mail.re3.yahoo.com
250-AUTH LOGIN PLAIN XYMCOOKIE
250-PIPELINING
250 8BITMIME
quit

Connection closed by foreign host.

Stop the test process by killing the detached process.  Find the process ID with ps and kill it.

# ps -ef | grep stunnel

You should see something like this:

root      6181     1  0 11:37 ?        00:00:00 stunnel
root     10698  3626  0 14:11 pts/0    00:00:00 grep stunnel

Kill the process.

# kill <pid>

Starting up stunnel on boot.

stunnel can be started by using the simple # stunnel & command via a shell script that runs at startup.  This method allows for session caching and generally improves performance over an xinetd controlled session.

Configure

Create /etc/init.d/stunnel:

#!/bin/bash
#
#       /etc/rc.d/init.d/stunnel
#
# Starts the stunnel daemon
#
# Source function library.
. /etc/init.d/functions
test -x /usr/sbin/stunnel || exit 0
RETVAL=0
#
#       See how we were called.
#
prog="stunnel"
start() {
    # Check if stunnel is already running
    if [ ! -f /var/lock/subsys/stunnel ]; then
    echo -n $"Starting $prog: "
    daemon /usr/sbin/stunnel
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/stunnel
    echo
    fi
    return $RETVAL
}
stop() {
    echo -n $"Stopping $prog: "
    killproc /usr/sbin/stunnel
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/stunnel
    echo
    return $RETVAL
}
restart() {
    stop
    start
}
reload() {
    restart
}
status_at() {
    status /usr/sbin/stunnel
}
case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
reload|restart)
    restart
    ;;
condrestart)
    if [ -f /var/lock/subsys/stunnel ]; then
    restart
    fi
    ;;
status)
    status_at
    ;;
*)
    echo $"Usage: $0 {start|stop|restart|condrestart|status}"
    exit 1
esac
exit $?

Set the stunnel script to run at startup level 3:

# ln -s /etc/init.d/stunnel /etc/rc3.d/S58stunnel

Test

Run the same telnet test to port 2525 on localhost as noted above.  Don’t kill the process when you are done.

Running via xinetd

xinetd runs various port-listening services through a single program (xinetd) that runs as a daemon.  Since our box (and most RHEL variants) runs xinetd by default, we simply need to create our configuration file for stunnel, put it in the /etc/xinetd.d directory, and restart the xinetd process.  This is NOT the recommended method for running stunnel.

Install

If xinetd is not installed and running on your system (it should be) then grab it with yum

# yum install xinetd

Configure

Create a new stunnel configuration file in the /etc/xinetd.d directory.

# description: stunnel listener to map local ports to outside ports
service stunnel
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    port            = 2525
    server          = /usr/sbin/stunnel
}

You can learn more about xinetd configuration files here:
http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-tcpwrappers-xinetd-config.html

You will also need to change your stunnel config file as the accept port is now handled by xinetd.  You can learn more via the stunnel manual by using # man stunnel at your linux prompt.

The new stunnel.conf file:

client=yes
connect=smtp.att.yahoo.com:smtps

Test

# service xinetd restart
# telnet 127.0.0.1 2525

You should see the same results as the stunnel test above.


Scheduling Linux Apps

Executing Programs On A Schedule

The Linux scheduling application is known as cron. Cron is your friend when you want a program to run at a specific time every day, or at multiple times during the day.

The primary schedule is kept in a file known as crontab on most systems. A relatively normal location for this file is /etc/crontab. It is a text file that looks a bit odd, but is easily managed to create a variety of ways to get stuff to run when you want it to on a regular basis.

A sample crontab file:

SHELL=/bin/sh
PATH=/usr/bin:/usr/sbin:/sbin:/bin:/usr/lib/news/bin
MAILTO=root
#
# check scripts in cron.hourly, cron.daily, cron.weekly, and cron.monthly
#
-*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1
*/5 *  * * *     root run-parts /etc/cron.fivemin
59 *  * * *     root  rm -f /var/spool/cron/lastrun/cron.hourly
10 2  * * *     root  run-parts /etc/cron.0210
14 4  * * *     root  rm -f /var/spool/cron/lastrun/cron.daily
29 4  * * 6     root  rm -f /var/spool/cron/lastrun/cron.weekly
44 4  1 * *     root  rm -f /var/spool/cron/lastrun/cron.monthly

This crontab tells the system to run a half-dozen different schedules, including:

  • Every 5 minutes – run whatever shell scripts you find in the /etc/cron.fivemin directory
  • Every hour on the 59th minute – clear the /var/spool/cron/lastrun/cron.hourly marker so the scripts in /etc/cron.hourly get run again
  • Every night at 2:10 AM – run the scripts in /etc/cron.0210
  • Every night at 4:14 AM – run the shell scripts in /etc/cron.daily (by clearing its lastrun marker)

You can see the pattern if you look closely. Details can be found on numerous websites and in your system docs (try man or info).
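
For reference, the pattern in those leading columns is the standard cron field layout. The script path in the example line below is just a made-up placeholder.

# minute (0-59)  hour (0-23)  day-of-month (1-31)  month (1-12)  day-of-week (0-6, Sun=0)
# followed, in a system crontab like this one, by the user to run as and the command.
#
# Example: run a (hypothetical) script at 2:10 AM every night as root:
10 2 * * *   root   /usr/local/bin/nightly-job.sh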

Getting Something To Run Every Five Minutes

Using the sample crontab noted above, you can get a program to run every five minutes by simply creating a shell script (you do know how to do that, don’t you?) and placing it in the /etc/cron.fivemin directory. You’ll need to remember to set the executable bit BTW (755 is usually a good permission setting for the file).

Typically the shell script will run a program written in another language such as perl. For example a file named “notifyifcrashed.sh” might contain:

#!/bin/sh
cd /var/www/cgi-bin
./monitor.pl
exit 0

We’ll assume that the monitor.pl Perl applet does something friendly like sending a notification email to our server admin if it detects the Apache web server has gone offline.
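
Don’t forget the executable bit mentioned above; with the example file that would be:

# Make the cron script executable (755) so it will be picked up and run.
chmod 755 /etc/cron.fivemin/notifyifcrashed.sh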

Devices

How Much Disc Space Is In Use

To list the disc space on a Linux system:

df -h

Results in something like this:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             2.0G  261M  1.7G  14% /
/dev/sda1            1012M   46M  915M   5% /boot
none                  2.0G     0  2.0G   0% /dev/shm
/dev/sda8             121G   96G   19G  84% /home
/dev/sda6             2.0G   71M  1.8G   4% /tmp
/dev/sda2             9.9G  4.8G  4.6G  52% /usr
/dev/sda5             9.9G  1.9G  7.5G  21% /var
/dev/sdb1             147G   93M  140G   1% /backup
/tmp                  2.0G   71M  1.8G   4% /var/tmp

Listing Mounted Drive Partitions

For a Linux system running CentOS:

sort /etc/mtab

Will result in something like this:

/dev/sda1 /boot ext3 rw 0 0
/dev/sda2 /usr ext3 rw,usrquota 0 0
/dev/sda5 /var ext3 rw,usrquota 0 0
/dev/sda6 /tmp ext3 rw,noexec,nosuid 0 0
/dev/sda7 / ext3 rw,usrquota 0 0
/dev/sda8 /home ext3 rw,usrquota 0 0
/dev/sdb1 /backup ext3 rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
none /dev/shm tmpfs rw 0 0
none /proc proc rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
none /sys sysfs rw 0 0
/tmp /var/tmp none rw,noexec,nosuid,bind 0 0
usbfs /proc/bus/usb usbfs rw 0 0

The columns:

  • Device name
  • Where it is mounted on the filesystem (the directory it is attached to)
  • The file system type
  • the rest of the columns are default permissions, etc.

Disc Device Names

If you look in the mounted drives table above you’ll see something interesting about the standard drive names:

  • most start with /dev/sda
  • one starts with /dev/sdb

This indicates that we have two physical drives mounted in the system. A “drive a” and “drive b”.

The numbers after the “a” or “b” indicate the partition on that drive. Our first drive is broken up into 6 pieces that we have access to:

/dev/sda1 /boot
/dev/sda2 /usr
/dev/sda5 /var
/dev/sda6 /tmp
/dev/sda7 /
/dev/sda8 /home

The second drive is one piece mounted at /backup

/dev/sdb1 /backup