
SLP versions: the Power and Experience add-ons are updated to version 4.6.1 with updates to the import functions


Store Locator Plus 4.6.1 Highlights

• Allow add-ons to load JS on a per-admin-tab basis. Reduces browser overhead and memory footprint on admin pages.

• Simplify and improve the new option manager. More consistent option handling, more secure option management, better performance.

• Checkbox on/off validation for all add-ons – fixes instant checkbox saving on the admin panel for some add-on options

• All language files are now pulled from the MySLP website.

• Missing translations? See MySLP and add your translation updates there.

Change Log for SLP Power Add-on


Configuring Apache 2.4 Connections For WordPress Sites

Recently I upgraded my web server to PHP 5.6.14. Along the way the process managed to obliterate my Apache web server configuration files. Luckily it saves them during the upgrade process, but one thing I forgot to restore was the settings that help Apache manage memory. Friday night around midnight, because this stuff ALWAYS happens when you’re asleep… the server crashed. And since it was out of memory with a bazillion people trying to surf the site, every time I restarted the server I could not log in fast enough to get a connection and fix the problem.

Eventually I had to disconnect my AWS public IP address, connect to a private address with SSH, and build the proper Apache configuration file to ensure Apache didn’t go rogue and try to take over the Internet from my little AWS web server.

Here are my cheat-sheet notes about configuring Apache 2.4 so that it starts asking site visitors to “hold on a second” when memory starts getting low. That is much nicer than grabbing more memory than it should and just crashing EVERYTHING.

My Configuration File

I put this new configuration file in the /etc/httpd/conf.d directory and named it mpm_prefork.conf. That should help prevent it from going away on a future Apache upgrade. This configuration is for an m3.large server running with 7.4GB of RAM with a typical WordPress 4.4 install with WooCommerce and other plugins installed.

# prefork MPM for Apache 2.4
#
# use httpd -V to determine which MPM module is in use.
#
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxRequestWorkers for the lifetime of the server
#
# MaxRequestWorkers: maximum number of server processes allowed to start
#
#
# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
#
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
#
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
#
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
#
# ServerLimit = sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process
# MaxRequestWorkers = number of simultaneous child processes to serve requests; must increase ServerLimit as well
#
# If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle,
# Apache httpd may not start or the system may become unstable.
#
# MaxConnectionsPerChild = how many requests are served before the child process dies and is restarted
# find your average requests served per day and divide by average servers run per day
# a good starting default for most servers is 1000 requests
#
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80
#
#

ServerLimit 64
MaxRequestWorkers 64
MaxConnectionsPerChild 2400

The Directives

With Apache 2.4 you only need to adjust 3 directives: ServerLimit, MaxRequestWorkers (renamed from MaxClients in earlier versions), and MaxConnectionsPerChild (renamed from MaxRequestsPerChild).

ServerLimit / MaxRequestWorkers

ServerLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. MaxRequestWorkers is the number of simultaneous child processes to serve requests. This seems a bit redundant, but it is an effect of using the prefork MPM module which is a threadless design. That means it runs a bit faster but eats up a bit more memory. It is the default mode for Apache running on Amazon Linux. I prefer it as I like stability over performance and some older web technologies don’t play well with multi-threaded design. If I was going to go with a more stable multi-thread environment I’d switch to nginx. For this setup setting ServerLimit and MaxRequestWorkers to the same value is fine. This says “don’t ever run more than this many web servers at one time”.

In essence this is the total simultaneous web connections you can serve at one time. What does that mean? With the older HTTP and HTTPS protocol that means every element of your page that comes from your server is a connection. The page text, any images, scripts, and CSS files are all a separate request. Luckily most of this comes out of the server quickly so a page with 20 web objects on it will use up 20 of your 64 connections but will spit them out in less than 2 seconds leaving those connections ready for the next site visitor while the first guy (or gal) reads your content. With newer HTTP/2 (and SPDY) connections a single process (worker) may handle multiple content requests from the same user so you may well end up using 1 or 2 connections even with a page with multiple objects loading. While that is an over-simplification, the general premise shows why you should update your site to https and get on services that support HTTP/2.

Calculating A Value

# TOTAL SYSTEM RAM: free -m (first column) = 7400 MB
# USED SYSTEM RAM: free -m (second column) = 2300 MB
# TOTAL APACHE RAM LOAD: (htop sum RES column) 1900 MB
# AVG APACHE RAM LOAD: htop (filter httpd, average RES column = loaded in physical RAM) = 87MB
# BASE SYSTEM RAM LOAD: USED SYSTEM RAM - TOTAL APACHE RAM LOAD = 2300 - 1900 = 400MB
# AVAILABLE FOR APACHE: TOTAL SYSTEM RAM - BASE SYSTEM RAM LOAD = 7400 - 400 = 7000MB
# ServerLimit = AVAILABLE FOR APACHE / AVG APACHE RAM LOAD = 7000MB / 87MB = 80

There you go, easy, right? Figuring out RAM resources can be complicated, but to simplify the process start with the built-in Linux free command, and I suggest installing htop, which provides a simpler interface to see what is running on your server. You will want to do this on your live server under normal load if possible.

Using free -m from the Linux command line will tell you the general high-level overview of your server’s memory status. You want to know how much is installed and how much is in use. In my case I have 7400MB of RAM and 2300MB was in use.

Next you want to figure out how much is in use by Apache and how much an average web connection is using per request. Use htop, filter to show only the httpd processes, and do math. My server was using 1900MB for the httpd processes. The average RAM per process was 87MB.
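If you want the same numbers without eyeballing htop, something like the following pulls them from the command line. This is a rough sketch: it assumes Linux procps tools (free, ps) and awk, with Apache processes named httpd as they are on Amazon Linux.

# Overall memory picture (values in MB)
free -m

# Count the httpd workers and average their resident (RES/RSS) memory
ps -C httpd -o rss= | awk '
    { sum += $1; n++ }
    END { if (n) printf "workers=%d  avg=%.0f MB  total=%.0f MB\n",
                        n, sum/n/1024, sum/1024 }'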

You can now figure out how much RAM is used by “non-web stuff” on your server. Of the 2300MB of used RAM, Apache was using up 1900MB. That means my server uses about 400MB for general system overhead and various background processes like my system-level backup service. That means on a “clean start” my server should show about 7000MB available for web work. I can verify that by stopping Apache and running free -m after the system “rests” for a few minutes to clear caches and other stuff.

Since I will have 7000MB available for web stuff I can determine that my current WordPress configuration, PHP setup, and other variables will come out to about 87MB being used for each web session. That means I can fit about 80 web processes into memory at one time before all hell breaks loose.

Since I don’t like to exhaust memory and I’m a big fan of the 80/20 rule, I set my maximum web processes to 64: 7000MB / 87MB = 80, and 80 * 0.8 = 64.

That is where you want to set your ServerLimit and MaxRequestWorkers.

MaxConnectionsPerChild

This determines how long those workers will “live” before they die off. Any worker will accept a request to send something out to your site visitor. When it is done it doesn’t go away. Instead it tells Apache “hey, I’m ready for more work”. However, every so often one of the things that is requested breaks. A bad PHP script may be leaking memory, for example. As a safety valve Apache provides the MaxConnectionsPerChild directive. This tells Apache that after a child has served this many objects it should die. Apache will start a new process to replace it. This ensures any memory “cruft” that has built up is cleared out should something go wrong.

Set this number too low and your server spends valuable time killing and creating Apache processes. You don’t want that. Set it too high and you run the risk of “memory cruft” building up and eventually having Apache kill your server with out-of-memory issues. Most system admins try to set this to a value that has each process reset about once every 24 hours. This is hard to calculate unless you know your average objects requested every day, how many processes served those objects, and other factors; HTTP versus HTTP/2 can come into play, not to mention fluctuations like weekend versus weekday load. Most system admins target 1000 requests. For my server load I am guessing 2400 requests is a good value, especially since I’ve left some extra room for memory “cruft”.
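If you want a starting point that is better than a pure guess, the back-of-the-envelope version looks like this. A sketch only: it assumes the stock access log location on Amazon Linux and that the log covers roughly one day of traffic.

# Requests in roughly a day of access log, divided among the running children
REQS=$(wc -l < /var/log/httpd/access_log)
PROCS=$(ps -C httpd --no-headers | wc -l)
echo "$(( REQS / PROCS )) requests per child per day"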


Boosting WordPress Site Performance : Upgrade PHP

As with every single WordCamp I’ve attended there is something new to be learned no matter how much of a veteran you are.   My 5th WordCamp at WordCamp US 2015 was no different.    There are a lot of things I will be adding to my system admin and my development tool belt after the past 48 hours in Philadelphia.

Today’s update that was just employed on the Store Locator Plus website:   Upgrading PHP.

Turns out that many web hosting packages and server images, including the Amazon Linux Image, run VERY OLD versions of PHP.    I knew that.   What I didn’t know was the PERFORMANCE GAINS of upgrading even a minor version of PHP.    PHP 5.6 is about 25% faster than PHP 5.3.    PHP 5.3 was the version I was running on this site until midnight.

WP Performance on PHP. Source: http://talks.php.net/fluent15#/wpbench

The upgrade process? A few dozen command-line commands, testing the site, and restoring the server name configurations from the Apache config files the upgrade process auto-saved for me. EASY.
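For reference, the core of the upgrade on Amazon Linux looked roughly like this. A sketch from memory: the package names are the php56 set Amazon shipped at the time, and your list of PHP modules will differ.

# Remove the old PHP 5.3 packages, install the 5.6 equivalents
sudo yum remove php php-cli php-common php-mysql php-gd
sudo yum install php56 php56-cli php56-mysqlnd php56-gd
# Restart Apache and confirm the new version
sudo service httpd restart
php -v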

What about PHP 7?   That is 2-3x faster.  Not 2%.  100 – 200%.   WOW!    As soon as Amazon releases the install packages for their RHEL derivative OS it will be time to upgrade.


If you are not sure what version your web server is running (it can be different from the command-line version on your server) you can find that info in the Store Locator Plus info tab.

SLP PHP Info

The take-away? If you are not running PHP 5.6, the latest release of PHP prior to PHP 7, get on it. One of the main components of your WordPress stack will run a lot faster and have more bug fixes, security patches, and more.


Windows Azure Virtual Machines, Not Ready For Prime Time

Just last month, Microsoft announced that their Windows Azure Virtual Machines were no longer considered a pre-release service.  In other words, that was the official notification from Microsoft that they feel their Virtual Machines offering is ready for enterprise class deployments.   In fact they even offer uptime guarantees if you employ certain round-robin and/or load balancing deployments that help mitigate the downtime in your cloud environment.

Essentially the Virtual Machines offering on Windows Azure equates to a virtual dedicated server that you would employ from most hosting companies. The only difference with the Windows Azure platform, like most cloud-based offerings, is that you need to serve as your own system admin. This is not web hosting for business owners but for tech geeks. In other words, it works perfectly for guys like me.

Or so I thought.

Different Shades of White

As I learned tonight, there are differences between the various cloud offerings that are not easy to tease out of the hundreds of pages of online documentation touting how awesome a service provider’s cloud services are. Sure, there are the metrics. You can compare instance sizes in terms of disk space, CPU, and bandwidth. You can compare pricing and the relative costs of operating your server on each of the cloud platforms. You can even get the background information on the company providing the virtualized environment, getting some clue (though never a clear picture) of where the servers are physically located, how many servers they have, how secure the environment is, and more.

At the end of the day they all look very similar. Sure there are discrete elements you can point to on each comparison spreadsheet you throw together, but in the end the differences are relatively minor. The pricing is similar. The network and server room build-outs are similar. The support offerings look similar. When all is said and done you end up making a choice based on price, the reputation of the company, the quality of the online documentation, and the overall user interface experience (UX) presented during your research.

After a lot of research, and with quite a bit of experience with Amazon Web Services, all the cloud based offerings were very similar.   Different shades of white.     In the end I decided to try the Microsoft Windows Azure offering.    Microsoft has a good reputation in the tech world, they are not going anywhere, and as a Microsoft Bizspark member I also have preview access and discount services.

My decision to go against the recommendations I’ve been making to my clients for years, “Amazon was one of the first, constantly innovates, and is the leader in the space”, was flawed.    Yes, I tested and evaluated the options for months before making the move.   But it takes an unusual event to truly test the mettle of any service provider.

Breaking A Server

After following the advice of a Microsoft employee, presented in a Windows Azure forum about Linux servers, I managed to reset the Windows Azure Linux Agent (WALinuxAgent) application. No, I did not do this on a whim. I needed to install a GUI application on the server and followed the instructions presented. It turns out that Microsoft has deployed a custom application that allows their Azure management interface to “talk” to the Linux server. That same application DISABLES the basic NetworkManager package on CentOS. To install any kind of GUI application or interface you must disable WALinuxAgent, enable NetworkManager, install, disable NetworkManager, then re-enable WALinuxAgent. The only problem with the instructions that are published in several places is they omit a very important step. While connected with elevated privileges (sudo or su) you must DISABLE the WALinuxAgent (waagent) provisioning so that it does not employ the Windows Azure proprietary security model on top of your installation. If you do not do this and you log out of that elevated-privileges session, you will NEVER have access to an elevated-privileges account again.
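For the curious, the dance described above looks something like this on CentOS 6. A sketch only: the service names are assumptions, and it deliberately omits the critical waagent provisioning step that the published instructions also leave out.

# Swap the Azure agent for NetworkManager long enough to install GUI packages
sudo service waagent stop && sudo chkconfig waagent off
sudo chkconfig NetworkManager on && sudo service NetworkManager start
# ... install the GUI application here ...
sudo service NetworkManager stop && sudo chkconfig NetworkManager off
sudo chkconfig waagent on && sudo service waagent start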

Needless to say, you cannot keep an enterprise level server running in this state.  Eventually you need to install updates and patches for security or other reasons.

As I would learn, there is ZERO support on recovering from this situation.

Support versus support

In the years of working with Amazon Web Services and hosting a number of cloud deployments on their platform, I had become accustomed to being able to reach support personnel who actually TRY to help you out. They often go above-and-beyond what is required by contract and either get you back on track through their own efforts or at least provide you with enough research and information that you can recover from any issue with limited effort. Amazon support services can be pricey, but having access to not just the level-one but also higher-level techs is an invaluable resource.

The bottom line is that Microsoft offers NO support services for their Linux images, even those they provide as “sanctioned images”, beyond making sure the ORIGINAL image is stable and that the virtual machine did not crash.    Not only do they not have any apparent means to elevate support tickets, as it turns out there is NO SUPPORT if you are running a Linux image.

Clearly Microsoft does not put this “front and center” on ANY of their Windows Azure literature.  In fact, just the opposite.  Microsoft has made an extended effort in all their “before the purchase” propaganda to try and make it sound like they EMBRACE Linux.   They go out of their way to make you feel like Linux is a welcome member of their family and that they work closely with multiple vendors to ensure a top-quality experience.

Until you have a problem.   At which point they wash their hands, as is evident in this support response along with a link to the Knowledgebase article saying “Linux.  Not our problem.”:

Hello Lance, I understand your concerns and frustration, but Microsoft does not offer technical support for CentOS or any other Linux OS at this time.

 Please, review guidelines for the Linux support on Windows Azure Virtual Machines: http://support.microsoft.com/kb/2805216

No Azure Support

Other Issues

While the lack of support and the inability to regain privileged user access to my server is the primary concern that has me on the path of choosing a new hosting provider, there have been other issues as well.

A few times in the past several months the WordPress application has put Apache in a tailspin. This consumes the memory on the server. While that is not necessarily an issue with Windows Azure, the fact that the “restart virtual image” process DOES NOT WORK at least 50% of the time IS a big issue. Windows Azure is apparently overly reliant on that dreaded WALinuxAgent on the server. If it does not respond, because memory is over-allocated for example, the server will not reboot. The only thing you can do is press the restart button, wait 15 minutes to see if it happened to get enough memory to catch the restart command, and try again. Ouch.

The Azure interface is also not as nice as I first thought. While better than the original UX at Amazon Web Services, it is overly simplistic in some places and downright confusing in others. Try looking at your bill. Or your subscription status. You end up jumping between seemingly disjointed sites. Forget about online support forums. Somehow you end up in the MSDN network, far removed from your cloud portal. I often find myself with a dozen windows open so I can keep track of where I was or what I need to reference, lest I lose my original navigation path and have to start over. Not to mention the number of times that this site-to-site hand-off fails and your login is suddenly deemed “invalid” mid-session.

Azure Session Amnesia

Moving Servers

So once again, I find myself looking for a new hosting provider. Luckily I recently made the move to Windows Azure and not only have VaultPress available to make it easy to relocate the WordPress site but also Crash Plan Pro to get all the “auxiliary” installation “cruft” moved along with it.

Where will I go?

In my mind there are only two choices for an expandable cloud deployment running Linux boxes. Amazon Web Services or Rackspace. I’ll likely end up with Amazon again, but who knows… maybe it is time to try the legendary support at Rackspace once again. We’ll see. Stay tuned.


Building A Site for Digital Content Sales

Today I received an email from a friend asking if I could help someone he knows in building a website. The request is simple: help build a website that connects to social media and allows registered users to download a paper he has written, keeping track of these registrations as leads.

The immediate answer is easy.

Use WordPress.

Ok, so maybe too simple an answer. WordPress is way beyond a simple blogging platform. It is a complete website and even web application building platform. Take the website, glue on the right theme, add a few plugins, configure. Done.

Far easier than 20 years ago when I built my first web engine for an ecommerce site, writing thousands of lines of Perl code. It just about took a PhD in computer science to build a site like that back then. Today, WordPress… click, click, type some settings, click… write some content… done. But how do you get there?

Step 1: Pick A Host

This has come up TWICE today, so I’ll tell you who I use then tell you who I would and would not go with for most sites.

First, what I use.   Microsoft.  Yup, them.   Running a Linux server.    CentOS 6-something.  In a virtual dedicated server setup.  I know, I know… Microsoft and Linux?   Yeah.  And it didn’t even burst into flames within moments of doing the install.    So how does that work?     Microsoft has a service they call Windows Azure.   Don’t let the name confuse you.    “Azure”, as I like to call it, is basically the Microsoft equivalent of the Amazon Web Services environment.   In other words “cloud computing”.   It is NOT just Windows.

A Slight Diversion : Cloud Servers

What is the cloud?  A fancy name for remote computers and web services.  Really no different than rented servers from any other ISP, but today the term “cloud” tends to refer to any online service that gives you a simple web interface and programming APIs to control the resource.  This includes web hosting and web servers.   Just like the web servers you’d rent from an “Internet Presence Provider” (IPP) 5 years ago.   The only real difference here is they tend to put an emphasis on using virtual machines, just like those you run on a desktop like VMWare or Virtual Box.

That said, there are basically the same options with “cloud computing”, like “the cloud” provided by Amazon and Microsoft, as there are with renting a server.  You can get a website-only plan, a shared hosting plan, and a dedicated hosting plan.   This is sometimes called something different like “virtual private server” and “virtual dedicated server”.

In my opinion, if you are doing cloud computing then you really should be only looking at Virtual Dedicated Servers.  Otherwise just eliminate the confusion of “cloud computing” and go with a standard host.

If your website is going to be HUGE and you are going to get tens-of-thousands of unique visitors (uniques) every day or will have highly variable traffic with peaks of tens-of-thousands of uniques/day, then investigate and learn cloud hosting and dedicated cloud servers.

For the rest of you…

Back To Hosting

Ok, so I use a  Windows Azure virtual dedicated server running Linux.  But I’m a tech geek.  I know system security, system administration, and coding.  I can manage my server without any issues.

However, for a typical hosting company where you may need some assistance and do NOT need your site to carry a super-heavy load, there are other options. Before I make a recommendation, here are some companies I would stay away from for various reasons.

Do NOT use:

  • GoDaddy. Way too many people have problems with GoDaddy hosted sites. I cannot tell you how many broken sites of clients and customers were fixed when they left GoDaddy. I also cannot tell you how incompetent it was for GoDaddy to take down MILLIONS of sites for several DAYS because they cannot configure a network router. Then they refused any form of compensation to anyone. I don’t even host with GoDaddy but my domain name is registered there and they took me offline for days. This is NOT the first time this has happened in the past 12 months. Not to mention most of their support staff is clueless.

  • LiquidWeb. They used to be one of my favorites. As they have grown in size they too have grown in incompetence. They cannot run a shared server properly to save their life. I often found myself training their support staff. They too have crashed my dedicated hardware, my shared server, and those of several customers for days on end. No compensation and no apologies in most of those cases.
  • 1-And-1. I’ve had no personal experience other than through my clients. Mis-configured network routing. Inability to fix blatant DNS issues. Crashed servers. Less performance than advertised. Difficult to get in touch with competent support. I’ve been paid good money to PROVE that 1-and-1 was the source of several major problems for clients, only for 1-and-1 to finally admit the issue was theirs and then take weeks to address the problem.

Ok… so you know who to stay away from.   Who to use?

Well there are 2 companies I don’t have personal experience with but I’ve heard good things about. The first I only know about through casual conversation and what other people have said about them. The other is one that many clients, with deep pockets, have used and swear by. I’m aware of them but have not used them personally. In either case I think you are in good hands.

  • ClickHost. They sponsored WordCamp Atlanta. Already bonus points there. They KNOW WordPress and love it. If you are doing a WordPress site they seem like a perfect fit. Reasonably priced and WordPress knowledgeable. Plus they just seem like cool people.

  • RackSpace. They are the “100% guaranteed up time” people. And from what I hear they NEVER go offline. They also have top-notch support. And you pay for it. Probably the most costly of the hosts out there, but if your site can NEVER go down, they have a reputation for pulling that off. Unless you screw it up yourself. Then they try to help you fix it.

Step 2: Install WordPress

If you use someone like ClickHost, this is a few clicks and a couple of web-form questions away from being online.   Easy.

If you “go on your own” then you download WordPress, set up the MySQL database, and install via web forms. Once you get MySQL set up, the time-consuming part of the “famous 15 minute install”, the WordPress install itself really is just minutes. Very cool.
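If you are doing that MySQL setup by hand, it boils down to a few statements. A minimal sketch: the database name, user, and password here are placeholders you should change.

mysql -u root -p <<'SQL'
CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;
SQL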

Step 3: Themes

The harder part now is selecting a theme.    Themes are the skin of the site.  How it looks. There are tens-of-thousands of them online.  There are dozens within the free themes directory on WordPress.  There are a lot more out there in various online stores.  Some are free, some are paid.

But one thing most people overlook?   Themes are not just a pretty face.   MOST come with built-in functionality and features.  Think of it as a skin plus some cool functional elements added in.  While not all themes add functions or features to the site, many do.  Especially premium ones.

It is often easier to find a theme that does 90% of what you want and then add a few plugins. Finding a theme that LOOKS cool but does JUST that, then adding 20 plugins, is often a more difficult route. If you follow my other threads you’ll know why. Many plugins in the free directory at WordPress are abandoned. Some don’t work well. Others just don’t work. Don’t let me scare you, plenty are GREAT and work perfectly. You just need to “separate the wheat from the chaff” and that can take some time.

My recommendation? Start with WooThemes. I’ve found they have the best quality themes out there and, more importantly, they actually ANSWER SUPPORT QUESTIONS. Many themes, including premium ones, skip the latter point, which can be critical in getting a site online. Who to avoid at all costs? Envato’s Theme Forest. I’m sure they have a few good themes in the hundreds they promote, but the chances of finding those few are just too low. Of the 10 “your plugin is broken” messages I get every month, 9 of them (or 10) are from someone using a Theme Forest theme that is horribly written and just plain breaks everything in its way. Including plugins. DO NOT use Theme Forest stuff.

Ok.  So you’ve got a theme, it does what you want and/or looks cool.       Now what?

Step 4: Plugins

Go find a few plugins that do what you want.  Start in the free WordPress plugins directory but widen your search to the premium plugins.  Unfortunately there are not a lot of good premium plugin sites out there.  However many of the better free plugins on the WordPress directory have premium upgrades.

Again, in the  3rd party market stay away from Envato’s Code Canyon.   While they offer a few good plugins there are far too many bad ones in the mix.    Not to hammer Envato too hard, they have a good idea but they SUCK at quality control.  They are obviously just playing a numbers game and going for volume over quality.

Got It, But For My Site?

Now that you know the components, here is where I would start to build a site like the one described initially.

1) Host with ClickHost.  Small host package is probably fine.

2) Install WordPress 3.5.1 (or whatever the latest version is today).

3) Install WooCommerce as a plugin.  It is in the free directory and you can find it right from the WordPress admin panel by searching “woocommerce” under plugins.

4) Go to WooThemes and find a WooCommerce compatible theme that you like.

5) Go to WooThemes and look at the WooCommerce extensions.  There are several for doing subscriptions and digital content delivery.  They are premium add-ons but relatively inexpensive.

6) Add JetPack to your site.  It is a WordPress plugin from the guys that build WordPress.   It adds a bunch of cool features that you can turn on/off without much effort.  Mostly the social sharing and publishing tools are what we are looking for here.

7) Add VaultPress.  Also from “the WordPress people”. This is your site backup.  You want this.  Trust me, the $15/month is worth it the first time you break your site or it gets hacked.

I also strongly recommend adding Google Authenticator so you have 2-step authentication for your site. It reduces the chances of someone hacking your password from the web interface. This is not critical to functionality, but I do recommend it.

So that is how I would get started.  I’ve not recommended specific themes or WooCommerce extensions because they change frequently and there may be something that better suits your particular needs.

Good luck and happy blogging!


Apache Not Following Symlinks


web.config Inheritance in IIS

ASP.Net

A couple of notes on IIS and how it works for virtual directories/applications and web.config inheritance and ASP.Net.


There is a configuration file that is automatically inherited by an ASP.Net application’s web.config. This configuration file is machine.config and it defines the server’s ASP.Net schema for all of its web applications.

The root web.config file is also a server configuration file. This file resides in the same directory as machine.config and is used to define system-wide configurations.

Then you have the website-specific configuration, also named web.config. From the website’s root directory the web.config seems to work similarly to .htaccess files.

Each directory in an ASP.Net application may have its very own web.config file. Each virtual directory may also have its own web.config. Each virtual application has its own web.config file. Each one of these files inherits its parent’s web.config. This is done so you can define roles in the parent web.config file and have them enforced throughout the website.

Okay, a virtual directory is the Windows way of performing a soft link. It is not reflected in the file system. It is only reflected in IIS. An example:

Website = c:/inetpub/wwwroot/mysite/

Files = c:/users/public/documents/

In IIS you can set a virtual directory by stating c:/inetpub/wwwroot/mysite/sharefiles/ that points to c:/users/public/documents/

You can actually add a virtual folder from another server on your network.

This is not reflected in the file system. If a c:/inetpub/wwwroot/mysite/sharefiles/ directory were actually added, IIS would ignore it and point to the virtual directory. I discovered this when installing reporting for MS SQL, which by default adds a ~/report virtual application. One of my applications already had a ~/report directory and the virtual application took precedence. Applications work essentially the same as folders, except that a virtual application operates in its own application pool.
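If you would rather script the example above than click through IIS Manager, appcmd can create the virtual directory. A sketch: “Default Web Site” is a placeholder for your actual site name.

%systemroot%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:"/sharefiles" /physicalPath:"c:\users\public\documents"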

If you want to stop inheritance you can add the following to the site’s web.config, wrapping the sections you do not want passed down:

<location path="." inheritInChildApplications="false">
    <system.web>
        ...
    </system.web>
</location>
If you want to not inherit certain sections of the configuration, you add a <clear /> tag to the child section. For example, in a child web.config (appSettings here is just an illustration; this works for collection-style sections):

<appSettings>
    <clear />
    ...
</appSettings>

Using Subdomains With Localhost

I do a lot of development work locally, running apache2, mysql, postgres, and any number of other things on my personal computer so that I can do my work. This offers me a lot of benefits: it’s faster, it doesn’t rely on an Internet connection, and it allows me to have complete control over my environment. There are some drawbacks to this though. Generally, you end up with many different projects and with each one comes a new directory, so after a while you have dozens of sites that look like http://localhost/somesitehere/.

This by itself can cause some issues. First of all, none of your files are running directly off of the document root, which often causes problems with badly written software. Secondly, it confuses the hell out of Firefox’s password manager because it’s host based. It also looks kind of ugly having to put in all those different directory names. So wouldn’t it be nice if you could just write http://somesitehere.localhost/ instead?

Well, you can, and it’s not even difficult.

First of all, you’re going to need to make some changes to your apache/httpd/whatever config. You need to explicitly set NameVirtualHost to your loopback address (which is almost guaranteed to be 127.0.0.1). You will also need to then set each VirtualHost listing to this address as well:

NameVirtualHost 127.0.0.1
<VirtualHost 127.0.0.1>
...
</VirtualHost>

Secondly, and unsurprisingly, you need to actually specify the subdomains you’re going to be using. If you’ve ever done this by hand before, you’ll know that this is also done with the VirtualHost tag:

<VirtualHost 127.0.0.1>
     ServerName subdomain.localhost
     DocumentRoot /place/where/files/be/at/
</VirtualHost>

At this point, Apache or whatever webserver you’re using is configured to handle the subdomains. However, your computer itself is not. Sure, it knows that “localhost” maps to 127.0.0.1 but it doesn’t know where ‘subdomain.localhost’ is. You can fix this by editing your hosts file. This can be done via various graphical interfaces on some systems, or it can be found at “/etc/hosts” on most systems. Once you’re in there all you have to do is add:

127.0.0.1          subdomain.localhost

If you’re paying attention, you’ll notice that a very similar line already exists in that file for “localhost”. In fact, you can map whatever you want in this file. Just remember that you’ll need to make an additional entry in both your webserver conf file and your hosts file for each subdomain that you want to use.
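Once both files are updated, a quick check confirms everything is wired up. A sketch; command names vary slightly by platform.

# Pick up the new VirtualHost entries
sudo apachectl graceful
# The hosts entry should answer from the loopback address
ping -c 1 subdomain.localhost
# And Apache should serve the subdomain's DocumentRoot
curl -I http://subdomain.localhost/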


IP Based Firewall with cPanel

CPanel/WHM Based Systems

If you are using a web server from a web hosting company, chances are CPanel/WHM is the system admin interface you use to manage your server.

The current revision of CPanel/WHM (as of Mar 5th, 2008) appears to rely on the host access file as a method of preventing access to the system. Access to iptables or ipchains rules is not readily apparent, however it is possible that we have overlooked these options.

Blocking An IP Range

The steps below will help you research who is connecting to your box and how to block them from gaining access to your system through software based IP blocking.

Real World Example

This implementation is based on our experiences after turning on the Logwatch utility on our web server. The logwatch report for PAM shows sshd authentication failures. From our most recent report:

--------------------- pam_unix Begin ------------------------
sshd:
  Authentication Failures:
     unknown (210.205.231.78): 45 Time(s)
     root (210.205.231.78): 10 Time(s)
     unknown (202.118.6.126): 9 Time(s)
     ftp (202.118.6.126): 4 Time(s)
     mail (202.118.6.126): 4 Time(s)
     root (c-68-58-191-51.hsd1.sc.comcast.net): 2 Time(s)
     apache (210.205.231.78): 1 Time(s)
     ftp (210.205.231.78): 1 Time(s)
     mysql (210.205.231.78): 1 Time(s)
     named (210.205.231.78): 1 Time(s)
     postgres (210.205.231.78): 1 Time(s)
  Invalid Users:
     Unknown Account: 54 Time(s)
---------------------- pam_unix End -------------------------

The first entry concerns us since there were 45 attempts to access our system that failed. We check the IP range by doing a whois lookup (we use DNS Stuff to do our homework) to determine whether or not a general IP block makes sense. We then use CPanel/WHM utilities to shut down access from the offending IP.

Note: This procedure can prevent ANYONE from accessing your server, including yourself, if not done correctly. If you are not confident in your abilities do not even attempt this. Or as the boys like to say “Don’t attempt anything we’re about to do at home. EVER!”

WHM Host Access Control


  1. Run a DNSStuff whois lookup on the IP address in question.
  2. Connect to our CPanel/WHM service via the web connection that our hosting company gave us (http://host.<domain>.com:2086).
  3. Click on the security icon
  4. Click on security center
  5. Click on host access control
  6. In the four entry boxes that are presented, type:
    • daemon : ALL (do not let them connect to ANYTHING on this box, even the web ports)
    • access list: 210.205.231.78/255.255.255.0 (block anyone connecting from 210.205.231.*)
      • Based on our whois lookup we know that all ip addresses under the 210.205.231.* range are from a specific ISP in Korea. While all the users under that range may not be bad guys, we know from experience that the hackers may get a different IP next week as they tend to be assigned their IP address dynamically. We prefer to block a few of the good guys to shut down the one nuisance user. Your beliefs in the goodness of humanity may dictate a different strategy.
    • action: deny (versus allow which would always let them in regardless of other rules)
    • comment: Korea (you can enter whatever you’d like)
  7. Click Save Host Access List on the bottom of the screen

Go back into security center and click Host Access List. Verify your latest entry appears and that the data is correct. If it is entered incorrectly you may block legitimate users from accessing your system.
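For reference, the entry you just saved boils down to a TCP wrappers rule along these lines. A sketch of what the host access file likely contains; WHM manages these files for you, so verify through the UI rather than editing them by hand.

# Deny every daemon to the 210.205.231.* range from the example
ALL : 210.205.231.0/255.255.255.0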

Turning On Logwatch

Logwatch notifications may not be enabled on your CPanel/WHM system. Logwatch tends to be running in the background but the notifications go to Never-Never Land by default. You will need to look in system notifications and enter an email address to actually see your messages.

Concepts

Software Based IP Blocking

Software based IP blocking is a method for preventing access to your system by using a program running on the target computer (the computer people are trying to hack) that intercepts the connection by hooking into the TCP/IP process flow.

Software based IP blocking will consume CPU resources and memory on the target box. It can also be susceptible to hacking, although this is unusual, because it is nothing more than another program that runs on the server. For these reasons, many people consider a separate hardware firewall appliance as the better solution.

However, many web hosting services do not offer external firewall appliances. Those that do may charge more than you are willing to spend on security. In these cases you can still protect yourself via a software based IP blocking program. The most common options on Linux boxes are to use a software based firewall (ipchains or iptables) or preventing connections via host access directives.
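For comparison, blocking the example range with iptables is a single rule. A sketch: it requires root shell access, and the rule does not persist across reboots unless saved with your distro’s tooling.

# Drop all packets from the offending range
iptables -A INPUT -s 210.205.231.0/24 -j DROP
# Persist on RHEL/CentOS-style systems
service iptables save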

Implementation of these concepts is discussed elsewhere on this page.