
HTTPS On Amazon Linux With LetsEncrypt


In order to provide faster and more secure connections to the Store Locator Web service we have added https support through Sucuri. Adding https also lets us take advantage of SPDY and HTTP/2, the latest improvements in web connection technology. There are many reasons to get your servers onto full https support. As we learned, it isn’t a one-click operation, but without too much additional effort you can get your servers running on Amazon Linux with a secured connection. Here are the cheat-sheet notes based on our experience.

EC2 Server Rules

With EC2 you will want to make sure your security group rules allow incoming connections on port 443. By default no inbound ports are open; you likely already added port 80 for web traffic. Make sure you go back and add port 443 as an open inbound rule as well.
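If you prefer the AWS CLI to the web console, a hedged one-liner along these lines does the same thing (sg-xxxxxxxx is a placeholder for your own security group ID):

# aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 443 --cidr 0.0.0.0/0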

Apache SSL Support

Next you need to configure the Apache web server to handle SSL connections. The easiest way to get started is to install the mod_ssl package, which creates the necessary ssl.conf file at /etc/httpd/conf.d/ssl.conf and turns on the port 443 listener.


# sudo service httpd stop
# sudo yum update -y
# sudo yum install -y mod24_ssl

Get Your Let’s Encrypt Certificate

This is more of a challenge if you don’t know where to start. Part of the issue is that Amazon Linux runs Python 2.6 while Let’s Encrypt likes Python 2.7. Luckily there has been progress on getting this working, so you can cheat a bit.

# git clone https://github.com/letsencrypt/letsencrypt
# cd letsencrypt
# git checkout amazonlinux
# sudo ./letsencrypt-auto --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory certonly -d yourdomain.name -d www.yourdomain.name -v --debug

You may get some warnings and other messages, but eventually you will get an ANSI-mode dialogue screen (welcome to 1985) that walks you through accepting the terms and the certification. Answer the questions and accept your way to a new cert.

Your certs will be placed in /etc/letsencrypt/live/yourdomain.name/; remember this path as you will need it later.

Update ssl.conf

Go to the /etc/httpd/conf.d directory and edit the ssl.conf file.

Look for these three directives and change them to point at the cert.pem, privkey.pem, and chain.pem files:

SSLCertificateFile
SSLCertificateKeyFile
SSLCertificateChainFile
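For example, assuming the domain used in the certonly command above, the updated directives would look something like this:

SSLCertificateFile /etc/letsencrypt/live/yourdomain.name/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.name/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.name/chain.pem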

Restart Apache & Get Secure

Now restart Apache and check by surfing to https://yourdomain.name/

# service httpd start

You may need to update various settings on your web apps, especially if you use .htaccess to rewrite URLs with http or https.


Sorting A Comma-Separated List On Linux

Here is a quick shortcut I used to combine a series of comma-separated values into a single list of unique entries. In my case I was trying to get a unique list of tags that came from several different lists of tags. If list A had “apples, oranges, bananas” and list B had “apples,grapes,watermelons” I wanted to get “apples,bananas,grapes,oranges,watermelons” back.

Here is the shortcut I used:

Paste each comma-separated list into a file named “x”; separate lines are OK.

Run this Linux command on the file to create a file named “y” that has the sorted unique list of tags:

# tr ',' '\n' < x | sort -u | tr '\n' ',' > y
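One caveat with this sketch: if a list has spaces after the commas, as list A does above, “oranges” and “ oranges” will sort as different entries. Assuming your tags are single words, you can strip the spaces first:

# tr ',' '\n' < x | tr -d ' ' | sort -u | tr '\n' ',' > y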

This is a quick and efficient way to sort comma-separated lists on Linux, and since tr and sort are standard Unix tools it works on OS X as well.


Windows Azure Virtual Machines, Not Ready For Prime Time

Just last month, Microsoft announced that their Windows Azure Virtual Machines were no longer considered a pre-release service. In other words, that was the official notification from Microsoft that they feel their Virtual Machines offering is ready for enterprise-class deployments. In fact, they even offer uptime guarantees if you employ certain round-robin and/or load-balancing deployments that help mitigate downtime in your cloud environment.

Essentially the Virtual Machines offering on Windows Azure equates to a virtual dedicated server like you would get from most hosting companies. The only difference with the Windows Azure platform, like most cloud-based offerings, is that you need to serve as your own system admin. This is not web hosting for business owners but for tech geeks. In other words, it works perfectly for guys like me.

Or so I thought.

Different Shades of White

As I learned tonight, there are differences between the various cloud offerings that are not easy to tease out of the hundreds of pages of online documentation touting how awesome a service provider’s cloud services are. Sure, there are the metrics. You can compare instance sizes in terms of disk space, CPU, and bandwidth. You can compare pricing and the relative costs of operating your server on each of the cloud platforms. You can even get the background information on the company providing the virtualized environment, getting some clue (though never a clear picture) of where the servers are physically located, how many servers they have, how secure the environment is, and more.

At the end of the day they all look very similar. Sure, there are discrete elements you can point to on each comparison spreadsheet you throw together, but in the end the differences are relatively minor. The pricing is similar. The network and server room build-outs are similar. The support offerings look similar. When all is said and done you end up making a choice based on price, the reputation of the company, the quality of the online documentation, and the overall user experience (UX) presented during your research.

After a lot of research, and with quite a bit of experience with Amazon Web Services, all the cloud-based offerings looked very similar. Different shades of white. In the end I decided to try the Microsoft Windows Azure offering. Microsoft has a good reputation in the tech world, they are not going anywhere, and as a Microsoft BizSpark member I also have preview access and discounted services.

My decision to go against the recommendations I’ve been making to my clients for years, “Amazon was one of the first, constantly innovates, and is the leader in the space”, was flawed.    Yes, I tested and evaluated the options for months before making the move.   But it takes an unusual event to truly test the mettle of any service provider.

Breaking A Server

After following the advice of a Microsoft employee, presented in a Windows Azure forum about Linux servers, I managed to reset the Windows Azure Linux Agent (WALinuxAgent) application. No, I did not do this on a whim. I needed to install a GUI application on the server and followed the instructions presented. It turns out that Microsoft has deployed a custom application that allows their Azure management interface to “talk” to the Linux server. That same application DISABLES the basic NetworkManager package on CentOS. To install any kind of GUI application or interface you must disable WALinuxAgent, enable NetworkManager, install, disable NetworkManager, then re-enable WALinuxAgent.

The only problem with the instructions that are published in several places is that they omit a very important step. While connected with elevated privileges (sudo or su) you must DISABLE the WALinuxAgent (waagent) provisioning so that it does not employ the Windows Azure proprietary security model on top of your installation. If you do not do this and you log out of that elevated-privileges session, you will NEVER have access to an elevated-privileges account again.

Needless to say, you cannot keep an enterprise level server running in this state.  Eventually you need to install updates and patches for security or other reasons.

As I would learn, there is ZERO support on recovering from this situation.

Support versus support

In the years of working with Amazon Web Services and hosting a number of cloud deployments on their platform, I had become accustomed to being able to reach support personnel that actually TRY to help you out. They often go above and beyond what is required by contract, and either get you back on track through their own efforts or at least provide you with enough research and information that you can recover from any issue with limited effort. Amazon support services can be pricey, but having access to not just the level-one but also the higher-level techs is an invaluable resource.

The bottom line is that Microsoft offers NO support services for their Linux images, even those they provide as “sanctioned images”, beyond making sure the ORIGINAL image is stable and that the virtual machine did not crash. Not only is there no apparent means to escalate support tickets, it turns out there is NO SUPPORT at all if you are running a Linux image.

Clearly Microsoft does not put this “front and center” in ANY of their Windows Azure literature. In fact, just the opposite. Microsoft has made an extended effort in all of their “before the purchase” propaganda to make it sound like they EMBRACE Linux. They go out of their way to make you feel like Linux is a welcome member of their family and that they work closely with multiple vendors to ensure a top-quality experience.

Until you have a problem. At which point they wash their hands of you, as is evident in this support response, along with a link to the knowledge base article, saying “Linux. Not our problem.”:

Hello Lance, I understand your concerns and frustration, but Microsoft does not offer technical support for CentOS or any other Linux OS at this time.

 Please, review guidelines for the Linux support on Windows Azure Virtual Machines: http://support.microsoft.com/kb/2805216

No Azure Support

Other Issues

While the lack of support and the inability to regain privileged user access to my server are the primary concerns that have me on the path of choosing a new hosting provider, there have been other issues as well.

A few times in the past several months the WordPress application has put Apache into a tailspin, consuming all the memory on the server. While that is not necessarily an issue with Windows Azure, the fact that the “restart virtual image” process DOES NOT WORK at least 50% of the time IS a big issue. Windows Azure is apparently overly reliant on that dreaded WALinuxAgent on the server. If it does not respond, because memory is over-allocated for example, the server will not reboot. The only thing you can do is press the restart button, wait 15 minutes to see if it happened to get enough memory to catch the restart command, and try again. Ouch.

The Azure interface is also not as nice as I first thought. While better than the original UX at Amazon Web Services, it is overly simplistic in some places and downright confusing in others. Try looking at your bill. Or your subscription status. You end up jumping between seemingly disjointed sites. Forget about online support forums; somehow you end up in the MSDN network, far removed from your cloud portal. I often find myself with a dozen windows open so I can keep track of where I was or what I need to reference, lest I lose my original navigation path and have to start over. Not to mention the number of times that this site-to-site hand-off fails and your login is suddenly deemed “invalid” mid-session.

Azure Session Amnesia

Moving Servers

So once again, I find myself looking for a new hosting provider. Luckily I only recently made the move to Windows Azure, and I have not only VaultPress available to make it easy to relocate the WordPress site but also CrashPlan Pro to move all the “auxiliary” installation “cruft” along with it.

Where will I go?

In my mind there are only two choices for an expandable cloud deployment running Linux boxes. Amazon Web Services or Rackspace. I’ll likely end up with Amazon again, but who knows… maybe it is time to try the legendary support at Rackspace once again. We’ll see. Stay tuned.


Linux : Find All Files Older Than…

I recently needed to clean up a directory on my Linux box that included hundreds of files. I wanted to get rid of all the files that hadn’t been updated in over a year. At first I decided just to list the files by date:

ls -lt

This lists the files in long format sorted by time (newest first). It shows me all the details, with the oldest files scrolling to the bottom of the window so the last few files above my command prompt are the oldest.

There are hundreds of files more than a year old.

Employing Find

Find is one of the tools I keep in my Linux tool belt. I don’t need it often, but when I do it saves me quite a bit of time. Find is the Swiss Army Knife of Linux search tools. It is complete, thorough, and comes with just about every “doo-dad” (a technical term) for finding files. It does real-time filesystem searches, so unlike locate it does not rely on a secondary database which may become outdated and give incomplete results.

The downside of find is that there are so many options. It is easy to choose the wrong option or, more likely, to string the options together in such a way that the search takes forever and returns no results.

The upside, thanks to how the command shells work, is that you can use the output of find to drive other applications, like ls or rm. The latter two are how we’ll employ find.

Find Files Not Touched In A Year

First we can find all the files in our current directory that are ‘stale’ like this:

find ./ -ctime +365

In English: “find stuff in this directory (./) where the change time (ctime) is at least 365 days ago”.

The sister option is mtime, which is “modification time” and tracks when a file’s contents last changed. Despite the name, ctime is not creation time; it updates whenever the contents or the metadata (permissions, ownership) change. Pick whichever matches what you mean by “stale”.

Now we can combine this with ls to list the results. It may seem redundant, but I like to test the parameter passing of find to another shell command using something innocuous such as ls. So we test like this:

ls -l `find ./ -ctime +365`

The back-ticks take the output of find, which is a simple relative-path list of the files it located, and pass it as the file arguments to ls.

If all looks good we can now force a remove of those files. Be careful with rm -f; you can do irreparable harm with this. There are other options, and if you are not comfortable with power tools that can take a limb off with one keystroke, then drop the -f or use one of the myriad Linux admin tools to help you out. I’ll roll the dice and hope all my limbs remain intact:

rm -f `find ./ -ctime +365`
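A safer variation, assuming GNU find: the backtick approach word-splits on whitespace, so filenames containing spaces will break it. Letting find do the removal itself avoids that:

find ./ -ctime +365 -exec rm -f {} \;

Recent versions of GNU find also accept -delete in place of the -exec clause.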

Other Find Options

There are a lot of ways to find files by other attributes, such as “all files larger than a given size” or “all files older than <this file>”. This is a good resource that explains some of the options and how to perform different types of find operations:

http://www.linuxquestions.org/linux/answers/Applications_GUI_Multimedia/Find_command_0

Good luck & keep your limbs on!


Bash Command Lookup (\!)

I’ve recently found something relatively interesting that you can do in a bash terminal. I recently sent out an email talking about how to get git completion’s wonderful self working on Macs.

Part of that endeavor meant diving into the way the terminal displays its information on your prompt. Some of the things I found were escape codes like \h to stand for the host, \W for the working directory w/o the path, etc.

So I set out to find out what some more of those escape characters were, and I found: \!

I’ve learned from Paul that typing !! will repeat the last command that you put in. This \! escape will display the sequential history number of each command on your prompt. So now that I’ve added it to my PS1, as before from the git completion tutorial, my prompt displays:

(527)iMac:~ chase$ _
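For reference, a minimal PS1 that produces a prompt like this (a simplified sketch; the version from the git completion tutorial adds more):

export PS1='(\!)\h:\W \u\$ '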

And when I put in a command, let’s say I emptily type grep<enter>:

(527)iMac:~ chase$ grep<enter>
Usage: grep [OPTION]... PATTERN [FILE]...
(528)iMac:~ chase$ _

Let’s pretend that was some crucially complex command (you know the kind… the one whose syntax escapes you later when you really need it) instead of an empty grep, and let’s say that through the course of working I’ve since entered dozens or hundreds of other commands into the prompt. I have a few options available:

  • hit the up arrow repeatedly until I find the command (which it doesn’t list with the number next to it)
  • use the <ctrl>+R command and type in parts of the command I remember
  • grep the history
  • lots of things

or, if I’ve remembered that 527 was the line for that crucial command, I can simply type:

(8901)iMac:~ chase$ !527<enter>

And it will repeat the command from that line. The only downside is that if you come to rely on it for remembering several different sets of complex commands… you’ll have to end up remembering several different sets of numbers that correspond to those lines. Also, this function doesn’t give you any kind of “Are you sure?” moment to let you know what you’re about to do… so one transposed number or dropped digit could potentially mean catastrophe if you’ve ever run some iffy commands (rm -Rf).
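One hedge worth knowing: bash’s :p history modifier prints the command instead of executing it, so you can eyeball line 527 before committing:

(8901)iMac:~ chase$ !527:p

The command is echoed and appended to your history but not run; hit the up arrow and enter to actually execute it.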

About This Article…

I pilfered this from “The List”, thanks Chase…
– Lobby


Linux mdadm tips & tricks

RAID arrays are an important part of any mission-critical enterprise architecture. When we talk RAID here we are talking mirrored RAID, or mirrored and striped RAID, not simple striping, which gives you a larger drive from several smaller drives. While striping alone may be great for some home or desktop applications, for an enterprise application it simply doubles your chances of a failed system.

We often spec out RAID 1 or higher mirrored systems, with RAID 1+0 (mirrored and striped) being the most common, so that you increase access performance AND keep the system up if a single drive fails (on a 3-drive RAID 1+0 configuration). Along the way we’ve learned some tips & tricks that may help you out. To start with we’ll post some info on Linux RAID and eventually expand this article to include Windows information.

Fake v. Real RAID

One thing we’ve learned recently is that the flood of new low-cost servers has brought a flood of servers with on-board RAID controllers. Unfortunately these new controllers use a low-cost solution that basically pretends to be a RAID controller by modifying the BIOS software. In essence they are software RAID controllers posing as hardware RAID controllers. This means you get all of the BAD features of both systems.

One easy way to tell if you have a server with “fake RAID” is to configure the drives in RAID mode from the BIOS, then boot and install Linux. If the Linux installer sees two drives instead of a single drive then the “on-board RAID” is a poser. Skip it. Configure the BIOS in standard drive mode and use software RAID.

Most current Linux distros have RAID setup and configuration built into the setup and installation process.   We’ll leave the details to other web articles.

MDADM – Linux RAID Utility

mdadm is the Linux utility used to manage and monitor RAID arrays. After configuration, a pair of drives (typically denoted sda1, sdb1, etc.) shows up in your standard Linux commands as md0. They are “paired up” to make up the single RAID drive that most of your applications care about.

Status Report

mdadm is how you look “inside” the single RAID array and see what is going on.   Here is an example of a simple “show me the status” command on the RAID array.  In this case we have a failed secondary drive in a 2-disk RAID1 array:

[root@dev:log]# mdadm --detail /dev/md0
/dev/md0:
 Version : 00.90.03
 Creation Time : Thu Jan  8 12:20:13 2009
 Raid Level : raid1
 Array Size : 104320 (101.89 MiB 106.82 MB)
 Used Dev Size : 104320 (101.89 MiB 106.82 MB)
 Raid Devices : 2
 Total Devices : 1
Preferred Minor : 0
 Persistence : Superblock is persistent
 Update Time : Wed Jul 28 07:27:08 2010
 State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
 Spare Devices : 0
 UUID : a6ef9671:2a98f9e9:d1146f90:29b5d7da
 Events : 0.826
 Number   Major   Minor   RaidDevice State
 0       8        1        0      active sync   /dev/sda1
 1       0        0        1      removed

[root@dev:~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb[1] sda1[0]
 104320 blocks [2/2] [UU]

md1 : active raid1 sda2[0]
 1020032 blocks [2/1] [U_]

md2 : active raid1 sda5[0]
 482431808 blocks [2/1] [U_]

unused devices: <none>


Rebuild An Array

Shut down the system with the failed drive, unless you have a hot-swap drive setup. Pull the bad drive, swap in the replacement, partition it if necessary (see the fdisk notes below), and tell mdadm to rebuild the array.

[root@dev:~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
 104320 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
 1020032 blocks [2/1] [U_]

md2 : active raid1 sda5[0]
 482431808 blocks [2/1] [U_]

unused devices: <none>
[root@dev:~]# mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1
[root@dev:~]# mdadm --add /dev/md1 /dev/sdb2
mdadm: added /dev/sdb2
[root@dev:~]# mdadm --add /dev/md2 /dev/sdb5
mdadm: added /dev/sdb5



These commands add the partitions of the replacement drive, /dev/sdb in our case for our second SATA drive, back into their respective arrays, starting with the first RAID array named md0.

Remove A Drive

To remove a drive it must be marked faulty, then removed.

[root@dev:~]# mdadm --fail /dev/md0 /dev/sdb
[root@dev:~]# mdadm --remove /dev/md0 /dev/sdb

We had to do this on our drive because we forgot to partition it into boot and data partitions (/, /boot, and /dev/shm), thus the /dev/sdb instead of /dev/sdb1, etc., as is the norm for a partitioned drive.

Checking Rebuild Progress

[root@dev:~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
 104320 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
 1020032 blocks [2/2] [UU]
md2 : active raid1 sdb5[2] sda5[0]
 482431808 blocks [2/1] [U_]
 [>....................]  recovery =  0.8% (4050176/482431808) finish=114.5min speed=69592K/sec
unused devices: <none>
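If you would rather watch the rebuild live than re-run cat, the stock watch utility works nicely:

[root@dev:~]# watch -n 5 cat /proc/mdstat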

FDISK – Drive Partitioning

To properly re-add a drive to an array you will need to set the partitions correctly. You do this with fdisk. First, look at the partitions on the valid drive, then copy that layout to the new drive that is replacing the failed one.

[root@dev:~]# fdisk /dev/sda

The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
 (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

 Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14         140     1020127+  fd  Linux raid autodetect
/dev/sda3             141         741     4827532+  8e  Linux LVM
/dev/sda4             742       60801   482431950    5  Extended
/dev/sda5             742       60801   482431918+  fd  Linux raid autodetect


[root@dev:~]# fdisk /dev/sdb

Use "n" to create the new partitions, and "t" to set the type to match above.
That should get you started. Google and the Linux man pages are your friends. As we have time we’ll publish more Linux RAID tricks here.


Using Find To Help Manage Files On Linux

We found a system administration problem on a server today that was being caused by incorrect directory permissions. Email passing through the server-wide spam filter was not being delivered because of permissions on the /home/<domaindir-here>/etc directory. That directory needs to belong to the mail group.

Here is a quick way to update those directories:

 [root@host:home]# cd /home

The find command below only lists directories (much, much faster if you know you only need a certain file type like ‘d’), descends at most 2 levels deep (. = current directory = level 1), and matches the name “etc”:

 [root@host:home]# chgrp mail `find /home -maxdepth 2 -type d -name etc`
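The same change written with find’s own -exec, a sketch that avoids the backtick word-splitting should any path contain spaces:

 [root@host:home]# find /home -maxdepth 2 -type d -name etc -exec chgrp mail {} \;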

Now we pass the output of find as the argument list to the ls command to see what we touched. The ‘d’ on ls restricts it to directory-level output only, so we don’t descend into those directories and list their contents.

 [root@host:home]# ls -ld `find /home -maxdepth 2 -type d -name etc`
drwxr-x---  3 aaron    mail 4096 Feb 10  2008 /home/aaron/etc
drwxr-x---  2 abundatr mail 4096 Oct 20  2009 /home/abundatr/etc
drwxr-x---  3 alutask  mail 4096 Feb 10  2008 /home/alutask/etc
drwxr-x---  3 banks    mail 4096 Feb 21  2008 /home/banks/etc
drwxr-x---  4 chasvol  mail 4096 Feb 10  2008 /home/chasvol/etc
drwxr-xr-x  3 cyberspr mail 4096 May  7 11:24 /home/cyberspr/etc
drwxr-x---  2 daedalus mail 4096 Mar 27  2008 /home/daedalus/etc
drwxr-x---  7 dolphin  mail 4096 Jul 30  2008 /home/dolphin/etc
drwxr-x---  3 dutchbul mail 4096 Feb 10  2008 /home/dutchbul/etc
drwxr-xr-x  2 eatchas  mail 4096 May 10 21:59 /home/eatchas/etc
drwxr-xr-x  2 fireant  mail 4096 May 25 21:16 /home/fireant/etc
drwxr-xr-x  4 jrsint   mail 4096 Jan 11  2008 /home/jrsint/etc
drwxr-x---  3 lance    mail 4096 Jul  9  2007 /home/lance/etc
drwxr-xr-x  2 memoryve mail 4096 Feb 16 10:29 /home/memoryve/etc
drwxr-x---  2 michaelc mail 4096 May 13  2008 /home/michaelc/etc
drwxr-x---  3 modelloc mail 4096 Dec 18 19:22 /home/modelloc/etc
drwxr-x---  3 monstrss mail 4096 Feb 10  2008 /home/monstrss/etc
drwxr-x---  3 nicolas  mail 4096 Feb 10  2008 /home/nicolas/etc
drwxr-x---  3 outdoor  mail 4096 Aug 26  2008 /home/outdoor/etc
drwxr-xr-x  2 perks    mail 4096 Jun  6 15:17 /home/perks/etc
drwxr-x---  2 pout     mail 4096 Jun 15 12:08 /home/pout/etc
drwxr-x---  3 ravenel  mail 4096 Aug 12  2007 /home/ravenel/etc
drwxr-x---  4 remodel  mail 4096 Feb 10  2008 /home/remodel/etc
drwxr-x---  2 saveag   mail 4096 Oct  9  2008 /home/saveag/etc
drwxr-xr-x  2 shoppout mail 4096 Jun 15 16:46 /home/shoppout/etc
drwxr-x---  3 southern mail 4096 Feb 10  2008 /home/southern/etc
drwxr-x---  2 tbcustom mail 4096 Jun 20  2008 /home/tbcustom/etc
drwxr-x---  3 thebicyc mail 4096 Jun 16  2008 /home/thebicyc/etc
drwxr-xr-x  3 theenerg mail 4096 Feb  9  2008 /home/theenerg/etc
drwxr-x---  2 unclelue mail 4096 Dec 14  2009 /home/unclelue/etc
drwxr-x---  2 vanjean  mail 4096 Feb 16  2009 /home/vanjean/etc
drwxr-x---  3 wwwbrea  mail 4096 Dec 18 01:22 /home/wwwbrea/etc

This same technique can be used with any number of commands when you need to work on directories. Just be careful with it; this can wreak as much havoc as it can repair damage done by other command-line tools wielded without care.

This Red Ryder BB gun is loaded. Be careful out there! “You’ll shoot your eye out, kid”…


Upgrading Logwatch on CentOS 5

Introduction

I finally got tired of looking at the thousand-plus-line daily reports coming to my inbox from Logwatch every evening. Don’t get me wrong, I love Logwatch. It helps me keep an eye on my servers without having to scrutinize every log file. If you aren’t using Logwatch on your Linux boxes I strongly suggest you look into it and turn on this very valuable service. Most Linux distros come with it pre-installed.

The problem is that the version of Logwatch that comes with CentOS was last updated in 2006. The Logwatch project itself, however, was updated just a few months ago. As of this writing the version running on CentOS 5 is 7.3 (released 03/24/06) and the version on the Logwatch SourceForge site is 7.3.6 (updated March 2010). In this latest version there are a lot of nice updates to the scripts that monitor your log files for you.

The one I’m after, consolidated brute-force hacking attempt reports, is a BIG thing. We see thousands of entries in our daily log files from hackers in China trying to get into our servers. This is typical of most servers these days, though in many cases ignorance is bliss: many site owners and IPPs don’t have logging turned on because they get sick of all the reports of hacking attempts. Luckily we block these attempts on our server, but our Fireant labs project is configured to have iptables tell us whenever an attempt is blocked at the kernel level (we like to monitor what our labs scripts are doing while they are still in alpha testing). This creates THOUSANDS of lines of output in our daily email. Logwatch 7.3.6 helps mitigate this.

Logwatch 7.3.6 has a lot of new reports that default to “summary mode”: you see a single line entry for each notable event, v. a line for each time the event occurred. For instance we see a report more like this for IMAPd:

 [IMAPd] Logout stats:
 ====================
 User                  | Logouts | Downloaded | Mbox Size
 --------------------- | ------- | ---------- | ---------
 cpanel@localhost      |     287 |          0 |         0
 xyz@cybersprocket.com |       4 |          0 |         0
 --------------------------------------------------------
 Total                 |     291 |          0 |         0

Versus the older output like this:

--------------------- IMAP Begin ------------------------
 **Unmatched Entries**
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32811], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32826], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32981], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32988], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33040], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33245], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33294], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33310], protocol=IMAP: 1 Time(s)
 repeat 280 more times...
 

So as you can imagine, with 10 sections in our Logwatch report, the new summary format makes our email a LOT easier to scan for potential problems in our log files.

Upgrading Logwatch

In order to get these cool new features you need to spend 10 minutes, 5 if you’re good with command-line Linux, and install the latest version of Logwatch. In essence you are downloading a gzipped tarball full of new shell and Perl script files. The install does not compile anything; it simply copies the script files to the proper directories on your server.

Our examples here are all based on the default CentOS 5 paths.

  • Go to a temp install or source directory on your server.
    # cd /usr/local/src
  • Get the source for logwatch
    # wget http://downloads.sourceforge.net/project/logwatch/logwatch-7.3.6.tar.gz?use_mirror=iweb
  • Extract the files
    # tar xvfz logwatch-7.3.6.tar.gz
  • Make the install script executable
    # cd logwatch-7.3.6
    # chmod a+x install_logwatch.sh
  • Run the script & enter the correct paths for logwatch:
    # ./install_logwatch.sh
    ...Logwatch Basedir [/usr/share/logwatch]  : /etc/log.d
    ...Logwatch ConfigDir [/etc/logwatch] : /etc/log.d
    ...temp files [/var/cache/logwatch] : <enter>
    ...perl [/usr/bin/perl] : <enter>
    ...manpage [/usr/share/man] : <enter>

Conclusion

That’s it.  You should now be on the latest version of logwatch.
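A quick way to confirm the upgrade took, rather than waiting for tomorrow’s email, is a one-off report to stdout using logwatch’s standard options:

# logwatch --print --range today --detail high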

You can tweak a lot of the settings by editing the files in /etc/log.d/default.conf/services/<service-name>. For example, we ask Logwatch to only tell us when someone’s attempts to connect to our server have been dropped more than 10 times by our Fireant scripts (we do this via the iptables service setting).

Hope you find this latest update useful.   We certainly did!


Finding Which Linux Packages Provide Which Files

There have been multiple situations where I find out that I need a particular file to continue with something I am doing. Most of the time this happens when I am compiling a program: I will be missing a library, or a header file, or something. So I end up on search engines looking for whatever package I need to ‘apt-get install’. Well, it turns out there is a command line tool that will tell you this information, on systems using Apt, that is.

Enter ‘apt-file’.

I use Ubuntu, and apt-file doesn’t come with that platform by default, or at least not on the 10.04 that I’m using. But you already know how to get it: a simple ‘apt-get install apt-file’.

Once you have it installed, you will have to update the cache it uses for searching. I was prompted to do this automatically, but if you are not then you can run ‘apt-file update’ to do so.

With that done, the command ‘apt-file find’ will list the packages that include a given file. For example, I was looking for the program ‘xpidl’, which I didn’t have. Easy to find:

    $ apt-file find xpidl
    kompozer: /usr/lib/kompozer/xpidl
    sunbird-dev: /usr/lib/sunbird/xpidl
    thunderbird-dev: /usr/lib/thunderbird-3.0.3/xpidl
    xulrunner-1.9.1: /usr/lib/xulrunner-1.9.1.9/xpidl
    xulrunner-1.9.1-dbg: /usr/lib/debug/usr/lib/xulrunner-1.9.1.9/xpidl
    xulrunner-1.9.2: /usr/lib/xulrunner-1.9.2.2/xpidl
    xulrunner-1.9.2-dbg: /usr/lib/debug/usr/lib/xulrunner-1.9.2.2/xpidl

You can provide the argument ‘-x’ to use a Perl regular expression as your search query.
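For example, a sketch of that flag in action, anchoring the pattern so you only match files named exactly xpidl rather than every path containing that substring:

    $ apt-file -x find '/xpidl$'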

You can also see what files are in a package by using the command ‘list’ instead of ‘find’. Unlike the ‘dpkg -L’ command, ‘apt-file list’ will work even if you don’t have the package installed or cached on your system.

I wish I had found this tool years ago.