
AWS gMail Relay Setup

After moving to a new AWS server I discovered that my mail configuration files were not part of the backup service on my old server. In addition, my new server uses sendmail instead of postfix for mail services. That meant re-learning and re-discovering how to set up mail relay through gmail.

Why Relay?

Cloud servers tend to be blacklisted. Sure enough, my new server's IP address is on the Spamhaus PBL list. While Amazon offers elastic IP addresses, quasi-permanent addresses that act like static IPs and can be whitelisted on the Spamhaus PBL, that is not the best option. Servers change, especially in the cloud. I find the best option is to route email through a trusted email service. I use Google Business Apps email accounts and have one set up just for this purpose. Now to configure sendmail to re-route all outbound mail from my server through my gmail account.

Configuring Amazon Linux

Here are my cheat-sheet notes about getting an Amazon Linux (RHEL flavor of Linux) box to use the default sendmail to push content through gmail.

Install the needed packages.

# sudo su -
# yum install cyrus-sasl ca-certificates sendmail sendmail-cf make

Create your certificates

This is needed for the TLS authentication.

# cd /etc/pki/tls/certs
# make sendmail.pem
# cd /etc/mail
# mkdir certs
# chmod 700 certs
# cd certs
# cp /etc/pki/tls/certs/ca-bundle.crt /etc/mail/certs/ca-bundle.crt
# cp /etc/pki/tls/certs/sendmail.pem /etc/mail/certs/sendmail.pem

Set up your authinfo file

The AuthInfo entries start with the relay server host name and port.

U = the AWS server user that will be the source of the email.

I = your gmail user name; if using business apps it is likely @yourdomain.com, not @gmail.com

P = your gmail email password

M = the method of authentication; PLAIN will suffice

# cd /etc/mail
# vim gmail-auth

AuthInfo:smtp-relay.gmail.com "U:ec2-user" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com "U:apache" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com:587 "U:ec2-user" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"
AuthInfo:smtp-relay.gmail.com:587 "U:apache" "I:your-gmail-addy@gmail.com" "P:yourpassword" "M:PLAIN"

# chmod 600 gmail-auth
# makemap -r hash gmail-auth < gmail-auth

Configure Sendmail

Edit the sendmail.mc file and run make to turn it into a sendmail.cf configuration file.  Look for each of the entries noted in the sendmail.mc comments.  Uncomment the entries and/or change them as noted.  A couple of new lines will need to be added to the sendmail.mc file.  I add the new lines just before the MAILER(smtp)dnl line at the end of the file.

Most of these exist throughout the file and are commented out.   I uncommented the lines and modified them as needed so they appear near the comment blocks that explain what is going on:

# vim /etc/mail/sendmail.mc
define(`SMART_HOST', `smtp-relay.gmail.com')dnl
define(`confAUTH_OPTIONS', `A p')dnl
TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
define(`confCACERT_PATH', `/etc/mail/certs')dnl
define(`confCACERT', `/etc/mail/certs/ca-bundle.crt')dnl
define(`confSERVER_CERT', `/etc/mail/certs/sendmail.pem')dnl
define(`confSERVER_KEY', `/etc/mail/certs/sendmail.pem')dnl

Add these lines to the end of sendmail.mc just above the first MAILER()dnl entries:

define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
FEATURE(`authinfo',`hash -o /etc/mail/gmail-auth.db')dnl

If you are using business apps you may need these settings to make the email come from your domain and to pass authentication based on your Gmail relay settings.    These are also in sendmail.mc:

MASQUERADE_AS(`charlestonsw.com')dnl
FEATURE(masquerade_envelope)dnl
FEATURE(masquerade_entire_domain)dnl
MASQUERADE_DOMAIN(localhost)dnl
MASQUERADE_DOMAIN(localhost.localdomain)dnl
MASQUERADE_DOMAIN(charlestonsw.com)dnl

Compile sendmail.mc into the sendmail.cf configuration file and restart sendmail:

# make
# service sendmail restart
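
To confirm the relay end to end, send yourself a test message and watch the mail log (a quick sanity check; this assumes the mailx package is installed, and the address is a placeholder):

# echo "Relay test" | mail -s "Sendmail relay test" you@example.com
# tail -f /var/log/maillog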

Configure Gmail Services

This is for business apps users: you need to turn on relay.

Go to “manage this domain” for your business apps account.

Go to “Google Apps”.

Click on “Gmail”.

Click “advanced settings”.

Find the “SMTP relay service” entry and add a new entry.

Select “Only addresses in my domain”, and check the options to require SMTP authentication and require TLS encryption.

Give it a name.

Save.

Save again.


Choosing A Wireless Router

Last week the network dropped.  Again.  This was the 5th time in about a month that I lost all connectivity mid-session.  I was in the middle of pushing some web updates and, as usual, Comcast left me hanging.  When I made my 10PM call to customer service I was met with one of the rudest, know-it-all “customer disservice” people I have ever encountered.  She argued with me about everything and told me I had no idea what I was talking about when I told her that rebooting my laptop would not get my cable modem to sync up with their head-end router.  (I had checked the logs on the modem: it lost sync and the signal level was out of spec.)

Even though the Comcast Business Class tech who came out the next morning (instead of THREE DAYS later, which the “service rep” insisted was the ONLY option) was very helpful and knowledgeable, the damage had been done.  I was sick of sudden drops, lag, and network throttling that Comcast insists they do not do.  It was time for a change.

What does this have to do with wireless routers?  We will get there in a minute… just bear with me.

Knology To The Rescue

Fast forward three weeks.  The Knology installation guy shows up at my house EARLY (take THAT Comcast), was courteous, professional, and *gasp* actually knowledgeable about his trade.     He tested the lines, replaced several faulty splitters that Comcast had installed and eventually got a perfectly clean signal at the modem connection point.    We connected the modem and had a great connection.   The 20M/2M service was actually pulling 27M/2M consistently with 0.0001% rate fluctuation.    This guy actually tested things after he installed (take THAT TOO Comcast).   Everything looked great.   Then all hell broke loose.

I HAD NO WIRELESS ROUTER!

My old Comcast modem had wireless.   The new Knology modem did not.

Setting Up My Wireless

I left the install connected to my wired hub and went to work.  While at the office I picked up a couple of pieces of wireless network equipment we had lying around that were no longer being used.  In the mix I had an old Netopia wireless DSL modem, which can be used as a wireless access point if you disable the DSL port, and a 2-year-old Belkin Wireless N router that was a $200 top-of-the-line unit back in the day.

When I got home the first thing I did was hook up the Belkin Wireless N.  I was connected within minutes.  However, I did notice the network was lagging.  I attributed it to being on wireless and having several devices on the wireless network as well as the TiVo and DVD player connected.  Then I started getting dropped connections.  This time, though, the modem logs looked perfect: NO errors, no sync problems, no dropped connections there.  Eventually I narrowed down the problem: it was the Belkin router.  It was getting all kinds of packet loss and transmission errors and was dropping a TON of packets with .190-.199 in the last IP address octet.  Very odd.

I temporarily tried the Netopia Wireless but that is a simple A/B series wireless router.  It worked, but was very quickly saturated as soon as other devices came online.  It simply did not have the bandwidth over the wireless channels to get the job done with a tablet, 2 wireless phones, the VOIP hard line phone, 2 laptops, the TiVo and the DVD player.    It worked but was slow as heck at peak load.

I needed something better.

The Netgear Utopia

Netgear N600

I did some homework and found several glowing reviews for the Netgear N600 series wireless N routers.  Since it was now Sunday and neither my Netopia DSL router nor my Belkin N router was up to the task for a big marketing and site update project, I decided to shop local.  Turns out Walmart had the very router I was looking at AND at a fair price.  Even with taxes it was within $5 of the Amazon price and near or below most online competitors.

40 minutes later I had returned from Wally World with my new router (and a big bag of M&Ms, a new garden hose, and 3 coloring books for my son… this is WHY you don’t go to Walmart to shop for “just a router”… dang impulse buys).  Within 15 minutes my new router was installed, fully configured to my liking with a new SSID and passwords, and online.

HOLY SMOKES WAS THIS THING FAST!!!

I mean LIGHTNING FAST compared to ANYTHING I was using before.     I immediately saw my laptop speed tests pulling the full 27M/2M speeds we had seen with the wired test unit at the router.  This was with all the other network equipment still online.

Bad Communication = Slow Networks

After doing a good bit of testing, re-trying the Belkin, re-connecting the Comcast service (it was not turned off yet), and doing a bunch of general cross-checking and sanity tests, one thing had become clear: choosing the right networking equipment is paramount to maintaining solid throughput to your desktop (or tablet) computers.  If any link in the chain is weak you will suffer.

The technical reasons for highly variant network performance have a lot to do with packet re-transmission.  To keep it somewhat less technical, think of it as a simple phone conversation where you MUST get every word right.  To do this you ask the other party to repeat every word they hear.  If they say a word incorrectly you repeat that word until they say it back correctly.  On a poor connection this may happen 3 or 4 times on every other word.  That can make for a VERRRRRYYYY long conversation.

In today’s networks a lot of things can go wrong to make your surfing destination and your computer “repeat the words” over & over again.   A wireless network often adds a lot more possibilities for interference.   For example, turning on the microwave oven, or a neighbor turning on their TV.    You don’t HEAR the interference, but your wireless network does.  Think of it like someone turning on a vacuum cleaner right next to you while you are doing the “repeat every word” conversation with your long distance friend.  You are likely not going to hear very well and be repeating a lot of words.
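
If you want to see how often your own machine is “repeating words,” Linux keeps running TCP retransmission counters; on a bad link they climb quickly (a rough health check, not a precise measurement):

# netstat -s | grep -i retrans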

Eradicating Slow

In my case several things were causing problems.  The Comcast connection to my house is not very good, which means the “volume” of the conversation is very inconsistent: too loud some moments, too soft at others.  The modem Comcast supplied was an old, very slow model; think of it as a sluggish phone operator in the middle trying to keep up with the “repeat the word” conversation and simply skipping words when they fall behind.  The Belkin router refused to repeat any word with the “ch” sound in it, leaving you to guess what was really meant.  The Netopia DSL router was simply underpowered and easily distracted, barely able to keep up with a slow, deliberate conversation.

In the end I eliminated all the slowness, mistranslation, and volume-related issues.  A tested, solid, clean connection with a modern high-speed modem from Knology connected directly to the Netgear N600 Wireless N router keeps everything humming along.  The conversations are crystal clear, and the Netgear N600 + Knology modem rarely, if ever, repeat a word.  A 2-minute conversation takes 2 minutes, not 20.  That translates into getting the full 20M (27M)/2M service all the way from “the Internet” straight into my wireless network.

Get The Best

In your network, choose the best equipment you can afford.  Read online reviews and select the RIGHT solution.  Higher price does not always mean better performance.  In my case the reviews proved to be well founded, and I too give the Netgear N600 (WNDR3400v2) 5 stars.

Netgear N900

I liked the Netgear N600 so much I bought the “big brother” N900 (WNDR4500) for the office, and I like that one EVEN better.  It too was quick to set up and improved network performance.  It also gave us the ability to quickly and easily turn a USB drive into a network share, and to turn my old Brother MFC-4800 laser printer/scanner (another great piece of equipment, by the way) into a network printer/scanner within minutes, with one quick applet install on our Windows and Mac computers.

If you are in the market for a wireless router I highly recommend the Netgear N600 and N900 routers.


Is Comcast Playing “Big Brother” With Your Internet?

The Symptom

This morning I spent over an hour trying to publish a new update of Store Locator Plus to the public WordPress extensions directory.   It failed, multiple times.  I assumed it was something wrong with our repository so I decided to move on to something else until the remote server was fixed.

My next task was to get the WordPress language translation tools into my dev kit so we can start providing better international support.  I decided to fetch the latest language dev kit with subversion via the standard checkout:

svn co http://svn.automattic.com/wordpress-i18n/tools/trunk/

 

Here are a couple of the many failure messages I received back after about 10 minutes:

svn: PROPFIND of '/wordpress-i18n/tools/trunk': could not connect to server (http://svn.automattic.com)
svn: PROPFIND of '/wordpress-i18n/!svn/vcc/default': could not connect to server (http://svn.automattic.com)

 

My first assumption was that my virtual machine was having network issues.  I reset the network, then shut down & restarted the virtual machine.  No luck.   I then tried directly from the host.  Again no luck.  I decided to go to the office and try it from there.  Maybe my modem or router at the house was causing issues.

A Clue

I get to the office and try again.  Same problems.    Odd, more Googling was in my future.   After reading a lot of articles about proxy servers with svn (I don’t use a proxy) and doing all the “svn tricks” I know and that I could dig up online, I stumbled across an interesting post at Stack Overflow.  This is what caught my attention:

Update

I had a co-worker test this out on his home connection — he uses Comcast as well. He got the same error as I did. So it appears to be some Comcast-related issue specific to the WordPress svn repository. I was able to checkout other public repositories via http (e.g. from Google Code) just fine.

Huh, that’s interesting.  I too could use SVN with a variety of other services.  I also was using Comcast at the home office and on one-half of the network at the corporate office.   So I decided to try a couple of things.

The Test

First, shut off the Comcast connection at the office and force my system to connect via the T1.  Guess what?  It worked.  The repo was cloned immediately.

Interesting.   Second test, log in to our server in Michigan on multiple backbones, NONE of which are on Comcast.   Hey, look at that… it worked immediately as well.

Back to the office services.  Turn off the T1, turn on just Comcast.  Instant fail.  Well, not instant; it waits about 5 minutes and then fails.

Bandwidth Caps

In addition to the failure to pull subversion content directly from the WordPress IP addresses, we have found several other interesting things about Comcast Business Class Internet.  Comcast is billing us for 50Mbps/10Mbps speeds at both my home office and our corporate office locations.  We have NEVER been able to get anything close to that at either location.  Our speeds always seem to max out around 20Mbps down/4Mbps up at home and 26Mbps/6Mbps at the office.

Today, in an effort to understand what may be going on, we ran ShaperProbe from GA Tech.  It turns out that at my home office we are dropping so many upload packets that many tools, including ShaperProbe, fail.  We also learned that Comcast is THROTTLING the incoming bandwidth at the home office to 17Mbps, less than HALF the 50Mbps promised and paid for.  This is one method used to ensure all users have some bandwidth when they oversell a neighborhood.  Ouch.

Comcast Traffic Shaping Test

Comcast Speed Test Results

After the “network improvement” work that Comcast did this week our Corporate line is now crawling at 1/10th the advertised download rate.    We are able to receive our packets from WordPress, but now we can’t get more than a handful of simple transfers going at the same time.

Comcast Speed Test December 2011

 

Comcast Fails

I am still doing research on this issue and will post updates here.  However it is very obvious from the initial tests that Comcast is doing some sort of traffic shaping or other network manipulation on their business class services and it is “breaking the Internet”.

I’ll try contacting them, but I am 99.999% certain that whomever I get ahold of will be clueless.  They usually are.    In fact I bet the first thing they ask me to do is reboot my computer, then turn off the modem.  Then they’ll bounce the modem remotely.

We called Comcast Business services and were quite shocked to reach Terry at Business Class Services.  She actually emailed us, is escalating the problem to a higher-level tech, and is going to chase this down for us today, on a Saturday of all days.  Wow.  That was surprising!  Now let’s all pray for some results, as doing this proxy thing is a pain!

Reaching Comcast

http://business.comcast.com/smb/contact

Business Customer Service:  (800) 391-3000

Residential Customer Service: (800) 266-2278

In the meantime if you are having the same issues with Comcast please share.   Especially if you found a viable workaround to the issue.

Tracking The Issue

Here are some related articles we’ve dug up about this issue:

 

Update

 

12/17/2011 03:45 EST
Comcast has found a routing issue and is working on it.  We’ll see what happens.

 

12/17/2011 04:15 EST
Comcast claims the routing is fine.  The problem, they claim, is on the WordPress servers.  I made it clear via email this is not the case.  The service works 100% fine when I switch all routing to/from the wordpress.org or automattic.com domains to go over the Windstream T1 versus the Comcast business service.  Looks like they couldn’t find a quick/easy answer and are back to their lame excuses and passing the buck.
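
For anyone who wants to reproduce that test, a single host route is enough to steer just the Subversion traffic out a second gateway while everything else stays on the default route (a sketch: 192.168.2.1 is a placeholder for your own T1 gateway, and dig may return more than one address):

# ip route add $(dig +short svn.automattic.com | head -1) via 192.168.2.1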

 

12/18/2011 01:05AM EST
Comcast reps never pinged me back before they left for the day (10PM) as promised.   Not surprised about that.   “Dave”, the level 2 tech, said it is not a Comcast problem and that was that.  Bah.  Time to ratchet it up a notch.

 

12/21/2011 10:03AM EST
Nobody ever called back or emailed us about this issue.  We did receive two automated calls the past 2 days in a row that our service would be offline from midnight until 5AM for “network improvement” work.  This morning our WordPress packets are now arriving intact.  Slow as can be, though.  Our 50M/10M line is now clocking in at 5M/5M as noted by the Comcast Speed Test.

 



Curl from the Command Line

We most frequently use Curl in the form of `libcurl`, a C library providing functions for transferring data between servers using many of the popular protocols like HTTP, FTP, SCP, and so on. This library is the foundation of things like all the curl_*() functions in PHP, which are useful for writing code that interacts with various web services.

But there is also the Curl command line program, built atop the same library. I find the program useful for debugging and testing certain aspects of web applications, so I wanted to share a list of the things I like to do with Curl, which I hope you will find useful as well.

Headers

To see the headers from a site:

$ curl --head http://example.com

We can use this to make sure any custom headers are being sent properly, and to see things like what cache information the server is sending to browsers. It will also show information like the PHP session ID. Sometimes just as important is what the command does not show, such as when an error in our code prevents necessary headers from being sent.

Cookies

The command above will show cookie info, but if that’s all we’re interested in then we use this:

$ curl --cookie-jar cookies.txt http://example.com/

We can then inspect the cookies to see if the values are set to what we expect. Or to try out different things we can change the values and then run:

$ curl --cookie cookies.txt http://example.com/

to simulate a request using our new cookie values. By using the option `--junk-session-cookies` in conjunction with the above, we can send all of our modified cookies but without any session information. This has the effect of behaving as if we had closed our browser.
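
For example, combining the two options sends our edited cookies while discarding the session ones, just as if the browser had been restarted:

$ curl --cookie cookies.txt --junk-session-cookies http://example.com/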

Forms

When we want to write a script that deals with submitting a <form>, we can use the --data option to pass in values to the form fields. For example, to test a script where users can post comments to a site:

$ curl --data username='Lobby C Jones' --data email='Lobby@cybersprocket.com' --data message='Nom nom nom' http://localhost/eric/test.php

If the message we wanted to send was really long, we could put it in a text file and then change that particular option to:

--data-urlencode message@input.txt

That is, we can write:

--data-urlencode name@file

to mean the same thing as:

--data name=<contents of file>

This is *not* a file upload; it is simply a way to read contents from a file and use them as a form parameter value. To perform an actual file upload we can use the `--form` option. Let’s say we want to simulate uploading a CSV file to a web application:

 $ curl --form doc=@our-data.csv http://probably.dtuser.com/

This would upload our-data.csv as the doc form field. If needed, we can specify the content type:

$ curl --form "photo=@lobby.png;type=image/png" http://lonelysingles.com/photos/shellfish/upload.php

We can use --get to send our data via GET instead of POST, although this does not work with --form, since that always uses the content type multipart/form-data. But it will modify any --data we send so that it is appended to the URL.
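
For example, this sends the same comment-form data from earlier as a GET query string instead, so the URL gains ?username=…&email=…&message=…:

$ curl --get --data username='Lobby C Jones' --data email='Lobby@cybersprocket.com' --data message='Nom nom nom' http://localhost/eric/test.php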

Timeouts and Retries

When using Curl in scripts we want to avoid situations where the whole operation might hang, either because the server hangs, or because we are using the script to download something when the network connection is very slow, or because of a solar flare. We can use three options to avoid these problems.

  1. --connect-timeout <N> will wait N seconds for the connection to succeed before bailing. This only affects the connection. Once we successfully initiate communication with the server, there is no time limit. To control that we use…
  2. --max-time <N> which only allows N seconds for the entire operation.
  3. --no-solar-flare avoids all solar flares.

If we are scripting an operation that could fail then we can tell Curl to retry a number of times by using --retry <N>. If the request fails, Curl will wait one second and then try again. That delay then doubles after every successive failure, maxing out at ten minutes.
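
Putting those options together, a download that fails fast but retries politely might look like this (the URL and filename are placeholders):

$ curl --connect-timeout 10 --max-time 300 --retry 3 --output backup.tar.gz http://example.com/backup.tar.gz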

PUT Requests

We usually don’t deal with web applications that respond to PUT requests (although I think it’s a useful practice). In the cases where we do, we can use Curl to easily test out PUT requests by sending the contents of a file like so:

$ curl -T file.png http://example.com/put/script.php

Or if we wanted to PUT multiple files at once:

$ curl -T "image[1-100].png" http://example.com/put/script.php

This has the effect of PUT-ing the files image1.png, image2.png, and so on up to image100.png.

Other Requests

Besides PUT, there is also DELETE, which again is not commonly encountered. If needed, we can make such requests with Curl like so:

$ curl --request DELETE http://localhost/resource/to/delete/

If we are using Curl to interact with FTP, then the request command can be any valid FTP command.

And that’s it for my brain-dump about Curl usage. Everything I’ve shown above can be accomplished by browsers, either out-of-the-box or via various add-ons. But where I like to use Curl is in scripts; in contrast to browsers, Curl makes it easy to create a repeatable series of requests to send to a site, and then I can run simple tests on the results to determine whether or not something worked as expected. If you have any questions about Curl, or anything you like to use it for that hasn’t been covered here, then please share.


HTTP Errors When Uploading/Connecting in WordPress

Having problems browsing themes, uploading plug-ins, or doing just about anything that “talks” to the outside world via WordPress? We had a development server buried deep in our network behind several routers and firewalls that had a similar problem. Whenever we’d log into the dashboard we’d get various timeout error messages in each of the news sections. We’d never get our automatic update messages whenever there was a plugin update or a WordPress update (3.0 is coming soon!).

Well, it turns out we needed to fix 2 things to help speed up the network connection.

Fix #1 – DNS Resolution

We run this particular development box on Linux.  That meant updating our /etc/resolv.conf file to talk directly to the DNS servers. If you use DHCP configuration or go through a router this file is often empty.  Force-feeding our Internet Service Provider's (ISP's) DNS server IP addresses into this file sped up domain name lookups significantly: looking up things on wordpress.org took 1-2 seconds versus the previous 10-20 second lookup times.  Here is what we put in our file for our Bellsouth/AT&T DNS in Charleston, South Carolina:

search cybersprocket.com
nameserver 205.152.37.23
nameserver 205.152.132.23
nameserver 192.168.3.254
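
To verify the change helped, time a lookup before and after editing the file (nslookup ships in the bind-utils package on RHEL-flavored systems; the hostname is just an example):

$ time nslookup api.wordpress.org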

Fix #2 – Adjust PHP Timeout

This seemed to help with the problem, though we’re not sure why.  WordPress should be overriding the default PHP.ini settings but maybe something was missed deep in the bowels of the WordPress codebase… either that or this was pure coincidence.  Either way, we’re listing it here because as soon as we did these two things our timeout issues went away.

Update php.ini (on our Linux server this is /etc/php.ini) and change the default_socket_timeout setting to 120.  That section of our php.ini now looks like this:

; Default timeout for socket based streams (seconds)
;default_socket_timeout = 60
default_socket_timeout = 120

Hopefully these notes will help you resolve any timeout issues you’re having with your WordPress site.


Upgrading Logwatch on CentOS 5

Introduction

I finally got tired of looking at the thousand-plus-line daily reports coming to my inbox from Logwatch every evening.  Don’t get me wrong, I love Logwatch.  It helps me keep an eye on my servers without having to scrutinize every log file.  If you aren’t using Logwatch on your Linux boxes I strongly suggest you look into it and turn on this very valuable service.  Most Linux distros come with it pre-installed.

The problem is that on CentOS the version of Logwatch that comes with the system was last updated in 2006.  The Logwatch project itself, however, was updated just a few months ago.  As of this writing the version running on CentOS 5 is 7.3 (released 03/24/06) and the version on the Logwatch SourceForge site is 7.3.6 (updated March 2010).  In this latest version there are a lot of nice updates to the scripts that monitor your log files for you.

The one I’m after, consolidated brute-force hacking attempt reports, is a BIG thing.  We see thousands of entries in our daily log files from Chinese hackers trying to get into our servers.  This is typical of most servers these days, though in many cases ignorance is bliss: many site owners and IPPs don’t have logging turned on because they get sick of all the reports of hacking attempts.  Luckily we block these attempts on our server, but our Fireant labs project is configured to have iptables tell us whenever an attempt is blocked at the kernel level (we like to monitor what our labs scripts are doing while they are still in alpha testing).  This creates THOUSANDS of lines of output in our daily email.  Logwatch 7.3.6 helps mitigate this.

Logwatch 7.3.6 has a lot of new reports that default to “summary mode”: you see a single-line entry for each notable event, versus a line for each time the event occurred.  For instance, we see a report more like this for IMAPd:

 [IMAPd] Logout stats:
 ====================
 User                                    | Logouts | Downloaded |  Mbox Size
 --------------------------------------- | ------- | ---------- | ----------
 cpanel@localhost                        |     287 |          0 |          0
 xyz@cybersprocket.com                   |       4 |          0 |          0
 ---------------------------------------------------------------------------
                                             291   |          0 |          0

Versus the older output like this:

--------------------- IMAP Begin ------------------------
 **Unmatched Entries**
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32811], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32826], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32981], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[32988], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33040], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33245], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33294], protocol=IMAP: 1 Time(s)
LOGIN, user=cpanel@localhost, ip=[::ffff:127.0.0.1], port=[33310], protocol=IMAP: 1 Time(s)
 repeat 280 more times...

So as you can imagine, with 10 sections to our logwatch report, the new summary reports make our email a LOT easier to scan for potential problems in our log files.

Upgrading Logwatch

In order to get these cool new features you need to spend 10 minutes, 5 if you’re good with command-line Linux, installing the latest version of Logwatch.  In essence you are downloading a tarball full of new shell and Perl script files.  The install does not compile anything; it simply copies script files to the proper directories on your server.

Our examples here are all based on the default CentOS 5 paths.

  • Go to a temp install or source directory on your server.
    # cd /usr/local/src
  • Get the source for logwatch
    # wget http://downloads.sourceforge.net/project/logwatch/logwatch-7.3.6.tar.gz?use_mirror=iweb
  • Extract the files
    # tar xvfz logwatch-7.3.6.tar.gz
  • Make the install script executable
    # cd logwatch-7.3.6
    # chmod a+x install_logwatch.sh
  • Run the script & enter the correct paths for logwatch:
    # ./install_logwatch.sh
    ...Logwatch Basedir [/usr/share/logwatch]  : /etc/log.d
    ...Logwatch ConfigDir [/etc/logwatch] : /etc/log.d
    ...temp files [/var/cache/logwatch] : <enter>
    ...perl [/usr/bin/perl] : <enter>
    ...manpage [/usr/share/man] : <enter>

Conclusion

That’s it.  You should now be on the latest version of logwatch.

You can tweak a lot of the settings by editing the files in /etc/log.d/default.conf/services/<service-name>.  For example, we ask Logwatch to only tell us when someone’s attempt to connect to our server has been dropped more than 10 times by our Fireant scripts (we do this via the iptables service setting).
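
A handy way to preview the effect of a tweak without waiting for the nightly email is to run Logwatch by hand against a single service and print the report to the terminal:

# logwatch --service iptables --range yesterday --print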

Hope you find this latest update useful.   We certainly did!


Setting Up Stunnel On Linux

We need your help!


Cyber Sprocket is looking to qualify for a small business grant so we can continue our development efforts. We are working on a custom application builder platform so you can build custom mobile apps for your business. If we reach our 250-person goal we have a better chance of being selected.

It is free and takes less than 2 minutes!

Go to www.missionsmallbusiness.com.
Click on the “Login and Vote” button.
Put “Cyber Sprocket” in the search box and click search.
When our name comes up click on the vote button.

 

And now on to our article…

 

Intro

This article was written while getting SMTP authentication working with AT&T Business Class DSL service.  The SMTP service requires authentication via a secure connection on port 465.  Other articles will get into further details; this article’s focus is on the stunnel part of the equation, which we use to wrap the standard sendmail/SMTP configuration.

In This Article

  • An example stunnel config file for talking to AT&T SMTP servers on port 465 (SMTPS)
  • Testing the connection to AT&T SMTPS is working via telnet
  • Getting stunnel running on system boot.

Our Environment

  • CentOS release 5.2
  • stunnel 4.15-2

We assume you have stunnel and telnet installed.  If not, research the yum install commands for CentOS.  You will also need superuser access to update the running services on your box.

Setting up stunnel

Stunnel will allow you to listen for data connections on a local port and redirect that traffic through an SSL wrapper to another system.  In our case we are using stunnel to listen on port 2525 on our local server, wrap the communication in SSL, and send it along to the AT&T SMTP server at smtp.att.yahoo.com on port 465 (aka SMTPS).

Install

To do this you will need stunnel installed.   If yum is configured properly and the remote yum servers are online you can try this:

# yum install stunnel

Configure

You will then need to create or edit the stunnel configuration file and set up the AT&T SMTPS redirect.  Your config file should look like this (your remote SMTPS server may have a different URL; check with your ISP):

client=yes
[rev-smtps]
accept=127.0.0.1:2525
connect=smtp.att.yahoo.com:smtps
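
Note that stunnel 4.x reads /etc/stunnel/stunnel.conf by default; if you keep your configuration elsewhere, pass the path explicitly when starting the daemon (the path below is a placeholder):

# stunnel /path/to/stunnel.conf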

Test

Run stunnel in a detached daemon mode:

# stunnel &

Then telnet to localhost port 2525, which should wrap the connection in SSL and pass it along to the AT&T SMTP server:

# telnet 127.0.0.1 2525

You should see something like this:

[root@dev xinetd.d]# telnet localhost 2525
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
220 smtp104.sbc.mail.re3.yahoo.com ESMTP
EHLO
250-smtp104.sbc.mail.re3.yahoo.com
250-AUTH LOGIN PLAIN XYMCOOKIE
250-PIPELINING
250 8BITMIME
quit

Connection closed by foreign host.

Stop the test process by killing the detached process.  Find the process ID with ps and kill it.

# ps -ef | grep stunnel

You should see something like this:

root      6181     1  0 11:37 ?        00:00:00 stunnel
root     10698  3626  0 14:11 pts/0    00:00:00 grep stunnel

Kill the process.

# kill <pid>

Starting up stunnel on boot.

stunnel can be started by using the simple # stunnel & command via a shell script that runs at startup.  This method allows for session caching and generally improves performance over an xinetd controlled session.

Configure

Create /etc/init.d/stunnel:

#!/bin/bash
#
#       /etc/rc.d/init.d/stunnel
#
# Starts the stunnel daemon
#
# Source function library.
. /etc/init.d/functions
test -x /usr/sbin/stunnel || exit 0
RETVAL=0
#
#       See how we were called.
#
prog="stunnel"
start() {
    # Check if stunnel is already running
    if [ ! -f /var/lock/subsys/stunnel ]; then
    echo -n $"Starting $prog: "
    daemon /usr/sbin/stunnel
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/stunnel
    echo
    fi
    return $RETVAL
}
stop() {
    echo -n $"Stopping $prog: "
    killproc /usr/sbin/stunnel
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/stunnel
    echo
    return $RETVAL
}
restart() {
    stop
    start
}
reload() {
    restart
}
status_stunnel() {
    status /usr/sbin/stunnel
}
case "$1" in
start)
    start
    ;;
stop)
    stop
    ;;
reload|restart)
    restart
    ;;
condrestart)
    if [ -f /var/lock/subsys/stunnel ]; then
    restart
    fi
    ;;
status)
    status_stunnel
    ;;
*)
    echo $"Usage: $0 {start|stop|restart|condrestart|status}"
    exit 1
esac
exit $RETVAL

Set the stunnel script to run at startup level 3:

# ln -s /etc/init.d/stunnel /etc/rc3.d/S58stunnel
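
As an alternative to the manual symlink, RHEL-style systems can let chkconfig manage the links, provided the script carries chkconfig header comments (the runlevels and start/stop priorities below are examples; the start priority matches the S58 link above).  Add these two lines near the top of /etc/init.d/stunnel:

# chkconfig: 345 58 74
# description: SSL tunnel for redirecting SMTPS traffic

Then register the script:

# chkconfig --add stunnel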

Test

Run the same telnet test to port 2525 on localhost as noted above.  Don’t kill the process when you are done.

Running via xinetd

xinetd runs various port-listening services through a single program (xinetd) that runs as a daemon.  Since our box (and most RHEL variants) runs xinetd by default, we simply need to create our configuration file for stunnel, put it in the /etc/xinetd.d directory, and restart the xinetd process.  This is NOT the recommended method for running stunnel.

Install

If xinetd is not installed and running on your system (it should be) then grab it with yum

# yum install xinetd

Configure

Create a new stunnel configuration file in the /etc/xinetd.d directory.

# description: stunnel listener to map local ports to outside ports
service stunnel
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    port            = 2525
    server          = /usr/sbin/stunnel
}

You can learn more about xinetd configuration files here:
http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-tcpwrappers-xinetd-config.html

You will also need to change your stunnel config file, as the accept port is now handled by xinetd.  You can learn more via the stunnel manual by running # man stunnel at your Linux prompt.

The new stunnel.conf file:

client=yes
connect=smtp.att.yahoo.com:smtps

Test

# service xinetd restart
# telnet 127.0.0.1 2525

You should see the same results as the stunnel test above.


SFTP Tips & Tricks

Using Keyfiles To Access SFTP Services

You can use private key .pem files to connect via SFTP to a server that only allows key-based access.

The trick is to get the .pem file that Amazon gives you onto the server that you will be using to connect to the EC2 instance.  When you store the .pem file on the local box, you will need to ensure the permissions are set to 500 (r-x------).
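
For example, using the key file name from the sftp command below:

# chmod 500 my-amazon-given-key.pem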

Here is an example:

# sftp -o IdentityFile=my-amazon-given-key.pem root@domU-11-22-33-00-CC-11

We often use this trick to talk to our Amazon EC2 instances, as they do not allow password-based authentication by default.  This is a good security mechanism, as only people with an authorized key file can gain access.  It also gives you a quick and easy way to shut down access by disabling a single key file, essentially shutting out an entire group should there be a breach.

Create SFTP Logins Using Private Keyfiles

This example is based on creating 3rd-party SFTP access to an Amazon EC2 instance.  It is written for system administrators who wish to grant SFTP access to their server using a private key file they distribute to their users.  There can be multiple key files per username/directory.

  1. Logon to the EC2 instance with a privileged (root?) account.
  2. Create a keypair and save it to your PC.
  3. Start puttygen on your PC.
    1. Conversion/Import – load the key file you saved in step 2.
    2. Save as a private key (I like to add the -priv.ppk extension).
    3. Copy the key data from the top box (“Public key for pasting into OpenSSH authorized_keys file:”).
  4. Login to the server where you want the SFTP user to retrieve their files from (steps 4 through 7 are condensed into shell commands after this list).
  5. Change to the home directory of the user you want to grant SFTP access to.
  6. Create a .ssh directory.
    1. chmod 700 on that directory (rwx------), or
    2. chmod 750 on that directory (rwxr-x---) to open access to other people in the same user group.
  7. Create an authorized_keys file within the .ssh directory.
    1. Create a SINGLE LINE containing the public key you copied from puttygen above.
    2. Save the file.
    3. chmod 600 on that file (rw-------).
      1. Use mode 640 (rw-r-----) to open access to other people in the same user group.
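
Here are steps 4 through 7 condensed into shell commands (run on the server as root; the username sftpuser is an example, and the line pasted into authorized_keys is the public key copied from puttygen in step 3.3):

# cd ~sftpuser
# mkdir .ssh
# chmod 700 .ssh
# vim .ssh/authorized_keys    (paste the single-line public key, then save)
# chmod 600 .ssh/authorized_keys
# chown -R sftpuser:sftpuser .ssh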

Now that you have the private key file from step 3.2 above, you can use it to login via PuTTY or SFTP from any system.  The only thing you need is local access to that key file.

Using Private Keys with Filezilla and EC2

After completing the creation of the key file and the server-side tweaks to accept that key, you can use desktop clients such as Filezilla to access your FTP content.  This assumes the system administrator of the server you are connecting to has given you a key file and has installed the handshake privileges in the authorized_keys file on the remote end.

Pageant Method

  • Start by running pageant on your local system.
  • Add key
  • Find the key you generated with puttygen in step 3.2 above.
  • Start filezilla
  • In site manager enter the host name.  This will be the same server you logged into on step 4 above.
  • Servertype should be set to SFTP
  • Logontype Normal
  • User will be the name of the user that was given SFTP access (you created a .ssh/authorized_keys file in their home directory on the server)

Filezilla Specified Key Method

  • Start Filezilla
  • File/Site Manager – New Site
  • Enter the host name.  This will be the same server you logged into in step 4 of “Create SFTP Logins Using Private Keyfiles”.
  • Servertype should be set to SFTP
  • Logontype Normal
  • User will be the name of the user that was given SFTP access (you created a .ssh/authorized_keys file in their home directory on the server)
  • Click OK (NOT CONNECT)
  • Edit/Settings
    • Connection/SFTP
    • Add keyfile… and select the private keyfile you generated with puttygen above.
Filezilla – Edit Settings
Filezilla Site Manager

Now connect to that site.   Filezilla will read through the keys and find the right key for the user/server pair that you are connecting to.