
Bash Command Lookup (\!)

I recently found something interesting that you can do in a bash terminal. A while back I sent out an email about getting git completion’s wonderful self working on Macs.

Part of that endeavor meant diving into the way the terminal displays information in your prompt. Among the things I found were escape codes like \h, which stands for the hostname, and \W, the working directory without its path, etc.

So I set out to find out what more of those escape characters were, and I found: \!

I’d already learned from Paul that typing !! will repeat the last command you put in. The \! escape displays a sequential history number (the next one in line) on your prompt. Having added it to my PS1 from the git completion tutorial, my prompt now displays:

(527)iMac:~ chase$ _
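For reference, a PS1 along these lines would produce that prompt (this is a sketch; the exact string from the git completion tutorial will differ):

export PS1='(\!)\h:\W \u\$ '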

And when I put in a command, let’s say I type a bare grep<enter>

(527)iMac:~ chase$ grep<enter>
Usage: grep [OPTION]... PATTERN [FILE]...
(528)iMac:~ chase$ _

Let’s pretend that was some crucially complex command (you know the kind… the one that escapes you when you really need it again later) instead of a bare grep, and let’s say that through the course of working I’ve since entered dozens or hundreds of other commands at the prompt. I have a few options available:

  • hit the up arrow repeatedly until I find the command (the history numbers aren’t shown as you scroll)
  • use <ctrl>+R and type in parts of the command I remember
  • grep the history (history | grep <pattern>)
  • lots of things

or, if I’ve remembered that 527 was the line for that crucial command, I can simply type:

(8901)iMac:~ chase$ !527<enter>

And it will repeat the command from that line. The only downside is that if you come to rely on it for remembering several different complex commands, you’ll end up having to remember the several different numbers that correspond to those lines. Also, this function doesn’t give you any kind of “Are you sure?” moment to let you know what you’re about to do… so one transposed number or dropped digit could potentially mean catastrophe if you’ve ever run some iffy commands (rm -Rf).
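One small safeguard bash does offer: appending :p to a history expansion prints the command without executing it, and adds it to the end of your history so a follow-up !! will run it once you’ve verified it’s the right one:

(8901)iMac:~ chase$ !527:p
grep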

About This Article…

I pilfered this from “The List”, thanks Chase…
– Lobby


Curl from the Command Line

We most frequently use Curl in the form of `libcurl`, a C library providing functions for transferring data to and from servers using many of the popular protocols like HTTP, FTP, SCP, and so on. This library is the foundation of things like the curl_*() functions in PHP, which are useful for writing code that interacts with various web services.

But there is also the Curl command line program, built atop the same library. I find the program useful for debugging and testing certain aspects of web applications, so I wanted to share a list of the things I like to do with Curl, which I hope you will find useful as well.

Headers

To see the headers from a site:

$ curl --head http://example.com

We can use this to make sure any custom headers are being sent properly, and to see things like what cache information the server is sending to browsers. It will also show information like the PHP session ID. Sometimes what the command does not show is just as important: if we have an error in our code that prevents necessary headers from being sent, their absence shows up here.
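As a sketch, the output looks something like this (the headers will of course vary by server; these values are invented):

$ curl --head http://example.com
HTTP/1.1 200 OK
Date: Sun, 04 Jul 2010 12:00:00 GMT
Content-Type: text/html; charset=UTF-8
Set-Cookie: PHPSESSID=abc123; path=/
Cache-Control: max-age=3600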

Cookies

The command above will show cookie info, but if that’s all we’re interested in then we can use this:

$ curl --cookie-jar cookies.txt http://example.com/

We can then inspect the cookies to see if the values are set to what we expect. Or to try out different things we can change the values and then run:

$ curl --cookie cookies.txt http://example.com/

to simulate a request using our new cookie values. By using the option `--junk-session-cookies` in conjunction with the above, we can send all of our modified cookies but without any session information. This has the effect of behaving as if we had closed our browser.
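Put together, a session-less request with our edited cookies looks like:

$ curl --cookie cookies.txt --junk-session-cookies http://example.com/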

Forms

When we want to write a script that deals with submitting a <form>, we can use the --data option to pass in values to the form fields. For example, to test a script where users can post comments to a site:

$ curl --data username='Lobby C Jones' --data email='Lobby@cybersprocket.com' --data message='Nom nom nom' http://localhost/eric/test.php

If the message we wanted to send was really long, we could put it in a text file and then change that particular option to:

--data-urlencode message@input.txt

That is, we can write:

--data-urlencode name@file

to mean the same thing as:

--data name=<contents of file>

This is *not* a file upload; it is simply a way to read contents from a file and use them as a form parameter value. To perform an actual file upload we can use the `--form` option. Let’s say we want to simulate uploading a CSV file to a web application:

$ curl --form doc=@our-data.csv http://probably.dtuser.com/

This would upload our-data.csv as the doc form field. If needed, we can specify the content type:

$ curl --form "photo=@lobby.png;type=image/png" http://lonelysingles.com/photos/shellfish/upload.php

We can use --get to send our data as a GET request instead of a POST. This does not work with --form, since --form always uses the content type multipart/form-data, but it will cause any --data we send to be appended to the URL.
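For example, reusing the comment form from above (with --data-urlencode so the spaces survive the trip into the query string):

$ curl --get --data-urlencode username='Lobby C Jones' --data-urlencode message='Nom nom nom' http://localhost/eric/test.php

This requests test.php?username=Lobby%20C%20Jones&message=Nom%20nom%20nom rather than sending a POST body.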

Timeouts and Retries

When using Curl in scripts we want to avoid situations where the whole operation might hang, either because the server hangs, or because we are using the script to download something when the network connection is very slow, or because of a solar flare. We can use three options to avoid these problems.

  1. --connect-timeout <N> will wait N seconds for the connection to succeed before bailing. This only affects the connection. Once we successfully initiate communication with the server, there is no time limit. To control that we use…
  2. --max-time <N> which only allows N seconds for the entire operation.
  3. --no-solar-flare avoids all solar flares.

If we are scripting an operation that could fail then we can tell Curl to retry a number of times by using --retry <N>. If the request fails, Curl will wait one second and then try again. That delay then doubles after every successive failure, maxing out at ten minutes.
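Putting the (real) options together, a scripted download might look something like this; the URL and numbers are invented:

$ curl --connect-timeout 10 --max-time 600 --retry 5 \
       --output nightly.tar.gz http://example.com/nightly.tar.gz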

PUT Requests

We usually don’t deal with web applications that respond to PUT requests (although I think supporting them is a useful practice). In the cases where we do, we can use Curl to easily test out PUT requests by sending the contents of a file like so:

$ curl -T file.png http://example.com/put/script.php

Or if we wanted to PUT multiple files at once:

$ curl -T "image[1-100].png" http://example.com/put/script.php

This has the effect of PUT-ing the files image1.png, image2.png, and so on up to image100.png.

Other Requests

Besides PUT, there is also DELETE, which again is not commonly encountered. If needed, we can make such requests with Curl like so:

$ curl --request DELETE http://localhost/resource/to/delete/

If we are using Curl to interact with FTP then the request command can be any valid FTP command.

And that’s it for my brain-dump about Curl usage. Everything I’ve shown above can be accomplished by browsers, either out-of-the-box or via various add-ons. But where I like to use Curl is in scripts; in contrast to browsers, Curl makes it easy to create a repeatable series of requests to send to a site, and then I can do simple tests on those results to determine whether or not something worked as expected. If you have any questions about Curl, or anything you like to use it for that hasn’t been covered here, then please share.


MySQL – I Am a Dummy


Today I learned about an interesting command-line option for MySQL:

$ mysql --i-am-a-dummy

This is an alias for --safe-updates, which imposes three restrictions:

  1. Any UPDATE or DELETE that does not have a WHERE or LIMIT is rejected.
  2. Any SELECT without a LIMIT will be automatically limited to 1,000 rows in the result.
  3. Any join that would result in examining more than a million row combinations is rejected.

I thought it was funny at first, but the first restriction alone makes this useful when working on the command line on live servers.  Have you ever meant to do something like DELETE FROM users WHERE id = … and forgotten that tiny little where-clause?  Because I have, and all you want to do is hide under your desk, under the assumption that if no one can see you then you must be invisible.
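For a feel of what that fumble turns into under --i-am-a-dummy, here is roughly the exchange (the exact message varies by version):

mysql> DELETE FROM users;
ERROR 1175 (HY000): You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column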


PostgreSQL Cheat Sheet

PostgreSQL is one of our favorite database engines for a variety of reasons.  Here is our cheat sheet to help you get online and get around Postgres with minimal effort.

Database Access Security

Database security is handled primarily in two places: at the system service level via a file called pg_hba.conf, and within the database metadata itself.   The pg_hba.conf file controls what level of credentials is needed based on the IP address the requesting connection is coming from.   The metadata within the engine generally controls user-level access once a user is connected and approved at the system level.

Systemwide Configuration via pg_hba.conf

This file matches IP addresses against a set of rules to determine how much data you need to provide before getting access to the database engine.   Each rule includes the IP address, the username trying to connect, and what type of validation is needed.

The data comes in a series of tab-separated columns:

  • Connection Type
    • local = from a local Unix-socket (tty) connection
    • host = from a network (TCP/IP) connection
  • Database = which database is the user trying to connect to?
  • User = which user they are connecting as.
  • IP Address = what address are they coming from?
  • Method = how shall we authenticate them?
    • md5 = let them in if the password matches
    • ident sameuser = let them in if their system login user matches the database user they are trying to connect as
    • trust = let them in as long as the ip address matches, no password required
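To illustrate, a minimal pg_hba.conf might contain lines like these (the database, user, and address are made up):

# TYPE  DATABASE    USER    ADDRESS           METHOD
local   all         all                       md5
host    myfirstdb   lance   192.168.1.0/24    md5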

Finding pg_hba.conf

The pg_hba.conf file lives in various places depending on the flavor of Linux.

  • Red Hat, CentOS, Fedora = /var/lib/pgsql/data/
  • Ubuntu = /etc/postgresql/<version>/main/

Command Line Access

You can do a lot of maintenance operations or test queries using the command line interpreter.  The command line in PostgreSQL is accessed via the psql command.   The most often used parameters with psql let you connect as a user other than your login user, provide your password, and give the name of the database to connect to.

Example:

# psql -U other_name -W other_db

Command Line Shortcuts

From the command line there are a variety of shortcuts to help you navigate around the database engine or see what is going on.  Here are a few of the most useful:

  • List Databases: \l
  • List (display) Tables: \dt
  • List Columns in a Table: \d <tablename>
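For example, a quick look around a database might go like this (the table name is made up; \q quits the interpreter):

# psql -U lance -W myfirstdb
psql> \l
psql> \dt
psql> \d customers
psql> \q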

Creating A Database

Here is how you create a new database that is owned by a specific user.  This assumes a “clean slate” install.   You will need the postgres user login credentials and/or root access.  You will be creating a PostgreSQL user and password and will change the system-level postgres daemon security settings to allow access with the password regardless of which user you log in as.

# # login as postgres user or su postgres if you are root

# psql
psql> create user lance with password 'cleveland';
psql> create database myfirstdb with owner lance;
psql> \q
# vi /var/lib/pgsql/data/pg_hba.conf

While in pg_hba.conf change this line:

local   all         all                              ident

to this:

local   all         all                              md5
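The engine only re-reads pg_hba.conf when it reloads its configuration, so follow up the edit with something along these lines (the exact service command depends on your distribution):

# service postgresql reload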

Backing Up / Dumping Your Data

Data dumps are a quick way to put the schema, data, or a combination of both out into a file that can be used to re-create the database on other systems or just back it up to a remote location.  The PostgreSQL command for this is pg_dump.  It takes the same parameters as the command-line access above.

Simple Data Dump

Example:

# pg_dump -U myname -W the_db_name > dump_thedb_2010_0704_001.sql

Clean Data Dump

This is the format to use if you want to ensure the entire database is dropped & re-created when loading on a new system.

Example:

# pg_dump --clean --create -U myname -W the_db_name > dump_thedb_2010_0704_001.sql

Reloading Dumped Data

To reload such a script into a (freshly created) database named the_db_name:

# psql -d the_db_name -f dump_thedb_2010_0704_001.sql

If the clean data dump method was used you will want to log in as postgres and let the SQL script create the database:

# su postgres

# psql -f dump_thedb_2010_0704_001.sql

Summary

There are plenty more tips, tricks, and shortcuts.  These will get you started.  We’ll add more as time allows.