
weaken … XS version of Scalar::Util

weaken is only available with the XS version of Scalar::Util

Every time we upgrade Perl on our CentOS box we get this message.  The fix is very simple: re-install Scalar::Util via CPAN.  For some reason the bindings are not updated, and the proper version needs to be re-registered with the Perl modules directory.

The command you need to run from the CPAN shell to restore the Scalar::Util functionality is:

> force install Scalar::Util

The simple command line cpan -i Scalar::Util will not do the trick.  If you already have Scalar::Util installed, this command will skip the installation and tell you so.
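As a quick sanity check, you can probe whether the XS weaken is actually available before and after the reinstall. This sketch is our own addition, not part of the original fix; importing weaken from a pure-perl Scalar::Util fails with exactly the message quoted above:

```shell
# Probe for the XS weaken before (or after) reinstalling.
if perl -MScalar::Util=weaken -e1 2>/dev/null; then
    status="weaken available"
else
    # Time to reinstall from the cpan shell:
    #   perl -MCPAN -e shell
    #   cpan> force install Scalar::Util
    status="weaken missing"
fi
echo "$status"
```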

You will also find references online to needing to install perl-Task-Weaken.  That did nothing for us.

Since this is the third time this has happened on our development server running CentOS 5, we figured we’d post it here and maybe help someone else out.  If nothing else, we’ll remember what we did with ten fewer Google searches next time around!


Using Common Sense With Perl

Since I was recently doing Perl work on the Boomerang project, I figured it would be a good time to finally try out common::sense, a CPAN module that, according to the documentation, will save a tree _and_ a kitten when you use it.  I like trees and cats, so Hell, why not try it then?

Long-standing advice in the Perl world has been to use warnings and use strict in your code.  However, the style of modern Perl has changed in such a way that it conflicts with those two things.  It is common these days to write something along the lines of

@{ $var->[0] }

but with ‘use strict’ enabled, you’re not going to get away with it.

You would instead have to write it as

@{ defined $var->[0] ? $var->[0] : [] }

No thanks.  That doesn’t sit well with the guns-a’blazing, cavalier attitude I have when writing Perl.

Then there are the useful additions to recent Perl like ‘say’ and ‘given’, which you have to enable to use.  Perl 5.12 is fresh off the line, and 5.10 now has some age behind it.  So it seems valid to start setting 5.10 as the baseline implementation.  I think for the last eighty-seven years everyone has written everything targeting 5.8.8.

It’s time to move on.

Back to common::sense.  It is a collection of settings that assume you are a modern Perl kinda guy.  It’s for the man who uses Unicode, can deal with references, wants ‘state’ variables, and wants his program to die fatally on serious problems.  More specifically, ‘use common::sense’ is equivalent to

use utf8;
use strict qw(vars subs);
use feature qw(say state switch);
no warnings;
use warnings qw(FATAL closed threads internal debugging pack malloc
portable prototype inplace io pipe unpack regexp
deprecated exiting glob digit printf layer
reserved taint closure semicolon);
no warnings qw(exec newline unopened);

I think common::sense is a useful thing, much like common sense.

Using it hasn’t caused me to get any warnings or errors (that I wouldn’t already get…), and it lets me enable more recent features that I prefer to use.  I suggest trying it out sometime.

Note that unlike many pragmas, you cannot write ‘no common::sense’.

You’re already writing Perl, so that’s a given.  No need to spell it out.


Perl Regular Expression \K Trick

Regular expressions are a frequently useful tool in our profession, and Perl is probably the most advanced arena for testing your ability to wield regexes.  That’s because Perl has the most feature-rich regular expressions out there (that I know of, anyway).  There’s always some new trick to learn about Perl regexes.

Case in point: \K.  Let’s say you want to replace the end of every line that begins with ‘Parent Commit:’, where that string is followed by whitespace and a forty-character hash.  You want to replace the hash.  But you have to hold on to the beginning of the string.  Here’s one way to go about it:

s/^Parent Commit:\s+[0-9a-f]{40}$/Parent Commit: $new_hash/gi

This works, but repeating ‘Parent Commit’ is duplication we would like to avoid.

s/^(Parent Commit:)\s+[0-9a-f]{40}$/$1 $new_hash/gi

Here we capture the beginning of the string so that we can insert it into the replacement part.  This prevents us from having to manually copy the text, but (and maybe this is just me) having to capture that text is annoying.  It kinda feels like a waste of a group.

Enter \K.  When Perl sees this meta-character it throws away everything that it has matched up to that point.  This lets the regex engine continue with a clean slate.  In the context of s///, it means that our replacement won’t affect anything before the \K, because Perl will have forgotten about it.  That means we can write the regex above in the form

s/^Parent Commit:\s+\K[0-9a-f]{40}$/$new_hash/gi

After the \K we are left matching only the hash.  The ‘Parent Commit:\s+’ section gets ignored, and we effectively end up performing

s/[0-9a-f]{40}$/$new_hash/gi

except the initial part of the string will still be left intact after the replacement.  This way we don’t need to repeat ‘Parent Commit’ or use a capture group to prevent it from getting replaced.

Anyone have any other regex tricks or tips?  Please share if you do.


Changing Directories More Easily

Here is something I have in my Bash config that I have found useful these days. It defines a command called up that lets me move up a given number of directories. For example, up 2 is the same as cd ../.., and up 4 is cd ../../../.., and so on.

function up() {
     cd $(perl -e 'print join("/" => ("..") x shift)' $1)
}

I found this somewhere online, so I am not taking credit for it. The way this works is we use Perl to create the string ../../.., or however many dots and slashes we need to reach the right parent directory. We can create that string to go up three directories by using the code

("..") x 3

to create the list

("..", "..", "..")

We then use join to insert a slash between each set of dots. This gives us code very close to what is in the function above. The key difference is the use of shift. We don’t know ahead of time how many .. strings to create, since that depends on how many directories upward we want to move. What we want to do then is pass in the number of ..‘s to create as an argument to the Perl script. Outside of a subroutine, shift operates on @ARGV by default, so it removes the first command-line argument to the script, which will be our number.

This is how we end up with

perl -e 'print join("/" => ("..") x shift)' $1

Here $1 refers to the first argument of the up shell function. So when we use up 3 we get

perl -e 'print join("/" => ("..") x shift)' 3

which gives us

print join("/" => ("..") x 3)
print join("/" => ("..", "..", ".."))
print "../../.."

That string is finally returned as the argument to cd, which moves us up the right number of directories.

Related to this are some aliases I use to treat directories as a stack. Bash has two commands called pushd and popd. The former will change to the given directory and put it on the stack. The latter will pop the top of the stack and move to the directory that is now at the top. So I use these aliases for those commands:

alias bd="popd"
alias cd="pushd"
alias rd="popd -n"

The mnemonic for bd is to go ‘back a directory’. The rd alias ‘removes a directory’; it takes the top directory off the stack without switching to it. This is sometimes useful when I end up deleting a directory on the stack, because then ‘bd’ will complain with an error if I try to move back to it.

The command dirs will show you the stack, starting with the current directory on the left. Once you get used to it, I think this is a useful way of moving around directories.
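A quick sketch of the stack behavior (run in a separate bash so your own shell’s working directory is untouched; assumes bash is available):

```shell
# Push /usr onto the stack, pop it, and we are back where we started.
out=$(bash -c 'cd /tmp && pushd /usr >/dev/null && popd >/dev/null && pwd')
echo "$out"
```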


Perl Source Code Profiler

Here is a nicely done perl profiler that we’ve used in the past to help us with our Perl work.  You may find this useful as well.  Rather than describe the tool, we’ll let the detailed documentation handle that for us.


 # profile code and write database to ./nytprof.out
 perl -d:NYTProf some_perl.pl

 # convert database into a set of html files, e.g., ./nytprof/index.html
 nytprofhtml

 # or into comma separated files, e.g., ./nytprof/*.csv
 nytprofcsv


Devel::NYTProf is a powerful feature-rich perl source code profiler.

  • Performs per-line statement profiling for fine detail
  • Performs per-subroutine statement profiling for overview
  • Performs per-block statement profiling (the first profiler to do so)
  • Accounts correctly for time spent after calls return
  • Performs inclusive and exclusive timing of subroutines
  • Subroutine times are per calling location (a powerful feature)
  • Can profile compile-time activity, just run-time, or just END time
  • Uses novel techniques for efficient profiling
  • Sub-microsecond (100ns) resolution on systems with clock_gettime()
  • Very fast – the fastest statement and subroutine profilers for perl
  • Handles applications that fork, with no performance cost
  • Immune from noise caused by profiling overheads and I/O
  • Program being profiled can stop/start the profiler
  • Generates richly annotated and cross-linked html reports
  • Trivial to use with mod_perl – add one line to httpd.conf
  • Includes an extensive test suite
  • Tested on very large codebases

NYTProf is effectively two profilers in one: a statement profiler, and a subroutine profiler.

Statement Profiling

The statement profiler measures the time between entering one perl statement and entering the next. Whenever execution reaches a new statement, the time since entering the previous statement is calculated and added to the time associated with the line of the source file that the previous statement starts on.

By default the statement profiler also determines the first line of the current block and the first line of the current statement, and accumulates times associated with those. NYTProf is the only Perl profiler to perform block level profiling.

Another innovation unique to NYTProf is automatic compensation for a problem inherent in simplistic statement-to-statement timing. Consider a statement that calls a subroutine and then performs some other work that doesn’t execute new statements, for example:

  foo(...) + mkdir(...);

In all other statement profilers the time spent in the remainder of the expression (mkdir in the example) will be recorded as having been spent on the last statement executed in foo()! Here’s another example:

  while (<>) {
      ...
  }

After the first time around the loop, any further time spent evaluating the condition (waiting for input in this example) would be recorded as having been spent on the last statement executed in the loop!

NYTProf avoids these problems by intercepting the opcodes which indicate that control is returning into some previous statement and adjusting the profile accordingly.

The statement profiler naturally generates a lot of data which is streamed out to a file in a very compact format. NYTProf takes care to not include the measurement and writing overheads in the profile times (some profilers produce ‘noisy’ data due to periodic stdio flushing).

Subroutine Profiling

The subroutine profiler measures the time between entering a subroutine and leaving it. It then increments a call count and accumulates the duration. For each subroutine called, separate counts and durations are stored for each location that called the subroutine.

Subroutine entry is detected by intercepting the entersub opcode. Subroutine exit is detected via perl’s internal save stack. The result is both extremely fast and very robust.

Note that subroutines that recurse directly or indirectly, such as Error::try, will show higher subroutine inclusive times because the time spent recursing will be double-counted. That may change in future.

Application Profiling

NYTProf records extra information in the data file to capture details that may be useful when analysing the performance. It also records the filename and line ranges of all the subroutines.

NYTProf can profile applications that fork, and does so with no loss of performance. There’s (now) no special ‘allowfork’ mode. It just works. NYTProf detects the fork and starts writing a new profile file with the pid appended to the filename.

Fast Profiling

The NYTProf profiler is written almost entirely in C and great care has been taken to ensure it’s very efficient.

Apache Profiling

Just add one line near the start of your httpd.conf file:

        PerlModule Devel::NYTProf::Apache

By default you’ll get a /tmp/nytprof.$$.out file for the parent process and a /tmp/nytprof.$parent.out.$$ file for each worker process.

NYTProf takes care to detect when control is returning back from perl to mod_perl so time spent in mod_perl (such as waiting for the next request) does not get allocated to the last statement executed.

Works with mod_perl 1 and 2. See Devel::NYTProf::Apache for more information.


Usually you’d load Devel::NYTProf on the command line using the perl -d option:

 perl -d:NYTProf some_perl.pl

To save typing the ‘:NYTProf’ you could set the PERL5DB env var

 PERL5DB='use Devel::NYTProf'

and then just perl -d would work:

 perl -d some_perl.pl

Or you can avoid the need to add the -d option at all by using the PERL5OPT env var:

 PERL5OPT=-d:NYTProf
That’s also very handy when you can’t alter the perl command line being used to run the script you want to profile.


The behavior of Devel::NYTProf may be modified by setting the environment variable NYTPROF. It is possible to use this environment variable to affect multiple settings by separating the values with a :. For example:

    export NYTPROF=trace=2:start=init:file=/tmp/nytprof.out


addpid=1

Append the current process id to the end of the filename.

This prevents concurrent, or consecutive, processes from overwriting the same file.


trace=N

Set trace level to N. 0 is off (the default). Higher values cause more detailed trace output.


start=begin|init|end|no

Specify at which phase of program execution the profiler should be enabled:

  start=begin - start immediately (the default)
  start=init  - start at beginning of INIT phase (after compilation)
  start=end   - start at beginning of END phase
  start=no    - don't automatically start

The start=no option is handy if you want to explicitly control profiling by calling DB::enable_profile() and DB::disable_profile() yourself.


subs=0

Set to 0 to disable the collection of subroutine inclusive timings.


blocks=0

Set to 0 to disable the determination of block and subroutine location per statement. This makes the profiler about 50% faster (as of July 2008) but you lose some valuable information. The extra cost is likely to be reduced in later versions anyway, as little optimization has been done on that part of the code. The profiler is fast enough that you shouldn’t need to do this.


leave=0

Set to 0 to disable the extra work done to allocate times accurately when returning into the middle of a statement. For example, leaving a subroutine and returning into the middle of a statement, or re-evaluating a loop condition.

This feature also ensures that in embedded environments, such as mod_perl, the last statement executed doesn’t accumulate the time spent ‘outside perl’.

NYTProf is the only line-level profiler to measure these times correctly. The profiler is fast enough that you shouldn’t need to disable this feature.


use_db_sub=1

Set to 1 to enable use of the traditional DB::DB() subroutine to perform profiling, instead of the faster ‘opcode redirection’ technique that’s used by default. It also disables some extra mechanisms that help ensure more accurate results for things like the last statements in subroutines.

The default ‘opcode redirection’ technique can’t profile subroutines that were compiled before NYTProf was loaded. So using use_db_sub=1 can be useful in cases where you can’t load the profiler early in the life of the application. If this proves to be useful to you then please let us know, otherwise this vestige of old slower ways is likely to be removed.


usecputime=1

Measure user CPU + system CPU time instead of the real elapsed ‘wall clock’ time (which is the default).

Measuring CPU time has the advantage of making the measurements independent of time spent blocked waiting for the cpu or network i/o etc. But it also has the severe disadvantage of having typically far less accurate timings.

Most systems use a 0.01 second granularity. With modern processors having multi-gigahertz clocks, 0.01 seconds is like a lifetime. The cpu time clock ‘ticks’ happen so rarely relative to the activity of most applications that you’d have to run the code for many hours to have any hope of reasonably useful results.


file=...

Specify the output file to write profile data to (default: ‘./nytprof.out’).


You can profile only parts of an application by calling DB::enable_profile() and DB::disable_profile() at the appropriate moments.

Using the start=no option lets you leave the profiler disabled until the right moment, or circumstances, are reached.


The Devel::NYTProf::Data module provides a low-level interface for loading the profile data.

The Devel::NYTProf::Reader module provides an interface for generating arbitrary reports. This means that you can implement your own output format in perl. (Though the module is in a state of flux and may be deprecated soon.)

Included in the bin directory of this distribution are two scripts which implement the Devel::NYTProf::Reader interface:

  • nytprofcsv – creates comma delimited profile reports
  • nytprofhtml – creates attractive, richly annotated, and fully cross-linked html reports (including statistics, source code and color highlighting)


Only profiles code loaded after this module

Loading via the perl -d option ensures it’s loaded first.


Devel::NYTProf is not currently thread safe. If you’d be interested in helping us make it thread safe then please get in touch with us.

For perl versions before 5.8.8 it may change what caller() returns

For example, the Readonly module croaks with an “Invalid tie” when profiled with perl versions before 5.8.8. That’s because Readonly explicitly checks for certain values from caller(). We’re not quite sure what the cause is yet.

Calls made via operator overloading

Calls made via operator overloading are not noticed by any subroutine profiler.


goto &$sub isn’t recognised as a subroutine call by the subroutine profiler.


Currently there’s no support for Windows. Some work is being done on a port. If you’d be interested in helping us port to Windows then please get in touch with us.

#line directives

The reporting code currently doesn’t handle #line directives, but at least it warns about them. Patches welcome.




Screenshots of nytprofhtml v2.01 reports, a writeup of the new features of NYTProf v2, and the background story explaining the “why” can all be found online.

A mailing list and discussion forum, plus a public SVN repository with hacking instructions, are also available online.

nytprofhtml is an included script that produces html reports; nytprofcsv is another included script that produces plain-text CSV reports.

Devel::NYTProf::Reader is the module that powers the report scripts. You might want to check this out if you plan to implement a custom report (though it may be deprecated in a future release).


Adam Kaplan, Tim Bunce, and Steve Peters.


  Copyright (C) 2008 by Adam Kaplan and The New York Times Company.
  Copyright (C) 2008 by Tim Bunce, Ireland.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available.


A bit of history and a shameless plug…

NYTProf stands for ‘New York Times Profiler’. Indeed, this module was initially developed from Devel::FastProf by The New York Times Co. to help our developers quickly identify bottlenecks in large Perl applications. The NY Times loves Perl and we hope the community will benefit from our work as much as we have from theirs.

Please visit our open source blog to see what we are up to, take a look at some of our open projects, and check back for the latest news!


Subroutine-level profilers:

  Devel::DProf        | 1995-10-31 | ILYAZ
  Devel::AutoProfiler | 2002-04-07 | GSLONDON
  Devel::Profiler     | 2002-05-20 | SAMTREGAR
  Devel::Profile      | 2003-04-13 | JAW
  Devel::DProfLB      | 2006-05-11 | JAW
  Devel::WxProf       | 2008-04-14 | MKUTTER

Statement-level profilers:

  Devel::SmallProf    | 1997-07-30 | ASHTED
  Devel::FastProf     | 2005-09-20 | SALVA
  Devel::NYTProf      | 2008-03-04 | AKAPLAN
  Devel::Profit       | 2008-05-19 | LBROCARD

Devel::NYTProf is a (now distant) fork of Devel::FastProf, which was itself an evolution of Devel::SmallProf.

Adam Kaplan took Devel::FastProf and added html report generation (based on Devel::Cover) and a test suite – a tricky thing to do for a profiler. Meanwhile Tim Bunce had been extending Devel::FastProf to add novel per-sub and per-block timing, plus subroutine caller tracking.

When Devel::NYTProf was released, Tim switched to working on it because the html report would be a good way to show the extra profile data, and the test suite made development much easier and safer.

Then he went a little crazy and added a slew of new features, in addition to per-sub and per-block timing and subroutine caller tracking. These included the ‘opcode interception’ method of profiling, ultra-fast and robust inclusive subroutine timing, doubling performance, plus major changes to html reporting to display all the extra profile call and timing data in richly annotated and cross-linked reports.

Steve Peters came on board along the way with patches for portability and to keep NYTProf working with the latest development perl versions.

Adam’s work is sponsored by The New York Times Co. Tim’s work was partly sponsored by Shopzilla.



Our local updated perl docs. These enhanced notes will be published on our next push to CPAN.


Postgres::Handler – Builds upon DBD::Pg for advanced CGI web apps


Postgres::Handler builds upon the foundation set by DBI and DBD::Pg to create a superset of methods for tying together some of the basic interface concepts of DB management when used in a web server environment. Postgres::Handler is meant to build upon the strengths of DBD::Pg and DBI and add common usability features for a variety of Internet applications.

Postgres::Handler encapsulates error message handling, information message handling, simple caching of requests through a complete iteration of a server CGI request. You will also find some key elements that hook the CGI class to the DBI class to simplify data IO to & from web forms and dynamic pages.


 # Instantiate Object
 use Postgres::Handler;
 my $DB = Postgres::Handler->new(dbname=>'products',dbuser=>'postgres',dbpass=>'pgpassword');

 # Retrieve Data & List Records
 $DB->PrepLEX('SELECT * FROM products');
 while ($item=$DB->GetRecord()) {
     print "$item->{PROD_ID}\t$item->{PROD_TITLE}\t$item->{PROD_QTY}\n";
 }

 # Add / Update Record based on CGI Form
 # assuming objCGI is an instantiated CGI object
 # if the CGI param 'prod_id' is set we update
 # if it is not set we add
 my %cgimap;
 foreach ('prod_id','prod_title','prod_qty') { $cgimap{$_} = $_; }
 $DB->AddUpdate( CGI=>$objCGI     , CGIKEY=>'prod_id',
                 TABLE=>'products', DBKEY=>'prod_id',
                 hrCGIMAP=>\%cgimap );





DBD::Pg 1.43 or greater (fixes a bug when fetching Postgres varchar[] array data)


Data Access Methods


Create a new Postgres::Handler object.

 dbname => name of the database to connect to
 dbuser => postgres user
 dbpass => password for that user
Set the ‘errortype’ data element to ‘simple’ for short error messages.

 $self->data('errortype') = 'simple';


Get/set the data hash – this is where data fields are stored for the active record.


Returns the database handle for the DB connection.


Get/set postgres user’s password.


Get/set database name. Simple string name of the database.


Get/set postgres username.


Returns the statement handle for the active record selection.

Public Methods


Adds a new record or updates an existing record in the database depending on whether or not a specific CGI parameter has been set.

Useful for processing a posted form that contains form fields that match data fields. Pre-populate the form field that contains the database key field and an update occurs. Set it to blank and a new record is added.

Your database key field should have a default value that is unique and should be set as type ‘PRIMARY KEY’. We always use serial primary key to auto-increment our keys when adding new records.

If a key is provided but it doesn’t match anything in the existing data then the update fails, unless CHECKKEY => 1 is set, in which case it will attempt to add the record.

Your CGI->DB key hash reference should look something like this: %mymap = ( tablefld_name => ‘form_name’, tablefld_ssn => ‘htmlform_ssn’ ); and is passed as \%mymap in the hrCGIMAP parameter to this function.

Even better, name your CGI form fields the same thing as your Postgres DB field names. Then you can skip the map altogether and just provide the CGISTART variable. All fields that start with the CGISTART string will be mapped. Want to map every field? Set CGISTART = ‘.’.

Parameters (Required)
 CGI       => a CGI object from the CGI:: module

 DBKEY     => the name of the key field within the table
              defaults to Postgres::Handler Object Property <table>!PGHkeyfld
              must be provided
                                  - or -
                             the <table>!PGHkeyfld option must have
              been setup when creating a new Postgres::Handler object

 TABLE     => the name of the table to play with

 CGISTART or hrCGIMAP must be set (see below)
Parameters (Optional)
 CGISTART  => map all CGI object fields starting with this string
              into equivalently named database fields
              only used when hrCGIMAP is not set

 CGIKEY    => the CGI parameter name that stores the data key
              defaults to DBKEY

 CHECKKEY  => set to 1 to perform ADD if the DBKEY is not found in the
              database

 DBSTAMP   => the name of the timestamp field within the table
              defaults to Postgres::Handler Object Property <table>!PGHtimestamp

 DONTSTAMP => set to 1 to stop timestamping
              timestamp field must be set

 hrCGIMAP  => a reference to a hash that contains CGI params as keys and
              DB field names as values

 MD5      => the name of the md5 encrypted field within the table
              defaults to Postgres::Handler Object Property <table>!PGHmd5

 REQUIRED  => array reference pointing to array that holds list of CGI
              params that must contain data

 VERBOSE   => set to 1 to set lastinfo() = full command string
              otherwise returns 'INSERT' or 'UPDATE' on successful execution

 BOOLEANS  => array reference pointing to the array that holds the list
              of database field booleans that we want to force to false
              if not set by the equivalently named CGI field

 RTNSEQ    => set to a sequence name and AddUpdate will return the value of this
              sequence for the newly added record.  Useful for getting keys back
              from new records.
 Either adds or updates a record in the specified table.

 Record is added if CGI data key [1] is blank or if CHECKKEY is set
 and the value of the key is not already in the database.

 Record is updated if CGI data key [2] contains a value.
 1 for success, get message with lastinfo()
 0 for failure, get message with lasterror()


Do DBH Command and log any errors to the log file.

Parameters (positional only)
 [0] = SQL command
 [1] = Die on error
 [2] = return error on 0 records affected
 [3] = quiet mode (don't log via carp)
 1 for success
 0 for failure, get message with lasterror


Retrieve a field from the specified table.

Parameters (required)
 DATA     => Which data item to return, must be of form "table!field"

 KEY      => The table key to lookup in the database
               Used to determine if our current record is still valid.
               Also used as default for WHERE, key value is searched for
               in the PGHkeyfld that has been set for the Postgres::Handler object.
Parameters (optional)
 WHERE   => Use this where clause to select the record instead of the key

 FORCE   => Force Reload

 HTML    => Return HTML Quoted Strings
 The value of the field.

 Returns 0 and lasterror() is set to a value if an error occurs
               lasterror() is blank if there was no error
 my $objPGDATA = new Postgres::Handler::HTML ('mytable!PGHkeyfld' => 'id');
 my $lookupID = '01123';
 my $data = $objPGDATA->Field(DATA=>'mytable!prod_title', KEY=>$lookupID);

 my $lookupSKU = 'SKU-MYITEM-LG';
 my $data = $objPGDATA->Field(DATA=>'mytable!prod_title', WHERE=>"sku=$lookupSKU");


Retrieves the record in a hash reference with uppercase field names.

Parameters (positional or named)
 [0] or -name     select from the named statement handle,
                  if not set defaults to the last active statement handle

 [1] or -rtype    type of structure to return data in
        'HASHREF' (default) - Returns a hashref via fetchrow_hashref('NAME_uc')
        'ARRAY' - Returns an array via fetchrow_array()
        'ITEM' - Returns a scalar via fetchrow()

 [2] or -finish   set to '1' to close the named statement handle after returning the data
The hashref or array or scalar on success. 0 for failure, get message with lasterror.


Retrieve the latest error produced by a Postgres::Handler object.

The error message.


Retrieve the latest info message produced by a Postgres::Handler object.

The info message.


Retrieve a named statement handle.

The handle, as requested.


Prepare an SQL statement and return the statement handle, logging errors if any.

Parameters (positional or named)
        [0] or -cmd     - required -statement
        [1] or -exec    - execute flag (PREPLE) or die flag (PREPLEX)
        [2] or -die             - die flag     (PREPLE) or null     (PREPLEX)
        [3] or -param   - single parameter passed to execute
        [4] or -name    - store the statement handle under this name
        [5] or -aparam   - An array reference of multiple values to bind to the prepared statement
1 for success


Same as PrepLE but also executes the SQL statement

Parameters (positional or named)
        [0] or -cmd     - required -statement
        [1] or -die             - die flag     (PREPLE) or null     (PREPLEX)
        [2] or -param   - single parameter passed to execute
        [3] or -name    - store the statement handle under this name
1 for success


Quote a parameter for SQL processing via the DBI::quote() function. Sets the data handle if necessary.

Semi-Public Methods

Using these methods without understanding the implications of playing with their values can wreak havoc on the code. Use with caution…


 Internal function to set data handles
 Returns Data Handle

 If you don't want the postgres username and password
 littering your perl code, create a subclass that
 overrides SetDH with DB specific connection info.


 Allows for either ordered or positional parameters in
 a method call AND allows the method to be called as EITHER
 an instantiated object OR as an direct class call.
 [0] - self, the instantiated object
 [1] - the class we are looking to instantiate if necessary
 [2] - reference to hash that will get our named parameters
 [3] - an array of the names of named parameters
       IN THE ORDER that the positional parameters are expected to appear
 [4] - extra parameters, positional or otherwise
 Populates the hash referred to in the first param with keys & values
 An object of type class, newly instantiated if necessary.
 sub MyMethod {
        my $self = shift;
        my %options;
        $self = SetMethodParms($self,'MYCLASS::SUBCLASS', \%options, [PARM1,PARM2,PARM3], @_ );
        print $options{PARM1} if ($options{PARM2} ne '');
        print $options{PARM3};
 }


 Prepare a hash reference for mapping CGI parms to DB fields
 typically used with AddUpdate() from Postgres::Handler.
 hrCGIMAP       - reference to hash that contains the map
 CGI            - the CGI object
 CGISTART       - map all fields starting with this text
 CGIKEY         - the CGI key field
 BOOLEANS       - reference to a list of boolean fields

 my @boolist = qw(form_field1 form_field2);
 $item->CGIMap(CGI => $objCGI, hrCGIMAP=>\%cgimap, CGISTART=>'cont_', CGIKEY=>'cont_id', BOOLEANS=>\@boolist);


Parameters (Named v. Positional)

Some methods accept parameters in both positional and named formats. If you use named parameters with these “bi-modal” methods, you must prefix each parameter name with a hyphen.

Positional Example
 use Postgres::Handler;
 my $DB = Postgres::Handler->new(dbname=>'products',dbuser=>'postgres',dbpass=>'pgpassword');
 $DB->PrepLEX('SELECT * FROM products');
Named Example
 use Postgres::Handler;
 my $DB = Postgres::Handler->new(dbname=>'products',dbuser=>'postgres',dbpass=>'pgpassword');
 $DB->PrepLEX(  -cmd    =>      'SELECT * FROM products'        );


Short Program
 # Instantiate Object
 use Postgres::Handler;
 my $DB = Postgres::Handler->new(dbname=>'products',dbuser=>'postgres',dbpass=>'pgpassword');

 # Retrieve Data & List Records
 $DB->PrepLEX('SELECT * FROM products');
 while (my $item = $DB->GetRecord()) {
        print "$item->{PROD_ID}\t$item->{PROD_TITLE}\t$item->{PROD_QTY}\n";
 }

 # Add / Update Record based on CGI Form
 # assuming objCGI is an instatiated CGI object
 # if the CGI param 'prod_id' is set we update
 # if it is not set we add
 my %cgimap;
 foreach ('prod_id','prod_title','prod_qty') { $cgimap{$_} = $_; }
 $DB->AddUpdate( CGI=>$objCGI     , CGIKEY=>'prod_id',
                 TABLE=>'products', DBKEY=>'prod_id');
AddUpdate Example
 # <form method="post" action="/">
 # <input type="submit" name="submit" value="submit">
 # <input type="hidden" name="chat_id" value="">
 # <input type="text" name="chat_text" value="">
 # </form>

 use Postgres::Handler;
 use CGI;
 my $DB = Postgres::Handler->new(dbname=>'products',dbuser=>'postgres',dbpass=>'pgpassword');
 my $CGI = new CGI;
 my $AOK = $DB->AddUpdate(
           CGI     => $CGI,
           DBKEY   => 'chat_id',
           TABLE   => 'chatter',
            CGISTART=> 'chat_'
  );
  print ($AOK ? 'Awesome!' : 'Fail!');


Cyber Sprocket Labs (CSL) is an advanced internet technology consulting firm based in Charleston, South Carolina. We provide custom software, database, and consulting services for small- to mid-sized businesses.

For more information visit our website at


(c) 2008, Cyber Sprocket Labs

This script is covered by the GNU GENERAL PUBLIC LICENSE.


Revision History

 v2.3 - May 2008
      Documentation cleanup.

 v2.2 - Apr 2006
      Fixed problem with SetDH database handle management

 v2.1 - Mar 2006
      Added RTNSEQ feature to AddUpdate so we can get back the key of a newly added record

 v2.0 - Feb 2006
      Moved CGI::Carp outside of the package to prevent perl -w warnings

 v1.9 - Feb 2006
      Updated Field() to prevent SEGV error when WHERE clause causes error on statement
                Field() now returns 0 + lasterror() set to value if failed execute
                            returns fldval + lasterror() is blank if execution OK

 v1.8 - Jan 2006
      Bug fix on PrepLE and PrepLEX for perl -w compatibility
                Added DoLE param to return error status (0) if the command affects 0 records '0E0'
                Added DoLE param to keep quiet on errors (do not log to syslog via carp)
                Documentation updates

 v1.5 - Nov 2005
                Fixed @BOOLEANS on AddUpdate to force 'f' setting instead of NULL if blank or 0

 v1.5 - Oct 2005
                Fixed return value error on AddUpdate()

 v1.4 - Aug 2005
      Minor patches

 v1.3 - Jul 17 2005
      Minor patches
                Now requires DBD::Pg version 1.43 or greater

 v1.2 - Jun 10 2005
      GetRecord() mods, added 'ITEM'
                test file fix in distribution
                created yml file for added requisites on CPAN

 v1.1 - Jun 9 2005
      pod updates
                Field() cache bug fix
                GetRecord() expanded, added finish option
                Moved from root "PGHandler" namespace to better-suited "Postgres::Handler"

 v0.9 - May 2 2005
      pod updates
                AddUpdate() updated, CGIKEY optional - defaults to DBKEY
                AddUpdate() updated, BOOLEANS feature added
                GetRecord() updated, added check for sth active before executing
                Field() fixed hr cache bug and data bug and trap non-set hr issue

 v0.8 - Apr 26 2005
      Fixed GetRecord() (again) - needed to check $DBI::errstr not $err

 v0.7 - Apr 25 2005
      Added error check on ->Field to ensure hashref returned from getrecord
      Added CGIMAP method
      Invoke CGIMAP from within AddUpdate if missing map
      Fixed GetRecord Return system

 v0.5 - Apr/2005
      Added DBI error trap on DoLE function
      Added named statement handles for multiple/nested PrepLE(X) capability
      Added VERBOSE mode to AddUpdate
      Added NAME to retrieved named statements via GetRecord
      Updated FIELD to use named statement handles

 v0.4 - Apr/2005
                Fixed some stuff

 v0.3 - Apr/2005
      Added REQUIRED optional parameter to AddUpdate
      Improved documentation
      Quoted DBKEY on add/update to handle non-numeric keys

 v0.2 - Mar/2005
      Added error messages to object
      Fixed issues with Class::Struct and the object properties
      Updated AddUpdate to use named parameters (hash) for clarity

 v0.1 - Dec/2004
      Initial private release


You can find the published version of Postgres::Handler under the Cyber Sprocket Labs account on CPAN.