
PHP Switch Vs If Performance

When writing a complex application such as Store Locator Plus, you will often find multiple logic branches to handle a given task. There are two oft-used methods for processing that logic: the if-else construct and the switch statement. Since I am always looking to optimize the Store Locator Plus codebase for performance (some sites do have hundreds of thousands of locations, after all), it was time to look into the performance-versus-readability trade-off between those two options.

The general consensus, though I’ve not taken the time to run performance tests with the WordPress stack myself, is that “you should use whatever makes your code easier to read and more easily maintained”. For me that means using switch statements. I find the construct much easier to extend without causing inadvertent side effects, something I’ve learned in 20-plus years of working on code teams and in long-run projects like Store Locator Plus.

From a pure performance standpoint, an if-else can be marginally faster when performing fewer than 5 logic comparisons.

PHP If Else Statement
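A minimal sketch of the kind of if-else chain being compared; the action values are purely illustrative, not actual Store Locator Plus code:

$action = 'update';

if ( $action === 'add' ) {
    echo 'adding a location';
} elseif ( $action === 'update' ) {
    echo 'updating a location';
} elseif ( $action === 'delete' ) {
    echo 'deleting a location';
} else {
    echo 'unknown action: ' . $action;
}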

Switch statements will often be faster at or near 5 logic comparisons because code optimization within C, likely carried forward into the PHP pseudo-compiler, will often turn the 5+ logic branches of a switch statement into a hash table. Hash tables tend to be faster, with all branches of the code having equal access time. Statistically speaking, a large number of iterations will favor the equal-access-time model over the “first is fastest” model of an if-else.

PHP Switch Statement
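The same branching written as a switch, again purely illustrative:

switch ( $action ) {
    case 'add':
        echo 'adding a location';
        break;
    case 'update':
        echo 'updating a location';
        break;
    case 'delete':
        echo 'deleting a location';
        break;
    default:
        echo 'unknown action: ' . $action;
}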

Possibly faster and always easier to extend and read, switch will be my “weapon of choice” whenever I have more than a simple 2-state if/else logic branch to test.


Analyzing WordPress PHP Memory Consumption

XDebug Banner

This weekend I have been processing a large 200,000-location data file for a Store Locator Plus customer. This is one of the larger files I have processed on my test system, and it is the first file over 60,000 locations I’ve processed since Store Locator Plus 4.2 and WordPress 4.x were released. This large file processing and the geocoding required are taxing several systems in the Store Locator Plus hierarchy. WordPress, Google OEM API calls, and the locator are all showing their weak spots with this volume of data processing. They can all handle it to some degree, but maximizing efficiency is the key.

The temporary solution to most of the issues is to increase memory and process limits.   These are some of the key findings, as posted on the CSV Import documentation pages for Store Locator Plus:

Check your php.ini post_max_size setting if doing a direct file import versus a cron URL-based import. post_max_size is typically set to 8M (8 MiB) on most servers. This is typically enough for around 25,000 locations, but it depends on how long your descriptions are and how many data fields you have filled out. SLP 4.2.41 will warn you if you try to upload a file larger than your post_max_size limit.

Check your php.ini memory_limit setting and make sure it is large enough to handle the WordPress overhead plus twice the size of your CSV file. The WordPress database interface and the CSV file processing will consume lots of memory. The more plugins, widgets, and advanced theme features you have, the more memory WordPress will use and the more PHP memory will leak over time. A setting of 256M is enough for approximately 15,000 locations.

Check your wp-config WP_MEMORY_LIMIT. You may need to add this define to wp-config.php: define('WP_MEMORY_LIMIT', '256M'). The number needs to be equal to or less than the php.ini memory_limit. It is the WordPress-specific memory limit and works with the php.ini memory_limit.

Check your wp-config WP_MAX_MEMORY_LIMIT. You may need to add this define to wp-config.php: define('WP_MAX_MEMORY_LIMIT', '256M'). This is the WordPress admin interface memory limit and works like WP_MEMORY_LIMIT for admin pages.

Set Duplicates Handling to Add, especially if you know you do not have duplicate locations in your data. SLP 4.2.41 further improves performance in ‘add’ mode by eliminating extra data reads from the database.

Set Server-To-Server speed to Fast under the General Settings tab unless you are on a shared host or experience a large number of uncoded locations during import.

Set the PHP Time Limit to 0 (unlimited) under the General Settings tab.   For hosting providers that allow your web apps to change this, the unlimited value will let the import run to completion.

Keep in mind Google limits you to 2500 latitude/longitude (geocoding) lookups per 24 hours per server IP address.  If you are on a shared host you share that limit with all other sites on that host.
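A quick way to confirm which limits the server is actually enforcing, rather than what you think php.ini says, is a small throwaway PHP snippet like the sketch below. It only uses core ini_get() calls; the WordPress constant checks assume it runs after wp-config.php has loaded.

<?php
// Report the PHP limits that matter for large CSV imports.
printf( "post_max_size:       %s\n", ini_get( 'post_max_size' ) );
printf( "upload_max_filesize: %s\n", ini_get( 'upload_max_filesize' ) );
printf( "memory_limit:        %s\n", ini_get( 'memory_limit' ) );
printf( "max_execution_time:  %s\n", ini_get( 'max_execution_time' ) );

// WordPress-specific limits, when run inside a WordPress load.
if ( defined( 'WP_MEMORY_LIMIT' ) )     printf( "WP_MEMORY_LIMIT:     %s\n", WP_MEMORY_LIMIT );
if ( defined( 'WP_MAX_MEMORY_LIMIT' ) ) printf( "WP_MAX_MEMORY_LIMIT: %s\n", WP_MAX_MEMORY_LIMIT );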

However, even with all of these settings tweaked to fairly high values on my VirtualBox development system running on a MacBook Pro Retina host, the 4GB of RAM allocated to WordPress still is not enough. The system eventually runs out of memory when the file gets close to the 45,000-location mark. Luckily the “skip duplicate addresses” option allows the process to continue. The “out of memory” error still rears its ugly head in the wpdb WordPress database engine and is a problem for handling larger files.

Enter Xdebug and memory profiling. Somewhere buried in the Store Locator Plus code, WordPress code, PHP MySQL interface, or PHP core engine there is a memory leak. In a complex application environment, finding the leak is going to be a monumental task. It may not be something I can fix, but if I can mitigate the memory usage when processing large files it will help enterprise-class sites use Store Locator Plus with confidence.
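Before reaching for full traces, a cheap first pass is to log memory deltas around suspect steps with PHP’s built-in memory functions. This is a rough sketch only; the helper name and the wrapped step are placeholders, not actual Store Locator Plus code.

<?php
// Log how much memory a labeled step consumed. Purely illustrative.
function slp_log_memory_delta( $label, callable $step ) {
    $before = memory_get_usage( true );
    $step();
    $after = memory_get_usage( true );
    error_log( sprintf( '%s: %+d bytes (peak %d)', $label, $after - $before, memory_get_peak_usage( true ) ) );
}

// Example: wrap one unit of work inside the import loop.
slp_log_memory_delta( 'process csv row', function () {
    // ... geocode and save a single location here ...
} );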

Getting Xdebug On CentOS 7

If you follow my blog posts on development you will know that I run a self-contained WordPress development environment. The system uses Vagrant to fire up a VirtualBox guest that runs CentOS 7 with GUI tools along with a full WordPress install including my plugin code. This gives me a 2GB “box file” that I can ship around and have my full self-contained development environment on any system capable of running VirtualBox. Here is how I get Xdebug connected to my local Apache server running WordPress.

Install xdebug via yum:


# sudo yum install php-pecl-xdebug.x86_64

Turn on xdebug in the php.ini file


# find / -name xdebug.so

/usr/lib64/php/modules/xdebug.so

# sudo vim /etc/php.ini

zend_extension="/usr/lib64/php/modules/xdebug.so"

Check if xdebug is installed:


# php --version

... PHP 5.4.16
.... with xdebug v2.2.7

Enable some xdebug features by editing php.ini again.

Read about XDebug Profiling.

Read about XDebug Tracing.


# sudo vim /etc/php.ini

xdebug.default_enable=1  ; turns on xdebug any time a PHP page loads on this local server

xdebug.idekey="PHPSTORM" ; in case I turn on the automated listener for built-in PHP Storm debugging/tracing

xdebug.profiler_enable = 1 ; turn on the profiler which creates cachegrind files for stack trace/CPU execution analysis

xdebug.profiler_enable_trigger = 1 ; turn on a cookie "hook" so third party browser plugins can turn the profiler on/off with a bookmark link

xdebug.profiler_output_dir = "/var/www/xdebug" ; make sure this directory is writable by apache and readable by your local user

xdebug.auto_trace = 1 ; when any page loads, enable the trace output for capturing memory data

xdebug.show_mem_delta = 1 ; this is what tells trace to trace memory consumption changes on each function call

xdebug.trace_output_dir = "/var/www/xdebug" ; same idea as the profiler output, this will be where trace txt files go

Restart the web server to get the php.ini settings in effect:


# sudo service httpd restart

At this point I can open any WordPress page, including the admin pages. Shortly after the page has rendered, the web server will finish the processing through xdebug and a trace* file will appear in /var/www/xdebug. I can now see the stack trace of the functions that were called within WordPress along with the memory consumption at each call. This is the start of tracking down which processes are eating up RAM while loading a large CSV file, without adding thousands of debugging output lines to the web app.

Be warned: if you are tracing large repetitive processes your trace file can be many GiB in size. Make sure you have the disk space to run a full trace.


WordPress Boolean Options and JavaScript : Handle With Care

JavaScript WordPress Banner

After chasing my tail for the past 5 days I finally uncovered the source of a bug in my code that was causing Store Locator Plus to not initialize properly on fresh installs. The culprit? How WordPress handles booleans. In particular, how WordPress handles the persistent storage and the JavaScript localization of booleans.

The important lesson:

WordPress option storage is done using STRINGS*.
* There is a caveat here: SERIALIZED options are stored with the data type; single options (non-arrays) are stored as strings.

WordPress JavaScript localization uses PHP and JavaScript DATA TYPES.

 

What does this mean for plugin coding?

If you are passing WordPress options that have been fetched from the wp_options table with get_option to JavaScript via the wp_localize_script() method, your booleans that are set to true/false will be passed to JavaScript as ‘1’/’0′. On the other hand, if you are setting a PHP variable to true/false and passing that to JavaScript, it is set as a JavaScript boolean value of true/false.
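A minimal sketch of the two paths; the script handle, option name, and booleanvar key are made up for illustration:

// Inside a wp_enqueue_scripts / admin_enqueue_scripts callback.
$defaults = array( 'booleanvar' => true );              // PHP boolean default
$saved    = get_option( 'myplugin_options', array() );  // booleans that went through string storage come back as '1'/'0'
$options  = array_merge( $defaults, (array) $saved );

wp_enqueue_script( 'myplugin_js', plugins_url( 'my.js', __FILE__ ) );
wp_localize_script( 'myplugin_js', 'myplugin', array( 'options' => $options ) );
// In JavaScript, myplugin.options.booleanvar is boolean true when the PHP default is used,
// but the string '1' (or '0') when the stored string value wins.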

The difference is important.

In JavaScript you write this for booleans:

if ( myoption.booleanvar ) { alert('true'); }

However you write this for the string version:

if ( myoption.booleanvar === '1' ) { alert('true'); }

How did this break my code?

When my plugin environment loads for the first time, it uses an array to store option defaults. Since the user has not yet set or saved the plugin options, the get_option call returns an empty array and those defaults are employed. However, my option array looked something like this:

$options = array( 'booleanvar' => true );

To further compound the problem, I introduced yet ANOTHER issue by converting what was previously a single-element option to a serialized array in the latest patch release. Serialized data is MUCH faster when you are fetching/setting more than a single option for your plugin. To be a good WordPress citizen I’ve been slowly migrating all my legacy options to a single serial array. HOWEVER, the update did something like this:

$old_option = get_option('booleanvar');  // singular boolean options are stored as strings '1'/'0'
$options = array ( 'booleanvar' => $old_option ); // $old_option is a string
update_option( 'my_option_setting', $options);
delete_option('booleanvar');

My code is a lot “deeper” than shown here, but the basic idea was to fetch the pre-existing value of the singular option and store it in my options array. I save it back out to persistent storage and blast the old non-serialized option out of the database. This conversion works well for existing sites because the singular option variables store and retrieve boolean data as a STRING. Thus my option comes back as ‘1’/’0′ depending on how the user left it and all is good in JavaScript-land.

HOWEVER, if you note above the NEW DEFAULT is to set booleanvar as a boolean with a default value of true. When storing a compound option (named array) with update_option it stores and retrieves values using the DATA TYPE.

$options = array( 'booleanvar' => true );
update_option( 'my_option_settings', $options );  // serial options are stored as proper data types
$options = get_option( 'my_option_settings');

Here the $options['booleanvar'] is set to a proper boolean.

As you can probably guess, this creates inconsistency in the JavaScript.

Why? Because my JavaScript was written for the most common use case, which is one where users have set and saved the plugin options at least once. The JavaScript code uses the string comparison shown above, ( myoption.booleanvar === '1' ). It works as expected every time after the user has saved the options at least once.

Why does it work after options are saved? Because WordPress returns the boolean variable as the string value ‘1’. You can see this for yourself by playing with the update_option() and get_option() functions in WordPress to store/get a boolean true value. Since my code uses the stored values when they are set, my booleanvar is not taking on the default true setting after the first save; it is coming back as ‘1’, which is what the JavaScript expects to see.
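You can verify the single-option behavior with a few throwaway lines inside any WordPress install; the option names here are just examples:

// Single (non-array) option: the boolean comes back as a string.
update_option( 'my_bool_test', true );
var_dump( get_option( 'my_bool_test' ) );   // string(1) "1"

// Serialized (array) option: data types are preserved.
update_option( 'my_array_test', array( 'booleanvar' => true ) );
var_dump( get_option( 'my_array_test' ) );  // array with bool(true) inside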

The Lesson?

Use string values of ‘1’ and ‘0’ for your WordPress option values any time you are not using 100% serialized option values. It ensures consistency throughout your application by following the precedent set by the WordPress get_option function.

Yes, you can get away with using boolean true/false values. It even makes the code more readable, IMO. However, it can cause issues if you ever decide you need to pass those values along to your JavaScript functions with wp_localize_script() and you are not 100% certain you are using pure non-translated boolean variables throughout.

If you are ever uncertain of your true data types, add a gettype($varname) debugging output to your plugin code. Remember that simple print_r statements will convert booleans to strings as well; use gettype to be certain of what flavor of variable you have inside PHP.
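For example:

$flag = true;
print_r( $flag );        // prints "1" - indistinguishable from the string '1'
echo gettype( $flag );   // prints "boolean"
var_dump( $flag );       // prints bool(true) - the type is unambiguous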


Improved Grunt Tasks for the Vagrant WordPress Dev Box

Grunt WordPress Dev Kit

Last week I found myself having to rebuild my WordPress plugin development box after a “laptop fiasco”. While it was a lot of work, it feels as though I am in a better position to not only recover my environment quickly but also distribute it to other developers who are interested in assisting with plugin development.

If you are interested, you can read more about it in the related WordPress Workflow and WordPress Development Kit articles.

This morning I realized that having a new, almost-fully-configured Vagrant box for my WordPress Development Kit allows me to make assumptions in my Grunt tasks. While it would be more flexible to create options-based tasks where users can set their own configuration for things like MySQL usernames and passwords, the WP Dev Kit Vagrant box assumption allows me to bypass that for now and come back to it when time allows. Fast turnaround and fewer interruptions in my already-busy workflow are paramount this week.

Today’s WordPress Dev Kit Updates

The official tag I’ve assigned to the newest WordPress Dev Kit is version 0.5.0.  Here is what has been added.

WordPress Database Reset

One of the tasks I do fairly often is to “clear the data cruft” from my development box WordPress tables.  I  accomplish this by dropping the WordPress database and recreating it.

The Vagrant box makes this far easier, as I know that when I spin up the WP Dev Kit Vagrant box it already has the WordPress MySQL tables set up. I also know the username and password. As such I can execute a simple drop/create of the database, as the privileges are already in place in the MySQL metadata and will carry over. Thus I only need to execute a single mysql-cli command to get the data reset.

To get this working in Grunt I added the grunt-ssh module and created a ‘resetdb’ target.

I can now reset my WordPress tables with a simple grunt command:


$ grunt shell:resetdb

Online Documentation

The other change I made today will help me remember how the heck all this stuff works. Now that the dev kit has grown to a couple of commands, I know I will soon be forgetting the nuances of certain build and workflow processes. As I started creating my own Markdown files I realized that Bitbucket has a system for using .md files on the repository wiki. The easy solution was to add the Bitbucket wiki as a submodule of the WP Dev Kit repository and edit the files there. Doing so means that any doc update will also be published immediately when pushed back to the repo at the WP Dev Kit Bitbucket Wiki.

Now back to getting the Store Locator Plus and Enhanced Results first-pass testing run and prerelease copies published for my Premier Members.


Improving WordPress Plugin Development with Sass

SLP Sass Banner

If you’ve been following along since my WordCamp Atlanta trip this spring, you know that I’ve been working on automating my WordPress plugin development and production process. If you missed it, you can read about it in the WordPress Workflow and WordPress Development Kit articles. Since I needed to patch some basic CSS rules in my Store Locator Plus themes, the plugin “sub-themes” that style the store locator interface within a page, I decided now was the time to leverage Sass.

Sass Is In The House

It was one of my first sessions at WordCamp Atlanta and I KNEW it was going to be part of my automation process. Sass is a CSS pre-processor. Store Locator Plus has its own “theme system”, a sort of plugin sub-theme that lives within the over-arching WordPress site theme. The SLP themes allow users to tweak the CSS that renders the search form, map, and results of location searches to create in-page layouts that better fit their WordPress theme layout.

Until this past release it was a very tedious process to update themes or create a new theme. In the latest release there are some 30-odd SLP theme files. The problem is that when I find an over-arching CSS issue, like the update to Google Maps images that rendered incorrectly on virtually EVERY WordPress theme in existence, fixing it was a HUGE PAIN. I was literally editing 30 files and hoping my cut-and-paste job went well. Yes, I could have used CSS include statements, but that slows things down by making multiple server requests to fetch each included CSS file. Since the store locator is the most-visited page on many retail sites, performance cannot be a secondary consideration. Sass deals with that issue for me and brings some other benefits with it.

There are PLENTY of articles that describe how to install Sass, so I am not going to get into those details here.  On CentOS it was a simple matter of doing a yum install of ruby and ruby gems and a few other things that are required for Sass to operate.  Google can help you here.

My Sass Lives Here…

For my current Sass setup I am letting NetBeans take care of the pre-compiling for me.    It has a quick setup that, once you have Sass and related ruby gems installed, will automatically regenerate the production css files for you whenever you edit a mixin, include, or base SCSS file.

NetBeans Sass Setup

I combine this with the fact that the assets directory is ignored by the WP Dev Kit publication and build tasks to create a simple production environment for my CSS files.   I store my SCSS files in the ./assets/stylesheets directory for my plugin.   I put any includes or mixin files in a ./assets/stylesheets/include subdirectory.     I configure NetBeans to process any SCSS changes and write out the production CSS files to the plugin /css directory.

The first thing I did was copy over a few of my .css files to the new stylesheets directory and changed the extension to .scss as I prepared to start building my Sass rules.

Includes

I then ripped out the repeated “image fix” rules that existed in EVERY .css file and created a new ./stylesheets/include/_map_fix.scss file. This _map_fix file now becomes part of EVERY CSS file that goes into production by adding the line @import 'include/_map_fix' at the top of each SLP theme .scss file. Why is this better? In the past, when Google or WordPress made changes, I had to edit 30+ files. Now I can edit one file if a map image rule changes that has to be propagated to all of the CSS files. However, unlike standard CSS includes, Sass preprocesses the includes and creates a SINGLE CSS file. That means the production server makes ONE file request instead of two. It is faster.

SLP Map Fix Include

As I iterated through this process I ended up with a half-dozen CSS rules that appear in MOST of my CSS files. Since not all of the rules appear in all of my plugin theme files, I ended up with a half-dozen separate _this_or_that SCSS files that can be included in a mix-and-match style to get the right rule set for each theme. I also created a new _slp_defaults include file that does nothing more than include all of those half-dozen rules. Nearly half of the current CSS files use all of the rules that were “boiled out of” the CSS files.

Store Locator Plus Default Includes

Mixins

Along the way I learned about mixins. At first I was a bit confused about the difference between include files and mixins. Both are “pulled in” using similar commands in SCSS, @import for the “include files” and @include for the mixins, but what was the difference? While you can likely get away with “faking it” and having mixins act like includes, they serve different purposes. I like to think of a mixin as a “short snippet of a CSS rule”.

A common example is a “set the border style” mixin. In the simplest form it can set the border style with a rule for each of the browser variants. This rule is not a complete CSS rule but rather a portion of a CSS rule that may do other styling AND set a border. The mixin includes the -moz and other vendor-specific rule sets to accommodate each browser. Rather than clutter up a CSS entry with a bunch of border-radius settings, use a mixin and get something like:

.mystyle {
   @include mixin_border_radius;
   color: blue;
}

That is a very simplistic way of using a mixin. One advantage is that if you decide to change the default border radius settings in all of your CSS files you can edit a single mixin file. However that is not a typical use. Yes, you can create subsets of CSS rules, but it really gets better when you add parameters.

At a higher level a mixin is more than just a “CSS rule snippet”. It becomes more like a custom PHP function. In typical coder fashion, I snarfed this box-shadow mixin somewhere along the way:

// _csa_mixins.scss

@mixin box-shadow( $horiz : .5em , $vert : .5em , $blur : 0px , $spread : 0px , $color : #000000 ){
    -webkit-box-shadow: $horiz $vert $blur $spread $color;
    -moz-box-shadow: $horiz $vert $blur $spread $color;
    box-shadow: $horiz $vert $blur $spread $color;
}
@mixin csa_default_map_tagline {
    color: #B4B4B4;
    text-align: right;
    font-size: 0.7em;
}
@mixin csa_ellipsis {
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}

Since that rule is part of my default _csa_mixins that I tend to use in multiple places I use it as follows:


@import 'include/_csa_mixins';
@import 'include/_slp_defaults';

//blah blah boring stuff here omitted

// Results : Entries (entire locations)
.results_entry {
    padding: 0.3em;
    @include box-shadow(2px, 4px, 4px, 0px, #DADADA);
    border-bottom: 1px solid #DDDDDD;
    border-right: 1px solid #EEEEEE;
}

Notice how I now call the mixin with parameters. These are passed to the mixin and the Sass preprocessor calculates the final rules to put in the CSS file. This makes the mixin very flexible. I can create all sorts of different box-shadow rules in my CSS files and have the cross-browser CSS generated for me. No more editing a dozen box-shadow entries every time I want to change a shadow offset.

Here is what comes out in the final production CSS when using the above mixin. You can see where the parameters are dropped into the .results_entry CSS rule:

.results_entry {
  padding: 0.3em;
  -webkit-box-shadow: 2px 4px 4px 0px #dadada;
  -moz-box-shadow: 2px 4px 4px 0px #dadada;
  box-shadow: 2px 4px 4px 0px #dadada;
  border-bottom: 1px solid #DDDDDD;
  border-right: 1px solid #EEEEEE; }

This is only the start of my journey with Sass and I’ve barely scratched the surface. However, I can already see the benefits that are going to come from using Sass. In fact I already used it to fix a problem with cascading menus where one of the SLP theme files did not contain a rule set. Rather than copy-paste from another theme file that contained the proper rules, I only needed to add @import 'include/_slp_tagalong_defaults' and the problem was fixed.

Going forward Sass will not only increase my throughput in CSS development but also improve the quality of the final product that reaches the customer.

My first SLP theme file built with these new Sass files, Twenty Fourteen 01, is only a 136-line CSS file with LOTS of whitespace and comments. When the final processing is finished it has all of the rules necessary to support the current add-on packs and style them nicely for the WordPress Twenty Fourteen theme in all major browsers.

A new SLP Theme: Twenty Fourteen Rev 01

 


WordPress Dev Kit : Grunt 0.3.0 and Plugin 0.0.2

Grunt WordPress Dev Kit

More refinements have been made this week to my WordPress Workflow and related WordPress Development Kit. With new products going into production shortly and some older products coming out with new releases, I realized I needed a more efficient way to publish prerelease copies. As part of the Premier membership program I am trying to get stable prerelease products into the hands of those Premier members who want them. Some members like to test new releases or try out new features on their test systems before they come out. It allows them to plan for future updates and provides an opportunity for feedback and updates before the new version is released. A win-win for the Premier member and for Charleston Software Associates.

In order to support a formal prerelease and production configuration I realized I needed to be able to track two different versions and release dates separately.   Following the general format presented in other Grunt examples, this meant setting up new sub-sections within the plugins.json pluginMeta structure.   The new format looks something like this:

"wordpress-dev-kit-plugin": {
"production": {
"new_version": "0.0.02",
"last_updated": "2014-04-03"
},
"prerelease": {
"new_version": "0.0.03",
"last_updated": "2014-04-04"
},
"publishto" : "myserver"
}

Changing the structure meant updating both the Gruntfile.js for the development kit as well as the parser that is in the WordPress Development Kit plugin. The changes were relatively minor to address this particular issue, but I did learn some other things along the way.

Tasks and Targets

In my own Grunt tasks I had been calling one of my parameters in my build sequence the “type”, as in the “build type”. However the configuration file examples online often talk about a “target”. A target would be something like “production” or “prerelease” that shows up in a configuration block like this one:

// sftp
//
sftp: {
    options: {
        host: '<%= myServer.host %>',
        username: '<%= myServer.username %>',
        privateKey: '<%= myServer.privateKey %>',
        passphrase: '<%= myServer.passphrase %>',
        path: '<%= myServer.path %><%= grunt.task.current.target %>/',
        srcBasePath: "../public/<%= grunt.task.current.target %>/",
        showProgress: true
    },
    production: { expand: true, cwd: "../public/<%= grunt.task.current.target %>/", src: ["<%= currentPlugin.zipbase %>.zip","plugins.json"] },
    prerelease: { expand: true, cwd: "../public/<%= grunt.task.current.target %>/", src: ["<%= currentPlugin.zipbase %>.zip","plugins.json"] }
},

I have updated my scripts and documentation terminology to refer to this parameter as the “target” to follow convention.

Simplify Configuration With grunt.task.current.target

I learned a new trick that helps condense my task configuration options. In one of my interim builds of the WordPress Dev Kit I had something that looked more like this:

// sftp
//
sftp: {
    options: {
        host: '<%= myServer.host %>',
        username: '<%= myServer.username %>',
        privateKey: '<%= myServer.privateKey %>',
        passphrase: '<%= myServer.passphrase %>',
        showProgress: true
    },
    production: {
        expand: true,
        cwd: "../public/<%= grunt.task.current.target %>/",
        src: ["<%= currentPlugin.zipbase %>.zip","plugins.json"],
        path: '<%= myServer.path %>production/',
        srcBasePath: "../public/production/"
    },
    prerelease: {
        expand: true,
        cwd: "../public/<%= grunt.task.current.target %>/",
        src: ["<%= currentPlugin.zipbase %>.zip","plugins.json"],
        path: '<%= myServer.path %>prerelease/',
        srcBasePath: "../public/prerelease/"
    },
},

A bit repetitive, right? I found you can use the variable grunt.task.current.target to drop the current task name as a string into a configuration directive:

// sftp
//
sftp: {
    options: {
        host: '<%= myServer.host %>',
        username: '<%= myServer.username %>',
        privateKey: '<%= myServer.privateKey %>',
        passphrase: '<%= myServer.passphrase %>',
        showProgress: true
    },
    production: {
        expand: true,
        cwd: "../public/<%= grunt.task.current.target %>/",
        src: ["<%= currentPlugin.zipbase %>.zip","plugins.json"],
        path: '<%= myServer.path %><%= grunt.task.current.target %>/',
        srcBasePath: "../public/<%= grunt.task.current.target %>/"
    },
    prerelease: {
        expand: true,
        cwd: "../public/<%= grunt.task.current.target %>/",
        src: ["<%= currentPlugin.zipbase %>.zip","plugins.json"],
        path: '<%= myServer.path %><%= grunt.task.current.target %>/',
        srcBasePath: "../public/<%= grunt.task.current.target %>/"
    },
},

Now that the prerelease and production path and srcBasePath variables are identical, they can be moved into the top options section.

Now if I can just figure out how to have shared FILE configurations which define the current working directory (cwd), source (src), and destination (dest) file sets, I could eliminate ALL of the settings in the production and prerelease configuration blocks and leave them with a simple “use the defaults” setup like this:

// sftp
//
sftp: {
    options: {
        host: '<%= myServer.host %>',
        username: '<%= myServer.username %>',
        privateKey: '<%= myServer.privateKey %>',
        passphrase: '<%= myServer.passphrase %>',
        path: '<%= myServer.path %><%= grunt.task.current.target %>/',
        srcBasePath: "../public/<%= grunt.task.current.target %>/",
        showProgress: true
    },
    production: { },
    prerelease: { },
},

Maybe someday.

Simplify Configuration With Shared Variables

Another trick I learned is that common configuration strings can be put at the top of the configuration block and re-used. This is alluded to on the Grunt tasks configuration page, but they never expound on how to use the common configuration variables. Here is how it works: define the variable at the top of the configuration block, then reference that variable inside a string like '<%= my_var %>'. Here is my example with some “fluff” missing from the middle:

// Project configuration.
grunt.initConfig({

    // Metadata.
    currentPlugin: currentPlugin,
    myServer: myServer,
    pkg: grunt.file.readJSON('package.json'),
    my_plugin_dir: '/var/www/wpslp/wp-content/plugins/<%= currentPlugin.slug %>',
    my_src_files: [
        '**',
        '!**.neon',
        '!**.md',
        '!assets/**',
        '!nbproject/**'
    ],

    // compress
    //
    compress: {
        options: {
            mode: 'zip',
            archive: '../public/<%= grunt.task.current.target %>/<%= currentPlugin.zipbase %>.zip',
        },
        prerelease: { expand: true, cwd: '<%= my_plugin_dir %>', src: '<%= my_src_files %>' },
        production: { expand: true, cwd: '<%= my_plugin_dir %>', src: '<%= my_src_files %>' },
    },

In this example you can see how I’m using the my_plugin_dir variable to set the path to the plugins I am working on on my dev box, and my_src_files to list the files I want to add (or ignore) when pushing my development plugin directories to a zip file or the WordPress svn repo for publication.

This has simplified a lot of task configuration entries in my custom grunt tasks script.

That, combined with smarter configuration blocks in areas like the SFTP node module, has simplified my Grunt configuration, which will make it less error-prone and easier to maintain going forward.

Back to coding…


WordPress Add JavaScript to Specific Admin Page

Huh?  “wordpress add javascript to specific admin page”, what the heck is that?   That is the first thing I “googled” when I discovered I was “doing it wrong” so I could learn how to do it right.

As a plugin developer it is one of the more important principles of plugin development. It is one that took me far too long to learn and, based on all the things that break when you install various themes and plugins, I am not alone in that department.

Why It Is Important

Why is this so important? Because doing it wrong means you load your script on EVERY PAGE in the worst-case scenario, or on EVERY ADMIN PAGE in the half-right scenario.

Doing it wrong has two notable side effects, neither of which is a good thing:

  1. It slows down page loads unnecessarily. Think about it: the server reads a file from disk (slow), loads it into memory, and does nothing with it. More stuff in memory = higher chance of paging (putting memory blocks on disk) = more slowness. Unless you are running a server on solid state drives (SSD), disk I/O is the worst possible thing you can be doing in terms of performance.
  2. It breaks other scripts. Not always, but often. Your script manipulates the browser JavaScript processing stack. It loads stuff into memory. It can change the logic flow or the DOM. Many times it has unintended consequences, all for a script that will not run in most cases.

What you really want is to load the JavaScript (or CSS for that matter) onto YOUR admin pages only.

If you are coding plugins DO NOT LOAD YOUR CSS or JavaScript globally!

Use The Hooks

Easy. There are WordPress hooks specifically designed for this. The JavaScript-centric hook is called ‘admin_print_scripts-<handle>’. That <handle> part is important. It is unique to YOUR admin pages and is how you get stuff to happen on JUST your admin page.

For reference, if you are doing CSS stuff there is also an equivalent ‘admin_print_styles-<handle>’ hook.

However, I am going to take a slightly different route. MOST times you are likely adding both CSS and JavaScript to your admin page, not one or the other. As such I don’t want to write TWO hooks to load my stuff, though in a larger system I may do so for clarity. But I’m a lazy programmer, and while I don’t like super-long multi-faceted functions, I feel this case is simple enough to glom the script and CSS together. So I’m going to use the ‘admin_head-<handle>’ hook instead. I feel that is a name that makes sense for loading up both together, especially if my JavaScript is not dependent on styles being in place first.

Curious on how this all hooks into WordPress?

Look in the admin-header.php file in the wp-admin subdirectory.  You will find this call sequence right at the top:

do_action('admin_enqueue_scripts', $hook_suffix);
do_action("admin_print_styles-$hook_suffix");
do_action('admin_print_styles');
do_action("admin_print_scripts-$hook_suffix");
do_action('admin_print_scripts');
do_action("admin_head-$hook_suffix");
do_action('admin_head');

General Rules

Some general rules first.

NEVER hard-code JavaScript output in your apps. Create a .js file and enqueue it. Use wp_localize_script to pass in variables from your PHP script.

Always develop with debug mode on.

Always get a page handle when creating an admin page; you need that to invoke the hook.

Always develop with debug mode on.

Always use classes.  Do not use procedural code.  It sucks.

Always develop with debug mode on.

Seriously. Turn on full debugging in your development environment. There are far too many (90% I’d guess) plugins out there that are barfing warnings all over server logs every day. Multiply that by millions of installs. Times thousands of visitors. You’ve got billions of lines of warning messages spewing forth all over the Internet and server logs every day. No wonder my Internet connection is slow as hell some days.

The Simplified Version

While this technique may not be perfect, it gives you the general construct you need to do it right. The short version shows you the concept; it is not cut-and-paste code. Go to the WordPress Codex and look this stuff up.

This is the “down and dirty” version using a class construct as a namespace only. In the real world you will want to invoke the class as an object and avoid the static declarations and direct calls, but hey… it’s another step toward “don’t pollute other plugins or core WordPress with my crap” nirvana.

class MyPlugin {
    static function add_my_pages() {
        $handle = add_options_page(...);   // returns the page handle (hook suffix) for your admin page
        add_action( "admin_head-$handle", array( 'MyPlugin', 'loadMyJS' ) );
    }

    static function loadMyJS() {
        // plugin_dir_url() gives the URL (not the filesystem path) to this plugin's directory
        wp_enqueue_script( 'myjs', plugin_dir_url( __FILE__ ) . 'my.js' );
    }
}
add_action( 'admin_menu', array( 'MyPlugin', 'add_my_pages' ) );

So that is the “quick and dirty” version.

The whole thing starts with the add_action(‘admin_menu’) call.   This runs whenever the WordPress dashboard loads up the admin pages.

It then calls the add_my_pages function within the MyPlugin class.

MyPlugin::add_my_pages() will build our admin page by using the add_options_page() function of WordPress. If you look that up you will discover what parameters to pass to make it build your admin page. You will likely be adding a “createAdminPage()” method to the MyPlugin class, but that is beyond the scope of this article. The important point here is that when WordPress connects your plugin admin page to the system, it returns a unique handle for that page. You need this for the next step.

MyPlugin::add_my_pages() then tells WordPress that whenever your admin page loads, it should fire off the loadMyJS() method of your plugin. How? WordPress has a special hook called admin_head-<something>. That SOMETHING is the handle that you got from the add_options_page() call. That hook is ONLY called when your admin page loads, which means the next step ONLY happens when your admin page renders, not on every single page of the site.

When your admin page loads, MyPlugin::loadMyJS() fires. This uses the standard WordPress enqueue scripts method to load up your JavaScript and put it in the header of the admin page. This ensures your JavaScript only loads when you need it.

Perfect.

So that is the general process.   Go forth and learn how to incorporate this in your plugins.  Then teach others.

The entire WordPress community thanks you for it.

Oh… and the CSS stuff I mentioned?  Rename “loadMyJS” to “loadMyJSandCSS” for clarity, then throw in the wp_enqueue_style() calls.
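A minimal sketch of that combined loader, assuming the same MyPlugin class and hypothetical my.js and my.css files sitting in the plugin directory:

    static function loadMyJSandCSS() {
        wp_enqueue_script( 'myjs',  plugin_dir_url( __FILE__ ) . 'my.js'  );
        wp_enqueue_style(  'mycss', plugin_dir_url( __FILE__ ) . 'my.css' );
    }

Hook it up exactly as before: add_action( "admin_head-$handle", array( 'MyPlugin', 'loadMyJSandCSS' ) );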


Geeking Out With Netbeans and PHPDoc

I’ve been playing with phpDoc and NetBeans a lot lately. I’ve found that having well-documented code using the phpDocumentor 2 format makes for a much more efficient coding environment. For one thing, if you have NetBeans set up properly and are using PHP classes that are fully documented, writing a line of code is as simple as typing a few characters. NetBeans will find the objects you are looking for and list the properties and methods as you type.

That makes for some fast coding when typing something like $this->plugin->currentLocation->MakePersistent(). When everything is set up properly and the classes are properly documented, I can type $t<tab>p<tab>c<down><tab>M<enter>. While it may look like a lot of typing when spelled out, essentially I am typing “$ttptcdtme”, or 10 characters instead of somewhere around 70. Being 7x as efficient when writing thousands of lines of code is a HUGE bonus.

GitHub Markdown Docs for Code

Auto-complete shortcuts alone are one good reason to use a strong IDE like NetBeans along with a good code commenting/documentation solution like phpDocumentor 2. However, tonight I found a cool new trick that I stumbled upon while trying to generate code documentation via GitHub. I had learned a couple of weeks ago that I could use the command-line PHPDoc tool and an open source GitHub project called PHPDocMD to convert my code comments to XML and convert that to the markdown (MD) that GitHub likes. It creates OK results that are out on the public code repositories at GitHub.

So what is that new trick?  Well, there are 2 distinct parts.

Autogenerated HTML Docs for Code

Turns out there is a plugin for NetBeans that I’ve been ignoring.  For some reason I just happened to figure out that the “Generate Documentation” menu that appears when I right-click on a project in NetBeans actually has some use.

Yeah, I’m slow like that sometimes.

Turns out that under that menu is a submenu that has “ApiGen” and “PHPDoc” listed there.    Interesting.    That could be useful.

I went to the project, right-clicked, and selected Properties. Sure enough, there is a setting for “PHPDoc” and it asks for a destination directory, basically asking “where do you want me to dump all the auto-generated documentation for your project?”. Well, it turns out that during my journey toward publishing code documentation on GitHub with MD, I had cloned the GitHub documentation repository to a local path on my development system. I already had a folder full of code docs for my main plugin as well as several add-on packs. So I pointed the output for my NetBeans “PHPDoc” setting to go there.

I go back, right-click on the project, and select Generate Documentation/PHPDoc.

Sure enough, NetBeans crunched my code comments and a few minutes later my Firefox browser window pops open and I see a pretty darn cool HTML site on my local box that has a full navigation menu for all of the classes I am using in my project, the properties, the To Do lists from my comments and more.  All this created because I’ve been taking a bit more time to fully comment the code.  Nice!

Setting Up For Auto-Publishing

OK, so now on my dev box I have some pretty nice code docs that I can generate with a couple of mouse clicks. That is helpful with a large project with multiple add-on packs. But then I start thinking: why just keep it local? The code is open source. I have other developers who want to learn how to tweak the code for their own purposes. I have third-party developers looking for documentation and examples of how to do things so they can write efficient add-on packs. Why keep the info all locked up on my dev box?

Well, I learned a neat trick a couple of months ago with NetBeans. I can have a local git repository on my development system but set up NetBeans with a new “remote source” project.

So here I am with my fully commented code in my WordPress plugin directory, connected to a GitHub repository. In another “docs only” directory outside of WordPress I have the PHPDoc-generated HTML and MD files that I was previously only publishing to GitHub Wiki pages but am now, thanks to NetBeans, also surfing locally on my dev desktop. I realize I can auto-publish these code docs to my public website.

I create a new NetBeans project right next to my plugin code project.   I call it “store-locator-plus-codedocs” and set it up as a new PHP Remote Source project.   I tell it the local source on my box is the folder that I was storing the PHPDoc and MD files in for GitHub, which now contains my NetBeans-generated HTML files as well.    For a remote location I tell it to use my SFTP connection to the Charleston Software Associates website.  I jump on my website and make sure there is a new codedoc directory with a blank readme.txt file there.  If you don’t have SOMETHING for NetBeans to download from the remote source it thinks you screwed up and won’t continue…. minor flaw IMO, but easily remedied.    I then click OK.  It shows me the readme.txt with a checkbox to download it.    Click OK and BAM… there is a new project with the blank readme.txt plus all of the MD and HTML documents that were already on my local dev box.

OK, so kind of “so what” at this point, but here is where it starts getting cool… at least from a geek perspective.

Push The Initial Content

Now I need to “seed” the public website with the already-created content.   In NetBeans it is easy enough. Just highlight all the files in the project, right click and select upload.    Within a few minutes the entire HTML sub-site is uploaded to the server.   Just like any FTP program.   Nothing to it.

However, now if I make any changes to those files with the editor, two things happen: it saves a local copy on my dev box and it auto-publishes that update to the live server in near-real time. Since my local dev box has version control via GitHub, I can also quickly commit and push those edits from inside NetBeans and make sure they are saved to the code repo.

But… I’m not going to ever edit those docs by hand.  They are auto-generated code documents that come from my well-formatted comments.  So the RIGHT way to edit them is to continue cleaning up and augmenting the code comments to make sure they are PHPDoc compliant.

So that is kind of cool. My code docs are now auto-posting to my live server, saving on my dev server, and are one click away from being saved and committed back to GitHub. I can even add one more step and create a second copy in MD format for publication right next to the source code on GitHub.

But this is what really made me think “shit this is kind of cool”…

Auto-Publishing Code Comments

So I have all this stuff set up; my code project and my documentation project are both open at the same time in NetBeans. I forget all about it as I start getting back into code writing. I edit a method for one of the add-on packs and of course I go and update the comments about that method to reflect the latest changes. I see another comment that is not complete and fill that out.

Save the code.  Commit the code so it publishes to GitHub.

A few more edits to code and comments later I decide… let’s generate the documentation before I sign off tonight…

I right click on my code project and select “generate documentation”.

Auto-magically the documentation project updates itself. It saves a copy locally to my dev box with the new docs in place, then auto-publishes to the public website with no intervention on my part. As I set up the rest of my plugins with this system you will be able to find all the code docs at the bottom of the Technical Docs page on my site. You can see my first auto-published work for Tagalong here, though I’m nowhere near done cleaning up the comments and naming conventions to make this all “gussied up” for public consumption, but it’s a good start.

Maybe I’m just extra-geeky, but I thought that was cool… especially when the code docs actually look pretty darn good IMO, especially since they were created from nothing but /** This method does something.  @var string $this – does that **/ comments in the code.

I also know that with a few lines of script I can not only save locally and publish nice looking docs to my public site but also commit the code back to the GitHub repository while auto-generating the duplicate MD docs for the GitHub Wiki.

Yeah, I’m a Code Geek… and I’m not ashamed to admit it.


Netbeans, phpDoc, and WordPress

A posting from our private “Tech List”, a list I share with select tech geeks, mostly from the days at Cyber Sprocket Labs. We tend to share high-end tech tips that only geeks would find interesting. I am posting it here for easy reference for “The Tech List” groupies.

My Intro To Netbeans

9 months ago Chase was showing me NetBeans on his desktop.   It had some cool features for PHP like auto-complete and some very handy code lookup tools that reminded me of Visual Studio without all the weight.

I wish I had learned about the NetBeans environment a long time ago. NetBeans + GitHub + SmartGit have made my coding at least twice as productive when it comes to PHP work, especially with complex WordPress plugin code.
In the past few months, while working in NetBeans, I’ve been refining my phpDoc skills. This is EXTREMELY handy in making coding far more productive. Here are some of the things I’ve learned that I wish we had all known about at Cyber Sprocket. With the right tools it makes coding for people with senility (me) easier.

Effectively Using Netbeans for WP Dev

1) Wrap your ENTIRE plugin into a single instantiated object.
My plugin invocation now is basically something along these lines:
DEFINE(blah,blah) // a few of these help with WordPress stuff, like plugin version
class MP {
}
class MP_Error extends WP_Error {}
add_action('init', array('MP','init'));
That is pretty much it other than the singleton.
2) Learn how to create and use singletons PROPERLY in WordPress
It must be a static method.
It should return itself as an instantiated object.
Set minimal properties as needed, not the entire WPCSL object here.
public static function init() {
    static $instance = false;
    if (!$instance) {
        $instance = new MP;
    }
    return $instance;
}
3) phpDoc the class
In NetBeans you go back to just above class MP {} and type /** after starting the frame of the class with properties and methods in place.
It will AUTO-CREATE the template PHP docs.
First line of comment = 1 line description
Lines 3..m = a longer description if you’d like
Lines m+1..n = the properties defined with @property, @property-read, @property-write    (the setters/getters with r/w, r, w status)
These property lines, like variable lines, are simple:
@property-read <type> name  description
for example
@property-read string plugin_name the plugin name
This indicates a read-only property for the object.
4) phpDoc the methods
 
As with classes, but NetBeans is even better with the docs: you write the function first. Then go just before the function foo() line and type /**. NetBeans will create the entire phpDoc template. You update it to give it the “hints”.
This is something I use a LOT now and you’ll see why in a moment.   Here is an example from WPCSL_Settings add_section:
Old school:
/**
* method: add_section
*
* does stuff
*
* params:
*     [0] - named array, blah blah
**/
New:
/**
 * Create a settings page panel.
 *
 * Does not render the panel.  Simply creates a container...
 *
 * <pre><code>
 *   $mySettings->add_section( array( 'name' => 'general' ) );
 * </code></pre>
 *
 * @param array $params named array of the section properties, name is required.
 **/
5) If your phpDoc property/variable type is a specific OBJECT then NAME THE OBJECT. 
For example:
class MP {

   /**
    * The WPCSL settings object, shortcut to the parent instantiated object.
    *
    *  @var wpCSL_settings__mp
    */
    private $wpcSettings = null;

}

Why I Am More Productive

Now here is why all these tips with phpDoc are useful and how I’ve slowly made my coding at least twice as efficient.
NetBeans defaults to having code hints and auto-fill turned on. The cool part about this is it will do a few things like tell you when a required param is missing and flag the line as an error, the same way it does with blatant PHP syntax errors. If you are creating some new code and you pause for a second with a partially written invocation, it will show you the possible options from a list of PHP functions, custom functions, or methods from objects you’ve phpDoc’ed properly.
Thus, I do something like this:
$this->wpcSettings->a   <pause>
It now shows me all the methods and properties in the WPCSL Settings Class that start with a in an in-place drop down list.
I cursor down to add_section and pause.
It shows me the full documentation on the method including my code example, the required parameters and more.
I press enter, it drops the completed method along with the first prototype in place, I cursor down to select from the various templates, for example if secondary parameters are optional, press enter and it fills out the entire code block.
I then modify the prototype to fill in my settings and I’m done.
If you do this right you can be ANYWHERE in your code as deep as you need to be.   You never have to go looking for the source, especially if you’ve written decent phpDoc comments.
I used to find myself split-screen looking at the source method or property code to see what it did or how it worked.    Now I spend time documenting the methods or properties in a verbose phpDoc comment and I never have to look at the code again.

Key Points

If you do NOT wrap everything inside a parent class it takes a lot longer to pop up the help.
If you just use the lazy @property object $myvar (or ‘mixed’) syntax you do not get to see all of the methods whenever your newly instantiated object is referenced by the variable name.
If you use things like public, private, setters, getters and use the matching phpDoc elements like @property-read  then NetBeans warns you if you do something illegal like try to directly SET that property.

A Day Older, A Day Smarter

I know some of you probably had productivity helpers like this while at Cyber Sprocket, but if I had known then what I know now I’d have been insisting that we all learn and properly implement phpDocs as our commenting standard.
And as you all know, the other “freebie” with this is I could easily generate a full HTML (or markdown) site with all the classes, methods, and properties of any plugin with a few short commands. I’ve not done that yet but will play with that sometime in the near future. I need to figure out how to bake it into WordPress content at charlestonsw.com, but I think it would be cool to have an entire plugin + wpCSL documented in an easy-to-browse format on the web.

Adding Custom Fields To The WordPress Category Interface

Adding custom fields to the WordPress Category interface can be tricky. Not because the concept is overly difficult, but because the documentation on the related filters and actions that are built into WordPress is hard to come by. To make it even more challenging, some of the action names are built dynamically. Thus I’ve created this post as my personal cheat-sheet guide to help jog my memory.

The notes here are based on my findings in WordPress 3.5.1.

There are 2 parts to the process, rendering the form fields and saving the data.

In the examples below, replace {taxonomy} with the taxonomy ID. If you are not sure what this is, hover over the “Categories” link in the sidebar menu and look for the ?taxonomy={taxonomy} parameter in the URL after the edit-tags.php call. For example, my Store Pages taxonomy is simply called ‘stores’, so my actions are create_stores and created_stores.

Rendering The Fields

There are two actions built into WordPress to manage the category interface, one for adding and one for editing a category. Your function or method simply needs to output HTML for your new fields; a minimal example follows the list. The actions are:

    • {taxonomy}_add_form_fields
    • {taxonomy}_edit_form
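Here is a rough sketch of hooking the add form for the ‘stores’ taxonomy mentioned above; the field name and the callback function are made up for illustration:

// Render an extra field on the "Add New" form for the 'stores' taxonomy.
add_action( 'stores_add_form_fields', 'myplugin_render_store_fields' );

function myplugin_render_store_fields() {
    ?>
    <div class="form-field">
        <label for="store_phone">Store Phone</label>
        <input type="text" name="store_phone" id="store_phone" value="" />
    </div>
    <?php
}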

Saving The Data

AJAX

The main “Categories” interface typically shows an “add category” form on the left side with a list of categories on the right. This add category form uses AJAX to post the form data back to the server and save any new category you enter here, which is why the page does not refresh. As such you will have better luck deciphering what is going on with debug statements if you use a browser debug tool such as Firebug on Firefox and watch the console or net tab for the AJAX (AJAJ really) JSON posts and responses going to/from the server.

Action Hooks and Filters

The AJAX call posts back to ./wp-admin/edit-tags.php, which in turn calls the wp_insert_term method in ./wp-includes/taxonomy.php.

wp_insert_term calls the following actions while processing the insert:

If the slug is empty:

    • edit_terms with $term_id as the only param BEFORE the slug is added.
    • edited_terms with $term_id AFTER the slug is added

After the term is inserted into the term_taxonomy table:

    • create_term with $term_id, $tt_id, $taxonomy as params
    • create_{taxonomy} with $term_id, $tt_id as params
    • FILTER: term_id_filter with $term_id and $tt_id as params

The term cache is cleared and then these actions are called:

    • created_term with $term_id, $tt_id, $taxonomy as params
    • created_{taxonomy}  with $term_id and $tt_id as params

Useful Info

Most of the hooks and filters used to add data to the category interface can be implemented in the admin_menu action hook. Using admin_menu() with a further admin_init() action hook buried within it is one of the best ways to ensure all the setup, filters, roles & caps, and other “niceties” are in place before firing off your custom admin-centric hook or filter.

HOWEVER, you cannot attach your custom methods for the create_ or created_ action hooks deep inside admin_menu() or admin_init().  Why?  Because they run through the AJAX action stack and the AJAX action stack does not fire admin_menu().
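
Instead, register the save handler at plugin load time so it is already in place when the AJAX request comes through.  A minimal sketch, again assuming the ‘stores’ taxonomy and the made-up ‘store_phone’ field; WordPress 3.5 has no term meta API, so this stashes the value in the options table keyed by term ID:

add_action( 'created_stores', 'myplugin_save_store_field', 10, 2 );

// Runs after a new term is inserted, including during the AJAX add-category post.
function myplugin_save_store_field( $term_id, $tt_id ) {
    if ( isset( $_POST['store_phone'] ) ) {
        update_option( 'store_phone_' . $term_id, sanitize_text_field( $_POST['store_phone'] ) );
    }
}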

Summary

So there you have it, my cheat sheet.   There are likely to be hiccups when implementing so don’t be afraid to add in some debugging code on your development system and be sure to check the JSON posts via the WordPress AJAX engine.

 

Posted on

PHP Pretty Print XML

I have been working on MoneyPress : Amazon Edition to get it updated for the latest API release and bring it into the Charleston Software Associates stable of products.  Along the way I found myself needing to debug the XML being returned from the Amazon Product API.   Here is a quick trick for doing that.

...
$xmlDoc = new DOMDocument();
$xmlDoc->loadXML( $result['body'] );     // parse the raw XML returned by the API
$xmlDoc->formatOutput = true;            // enable pretty-print indentation
print '<pre>' . htmlentities( $xmlDoc->saveXML() ) . '</pre>';
...


Posted on

Rhomobile : rhoconfig.txt cheat sheet

We have recently been working with Rhomobile to test and deploy some business class mobile apps.  So far the testing has gone well, however the documentation at the Rhomobile site is lacking in some areas and has fallen behind the mainline development.  The Rhomobile forums are also a good source of information but it is obvious Motorola is having some trouble keeping up with the developers that seem to be discovering this hidden gem (pun intended… Rho is ruby-centric).

As we learn more about developing with Rho, we have found some unique things about the environment.   Here we post some notes and hints.    We will add comments or extend the post with more hints as we come across them.

rhoconfig.txt

splash_screen settings

The following settings are the possible values in the 3.3.3 release:

  • splash_screen=’zoom’ (default)
  • splash_screen=’none’,  android = no effect
  • splash_screen=’hzoom’, android = no effect
  • splash_screen=’vzoom’, android = no effect
  • splash_screen=’hcenter’, android = no effect
  • splash_screen=’vcenter’, android = no effect

Derived from the source in <installdir>\ruby\lib\ruby\gems\1.9.1\gems\rhodes-3.3.3\platforms\shared\common\SplashScreen.h and SplashScreen.cpp.

There are developer notes in some “random forums” that this was refactored in 3.3.3 to always zoom/stretch to fill the screen (zoom).

 

 

Posted on

Phonegap Barcodes on IOS

Introduction

This article covers building barcode scanning apps on IOS (iPhone/iPad) with Phonegap.   For reference, Phonegap is going to become Apache Cordova any day now, so the process can be confusing if you are looking at older materials.  Before we get started we make the following assumptions about your development environment:

  • Operating System is OS/X Lion
  • Your IDE is XCode 4.2
  • You have installed Phonegap 1.5.0 or higher
  • You have downloaded Phonegap Plugins

While the general process will work with newer and possibly older versions, this is the environment we have tested this process with.   Also, if you mix versions of Phonegap and the plugins you will have naming convention problems, especially if you mix recent plugins with an older Phonegap install or vice-versa, thanks to the transition from Phonegap to Apache Cordova.

We recommend following the links above as Google search results may bring you to older code.   We also recommend that if you had an older version of Phonegap that you install the latest version and start a new Cordova project via XCode with the new install.    Once your test app is successful you can follow the Upgrade Guide that will be included in the docs section of the downloaded zip file.

Our example app is named “abundascan”, which is also the default target that was created.  You will see that name in the screen shots included herein.

Step 1: Build A Starter Cordova App

Start Xcode and create a new Cordova project.  Build your project and run it in your simulator or directly to your connected IOS device.  This will build the template project.

Step 2: Add The Cordova Source Files

After you’ve downloaded and unzipped the files from the Phonegap Plugins page you will have a phonegap-plugins folder full of various plugins.   There will be a subfolder named “BarcodeScanner”.  This is the folder that has the contents you need.

Highlight the following files and drag them from that folder over to your XCode window and drop them under the <target_name>/Plugins subfolder:

  • CDVBarcodeScanner.mm
  • zxing-all-in-one.cpp
  • zxing-all-in-one.h

Step 3: Edit The Cordova Plist

This file provides the symbol table references that will be used to connect the JavaScript components to the compiled plugin.    Find the Supporting Files folder under your <target_name> folder and click on Cordova.plist.

This will show the key/type/value table in the main editing window.  Click on the arrow next to Plugins to expand that section, then hover over to the right of the word “Plugins” to get the add/subtract entry buttons.  Click the plus to add a new empty line.

In the new line enter org.apache.cordova.barcodeScanner as the key name.  Note the capital “S”.

In the value column enter CDVBarcodeScanner.

Step 4: Add Link Libraries

In order for the project to compile in the new BarcodeScanner library you will need to add some new core libraries to your build commands.   In Xcode click on the target in the left hand sidebar (this is usually the topmost line under which all other items reside).   The project summary should appear in the main window. Click on the Build Phases top tab.

Open up the first Link Binary With Libraries group.   At the bottom of the expansion are the add/remove buttons.  Click on the + to add a new library.    Type in the following names (partial search will work) and add these libraries to the list:

  • AVFoundation.framework
  • AssetsLibrary.framework
  • CoreVideo.framework
  • libiconv.dylib

Note: You may need to click on the IOS5 folder when the quick search is running to see your matches.  

Other Settings

ZXing.cpp Optimizations Off

You may also need to turn off optimization for the zxing-all-in-one.cpp file.   To do this, expand the Compile Sources group.  You will see a half-dozen file names, with the last one likely being zxing-all-in-one.cpp.    This is actually a two-column listing; if your screen is not wide enough you will see a “C…” trailing at the top right of the group.  Go just to the left of this and move your mouse slowly in the header until you see the double-headed expansion arrow.  Click and drag to the left to expand this column.   Now double-click in this column to the right of the zxing-all-in-one.cpp file, add -O0 (letter “oh”, number zero), and click Done.

No ARC

Do not use ARC (Objective-C Automatic Reference Counting).  If you did not turn this OFF when starting your project you can go to the Build Settings tab, scroll down to the Apple LLVM compiler 3.0 – Language group, expand it, look for “Objective-C Automatic Reference Counting”, and set it to NO.

Add JavaScript and Run

Now you just need to add the reference to the barcodescanner.js file in your HTML files for the app and bind a click or hyperlink to the JavaScript barcodeScanner.scan_code call.     Run and you should be able to activate the scanner.

 

Posted on

WordPress Activation Hook

WordPress Development

We recently discovered an issue in our commercial plugins related to a change in the WordPress API. It turns out that since WordPress 3.1 was released the register_activation_hook() function is no longer called during a plugin upgrade! This is a significant change in behavior from previous versions that called the WordPress activation hook on every update.   This has caused numerous problems and forced Cyber Sprocket to come up with a patch in our own wpCSL framework.

Why Is This A Problem?

Any site running a version of WordPress older than 3.1 would automatically get any feature and supporting application tweaks whenever the plugin was upgraded.   Most plugin authors, Cyber Sprocket included, would use the register_activation_hook API call to make sure the user had the latest database structure, settings, and other elements that keep the plugin working.    For example, with Store Locator Plus 3.0 this hook would ensure that the user’s Google Maps API v2.0 settings were converted to the Google Maps API v3.0 equivalent.

As of WordPress 3.1 the function that does this conversion is no longer called.    To make matters worse, it is only skipped in certain circumstances.  For example:

  • User installs upgrade via a downloaded zip file: updates called.
  • User does auto-update on a deactivated plugin then activates the plugin: updates called.
  • User does auto-update on an activated plugin: updates NOT called.

As you can see this is inconsistent.  Even worse, plugins that worked fine up to version 3.1 now have the potential to suddenly break.

The Solution?

Cyber Sprocket has created a new version of our wpCSL framework that we use to build WordPress plugins.   The update uses standard admin panel interfaces to call our own “plugin has changed” hooks.    The short version of how this works is as follows:

  • User is on the admin panel…
  • The plugin is active…
  • Check the version of the plugin as stored in the options table of WordPress…
  • Is it different than the current version of our plugin?
    • Yes, run the upgrade callback function if it is set.
    • Update the plugin version stored in the options table to the current installed version.

That’s it.  A fairly simple solution, but more things we need to manage in our plugin framework because the WordPress development team changed things.
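
For illustration only, here is a minimal sketch of that version-check pattern.  The constant, the option name, and the callback are hypothetical placeholders, not the actual wpCSL code:

define( 'MYPLUGIN_VERSION', '3.1.1' );

add_action( 'admin_init', 'myplugin_check_version' );

// Compare the stored version against the installed version on every admin page load.
function myplugin_check_version() {
    $installed = get_option( 'myplugin_installed_version', '' );
    if ( $installed !== MYPLUGIN_VERSION ) {
        myplugin_upgrade( $installed, MYPLUGIN_VERSION );
        update_option( 'myplugin_installed_version', MYPLUGIN_VERSION );
    }
}

// Upgrade callback: database structure changes, settings conversions, etc. go here.
function myplugin_upgrade( $from_version, $to_version ) {
}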