Posted on

PHP Switch Vs If Performance

When writing a complex application such as Store Locator Plus, you will often find multiple logic branches to handle a given task.   There are two oft-used methods for processing the logic: the if-else construct and the switch statement.    Since I am always looking to optimize the Store Locator Plus codebase for performance (some sites do have hundreds of thousands of locations, after all), it was time to look into the performance-versus-readability trade-off between those two options.

The general consensus, though I’ve not taken the time to run performance tests with the WordPress stack myself, is that “you should use whatever makes your code easier to read and more easily maintained”.  For me that means using switch statements.    I find the construct much easier to extend without causing inadvertent side effects, something I’ve learned in 20-plus years of working on code teams and in long-run projects like Store Locator Plus.

From a pure performance standpoint, an if-else can be marginally faster when performing fewer than 5 logic comparisons.

PHP If Else Statement
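
A minimal if-else sketch, using a hypothetical location count rather than anything from the Store Locator Plus codebase:

$location_count = 12000;  // hypothetical value

if ( $location_count < 1000 ) {
    $label = 'small';
} elseif ( $location_count < 50000 ) {
    $label = 'medium';
} elseif ( $location_count < 200000 ) {
    $label = 'large';
} else {
    $label = 'enterprise';
}

echo $label; // medium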

Switch statements will often be faster at or near 5 logic comparisons because code optimization within C, likely carried forward into the PHP pseudo-compiler, will often turn the 5+ logic branches of a switch statement into a hash table.  Hash tables tend to be faster, with all branches of the code having equal access time.    Statistically speaking, a large number of iterations will favor the equal-access-time model over the “first-fastest” model of an if-else.

PHP Switch Statement
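
And a switch sketch, again with made-up values rather than actual plugin code:

$mode = 'add';  // hypothetical import mode

switch ( $mode ) {
    case 'add':
        $handler = 'add_location';
        break;
    case 'update':
        $handler = 'update_location';
        break;
    case 'skip':
        $handler = 'skip_location';
        break;
    default:
        $handler = 'report_error';
}

echo $handler; // add_location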

Possibly faster and always easier to extend and read, switch will be my “weapon of choice” whenever I have more than a simple 2-state if/else logic branch to test.

Posted on

Analyzing WordPress PHP Memory Consumption


This weekend I have been processing a large 200,000 location data file for a Store Locator Plus customer.   This is one of the larger files I have processed on my test system and it is the first file over 60,000 locations I’ve processed since Store Locator Plus 4.2 and WordPress 4.x were released.    This large file processing and the geocoding it requires are taxing several systems in the Store Locator Plus hierarchy.  WordPress, Google OEM API calls, and the locator are all showing their weak spots with this volume of data processing.   They can all handle it to some degree, but maximizing efficiency is the key.

The temporary solution to most of the issues is to increase memory and process limits.   These are some of the key findings, as posted on the CSV Import documentation pages for Store Locator Plus:

Check your php.ini post_max_size setting if doing a direct file import versus a cron URL based import. post_max_size is typically set to 8M (MiB) on most servers.   That is typically enough for around 25,000 locations, but it depends on how long your descriptions are and how many data fields you have filled out.   SLP 4.2.41 will warn you if you try to upload a file larger than your post_max_size limit.

Check your php.ini memory_limit setting and make sure it is large enough to handle the WordPress overhead plus the size of your CSV file times two.   The WordPress database interface and the CSV file processing will consume lots of memory.  The more plugins, widgets, and advanced theme features you have, the more memory WordPress will use and the more PHP memory will leak over time. A setting of 256M is enough for approximately 15,000 locations.

Check your wp-config WP_MEMORY_LIMIT.   You may need to add this define to wp-config.php: define('WP_MEMORY_LIMIT', '256M').  The number needs to be equal to or less than the php.ini memory_limit.    It is the WordPress-specific memory limit and works with php.ini memory_limit.

Check your wp-config WP_MAX_MEMORY_LIMIT.   You may need to add this define to wp-config.php: define('WP_MAX_MEMORY_LIMIT', '256M').   This is the WordPress admin interface memory limit and works like WP_MEMORY_LIMIT for admin pages.  (A combined wp-config.php sketch follows this checklist.)

Set Duplicates Handling to Add especially if you know you do not have duplicate locations in your data.  SLP 4.2.41 further improves the performance when using ‘add’ mode by eliminating extra data reads from the database.

Set Server-To-Server speed to Fast under the General Settings tab unless you are on a shared host or experience a large number of un-geocoded locations during import.

Set the PHP Time Limit to 0 (unlimited) under the General Settings tab.   For hosting providers that allow your web apps to change this, the unlimited value will let the import run to completion.

Keep in mind Google limits you to 2500 latitude/longitude (geocoding) lookups per 24 hours per server IP address.  If you are on a shared host you share that limit with all other sites on that host.
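
For reference, a minimal wp-config.php sketch with both defines in place; 256M is only the example value from the checklist above, so size it for your own import:

// wp-config.php -- example values only, adjust for your own server
define( 'WP_MEMORY_LIMIT', '256M' );      // general WordPress memory limit
define( 'WP_MAX_MEMORY_LIMIT', '256M' );  // admin-side limit, where the CSV import runs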

However, even with all of these settings tweaked to fairly high values for my VirtualBox development system running on a MacBook Pro Retina host, the 4GB of RAM allocated to WordPress still is not enough.   The system eventually runs out of memory when the file gets close to the 45,000 location mark.  Luckily the “skip duplicate addresses” option allows the process to continue.    The “out of memory” error still rears its ugly head in the wpdb  WordPress database engine and is a problem for handling larger files.

Enter Xdebug and memory profiling.   Somewhere buried in the Store Locator Plus code, WordPress code, PHP MySQL interface, or PHP core engine there is a memory leak.  With a complex application environment finding the leak is going to be a monumental task.  It may not be something I can fix, but if I can mitigate the memory usage when processing large files that will help enterprise-class sites use Store Locator Plus with confidence.

Getting Xdebug On CentOS 7

If you follow my blog posts on development you will know that I run a self-contained WordPress development environment.  The system uses Vagrant to fire up a VirtualBox guest that runs CentOS 7 with GUI tools along with a full WordPress install including my plugin code.   This gives me a 2GB “box file” that I can ship around and have my full self-contained development environment on any system capable of running VirtualBox.   Here is how I get Xdebug connected to my local Apache server running WordPress.

Install xdebug from the yum install script.


# sudo yum install php-pecl-xdebug.x86_64

Turn on xdebug in the php.ini file


# find / -name xdebug.so

/usr/lib64/php/modules/xdebug.so

# sudo vim /etc/php.ini

zend_extension="/usr/lib64/php/modules/xdebug.so"

Check if xdebug is installed:


# php --version

... PHP 5.4.16
.... with xdebug v2.2.7

Enable some xdebug features by editing php.ini again.

Read about XDebug Profiling.

Read about XDebug Tracing.


# sudo vim /etc/php.ini

xdebug.default_enable=1  ; turns on xdebug any time a PHP page loads on this local server

xdebug.idekey="PHPSTORM" ; in case I turn on the automated listener for built-in PHP Storm debugging/tracing

xdebug.profiler_enable = 1 ; turn on the profiler which creates cachegrind files for stack trace/CPU execution analysis

xdebug.profiler_enable_trigger = 1 ; turn on a cookie "hook" so third party browser plugins can turn the profiler on/off with a bookmark link

xdebug.profiler_output_dir = "/var/www/xdebug" ; make sure this directory is writable by apache and readable by your local user

xdebug.auto_trace = 1 ; when any page loads, enable the trace output for capturing memory data

xdebug.show_mem_delta = 1 ; this is what tells trace to trace memory consumption changes on each function call

xdebug.trace_output_dir = "/var/www/xdebug" ; same idea as the profiler output, this will be where trace txt files go

Restart the web server to get the php.ini settings in effect:


# sudo service httpd restart

At this point I can now open any WordPress page including the admin pages.   Shortly after the page has rendered the web server will finish the processing through xdebug and a trace* file will appear in /var/www/xdebug.   I can now see the stack trace of the functions that were called within WordPress with the memory consumption at each call.     This is the start of tracking down which processes are eating up RAM while loading a large CSV file without adding thousands of debugging output lines in the web app.

Be warned: if you are tracing large repetitive processes your trace file can be many GiB in size, so make sure you have the disk space to run a full trace.

Posted on

WordPress Boolean Options and JavaScript : Handle With Care


After chasing my tail for the past 5 days I finally uncovered the source of a bug in my code that was causing Store Locator Plus to not initialize properly on fresh installs. The culprit? How WordPress handles booleans. In particular, how WordPress handles the persistent storage and the JavaScript localization of booleans.

The important lesson:

WordPress option storage is done using STRINGS*.
* There is a caveat here: SERIALIZED options are stored with the data type; single options (non-arrays) are stored as strings.

WordPress JavaScript localization uses PHP and JavaScript DATA TYPES.

 

What does this mean for plugin coding?

If you are passing WordPress options that have been fetched from the wp_options table with get_option to JavaScript via the wp_localize_script() method, your booleans that are set to true/false will be passed to JavaScript as '1'/'0'. On the other hand, if you are setting a PHP variable to true/false and passing that to JavaScript, they arrive as a JavaScript boolean value of true/false.

The difference is important.

In JavaScript you write this for booleans:

if ( myoption.booleanvar ) { alert('true'); }

However you write this for the string version:

if ( myoption.booleanvar === '1' ) { alert('true'); }

How did this break my code?

When my plugin environment loads for the first time it uses an array to store option defaults. Since the user has not yet set or saved the plugin options the get_option call returns an empty array and those defaults are employed. However my option array looked something like this:

$options = array( 'booleanvar' => true );

To further compound the problem, I introduced yet ANOTHER issue by converting what was previously a single-element option to a serialized array in the latest patch release.   Serialized data is MUCH faster when you are fetching/setting more than a single option for your plugin.   To be a good WordPress citizen I’ve been slowly migrating all my legacy options to a single serial array.   HOWEVER the update did something like this:

$old_option = get_option('booleanvar');  // singular boolean options are stored as strings '1'/'0'
$options = array ( 'booleanvar' => $old_option ); // $old_option is a string
update_option( 'my_option_settings', $options);
delete_option('booleanvar');

My code is a lot “deeper” than shown here, but the basic idea was to fetch the pre-existing value of the singular option and store it in my options array. I save it back out to persistent storage and blast the old non-serialized option out of the database. THIS conversion works well for existing sites, as the singular option variables store and retrieve boolean data as a STRING. Thus my option comes back as '1'/'0' depending on how the user left it and all is good in JavaScript-land.

HOWEVER, if you note above the NEW DEFAULT is to set booleanvar as a boolean with a default value of true. When storing a compound option (named array) with update_option it stores and retrieves values using the DATA TYPE.

$options = array( 'booleanvar' => true );
update_option( 'my_option_settings', $options );  // serial options are stored as proper data types
$options = get_option( 'my_option_settings');

Here $options['booleanvar'] is set to a proper boolean.

As you can probably guess, this creates inconsistency in the Javascript.

Why? Because my JavaScript has been written for the most common use case, which is one where users have set and saved the plugin options at least once. The JavaScript code uses the string comparison shown above, ( myoption.booleanvar === '1' ). It works as expected every time after the user has saved the options at least once.

Why does it work after options are saved? Because WordPress returns the boolean variable as the string value '1'. You can see this for yourself by playing with the update_option() and get_option() functions in WordPress to store/get a boolean true value. Since my code uses the stored values if they are set, my booleanvar is not taking on the default true setting after the first save; it is coming back as '1', which is what JavaScript expects to see.

The Lesson?

Use string values of '1' and '0' for your WordPress option values any time you are not using 100% serialized option values. It ensures consistency throughout your application by following the precedent set by the WordPress get_option function.

Yes, you can get away with using boolean true/false values. It even makes the code more readable, IMO. However it can cause issues if you ever decide you need to pass those values along to your JavaScript functions with wp_localize_script() if you are not 100% certain you are using pure non-translated boolean variables throughout.

If you are ever uncertain of your true data types, add a gettype($varname) debugging output to your plugin code. Remember that simple print_r statements will convert booleans to strings as well; use gettype to be certain of what flavor of variable you have inside PHP.
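
A minimal debugging sketch along those lines; the option name is hypothetical:

// Hypothetical debug snippet -- drop into your plugin while testing, then remove.
$options = get_option( 'my_option_settings' );
error_log( 'booleanvar is of type ' . gettype( $options['booleanvar'] ) .
           ' with value ' . var_export( $options['booleanvar'], true ) );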

Posted on

WordPress Add JavaScript to Specific Admin Page

Huh?  "wordpress add javascript to specific admin page" — what the heck is that?   That is the first thing I "googled" when I discovered I was "doing it wrong", so I could learn how to do it right.

As a plugin developer it is one of the more important principles of plugin development.  It is one that took me far too long to learn and, based on all the things that break when you install various themes and plugins, I am not alone in that department.

Why It Is Important

Why is this so important?  You load your script on EVERY PAGE in the worst case scenario, or EVERY ADMIN PAGE in the half-right scenario.

Doing it wrong has two notable side effects, neither of which is a good thing:

  1. It slows down page loads unnecessarily.   Think about it: the server reads a file from disk (slow), loads it into memory, and does nothing.   More stuff in memory = higher chance for paging (putting memory blocks on disk) = more slowness.   Unless you are running a server on solid state drives (SSD), disk I/O is the worst possible thing you can be doing in terms of performance.
  2. It breaks other scripts.  Not always.  Often.   Your script manipulates the browser JavaScript processing stack. It loads stuff into memory.  It can change the logic flow or the DOM.   Many times it has unintended consequences, and all for a script that will not run in most cases.

What you really want is to load the JavaScript (or CSS for that matter) onto YOUR admin pages only.

If you are coding plugins DO NOT LOAD YOUR CSS or JavaScript globally!

Use The Hooks

Easy.   There are WordPress hooks specifically designed for this.   The JavaScript-centric hook is called 'admin_print_scripts-<handle>'.     That <handle> part is important.   It is unique to YOUR admin pages and is how you get stuff to happen on JUST your admin page.

For reference, if you are doing CSS stuff there is also an equivalent 'admin_print_styles-<handle>' hook.

However, I am going to take a slightly different route.    MOST times you are likely adding both CSS and JavaScript to your admin page, not one or the other.   As such I don't want to write TWO hooks to load my stuff, though in a larger system I may do so for clarity.    But I'm a lazy programmer, and while I don't like super-long multi-faceted functions, I feel this case is simple enough to glom the script and CSS together.  So I'm going to use the 'admin_head-<handle>' hook instead.   I feel that is a name that makes sense for loading up both together, especially if my JavaScript is not dependent on styles being in place first.

Curious on how this all hooks into WordPress?

Look in the admin-header.php file in the wp-admin subdirectory.  You will find this call sequence right at the top:

do_action('admin_enqueue_scripts', $hook_suffix);
do_action("admin_print_styles-$hook_suffix");
do_action('admin_print_styles');
do_action("admin_print_scripts-$hook_suffix");
do_action('admin_print_scripts');
do_action("admin_head-$hook_suffix");
do_action('admin_head');

General Rules

Some general rules first.

NEVER use hard-coded output of JavaScript in your apps.   Create a .js file and enqueue it.   Use wp_localize_script to pass in variables from  your PHP script.

Always develop with debug mode on.

Always get a page handle when creating an admin page; you need it to invoke the hook.

Always develop with debug mode on.

Always use classes.  Do not use procedural code.  It sucks.

Always develop with debug mode on.

Seriously.  Turn on full debugging in your development environment.  There are far too many (90% I'd guess) plugins out there that are barfing warnings all over server logs every day.   Multiply that by millions of installs.   Times thousands of visitors.   You've got billions of lines of warning messages spewing forth all over the Internet and server logs every day.   No wonder my Internet connection is slow as hell some days.

The Simplified Version

While this technique may not be perfect, it gives you the general construct you need to do it right.   The short version shows you the concept; it is not cut-and-paste code.  Go to the WordPress Codex and look this stuff up.

This is the "down and dirty" version using a class construct as a namespace only.   In the real world you will want to invoke the class as an object and avoid the static declarations and direct calls, but hey… it's another step toward "don't pollute other plugins or core WordPress with my crap" nirvana.

class MyPlugin {
    static function add_my_pages() {
        $handle = add_options_page(...);
        add_action( "admin_head-$handle", array( 'MyPlugin', 'loadMyJS' ) );
    }

    static function loadMyJS() {
        // Enqueue by URL, not filesystem path.
        wp_enqueue_script( 'myjs', plugin_dir_url( __FILE__ ) . 'my.js' );
    }
}
add_action( 'admin_menu', array( 'MyPlugin', 'add_my_pages' ) );

So that is the “quick and dirty” version.

The whole thing starts with the add_action('admin_menu') call.   This runs whenever the WordPress dashboard loads up the admin pages.

It then calls the add_my_pages function within the MyPlugin class.

MyPlugin::add_my_pages() will build our admin page by using the add_options_page() function of WordPress.  If you look that up you will discover what parameters to pass to make it build your admin page.  You will likely be adding a “createAdminPage()” method to the MyPlugin class, but that is beyond the scope of this article.   The important point here is that when WordPress does connect your plugin admin page to the system it will return back a unique handle for that page.  You need this for the next step.

MyPlugin::add_my_pages() then tells WordPress that whenever your admin page loads, fire off the loadMyJS() method for your plugin.   How?   WordPress has a special hook called admin_head-<something>.  That SOMETHING is the handle that you got from the add_options_page() call.    That hook is ONLY called when your admin page loads.    Which means the next step ONLY happens when your admin page renders, not every single page on the site.

When your admin page loads, the MyPlugin::loadMyJS() fires.   This uses the standard WordPress enqueue scripts method to load up your JavaScript and put it in the header of the admin page.    This ensures your JavaScript only loads when you need it.

Perfect.

So that is the general process.   Go forth and learn how to incorporate this in your plugins.  Then teach others.

The entire WordPress community thanks you for it.

Oh… and the CSS stuff I mentioned?  Rename “loadMyJS” to “loadMyJSandCSS” for clarity, then throw in the wp_enqueue_style() calls.
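
A minimal sketch of that combined method; it would replace loadMyJS inside the MyPlugin class above, and the handles and file names are made up:

// Hypothetical combined loader -- both assets land only on our admin page.
static function loadMyJSandCSS() {
    wp_enqueue_script( 'myjs', plugin_dir_url( __FILE__ ) . 'my.js' );
    wp_enqueue_style( 'mycss', plugin_dir_url( __FILE__ ) . 'my.css' );
}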

Posted on

Geeking Out With Netbeans and PHPDoc

I’ve been playing with phpDoc and NetBeans a lot lately.  I’ve found that having well documented code using the phpDocumentor 2 format makes for a much more efficient coding environment.   For one thing, if you have NetBeans setup properly and are using PHP classes that are fully documented, writing a line of code is as simple as typing a few characters.  NetBeans will find the objects you are looking for and list the properties and methods as you type.

That makes for some fast coding when typing something like $this->plugin->currentLocation->MakePersistent().  When everything is setup properly and the classes are properly documented I can type $t<tab>p<tab>c<down><tab>M<enter>.    While it may look like a lot of typing when spelled out essentially I am typing “$ttptcdtme”, or 10 characters instead of somewhere around 70.    Being 7x as efficient when writing thousands of lines of code is a HUGE bonus.

GitHub Markdown Docs for Code

Auto-complete shortcuts alone are one good reason to use a strong IDE like NetBeans along with a good code commenting/documentation solution like phpDocumentor 2.   However, tonight I found a cool new trick that I stumbled upon while trying to generate code documentation via GitHub.    I had learned a couple of weeks ago that I could use the command-line PHPDoc tool and an open source GitHub project called PHPDocMD to convert my code comments to XML and convert that to Markdown (MD) that GitHub likes.   It creates OK results that are out on the public code repositories at GitHub.

So what is that new trick?  Well, there are 2 distinct parts.

Autogenerated HTML Docs for Code

Turns out there is a plugin for NetBeans that I’ve been ignoring.  For some reason I just happened to figure out that the “Generate Documentation” menu that appears when I right-click on a project in NetBeans actually has some use.

Yeah, I’m slow like that sometimes.

Turns out that under that menu is a submenu that has “ApiGen” and “PHPDoc” listed there.    Interesting.    That could be useful.

I went to the project, right-click, and select properties.  Sure enough, there is a setting for “PHPDoc” and it asks for a destination directory.  Basically asking “where do you want me to dump all the auto-generated documentation for your project?”.    Well it turns out that during my journey toward publishing code documentation on GitHub with MD, I had cloned the GitHub documentation repository to a local path on my development system.   I already had a folder full of code docs for my main plugin as well as several add-on packs.   So I pointed the output for my Netbeans “PHPDoc” setting to go there.

I go back, right click on the project, and select Generate Documentation/PHPDoc.

Sure enough, NetBeans crunched my code comments and a few minutes later my Firefox browser window pops open and I see a pretty darn cool HTML site on my local box that has a full navigation menu for all of the classes I am using in my project, the properties, the To Do lists from my comments and more.  All this created because I’ve been taking a bit more time to fully comment the code.  Nice!

Settings Up For Auto-Publishing

Ok, so now on my dev box I have some pretty nice code docs that I can generate with a couple of mouse clicks.   That is helpful with a large project with multiple add-on packs.   But then I start thinking, why just keep it local?   The code is open source.  I have other developers that want to learn how to tweak the code for their own purposes.  I have third party developers looking for documentation and examples on how to do things so they can write efficient add-on packs.    Why keep the info all locked up on my dev box?

Well, I learned a neat trick a couple months ago with NetBeans.    I can have a local git repository on my development system but set up NetBeans with a new “remote source” project.

So here I am with my commented code in my WordPress plugin directory that is fully commented and connected to a GitHub repository.   In another “docs only” directory outside of WordPress I have my PHPDoc generated HTML and MD files that I was previously only publishing to GitHub Wiki Pages, but now thanks to NetBeans I am also surfing locally on my dev desktop.     I realize I can auto-publish these code docs to my public website.

I create a new NetBeans project right next to my plugin code project.   I call it “store-locator-plus-codedocs” and set it up as a new PHP Remote Source project.   I tell it the local source on my box is the folder that I was storing the PHPDoc and MD files in for GitHub, which now contains my NetBeans-generated HTML files as well.    For a remote location I tell it to use my SFTP connection to the Charleston Software Associates website.  I jump on my website and make sure there is a new codedoc directory with a blank readme.txt file there.  If you don’t have SOMETHING for NetBeans to download from the remote source it thinks you screwed up and won’t continue…. minor flaw IMO, but easily remedied.    I then click OK.  It shows me the readme.txt with a checkbox to download it.    Click OK and BAM… there is a new project with the blank readme.txt plus all of the MD and HTML documents that were already on my local dev box.

OK, so kind of “so what” at this point, but here is where it starts getting cool… at least from a geek perspective.

Push The Initial Content

Now I need to “seed” the public website with the already-created content.   In NetBeans it is easy enough. Just highlight all the files in the project, right click and select upload.    Within a few minutes the entire HTML sub-site is uploaded to the server.   Just like any FTP program.   Nothing to it.

However now if I make any changes to those files with the editor two things happen: it saves a local copy on my dev box and it auto-publishes that update to the live server in near-real time.    Since my local dev box has version control via GitHub, I can also quickly commit and push those edits from inside NetBeans and make sure they are saved to the code repo.

But… I’m not going to ever edit those docs by hand.  They are auto-generated code documents that come from my well-formatted comments.  So the RIGHT way to edit them is to continue cleaning up and augmenting the code comments to make sure they are PHPDoc compliant.

So that is kind of cool.  My code docs are now auto-posting to my live server, saving on my dev server, and are one click from being saved and committed back to GitHub.  I can even add one more step and create a second copy in MD format for publication right next to the source code on GitHub.

But this is what really made me think “shit this is kind of cool”…

Auto-Publishing Code Comments

So I have all this stuff setup, my code project and my documentation project are both open at the same time in Netbeans.    I forget all about it as I start getting back into code writing.     I edit a method for one of the add-on packs and of course I go and update the comments about that method to reflect the latest changes.  I see another comment that is not complete and fill that out.

Save the code.  Commit the code so it publishes to GitHub.

A few more edits to code and comments later I decide… let’s generate the documentation before I sign off tonight…

I right click on my code project and select “generate documentation”.

Auto-magically the documentation project updates itself.   It saves a copy locally to my dev box with the new docs in place, then auto-publishes to the public website with no intervention on my part.   As I set up the rest of my plugins with this system you will be able to find all the code docs at the bottom of the Technical Docs page on my site.   You can see my first auto-published work for Tagalong here, though I’m nowhere near done cleaning up the comments and naming conventions on my methods to make this all “gussied up” for public consumption, but it’s a good start.

Maybe I’m just extra-geeky, but I thought that was cool, especially since the code docs actually look pretty darn good IMO, considering they were created from nothing but /** This method does something.  @var string $this – does that **/ comments in the code.

I also know that with a few lines of script I can not only save locally and publish nice looking docs to my public site but also commit the code back to the GitHub repository while auto-generating the duplicate MD docs for the GitHub Wiki.

Yeah, I’m a Code Geek… and I’m not ashamed to admit it.

Posted on

Netbeans, phpDoc, and WordPress

A posting from our private “Tech List”, a list I share with select tech geeks, mostly from the days at Cyber Sprocket Labs.   We tend to share high-end tech tips that only geeks would find interesting.    I am posting it here for easy reference for “The Tech List” groupies.

My Intro To Netbeans

9 months ago Chase was showing me NetBeans on his desktop.   It had some cool features for PHP like auto-complete and some very handy code lookup tools that reminded me of Visual Studio without all the weight.

I wish I had learned about the NetBeans environment a long time ago.    NetBeans + GitHub + SmartGit have made my coding at least twice as productive when it comes to PHP work, especially with complex WordPress plugin code.

In the past few months, while working in NetBeans, I’ve been refining my phpDoc skills.  This is EXTREMELY handy in making coding far more productive.    Here are some of the things I’ve learned that I wish we had all known about at Cyber Sprocket.    With the right tools it makes coding for people with senility (me) easier.

Effectively Using Netbeans for WP Dev

1) Wrap your ENTIRE plugin into a single instantiated object.
My plugin invocation now is basically something along these lines:
DEFINE(blah,blah) // a few of these help with WordPress stuff, like plugin version
class MP {
}
class MP_Error extends WP_Error {}
add_action('init', array('MP','init'));
That is pretty much it other than the singleton.
2) Learn how to create and use singletons PROPERLY in WordPress
It must be a static method.
It should return itself as an instantiated object.
Set minimal properties as needed, not the entire WPCSL object here.
public static function init() {
    static $instance = false;
    if (!$instance) {
        $instance = new MP;
    }
    return $instance;
}
3) phpDoc the class
In netbeans you go back to just above class MP {} and type /** after starting the frame of the class with properties and methods in place.
It will AUTO-CREATE the template PHP docs.
First line of comment = 1 line description
Lines 3..m = a longer description if you’d like
Lines m+1..n = the properties defined with @property, @property-read, @property-write    (the setters/getters with r/w, r, w status)
These property lines, like variable lines are simple:
@property-read <type> name  description
for example
@property-read string plugin_name the plugin name
This indicates a read-only property for the object.
4) phpDoc the methods
 
Like classes, but NetBeans is even better with the docs: you write the function first.   Then go just above the function foo() line and type /**.   NetBeans will create the entire phpDoc template.   You update it to give it the “hints”.
This is something I use a LOT now and you’ll see why in a moment.   Here is an example from WPCSL_Settings add_section:
Old school:
/**
* method: add_section
*
* does stuff
*
* params:
*     [0] - named array, blah blah
**/
New:
/**
 * Create a settings page panel.
 *
 * Does not render the panel.  Simply creates a container...
 *
 * <pre><code>
 *   $mySettings->add_section(array('name' => 'general'));
 * </code></pre>
 *
 * @param array $params named array of the section properties, name is required.
 **/
5) If your phpDoc property/variable type is a specific OBJECT then NAME THE OBJECT. 
For example:
class MP {

   /**
    * The WPCSL settings object, shortcut to the parent instantiated object.
    *
    *  @var wpCSL_settings__mp
    */
    private $wpcSettings = null;

}

Why I Am More Productive

Now here is why all these tips with phpDoc are useful and how I’ve slowly made my coding at least twice as efficient.
NetBeans defaults to having code hints and auto-fill turned on.   The cool part about this is it will do a few things like tell you when a required param is missing and flag the line as an error, the same way it does with blatant PHP syntax errors.    If you are creating some new code and you pause for a second with a partially written invocation then it will show you the possible options from a list of PHP functions, custom functions, or methods from objects you’ve phpDoc’ed properly.
Thus, I do something like this:
$this->wpcSettings->a   <pause>
It now shows me all the methods and properties in the WPCSL Settings Class that start with a in an in-place drop down list.
I cursor down to add_section and pause.
It shows me the full documentation on the method including my code example, the required parameters and more.
I press enter and it drops the completed method along with the first prototype in place. I cursor down to select from the various templates, for example if secondary parameters are optional, press enter, and it fills out the entire code block.
I then modify the prototype to fill in my settings and I’m done.
If you do this right you can be ANYWHERE in your code as deep as you need to be.   You never have to go looking for the source, especially if you’ve written decent phpDoc comments.
I used to find myself split-screen looking at the source method or property code to see what it did or how it worked.    Now I spend time documenting the methods or properties in a verbose phpDoc comment and I never have to look at the code again.

Key Points

If you do NOT wrap everything inside a parent class it takes a lot longer to pop up the help.
If you just use the lazy @property object $myvar (or ‘mixed’) syntax you do not get to see all of the methods whenever your newly instantiated object is referenced by the variable name.
If you use things like public, private, setters, getters and use the matching phpDoc elements like @property-read  then NetBeans warns you if you do something illegal like try to directly SET that property.

A Day Older, A Day Smarter

I know some of you probably had productivity helpers like this while at Cyber Sprocket, but if I had known then what I know now I’d have been insisting that we all learn and properly implement phpDocs as our commenting standard.

And as you all know, the other “freebie” with this is that I could easily generate a full HTML (or Markdown) site with all the classes, methods, and properties of any plugin with a few short commands.   I’ve not done that yet but will play with it sometime in the near future.    I need to figure out how to bake it into WordPress content at charlestonsw.com, but I think it would be cool to have an entire plugin + wpCSL documented in an easy-to-browse format on the web.
Posted on

Adding Custom Fields To The WordPress Category Interface

Adding custom fields to the WordPress Category interface can be tricky.  Not because the concept is overly difficult, but because the documentation on the related filters and actions that are built into WordPress is hard to come by.    To make it even more challenging, some of the action names are built dynamically.     Thus I’ve created this post as my personal cheat-sheet guide to help jog my memory.

The notes here are based on my findings in WordPress 3.5.1.

There are 2 parts to the process, rendering the form fields and saving the data.

In the examples below, replace {taxonomy} with the taxonomy ID.  If you are not sure what this is, hover over the "categories" link in the sidebar menu and look for the ?taxonomy={taxonomy} parameter in the URL after the edit-tags.php call.   For example, my Store Pages taxonomy is simply called 'stores', so my actions are create_stores and created_stores.

Rendering The Fields

There are two actions built into WordPress to manage the category interface, one for adding and one for editing a category.   Your function or method simply needs to output HTML for your new fields.  The actions are:

    • {taxonomy}_add_form_fields
    • {taxonomy}_edit_form
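
As a hedged example, here is what the rendering side might look like for a 'stores' taxonomy with a made-up 'store_phone' field; this is not actual Store Locator Plus code:

// Hypothetical: add a custom field to the "Add Category" form for the 'stores' taxonomy.
add_action( 'stores_add_form_fields', 'myplugin_render_store_fields' );

function myplugin_render_store_fields() {
    echo '<div class="form-field">';
    echo '<label for="store_phone">Store Phone</label>';
    echo '<input type="text" name="store_phone" id="store_phone" value="" />';
    echo '</div>';
}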

Saving The Data

AJAX

The main “Categories” interface typically shows an “add category” form on the left side with a list of categories on the right.    This add category form uses AJAX to post the form data back to the server and save any new category you enter here.   This is why the page does not refresh.  As such you will have better luck deciphering what is going on with debug statements if you use a browser debug tool such as Firebug on Firefox and watch the console or net tab for the AJAX (AJAJ really) JSON posts and responses going to/from the server.

Action Hooks and Filters

The AJAX call posts back to ./wp-admin/edit-tags.php, which in turn calls the wp_insert_term method in ./wp-includes/taxonomy.php.

wp_insert_term calls the following actions while processing the insert:

If the slug is empty:

    • edit_terms with $term_id as the only param BEFORE the slug is added.
    • edited_terms with $term_id AFTER the slug is added

After the term is inserted into the term_taxonomy table:

    • create_term with $term_id, $tt_id, $taxonomy as params
    • create_{taxonomy} with $term_id, $tt_id as params
    • FILTER: term_id_filter with $term_id and $tt_id as params

The term cache is cleared and then these actions are called:

    • created_term with $term_id, $tt_id, $taxonomy as params
    • created_{taxonomy}  with $term_id and $tt_id as params
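
A hedged sketch of the matching save side for the same made-up field; since this cheat sheet predates the term meta API, the value is simply stashed in an option keyed by term ID:

// Hypothetical: save the custom field when a 'stores' term is created.
add_action( 'created_stores', 'myplugin_save_store_fields', 10, 2 );

function myplugin_save_store_fields( $term_id, $tt_id ) {
    if ( isset( $_POST['store_phone'] ) ) {
        update_option( 'store_phone_' . $term_id, sanitize_text_field( $_POST['store_phone'] ) );
    }
}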

Useful Info

Most of the hooks and filters used to add data to the category interface can be implemented in the admin_menu action hook.  Using admin_menu() with a further admin_init() action hook buried within is one of the best ways to ensure all the setup, filters, roles & caps, and other “niceties” are in place before firing off your custom admin-centric hook or filter.

HOWEVER, you cannot attach your custom methods for the create_ or created_ action hooks deep inside admin_menu() or admin_init().  Why?  Because they run through the AJAX action stack and the AJAX action stack does not fire admin_menu().

Summary

So there you have it, my cheat sheet.   There are likely to be hiccups when implementing so don’t be afraid to add in some debugging code on your development system and be sure to check the JSON posts via the WordPress AJAX engine.

 

Posted on

PHP Pretty Print XML

I have been working on MoneyPress : Amazon Edition to get it updated for the latest API release and bring it into the Charleston Software Associates stable of products.  Along the way I found myself needing to debug the XML being returned from the Amazon Product API.   Here is a quick trick for doing that.

...
$returnedXML = $result['body'];
$xmlDoc = new DOMDocument();
$xmlDoc->loadXML( $returnedXML );
$xmlDoc->formatOutput = true;
print '<pre>' . htmlentities( $xmlDoc->saveXML() ) . '</pre>';
...


Posted on

WordPress Activation Hook


We recently discovered an issue in our commercial plugins related to a change in the WordPress API. It turns out that since WordPress 3.1 was released the register_activation_hook() function is no longer called during a plugin upgrade! This is a significant change in behavior from previous versions that called the WordPress activation hook on every update.   This has caused numerous problems and forced Cyber Sprocket to come up with a patch in our own wpCSL framework.

Why Is This A Problem?

Any site running a version of WordPress older than 3.1 would automatically get any feature  and supporting application tweaks whenever they upgraded the plugin.   Most plugin authors, Cyber Sprocket included, would use the register_activation_hook API call to make sure the user had the latest database structure, settings, and other elements that keep the plugin working.    For example, with Store Locator Plus 3.0 this hook would ensure that the user’s Google Maps API v2.0 settings were converted to the Google Maps API v3.0 equivalent.

As of WordPress 3.1 the function that does this conversion is not called.    To make matters worse, it is skipped only in certain circumstances.  For example:

  • User installs upgrade via a downloaded zip file: updates called.
  • User does auto-update on a deactivated plugin then activates the plugin: updates called.
  • User does auto-update on an activated plugin: updates NOT called.

As you can see, this is inconsistent.  Even worse, plugins that worked fine up to version 3.1 now have the potential to suddenly break.

The Solution?

Cyber Sprocket has created a new version of our wpCSL framework that we use to build WordPress plugins.   The update uses standard admin panel interfaces to call our own “plugin has changed” hooks.    The short version of how this works is as follows:

  • User is on the admin panel…
  • The plugin is active…
  • Check the version of the plugin as stored in the options table of WordPress…
  • Is it different than the current version of our plugin?
    • Yes, run the upgrade callback function if it is set.
    • Update the plugin version stored in the options table to the current installed version.

That’s it.  A fairly simple solution, but more things we need to manage in our plugin framework because the WordPress development team changed things.
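
A minimal sketch of that pattern with made-up option, constant, and callback names; this is not the actual wpCSL code:

// Hypothetical version-check run on admin page loads.
add_action( 'admin_init', 'myplugin_check_version' );

function myplugin_check_version() {
    $installed_version = get_option( 'myplugin_version', '0.0' );
    if ( version_compare( $installed_version, MYPLUGIN_CURRENT_VERSION, '<' ) ) {
        myplugin_run_upgrade( $installed_version );                     // the "plugin has changed" callback
        update_option( 'myplugin_version', MYPLUGIN_CURRENT_VERSION );  // remember the new version
    }
}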

 

Posted on

Passing Variables To JavaScript In WordPress

We have touched on several complex subjects when it comes to writing plugins for WordPress that make use of JavaScript. In these articles we discuss built-in scripts, custom static scripts, and dynamic scripts. Dynamic scripts are the scripts that need access to information from the WordPress application environment in order to function properly, such as passing in a setting stored in the WordPress database or a variable that is calculated within PHP. There is a simple trick for getting variables into your JavaScript that is quite a bit more elegant than our dynamic scripting approach using the output buffering PHP trick we outlined earlier.

In later versions of WordPress (2.2+ if I recall) there is a function that was originally intended for language translation. It is meant to localize your scripts. You can leverage this feature to load up a variable array in JavaScript which provides an effective mechanism for getting your WordPress variables into the JavaScript environment.

Localize Script Outline

The basic premise for getting data into JavaScript is as follows:

  • Register your script in the wp_enqueue_script() action hook.
  • Localize your script when you render your shortcode.
  • Enqueue your script when you render the footer.

The important part is to use the footer enqueue method to ensure that your variable processing happens ahead of time. If you are doing a simple script you could put the register, localize, and enqueue steps all in the function you write for the wp_enqueue_script action hook. You will want to separate this into the 3 steps outlined above, however.

Register The Script

Here is an example from one of our plugins.

In the main application file, call our hook for the wp_enqueue_scripts action:

add_action('wp_enqueue_scripts',array('SLPlus_Actions','wp_enqueue_scripts'));

In the SLPlus_Actions class:

/*************************************
 * method: wp_enqueue_scripts()
 *
 * This is called whenever the WordPress wp_enqueue_scripts action is called.
 */
 static function wp_enqueue_scripts() {
     //------------------------
     // Register our scripts for later enqueue when needed
     //
     wp_register_script(
       'slplus_map',
       SLPLUS_PLUGINURL.'/core/js/store-locator-map.js',
       array('google_maps')
     );
 }

These steps tell WordPress to keep track of our JavaScript, helping do some version management, cache management, and get the script ready to be rendered. Since WordPress 3.3 will automatically set the “render in footer” flag for any script enqueued after the wp_enqueue_scripts() action hook, we don’t need to set that here.

Pass Our Variables To JavaScript

When we process our shortcode we do two things. We tell WordPress to manipulate the JavaScript rendering engine to pass in a named array of variables we want our script to know about. We also set a global define so that we know our shortcode has been rendered so we can control IF the script is rendered at all when we call our last-stage processing hooks in WordPress.

In our shortcode processing function:

// Let's get some variables into our script
//
$scriptData = array(
    'map_domain'    => get_option('sl_google_map_domain','maps.google.com'),
    'map_home_icon' => $slplus_home_icon,
    'map_type'      => get_option('sl_map_type','G_NORMAL_MAP'),
    'map_typectrl'  => (get_option(SLPLUS_PREFIX.'_disable_maptypecontrol')==0),
    'zoom_level'    => get_option('sl_zoom_level',4),
);
wp_localize_script('slplus_map','slplus',$scriptData);

// Set our flag for later processing of JavaScript files
//
if (!defined('SLPLUS_SHORTCODE_RENDERED')) {
    define('SLPLUS_SHORTCODE_RENDERED',true);
}

Enqueue The Script

Now that we have our script registered and told WordPress to setup our environment we can now render our script. However we only want WordPress to render the script if our shortcode was processed, which is what the global define was for. We also find that some themes skip the footer processing which disables footer scripts, so we are going to force footer scripts to run within our late-stage WordPress action hook.

In our SLPlus_Action Class:

/*************************************
 * method: shutdown()
 *
 * This is called whenever the WordPress shutdown action is called.
 */
function shutdown() {

    // If we rendered an SLPLUS shortcode...
    //
    if (defined('SLPLUS_SHORTCODE_RENDERED') && SLPLUS_SHORTCODE_RENDERED) {

        // Enqueue our registered JavaScript
        //
        wp_enqueue_script('slplus_map');

        // Force our scripts to load for badly behaved themes
        //
        wp_print_footer_scripts();
    }
}

Using The Variables

Now our script only renders on pages where our shortcode appears and we now have our WordPress variables easily accessible from within the script. How do we reference these in our script? That’s the easy part, here is an example:

In our store-locator-map.js file:

/**************************************
 * function: sl_load()
 *
 * Initial map loading, before search is performed.
 *
 */
function sl_load() {
    if (GBrowserIsCompatible()) {
        geocoder = new GClientGeocoder();
        map = new GMap2(document.getElementById('map'));
        if (parseInt(slplus.overview_ctrl)==1) {
            map.addControl(new GOverviewMapControl());
        }
        map.addMapType(G_PHYSICAL_MAP);

        // This is asynchronous, as such we have no idea when it will return
        //
        geocoder.getLatLng(slplus.map_country,
            function(latlng) {
                if (!slplus.load_locations) {
                    map.setCenter(latlng, parseInt(slplus.zoom_level), eval(slplus.map_type));
                }

                var customUI = map.getDefaultUI();
                customUI.controls.largemapcontrol3d = slplus.map_3dcontrol;
                customUI.controls.scalecontrol = slplus.map_scalectrl;
                customUI.controls.hierarchicalmaptypecontrol = slplus.map_typectrl;
                map.setUI(customUI);

                if (slplus.disable_scroll) { map.disableScrollWheelZoom(); }

                if (slplus.load_locations) {
                    sl_load_locations(map,latlng.lat(),latlng.lng());
                }
            }
        );
    }
}

Obviously our example has variables we culled out of our localize_script section above, but you get the idea. The slplus prefix is based on the 2nd parameter in our wp_localize_script function call in the previous section. The variable name after the slplus prefix is the key of the $scriptData variable that we passed into that function call as the 3rd parameter.

Summary

By using wp_localize_script you can make use of the wp_register_script and wp_enqueue_script WordPress functions to manage your script loading and variable passing. This is a much cleaner environment for managing scripts than using the PHP output buffer tricks discussed earlier.

However, not all plugins play well with each other. Not all themes follow the rules. In many cases the methods we outline here may not work. In our experience, however, the more plugins that use these modern methodologies the more stable and efficient WordPress is. Eventually those plugins and themes that do not play well with others will atrophy and only those that are well crafted and utilizing best methods will survive.

Posted on

WordPress and JavaScript Part 2


This is our second article in a series about working efficiently with JavaScript in WordPress. There are a lot of sites and lots of examples on how to implement JavaScript in WordPress. Many of the articles we came across were incorrect or outdated. What was once the viable, or possibly the only available, method for implementing JavaScript hooks a few years ago with WordPress version 2.X is not the most efficient method today. In our second article we touch on this point and continue to distill the information we’ve uncovered that has helped us create better JavaScript hooks in our plugins.

This follow-on to the first article unveils some key points about using JavaScript in WordPress:

  1. Not all plugins follow best practices.
  2. Some very popular plugins (one of which has 6,000,000 downloads) completely bypass wp_register_script and wp_enqueue_script, instead printing <script src="blah"> right in the HTML output header.
  3. These plugins, and themes for that matter, thwart any attempts to abide by best practices in your own plugin.

What we found when working on various client sites is the techniques in our last article are not foolproof.

The main issues we discovered:

  • Some scripts force an archaic version of jQuery to be used, in our case v1.4.X when the current WordPress version of jQuery is 1.7.1. Our plugin, for one, requires jQuery 1.7 or higher as we follow best practices for jQuery and use the .on() method for invoking actions.
  • Some plugins kill the wp_enqueue_script process well before the WordPress action stack has exited. In our case it was being shut down well before the content, and thus shortcodes, were rendered.

Our workaround:

– Register AND enqueue the scripts in the wp_enqueue_scripts action.

But…

This loads your scripts on EVERY PAGE. Exactly what we want to avoid.

The solution that gets around “misbehaving” plugins? Use the much-hated (at least by me) filter approach to check if the current post has your shortcode.

However, unlike other archaic methods for implementing this test with a direct database query, you can use the built-ins of WordPress to make this run a bit faster. It still accesses the database so it is slower than simply firing enqueue when the shortcode is rendered but it is a price we have to pay if we want to live in the same neighborhood with our miscreant neighbors.

How to filter your enqueue script:

function setup_scripts_for_plugin() {
    global $post;
    $pattern = get_shortcode_regex();
    if ( preg_match_all('/'.$pattern.'/s',$post->post_content,$matches) &&
         array_key_exists(2,$matches) &&
         in_array('our_shortcode', $matches[2]) ) {
        // jQuery ships with WordPress and is already registered by core; enqueue is enough.
        wp_enqueue_script('jquery');
    }
}

Summary

That’s it. Now your scripts will be loaded only on the page with the shortcode you specified in the in_array() call.

In our example we are enqueueing the jQuery built-in, but this works just as well with any script, including your own custom static or dynamic scripts. Speaking of dynamic scripts, we have a new trick for that too… coming up next…

The corresponding presentation for this discussion is available here:

WordPress Plugin Tips & Tricks Apr 2012 (PDF)

Posted on

WordPress and JavaScript Part 1

Introduction

For those who were not present, we had a discussion about wpCSL and using JavaScript in a WordPress plugin.

The part of the discussion that would be of interest to the general public revolved around the use of wp_register_script and wp_enqueue_script and the best practices for implementing scripts.

WordPress Plugin Tips & Tricks Mar 2012 (PDF)

The Key Points

  1. Call your function to REGISTER scripts early in the action stack for WordPress. Typically hooking into the wp_enqueue_scripts action.
  2. Call your function to ENQUEUE the scripts when you know you’ll need them. Typically this should be when the shortcode is rendered. This prevents your scripts from loading on EVERY page in WordPress.
  3. There are three basic script types, per my definition of scripts, that are important:
    1. Built-in Scripts: WordPress has approximately 30 scripts that ship with or are directly accessible from within WordPress.
    2. Static Scripts: your custom javascript that depends on no external PHP variables or data from the database
    3. Dynamic Scripts: custom javascript that references PHP variables or data from the database
  4. Dynamic scripts can use the ob_start()/ob_get_clean() output buffering feature of PHP to render the JavaScript (a sketch follows this list).
  5. Hook dynamic scripts in LATE in the action stack of WordPress, I prefer wp_print_footer_scripts.
  6. Dynamic scripts implemented this way should use a global flag (a property of our wpCSL driver class in our case) to only render the script when the shortcode was rendered. Unlike enqueue_script this footer action hook is called for ALL pages.
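
A hedged sketch of items 4 through 6; the flag and function names are made up, and this is not the wpCSL implementation:

// Hypothetical dynamic script rendered in the footer only when our shortcode ran.
add_action( 'wp_print_footer_scripts', 'myplugin_print_dynamic_script' );

function myplugin_print_dynamic_script() {
    global $myplugin_shortcode_rendered;   // flag set while rendering the shortcode
    if ( ! $myplugin_shortcode_rendered ) {
        return;                            // this footer hook fires on every page; bail if our shortcode was not used
    }

    ob_start();                            // capture the JavaScript so PHP values can be dropped in
    ?>
    <script type="text/javascript">
        var myplugin_zoom = <?php echo (int) get_option( 'sl_zoom_level', 4 ); ?>;
    </script>
    <?php
    echo ob_get_clean();                   // send the rendered script to the page footer
}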

Summary

There are a couple of follow up discussion based on findings from working “in the field” on client systems. Those will be posted next.

Posted on

PHP + PostgreSQL App Performance Example

One of our projects this week was to improve the performance of a payment processing system for a client. The system performed well under limited data sets, but as the data set grew larger the response time increased exponentially. In reviewing the application we found several design flaws that were significantly impacting performance. This article outlines some of the issues we found.

Overview

The take-away from this article is that you should know the details of the implementation from ALL angles.  If you are a code junkie, make sure you review and understand the database angle.  Same thing in reverse: if you are a DB guru, take the time to understand the code.    Nobody can do an effective job of application design & implementation without understanding the big picture.

In this case there is way too much emphasis on putting all the business intelligence into the database.   While there is nothing wrong with that, and in fact that is often a preferred architecture, it was not well thought out and thus not well implemented in this case.   One of the bigger mistakes was putting the business intelligence into simple views versus using proper stored procedure designs.

Bottom line, sometimes the better solution is to put SOME intelligence in the data engine and move some of the extraction/looping and other processing logic on the code side.  ESPECIALLY in a case like this one where we know that the client will not, at least any time soon, be accessing the data from any applications outside the server-side PHP application that is implemented for them.  Thus we know we could put all the intelligence in the code, though that makes for a LOT more work if/when they decide to introduce things like mobile apps.

Lesson 1: Don’t Put Multiple Sub-Selects In A View

This is a simplified example from a simple view that was built in Postgres for the client.

SELECT a,b,
to_char(c_date, (select settings.value from settings where settings.id='date_format')) as c_date_str,
to_char(d_date, (select settings.value from settings where settings.id='date_format')) as d_date_str,
to_char(e_date, (select settings.value from settings where settings.id='date_format')) as e_date_str
...

This view is using a settings table which holds the date format.  The client can use the web interface to change the date format, which is stored in the settings table.   That is a good web app design.

Doing multiple queries to retrieve that date format in a single view is a bad design.   In the simple example we have above we end up hitting the database, and thus doing disk I/O, no less than FOUR TIMES for a single query.  That is going to be slow.

There are myriad better options here; these are the two I would consider:

  • Move the sub-select into a stored procedure and turn it into a function. An intelligent design of that procedure will retain the initial data fetch in a global variable that is tested on each call, blocking future data I/O requests. Data I/O is now 2 calls vs. 4+ for the view.
  • Return the raw data. Allow the code to format the strings. The code can easily fetch the date format and apply the equivalent PHP formatting call ONCE to all of the raw data. This also cuts down the data I/O. Using raw data also increases the chances for the PostgreSQL engine to optimize the query via its internal heuristics engine.
In our application improvement the path taken was to avoid this view whenever possible.  As it turns out, this view is so complex and convoluted that there are often shortcuts to just the few data elements that are needed, and constructing new queries retrieved the necessary data without all of the view's complexity and data overload.  In this case the view is so complex that it hampers performance throughout the application while providing limited benefit.  The long term solution will be to break the view into a subset of stored procedures; for the few cases where the larger complex view is actually viable we will see improved performance via an intelligent series of cascading stored procedures or code-side logic.  A minimal sketch of the second option, code-side date formatting, follows.
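The sketch below is untested and simplified. It assumes the settings table from the view above holds a PHP date()-compatible format string (in the real system a mapping from the PostgreSQL to_char format would be needed); the table name my_table and the helper function are hypothetical.

<?php
// Hedged sketch: fetch the date format from settings ONCE, then format
// raw timestamps on the code side instead of inside the view.
function get_date_format( $database ) {
    static $format = null;          // cached for the rest of the request
    if ( $format === null ) {
        $format = $database->db()->queryOne(
            "SELECT value FROM settings WHERE id = 'date_format'"
        );
    }
    return $format;
}

$format = get_date_format( $database );
$result = $database->db()->query( 'SELECT a, b, c_date, d_date, e_date FROM my_table' );

while ( $row = $result->fetchRow( MDB2_FETCHMODE_ASSOC ) ) {
    // One PHP formatting call per column, no extra sub-selects in the database.
    $row['c_date_str'] = date( $format, strtotime( $row['c_date'] ) );
    $row['d_date_str'] = date( $format, strtotime( $row['d_date'] ) );
    $row['e_date_str'] = date( $format, strtotime( $row['e_date'] ) );
    // ... push $row onto the data stack ...
}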

Lesson 2: Use Parameter Binding

Any modern database and its related interfaces will support parameter binding.  If your database does not support this and you are building enterprise-class applications, it is time to select a new data engine.  PostgreSQL has supported parameter binding for years, and nearly all of the data interfaces, including PHP’s MDB2 interface, have supported it for years as well. With parameter binding you will get a significant performance boost when iterating over data, especially in a nested loop fashion.
In our example the code was doing something similar to this, simplified for instructional purposes:
// Pull EVERYTHING from both views into PHP memory...
$qry1 = 'SELECT v_id,a,b,c,d FROM my_first_view WHERE NOT paid';
$result = $database->db()->query($qry1);
$dataset1 = $result->fetchAll(MDB2_FETCHMODE_ASSOC);
$datastack = $dataset1;

$qry2 = 'SELECT v_id,e,f,g FROM my_second_view WHERE NOT paid';
$result = $database->db()->query($qry2);
$dataset2 = $result->fetchAll(MDB2_FETCHMODE_ASSOC);

// ...then collate the two full result sets in a nested PHP loop.
foreach ($dataset1 as $data1) {
    foreach ($dataset2 as $data2) {
        if ($data1['v_id'] == $data2['v_id']) { 
             $datastack['subdata'][] = $data2; 
        }
    }
}

There are several significant performance issues here.   To start with there is significant memory consumption as we need to collect ALL the data from the database into memory.  We then collate the data from two complete sets in memory to create a single data element.    There are much more efficient ways to do this without fetching all data in memory first.

The better option would be to fetch the data from dataset1 on a row-by-row basis and push the data onto the stack one record at a time.  The inner loop for dataset2 should then select a subset of data that is specifically for the matching v_id from the outer dataset1 loop.   This is where parameter binding comes in.

Here is the same loop, in untested and simplified form, using parameter binding.  In our real-world example this one simple change increased performance more than 50x: the database can be much more intelligent about how it selects subsets of data, and the PHP overhead for both memory and stack management is significantly reduced:

// give us parameter binding in MDB2 please
$database->db()->loadModule('Extended'); 

// setup our queries
$qry1 = 'SELECT v_id,a,b,c,d FROM my_first_view WHERE NOT paid';
$qry2 = 'SELECT v_id,e,f,g FROM my_second_view WHERE v_id = ?';
// get the "outer" data
// since we know we will use all the "outer" data, just put it
// directly on the data stack, cutting this memory consumption in half
$result = $database->db()->query($qry1);
$datastack = $result->fetchAll(MDB2_FETCHMODE_ASSOC);

// still loop through outer data to drive the 2nd query
foreach ($datastack as $data1) {
    // Fetch the data2 subset of matching data as
    // a named array, only getting those records that
    // match on v_id... in essence an SQL JOIN done
    // via code
    $dataset2 = $database->db()->extended()->getAll(
        $qry2,
        null,
        array($data1['v_id']),
        array('integer'),
        MDB2_FETCHMODE_ASSOC
        );
    // Now we attach each of the matching elements in
    // the "inner" data set to the data stack, attaching
    // it under the related v_id 
    //
    foreach ($dataset2 as $data2) {
             $datastack['v_id'][$data1['v_id']]['subdata'][] = $data2; 
    }
}

This can be refined and improved further per our discussion above, but you get the idea; one more refinement is sketched below, and I’ll leave the rest to you.
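The sketch below is an untested example of that refinement: use MDB2’s core prepare()/execute() calls instead of the Extended module’s getAll() so the inner query is parsed once before the loop rather than on every iteration.

// Prepare the inner query a single time, binding a new v_id on each pass.
$stmt = $database->db()->prepare(
    'SELECT v_id,e,f,g FROM my_second_view WHERE v_id = ?',
    array('integer'),       // type of the bound parameter
    MDB2_PREPARE_RESULT     // this statement returns a result set
);

foreach ($datastack as $data1) {
    $res      = $stmt->execute(array($data1['v_id']));
    $dataset2 = $res->fetchAll(MDB2_FETCHMODE_ASSOC);

    foreach ($dataset2 as $data2) {
        $datastack['v_id'][$data1['v_id']]['subdata'][] = $data2;
    }
}

$stmt->free();  // release the prepared statement when done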

You may be asking “why didn’t you just do a simple JOIN in the database engine?”  Good question.  The real-world example is much more complex than this; some of the data elements and views in play make that solution harder to maintain, cause the database engine to trip up on its optimization, and actually make it SLOWER in our real-world case.   Here we are simplifying to illustrate the general concept only.

Summary

A couple of simple real-world examples of improving performance have been illustrated here.    When refactoring a real-world application there are often complex interactions that need to be researched & evaluated.  The solutions are rarely simple and often can be approached in several ways.   The options shown here are not necessarily the best solutions but are the solutions that were the quickest to implement while providing a significant performance increase to the client.

Finding the balance between results and costs is always a challenge from the business perspective.    From the technical perspective a similar balance is often necessary between database and code intelligence.  Knowing your environment as a whole will help guide you down the proper path toward improving application performance.

 

Posted on

WordPress – plugin does not have a valid header

We’ve run into this one a couple of times when publishing our WordPress plugins. If you look closely at the URL when that error message appears, you will often find that you have a duplicate “main” file that launches the plugin. All plugins should have a single PHP file that “runs the show”, and it should be named the same as the plugin subdirectory.

If you have a plugin named “Store Locator Plus” and it resides in the plugin directory store-locator-plus, then the main file in that directory should be called store-locator-plus.php. If that file is missing, WordPress will try to guess, and often guesses wrong, which file starts the plugin. That is one source of the invalid header issue.
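For reference, the “header” WordPress is complaining about is the comment block at the top of that main file. The values below are illustrative only; the Plugin Name line is the only field WordPress strictly requires:

<?php
/*
Plugin Name: Store Locator Plus
Description: Illustrative header only; Plugin Name is what WordPress reads to identify the plugin.
Version: 1.0
*/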

Another source that we recently ran into was having multiple copies of store-locator-plus.php in our subdirectories. Duplicate copies were hidden down in the WPCSL-generic subdirectory. Like the Highlander, there can be only one; all duplicates must be destroyed wherever they live. The trick to getting rid of the “mutants” once you have published to the WordPress svn repository is to go into your local subversion directories and run svn del <offending-file> first, then commit that back to the repo with svn ci -m 'There can only be one'. Your future updates will then no longer clone the errant file(s).

The other possible source of this problem is symlinks or shortcut directories that create the appearance of duplicate main files.

The bottom line: make sure your plugin subdirectories are clean and that there are no duplicate <my-plugin>.php files anywhere in the subdirectory structure.