Lazy load all the things!

It may not be obvious to everybody, but everything that relies on the use of the src attribute can be “lazy loaded”. If you’re unsure about what the term “lazy load” means, it is the deferred loading of a resource, performed through JavaScript, which, in simpler terms, can be rephrased as “I won’t load it until I really need it”. Now, what most people do is defer the loading of images: you take your img tags, strip them of their src attributes, and store the value somewhere else, like a data-src attribute. This way, the browser won’t load the resource (the image, in this case) at all. When you do need to load the image, simply convert the data-src attribute back to a src attribute with the same value, and the browser will start loading the resource instantly.
var image = document.querySelector( "img" );

image.setAttribute( "src", image.getAttribute( "data-src" ) );
Recently, I was working on a project that required a lot of embeds to be present on the page, with more of them optionally loaded upon clicking a pagination element. Embeds can be heavy. No, heavy is not quite the word I’m looking for: lots of embeds can kill a web page’s performance. But, again, I had no choice but to have a bunch of iframes together side by side. Then it dawned on me: the same reasoning we use for images could be applied to those iframes as well. As a matter of fact, whatever relies on a src attribute, even stylesheets, can be lazy loaded like that. And that’s exactly what I ended up doing: having a bunch of inert iframes, each one with its data-src attribute duly filled, waiting to be converted to the standard src attribute. Sure, if you load a lot of those iframes, you’ll still end up with a pretty resource-demanding page, but sure enough, the initial load will be much, much lighter.
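To sketch the idea for the embeds case, a minimal helper could look like the following (the iframe[data-src] selector and the data-src convention are just assumptions, adapt them to your markup):

```javascript
// Swap the stored URL into src, which makes the browser fetch the resource.
function lazyLoad( element ) {
	var src = element.getAttribute( "data-src" );

	if ( src ) {
		element.setAttribute( "src", src );
		element.removeAttribute( "data-src" );
	}
}

// In the browser, e.g. inside the pagination click handler:
if ( typeof document !== "undefined" ) {
	var frames = document.querySelectorAll( "iframe[data-src]" );

	Array.prototype.forEach.call( frames, lazyLoad );
}
```

Until lazyLoad runs, the iframes stay completely inert: no request is made for their contents.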

jQuery force element redraw

So, today I’ve discovered a neat trick to solve an issue that’s been bothering me essentially since I started doing web development.

As you know, jQuery offers the ability to modify the DOM at will. Now, all of the functions available to perform DOM operations are synchronous, meaning that the next instruction will start only when the one that precedes it has finished its task.

This is not necessarily true for the actual rendering of the DOM manipulation, leaving us in the middle of a muddy, unfathomable jungle, where something looks synchronous, but actually isn’t.

Before complex UI libraries came around, this led generations of developers, including myself, to solve this issue with loads of setTimeout calls, whose milliseconds parameter was inconsistent, at best: sometimes a 1 millisecond value would do it, sometimes it was 10 milliseconds, other times any other number that you could think of.

In all honesty, this felt weird and unstable from day one: so what if there was a way to get rid of setTimeouts entirely and force a redraw of the manipulated DOM element?

Turns out, it’s simpler than I thought. Here’s the snippet:

$.fn.force_redraw = function() {
    return this.hide( 0, function() {
        $( this ).show();
    } );
};

Simply hide and instantly show the element: you’ll be good to go, and you’ll be able to say goodbye to at least some of your setTimeouts.

Multi-dimensional isset

When working with large array/object data, especially when the overall structure is a mix of the two types, it is often useful to check if a given property exists.

This is something particularly relevant since data structures may change over time, needing to be reshaped.

For that task, I’ve written a small function that checks if a given sub-key exists in an array/object: if it does, the function returns its value; if it doesn’t, it returns either a specified default value or boolean false.

The function produces the following results:

$arr = array(
	'a' => array(
		'b' => 42,
		'c' => new stdClass(),
	),
);

$arr['a']['c']->foo = "bar";

var_dump( ev_isset( $arr, array( 'a', 'b' ) ) ); // int(42)
var_dump( ev_isset( $arr, array( 'a', 'd' ) ) ); // bool(false)
var_dump( ev_isset( $arr, array( 'a', 'c', 'foo' ) ) ); // string(3) "bar"
var_dump( ev_isset( $arr, array( 'a', 'c', 'baz' ), 'default value' ) ); // string(13) "default value"
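The post uses ev_isset without showing its body; here’s a minimal sketch that reproduces the results above (the implementation details are my reconstruction, not the original code):

```php
<?php
// A sketch of ev_isset: walks $keys one level at a time into $data,
// handling both arrays and objects, and bails out with $default on
// the first step that doesn't exist.
function ev_isset( $data, $keys, $default = false ) {
	foreach ( (array) $keys as $key ) {
		if ( is_array( $data ) && array_key_exists( $key, $data ) ) {
			$data = $data[ $key ];
		} elseif ( is_object( $data ) && property_exists( $data, $key ) ) {
			$data = $data->$key;
		} else {
			return $default;
		}
	}

	return $data;
}
```

Using array_key_exists instead of isset means a stored null value is still considered “present”, which is usually what you want when inspecting reshaped data.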


Your code is not the end of the story

This is a quick post to remind me of something important, something that maybe is not only relevant to WordPress, but surely is magnified in that context.

Before starting my own gig, I worked for a software company. Sure, we could pick up data from external sources, but, apart from these sporadic integrations, the whole show started and ended with things that we built, things that, supposedly, we knew 100%.

When working, developing, designing with WordPress, your code is never the end of the story. Whether it’s a plugin or a theme, your code will always run alongside other code, written by other people, with varying degrees of skill; people you will most likely never know.

If you’re like me, you might reject this idea, even for a little while: running other people’s code can expose yours to issues, and generally impact the end product you’ve so carefully created, possibly making it look bad without you having done anything really wrong.

Recently, we’ve fixed a couple of compatibility issues with a product we’re publishing. One of those issues, specifically, got me thinking: it was something that I never thought could be a possibility, yet it took only a couple of minutes to adapt what we wrote to that unforeseen scenario.

I’m not saying that we must expect the unexpected, rather that you need to embrace this heterogeneity as a fact, and work with it, not against it.

As with all diversity, it may take some time to accept, but the reward, not necessarily for you, but for the people who are going to use your product, is too big to be missed.

An alternative to file_get_contents

The official WordPress Theme Review guidelines are fairly strict in some cases, and for a good reason: those best practices, tips and rules ensure that the risk of having bad code pushed to the ever growing themes and plugins repository is kept to the minimum.

One of those rules dictates that direct file operations aren’t allowed, unless they’re performed through the Filesystem API. Due to this restriction, the use of a handy function such as file_get_contents is prohibited, and its occurrences in a theme are promptly flagged by the Theme Check plugin.

For local reads, though, there’s a way to access a file’s contents without invoking file_get_contents:

$content = implode( '', file( $path_to_file ) );

which essentially accesses the file, reads its lines into an array, and then joins the array’s elements into a single string.
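If you want to convince yourself that the two approaches are equivalent, here’s a quick self-contained check using a temporary file (nothing WordPress-specific): file() keeps each line’s trailing newline, so joining with an empty glue reproduces the file byte for byte.

```php
<?php
// Write a small multi-line file to a temporary location.
$path = tempnam( sys_get_temp_dir(), 'demo' );
file_put_contents( $path, "line one\nline two\nno trailing newline" );

// file() returns the lines (newlines included); implode() glues them back.
$contents = implode( '', file( $path ) );

var_dump( $contents === file_get_contents( $path ) ); // bool(true)

unlink( $path );
```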

How to deploy on a production server from your local Git repository

If you’re like me and began to do what you do more than a decade ago, you’ll definitely remember how we all used to push updates to our production servers via FTP. There’s no shame in that: we’ve all been beginners.

Using FTP might even be fine today for teeny-weeny projects, but two things are for sure:

  1. it’s slow,
  2. it will almost always lead to uncertainty regarding the synchronization between your local copy of the project and the remote one on the server (I hear that some people still develop directly on a server, but that’s a rant for another occasion).

Luckily for us, we have version control systems such as Git and our work is never really lost.

So how can we avoid using FTP to upload updates to our production servers?

You set up a new repository hosted on the very same production server your project will end up on, and push updates to that repository.

So let’s assume that you have developed a theme for WordPress and you want to keep it in sync with your local copy.

I wouldn’t want the theme folder on the production server to host the repository itself, so I’d opt to set it up in a folder outside public_html, then listen for push events on that repository and perform a checkout of the project into the actual theme folder.

So, since I’m lazy, I’ve created a little script to make my life easier:

#!/bin/bash

rm -rf $1.git
mkdir -p $1.git
cd $1.git
git --bare init

cat <<EOF > hooks/post-receive
#!/bin/sh
mkdir -p $2
export GIT_WORK_TREE=$2
git checkout -f
EOF

chmod +x hooks/post-receive

You can create a file containing that script (let’s call it, the name is just an example) in the folder that will host your repositories on the server, adjust the file permissions so that it can be executed (chmod +x ./, and launch it with the following syntax:

./ repository-name path-to-actual-folder

This script does the following things in sequence:

  1. remove any pre-existing repository with the specified name,
  2. set up a new blank repository in a specific folder whose name is indicated by the first parameter (in our case repository-name),
  3. create a hook, triggered upon receiving a push, that performs the checkout into the actual project folder (the second parameter passed to the script, in our case path-to-actual-folder).

So, assuming that you have a standard WordPress installation in the root directory, you could create a repository that points to the themes folder:

./ your-theme-name /home/your-user/public_html/wp-content/themes/your-theme-name

The only remaining thing is to upload stuff to the production server, for which you’ll have to create a new remote pointing to the newly created repository through SSH, and then push towards said remote, with something along these lines:

git remote add production ssh://your-user@your-server/home/your-user/repos/your-theme-name.git
git push production master

This assumes that you’ve created the repository in a repos folder in your home directory, which I’d recommend.

As you can see, this is nothing too complex, but it’s a pretty nice time saver nonetheless.

A note regarding importing serialized data in WordPress

The WordPress Importer hasn’t received much love lately.

It does work without any particular issue; it’s a tad slow, but in the end it doesn’t give you any particular headache, apart from throwing a couple of warnings here and there if you have the WP_DEBUG constant turned on.

Luckily for us, a redux version is in the works, maintained by the fine folks at Human Made, that looks very promising.

The other day I was trying to import a couple of pages that had serialized data in one of their post metas and the Importer kept failing at adding those metas, while still being able to correctly create the pages.

This situation left me baffled for a while, so I started digging.

The reason for the Importer not being able to import serialized data was related to line endings contained in the array I was dealing with: in particular \r\n line endings had to be converted to \n in order for the importer not to fail.
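My understanding of why this bites is that serialize() embeds the byte length of every string; if the line endings get converted after the data has been serialized, the declared lengths no longer match the actual bytes, and unserialize() fails. A quick self-contained illustration:

```php
<?php
// serialize() records each string's byte length in the payload.
$original = serialize( array( 'note' => "first line\r\nsecond line" ) );

// Simulate a CRLF-to-LF conversion happening *after* serialization.
$converted = str_replace( "\r\n", "\n", $original );

var_dump( unserialize( $original ) !== false );   // bool(true)
var_dump( @unserialize( $converted ) === false ); // bool(true): length mismatch
```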

I’ve written a recursive function you might want to pass your data through before actually saving your post meta, in case it might contain values with line endings, such as the ones generated by user input in a textarea:

function replace_endlines_deep( $data ) {
	if ( ! is_array( $data ) ) {
		return $data;
	}

	foreach ( $data as $k => $v ) {
		if ( is_array( $v ) ) {
			$data[ $k ] = replace_endlines_deep( $v );
		} else {
			$data[ $k ] = str_replace( "\r\n", "\n", $data[ $k ] );
		}
	}

	return $data;
}

So actually saving the data to the database would become:

// ... make sure to sanitize user input ...

$data = replace_endlines_deep( $data );

update_post_meta( $post_id, 'my_serialized_data', $data );

On sharing knowledge

A few days ago I was thinking about how I started doing what I do for a living. I think everyone has a memory of a moment that started it all.

For me it was when I first inspected a web page to discover what was hidden behind the words “page source“. I have flashes of that memory: I remember that the page I was looking at was grey, with Times New Roman text, and the classic default blue links.

I distinctly remember though the sense of wonder that I had after opening said page in the default text editor of my operative system, changing a couple of characters, saving and hitting refresh in my browser.

It was the year 1999, or something like it, and I was officially in love.

I also distinctly remember the first time I uploaded a simple HTML document to a free hosting space I had back then.

It was a time when you had to wait a few minutes before actually be connected to the Internet, and those minutes were filled with this weird sound.

The passion that I have for what I do today started because I was able, with a little initiative on my part, to try to alter something that had been written by someone else, just for the sake of seeing what would happen.

My initiative isn’t the end of the story, but merely its beginning.

I was able to change those characters in that grey looking hypertext document because HTML is an open system, and its source can be seen and analyzed by anyone.

This is exactly why WordPress renews for me that sense of wonder almost on a daily basis, and a praise should go to WordPress itself, having created a friendly community that carries on the liberties proclaimed by the GPL.

Problems and solutions aren’t solely yours, or the plugin author’s, but they’re everybody’s territory, and everyone has the possibility to add their own little brick to the wall, actively contributing to something greater.

The other day, I was thinking that we shouldn’t take this for granted. Sadly, many companies out there still defy the logic depicted above.

Some say that Open Source is a true cultural shift, even the cultural shift of our time. What I say from my late-to-the-party perspective, is that ultimately you get what you give.

I’m starting to realize now that there’s much more to get if you share your knowledge with others.

Customizing the “Enter title here” placeholder text

Today I’ve put a new item in my ideal category of “WordPress things I didn’t even know existed”, which is the ability to edit the “Enter title here” placeholder text when creating a new post or a new item in a Custom Post Type.

While there is no way of customizing such text at the moment of the creation of the Custom Post Type, there’s a neat filter that you can use to alter it, depending on the type of the post you’re creating.

It could go something like this:

add_filter( 'enter_title_here', function( $title, $post ) {
	if ( $post->post_type === 'your-post-type' ) {
		$title = __( 'Enter Company Name' );
	}

	return $title;
}, 10, 2 );

Pretty easy, right? This is also a cool way of avoiding calling “Title” what in fact could be a “Company Name” or a “Testimonial Name”.

How to split a Gruntfile into multiple files

Task runners such as Grunt or Gulp can immensely speed up development, while also increasing the reliability of the code you’re writing.

The problem is that their configuration files tend to grow easily, even for small projects, and they can become hard to maintain pretty quickly.

The ones we’ve used back at my company so far follow the same path, so we thought about finding a way to split them. Turns out, there is a way of doing so, and a quick search on Google returned more than one method.

I’ve created a public Gist that summarizes what I’ve found.

Going into a little more detail, here’s what I discovered:

  • The load-grunt-tasks module allows you to automatically load the tasks you need, without having to call grunt.loadNpmTasks for each one of them; just add the tasks to the package.json file, and you’re good.
  • The loadConfig function (I got the idea from Thomas Boyt) takes care of reading each configuration file placed in a specific folder, such as tasks/options. Each file must be named after the task it declares the configuration for (uglify.js, for example). After reading each file, the only thing left to do is extend the main configuration object:
    grunt.util._.extend( config, loadConfig( "./tasks/options/" ) );
  • Grunt can already do the same for task definitions with the grunt.loadTasks( "tasks" ); instruction, which opens the tasks folder and looks for files containing task definitions.

So, in conclusion, just like I’m not looking back after having discovered what task runners can do for our projects, I doubt I’ll ever follow the single Gruntfile.js approach again.