
Auto-Save User’s Input In Your Forms With HTML5 and Sisyphus.js


Editor’s note: This article is the third in our new series that introduces new, useful and freely available tools and techniques, developed and released by active members of the Web design community. The first article covered PrefixFree; the second introduced Foundation, a responsive framework that helps you build prototypes and production code. This time, we’re presenting Sisyphus.js, a library developed by Alexander Kaupanin to provide Gmail-like client-side drafts and a bit more.

What Problem Needs Solving?

Have you ever been filling out a long form online or writing an eloquent and spirited comment when suddenly the browser crashes? Or perhaps you closed the browser tab accidentally, or your Internet connection cuts off, or the electricity goes down (and, being ever obedient to Murphy’s Law, you had no backup power supply). If not, then you’re lucky. But no one is protected from such minor catastrophes.

(Image: Kristian Bjornard)

Imagine the storm of emotions felt by a user who had to add just a bit more information before submitting a form and then loses all data. Horrible, huh? Now, if only there was a way to recover that data, rather than undertake a Sisyphean pursuit.

Existing Solutions

One common solution is to write one’s comments in a local document, saving the file periodically, and then copying and pasting the text into the form once it’s complete. Some forms also allow you to save your draft by clicking a button, but not all forms have this feature, and it’s not the most convenient solution. The product that does this best is Gmail, with its auto-save feature for drafts: just type away, and all of the content is stored automatically, without you even needing to press a button.

After releasing Sisyphus.js, I learned of Lazarus, an extension for Firefox and Chrome that helps to recover form data. But browser extensions lead to an even bigger problem: distribution. Many users don’t have a clue what a browser extension is, let alone how to install one, which makes this approach inadequate on a large scale.

The people with a direct line to users are Web developers themselves. So, addressing the problem of user input at the stage of development seems more practical than expecting users to add to their skyscraper of extensions.

A Solution: Sisyphus.js

Implementing Gmail-like auto-saving of drafts is not straightforward at all. I wanted the solution to be simple and easy to use, which would rule out the use of server-side magic.

The result is an unassuming script that stores form data to the local storage of the user’s browser and restores it when the user reloads or reopens the page or opens the page in a new tab. The data is cleared from local storage when the user submits or resets the form.
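The underlying idea can be sketched in a few lines of plain JavaScript. This is a simplified illustration, not Sisyphus.js’s actual implementation or API; the storage shim stands in for window.localStorage so the snippet runs outside a browser, and all names are illustrative.

```javascript
// Tiny in-memory stand-in for window.localStorage, so the sketch is
// self-contained outside a browser.
const storage = (() => {
  const data = {};
  return {
    setItem: (k, v) => { data[k] = String(v); },
    getItem: (k) => (k in data ? data[k] : null),
    removeItem: (k) => { delete data[k]; },
  };
})();

// Save a field's value under a key derived from the form and field names.
function saveField(formId, fieldName, value) {
  storage.setItem(`${formId}.${fieldName}`, value);
}

// Restore a previously saved value (null if nothing was saved).
function restoreField(formId, fieldName) {
  return storage.getItem(`${formId}.${fieldName}`);
}

// Clear saved drafts once the form is submitted or reset.
function releaseForm(formId, fieldNames) {
  fieldNames.forEach((name) => storage.removeItem(`${formId}.${name}`));
}

// Simulated session: the user types, the page reloads, the draft survives.
saveField('form1', 'comment', 'An eloquent and spirited comment');
console.log(restoreField('form1', 'comment')); // the draft is still there
releaseForm('form1', ['comment']);
console.log(restoreField('form1', 'comment')); // null after submit/reset
```

In the real library, saving is wired to input events (or a timer), and restoring happens automatically when the page loads.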

How to Use It

Implementing Sisyphus.js is pretty simple. Just select which forms you’d like to protect:

$('#form1, #form2').sisyphus();

Or protect all forms on the page:

$('form').sisyphus();

The following values are the defaults but are customizable:

{
    customKeyPrefix: '',
    timeout: 0,
    onSave: function() {},
    onRestore: function() {},
    onRelease: function() {}
}

Let’s break these options down:

  • customKeyPrefix
    A prefix added to the local storage keys under which the form fields’ values are stored.
  • timeout
    The interval, in seconds, after which to save data. If set to 0, it will save every time a field is updated.
  • onSave
    A function that fires every time data is saved to local storage.
  • onRestore
    A function that fires when a form’s data is restored from local storage. Unlike onSave, it applies to the whole form, not to individual fields.
  • onRelease
    A function that fires when local storage is cleared of stored data.

Even after Sisyphus.js has been implemented in a form, you can apply it to any other form; the script won’t create redundant instances, and it will use the same options. For example:

// Save form1 data every 5 seconds
$('#form1').sisyphus( {timeout: 5 } );

…

// If you want to protect second form, too
$('#form2').sisyphus( {timeout: 10} );

// Now the data in both forms will be saved every 10 seconds

Of course, you can change options on the fly:

var sisyphus = $('#form1').sisyphus();

…

// If you decide that saving by timeout would be better
$.sisyphus().setOptions( {timeout: 15} );

…

// Or
sisyphus.setOptions( {timeout: 15} );

Usage Details

Requirements: Sisyphus.js requires jQuery version 1.2 or higher.

Browser support:

  • Chrome 4+,
  • Firefox 3.5+,
  • Opera 10.5+,
  • Safari 4+,
  • IE 8+,
  • It also works on Android 2.2 (both the native browser and Dolphin HD). Other mobile platforms have not been tested.

Download the script: Sisyphus.js and the demo are hosted on GitHub; the minified version is about 3.5 KB. A road map and issue tracker are also available.

(al)


© Alexander Kaupanin for Smashing Magazine, 2011.


Freebie: Festive Christmas Icon Pack (20 .EPS Icons)

The year is slowly coming to an end, and we’re glad to present a festive icon set to inspire you in your designs. In this post we present a minimalist collection of 20 free festive vector (.EPS) icons created by offset media. The pack includes color and grayscale versions of mostly Christmas-related icons, such as the gingerbread man, nutcracker, snowman and the well-known fir tree.

Feel free to also have a look at some of our previous Smashing Christmas icon sets: Free Smashing Christmas Icon Set by Icon Eden (2009) and Free Smashing Christmas Icon Set by SoftFacade (2008).

Download the Icon Set for Free!

The pack is completely free to use in personal and commercial projects without any restrictions. Please link to this article if you want to spread the word.

Christmas Icons

Behind the Design

As always, here are some insights from the designer:

“Like any design studio, here at offset media, we prefer to create our own festive greeting cards to give to clients. This year I had a very clear vision of the overall look I wanted, a single white festive character on a solid red background.

Knowing the look, but unsure of what character to use, I ended up creating a bunch of icons to choose from. Once we were happy with the card design (we chose the snowman), we had all these unused icons lying about, and being someone who does not like things going to waste, I decided to share them with others who could possibly make use of them!”

 —  George Neocleous (aka GeoNeo) is a full-time designer and illustrator who works at London-based design studio offset media and blogs (sporadically) at Geoneo’s Blog.

Thank you, George. We appreciate your work and your good intentions!

(il) (vf)


© Smashing Editorial for Smashing Magazine, 2011.


Integrating Amazon S3 With WordPress

Computing is full of buzzwords, “cloud computing” being the latest. But unlike most trends that fizzle out after the initial surge, cloud computing is here to stay. This article goes over Amazon’s S3 cloud storage service and guides you through implementing a WordPress plugin that backs up your WordPress database to Amazon’s S3 cloud. Note that this is not a tutorial on creating a WordPress plugin from scratch, so some familiarity with plugin development is assumed.

The reason for using Amazon S3 to store important data follows from the “3-2-1” backup rule, coined by Peter Krogh. According to the 3-2-1 rule, you would keep three copies of any critical data: the original data, a backup copy on removable media, and a second backup at an off-site location (in our case, Amazon’s S3 cloud).

Cloud Computing, Concisely

Cloud computing is an umbrella term for any data or software hosted outside of your local system. Cloud computing is categorized into three main service types: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

  • Infrastructure as a service
    IaaS provides virtual storage, virtual machines and other hardware resources that clients can use on a pay-per-use basis. Amazon S3, Amazon EC2 and RackSpace Cloud are examples of IaaS.
  • Platform as a service
    PaaS provides virtual machines, application programming interfaces, frameworks and operating systems that clients can deploy for their own applications on the Web. Force.com, Google AppEngine and Windows Azure are examples of PaaS.
  • Software as a service
    Perhaps the most common type of cloud service is SaaS. Most people use services of this type daily. SaaS provides a complete application operating environment, which the user accesses through a browser rather than a locally installed application. SalesForce.com, Gmail, Google Apps and Basecamp are some examples of SaaS.

For all of the service types listed above, the service provider is responsible for managing the cloud system on behalf of the user. The user is spared the tedium of having to manage the infrastructure required to operate a particular service.

Amazon S3 In A Nutshell

Amazon Web Services (AWS) is a bouquet of Web services offered by Amazon that together make up a cloud computing platform. The most essential and best known of these services are Amazon EC2 and Amazon S3. AWS also includes CloudFront, Simple Queue Service, SimpleDB and Elastic Block Store. In this article, we will focus exclusively on Amazon S3.

Amazon S3 is a cloud-based data-storage infrastructure that is accessible to the user programmatically via a Web service API (either SOAP or REST). Using the API, the user can store various kinds of data in the S3 cloud and retrieve it from anywhere on the Web at any time. But S3 is nothing like the file system you use on your computer. A lot of people think of S3 as a remote file system, containing a hierarchy of files and directories hosted by Amazon. Nothing could be further from the truth.

Amazon S3 is a flat-namespace storage system, devoid of any hierarchy whatsoever. Each storage container in S3 is called a “bucket,” and each bucket serves the same function as that of a directory in a normal file system. However, there is no hierarchy within a bucket (that is, you cannot create a bucket within a bucket). Each bucket allows you to store various kinds of data, ranging in size from 1 B to a whopping 5 TB (terabytes), although the largest object that can be uploaded in a single PUT request is 5 GB. Obviously, I’ve not experimented with such enormous files.

A file stored in a bucket is referred to as an object. An object is the basic unit of stored data on S3. Objects consist of data and meta data. The meta data is a set of name-value pairs that describe the object. Meta data is optional but often adds immense value, whether it’s the default meta data added by S3 (such as the date last modified) or standard HTTP meta data such as Content-Type.

So, what kinds of objects can you store on S3? Any kind you like. It could be a simple text file, a style sheet, programming source code, or a binary file such as an image, video or ZIP file. Each S3 object has its own URL, which you can use to access the object in a browser (if appropriate permissions are set — more on this later).

You can write the URL in two formats, which look something like this:
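The two formats are the path-style URL and the virtual-hosted-style URL. For a bucket named codediesel they look like this (the object name photo.jpg is just a placeholder for illustration):

```
http://s3.amazonaws.com/codediesel/photo.jpg
http://codediesel.s3.amazonaws.com/photo.jpg
```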

The bucket’s name here is deliberately simple, codediesel. It can be more complex, reflecting the structure of your application, like codediesel.wordpress.backup or codediesel.assets.images.

Every S3 object has a unique URL, formed by concatenating the following components:

  1. Protocol (http:// or https://),
  2. S3 end point (s3.amazonaws.com),
  3. Bucket’s name,
  4. Object key, starting with /.

In order to be able to identify buckets, the S3 system requires that you assign a name to each bucket, which must be unique across the S3 bucket namespace. So, if a user has named one of their buckets company-docs, you cannot create a bucket with that name anywhere in the S3 namespace. Object names in a bucket, however, must be unique only to that bucket; so, two different buckets can have objects with the same name. Also, you can describe objects stored in buckets with additional information using meta data.

Bucket names must comply with the following requirements:

  • May contain lowercase letters, numbers, periods (.), underscores (_) and hyphens (-);
  • Must begin with a number or letter;
  • Must be between 3 and 255 characters long;
  • May not be formatted as an IP address (e.g. 192.168.5.4).
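As a quick illustration, the rules above can be encoded in a small validation helper. This is a hypothetical function sketched for this article, not part of the AWS SDK or CloudFusion:

```php
<?php

/*
 * Hypothetical helper, not part of CloudFusion or the AWS SDK:
 * a rough check of the bucket-naming rules listed above.
 */
function isValidBucketName($name)
{
    /* 3 to 255 characters drawn from lowercase letters, digits,
       '.', '_' and '-'; the first character must be a letter or number */
    if (!preg_match('/^[a-z0-9][a-z0-9._-]{2,254}$/', $name)) {
        return false;
    }

    /* Must not be formatted as an IP address */
    if (preg_match('/^\d{1,3}(\.\d{1,3}){3}$/', $name)) {
        return false;
    }

    return true;
}

var_dump(isValidBucketName('codediesel.wordpress.backup')); // bool(true)
var_dump(isValidBucketName('My_Bucket'));                   // bool(false)
var_dump(isValidBucketName('192.168.5.4'));                 // bool(false)
```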

In short, Amazon S3 provides a highly reliable cloud-based storage infrastructure, accessible via a SOAP or REST API. Some common usage scenarios for S3 are:

  • Backup and storage
    Provide data backup and storage services.
  • Host applications
    Provide services that deploy, install and manage Web applications.
  • Host media
    Build a redundant, scalable and highly available infrastructure that hosts video, photo or music uploads and downloads.
  • Deliver software
    Host your software applications that customers can download.

Amazon S3’s Pricing Model

Amazon S3 is a paid service; you need to attach a credit card to your Amazon account when signing up. But it is surprisingly low priced, and you pay only for what you use; if you use no resources in your S3 account, you pay nothing. Also, as part of the AWS “Free Usage Tier,” upon signing up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, and 15 GB of data transfer out each month free for one year.

So, how much do you pay after the free period? As a rough estimate, if you stored 5 GB of data per month, with data transfers of 15 GB and 40,000 GET and PUT requests a month, the cost would be around $2.60 per month. That’s lower than the cost of a burger — inexpensive by any standard. The prices may change, so use the calculator on the S3 website.

Your S3 usage is charged according to three main parameters:

  • The total amount of data stored,
  • The total amount of data transferred in and out of S3 per month,
  • The number of requests made to S3 per month.

Your S3 storage charges are calculated on a unit known as a gigabyte-month. If you store 1 GB for one month, you’ll be charged for one gigabyte-month, which is $0.14.
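Since charges are prorated, the gigabyte-month arithmetic is straightforward. A quick sketch using the 2011 rate of $0.14 per gigabyte-month quoted above (current rates may differ; check the pricing page):

```php
<?php

/* Storage charges are prorated: 20 GB stored for half a month
   consumes 10 gigabyte-months. Rate taken from the 2011 pricing above. */
$ratePerGbMonth  = 0.14;
$gigabytes       = 20;
$fractionOfMonth = 0.5;

$gbMonths = $gigabytes * $fractionOfMonth;
$cost     = $gbMonths * $ratePerGbMonth;

printf("%.1f GB-months => $%.2f\n", $gbMonths, $cost); // 10.0 GB-months => $1.40
```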

Your data transfer charges are based on the amount of data uploaded and downloaded from S3. Data transferred out of S3 is charged on a sliding scale, starting at $0.12 per gigabyte and decreasing based on volume, reaching $0.050 per gigabyte for all outgoing data transfer in excess of 350 terabytes per month. Note that there is no charge for data transferred within an Amazon S3 “region” via a COPY request, and no charge for data transferred between Amazon EC2 and Amazon S3 within the same region or for data transferred between the Amazon EC2 Northern Virginia region and the Amazon S3 US standard region. To avoid surprises, always check the latest pricing policies on Amazon.

Introduction To The Amazon S3 API And CloudFusion

Now with the theory behind us, let’s get to the fun part: writing code. But before that, you will need to register with S3 and create an AWS account. If you don’t already have one, you’ll be prompted to create one when you sign up for Amazon S3.

Before moving on to the coding part, let’s get acquainted with some visual tools that we can use to work with Amazon S3. Various visual and command-line tools are available to help you manage your S3 account and the data in it. Because the visual tools are easy to work with and user-friendly, we will focus on them in this article. I prefer working with the AWS Management Console for security reasons.

AWS Management Console

The Management Console is a part of the AWS. Because it is a part of your AWS account, no configuration is necessary. Once you’ve logged in, you have full access to all of your S3 data and other AWS services. You can create new buckets, create objects, apply security policies, copy objects to different buckets, and perform a multitude of other functions.

S3Fox Organizer

The other popular tool is S3Fox Organizer. S3Fox Organizer is a Firefox extension that enables you to upload and download files to and from your Amazon S3 account. The interface, which opens in a Firefox browser tab, looks very much like a regular FTP client with dual panes. It displays files on your PC on the left, files on S3 on the right, and status messages and information in a panel at the bottom.

Onto The Coding

As stated earlier, AWS is Amazon’s Web service infrastructure that encompasses various cloud services, including S3, EC2, SimpleDB and CloudFront. Integrating these varied services can be a daunting task. Thankfully, we have at our disposal an SDK library in the form of CloudFusion, which enables us to work with AWS effortlessly. CloudFusion is now the official AWS SDK for PHP, and it encompasses most of Amazon’s cloud products: S3, EC2, SimpleDB, CloudFront and many more. For this post, I downloaded the ZIP version of the CloudFusion SDK, but the library is also available as a PEAR package. So, go ahead: download the latest version from the official website, and extract the ZIP to your working directory or to your PHP include path. In the extracted directory, you will find the config-sample.inc.php file, which you should rename to config.inc.php. You will need to make some changes to the file to reflect your AWS credentials.

In the config file, locate the following lines:

define('AWS_KEY', '');
define('AWS_SECRET_KEY', '');

Modify the lines to mirror your Amazon AWS security credentials. You can find the credentials in your Amazon AWS account section, as shown below.

Get the keys, and fill them in on the following lines:

define('AWS_KEY', 'your_access_key_id');
define('AWS_SECRET_KEY', 'your_secret_access_key');

You can retrieve your access key and secret key from your Amazon account page:

With all of the basic requirements in place, let’s create our first bucket on Amazon S3, with a name of your choice. The following example shows a bucket by the name of com.smashingmagazine.images. (Of course, by the time you read this, the name may already be taken.) Choose a structure for your bucket’s name that is relevant to your work. For each bucket, you can control access to the bucket, view access logs for the bucket and its objects, and set the geographical region where Amazon S3 will store the bucket and its contents.

/* Include the CloudFusion SDK class */
require_once('sdk-1.4.4/sdk.class.php');

/* Our bucket name */
$bucket = 'com.smashingmagazine.images';

/* Initialize the class */
$s3 = new AmazonS3();

/* Create a new bucket */
$resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);

/* Check if the bucket was successfully created */
if ($resource->isOK()) {
    print("'${bucket}' bucket created\n");
} else {
    print("Error creating bucket '${bucket}'\n");
}

Let’s go over each line in the example above. First, we included the CloudFusion SDK class in our file. You’ll need to adjust the path depending on where you’ve stored the SDK files.

require_once( 'sdk-1.4.4/sdk.class.php');

Next, we instantiated the Amazon S3 class:

$s3 = new AmazonS3();

In the next step, we created the actual bucket; in this case, com.smashingmagazine.images. Again, your bucket’s name must be unique across all existing bucket names in Amazon S3. One way to ensure this is to prefix a word with your company’s name or domain, as we’ve done here. But this does not guarantee that the name will be available. Nothing prevents anyone from creating a bucket named com.microsoft.apps or com.ibm.images, so choose wisely.

$bucket = 'com.smashingmagazine.images’;
$resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);

To reiterate, bucket names must comply with the following requirements:

  • May contain lowercase letters, numbers, periods (.), underscores (_) and hyphens (-);
  • Must start with a number or letter;
  • Must be between 3 and 255 characters long;
  • May not be formatted as an IP address (e.g. 192.168.5.4).

Also, you’ll need to select a geographical location for your bucket. A bucket can be stored in one of several regions. Reasons for choosing one region over another might be to optimize for latency, to minimize costs, or to satisfy regulatory requirements. Many organizations have privacy policies and regulations on where to store data, so consider this when selecting a location. Objects never leave the region they are stored in unless you explicitly transfer them to another region. That is, if your data is stored on servers located in the US, it will never be copied or transferred by Amazon to servers outside of this region; you’ll need to do that manually using the API or AWS tools. In the example above, we have chosen the REGION_US_E1 region.

Here are the permitted values for regions:

  • AmazonS3::REGION_US_E1
  • AmazonS3::REGION_US_W1
  • AmazonS3::REGION_EU_W1
  • AmazonS3::REGION_APAC_SE1
  • AmazonS3::REGION_APAC_NE1

Finally, we checked whether the bucket was successfully created:

if ($resource->isOK()) {
    print("'${bucket}' bucket created\n");
} else {
    print("Error creating bucket '${bucket}'\n");
}

Now, let’s see how to get a list of the buckets we’ve created on S3. Before proceeding, create a few more buckets to your liking; once you have a few in your account, it is time to list them.

/* Include the CloudFusion SDK class */
require_once ('sdk-1.4.4/sdk.class.php');

/* Our bucket name */
$bucket = 'com.smashingmagazine.images';

/* Initialize the class */
$s3 = new AmazonS3();

/* Get a list of buckets */
$buckets = $s3->get_bucket_list();

if($buckets)  {
    foreach ($buckets as $b) {
        echo $b . "\n";
    }
}

The only new part in the code above is the following line, which gets an array of bucket names:

$buckets = $s3->get_bucket_list();

Finally, we printed out all of our buckets’ names.

if($buckets)  {
    foreach ($buckets as $b) {
        echo $b . "\n";
    }
}

This concludes our overview of creating and listing buckets in our S3 account. We also learned about S3Fox Organizer and the AWS console tools for working with your S3 account.

Uploading Data To Amazon S3

Now that we’ve learned how to create and list buckets in S3, let’s figure out how to put objects into buckets. This is a little complex, and we have a variety of options to choose from. The main method for doing this is create_object. The method takes the following format:

create_object ( $bucket, $filename, [ $opt = null ] )

The first parameter is the name of the bucket in which the object will be stored. The second parameter is the name by which the file will be stored on S3. Using only these two parameters is enough to create an empty object with the given file name. For example, the following code would create an empty object named config-empty.inc in the com.magazine.resources bucket:

$s3 = new AmazonS3();
$bucket = 'com.magazine.resources';
$response = $s3->create_object($bucket, 'config-empty.inc');

// Success?
var_dump($response->isOK());

Once the object is created, we can access it using a URL. The URL for the object above would be:

https://s3.amazonaws.com/com.magazine.resources/config-empty.inc

Of course, if you tried to access the URL from a browser, you would be greeted with an “Access denied” message, because objects stored on S3 are set to private by default, viewable only by the owner. You have to explicitly make an object public (more on that later).

To add some content to the object at the time of creation, we can use the following code. This would add the text “Hello World!” to the config-empty.inc file.

$response = $s3->create_object($bucket, 'config-empty.inc',
    array(
        'body' => 'Hello World!'
));

As a complete example, the following code would create an object with the name simple.txt, along with some content, and save it in the given bucket. An object may also optionally contain meta data that describes that object.

/* Initialize the class */
$s3 = new AmazonS3();

/* Our bucket name */
$bucket = 'com.magazine.resources';

$response = $s3->create_object($bucket, 'simple.txt',
    array(
        'body' => 'Hello World!'
));

if ($response->isOK())
{
    return true;
}

You can also upload a file, rather than just a string, as shown below. Although many options are displayed here, most have a default value and may be omitted. More information on the various options can be found in the “AWS SDK for PHP 1.4.7.”

require_once('sdk-1.4.4/sdk.class.php');

$s3 = new AmazonS3();
$bucket = 'com.smashingmagazine.images';

$response = $s3->create_object($bucket, 'source.php',
    array(
    'fileUpload' => 'test.php',
    'acl' => AmazonS3::ACL_PRIVATE,
    'contentType' => 'text/plain',
    'storage' => AmazonS3::STORAGE_REDUCED,
    'headers' => array( // raw headers
        'Cache-Control' => 'max-age',
        'Content-Encoding' => 'text/plain',
        'Content-Language' => 'en-US',
        'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
    )
));

// Success?
var_dump($response->isOK());

Details on the various options will be explained in the coming sections. For now, take on faith that the above code will correctly upload a file to the S3 server.

Writing Our Amazon S3 WordPress Plugin

With some background on Amazon S3 behind us, it is time to put our learning into practice. We are ready to build a WordPress plugin that will automatically back up our WordPress database to the S3 server and restore it when needed.

To keep the article focused and at a reasonable length, we’ll assume that you’re familiar with WordPress plugin development. If you are a little sketchy on the fundamentals, read “How to Create a WordPress Plugin” to get on track quickly.

The Plugin’s Framework

We’ll first create a skeleton and then gradually fill in the details. To create a plugin, navigate to the wp-content/plugins folder, and create a new folder named s3-backup. In the new folder, create a file named s3-backup.php. Open the file in the editor of your choice, and paste the following header information, which will describe the plugin for WordPress:

/*
Plugin Name: Amazon S3 Backup
Plugin URI: http://cloud-computing-rocks.com
Description: Plugin to back up WordPress database to Amazon S3
Version: 1.0
Author: Mr. Sameer
Author URI: http://www.codediesel.com
License: GPL2
*/

Once that’s done, go to the plugin’s page in the admin area, and activate the plugin.

Now that we’ve successfully installed a bare-bones WordPress plugin, let’s add the meat and create a complete working system. Before we start writing the code, we should know what the admin page for the plugin will ultimately look like and what tasks the plugin will perform. This will guide us in writing the code. Here is the main settings page for our plugin:

The interface is fairly simple. The primary task of the plugin will be to back up the current WordPress database to an Amazon S3 bucket and to restore the database from the bucket. The settings page will also have a function for naming the bucket in which the backup will be stored. Also, we can specify whether the backup will be available to the public or accessible only to you.

Below is a complete outline of the plugin’s code. We will elaborate on each section in turn.

/*
Plugin Name: Amazon S3 Backup
Plugin URI: http://cloud-computing-rocks.com
Description: Plugin to back up WordPress database to Amazon S3
Version: 1.0
Author: Mr. Sameer
Author URI: http://www.codediesel.com
License: GPL2
*/

$plugin_path = WP_PLUGIN_DIR . "/" . dirname(plugin_basename(__FILE__));

/* CloudFusion SDK */
require_once($plugin_path . '/sdk-1.4.4/sdk.class.php');

/* WordPress ZIP support library */
require_once(ABSPATH . '/wp-admin/includes/class-pclzip.php');

add_action('admin_menu', 'add_settings_page');

/* Save or Restore Database backup */
if(isset($_POST['aws-s3-backup'])) {
…
}

/* Generic Message display */
function showMessage($message, $errormsg = false) {
…
}   

/* Back up WordPress database to an Amazon S3 bucket */
function backup_to_AmazonS3() {
…
}

/* Restore WordPress backup from an Amazon S3 bucket */
function restore_from_AmazonS3() {
…
}

function add_settings_page() {
…
}

function draw_settings_page() {
…
}

Here is the directory structure that our plugin will use:

plugins (WordPress plugin directory)
---s3-backup (our plugin directory)
-------s3backup (restored backup will be stored in this directory)
-------sdk-1.4.4 (CloudFusion SDK directory)
-------s3-backup.php (our plugin source code)

Let’s start coding the plugin. First, we’ll initialize some variables for paths and include the CloudFusion SDK. A WordPress database can get large, so to conserve space and bandwidth, the plugin will need to compress the database before uploading it to the S3 server. To do this, we will use the class-pclzip.php ZIP compression support library, which is built into WordPress. Finally, we’ll hook the settings page to the admin menu.

$plugin_path = WP_PLUGIN_DIR . "/" . dirname(plugin_basename(__FILE__));

/* CloudFusion SDK */
require_once($plugin_path . '/sdk-1.4.4/sdk.class.php');

/* WordPress ZIP support library */
require_once(ABSPATH . '/wp-admin/includes/class-pclzip.php');

/* Create the admin settings page for our plugin */
add_action('admin_menu', 'add_settings_page');

Every WordPress plugin must have its own settings page. Ours is a simple one, with a few buttons and fields. The following is the code for it, which will handle the mundane work of saving the bucket’s name, displaying the buttons, etc.

function draw_settings_page() {
    ?>
    <div class="wrap">
        <h2><?php echo('WordPress Database Amazon S3 Backup') ?></h2>
        <form method="post" action="options.php">
            <input type="hidden" name="action" value="update" />
            <input type="hidden" name="page_options" value="aws-s3-access-public, aws-s3-access-bucket" />
            <?php
                wp_nonce_field('update-options');
                $access_options = get_option('aws-s3-access-public');
            ?>
            <p>
                <?php echo('Bucket Name:') ?>
                <input type="text" name="aws-s3-access-bucket" size="64" value="<?php echo get_option('aws-s3-access-bucket'); ?>" />
            </p>
            <p>
                <?php echo('Public:') ?>
                <input type="checkbox" name="aws-s3-access-public" <?php checked(1 == $access_options); ?> value="1" />
            </p>
            <p class="submit">
                <input type="submit" class="button-primary" name="Submit" value="<?php echo('Save Changes') ?>" />
            </p>
        </form>
        <hr />
        <form method="post" action="">
            <p class="submit">
                <input type="submit" name="aws-s3-backup" value="<?php echo('Backup Database') ?>" />
                <input type="submit" name="aws-s3-restore" value="<?php echo('Restore Database') ?>" />
            </p>
        </form>
    </div>
    <?php
}

Setting up the base framework is essential if the plugin is to work correctly. So, double-check your work before proceeding.

Database Upload

Next is the main part of the plugin, its raison d’être: the function for backing up the database to the S3 bucket.

/* Back up WordPress database to an Amazon S3 bucket */
function backup_to_AmazonS3()
{
    global $wpdb, $plugin_path;

    /* Backup file name */
    $backup_zip_file = 'aws-s3-database-backup.zip';

    /* Temporary directory and file name where the backup file will be stored */
    $backup_file = $plugin_path . "/s3backup/aws-s3-database-backup.sql";

    /* Complete path to the compressed backup file */
    $backup_compressed = $plugin_path . "/s3backup/" . $backup_zip_file;

    $tables = $wpdb->get_col("SHOW TABLES LIKE '" . $wpdb->prefix . "%'");
    $result = shell_exec('mysqldump --single-transaction -h ' .
                         DB_HOST . ' -u ' . DB_USER . ' --password="' .
                         DB_PASSWORD . '" ' .
                         DB_NAME . ' ' . implode(' ', $tables) . ' > ' .
                         $backup_file);

    $backups[] = $backup_file;

    /* Create a ZIP file of the SQL backup */
    $zip = new PclZip($backup_compressed);
    $zip->create($backups);

    /* Connect to Amazon S3 to upload the ZIP */
    $s3 = new AmazonS3();
    $bucket = get_option('aws-s3-access-bucket');

    /* Check if a bucket name is specified */
    if(empty($bucket)) {
        showMessage("No Bucket specified!", true);
        return;
    }

    /* Set backup public options */
    $access_options = get_option('aws-s3-access-public');

    if($access_options) {
        $access = AmazonS3::ACL_PUBLIC;
    } else {
        $access = AmazonS3::ACL_PRIVATE;
    }

    /* Upload the database itself */
    $response = $s3->create_object($bucket, $backup_zip_file,
                array(
                'fileUpload' => $backup_compressed,
                'acl' => $access,
                'contentType' => 'application/zip',
                'encryption' => 'AES256',
                'storage' => AmazonS3::STORAGE_REDUCED,
                'headers' => array( // raw headers
                    'Cache-Control' => 'max-age',
                    'Content-Encoding' => 'application/zip',
                    'Content-Language' => 'en-US',
                    'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
                )
            ));

    if($response->isOK()) {
        unlink($backup_compressed);
        unlink($backup_file);
        showMessage("Database successfully backed up to Amazon S3.");
    } else {
        showMessage("Error connecting to Amazon S3", true);
    }
}

Most of the code is self-explanatory, but a few sections warrant explanation. Because we want to back up the complete WordPress database, we need to obtain a MySQL dump of it. There are multiple ways to do this: by using MySQL queries within WordPress to save all tables and rows of the database, or by dropping down to the shell and using mysqldump. We will use the second method. The code below uses the shell_exec function to run the mysqldump command and grab the WordPress database dump, which is then saved to the aws-s3-database-backup.sql file. Note that shell_exec is disabled on some shared hosts; if it is, the dump will fail silently, because the plugin does not check the command’s result.

$tables = $wpdb->get_col("SHOW TABLES LIKE '" . $wpdb->prefix . "%'");
$result = shell_exec('mysqldump --single-transaction -h ' .
                         DB_HOST . ' -u ' . DB_USER .
                         ' --password="' . DB_PASSWORD . '" ' .
                         DB_NAME . ' ' . implode(' ', $tables) .
                         ' > ' . $backup_file);

$backups[] = $backup_file;

The SQL dump will obviously be big on most installations, so we’ll need to compress it before uploading it to S3 to conserve space and bandwidth. We’ll use WordPress’ built-in ZIP functions for the task. The PclZip class is stored in the /wp-admin/includes/class-pclzip.php file, which we have included at the start of the plugin. The aws-s3-database-backup.zip file is the final ZIP file that will be uploaded to the S3 bucket. The following lines will create the required ZIP file.

/* Create a ZIP file of the SQL backup */
    $zip = new PclZip($backup_compressed);
    $zip->create($backups);

The PclZip constructor takes a file name as its input parameter; in this case, aws-s3-database-backup.zip. To the create method we pass an array of the files we want to compress; here we have only one, aws-s3-database-backup.sql.

Now that we’ve taken care of the database, let’s move on to the security. As mentioned in the introduction, objects stored on S3 can be set as private (viewable only by the owner) or public (viewable by everyone). We set this option using the following code.

/* Set backup public options */
    $access_options = get_option('aws-s3-access-public');

    if($access_options) {
        $access = AmazonS3::ACL_PUBLIC;
    } else {
        $access = AmazonS3::ACL_PRIVATE;
    }

We have listed only two options for access (AmazonS3::ACL_PUBLIC and AmazonS3::ACL_PRIVATE), but there are several more, listed below; you can find the details in the Amazon SDK documentation.

  • AmazonS3::ACL_PRIVATE
  • AmazonS3::ACL_PUBLIC
  • AmazonS3::ACL_OPEN
  • AmazonS3::ACL_AUTH_READ
  • AmazonS3::ACL_OWNER_READ
  • AmazonS3::ACL_OWNER_FULL_CONTROL
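An object’s ACL is also not fixed forever at upload time. Assuming the 1.x CloudFusion SDK’s set_object_acl method (check your SDK version for the exact name), a minimal sketch for flipping an existing backup from public to private might look like this:

```php
/* Sketch: change the access level of an already-uploaded backup.
   Assumes the CloudFusion AmazonS3 class is loaded and credentials
   are configured, as in the rest of this plugin. */
$s3 = new AmazonS3();
$bucket = get_option('aws-s3-access-bucket');

$response = $s3->set_object_acl($bucket, 'aws-s3-database-backup.zip',
                                AmazonS3::ACL_PRIVATE);

if (!$response->isOK()) {
    showMessage("Could not update the backup's access level", true);
}
```

This is handy if you initially uploaded a backup publicly and later decide it should be private.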

Now on to the main code that does the actual work of uploading. This could have been complex, but the CloudFusion SDK makes it easy. We use the create_object method of the S3 class to perform the upload. We got a short glimpse of the method in the last section.

/* Upload the database itself */
$response = $s3->create_object($bucket, $backup_zip_file, array(
            'fileUpload' => $backup_compressed,
            'acl' => $access,
            'contentType' => 'application/zip',
            'encryption' => 'AES256',
            'storage' => AmazonS3::STORAGE_REDUCED,
            'headers' => array( // raw headers
                'Cache-Control' => 'max-age',
                'Content-Encoding' => 'application/zip',
                'Content-Language' => 'en-US',
                'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
            )
        ));

Let’s go over each line in turn. But bear in mind that the method has quite a few options, so refer to the original documentation for the full list.

  • $backup_zip_file
    The name of the object that will be created on S3.
  • 'fileUpload' => $backup_compressed
    The name of the file whose data will be uploaded to the server. In our case, aws-s3-database-backup.zip.
  • 'acl' => $access
    The access type for the object. In our case, either public or private.
  • 'contentType' => 'application/zip'
    The type of content that is being sent in the body. If a file is being uploaded via fileUpload, as in our case, it will attempt to determine the correct MIME type based on the file’s extension. The default value is application/octet-stream.
  • 'encryption' => 'AES256'
    The algorithm to use for encrypting the object. (Allowed values: AES256)
  • 'storage' => AmazonS3::STORAGE_REDUCED
    Specifies whether to use “standard” or “reduced redundancy” storage. Allowed values are AmazonS3::STORAGE_STANDARD and AmazonS3::STORAGE_REDUCED. The default value is STORAGE_STANDARD.
  • 'headers' => array( // raw headers
    'Cache-Control' => 'max-age',
    'Content-Encoding' => 'application/zip',
    'Content-Language' => 'en-US',
    'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
    )

    The standard HTTP headers to send along with the request. These are optional.

Note that the plugin does not provide a function to create a bucket on Amazon S3. You need to use Amazon’s AWS Management Console or S3Fox Organizer to create a bucket before uploading objects to it.
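If you would rather have the plugin create the bucket itself, the CloudFusion SDK does expose methods for this. Here is a hedged sketch, assuming the 1.x SDK’s if_bucket_exists and create_bucket methods; the region constant (AmazonS3::REGION_US_E1 below) may differ across SDK versions:

```php
/* Sketch: create the backup bucket if it does not exist yet.
   Assumes the CloudFusion AmazonS3 class; method names and the
   region constant follow the 1.x SDK and may vary. */
$s3 = new AmazonS3();
$bucket = get_option('aws-s3-access-bucket');

if (!$s3->if_bucket_exists($bucket)) {
    $response = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);
    if (!$response->isOK()) {
        showMessage("Could not create bucket $bucket", true);
        return;
    }
}
```

Remember that bucket names are global across all of S3, so creation can fail if another user already owns the name.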

Database Restore

Merely being able to back up data is insufficient. We also need to be able to restore it when the need arises. In this section, we’ll lay out the code for restoring the database from S3. When we say “restore,” keep in mind that the database’s ZIP file is simply downloaded from S3 to the specified folder in our plugin directory. The actual database on the WordPress server is not changed in any way; you will have to restore the database manually from the downloaded file. We could have equipped the plugin to auto-restore as well, but that would have made the code a lot more complex.

Here is the complete code for the restore function:

/* Restore WordPress backup from an Amazon S3 bucket */
function restore_from_AmazonS3()
{
    global $plugin_path;

    /* Backup file name */
    $backup_zip_file = 'aws-s3-database-backup.zip';

    /* Complete path to the compressed backup file */
    $backup_compressed = $plugin_path . "/s3backup/" . $backup_zip_file;

    $s3 = new AmazonS3();
    $bucket = get_option('aws-s3-access-bucket');

    if(empty($bucket)) {
        showMessage("No Bucket specified!", true);
        return;
    }

    $response = $s3->get_object($bucket, $backup_zip_file, array(
                'fileDownload' => $backup_compressed
                 ));

    if($response->isOK()) {
        showMessage("Database successfully restored from Amazon S3.");
    } else {
        showMessage("Error connecting to Amazon S3", true);
    }

}

As you can see, the code for restoring is much simpler than the code for uploading. The function uses the get_object method of the SDK, the definition of which is as follows:

get_object ( $bucket, $filename, [ $opt = null ] )

The details of the method’s parameters are enumerated below:

  • $bucket
    The name of the bucket where the backup file is stored. The bucket’s name is stored in our WordPress settings variable aws-s3-access-bucket, which we retrieve using the get_option('aws-s3-access-bucket') function.
  • $backup_zip_file
    The file name of the backup object. In our case, aws-s3-database-backup.zip.
  • 'fileDownload' => $backup_compressed
    The file system location to download the file to, or an open file resource. In our case, the s3backup directory in our plugin folder. It must be a server-writable location.

At the end of the function, we check whether the download was successful and inform the user.

In addition to the above functions, there are some miscellaneous support functions. One is for displaying a message to the user:

/* Generic message display */
function showMessage($message, $errormsg = false) {
	if ($errormsg) {
		echo '<div id="message" class="error">';
	} else {
		echo '<div id="message" class="updated">';
	}

	echo "<p><strong>$message</strong></p></div>";
}

Another is to add a hook for the settings page to the admin section.

function add_settings_page() {
    /* 'manage_options' is the capability required to view the
       settings page; numeric user levels are deprecated */
    add_options_page('Amazon S3 Backup', 'Amazon S3 Backup',
                     'manage_options', 's3-backup', 'draw_settings_page');
}

The button commands for backing up and restoring are handled by this simple code:

/* Save or Restore Database backup */
if(isset($_POST['aws-s3-backup'])) {
    backup_to_AmazonS3();
} elseif(isset($_POST['aws-s3-restore'])) {
    restore_from_AmazonS3();
}

This rounds out our tutorial on creating a plugin to integrate Amazon S3 with WordPress. Although we could have added more features to the plugin, the functionality was kept minimal to maintain focus on Amazon S3’s basics, rather than on the calisthenics of plugin development.

In Conclusion

This has been a relatively long article, the aim of which was to acquaint you with Amazon S3 and how to use it in your PHP applications. Together, we’ve developed a WordPress plugin to back up our WordPress database to the S3 cloud and to retrieve it when needed. To reiterate, S3 is a complex service with many features and functions, which we have not covered here due to space considerations. Perhaps in future articles we’ll elaborate on the other features of S3 and add more features to the plugin we developed here.

(al)


© Sameer Borate for Smashing Magazine, 2011.


Content Strategy Within The Design Process





 



 


The first thing to understand about content strategy is that no two people understand it the same way. It’s a relatively new — and extremely broad — discipline with no single definitive definition. A highly informative Knol on content strategy defines it as follows:

Content strategy is an emerging field of practice encompassing every aspect of content, including its design, development, analysis, presentation, measurement, evaluation, production, management, and governance.

This definition is a great place to start. Although the discipline has clearly evolved, this breakdown of its scope makes perfect sense. The aspects of content strategy that matter most to Web designers in this definition are design (obviously!), development, presentation and production. In this article, we’ll concentrate on the relationship between content strategy and design in creating, organizing and displaying Web copy.

As a writer and content strategist myself, I’ve worked with designers in all of these areas and find the creative process highly enriching. I’ve been fortunate enough to work with designers who are quick to challenge ideas that are unclear or unsound, who are brilliant at creating striking visual representations of even the most complex concepts. A lively interplay between design and content is not only fun, but is how spectacular results are achieved. This is why content strategy should matter a great deal to designers.

What Is Content Strategy, And Why Should A Designer Care?

Content strategy is the glue that holds a project together. When content strategy is ambiguous or absent, don’t be surprised if you end up with the Internet equivalent of Ishtar. When content strategy is in place and in its proper place, we’re on our way to producing beautiful and effective results.

Language
Slide from The Language of Interfaces by Des Traynor.

While wrapping one’s head around content strategy might be difficult, the thing that makes it work is very simple: good communication. Sometimes a project moves along like a sports car on a superhighway. Other times, the road is so full of bumps and potholes that it’s a wonder we ever reach our destination. As we explore the relationship between content strategy and design, I’ll detail how I keep the channels of communication open and go over the workflow processes that I’ve used to support that effort. I hope that sharing my experiences (both positive and negative) will help you contribute to and manage projects more effectively and deliver better products to clients.

How To Get Started: The First Step Is The Longest

Project manager: We need a landing page for client X.

Designer: I can’t start the design until I see some content.

Writer: I can’t start writing until I see a design.

You may find this dialogue amusing… until it happens to you! At our firm, we find that the best way to get past such a standoff is to write first. This is because content strategy, at a fundamental level, frames a project for the designer. As a content strategist, my job is to articulate the why, where, who, what and how of the content:

  • Why is it important to convey this message? This speaks to purpose.
  • Where on the website should the message appear? This speaks to context.
  • Who is the audience? This speaks to the precision of the message.
  • What are we trying to say? This speaks to clarity.
  • How do we convey and sequence the information for maximum impact? This speaks to persuasiveness.

Bringing it down to a more detailed level, let’s consider a landing page. A content strategist will determine such things as the following:

  • Audience
    Is the audience sophisticated? Down to earth? College-level? Predominantly male? Female? Etc.
  • Word count
    Some pitches scream for long copy, while others must be stripped to the bare minimum. SEO might factor into the equation as well.
  • Messaging priorities
    What is the most important point to convey? The least important? What needs to be said first (the hook)? What needs to be said just leading up to the call to action?
  • Call to action
    What will the precise wording be? What emotional and intellectual factors will motivate the visitor to click through?

Clear direction on these points not only helps the writer write, but helps the designer with layout, color palettes and image selection. When we start with words, we produce designs that are more reflective of the product’s purpose.

Landing pages are a great place to try this workflow, because in terms of content strategy, they are less complex than many other types of Web pages. A product category page, on the other hand, might have a less obvious purpose or multiple purposes, considerably greater word counts, more (and more involved) messaging points, and a variety of SEO considerations, all of which would affect its design.

Quick Tips for Getting Started

  • Make sure someone is specifically responsible for content strategy. If strategic responsibility is vague, your final product will be, too.
  • Slow down! Everybody, me included, is eager to dive headfirst into a new project. But “ready-aim-fire” is not a winning content strategy. Make sure everyone is on the same page conceptually before cranking out work.
  • If content strategy falls on your shoulders as a designer, cultivate an understanding of the discipline. Resources are listed at the end of this article to help you.
  • Make sure designers and writers understand what their roles are — and are not. There’s no need for writers to tell designers how to design, or for designers to tell writers how to write.

Perfecting The Process: Break Up Those Bottlenecks

Project manager: How are things coming along?

Developer: I’m waiting on design.

Designer: I’m waiting on content.

Writer: I’m waiting on project management.

Web development projects in particular involve a lot of moving parts, with potential bottlenecks everywhere. The graphic below describes our Web development process, with an emphasis on the design and content components. Chances are, whether you are freelancing or at an agency, at least parts of this should look familiar:

Design & Content Process
Link: Larger version (Image credit: Chris Depa, Straight North)

The process is by no means perfect, but it is continually improving. In the next section, we’ll look at the many types of content-design difficulties you might experience.

To help our designers lay out text for wireframes and designs, we utilize content templates based on various word counts. These templates also incorporate best practices for typography and SEO. When the designer drops the template into a wireframe, it looks like this:

Content in wireframe
SEO content template in a wireframe.

The use of content templates not only takes a lot of guesswork out of the designer’s job, but also speeds up client reviews. When clients are able to see what the content will roughly look like in the allotted space, they tend to be more comfortable with the word counts and the placement of text on the page.

Communication can be streamlined using project management software. We use Basecamp, which is a popular system, but many other good ones are available. If you’re a freelancer, getting clients to work on your preferred project management platform can be an uphill battle, to say the least. Still, I encourage you to try; my experience in managing projects via email has been dismal, and many freelance designers I know express the same frustration.

The big advantage of a project management system is that it provides a single place for team members to manage tasks and interact. Internal reviews of design templates is one good example. The project manager can collect feedback from everyone in one place, and each participant can see what others have said and respond to it. Consolidating this information prevents the gaps and miscommunication that can occur when projects are managed through multiple email exchanges. Designers can see all of the feedback in one place — and only one place. This is a big time-saver.

Quick Tips for the Creative Process

  • Make sure someone is specifically responsible for project management.
  • Whether or not your process is sophisticated, get it down in writing and in front of all team members before the project starts. This really helps to align expectations and keep communication flowing.
  • Meet at regular intervals to discuss status and problems. Hold yourself and others accountable.
  • Get approvals along the way, rather than dump the completed project in the client’s lap. Having clients sign off on a few pages of content and one or two templates really helps to align the creative process with client expectations, and it reduces the risk of those massive overhauls at the tail end that demolish budgets and blow deadlines.
  • Writers and designers should discuss issues as quickly, openly and thoroughly as possible.

Conflict Resolution: Why Can’t We All Just Get Along?

Designer: All these words are boring me.

Writer: All these images are confusing me.

Project manager: All these arguments are killing me.

No matter how clear the strategy, no matter how smooth the process, design and content will conflict somewhere along the line in almost every project. In fact, if creative tension is absent, it may well indicate that the project is in serious trouble. Here are the issues I run into on a fairly regular basis, as well as ideas for getting past them.

Making Room for SEO Content

Big chunks of content are bothersome to designers; even as a writer, I worry about high word counts turning off some of our audience. However, when SEO considerations demand a lot of words on a page, there are ways to make everyone happy:

  1. Tabs are a nifty way to hide text.
    Tabs allow you to keep the page tight vertically. Even more importantly, they enable visitors to easily find the information they need — and ignore what they don’t need. Below is a tabbed product area in the Apple Store.
    Apple Tabs
    The Apple Store
  2. Keep SEO content below the fold.
    This is a compromise, because an SEO strategist would prefer optimized content to appear above the fold. However, if a website is to have any hope of converting traffic brought in by SEO, then visitors need to see appealing design, not a 300-word block of text.
    SEO below the fold
    The Movies Now landing page.
  3. Step up creativity on non-SEO pages.
    For many websites, the pages that are most important to SEO have to do with products and services, where conveying features and benefits is needed more than wowing visitors with design. Conversely, pages on which awesome design matters most are often unimportant for SEO: “About,” bio and customer service pages, for example.
    Carsonified Team Pages
    Carsonified’s team pages.

Clarity vs. Creativity

We fight this battle over what I call “design content” all the time — primarily with navigation labels, home-page headers and call-to-action blocks. At a fundamental level, it is a battle over the question, “Which wins over the hearts and minds of visitors more: awesome design or straightforward information?”

Navigation
Making the labels for navigation straightforward is a fairly established best practice. Predictability is important: if visitors are looking for your “About” page, and they finally stumble on it by clicking on “Be Amazed,” then the emotion you will have elicited is irritation, not adoration. Be as creative as you want with the look and feel of the labels, but to maximize the user experience, the text and positioning of the labels must be as vanilla as possible.

Interface
For insight on how to achieve clarity, read “The Language of Interfaces.”

Design of the header on the home page
Rotating header images and other types of animation are rather in vogue these days, and they’re a good way to convey a thumbnail sketch of a firm’s capabilities and value proposition. Content must convey information, but the header must work on an emotional level to be effective. Writers must take a back seat to designers! The Ben the Bodyguard home page (below) starts to build a connection using a comic character and storyline. This is different from most sites, which simply talk about feature after feature.

Ben the Bodyguard
The design should tell a story. (Ben the Bodyguard)

Call-to-action blocks
Before all else, make sure your website’s pages even have calls to action, because this is your opportunity to lead visitors to the logical next step. A call to action could be as simple as a text link, such as “Learn more about our Chicago SEO services.” Generally more effective for conversion would be a design element that functions almost as a miniature landing page.

Much like landing pages, the wording of the call-to-action phrase must be crystal clear and be completely relevant to the page to which you are taking visitors. Yet impeccable wording is not enough: the design of the content block must be captivating, and the text laid out in a way that makes it eminently readable.

Designers can get rather snarly when I tell them their design for a call to action needs five more words: it might force them to rethink the entire design. Many times, though, a discussion with the designer will make us realize that we don’t actually need those extra five words; in fact, we’ll sometimes hit on a way to reduce the word count. The creative interplay mentioned earlier makes a huge difference in this all-important area of conversion optimization.

Calls to action
Calls to action require excellent design and content.

Quick Tips for Conflict Resolution

  1. Keep the lines of communication open between all team members and the client.
  2. Select a project manager with great communication skills and an objective point of view.
  3. Stay focused on the purpose of the design: is it to persuade, motivate, inform or something else? Creative disagreements should never be theoretical; they should always be grounded in what will increase the real-world effectiveness of the work at hand.

Long-Winded Writers Vs. Lofty-Minded Designers

One thing I run up against continually is my own tendency to say too much and a designer’s tendency to say too little. Ask a writer what time it is, and they’ll tell you how to make a clock. Ask a designer what time it is, and they’ll give you a stylized image of a pendulum. Neither answer is particularly helpful!

These opposing mentalities pose challenges in Web design. Does an image alone convey enough information about a product’s key benefit? Will the length of a 200-word explanation of that benefit deter people from reading it? How intuitive can we expect visitors to be? How patient?

This is when having a process that encourages communication between team members makes a difference. I wish I had a secret formula for resolving conflict, but I don’t. I know of only two ways to balance design and content philosophies, and one of them is to talk it out as a team. As I said, communication is at the heart of an effective content strategy, and we have to resist the temptation that some of us have to withdraw into a shell when we encounter confrontation.

The other way to resolve conflicts — astoundingly underused, in my experience — is to get feedback from target users. Simply showing people a Web page and then asking for their key takeaways will tell you just about all you need to know about how effective you’ve been in getting the point across. Our opinion of our own work will always be subjective. Furthermore, because we’re emotionally invested in what we’ve created, discussing its flaws calmly and collectedly is difficult. Users are the ultimate judge of any creative effort, so why not take subjectivity and emotion out of the equation by going directly to the source?

Resources

  • The New Rules of Marketing and PR, David Meerman Scott
    Explains content strategy better than anything I’ve read. The third edition was published in July 2011.
  • “Content Strategy,” Google Knol
    For a thorough overview of content strategy and links to books, blogs and other resources, check out this fantastic Knol.
  • “Call to Action Buttons: Examples and Best Practices,” Jacob Gube
    To promote creative compatibility, designers and writers alike should study this Smashing Magazine article.
  • “Top Ten Mistakes of Web Management,” Jakob Nielsen
    For insight into design-related project management, read this post by the brilliant Web usability expert Jakob Nielsen.

(al) (fi)


© Brad Shorr for Smashing Magazine, 2011.


Integrating Amazon S3 With WordPress





 



 


Computing is full of buzzwords, “cloud computing” being the latest one. But unlike most trends that fizzle out after the initial surge, cloud computing is here to stay. This article goes over Amazon’s S3 cloud storage service and guides you through implementing a WordPress plugin that backs up your WordPress database to Amazon’s S3 cloud. Note that this is not a tutorial on creating a WordPress plugin from scratch, so some familiarity with plugin development is assumed.

The reason for using Amazon S3 to store important data follows from the “3-2-1” backup rule, coined by Peter Krogh. According to the 3-2-1 rule, you would keep three copies of any critical data: the original data, a backup copy on removable media, and a second backup at an off-site location (in our case, Amazon’s S3 cloud).

Cloud Computing, Concisely

Cloud computing is an umbrella term for any data or software hosted outside of your local system. Cloud computing is categorized into three main service types: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

  • Infrastructure as a service
    IaaS provides virtual storage, virtual machines and other hardware resources that clients can use on a pay-per-use basis. Amazon S3, Amazon EC2 and RackSpace Cloud are examples of IaaS.
  • Platform as a service
    PaaS provides virtual machines, application programming interfaces, frameworks and operating systems that clients can deploy for their own applications on the Web. Force.com, Google AppEngine and Windows Azure are examples of PaaS.
  • Software as a service
    Perhaps the most common type of cloud service is SaaS. Most people use services of this type daily. SaaS provides a complete application operating environment, which the user accesses through a browser rather than a locally installed application. SalesForce.com, Gmail, Google Apps and Basecamp are some examples of SaaS.

For all of the service types listed above, the service provider is responsible for managing the cloud system on behalf of the user. The user is spared the tedium of having to manage the infrastructure required to operate a particular service.

Amazon S3 In A Nutshell

Amazon Web Services (AWS) is a bouquet of Web services offered by Amazon that together make up a cloud computing platform. The most essential and best known of these services are Amazon EC2 and Amazon S3. AWS also includes CloudFront, Simple Queue Service, SimpleDB and Elastic Block Store. In this article, we will focus exclusively on Amazon S3.

Amazon S3 is cloud-based data-storage infrastructure that is accessible programmatically via a Web service API (either SOAP or REST). Using the API, you can store various kinds of data in the S3 cloud and retrieve it from anywhere on the Web at any time. But S3 is nothing like the file system you use on your computer. A lot of people think of S3 as a remote file system, containing a hierarchy of files and directories hosted by Amazon. Nothing could be further from the truth.

Amazon S3 is a flat-namespace storage system, devoid of any hierarchy whatsoever. Each storage container in S3 is called a “bucket,” and each bucket serves the same function as that of a directory in a normal file system. However, there is no hierarchy within a bucket (that is, you cannot create a bucket within a bucket). Each bucket allows you to store various kinds of data, ranging in size from 1 B to a whopping 5 GB.

A file stored in a bucket is referred to as an object. An object is the basic unit of stored data on S3. Objects consist of data and meta data. The meta data is a set of name-value pairs that describe the object. Meta data is optional but often adds immense value, whether it’s the default meta data added by S3 (such as the date last modified) or standard HTTP meta data such as Content-Type.

So, what kinds of objects can you store on S3? Any kind you like. It could be a simple text file, a style sheet, programming source code, or a binary file such as an image, video or ZIP file. Each S3 object has its own URL, which you can use to access the object in a browser (if appropriate permissions are set — more on this later).
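To make the idea of object meta data concrete, here is a sketch of reading it back with the CloudFusion SDK. The get_object_metadata method and the field names shown are per the 1.x SDK and should be treated as assumptions if your version differs:

```php
/* Sketch: inspect an object's meta data, such as its content type
   and last-modified date. Assumes the CloudFusion AmazonS3 class
   and the codediesel bucket used as an example below. */
$s3 = new AmazonS3();
$metadata = $s3->get_object_metadata('codediesel',
                                     'aws-s3-database-backup.zip');

if ($metadata !== false) {
    echo $metadata['ContentType'];   // standard HTTP meta data
    echo $metadata['LastModified'];  // default meta data added by S3
}
```

This is the same name-value meta data described above, only viewed from the retrieval side.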

You can write the URL in two formats, which look something like this:

http://s3.amazonaws.com/codediesel/filename
http://codediesel.s3.amazonaws.com/filename

The bucket’s name here is deliberately simple, codediesel. It can be more complex, reflecting the structure of your application, like codediesel.wordpress.backup or codediesel.assets.images.

Every S3 object has a unique URL, formed by concatenating the following components:

  1. Protocol (http:// or https://),
  2. S3 end point (s3.amazonaws.com),
  3. Bucket’s name,
  4. Object key, starting with /.

In order to be able to identify buckets, the S3 system requires that you assign a name to each bucket, which must be unique across the S3 bucket namespace. So, if a user has named one of their buckets company-docs, you cannot create a bucket with that name anywhere in the S3 namespace. Object names in a bucket, however, must be unique only to that bucket; so, two different buckets can have objects with the same name. Also, you can describe objects stored in buckets with additional information using meta data.

Bucket names must comply with the following requirements:

  • May contain lowercase letters, numbers, periods (.), underscores (_) and hyphens (-);
  • Must begin with a number or letter;
  • Must be between 3 and 255 characters long;
  • May not be formatted as an IP address (e.g. 192.168.5.4).
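The rules above can be expressed as a small validation helper. This is a rough sketch of our own, not an official AWS routine, and the function name is ours:

```php
<?php
/* Rough validator for the bucket-naming rules listed above.
   An illustration only, not an official AWS routine. */
function is_valid_bucket_name($name)
{
    /* Must be 3-255 characters, begin with a lowercase letter or number,
       and contain only lowercase letters, numbers, periods,
       underscores and hyphens */
    if (!preg_match('/^[a-z0-9][a-z0-9._-]{2,254}$/', $name)) {
        return false;
    }
    /* May not be formatted as an IP address, e.g. 192.168.5.4 */
    if (preg_match('/^\d+\.\d+\.\d+\.\d+$/', $name)) {
        return false;
    }
    return true;
}
```

A name such as codediesel.assets.images passes, while an uppercase or two-character name is rejected.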

In short, Amazon S3 provides a highly reliable cloud-based storage infrastructure, accessible via a SOAP or REST API. Some common usage scenarios for S3 are:

  • Backup and storage
    Provide data backup and storage services.
  • Host applications
    Provide services that deploy, install and manage Web applications.
  • Host media
    Build a redundant, scalable and highly available infrastructure that hosts video, photo or music uploads and downloads.
  • Deliver software
    Host your software applications that customers can download.

Amazon S3’s Pricing Model

Amazon S3 is a paid service; you need to attach a credit card to your Amazon account when signing up. But it is surprisingly low priced, and you pay only for what you use; if you use no resources in your S3 account, you pay nothing. Also, as part of the AWS “Free Usage Tier,” upon signing up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, and 15 GB of data transfer out each month free for one year.

So, how much do you pay after the free period? As a rough estimate, if you stored 5 GB of data per month, with data transfers of 15 GB and 40,000 GET and PUT requests a month, the cost would be around $2.60 per month. That’s lower than the cost of a burger — inexpensive by any standard. The prices may change, so use the calculator on the S3 website.

Your S3 usage is charged according to three main parameters:

  • The total amount of data stored,
  • The total amount of data transferred in and out of S3 per month,
  • The number of requests made to S3 per month.

Your S3 storage charges are calculated on a unit known as a gigabyte-month. If you store 1 GB for one month, you’ll be charged for one gigabyte-month, which is $0.14.

Your data transfer charges are based on the amount of data uploaded and downloaded from S3. Data transferred out of S3 is charged on a sliding scale, starting at $0.12 per gigabyte and decreasing based on volume, reaching $0.050 per gigabyte for all outgoing data transfer in excess of 350 terabytes per month. Note that there is no charge for data transferred within an Amazon S3 “region” via a COPY request, and no charge for data transferred between Amazon EC2 and Amazon S3 within the same region or for data transferred between the Amazon EC2 Northern Virginia region and the Amazon S3 US standard region. To avoid surprises, always check the latest pricing policies on Amazon.
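Using the 2011 rates quoted above ($0.14 per gigabyte-month of storage, $0.12 per gigabyte of outgoing transfer), the rough monthly estimate from earlier can be reproduced; request charges, which add only a few cents at this volume, are left out of this sketch:

```php
<?php
/* Back-of-the-envelope S3 bill, using the 2011 rates quoted above.
   Request charges (fractions of a cent per thousand calls) are omitted. */
$storage_gb_months = 5;    // data stored over the month, in GB-months
$transfer_out_gb   = 15;   // data downloaded from S3, in GB

$storage_cost  = $storage_gb_months * 0.14;  // $0.14 per gigabyte-month
$transfer_cost = $transfer_out_gb   * 0.12;  // $0.12 per outgoing gigabyte

$total = $storage_cost + $transfer_cost;     // about $2.50 before request fees
```

Adding the GET and PUT request fees brings the figure close to the $2.60 mentioned above.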

Introduction To The Amazon S3 API And CloudFusion

Now with the theory behind us, let’s get to the fun part: writing code. But before that, you will need to register with S3 and create an AWS account. If you don’t already have one, you’ll be prompted to create one when you sign up for Amazon S3.

Before moving on to the coding part, let’s get acquainted with some visual tools that we can use to work with Amazon S3. Various visual and command-line tools are available to help you manage your S3 account and the data in it. Because the visual tools are easy to work with and user-friendly, we will focus on them in this article. I prefer working with the AWS Management Console for security reasons.

AWS Management Console

The Management Console is part of AWS. Because it is a part of your AWS account, no configuration is necessary. Once you’ve logged in, you have full access to all of your S3 data and other AWS services. You can create new buckets, create objects, apply security policies, copy objects to different buckets, and perform a multitude of other functions.

S3Fox Organizer

The other popular tool is S3Fox Organizer. S3Fox Organizer is a Firefox extension that enables you to upload and download files to and from your Amazon S3 account. The interface, which opens in a Firefox browser tab, looks very much like a regular FTP client with dual panes. It displays files on your PC on the left, files on S3 on the right, and status messages and information in a panel at the bottom.

Onto The Coding

As stated earlier, AWS is Amazon’s Web service infrastructure that encompasses various cloud services, including S3, EC2, SimpleDB and CloudFront. Integrating these varied services can be a daunting task. Thankfully, we have at our disposal an SDK library in the form of CloudFusion, which enables us to work with AWS effortlessly. CloudFusion is now the official AWS SDK for PHP, and it encompasses most of Amazon’s cloud products: S3, EC2, SimpleDB, CloudFront and many more.

For this post, I downloaded the ZIP version of the CloudFusion SDK, but the library is also available as a PEAR package. So, go ahead: download the latest version from the official website, and extract the ZIP to your working directory or to your PHP include path. In the extracted directory, you will find the config-sample.inc.php file, which you should rename to config.inc.php. You will need to make some changes to the file to reflect your AWS credentials.

In the config file, locate the following lines:

define('AWS_KEY', '');
define('AWS_SECRET_KEY', '');

Modify the lines to mirror your Amazon AWS’ security credentials. You can find the credentials in your Amazon AWS account section, as shown below.

Get the keys, and fill them in on the following lines:

define('AWS_KEY', 'your_access_key_id');
define('AWS_SECRET_KEY', 'your_secret_access_key');

You can retrieve your access key and secret key from your Amazon account page:

With all of the basic requirements in place, let’s create our first bucket on Amazon S3, with a name of your choice. The following example shows a bucket by the name of com.smashingmagazine.images. (Of course, by the time you read this, this name may already have been taken.) Choose a structure for your bucket’s name that is relevant to your work. For each bucket, you can control access to the bucket, view access logs for the bucket and its objects, and set the geographical region where Amazon S3 will store the bucket and its contents.

/* Include the CloudFusion SDK class */
require_once('sdk-1.4.4/sdk.class.php');

/* Our bucket name */
$bucket = 'com.smashingmagazine.images';

/* Initialize the class */
$s3 = new AmazonS3();

/* Create a new bucket */
$resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);

/* Check if the bucket was successfully created */
if ($resource->isOK()) {
    print("'${bucket}' bucket created\n");
} else {
    print("Error creating bucket '${bucket}'\n");
}

Let’s go over each line in the example above. First, we included the CloudFusion SDK class in our file. You’ll need to adjust the path depending on where you’ve stored the SDK files.

require_once( 'sdk-1.4.4/sdk.class.php');

Next, we instantiated the Amazon S3 class:

$s3 = new AmazonS3();

In the next step, we created the actual bucket; in this case, com.smashingmagazine.images. Again, your bucket’s name must be unique across all existing bucket names in Amazon S3. One way to ensure this is to prefix the name with your company’s name or domain, as we’ve done here. But this does not guarantee that the name will be available. Nothing prevents anyone from creating a bucket named com.microsoft.apps or com.ibm.images, so choose wisely.

$bucket = 'com.smashingmagazine.images';
$resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);

To reiterate, bucket names must comply with the following requirements:

  • May contain lowercase letters, numbers, periods (.), underscores (_) and hyphens (-);
  • Must start with a number or letter;
  • Must be between 3 and 255 characters long;
  • May not be formatted as an IP address (e.g. 192.168.5.4).

Also, you’ll need to select a geographical location for your bucket. A bucket can be stored in one of several regions. Reasons for choosing one region over another might be to optimize for latency, to minimize costs, or to satisfy regulatory requirements. Many organizations have privacy policies and regulations on where to store data, so consider this when selecting a location. Objects never leave the region they are stored in unless you explicitly transfer them to another region. That is, if your data is stored on servers located in the US, it will never be copied or transferred by Amazon to servers outside of this region; you’ll need to do that manually using the API or AWS tools. In the example above, we have chosen the REGION_US_E1 region.

Here are the permitted values for regions:

  • AmazonS3::REGION_US_E1
  • AmazonS3::REGION_US_W1
  • AmazonS3::REGION_EU_W1
  • AmazonS3::REGION_APAC_SE1
  • AmazonS3::REGION_APAC_NE1

Finally, we checked whether the bucket was successfully created:

if ($resource->isOK()) {
    print("'${bucket}' bucket created\n");
} else {
    print("Error creating bucket '${bucket}'\n");
}

Now, let’s see how to get a list of the buckets we’ve created on S3. So, before proceeding, create a few more buckets to your liking. Once you have a few buckets in your account, it is time to list them.

/* Include the CloudFusion SDK class */
require_once ('sdk-1.4.4/sdk.class.php');

/* Our bucket name */
$bucket = 'com.smashingmagazine.images';

/* Initialize the class */
$s3 = new AmazonS3();

/* Get a list of buckets */
$buckets = $s3->get_bucket_list();

if($buckets)  {
    foreach ($buckets as $b) {
        echo $b . "\n";
    }
}

The only new part in the code above is the following line, which gets an array of bucket names:

$buckets = $s3->get_bucket_list();

Finally, we printed out all of our buckets’ names.

if($buckets)  {
    foreach ($buckets as $b) {
        echo $b . "\n";
    }
}

This concludes our overview of creating and listing buckets in our S3 account. We also learned about S3Fox Organizer and the AWS console tools for working with your S3 account.

Uploading Data To Amazon S3

Now that we’ve learned how to create and list buckets in S3, let’s figure out how to put objects into buckets. This is a little complex, and we have a variety of options to choose from. The main method for doing this is create_object. The method takes the following format:

create_object ( $bucket, $filename, [ $opt = null ] )

The first parameter is the name of the bucket in which the object will be stored. The second parameter is the name by which the file will be stored on S3. Using only these two parameters is enough to create an empty object with the given file name. For example, the following code would create an empty object named config-empty.inc in the com.magazine.resources bucket:

$s3 = new AmazonS3();
$bucket = 'com.magazine.resources';
$response = $s3->create_object($bucket, 'config-empty.inc');

// Success?
var_dump($response->isOK());

Once the object is created, we can access it using a URL. The URL for the object above would be:

https://s3.amazonaws.com/com.magazine.resources/config-empty.inc

Of course, if you tried to access the URL from a browser, you would be greeted with an “Access denied” message, because objects stored on S3 are set to private by default, viewable only by the owner. You have to explicitly make an object public (more on that later).
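As a preview of the access controls covered later, CloudFusion’s set_object_acl() method can flip an existing object to public after the fact. The following is a sketch, assuming the SDK is configured with your credentials as shown earlier:

```php
<?php
/* Sketch: make an existing private object publicly readable.
   Assumes CloudFusion is configured with your AWS credentials. */
require_once('sdk-1.4.4/sdk.class.php');

$s3 = new AmazonS3();
$response = $s3->set_object_acl('com.magazine.resources',
                                'config-empty.inc',
                                AmazonS3::ACL_PUBLIC);

if ($response->isOK()) {
    /* The object's URL is now viewable in a browser */
}
```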

To add some content to the object at the time of creation, we can use the following code. This would add the text “Hello World!” to the config-empty.inc file.

$response = $s3->create_object($bucket, 'config-empty.inc',
    array(
        'body' => 'Hello World!'
));

As a complete example, the following code would create an object with the name simple.txt, along with some content, and save it in the given bucket. An object may also optionally contain meta data that describes that object.

/* Initialize the class */
$s3 = new AmazonS3();

/* Our bucket name */
$bucket = 'com.magazine.resources';

$response = $s3->create_object($bucket, 'simple.txt',
    array(
        'body' => 'Hello World!'
));

if ($response->isOK())
{
    return true;
}

You can also upload a file, rather than just a string, as shown below. Although many options are displayed here, most have a default value and may be omitted. More information on the various options can be found in the “AWS SDK for PHP 1.4.7” documentation.

require_once('sdk-1.4.4/sdk.class.php');

$s3 = new AmazonS3();
$bucket = 'com.smashingmagazine.images';

$response = $s3->create_object($bucket, 'source.php',
    array(
    'fileUpload' => 'test.php',
    'acl' => AmazonS3::ACL_PRIVATE,
    'contentType' => 'text/plain',
    'storage' => AmazonS3::STORAGE_REDUCED,
    'headers' => array( // raw headers
        'Cache-Control' => 'max-age',
        'Content-Encoding' => 'text/plain',
        'Content-Language' => 'en-US',
        'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
    )
));

// Success?
var_dump($response->isOK());

Details on the various options will be explained in the coming sections. For now, take it on faith that the above code will correctly upload a file to the S3 server.

Writing Our Amazon S3 WordPress Plugin

With some background on Amazon S3 behind us, it is time to put our learning into practice. We are ready to build a WordPress plugin that will automatically back up our WordPress database to the S3 server and restore it when needed.

To keep the article focused and at a reasonable length, we’ll assume that you’re familiar with WordPress plugin development. If you are a little sketchy on the fundamentals, read “How to Create a WordPress Plugin” to get on track quickly.

The Plugin’s Framework

We’ll first create a skeleton and then gradually fill in the details. To create a plugin, navigate to the wp-content/plugins folder, and create a new folder named s3-backup. In the new folder, create a file named s3-backup.php. Open the file in the editor of your choice, and paste the following header information, which will describe the plugin for WordPress:

/*
Plugin Name: Amazon S3 Backup
Plugin URI: http://cloud-computing-rocks.com
Description: Plugin to back up WordPress database to Amazon S3
Version: 1.0
Author: Mr. Sameer
Author URI: http://www.codediesel.com
License: GPL2
*/

Once that’s done, go to the plugin’s page in the admin area, and activate the plugin.

Now that we’ve successfully installed a bare-bones WordPress plugin, let’s add the meat and create a complete working system. Before we start writing the code, we should know what the admin page for the plugin will ultimately look like and what tasks the plugin will perform. This will guide us in writing the code. Here is the main settings page for our plugin:

The interface is fairly simple. The primary task of the plugin will be to back up the current WordPress database to an Amazon S3 bucket and to restore the database from the bucket. The settings page will also have a function for naming the bucket in which the backup will be stored. Also, we can specify whether the backup will be available to the public or accessible only to you.

Below is a complete outline of the plugin’s code. We will elaborate on each section in turn.

/*
Plugin Name: Amazon S3 Backup
Plugin URI: http://cloud-computing-rocks.com
Description: Plugin to back up WordPress database to Amazon S3
Version: 1.0
Author: Mr. Sameer
Author URI: http://www.codediesel.com
License: GPL2
*/

$plugin_path = WP_PLUGIN_DIR . "/" . dirname(plugin_basename(__FILE__));

/* CloudFusion SDK */
require_once($plugin_path . '/sdk-1.4.4/sdk.class.php');

/* WordPress ZIP support library */
require_once(ABSPATH . '/wp-admin/includes/class-pclzip.php');

add_action('admin_menu', 'add_settings_page');

/* Save or Restore Database backup */
if(isset($_POST['aws-s3-backup'])) {
…
}

/* Generic Message display */
function showMessage($message, $errormsg = false) {
…
}   

/* Back up WordPress database to an Amazon S3 bucket */
function backup_to_AmazonS3() {
…
}

/* Restore WordPress backup from an Amazon S3 bucket */
function restore_from_AmazonS3() {
…
}

function add_settings_page() {
…
}

function draw_settings_page() {
…
}

Here is the directory structure that our plugin will use:

plugins (WordPress plugin directory)
---s3-backup (our plugin directory)
-------s3backup (restored backup will be stored in this directory)
-------sdk-1.4.4 (CloudFusion SDK directory)
-------s3-backup.php (our plugin source code)

Let’s start coding the plugin. First, we’ll initialize some variables for paths and include the CloudFusion SDK. A WordPress database can get large, so to conserve space and bandwidth, the plugin will need to compress the database before uploading it to the S3 server. To do this, we will use the class-pclzip.php ZIP compression support library, which is built into WordPress. Finally, we’ll hook the settings page to the admin menu.

$plugin_path = WP_PLUGIN_DIR . "/" . dirname(plugin_basename(__FILE__));

/* CloudFusion SDK */
require_once($plugin_path . '/sdk-1.4.4/sdk.class.php');

/* WordPress ZIP support library */
require_once(ABSPATH . '/wp-admin/includes/class-pclzip.php');

/* Create the admin settings page for our plugin */
add_action('admin_menu', 'add_settings_page');

Every WordPress plugin must have its own settings page. Ours is a simple one, with a few buttons and fields. The following is the code for it, which will handle the mundane work of saving the bucket’s name, displaying the buttons, etc.

function draw_settings_page() {
    ?>
    <div class="wrap">
        <h2><?php echo('WordPress Database Amazon S3 Backup') ?></h2>
        <form method="post" action="options.php">
            <input type="hidden" name="action" value="update" />
            <input type="hidden" name="page_options" value="aws-s3-access-public, aws-s3-access-bucket" />
            <?php
                wp_nonce_field('update-options');
                $access_options = get_option('aws-s3-access-public');
            ?>
            <p>
                <?php echo('Bucket Name:') ?>
                <input type="text" name="aws-s3-access-bucket" size="64" value="<?php echo get_option('aws-s3-access-bucket'); ?>" />
            </p>
            <p>
                <?php echo('Public:') ?>
                <input type="checkbox" name="aws-s3-access-public" <?php checked(1 == $access_options); ?> value="1" />
            </p>
            <p class="submit">
                <input type="submit" class="button-primary" name="Submit" value="<?php echo('Save Changes') ?>" />
            </p>
        </form>
        <hr />
        <form method="post" action="">
            <p class="submit">
                <input type="submit" name="aws-s3-backup" value="<?php echo('Backup Database') ?>" />
                <input type="submit" name="aws-s3-restore" value="<?php echo('Restore Database') ?>" />
            </p>
        </form>
    </div>
    <?php
}

Setting up the base framework is essential if the plugin is to work correctly. So, double-check your work before proceeding.

Database Upload

Next is the main part of the plugin, its raison d’être: the function for backing up the database to the S3 bucket.

/* Back up WordPress database to an Amazon S3 bucket */
function backup_to_AmazonS3()
{
    global $wpdb, $plugin_path;

    /* Backup file name */
    $backup_zip_file = 'aws-s3-database-backup.zip';

    /* Temporary directory and file name where the backup file will be stored */
    $backup_file = $plugin_path . "/s3backup/aws-s3-database-backup.sql";

    /* Complete path to the compressed backup file */
    $backup_compressed = $plugin_path . "/s3backup/" . $backup_zip_file;

    $tables = $wpdb->get_col("SHOW TABLES LIKE '" . $wpdb->prefix . "%'");
    $result = shell_exec('mysqldump --single-transaction -h ' .
                         DB_HOST . ' -u ' . DB_USER . ' --password="' .
                         DB_PASSWORD . '" ' .
                         DB_NAME . ' ' . implode(' ', $tables) . ' > ' .
                         $backup_file);

    $backups[] = $backup_file;

    /* Create a ZIP file of the SQL backup */
    $zip = new PclZip($backup_compressed);
    $zip->create($backups);

    /* Connect to Amazon S3 to upload the ZIP */
    $s3 = new AmazonS3();
    $bucket = get_option('aws-s3-access-bucket');

    /* Check if a bucket name is specified */
    if(empty($bucket)) {
        showMessage("No Bucket specified!", true);
        return;
    }

    /* Set backup public options */
    $access_options = get_option('aws-s3-access-public');

    if($access_options) {
        $access = AmazonS3::ACL_PUBLIC;
    } else {
        $access = AmazonS3::ACL_PRIVATE;
    }

    /* Upload the database itself */
    $response = $s3->create_object($bucket, $backup_zip_file,
                array(
                'fileUpload' => $backup_compressed,
                'acl' => $access,
                'contentType' => 'application/zip',
                'encryption' => 'AES256',
                'storage' => AmazonS3::STORAGE_REDUCED,
                'headers' => array( // raw headers
                    'Cache-Control' => 'max-age',
                    'Content-Encoding' => 'application/zip',
                    'Content-Language' => 'en-US',
                    'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
                )
            ));

    if($response->isOK()) {
        unlink($backup_compressed);
        unlink($backup_file);
        showMessage("Database successfully backed up to Amazon S3.");
    } else {
        showMessage("Error connecting to Amazon S3", true);
    }
}

The code is self-explanatory, but some sections need a bit of explanation. Because we want to back up the complete WordPress database, we need to somehow get ahold of the MySQL dump file for the database. There are multiple ways to do this: by using MySQL queries within WordPress to save all tables and rows of the database, or by dropping down to the shell and using mysqldump. We will use the second method. The code for the database dump shown below uses the shell_exec function to run the mysqldump command to grab the WordPress database dump. The dump is further saved to the aws-s3-database-backup.sql file.

$tables = $wpdb->get_col("SHOW TABLES LIKE '" . $wpdb->prefix . "%'");
$result = shell_exec('mysqldump --single-transaction -h ' .
                         DB_HOST . ' -u ' . DB_USER .
                         ' --password="' . DB_PASSWORD . '" ' .
                         DB_NAME . ' ' . implode(' ', $tables) .
                         ' > ' . $backup_file);

$backups[] = $backup_file;

The SQL dump will obviously be big on most installations, so we’ll need to compress it before uploading it to S3 to conserve space and bandwidth. We’ll use WordPress’ built-in ZIP functions for the task. The PclZip class is stored in the /wp-admin/includes/class-pclzip.php file, which we have included at the start of the plugin. The aws-s3-database-backup.zip file is the final ZIP file that will be uploaded to the S3 bucket. The following lines will create the required ZIP file.

/* Create a ZIP file of the SQL backup */
    $zip = new PclZip($backup_compressed);
    $zip->create($backups);

The PclZip constructor takes a file name as an input parameter; aws-s3-database-backup.zip, in this case. And to the create method we pass an array of files that we want to compress; we have only one file to compress, aws-s3-database-backup.sql.

Now that we’ve taken care of the database, let’s move on to the security. As mentioned in the introduction, objects stored on S3 can be set as private (viewable only by the owner) or public (viewable by everyone). We set this option using the following code.

/* Set backup public options */
    $access_options = get_option('aws-s3-access-public');

    if($access_options) {
        $access = AmazonS3::ACL_PUBLIC;
    } else {
        $access = AmazonS3::ACL_PRIVATE;
    }

We have listed only two options for access (AmazonS3::ACL_PUBLIC and AmazonS3::ACL_PRIVATE), but there are several more, as listed below; the details can be found in the Amazon SDK documentation.

  • AmazonS3::ACL_PRIVATE
  • AmazonS3::ACL_PUBLIC
  • AmazonS3::ACL_OPEN
  • AmazonS3::ACL_AUTH_READ
  • AmazonS3::ACL_OWNER_READ
  • AmazonS3::ACL_OWNER_FULL_CONTROL

Now on to the main code that does the actual work of uploading. This could have been complex, but the CloudFusion SDK makes it easy. We use the create_object method of the S3 class to perform the upload. We got a short glimpse of the method in the last section.

/* Upload the database itself */
$response = $s3->create_object($bucket, $backup_zip_file, array(
            'fileUpload' => $backup_compressed,
            'acl' => $access,
            'contentType' => 'application/zip',
            'encryption' => 'AES256',
            'storage' => AmazonS3::STORAGE_REDUCED,
            'headers' => array( // raw headers
                'Cache-Control' => 'max-age',
                'Content-Encoding' => 'application/zip',
                'Content-Language' => 'en-US',
                'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
            )
        ));

Let’s go over each line in turn. But bear in mind that the method has quite a few options, so refer to the original documentation for the full list.

  • $backup_zip_file
    The name of the object that will be created on S3.
  • 'fileUpload' => $backup_compressed
    The name of the file, whose data will be uploaded to the server. In our case, aws-s3-database-backup.zip.
  • 'acl' => $access
    The access type for the object. In our case, either public or private.
  • 'contentType' => 'application/zip'
    The type of content that is being sent in the body. If a file is being uploaded via fileUpload, as in our case, it will attempt to determine the correct MIME type based on the file’s extension. The default value is application/octet-stream.
  • 'encryption' => 'AES256'
    The algorithm to use for encrypting the object. (Allowed values: AES256)
  • 'storage' => AmazonS3::STORAGE_REDUCED
    Specifies whether to use “standard” or “reduced redundancy” storage. Allowed values are AmazonS3::STORAGE_STANDARD and AmazonS3::STORAGE_REDUCED. The default value is STORAGE_STANDARD.
  • 'headers' => array( // raw headers
    'Cache-Control' => 'max-age',
    'Content-Encoding' => 'application/zip',
    'Content-Language' => 'en-US',
    'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
    )

    The standard HTTP headers to send along with the request. These are optional.

Note that the plugin does not provide a function to create a bucket on Amazon S3. You need to use Amazon’s AWS Management Console or S3Fox Organizer to create a bucket before uploading objects to it.
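If you would rather stay in code, the bucket could also be created programmatically with the same create_bucket call used earlier in the article. A sketch, assuming the SDK’s if_bucket_exists() helper, run once rather than on every backup:

```php
<?php
/* One-off sketch: create the backup bucket with the SDK instead of
   the Management Console. Assumes CloudFusion's if_bucket_exists() helper. */
require_once($plugin_path . '/sdk-1.4.4/sdk.class.php');

$s3 = new AmazonS3();
$bucket = get_option('aws-s3-access-bucket');

/* Only create the bucket if it does not already exist */
if (!empty($bucket) && !$s3->if_bucket_exists($bucket)) {
    $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);
}
```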

Database Restore

Merely being able to back up data is insufficient. We also need to be able to restore it when the need arises. In this section, we’ll lay out the code for restoring the database from S3. When we say “restore,” keep in mind that the database’s ZIP file from S3 will simply be downloaded to the specified folder in our plugin directory. The actual database on our WordPress server is not changed in any way; you will have to restore the database yourself manually. We could have equipped our plugin to also auto-restore, but that would have made the code a lot more complex.

Here is the complete code for the restore function:

/* Restore WordPress backup from an Amazon S3 bucket */
function restore_from_AmazonS3()
{
    global $plugin_path;

    /* Backup file name */
    $backup_zip_file = 'aws-s3-database-backup.zip';

    /* Complete path to the compressed backup file */
    $backup_compressed = $plugin_path . "/s3backup/" . $backup_zip_file;

    $s3 = new AmazonS3();
    $bucket = get_option('aws-s3-access-bucket');

    if(empty($bucket)) {
        showMessage("No Bucket specified!", true);
        return;
    }

    $response = $s3->get_object($bucket, $backup_zip_file, array(
                'fileDownload' => $backup_compressed
                 ));

    if($response->isOK()) {
        showMessage("Database successfully restored from Amazon S3.");
    } else {
        showMessage("Error connecting to Amazon S3", true);
    }

}

As you can see, the code for restoring is much simpler than the code for uploading. The function uses the get_object method of the SDK, the definition of which is as follows:

get_object ( $bucket, $filename, [ $opt = null ] )

The details of the method’s parameters are enumerated below:

  • $bucket
    The name of the bucket where the backup file is stored. The bucket’s name is stored in our WordPress settings variable aws-s3-access-bucket, which we retrieve using the get_option('aws-s3-access-bucket') function.
  • $backup_zip_file
    The file name of the backup object. In our case, aws-s3-database-backup.zip.
  • 'fileDownload' => $backup_compressed
    The file system location to download the file to, or an open file resource. In our case, the s3backup directory in our plugin folder. It must be a server-writable location.

At the end of the function, we check whether the download was successful and inform the user.

In addition to the above functions, there are some miscellaneous support functions. One is for displaying a message to the user:

/* Generic message display */
function showMessage($message, $errormsg = false) {
	if ($errormsg) {
		echo '<div id="message" class="error">';
	} else {
		echo '<div id="message" class="updated">';
	}

	echo "<p><strong>$message</strong></p></div>";
}

Another is to add a hook for the settings page to the admin section.

function add_settings_page() {
    add_options_page('Amazon S3 Backup', 'Amazon S3 Backup', 8,
                     's3-backup', 'draw_settings_page');
}

The button commands for backing up and restoring are handled by this simple code:

/* Save or Restore Database backup */
if(isset($_POST['aws-s3-backup'])) {
    backup_to_AmazonS3();
} elseif(isset($_POST['aws-s3-restore'])) {
    restore_from_AmazonS3();
}

This rounds out our tutorial on creating a plugin to integrate Amazon S3 with WordPress. Although we could have added more features to the plugin, the functionality was kept bare to maintain focus on Amazon S3’s basics, rather than on the calisthenics of plugin development.

In Conclusion

This has been a relatively long article, the aim of which was to acquaint you with Amazon S3 and how to use it in your PHP applications. Together, we’ve developed a WordPress plugin to back up our WordPress database to the S3 cloud and to retrieve it when needed. To reiterate, S3 is a complex service with many features and functions, which we have not covered here due to space considerations. Perhaps in future articles we’ll elaborate on the other features of S3 and add more features to the plugin we developed here.

(al)


© Sameer Borate for Smashing Magazine, 2011.

