
The History Of Usability: From Simplicity To Complexity


  

The story of usability is a perverse journey from simplicity to complexity. That’s right, from simplicity to complexity—not the other way around.

If you expect a “user-friendly” introduction to usability, full of well-defined concepts and lean methods, you’re in for a surprise. Usability is a messy, ill-defined, and downright confusing concept. The more you think about it—or practice it—the more confusing it becomes. As we shall see, the history of usability really is a “perverse journey from simplicity to complexity”.

What Is Usability?

Early Roots of Usability

If we go far back in history, Vitruvius (1st century BC) was probably the first person to set down systematic and elaborate principles of design. His three core design principles became very influential:

  1. Firmitas: The strength and durability of the design;
  2. Utilitas: A design’s usefulness and suitability for the needs of its intended users;
  3. Venustas: The beauty of the design.

Vitruvius’ work was an inspiration to people like Leonardo da Vinci, who drew the well-known Vitruvian Man (fig. 1 below). By empirically measuring and calculating the proportions of the human body, and emphasising the “utilitas” principle, Vitruvius may be considered the first student of ergonomics and usability.


Figure 1: The Vitruvian Man drawing was created by Leonardo da Vinci circa 1487 based on the work of Vitruvius.

Military Roots

The discipline of usability is also rooted in the discipline called Human Factors, which started as military personnel asked themselves the very morbid question:

“What design do we need to kill more enemies through better matching of soldier and weapon, and thus avoid getting killed ourselves?”

Both World War I and World War II fueled research into Human Factors. When designing artillery cannons, for example, better usability yielded more precision, more kills, and shorter training times for personnel.

Thus, military designers could extract some very concrete usability metrics. For example:

  • How quickly will a new crew member learn how to use the artillery cannon (now that the former crew member is dead)?
  • How many rounds per minute can the cannon fire with an inexperienced versus an experienced crew?
  • How will improving the design of the cannon improve target acquisition (and thus kill more enemies)?
  • How does a design improvement decrease soldier fatigue (as a consequence of a lighter cognitive load)?

Figure 2: Photograph of the French 320 mm railway gun Cyclone, taken in Belgium in 1917. The gun required not only trained personnel to fire it, but also trained personnel to drive it.

Figure 3: A 155 mm artillery shell fired by a United States 11th Marine Regiment’s M-198 howitzer. The setup time (and thus usability) is essential as anti-artillery weapons necessitate that the position of the howitzer be changed very quickly after firing.

Recent Roots of Usability

The concept of usability has its more recent and direct origins in the falling prices of computers in the 1980s, when for the first time it was feasible for many employees to have their own personal computer. In the 80s, most computer users had practically no, or only basic, training on operating systems and applications software. However, software design practices continued to implicitly assume knowledgeable and competent users, who would be familiar with technical vocabularies and system architectures, and also possess an aptitude for solving problems arising from computer usage.

Such implicit assumptions rapidly became unacceptable. For the average user, interactive computing became associated with constant frustrations and consequent anxieties. Computers were obviously too hard to use for most users, and often absolutely impractical. Usability thus became a key goal for the design of any interactive software that would not be used by trained technical computer specialists.

The current understanding of usability differs from that of the early days in the 1980s. Usability used to be the dominant concept, but this changed as research increasingly focused on usage contexts. Usage quality no longer appeared to be a simple issue of how inherently usable an interactive system was, but of how well it fitted its context of use.

Usability Evaluation: What’s Good And What’s Bad?

Usability is a contested historical term that is difficult to replace. User experience specialists have to refer to usability, since it is a strongly established concept within the IT landscape. In simplified terms, usability work is about finding out what’s good and what’s bad. However, when we examine the hundreds of usability evaluation methods in use, we do see that different approaches to usability result in differences over the causes of good and poor usability. That may sound complicated but let’s take two different approaches to usability:

  1. If you think usability is a feature of an interactive system, your approach to usability may be called essentialist—i.e. poor/good usability resides in the “essence” of the system. You will typically find yourself saying things like “that website is not user-friendly”, “a website or system has poor usability when there is no visibility of system status”, “I can reliably measure which website has the best usability”, etc. This means you think that all causes of user performance are due to technology. In that case, you will typically use system-centered usability inspection methods to identify such causes.
  2. On the other hand, if you think usability is a feature of the interaction between user, computer and the context, your approach to usability is contextual—i.e. depending on the context. This means that you think questions of user performance have different causalities, some due to technologies, and others due to some aspect(s) of usage contexts, but most due to interactions between both. Several evaluation and other methods may be needed to identify and relate a nexus of causes. You will often find yourself saying vague things like “that depends…”, “well… this website checkout procedure is great for male, fact-oriented, middle-aged buyers, but not for an impatient teenager doing the purchase on his smartphone sitting in the bus”, etc.

The reason I mention the essentialist/contextual distinction is that anyone involved with usability should—ideally—be able to say “this website/technology/system is good, that one is bad”. After all, isn’t that what your client or boss is paying you to do?

To answer if the usability of a website is good or bad you have to employ a usability method. And your choice of usability method will depend on your approach to usability—whether you admit it or not. Maybe you’ll deny it and say, “I’ve never heard of any essentialist/contextual approaches to usability.” However, that would be like selling French wine without ever having spent time in a French vineyard. You can do it, but at some point your client or boss will start asking questions you can’t answer. Or your decisions will have unexpected side-effects.

So what usability method should you choose to determine “what’s good and what’s bad”?

Analytical And Empirical Evaluation Methods

Usability work can involve a mix of methods and the mix can be guided by high level distinctions between methods, for example the distinction between analytical and empirical evaluation methods.

  1. Analytical evaluation methods are based on examination of an interactive system and/or potential interactions with it. You analyse the system or analyse the interaction with the system.
  2. Empirical evaluation methods, on the other hand, are based on actual usage data.

Analytical evaluation methods may be system-centred, like Jakob Nielsen’s Heuristic Evaluation or interaction-centred, like the Cognitive Walkthrough method. Design teams use the resources provided by a method (e.g. heuristics) to identify strong and weak elements of a design from a usability perspective.

Three types of analytical evaluation methods:

  • Inspection methods tend to focus on the causes of good or poor usability.
  • System-centered inspection methods focus solely on software and hardware features regarding attributes that will promote or obstruct usability.
  • Interaction-centered methods focus on two or more causal factors (i.e. software features, user characteristics, task demands, or other contextual factors).


Figure 4: Jakob Nielsen’s Heuristic Evaluation is a good example of an analytical evaluation method. Heuristic Evaluation became the most popular user-centered design approach in the 1990s, but has become less prominent with the move away from desktop applications.

Empirical evaluation methods focus on evidence of good or poor usability, i.e. the positive or negative effects of attributes of software, hardware, user capabilities and usage environments. User testing is the principal project-focused method. It uses project-specific resources such as test tasks, users, and also measuring instruments to expose usability problems that can arise in use. Also, empirical experiments can be used to demonstrate superior usability arising from user interface components (e.g. text entry on mobile phones) or to optimise tuning parameters (e.g. timings of animations for windows opening and closing).

Such experiments assume that the test tasks, users and contexts allow generalisation to other users, tasks and contexts. Such assumptions are readily broken, e.g. when users are very young or elderly, or have impaired movement or perception.

How To Balance The Mix: Why Is It Difficult?

Achieving a balanced mix of evaluation methods is not straightforward, and involves more than simply combining analytical and empirical methods. This is because there is more to usability work than simply choosing and using methods.

Evaluation methods are as complete as a Chicken Fajita Kit, which contains very little of what is actually needed to make Chicken Fajitas: no chicken, no onion, no peppers, no cooking oil, etc. Similarly, user testing ‘methods’ miss out equally vital ingredients and project-specific resources such as participant recruitment criteria, screening questionnaires, consent forms, test task selection criteria, test (de)briefing scripts, target thresholds, data analysis methods, or reporting formats.

There is no complete published user testing method that novices can pick up and use ‘as is’. All user testing requires extensive project-specific planning and implementation. Much usability work is about configuring and combining methods for project-specific use.


Figure 5. Chicken Fajita Kit. Copyright Old El Paso. All rights reserved.

The Only Methods Are The Ones That You Complete Yourselves

When planning usability work, it is important to recognise that so-called ‘methods’ are simply loose collections of resources better understood as ‘approaches’. There is much work in getting usability work to work, and as with all knowledge-based work, methods cannot be copied from books and applied without a strong understanding of the fundamental underlying concepts. Every method is applied in a unique usage setting and requires project-specific resources. For user testing, for example, these include participant recruitment, test procedures and (de-)briefings. There are no universal measures of usability that are relevant to every software development project.

The Position So Far

  1. There are fundamental differences on the nature of usability, i.e. it is either an inherent property of interactive systems, or an emergent property of usage. There is no single definitive answer to what usability ‘is’.
  2. There are no universal measures of usability and no fixed thresholds above or below which all interactive systems are or are not usable. There are no universal, robust, objective and reliable metrics. All positions here involve hard won expertise, judgement calls, and project-specific resources beyond what all documented evaluation methods provide.
  3. Usability work is too complex and project-specific to admit generalised methods. What are called ‘methods’ are more realistically ‘approaches’ that provide loose sets of resources that need to be adapted and configured on a project by project basis.

Readers could reasonably draw the conclusion from the above that usability is an attractive idea in principle that has limited real-world application. However, the reality is that we all continue to experience frustrations when using interactive digital technologies, and even we, as specialists, find them difficult to use at times. Even so, frustrating user experiences may not be due to some single abstract construct called ‘usability’, but instead be the result of unique complex interactions between people, technology and usage contexts.

Usability In The Design Room

In well-directed design teams, there will not be enough work for a pure usability specialist. This is evidenced by a trend within the last decade of a broadening from usability to user experience expertise. User experience work focuses on both positive and negative value, both during usage and after it. A sole focus on negative aspects of interactive experiences is becoming rarer. Useful measures of usage are extending beyond the mostly cognitive problem measures of 1980s usability to include positive and negative affect, attitudes and values, e.g. fun, trust, and self-affirmation. The coupling between evaluation and design is being improved by user experience specialists with design competences.

Many user experience professionals have also developed specific competences in areas such as brand experience, trust markers, search experience/optimisation, usable security and privacy, game experience, self and identity, and human values. We can see two trends here. The first involves complementing human-centered expertise with strong understandings of specific technologies such as search and security. The second involves a broadening of human-centered expertise to include business competences (e.g. branding) and humanistic psychological approaches (e.g. phenomenology, meaning and value).

As such, a usability person is not a lone judge who makes the call “is this usable?” Instead, the usability proficient person will often be a team player taking on many roles in the product development lifecycle.

Figure 6 A-B. From Solo Specialist to Team Member: User Experience—as opposed to Usability—as an integrated part of design teams. Copyright of leftmost picture: Flickr user ‘Tety’ through the Creative Commons Attribution Share-Alike 2.0 licence

Conclusion: Are You Confused?

This Smashing Magazine article is a digested version of the monstrous 25,000 word encyclopedic introduction to the history of Usability Evaluation at Interaction-Design.org. It’s available in a special version for Smashing readers.

Did this article confuse you more than it informed you? Well, it should! If you want an answer to the question of “what is usability?”, it’s just as complicated as asking “what is beauty?” The more you think about it, the less you feel you know. I don’t believe you will ever find the answer to the question “what is usability?”, but I nevertheless hope you—as I—will continue to ask that very question. And be confused again and again. Confusion is—after all—the mother of wisdom.

The concept of usability is a product of millions of designers trying for decades to describe what they are doing to make technology easier and more pleasant. As such, the concept is immensely complex. It may have started out as a simple concept but the more people who are involved with usability, the more multi-faceted the concept will become. Therefore, the history of usability is a journey from simplicity to complexity. Is that journey worth the effort? Certainly! Anyone who masters usability—and all its facets—has a power position when it comes to designing easy/pleasant/cool/useful/usable/etc. technology. As confusing as that endeavour may be!

(jc) (il)


© Mads Soegaard for Smashing Magazine, 2012.


Smashing Daily #7: Wheels, Print And Bingo


  

Editor’s Note: This post is the seventh in the new Smashing Daily series on Smashing Magazine, where we highlight items to help you stay on top of what’s going on in the Web industry. Vasilis van Gemert will carefully pick the most interesting discussions, tools, techniques and articles that have been published recently and present them in a nice compact overview.

In this edition of The Smashing Daily, you will find an answer to the question of why reinventing the wheel is a good thing, why you should make a mobile-friendly website first, how to make that website, how to serve a print style sheet, and you’ll get an excellent lesson in out-of-the-box thinking and innovation. Doesn’t that sound interesting? There’s much more in here, so we hope you’ll like it!

“Web First for Mobile”
Steve Souders argues that you should first optimize your website for mobile. When you’re done, you can then start thinking about a native app. He came to this conclusion after seeing a few talks at Mobilism. You should read this article.

The web is huge

“In defense of reinventing wheels”
A few weeks ago I needed a little script to enable simple left and right swipe gestures on touch devices. People sent me links to amazing libraries that emulate all the possible gestures out there—but I didn’t want all possible gestures, I just needed one. This is a common issue and Lea Verou explains it in detail. You should definitely read it.

“Content Everywhere with Lyza Gardner”
I have to admit that I haven’t listened to this podcast (to be honest, I never listen to any podcasts, not even my own, it’s just not my thing… I’d rather read), but looking at the Show Notes and Links is enough to link to it. It’s a show by Jen Simmons and Lyza Danger Gardner about a lot of great stuff that’s happening right now with content on the (mobile) Web. You should probably listen to it (and definitely click through the links).

“5 years later: print CSS still sucks”
You might think that linking to a separate print style sheet would be a good idea from a performance point of view. Stoyan Stefanov did a little bit of research and yes, you guessed right, linking to a separate print style sheet is very bad for performance. This doesn’t mean you shouldn’t create a print style sheet (you just shouldn’t link to it separately).

“Mobile First Design: Why It’s Great and Why It Sucks”
More and more designers are starting to look at websites from a mobile-first perspective, and there are many good reasons to do so. Joshua Johnson explores these reasons and also looks at the cons of this approach (which are mostly personal issues, like the fact that he isn’t used to working this way). A good article which clearly shows that we have to rethink our workflows.

Pros and cons of responsive design

“Augmented Paper”
Here’s an interesting article in which Matt Gemmell tries to quantify what constitutes an enticing interface. He looks at iOS, Android and Windows Phone (which he describes as finding yourself living inside an infographic) and looks at different apps on iOS. A very interesting read that explores the many different approaches to interfaces.

Augmented Paper

“First thing you should do to optimize your desktop website for mobile”
Jason Grigsby argues that the most important thing you can do if you want to optimize your website for mobile is to focus on performance first. Once performance is good, you can then start focusing on things like media queries (but only then).

“TTMMHTM: Cardboard arcade, the power of music, Book tank, Book about Ada and inventing on principles”
Do you need more to read? Here’s the ever inspiring Things That Made Me Happy This Morning by Christian Heilmann, with some beautiful, amazing and fantastic links.

Last Click

“Dr. Reggie Watts on innovation and out of the box thinking”
Definitely: Dr. Reggie Watts is the new Emperor of Marketing Bullshit Bingo. In this presentation he proposes some new, revealing insights on innovation and out-of-the-box thinking in a new world of challenges for the organizational production funnel in business value environments of seemingly unrealistic, concrete demands from new, experienced entrepreneurs. Exactly (-:

The Caesar of Bullshit Bingo

Previous issues

You might be interested in the previous issues of Smashing Daily:

(jvb)


© Vasilis van Gemert for Smashing Magazine, 2012.


Replicating MySQL AES Encryption Methods With PHP


  

At our company, we process a lot of requests for some of the leading gift card and coupon websites in the world. The senior developers had a meeting in late October to discuss working on a solution to replicate MySQL’s AES_ENCRYPT and AES_DECRYPT functions in PHP. This article centers on what was produced by senior developer Derek Woods and how to use it in your own applications.

Security should be at the top of every developer’s mind when building an application that could hold sensitive data. We wanted to replicate MySQL’s functions because a lot of our data is already AES-encrypted in our database, and if you’re like us, you probably have that as well.

Why Does Encryption Matter?

We will begin by examining why security and encryption matter at all levels of an application. Every application, no matter how large or small, needs some form of security and encryption. Any user data you have is extremely valuable to potential hackers and should be protected. Basic encryption should be used when your application stores a password or some other form of identifying information.

Different levels of sensitive data require different encryption algorithms. Knowing which level to use can be determined by answering the basic question, “Will I ever need access to the original data after it has been encrypted?” When storing a user’s password, using a heavily salted MD5 hash and storing that in the database is sufficient; then, you would use the same MD5 hashing on the input and compare the result from the database. When storing other sensitive data that will need to be returned to its original input, you would not be able to use a one-way hashing algorithm such as MD5; you would need a two-way encryption scheme to ensure that the original data can be returned after it’s been encrypted.
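As a minimal sketch of the one-way approach described above (this follows the article’s 2012-era advice rather than a modern password scheme, and the salt and sample password here are made up for illustration), the same hashing is applied at signup and at login, and only the hashes are compared:

// Hypothetical salt and password, used only to illustrate the one-way flow.
$salt = 'a_long_random_application_salt';

// At signup: hash the password together with the salt and store only the hash.
$stored_hash = md5($salt . 'correct horse battery staple');

// At login: hash the submitted password the same way and compare the hashes.
$submitted = 'correct horse battery staple';
var_dump(md5($salt . $submitted) === $stored_hash); // bool(true)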

Encryption is only as powerful as the key used to protect it. Imagine that an attacker breaches your firewall and has an exact clone of your database; without the key, breaking the encryption that protects the sensitive data would be nearly impossible. Keeping the key in a safe place should always be priority number one. Many people use an ini file that’s read at runtime and that is not publicly accessible within the scope of the Web server. If your application requires two-way encryption, there are industry standards for protecting such data, one being AES encryption.
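A minimal sketch of the ini-file approach mentioned above might look like the following (the file path, section name and key name are assumptions for illustration, not part of the article):

// secrets.ini lives outside the web root and might contain:
//   [crypto]
//   aes_key = "Ralf_S_Engelschall__trainofthoughts"
$config = parse_ini_file('/etc/myapp/secrets.ini', true); // hypothetical path

if ($config === false || empty($config['crypto']['aes_key'])) {
	die('Encryption key could not be loaded.');
}

define('MY_KEY', $config['crypto']['aes_key']);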

What Is AES Encryption?

AES, which stands for Advanced Encryption Standard, was developed by Joan Daemen and Vincent Rijmen. They named their cipher Rijndael, a play on their two names. AES was announced as the winner of a five-year competition conducted by the National Institute of Standards and Technology (NIST) on 26 November 2001.

AES is a two-way encryption and decryption mechanism that provides a layer of security for sensitive data while still allowing the original data to be retrieved. To do this, it uses an encryption key that is used as a seed in the AES algorithm. As long as the key remains the same, the original data can be decrypted. This is necessary if your sensitive data needs to be returned to its original state.

How Does It Work?

AES uses a complex mathematical algorithm, which we will explore later, to combine two main concepts: confusion and diffusion. Confusion is a process that hides the relationship between the original data and the encrypted result. A classic example of this is the Caesar Cipher, which applies a simple shift of letters so that A becomes C, B becomes D, etc. Diffusion is a process that shifts, adjusts or otherwise alters the data in complex ways. This can be done using bit-shifts, replacements, additions, matrix manipulations and more. A combination of these two methods provides the layer of security that AES needs in order to give us a secure algorithm for our data.

In order to be bi-directional, the confusion and diffusion process is managed by the secret key that we provide to AES. Running the algorithm in one direction with a key and data will output an obfuscated string, thus encrypting the data. By passing that string back into the algorithm with the same key, running the algorithm in reverse will output the original data. Instead of attempting to keep the algorithm secret, the AES algorithm relies on complete key secrecy. According to its cryptographic storage guidelines, OWASP recommends rotating the key every one to three years. The guidelines also go through methods of rotating AES keys.

PHP Mcrypt Specifics

PHP provides AES implementation through the Mcrypt extension, which gives us a number of other ciphers as well. In addition to the algorithm itself, Mcrypt provides multiple modes that alter the security level of the AES algorithm. The two definitions that we will need to use in our PHP functions are MCRYPT_RIJNDAEL_128 and MCRYPT_MODE_ECB. Rijndael 128 is the encryption cipher that we will use for our AES encryption; it operates on 128-bit blocks, which is what MySQL’s AES functions use. Additionally, the code uses “Electronic Code Book” (ECB) mode, which segments the data into blocks and encrypts each block separately. Alternative and more secure modes are available, but MySQL uses ECB by default, so we will be crafting our PHP implementation around that.
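If you want to confirm that your PHP build supports this cipher and mode pair, a quick sketch like the one below can help; the 16-byte block size it reports is also the reason values are padded to a multiple of 16 bytes later on:

// Check that the Mcrypt extension is available before relying on it.
if (!extension_loaded('mcrypt')) {
	die('The Mcrypt extension is not installed.');
}

// Rijndael-128 in ECB mode works on 16-byte blocks.
echo mcrypt_get_block_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_ECB); // 16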

Why Should You Avoid AES In MySQL?

Using MySQL’s AES encryption and decryption functions is very simple. Setting up requires less work, and it is generally far easier to understand. Apart from the obvious benefit of simplicity, there are three main reasons why PHP’s Mcrypt is superior to MySQL’s AES functions:

  1. MySQL needs a database link between the application and database in order for encryption and decryption to occur. This could lead to unnecessary scalability issues and fatal errors if the database has internal failures, thus rendering your application unusable.
  2. PHP can do the same MySQL decryption and encryption with some effort but without a database connection, which improves the application’s speed and efficiency.
  3. MySQL often logs transactions, so if the database’s server has been compromised, then the log file would expose both the encryption key and the original value.

MySQL AES Key Modification

Out of the box, PHP’s Mcrypt functions unfortunately do not provide encrypted strings that match those of MySQL. There are a few reasons for this, but the root cause of the difference is the way that MySQL treats both the key and the input. To understand the causes, we have to delve into MySQL’s default AES encryption algorithm. The code segments presented below have been tested up to the latest version of MySQL, but MySQL could possibly alter its encryption scheme in the future. The first problem we encounter is that MySQL will break up our secret key into 16-byte blocks and XOR the characters together from left to right. XOR is an exclusive disjunction: if the two bits are the same, then the output is 0; if they are different, then the output is 1. Additionally, MySQL begins with a 16-byte block of null characters, so if we pass in a key of fewer than 16 bytes, then the rest of the key will contain null characters.

Say we have a 34-character key: 123456789abcdefghijklmnopqrstuvwxy. MySQL will start with the first 16 characters.

new_key = 123456789abcdefg

The second step is to XOR (⊕) the next 16 characters of the key with the value in new_key.

new_key = new_key ^ hijklmnopqrstuvw
new_key = 123456789abcdefg ^ hijklmnopqrstuvw
new_key = Y[Y_Y[YWI   (followed by seven non-printable 0x10 bytes that do not display here)

Finally, the two remaining characters will be XOR’d starting from the left.

new_key = new_key ^ xy
new_key =  Y[Y_Y[YWI ^ xy
new_key = !"Y_Y[YWI

This is, of course, drastically different from our original key, so if this process does not take place, then the return value from the decrypt/encrypt functions will be incorrect.
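If you want to verify the hand calculation above, here is a minimal sketch (my own reproduction, not code from the article) that folds the same 34-character key into 16 bytes exactly as described; printing the result as hex makes the seven non-printable 0x10 bytes visible:

// Fold a key longer than 16 bytes the way MySQL does: start with 16 null
// bytes and XOR each key character into the buffer from left to right.
$key = '123456789abcdefghijklmnopqrstuvwxy';
$new_key = str_repeat(chr(0), 16);

for ($i = 0, $len = strlen($key); $i < $len; $i++) {
	$new_key[$i % 16] = $new_key[$i % 16] ^ $key[$i];
}

// Prints 2122595f595b5957491010101010101010, i.e. !"Y_Y[YWI followed by
// seven bytes of 0x10.
echo bin2hex($new_key);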

Value Padding

The second major difference between Mcrypt PHP and MySQL is that the value passed to Mcrypt must be padded so that its length is a multiple of 16 bytes. There are numerous ways to do this, but the standard is to pad the value with the byte value equal to the number of bytes left over. So, if our value is 34 characters, then we would pad with a byte value of 14.

To calculate this, we use the following formula:

$pad_value = 16-(strlen($value) % 16);

Using our 34-character example, we would end up with this:

$pad_value = 16-(strlen("123456789abcdefghijklmnopqrstuvwxy") % 16);
$pad_value = 16-(34 % 16);
$pad_value = 16-(2);
$pad_value = 14;
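A short sketch of the padding step itself (using the same 34-character example) shows the value growing to 48 bytes, the next multiple of 16; this is exactly what the aes_encrypt function below does before calling Mcrypt:

$value = '123456789abcdefghijklmnopqrstuvwxy'; // 34 bytes
$pad_value = 16 - (strlen($value) % 16);        // 14

// Append 14 bytes of chr(14) so the total length becomes a multiple of 16.
$padded = str_pad($value, strlen($value) + $pad_value, chr($pad_value));

echo strlen($padded); // 48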

MySQL Key Function

In the previous section, we dove into the issues surrounding the MySQL key used to encrypt and decrypt. Below is the function that’s used in both our encryption and decryption functions.

function mysql_aes_key($key)
{
	$new_key = str_repeat(chr(0), 16);
	for($i=0,$len=strlen($key);$i<$len;$i++)
	{
		$new_key[$i%16] = $new_key[$i%16] ^ $key[$i];
	}
	return $new_key;
}

First, we instantiate our key value with 16 null characters. Then we iterate through each character in the key and XOR it with the current position in new_key. Since we’re moving from left to right and using modulus 16, we will always be XOR’ing the correct characters together. This function changes our secret key to the MySQL standard and is the first step in achieving interoperability between PHP and MySQL.

Value Transformation

We run into the last caveat when we pull the data from MySQL. For data encrypted with AES, the padding that we added before encryption remains tacked onto the end after we decrypt. In general, this would go unnoticed if we were only fetching the data in order to display it; but if we use any of the basic string functions on the decrypted data, such as strlen, then the results will be incorrect. There are a couple of ways to handle this; we will remove all characters with a byte value of 0 through 16 from the right of our value, since they are the only characters used in our padding algorithm.

The code below will handle the transformation.

$decrypted_value = rtrim($decrypted_value, "\x00..\x10"); // strip trailing padding bytes (values 0 through 16)
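To see why this trim matters, here is a small sketch (my own example, not from the article): pad a short value the way aes_encrypt does, then compare strlen() before and after the padding bytes are stripped:

// 'test' is 4 bytes, so it is padded with 12 bytes of chr(12) to reach 16.
$padded = str_pad('test', 16, chr(12));

echo strlen($padded);                        // 16
echo strlen(rtrim($padded, "\x00..\x10"));   // 4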

Throwing all the concepts together, we have a few main points:

  1. We have to transform our AES key to MySQL’s standards;
  2. We have to pad the value that we want to encrypt if it’s not a multiple of 16 bytes in size;
  3. We have to strip off the padded values after we decrypt the encrypted value from MySQL.

It’s advantageous to segment these concepts into components that can be reused throughout the project and in other areas that use AES encryption. The following two functions are the end result and perform the encryption and decryption before sending the data to MySQL for storage.

MySQL AES Encryption

function aes_encrypt($val)
{
	$key = mysql_aes_key('Ralf_S_Engelschall__trainofthoughts');
	$pad_value = 16-(strlen($val) % 16);
	$val = str_pad($val, (16*(floor(strlen($val) / 16)+1)), chr($pad_value));
	return mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $val, MCRYPT_MODE_ECB, mcrypt_create_iv( mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_ECB), MCRYPT_DEV_URANDOM));
}

The first line is where we get the MySQL encoded key using the previously defined mysql_aes_key function. We are not using our real key in this article, but are instead paying homage to one of the creators of OpenSSL (among other technologies that he’s been involved in). The next line determines the character value with which to pad our data. The last two lines perform the actual padding of our data and call the mcrypt_encrypt function with the appropriate key and value. The return of this function will be the encrypted value that can be sent to MySQL for storage — or used anywhere else that requires encrypted data.

MySQL AES Decryption

function aes_decrypt($val)
{
	$key = mysql_aes_key('Ralf_S_Engelschall__trainofthoughts');
	$val = mcrypt_decrypt(MCRYPT_RIJNDAEL_128, $key, $val, MCRYPT_MODE_ECB, mcrypt_create_iv( mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_ECB), MCRYPT_DEV_URANDOM));
	return rtrim($val, "\x00..\x10"); // strip the padding bytes added before encryption
}

The first line of the decrypt function again generates the MySQL version of our secret key using mysql_aes_key. We then pass that key, along with the encrypted data, to the mcrypt_decrypt function. The final line returns the original data after stripping away any padded characters that we might have used in the encryption process.

See It In Action

To show that the encryption and decryption schemes here do in fact work, we must exercise both encryption and decryption functions in PHP and MySQL and compare the results. In this example, we have integrated the aes_encrypt/aes_decrypt and key function into a CakePHP model, and we are using Cake to run the database queries for MySQL. You can replace the CakePHP functions with mysql_query to obtain the results outside of Cake. In the first group, we are encoding the same data with the same key in both PHP and MySQL. We then base64_encode the result and print the data. The second group runs the MySQL encrypted data through PHP decrypt, and vice versa for the PHP encrypted data. We’re also outputting the result. The final block guarantees that the inputs and outputs are identical.

define('MY_KEY','Ralf_S_Engelschall__trainofthoughts');

// Group 1
$a = $this->User->aes_encrypt('test');
echo base64_encode($a)."\n";

$result = $this->User->query("SELECT AES_ENCRYPT('test', '".MY_KEY."') AS enc");
$b = $result[0][0]['enc'];
echo base64_encode($b)."\n";

// Group 2
$result = $this->User->query("SELECT AES_DECRYPT('".$a."', '".MY_KEY."') AS decc");
$c = $result[0][0]['decc'];
echo $c."\n";

$d = $this->User->aes_decrypt($b);
echo $d."\n";

// Comparison
var_dump($a === $b);
var_dump($c === $d);

Output

The snippet below is the output when you run the PHP commands listed above.

L8534Dj1sH6IRFrUXXBkkA==
L8534Dj1sH6IRFrUXXBkkA==
test
test
bool(true) bool(true)

Final Thoughts

AES encryption can be a bit painful if you are not familiar with the specifics of the algorithm or familiar with any differences between implementations that you might have to work with in your various libraries and software packages. As you can see, it is indeed possible to use native PHP functions to handle encryption and decryption and to actually make it work seamlessly with any legacy MySQL-only encrypted data that your application might have. These methods should be used regardless so that you can use MySQL to decrypt data if a scenario arises in which that is the only option.

(al)


© Chad Smith & Derek Woods for Smashing Magazine, 2012.


Publication Standards Part 2: A Standard Future


This is the second part in a two-part essay about digital publishing. You can read part 1, “The Fragmented Present,” here.

It’s never been a better time to be a reader. We’re partly defined by the things we read, so it’s good to have an embarrassment of enriching, insightful writing on our hands. We hold hundreds of books in lightweight, portable e-readers. We talk to authors over Facebook and Twitter. Thousands of public domain works are available for free, and blogs afford us a staggering array of high-quality writing. Long-form journalism and analysis is experiencing a minor renaissance, and we’re finding new ways to discuss the things we read.

It’s never been a better time to be a writer. Anybody can publish their thoughts. Anybody can write a book and publish it on demand. Authors can reach out to readers, and enriching, fulfilling conversations can blossom around the connections we develop out of the things we make.

But we have considerable work ahead. Our ebook reading and creation tools are primitive, nascent, born of necessity, and driven by fear. We have one-click ePub to Kindle conversion, but it’s buried in a clumsy, bloated, cross-platform application that screams for improvement. We have page layout software, but it saves natively in a proprietary format, and it exports ePub files almost entirely as a set of <span> tags, rather than proper, semantic HTML. (Think <span class="header"> instead of <h1>.) ePub may be saved as a zip file, but Mac OS X’s default zip archiver doesn’t handle ePub’s mimetype correctly, requiring a separate application. And there is still, as of this writing, no native reader for Mac OS X that’s up to both iBooks’ design standards and ePub’s native spec. (When creating ePub, my workflow involves uploading a new ePub to Dropbox, opening the Dropbox app on my iPad, and sending the ePub to iBooks—every time I want to view a change.) There is so much work yet to be done to make publishing easier. Farming out ePub development—overwhelmingly the current accepted solution—isn’t the answer.

Does this madness look familiar?

Even if writers know HTML, they face many more hurdles. Writing now generates less income for people, but it costs the same to produce. The publishing landscape of 2012 looks similar to the music landscape of 1998, crossed with the web designs of 1996: it’s encumbered by DRM and proprietary formats, it treats customers as criminals, it’s fragmented across platforms, and it’s hostile to authors who want to distribute their work through independent channels. Libraries are almost ignored wholesale with every new development around DRM and pricing. Publishers take DRM on faith, in the face of considerable evidence that DRM hurts both readers and sales.

At the beginning of the aughts, major record labels weren’t behaving any differently than publishers are now. And for almost a decade, one browser maker held back technical progress in web development by not fully, reliably supporting web standards: this is no different than the Kindle entirely ignoring the recommendations of the International Digital Publishing Forum, despite Amazon being a paying member.

In 1997, the Web Standards Project was founded to encourage browser makers and web developers to embrace open standards. We need a similar advocacy organization for publishers, e-reader manufacturers, and readers. We need a Web Standards Project for electronic publishing.

Self-publishing and its discontents

Before the web, self-publishers bankrolled their own operations. Edward Tufte took out a second mortgage on his house to self-publish The Visual Display of Quantitative Information in 1982, because no printer could meet his quality standards. But even after getting the copies made, there was no way for self-publishers like Tufte to effectively market their work. You could take out an advertisement in a newspaper or magazine, or you could call bookstores and see if they might be interested, but in 1982, you didn’t have the luxury of a website or email account.

These days, anybody can publish their own work. You can go through a print-on-demand service like Lulu. You can connect with readers via Kickstarter to handle upfront printing costs. You can set up a website and sell copies of your work through Fetch, Shopify, and Digital Delivery. You can print postage with Endicia. You can sell in person with Square. You can establish subscriptions with Memberly.

In large part, we—those who help craft the web—have embraced such a model. The Manual is a beautifully crafted independent journal of the “whys” around design. 8faces is a semi-annual typography magazine. Five Simple Steps covers all manner of design techniques. I run a quarterly journal for long essays called Distance, and I set up the whole thing to function over the internet. Finally, partly because of this very site, A Book Apart connects with like-minded readers. And more publications come by the day. Codex. Bracket. Kern & Burn. As web designers learn the details of print production, there will only be more publications like this. (I can’t speak for others, but I find a tremendous amount of pride in making physical goods, after so many days of crafting intangible things for the web.)

Many self-published projects receive less marketing and advertising, but they foster a greater intimacy with audiences, provide better customer service, and require more self-promotion. Self-publishers hustle.

And while I’ve focused mostly on design-related projects, the notion of connecting with readers over the internet is genre-agnostic. Anybody can do this—and writers are becoming empowered to take publishing into their own hands.

How must publishers evolve?

There is still a role for publishers, though, if they adapt to the new landscape. What are the problems and trade-offs? Ebook cover design would probably change, for instance, now that it doesn’t need to stand out on a bookstore shelf. And while editorial increases in importance, frequently differentiating quality writing from stuff that’s written alone, marketing would change substantially. An ebook author probably doesn’t need to do a book tour. Print and display ads will be less frequent. Mailing lists will be more frequent. Publishers that understand the trade-offs and shifts in their work will be able to nimbly respond to the internet before the internet does their work for them.

Where are the standards?

When it comes to ebooks, we’ve abandoned the standards we claim to embrace. In numerous conversations that I had while researching this article, many self-publishers said that it simply isn’t worth publishing their work in ePub—and the only people who were excited about ePub hadn’t tried to publish in ePub yet. It doesn’t have the same reach as other formats, and its features are implemented piecemeal, meaning it’s hard to ensure a consistent typographic standard from device to device.

Often, publishers start by producing a PDF in a tool like InDesign, and there simply isn’t an effective way to translate PDF layout and typography into HTML for ePub. As editor Allen Tan told me, “our workflow is pretty digital, it’s just that our output isn’t digital.” Lack of typeface support and robust layout tools are major pain points, and often publishers simply export their proof PDF and call it the digital edition. When people do make ePub files, they usually farm the task out, saying it’s too painful to create in-house. These are hacks, and they indicate a deeper problem.

But as web workers, we’re used to responding to such concerns better than other industries, and we’re uniquely equipped to discuss publishing issues from an outsider’s perspective. The web is typography; books are typography; ePub, the prevailing standard in books, is HTML, CSS, and XML, saved as a Zip file. Allen Tan added: “the advantage of having ePub as a standard is that any improvements with ePub can be pulled back into the web, because they use a common base.” We can provide deep, meaningful, constructive change in both ePub and the publishing industry if we apply what we’ve learned in our struggles with HTML and CSS.

So what have we learned?

Standards and disruption

In 1997, competition between Netscape and Internet Explorer drove a handful of trailblazing web workers to found the Web Standards Project, which called for browsers to adopt the open standards of HTML and CSS. These days, we debate the fragmentation of the landscape, calling for cross-platform solutions and expressing worry when browser makers independently develop their own capabilities.

Meanwhile, the IDPF—essentially the W3C equivalent for books—has developed and released the ePub specification. But tools to create ePub efficiently haven’t kept up, there’s no way to semantically develop a book in page layout software, the largest e-reader company doesn’t follow the ePub spec at all, and no e-reader on the market fully supports the latest published spec, ePub 3.0. The IDPF has moved out of sync with the realities of the e-reading market—not unlike when the W3C released XHTML, which was out of step with the realities of the browser market. Publishers have taken to painstakingly developing digital bundles with many different formats, one for each potential e-reader—not unlike when websites were “best viewed in Netscape 4.0 at 800x600 resolution.” What did we learn fourteen years ago, and why are we letting this happen again?

Because the largest publishers make us. Even though DRM has been proven, time and time again, to be hostile to both consumer and publisher, almost all e-books sold today are encumbered by it. But media distribution usually runs this course: DRM is enforced out of fear, and in the face of flagging sales and rampant piracy, the industry moves toward open standards. On the iTunes Store, FairPlay gave way to unencumbered MP3s. With movies, DivX died; DRM-cracked DVDs flourished. Creators will only gain control of their industry if they stand up for themselves.

Likewise with web standards, and now publishing standards: we can only kill off Amazon’s DRM if we become fierce advocates for open standards and vote with our wallets until things are made right. And proprietary tweaks to ePub, like Apple’s iBooks Author spec, are not unlike the approach that WebKit takes to its own proprietary -webkit styles; such an approach can be refined and reformed if we approach it with the same perspective. As the W3C had WaSP in the late nineties and early aughts, the IDPF needs its own advocacy watchdog, too. We’re used to the web disrupting many industries, and it’s time to embrace the turbulence around publishing, for better and worse. The only alternative is to abolish the internet, and we’d rather not see that happen.

The Publication Standards Project

That’s why, along with this issue of A List Apart, and with the support of many great people, I’ve launched the Publication Standards Project, which has the following long-term goals:

  • Fully featured, native support of the most modern ePub standard in all ebook reader software. You can support your own proprietary format in tandem, but if your reader does not fully support ePub 3.0, we will continue to advocate for you to do so.
  • Support of the most modern ePub standard in creation tools. Same as above: your book-making software should write semantically correct markup, even if it also exports to other publishing formats.
  • Improving the existing ePub standard. It is not perfect, and it needs to be improved. In the long run this might result in a fork of the specification—essentially a WHATWG equivalent—but for now we’ll begin working with the IDPF.
  • Page layout software that exports semantically correct, standards-compliant HTML and CSS code. Software developers, take heed: after speaking with many publishers and independent writers, I've concluded that this market is wide open right now. If you build a better mousetrap, it will do very well by you.
  • Abolishing DRM in all published writing. DRM has provably aided piracy, and it works against the customer by assuming they’re a thief. Removing DRM, on the other hand, has been proven to increase sales in many situations.
  • An end to gatekeeper standards. As power consolidates in the hands of a few booksellers, they have a decreasing motivation to accept radical viewpoints or contentious, “banned” books. Rejecting a book because it contains a third-party link may fulfill the letter of Apple’s law, but it violates the spirit of open access, sharing, and healthy competition—and it could arguably be interpreted as an act of censorship.
  • Simpler, more humane library lending policies. We desperately need libraries to support under-served communities without pervasive broadband. Refusal to simplify pricing models, and refusal to inter-operate among e-readers and lending systems, means that libraries will simply opt out of ebook adoption entirely—something they can’t afford to do if they’re going to stay relevant in the future.

At least in the short term, we’ll accomplish it in these ways:

  • Education. Many people don’t know everything about the issues, and how they parallel our prior technological progress in other areas. Publishers don’t understand the new mindset that readers are in, readers don’t understand why publishers won’t join the 21st century, writers don’t understand why readers won’t pay anymore, and writers don’t want publishers to have full editorial control. Very few people have a clear sense of all the competing publishing formats and why such fragmentation is a bad thing. And we still don’t have the right tools to build the best writing that we can, share it with others, and constructively discuss it. At the Publication Standards Project, we’re ridiculously passionate about these issues, and we’d love you to join the conversation.
  • Outreach. Part of the Publication Standards Project is a call to action: sign on to our goals as a reader, writer, and publisher, and resolve to work to improve the way that we communicate with one another. Another part is lobbying: we need to collectively advocate for e-reader manufacturers, publishing software developers, booksellers, and publishers to adopt better practices in the way they work. Everyone who reads this article is capable of action, and nobody else will stand up in your place.
  • Your ideas. We know we haven’t thought of everything, and the best thing you can do to help is to volunteer. Get in touch with us and tell us about your vision for this; it cannot happen in a vacuum, and it cannot come from a handful of people. We will adapt to new developments in the publishing landscape, and there’s no way to tell exactly how things will play out.

Concluding thoughts

We discussed ePub’s promise in 2010, but we’ve only regressed since then. Our prospects look dim these days, but this is an opportunity for all of us to assert control over the way these standards are adopted. Right now, the landscape is dismal. But we solved these sorts of problems once. We can do it again. You can help. We set up a site at http://pubstandards.org. Our Twitter name is @pubstn. We’d like you to sign up for our mailing list so we can begin to take action with your help.

The internet was built on the foundation of free and open access to information, and it’s time for us to act like it. This isn’t going to happen by going to the companies and reforming them from the inside. It’s going to happen by building a new movement that can reform the older model. If we’re going to follow standards and openness in the things we publish, it starts with every single one of us, on every side of the table. Otherwise, we may end up with the walled gardens that we deserve. 



Publication Standards Part 1: The Fragmented Present

Ebooks are a new frontier, but they look a lot like the old web frontier, with HTML, CSS, and XML underpinning the main ebook standard, ePub. Yet there are key distinctions between ebook publishing’s current problems and what the web standards movement faced. The web was founded without an intent to disrupt any particular industry; it had no precedent, no analogy. E-reading antagonizes a large, powerful industry that’s scared of what this new way of reading brings—and they’re either actively fighting open standards or simply ignoring them. In part one of a two-part series in this issue, Nick Disabato examines the explosion in reading, explores how content is freeing itself from context, and mines the broken ebook landscape in search of business logic and a way out of the present mess.
