Archive for September, 2012

Your Photos Retro Style with HTML5 Canvas and Vintage.js


  

HTML5, JavaScript and Canvas are gaining momentum when it comes to image processing. It’s only been a few days since we introduced you to the jQuery plugin tiltShift.js, which, as its name suggests, applies the popular miniaturization effect to whatever image you fire it at. Today we’ll show you vintage.js. This jQuery plugin, which is also available as a web service, lets your images look as if they were taken ages ago. And it doesn’t just do that well; it does a fantastic job.

Vintage.js: Webservice plus Gallery

German developer Robert Fleischmann offers an extremely flexible way to expose photos to virtual digital aging. Lots of parameters help you fine-tune the results. If you’re thinking about adding the plugin to your own website but are unsure about its capabilities, you should take a look at the project’s website. The same goes if you’re just searching for a quick and easy way to retro-style your photos.

Vintagejs.com provides a web app where you can upload your own pictures, manipulate them via simple value sliders and even expose them to the public eye in an ever-growing gallery. This last aspect is entirely optional. The following image shows a possible result vintage.js is able to achieve:

Before applying heavy filtering the image looked like this:

Check out the online service. You’ll notice that the manipulation is comfortable and that you’re working in an (almost) WYSIWYG environment. Looking at the user interface, you’ll feel at home quite quickly. You don’t need any academic background to use it:

After you’re finished applying effects to a photo, you can download it and/or save it publicly in the online gallery. You’ll find thousands of pages of photos there. The quality, of course, varies, as is usual with user-generated content…

Vintage.js: Plugin for your own Website

After you’ve convinced yourself that vintage.js is powerful enough for your needs, let’s take a look at the technical implementation of the plugin. Vintage.js requires jQuery. Furthermore, you’ll have to embed the plugin and the accompanying CSS stylesheet:

<script src="src/jquery.js"></script>
<script src="src/vintage.js"></script>
<link rel="stylesheet" type="text/css" href="css/vintagejs.css" media="all" />

Vintage.js is called only once, which makes it globally available and easy to maintain. If you want to invoke it on an image, add the class “vintage” to it. That way, the JavaScript identifies where to get active. The most basic function call looks like this:

<script>
$(function () {
    $('img.vintage').click(function () {
        $(this).vintage();
    });
});
</script>

There are three different presets to choose from. You’ll easily guess what they achieve, as they are called sepia, green and grayscale. If you’re into more flexibility, or are even a full-blown control freak, vintage.js is for you, too. Instead of working with a preset, you pass custom parameters to the function call.
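Under the hood, presets like these boil down to per-pixel color math on the canvas. As a rough illustration (these are the standard luminance and sepia formulas, not vintage.js’ actual source), the grayscale and sepia transforms can be sketched in plain JavaScript:

```javascript
// Per-pixel sketches of typical "grayscale" and "sepia" presets.
// Standard formulas; the plugin's own implementation may differ.
function grayscale(r, g, b) {
  // Rec. 601 luma weights
  const y = Math.round(0.299 * r + 0.587 * g + 0.114 * b);
  return [y, y, y];
}

function sepia(r, g, b) {
  // Widely used sepia matrix, clamped to the 0-255 channel range
  const clamp = v => Math.min(255, Math.round(v));
  return [
    clamp(0.393 * r + 0.769 * g + 0.189 * b),
    clamp(0.349 * r + 0.686 * g + 0.168 * b),
    clamp(0.272 * r + 0.534 * g + 0.131 * b),
  ];
}

console.log(grayscale(255, 0, 0)); // [76, 76, 76]
console.log(sepia(100, 100, 100)); // [135, 120, 94]
```

In the browser, such a function would be applied to every pixel returned by the canvas context’s `getImageData()`.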

Using this variant, you’ll be reminded of the possibilities of the online service. While you can change all the parameters that alter the imagery, you are also able to change the file format. A callback function allows you to decide which further steps should be invoked after execution of the main task. A function call with a reasonable amount of parametrization could look like this:

<script>
$(function () {
    $('img.vintage').click(function () {
        $(this).vintage({
            vignette: {
                black: 0.8,
                white: 0.2
            },
            noise: 20,
            screen: {
                red: 12,
                green: 75,
                blue: 153,
                strength: 0.3
            },
            desaturate: 0.05
        });
    });
});
</script>
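To get a feel for what these options do, here is a rough sketch of the standard compositing math behind a screen overlay and a desaturation step. The formulas are the usual ones; the plugin’s internals may differ, and the parameter names simply mirror the call above:

```javascript
// "screen" blends a color layer over each channel; "strength" mixes
// the blended result back with the original (0 = off, 1 = full).
function screenChannel(channel, overlay, strength) {
  const screened = 255 - ((255 - channel) * (255 - overlay)) / 255;
  return Math.round(channel + (screened - channel) * strength);
}

// "desaturate" pulls each channel toward the pixel's average,
// where 0 leaves the color untouched and 1 yields pure gray.
function desaturate(r, g, b, amount) {
  const avg = (r + g + b) / 3;
  return [r, g, b].map(c => Math.round(c + (avg - c) * amount));
}

console.log(screenChannel(100, 153, 0.3)); // 128
console.log(desaturate(200, 100, 0, 1));   // [100, 100, 100]
```

With `desaturate: 0.05` as in the call above, colors shift only five percent toward gray, a subtle fading effect.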

Vintage.js works in all modern browsers as well as Internet Explorer 9; support for the canvas element is crucial. Developer Robert Fleischmann provides the plugin without charging for anything. Both the plugin and the online service can be used totally free, for personal and commercial projects alike, as it is dual-licensed under the MIT and GPL.

Making A Better Internet


  

My relationship with the Internet oscillates between waves of euphoria and waves of angst. Some things make me extraordinarily happy: like a client who loves usability testing so much when they first experience it that they can’t sleep for days; or connecting with someone whose writing I’ve admired for many years.

But other things make me want to close my computer forever and go live on a farm somewhere: like people who take entire articles and present them as their own work, with tiny source links at the bottom of the page; or endless arguments and name-calling that ignore even the most basic human dignity.

We are capable of such great things, yet we somehow can’t resist the temptation to tear others apart. There is, perhaps, no better depiction of the current state of the Internet than xkcd’s “Duty Calls”:

Duty Calls
Duty Calls by Randall Munroe.

In this essay, I’ll weave together a story about the current state of Internet discourse. At the end, I’ll tell you how I think we can make it better. And then, we’ll most likely all go back to what we were doing and forget about it. Despite the probable futility of this exercise, I’ll carry it out anyway, because I love the Web and I really don’t want us to destroy it.

Act 1: The Lap Dancer

Paul Ford’s “The Web Is a Customer Service Medium” is one of the most important essays of our time. Towards the end, he explains how, in April 2010, the Daily Mail “reported” that “computer tycoon Sir Clive Sinclair, 69, has secretly married his lap dancer fiancee Angie Bowness, who is 36 years his junior.” The appropriate response to this type of story is an overwhelming “Who cares?”, but that’s obviously not what happened. A lot of blogs wrote about it, and the comment sections are sights to behold. Below one of the accounts, a commenter posted the following image in response:

It’s at this point that we need to pick up Paul’s essay for his response:

Consider what that cartoon means in that context: It implies that the commenter feels — with some irony and self-awareness, I’m sure — that his opinion, in some way, is relevant to the question of whether Clive Sinclair should marry a particular woman. This is, for many obvious reasons, completely insane. And yet there was an image already sketched and available to that commenter so that he could express this exact sentiment of choosing not to be outraged at a situation he read about on the Internet.

Paul has a phrase for this, a phrase that has shaped my view of the Internet ever since I first read it. He calls this phenomenon “Why Wasn’t I Consulted?” or WWIC. It’s the fuel that powers the Internet — the insatiable desire to be heard, to make your opinion known, to be understood. It’s the new scribbling of “[X] was here” on tree trunks. We read, we share, we “curate,” we post pithy statements and ask people to “Like if you agree!!1!” Lest we spend a day not being noticed.

Act 2: The Bottom Half

On 6 August 2012, South African news website IOL posted an article titled “How ANCYL Plans to Shut Down Cape.” Things got a little out of hand, as they usually do on news websites, until the editors deleted all comments and posted the following notice: “IOL has closed comments on this story due to the high volume of racist and/or derogatory comments.” A friend noted that they should probably just hardcode that sentence into the footer.

A search for the origin of the well-known phrase “Never read the bottom half of the Internet” led me to Sophie Heawood, who told me that she first used it in an article for The Independent a couple of years ago. The first online reference we could find is in her article “Save Dappy From the Venom of the Anonymous”:

What surprises me the most about the bottom half of the internet, that place where all the angry comments go, is that so many of the people writing them turn out not to be rabid murderers but ordinary mild people who casually fire off drive-by verbal shootings in their lunch breaks.

A friend of Sophie’s and fellow journalist for The Independent, Grace Dent, told me that she often quotes the phrase as follows: “Never read the bottom half of the Internet; it’s where the sediment lies.”

If you’ve spent any time reading comments on YouTube, you’ll most likely agree with this — in theory. Unfortunately, our need to be consulted about everything is a perfect match for our morbid fascination with what others are doing with their need to be consulted. It’s a vicious, self-feeding downward cycle. I often wish that we could adopt this rule from Thomas More’s Utopia:

There’s a rule in the Council that no resolution can be debated on the day that it’s first proposed. Otherwise someone’s liable to say the first thing that comes into his head, and then start thinking up arguments to justify what he has said, instead of trying to decide what’s best for the community.

Instead, many of us simply turn off comments on our own websites. It’s not a great solution, but it’s better than losing sleep over personal attacks and a general sense of meanness towards something you’ve spent a lot of time creating.

Act 3: Turtles, Turtles, Everywhere

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: “What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.” The scientist gave a superior smile before replying, “What is the tortoise standing on?” “You’re very clever, young man, very clever,” said the old lady. “But it’s turtles all the way down!”

This story, as related by Stephen Hawking in A Brief History of Time, is well known and has entered popular Internet culture in many different ways, from Dr. Seuss, to Stephen King’s Dark Tower, and even as an achievement in World of Warcraft. But it’s Frank Chimero who brought this story to the design world in his essay for The Manual titled “The Space Between You and Me”:

Frank goes on to explain the irony of social media. Social networking websites were created to connect us to each other, and yet they reduce us to a two-dimensional avatar, a short bio and a list of books and movies we like. We’re so quick to throw around the word “empathy” as being essential to the work we do, and yet we know frighteningly little not just about the people who use our products, but even about the people who we think we have close relationships with online.

Based on its almost 10 million page views, I’m pretty sure everyone has seen this photograph of a man giving his shoes to a homeless girl in Rio de Janeiro, from BuzzFeed’s “21 Pictures That Will Restore Your Faith in Humanity”:

That’s empathy — a quite literal interpretation of Atticus’s reminder in To Kill a Mockingbird that we need to walk in someone else’s shoes before we judge them. It’s people all the way down, and social media websites are making us forget this by abstracting a person’s “brand� from who they really are.

So, How Do We Make A Better Internet?

In a fit of uncharacteristic optimism, I’d like to propose three ways in which we could make a better Internet. I need to do this before the feeling passes, so let’s get to it.

I know it sounds strange, but not saying something every once in a while is OK. In what some are calling the best social media policy ever written, Benjamin Franklin once said:

Remember not only to say the right thing in the right place, but far more difficult still, to leave unsaid the wrong thing at the tempting moment.

It’s tough, but it can be done. The other day, as I stopped at a red traffic light, one of South Africa’s characteristically dangerous taxi drivers came up from behind and swerved around me so that they could run the red light. I was infuriated, but at that point I’d already started thinking about this essay, so I decided not to tweet about it. And then I instantly wanted someone to give me a high five for my remarkable show of restraint. How insane is that? Sometimes, it’s WWIC all the way down as well.

We must resist the temptation to feel entitled to be consulted on everything that happens around us.

A few weeks ago, the Mars Rover made a perfect landing, and at least for a few minutes, the Internet rejoiced with tweets like these:

We quickly went back to complaining about other people’s jokes and reactions to the event. But wow! — for a while there, I saw how awesome, encouraging and funny we can be when we pull together to amplify the good things around us.

We don’t have to link to hate speech and angry rants. The best way to stop that behavior is to send traffic elsewhere. We also don’t have to go trolling every time we need a little excitement in our lives. Instead, make and share good things. Be nice. If someone does something good, help them spread the word about it.

I’m pretty sure everyone’s read Jack Cheng’s “The Slow Web� by now, in which he sums up the problem with the environment that we’ve created:

What is the Fast Web? It’s the out of control web. The oh my god there’s so much stuff and I can’t possibly keep up web. It’s the spend two dozen times a day checking web. The in one end out the other web. The web designed to appeal to the basest of our intellectual palettes, the salt, sugar and fat of online content web. […] The Fast Web is a cruel wonderland of shiny shiny things.

Contrast that with what Patrick Rhone says in his essay “Twalden�:

The things I want to know are “happening” — like good news about a friend’s success, or bad news about their relationship, or even just the fact they are eating a sandwich and the conversation around such — I wish to have at length and without distraction. Such conversations remain best when done directly, and there are plenty of existing and better communication methods for that.

Having conversations “at length and without distraction” — what a novel concept.

But let’s bring this full circle, all the way back to Paul Ford. In the closing keynote at the 2012 MFA Interaction Design Festival, he said the following:

If we are going to ask people, in the form of our products, in the form of the things we make, to spend their heartbeats on us, on our ideas, how can we be sure, far more sure than we are now, that they spend those heartbeats wisely?

I’m not saying we should shun the Fast Web and all make Instapaper clones. The Fast Web has its place. I’m also not saying we should quit Twitter. But I do, with all my heart, believe that we — designers and developers — are the ones who are responsible for making a better Internet. And that means we are responsible for how other people spend their time.

We can either take it easy and play to WWIC and bottom-half-of-the-Internet culture, or we can do it the hard way and think carefully about the meaning of the things we make and share. We can choose to ignore our darker tendencies and instead take responsibility for our users and how we ask them to spend their heartbeats. We can shift the flow of traffic away from the bottom half, all the way to the top.

(al)


© Rian van der Merwe for Smashing Magazine, 2012.


Guidelines For Designing With Audio // UX Enhancement


  

As we’ve seen, audio is used as a feedback mechanism when users interact with many of their everyday devices, such as mobile phones, cars, toys and robots. There are many subtleties to designing with audio in order to create useful, non-intrusive experiences. Here, we’ll explore some guidelines and principles to consider when designing with audio.

While I won’t cover this here, audio is a powerful tool for designing experiences for accessibility, and many of the guidelines discussed here apply. Both Android phones and iPhones already have accessibility options that enable richer experiences with gestural and audio input and audio output.

First, who designs audio? Certainly, the audio producers and game designers who bring gaming to life. There’s also the world of voice user interface designers — those who design interactive voice response telephone systems for banks, airlines, etc. Then there are mobile, toy and interaction designers who have some of this expertise or who work closely with audio engineers and producers to create the right experience for their devices.

If audio might play a part in your design, here are some considerations to make once you have determined that the user’s device has a speaker and can play audio, and is either network-connected or has enough memory to store audio on the device.

Audio Design Guidelines

Choose the Right Type of Audio

Audio can be non-verbal sounds, sometimes called “earcons,” or can be words, sometimes called prompts, and choosing the right type is important. Meaning can be embedded in an earcon in such a way that a short non-intrusive sound can represent something much larger. Think of the sound that confirms that a text message has been sent on an iPhone: the sound effectively represents the action by suggesting motion and movement away from the user. Another example is the parking-assist system in a car; the intensity and pitch of sounds create a sense of urgency to let the driver know their distance from the nearest car.

Embedding meaning in a single sound allows for quick and efficient feedback; sounds are shorter than verbal prompts and can be less intrusive. The AOL email notification “You’ve got mail” is a great example of the opposite — an incredibly annoying notification that makes most of us want to throw a hammer at the computer. (But if the AOL sound has made you nostalgic, check out “13 Tech Sounds You Don’t Hear Anymore.”)

But only so much information can be embedded in a sound. Sometimes words are the best way to communicate an idea. If that is the case with your product (say you are delivering instructions, alerts or dynamic information such as turn-by-turn navigation), then there are ways to design these smartly. You’ll also need to consider whether to localize the experience, with all of the implications that entails. A talking toy sold in multiple countries will probably need to have audio feedback in the language of each country, and this will require some thought on the scalability of audio feedback.

Embed Meaning in Audio Earcons

So, how can sounds be designed in such a way that the user intuitively knows what they mean? Some research is out there to guide novice earcon designers, such as work done by Blattner et al. in “Earcons and Icons: Their Structure and Common Design Principles” (PDF). Blattner comments on W.W. Gaver’s mappings of earcons into symbolic, nomic and metaphorical sounds:

Symbolic mappings rely on social convention such as applause for approval, nomic representations are physical such as a door slam, and metaphorical mappings are similarities such as a falling pitch for a falling object.

Blattner goes on to say that if a good mapping can be found, then the earcon will be more easily learned and remembered. Earcons that take advantage of pre-existing relationships enable users to associate sounds with meaning with minimal or no training.

Designing sound is complex, and audio designers will want to consider pitch, timbre, loudness, duration and direction to create the right sound. For details on how these should be considered in earcon design, consult “Auditory Interfaces: A Design Platform” (PDF).
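As a small illustration of a metaphorical mapping, a “falling pitch for a falling object” earcon could be specified as an exponential frequency sweep. The function below is a hypothetical sketch (not from any of the cited papers); it only computes the sweep’s step frequencies, which an audio engine would then render:

```javascript
// Frequencies for a "falling pitch" earcon: an exponential glide
// from startHz down to endHz over a fixed number of steps.
function fallingPitchSweep(startHz, endHz, steps) {
  const ratio = Math.pow(endHz / startHz, 1 / (steps - 1));
  return Array.from({ length: steps }, (_, i) =>
    Math.round(startHz * Math.pow(ratio, i))
  );
}

// One octave down, from A5 to A4:
console.log(fallingPitchSweep(880, 440, 5)); // [880, 740, 622, 523, 440]
```

An exponential (rather than linear) glide matches how we perceive pitch, so the “fall” sounds even from start to finish.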

Design in Context

Whether you are designing earcons or prompts, consider the particular context of the user, both physically and emotionally. If you are designing audio instructions or information, consider these factors:

  • Is there a way to differentiate between a novice user (i.e. someone who needs more hand-holding) and an expert user? This could be done by keeping track of the number of interactions that the user has with the device, and tailoring an audio experience for first-time users, while playing shortened prompts to expert users.
  • If the device has a screen, do you know whether the user will rely on visual feedback to complete their task? If so, audio might be a secondary feedback mechanism or might not be needed at all. Audio could be tailored specifically for these situations by playing less or different audio. Knowing where the device is in relation to the user could be done with certain sensors or accelerometers or derived from how the interaction was initiated. For example, if an interaction with Siri on the iPhone 4S was initiated from a Bluetooth headset, then the user’s phone is likely not available for visual feedback, so providing rich audio feedback becomes essential.
  • Many other contexts warrant tailoring the audio experience. With GPS, for example, you can determine whether the user is driving (using their speed). Sometimes the current state of the device is relevant and can indicate the proximity of the user or their level of engagement: Is the user listening to music? Have they recently interacted with the device? Have they swiped their credit card? Etc.
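The novice/expert distinction from the first point can be as simple as a counter. A minimal sketch (the three-interaction threshold and the prompt texts are arbitrary assumptions for illustration):

```javascript
// Pick a verbose prompt for first-time users and a terse one for
// users who have interacted with the device before.
function pickPrompt(interactionCount, prompts, threshold = 3) {
  return interactionCount < threshold ? prompts.verbose : prompts.terse;
}

const prompts = {
  verbose: 'To hear departures, say "departures". To hear arrivals, say "arrivals".',
  terse: 'Departures or arrivals?',
};

console.log(pickPrompt(0, prompts)); // the verbose, hand-holding version
console.log(pickPrompt(7, prompts)); // "Departures or arrivals?"
```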

Consider the “Non-Use Cases”

Designers always talk of use cases, but for devices that “talk,” it’s also important to be aware of the non-use cases: situations in which playing audio wouldn’t make sense. Alerts or information being shouted out from a device with no warning or context can be alarming. The example below shows a moving walkway that repeats its warning over and over, even when no one is nearby.

You will often want to give the user control over whether to play audio at all, through the settings. For example, on a Windows Phone, a user can set whether an incoming text message is read aloud automatically only when connected to a Bluetooth headset, when connected to any headset, always or never.
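That Windows Phone setting amounts to a small policy check. A hypothetical sketch (the setting and connection names are made up for illustration, not a real API):

```javascript
// Decide whether to read an incoming message aloud, given the user's
// setting and the currently connected headset ('bluetooth', 'wired',
// or 'none'). Mirrors the four options described above.
function shouldReadAloud(setting, headset) {
  switch (setting) {
    case 'always':        return true;
    case 'never':         return false;
    case 'anyHeadset':    return headset !== 'none';
    case 'bluetoothOnly': return headset === 'bluetooth';
    default:              return false; // unknown setting: stay quiet
  }
}

console.log(shouldReadAloud('bluetoothOnly', 'wired')); // false
console.log(shouldReadAloud('anyHeadset', 'wired'));    // true
```

Defaulting to silence for unknown settings follows the spirit of the non-use cases above: when in doubt, don’t make noise.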

It’s Not Just What You Say But How You Say It

Designing prompts is part art and part science. Many good speech-recognition and voice user interface design books are out there with details. We’ll look at one example here and some of the problems with the design. Taken from an early version of the Ford Sync’s in-car speech recognition, this audio clip instructs the driver on how to ask for a particular music artist, but it does it very poorly; the pace, voice and grouping of words are just not clear enough.

Some design guidelines:

  • Use language that users understand. Stay away from lingo, jargon and technical terms that would make sense to the company but not to the end user.
  • Do not overload the user with too much information at once.
  • Limit the number of audio menu options. Audio is linear, time-sensitive and transient, unlike the Web and other visual feedback media in which users can take time to read, process and select. Research has shown that remembering more than five options from an audio menu is hard. Users will often listen to all choices before picking one, so a long list will limit their ability to remember them all.
  • When writing prompts that require users to make a choice, structure them so that the menu option comes before the action; for example, “For y, press x,” instead of “Press x for y.” The user will more easily be able to identify the option they want and listen more attentively for the action.
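The last guideline, option before action, is easy to encode when prompts are generated rather than hand-written. A small sketch:

```javascript
// Build an audio menu where each option is heard before its action,
// i.e. "For departures, press 1." rather than "Press 1 for departures."
function menuPrompt(options) {
  return options
    .map((label, i) => `For ${label}, press ${i + 1}.`)
    .join(' ');
}

console.log(menuPrompt(['departures', 'arrivals']));
// "For departures, press 1. For arrivals, press 2."
```

Keeping the option list short (five or fewer, per the guideline above) matters more than the template itself.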

Decide Between Recorded Prompts and Text-to-Speech

Another decision to make is whether to prerecord the audio with a voice actor or use text-to-speech (TTS). Prerecorded audio provides the most natural reading of text in most cases, but there are many considerations to make before implementing it. How many things must be recorded? Will the audio content change? How much storage is available?

Over the years, TTS has improved dramatically and in some cases does a great job of reading back audio. TTS engines should be evaluated based on the task at hand: Are multiple languages needed? Multiple voices? Is the type of information to be read back specialized? Evaluating various implementations is also important: Is the device connected, in which case the TTS engine could be cloud-based, or will the TTS engine need to be embedded in the device? Reactions to TTS vary; some users say that TTS impairs the experience so much that they avoid using it, while others barely notice it.

Here are two examples:

TTS Email

Recording Prompts

If you are able to record all prompts with an actor, choose a voice and personality that fit your brand and the experience. It’s best to recruit talent with a personality in mind, and to have them record a representative script to evaluate how they would come across in the device.

There are many subtleties to be aware of when recording prompts. Voice user interface designers spend time directing voice actors to make sure that the prompts elicit the right spoken response from users. The following prompt can mean different things depending on how it’s read: “Would you like departures <pause> or arrivals?” would steer users to say “departures” or “arrivals.” A slightly different reading, “Would you like departures or arrivals?”, could be misinterpreted by users as requiring a yes or no response.

Prompts can be recorded even when some of the prompts need to change dynamically, such as when reading back the time or a phone number. In these cases, you would record shorter prompts and then concatenate them together during playback. To make these readings sound natural instead of robotic, record as large a chunk of the prompt as possible.
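Concatenated playback can be sketched as mapping a dynamic value onto a list of prerecorded chunks. The file names below are hypothetical; the point is that the fixed carrier phrase is recorded as one large chunk, with only the variable parts swapped in at playback time:

```javascript
// Assemble a time readout from prerecorded audio chunks. "The time is"
// is a single recording (the largest possible chunk, so the reading
// sounds natural); only the numbers vary at playback time.
function timePromptChunks(hours, minutes) {
  return ['the_time_is.wav', `${hours}.wav`, `${minutes}.wav`];
}

console.log(timePromptChunks(3, 45));
// ['the_time_is.wav', '3.wav', '45.wav']
```

A player would then queue these files back to back; recording each word separately instead would produce the robotic cadence the text warns against.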

Summary

The most important consideration when designing with audio is to ensure that it enhances the experience and does not interfere or distract. If you are considering designing with audio, hopefully you are now armed with some helpful information to get you started on designing a great experience.

(al)


© Karen Kaushansky for Smashing Magazine, 2012.


Over Troubled Water: Showcase of Bridge Photography


  

One photographic centerpiece that most certainly ends up in front of a photographer’s lens is the bridge. And for good reason. Bridges capture our attention and often, because of their sheer size, force us to take note of them. We marvel at their architectural majesty or wonder at their history and the lives that have touched their surface. This is perhaps one of the secrets behind the popularity of bridge photography.

Below is a brand-new showcase featuring bridges of all shapes and sizes. From the modern to the old, these carriers across gaps in our paths are the subject of this gallery of pictures. Whether they are made of stone, metal or wood, bridges are not always inspiring on their own. For some, it takes the photographer’s skills and lens to give these common structures more creative energy.

Over Troubled Water: The Ferryman’s Foes

Ha’Penny Bridge, Dublin by Pajunen

Bridge over Untroubled Water by Huicca

Goodwill Bridge by delsando

Bridge by fixer

bridge by hm923

Burlington Bristol Bridge HDR Edit by tatt2ed13

Mostar by Oceanum-MMA

Cologne Skyline by AljoschaThielen

A Sunset Bridge by susannamaryi

Bridge of Destiny by Brandeno45

Golden Gate Bridge by wonderlandslost

Hagg Bank – Aug2012

Water Under the Bridge by Lowe-Light

The Ben Franklin Bridge by nanshant

Bridge by snapdragon46

old welsh mining bridge by GazPoo

Bridge by friedapi

Bridge by ClaudiaPPhotography

Bridge by SNiPERWOLF-UAE

Bridge in bath by PhotographicJaydiee

small wooden bridge by frei76

manhattan bridge by bjarr

Helix Bridge by As0oma

Bridge by ArizaonaRose

Natural Bridge by Il-Lupo-Grigio

Bridge by LadyGreeny

The Old Bridge by CitizenOlek

bridge state of mind by frankgtrs

Blue Wonder by taenaron

Famous Forth Rail Bridge by LyndaWithaWhy

Bridge by terrorkaetzchen

The Port Of Murray Bridge by djzontheball

purified by murisakii

While the showcase closes, the conversation doesn’t have to end here. Give us your thoughts on the collection, or point us in the direction of some other works that we missed. Show us your favorites. Either way, keep the inspiration going.

(dpe)


Creating A Pattern Library With Evernote And Fireworks // Workflow Tips


  

A well-functioning pattern library is a beautiful thing. It is a powerful resource that you and your entire team can use to efficiently create consistent user experiences for your website or service. It cuts out repetitive design work, allowing you to focus your energy on creating new user experiences; and it creates a common UI language for your team, reducing communication issues and keeping everyone on the same page.

But to many designers, creating a pattern library can feel like a daunting academic pursuit, or simply useless overhead documentation. To make matters worse, getting consensus on which technology to use and how to get started is hard.

After experimenting with various options, our team has found that using Evernote to house our pattern library of Adobe Fireworks PNG design files has proven to be a winning combination. We’ll outline how you can use Evernote and Fireworks to easily build your own pattern library and reap the benefits mentioned above.

Creating A Pattern Library With Evernote And Fireworks

Patterns With Benefits

To get fueled for the journey ahead, let’s dig a little deeper into what makes pattern libraries so awesome. Here are some of the top reasons why a library is worth the investment:

  • Consistency
    A pattern library spreads consistency across your products. As they say, “the interface is the brand,” and creating an interface that is consistently good is critical to creating a great brand. This benefit is not trivial.
  • Efficiency
    A pattern library frees designers from the heavy lifting of repetitive design work and allows them to focus on building new interactions and improving existing patterns.
  • Speed
    A pattern library gives designers and developers the building blocks to quickly build complex interactions, which also facilitates rapid prototyping of new ideas.
  • Vocabulary
    A pattern library gives team members a shared understanding of the product’s primary building blocks, and it gets new team members ramped up quickly on how the product is built.
  • Evolution
    A great pattern or component library helps your website evolve. The nature of components is such that if you make a change to one component, it is updated across all instances in the product. This helps you respond quickly to customer needs and iteratively refine the user experience.

If They’re So Great, Why Don’t You Marry Them?

In light of all the benefits, why doesn’t everyone have a pattern library? From my experience, two of the main factors that hamper the creation of a pattern library are lack of knowledge and lack of tools. Learning about patterns, components and frameworks can feel like a daunting academic pursuit, and there doesn’t seem to be solid consensus on what technology to use.

In addition, if a pattern library is hard to maintain or hard to use, it won’t gain any traction. Updating the library with new patterns and modifying existing patterns have to be easy, because as soon as the library gets out of date or becomes cumbersome to use, it will be useless. We have found that using Evernote and Fireworks together makes for a system that’s easy to manage and easy to use and that overcomes these obstacles very well.

Evernote + Fireworks = Winning Combination

Numerous websites and services have sprung up to help companies build their own pattern libraries, but after experimenting with Quince, Patternry and a few others, our design team landed on using Evernote and Fireworks. Here are the reasons why they are the best fit for us.

(Unique) Fireworks Features

While Evernote will get a lot of attention in this article, the pattern library we have created would not be possible without some of the excellent features in Fireworks:

  • Fireworks uses the PNG format as its source file type. This means that Evernote (and any Web browser) is able to display Fireworks PNG files natively, while still retaining all of the data from Fireworks vectors, bitmaps, layers and effects inside the files. This is not possible with PSD, AI, INDD or any other proprietary file type.
  • Because the PNG file format is open, Evernote is also able to “read” and index text inside every Fireworks PNG in the library, and this gives you the ability to search for text strings that appear in the individual components. Again, this is something that would not be possible with any other “closed” file type.
  • Another awesome feature of Fireworks is that it allows you to drop a PNG file directly into any other PNG file that is open. This allows you to seamlessly drag and drop patterns from Evernote into Fireworks without missing a beat. Combining this with the rich symbols feature in Fireworks makes for a powerful workflow.
  • Fireworks’ editable PNG files are typically very small, which makes opening and syncing them extremely fast and easy.

Fireworks is the only app that has all of these features, making it a critical part of this workflow.

Organization

In addition to being able to display PNG files natively, Evernote has robust tagging and search capabilities, which have allowed us to relax about organization. Even without tagging, finding the pattern you are looking for is quick and easy. Because Evernote is able to index text inside Fireworks PNG files, you can search for text within the patterns, which makes finding most things lightning quick, with zero organizational overhead.

Syncing and Accessibility

A final distinction between Evernote and most other pattern library services is that Evernote is an app that runs locally, which makes it much quicker and easier to use than any Web-based service. With Evernote, no uploading or downloading of files through a browser is involved — just drag and drop from Evernote to Fireworks. Evernote works across multiple platforms, so no matter where you are, you can access it (and even when offline, you can still access local copies of your assets). Any changes to existing patterns are near-instantly synchronized for everyone else, so it really is the best of both worlds.

Pattern Recognition

Hopefully, you’re sold on the benefits of creating a pattern library, and on why Evernote and Fireworks are great tools for the job. But before jumping into how to implement the pattern library, we should understand what exactly patterns are. A good definition from Erin Malone is:

[A pattern is] an optimal solution to a common problem within a specific context.

Tabs, pagination, table data, breadcrumbs — these are all solutions to common problems in specific contexts (our very definition of patterns!). Patterns are the fundamental interactions that make up a user experience. Combined, they can create rich, complex user experiences. Any given YouTube page shows multiple patterns in use:

YouTube Patterns
Some patterns that YouTube employs to create a rich user experience.

Patterns vs. Components

A quick note on patterns versus components: in general, patterns describe the overarching concept behind an interaction. They typically don’t prescribe a particular visual design or code.

A component, on the other hand, is a specific skinned implementation of a pattern. Also known as a widget, chunk or module, it typically consists of a visual design asset and a code snippet that renders the component. In other words, components are how patterns become real. Looking again at YouTube, here is the same page, this time with components highlighted:

YouTube Components
Calling out some of the many components that make up a YouTube page.

The library we’re focusing on in this article is actually a pattern and component hybrid. Here’s how Nathan Curtis describes this type of library in his excellent article “So, You Wanna Build a Library, Eh?”:

Choosing between patterns and components may not be an either/or question… [Teams] have hedged their bets by embedding aspects of one approach into the guidelines and spirit of the other, most commonly via pattern-like guidelines incorporated into the more specific component definitions.

Embedding pattern-like guidelines into a component library is the exact approach we have taken, and it has served us well. If you already have a product and are looking to build a library, this approach is a great way to get started because it yields the immediate benefit of components (reusable visual design assets and code), while enabling you to invest in adding pattern-like guidelines over time.

The Library In Action

Here’s how we have set up our pattern/component library. In Evernote, we have a shared notebook named “Pattern Library.” I own the notebook and share it (read only) with all members of the UX and Web development team. Each pattern/component has these sections defined:

  1. What the pattern is;
  2. When and why you would use it;
  3. A visual example (Fireworks vector PNG);
  4. Basic functionality (describing the component’s behaviors);
  5. An example in context (to show alignment, etc.);
  6. A link to the Web development component (live example and HTML snippet).

Here’s what that looks like in Evernote:

Evernote Anatomy
In the shared notebook, each pattern/component contains all of the information needed to understand when, why and how to use it.

A key to making components work is to have available the corresponding code snippets and CSS needed to bring them to life. Each component in the library has a link to an internal Web development page where we keep live working examples, as well as the code snippets required to generate the component. This way, our developers can easily find the code they need to implement the components that they see in the mockups.
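To make this concrete, a component’s developer page might pair the live example with a copy-and-paste snippet along these lines. This is only a sketch of a hypothetical pagination component — the markup and class names are illustrative assumptions, not the actual code from the team’s internal pages:

```html
<!-- Hypothetical pagination component; class names are illustrative only -->
<nav class="pagination" aria-label="Pagination">
  <a class="pagination-prev" href="#">&laquo; Previous</a>
  <a class="pagination-page" href="#">1</a>
  <!-- aria-current marks the active page for assistive technology -->
  <a class="pagination-page is-current" href="#" aria-current="page">2</a>
  <a class="pagination-page" href="#">3</a>
  <a class="pagination-next" href="#">Next &raquo;</a>
</nav>
```

A developer implementing a mockup copies a snippet like this from the component’s page, hooks the links up to the back end, and the rendered result matches the Fireworks asset stored in the library.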

Web Development Page
Each component has a corresponding page that our Web Development team uses to implement each component. It documents the behavior of the component and allows developers to copy and paste the code.

Drag, Drop And Roll

So, what does using this type of pattern/component library on a daily basis look like? Here’s our process in practice:

  1. When a UX designer needs to use a component in their Fireworks mockup, they find the pattern in Evernote by searching or by visually browsing to the one they want.
  2. The designer then drags and drops the Fireworks PNG image from Evernote into their layout in Fireworks, places it where it needs to go, and makes any necessary adjustments.
  3. Once the mockup has been finalized, the designer hands it over to the Web developers for implementation.
  4. The Web developers implement the mockup by visiting the Web development page for each pattern being used, copying the component’s code to their project and hooking it up to the back end. Done!

The easiest way to describe the design process is to see it in action. Here’s a screencast (should open in a new tab or browser window; .SWF, requires Flash):

screencast on Fireworks and Evernote

One Final Note(book)

Once we started using Evernote for our pattern library, we realized we could apply the concept to other useful areas. We have since created several other shared notebooks, which come in really handy for the design team:

  • “Design Resources”
    This notebook contains everything that might help designers make mockups quickly: browser chrome, cursors, icons, scroll bars, company logos, templates, etc. Designers just drag and drop them into their mockups.
  • “Design Inspiration”
    This notebook (which anyone can add to) is full of screenshots of inspiring designs. Evernote allows you to search text inside images, which works beautifully when you want to research how other websites handle interactions (for example, we can search screenshots for “Sign up” or “Buy now”).

I hope this article has shed a little light on how pattern/component libraries can be useful and on how combining Adobe Fireworks and Evernote makes for a simple, fast and flexible solution. Hopefully, you feel more comfortable now creating your own pattern library and reaping the benefits it offers.



© Kris Niles for Smashing Magazine, 2012.

