Performance: Writing Fast, Memory-Efficient JavaScript
JavaScript engines such as Google’s V8 (Chrome, Node) are specifically designed for the fast execution of large JavaScript applications. As you develop, if you care about memory usage and performance, you should be aware of some of what’s going on in your user’s browser’s JavaScript engine behind the scenes.
Whether it’s V8, SpiderMonkey (Firefox), Carakan (Opera), Chakra (IE) or something else, understanding how these engines work can help you better optimize your applications. That’s not to say one should optimize for a single browser or engine. Never do that.
You should, however, ask yourself questions such as:
- Is there anything I could be doing more efficiently in my code?
- What (common) optimizations do popular JavaScript engines make?
- What is the engine unable to optimize for, and is the garbage collector able to clean up what I’m expecting it to?
Fast-loading Web sites — like fast cars — require the use of specialized tools. Image source: dHybridcars.
There are many common pitfalls when it comes to writing memory-efficient and fast code, and in this article we’re going to explore some test-proven approaches for writing code that performs better.
So, How Does JavaScript Work In V8?
While it’s possible to develop large-scale applications without a thorough understanding of JavaScript engines, any car owner will tell you they’ve looked under the hood at least once. As Chrome is my browser of choice, I’m going to talk a little about its JavaScript engine. V8 is made up of a few core pieces.
- A base compiler, which parses your JavaScript and generates native machine code before it is executed, rather than executing bytecode or simply interpreting it. This code is initially not highly optimized.
- V8 represents your objects in an object model. Objects are represented as associative arrays in JavaScript, but in V8 they are represented with hidden classes, which are an internal type system for optimized lookups.
- The runtime profiler monitors the system being run and identifies “hot” functions (i.e. code that ends up spending a long time running).
- An optimizing compiler recompiles and optimizes the “hot” code identified by the runtime profiler, and performs optimizations such as inlining (i.e. replacing a function call site with the body of the callee).
- V8 supports deoptimization, meaning the optimizing compiler can bail out of code generated if it discovers that some of the assumptions it made about the optimized code were too optimistic.
- It has a garbage collector. Understanding how it works can be just as important as the optimized JavaScript.
Garbage Collection
Garbage collection is a form of memory management. It’s where we have the notion of a collector which attempts to reclaim memory occupied by objects that are no longer being used. In a garbage-collected language such as JavaScript, objects that are still referenced by your application are not cleaned up.
Manually de-referencing objects is not necessary in most cases. By simply putting the variables where they need to be (ideally, as local as possible, i.e. inside the function where they are used versus an outer scope), things should just work.
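A brief sketch of the difference (the function and variable names here are invented for illustration):

```javascript
// Keeping data local to a function means it becomes unreachable -- and
// therefore collectable -- as soon as the function returns.
function processData() {
  var buffer = new Array(10000).join('x'); // local: collectable after return
  return buffer.length;
}

// A variable in an outer scope keeps its data reachable for as long as
// that scope lives -- for a global, the lifetime of the page.
var retainedBuffer = new Array(10000).join('x');

console.log(processData()); // 9999
```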
Garbage collection attempts to reclaim memory. Image source: Valtteri Mäki.
It’s not possible to force garbage collection in JavaScript. You wouldn’t want to do this, because the garbage collection process is controlled by the runtime, and it generally knows best when things should be cleaned up.
De-Referencing Misconceptions
In quite a few discussions online about reclaiming memory in JavaScript, the delete keyword is brought up, as some developers think you can force de-referencing using it. Never use delete. Ever. In the below example, delete o.x does a lot more harm than good behind the scenes, as it changes o’s hidden class and makes it a generic slow object.
var o = { x: 1 };
delete o.x; // true
o.x; // undefined
There are also misconceptions about how null works. Setting an object reference to null doesn’t “null” the object. It sets the object reference to null. Using o.x = null is better than using delete, but it’s probably not even necessary.
var o = { x: 1 };
o = null;
o; // null
o.x; // TypeError
If this reference was the last reference to the object, the object is then eligible for garbage collection. If the reference was not the last reference to the object, the object is reachable and will not be garbage collected.
Another important note to be aware of is that global variables are not cleaned up by the garbage collector during the life of your page. Regardless of how long the page is open, variables scoped to the JavaScript runtime global object will stick around.
var myGlobalNamespace = {};
Globals are cleaned up when you refresh the page, navigate to a different page, close tabs or exit your browser. Function-scoped variables get cleaned up when they fall out of scope: once a function has exited and there are no more references to it, its variables get cleaned up.
Rules of Thumb
To give the garbage collector a chance to collect as many objects as possible as early as possible, don’t hold on to objects you no longer need. This mostly happens automatically; here are a few things to keep in mind.
- As mentioned earlier, a better alternative to manual de-referencing is to use variables with an appropriate scope. I.e. instead of a global variable that’s nulled out, just use a function-local variable that goes out of scope when it’s no longer needed. This means cleaner code with less to worry about.
- Ensure that you’re unbinding event listeners where they are no longer required, especially when the DOM objects they’re bound to are about to be removed.
- If you’re using a data cache locally, make sure to clean that cache or use an aging mechanism to avoid large chunks of data being stored that you’re unlikely to reuse.
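The caching rule of thumb above can be sketched with a simple aging cache. This is only an illustrative helper (AgingCache is an invented name, and timestamps are passed in explicitly to keep the sketch deterministic), not a specific library recommendation:

```javascript
// A minimal local cache with an aging mechanism: entries older than
// maxAgeMs are dropped on access, so stale data doesn't accumulate
// and its memory can be reclaimed by the garbage collector.
function AgingCache(maxAgeMs) {
  this.maxAgeMs = maxAgeMs;
  this.entries = []; // each entry: { key, value, added }
}

AgingCache.prototype.set = function (key, value, now) {
  this.entries.push({ key: key, value: value, added: now });
};

AgingCache.prototype.get = function (key, now) {
  // Drop aged-out entries so their data becomes collectable.
  this.entries = this.entries.filter(function (entry) {
    return now - entry.added <= this.maxAgeMs;
  }, this);

  // Return the most recently added live entry for the key, if any.
  for (var i = this.entries.length - 1; i >= 0; i--) {
    if (this.entries[i].key === key) {
      return this.entries[i].value;
    }
  }
  return undefined;
};
```

Usage: a cache created with new AgingCache(60000) would hand back values for up to a minute and silently drop them afterwards.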
Functions
Next, let’s look at functions. As we’ve already said, garbage collection works by reclaiming blocks of memory (objects) which are no longer reachable. To better illustrate this, here are some examples.
function foo() {
  var bar = new LargeObject();
  bar.someCall();
}
When foo returns, the object which bar points to is automatically available for garbage collection, because there is nothing left that has a reference to it.
Compare this to:
function foo() {
  var bar = new LargeObject();
  bar.someCall();
  return bar;
}

// somewhere else
var b = foo();
We now have a reference to the object which survives the call and persists until the caller assigns something else to b (or b goes out of scope).
Closures
When you see a function that returns an inner function, that inner function will have access to the outer scope even after the outer function is executed. This is basically a closure — an expression which can work with variables set within a specific context. For example:
function sum(x) {
  function sumIt(y) {
    return x + y;
  }
  return sumIt;
}

// Usage
var sumA = sum(4);
var sumB = sumA(3);
console.log(sumB); // 7
The function object created within the execution context of the call to sum can’t be garbage collected, as it’s referenced by a global variable and is still very much accessible. It can still be executed via sumA(n).
Let’s look at another example. Here, can we access largeStr?
var a = function () {
  var largeStr = new Array(1000000).join('x');
  return function () {
    return largeStr;
  };
}();
Yes, we can, via a(), so it’s not collected. How about this one?
var a = function () {
  var smallStr = 'x';
  var largeStr = new Array(1000000).join('x');
  return function (n) {
    return smallStr;
  };
}();
We can’t access it anymore and it’s a candidate for garbage collection.
Timers
One of the worst places to leak is in a loop, or in setTimeout()/setInterval(), but this is quite common.
Consider the following example.
var myObj = {
  callMeMaybe: function () {
    var myRef = this;
    var val = setTimeout(function () {
      console.log('Time is running out!');
      myRef.callMeMaybe();
    }, 1000);
  }
};
If we then run:
myObj.callMeMaybe();
to begin the timer, we will see “Time is running out!” logged every second. If we then run:
myObj = null;
The timer will still fire. myObj won’t be garbage collected as the closure passed to setTimeout has to be kept alive in order to be executed. In turn, it holds references to myObj as it captures myRef. This would be the same if we’d passed the closure to any other function, keeping references to it.
It is also worth keeping in mind that references inside a setTimeout/setInterval call, such as functions, will need to execute and complete before they can be garbage collected.
Be Aware Of Performance Traps
It’s important never to optimize code until you actually need to. This can’t be stressed enough. It’s easy to see a number of micro-benchmarks showing that N is more optimal than M in V8, but test it in a real module of code or in an actual application, and the true impact of those optimizations may be far smaller than you were expecting.
Doing too much can be as harmful as not doing anything. Image source: Tim Sheerman-Chase.
Let’s say we want to create a module which:
- Takes a local source of data containing items with a numeric ID,
- Draws a table containing this data,
- Adds event handlers for toggling a class when a user clicks on any cell.
There are a few different factors to this problem, even though it’s quite straightforward to solve. How do we store the data? How do we efficiently draw the table and append it to the DOM? How do we handle events on this table optimally?
A first (naive) take on this problem might be to store each piece of available data in an object which we group into an array. One might use jQuery to iterate through the data and draw the table, then append it to the DOM. Finally, one might use event binding for adding the click behavior we desire.
Note: This is NOT what you should be doing
var moduleA = function () {

  return {

    data: dataArrayObject,

    init: function () {
      this.addTable();
      this.addEvents();
    },

    addTable: function () {
      for (var i = 0; i < rows; i++) {
        $tr = $('<tr></tr>');
        for (var j = 0; j < this.data.length; j++) {
          $tr.append('<td>' + this.data[j]['id'] + '</td>');
        }
        $tr.appendTo($tbody);
      }
    },

    addEvents: function () {
      $('table td').on('click', function () {
        $(this).toggleClass('active');
      });
    }

  };
}();
Simple, but it gets the job done.
In this case however, the only data we’re iterating over are IDs, a numeric property which could be more simply represented in a standard array. We also know that DocumentFragment and native DOM methods are more optimal than using jQuery for our table generation, and, of course, that event delegation is typically more performant than binding each td individually.
Adding in these changes results in some good (expected) performance gains. Event delegation provides a decent improvement over simply binding, and opting for documentFragment was a real booster.
var moduleD = function () {

  return {

    data: dataArray,

    init: function () {
      this.addTable();
      this.addEvents();
    },

    addTable: function () {
      var td, tr;
      for (var i = 0; i < rows; i++) {
        tr = document.createElement('tr');
        for (var j = 0; j < this.data.length; j++) {
          td = document.createElement('td');
          td.appendChild(document.createTextNode(this.data[j]));
          frag2.appendChild(td);
        }
        tr.appendChild(frag2);
        frag.appendChild(tr);
      }
      tbody.appendChild(frag);
    },

    addEvents: function () {
      $('table').on('click', 'td', function () {
        $(this).toggleClass('active');
      });
    }

  };
}();
We might then look to other ways of improving performance. It’s possible we came across a test case showing that the prototype pattern is more optimal than the module pattern, or that JavaScript templating frameworks are super-optimized. (Sometimes they are, but use them because they make for readable code. Also, precompile!). Let’s test and find out.
moduleG = function () {};

moduleG.prototype.data = dataArray;
moduleG.prototype.init = function () {
  this.addTable();
  this.addEvents();
};
moduleG.prototype.addTable = function () {
  var template = _.template($('#template').text());
  var html = template({ 'data': this.data });
  $tbody.append(html);
};
moduleG.prototype.addEvents = function () {
  $('table').on('click', 'td', function () {
    $(this).toggleClass('active');
  });
};

var modG = new moduleG();
As it turns out, in this case the performance benefits are extremely negligible. Opting for templating and prototypes didn’t really offer anything more than what we had before. That said, performance isn’t really the reason modern developers use either of these things — it’s the readability, inheritance model and maintainability they bring to your codebase.
More complex problems include efficiently drawing images using canvas and manipulating pixel data with or without typed arrays.
Always give micro-benchmarks a close look before exploring their use in your application. Some of you may recall the JavaScript templating shoot-off and the extended shoot-off that followed. You want to make sure that tests aren’t being impacted by constraints you’re unlikely to see in real world applications — test optimizations together in actual code.
V8 Optimization Tips
Whilst detailing every V8 optimization is outside the scope of this article, there are certainly many tips worth noting. Keep these in mind and you’ll reduce your chances of writing unperformant code.
- Certain patterns will cause V8 to bail out of optimizations. A try-catch, for example, will cause such a bailout. For more information on what functions can and can’t be optimized, you can use --trace-bailout file.js with the d8 shell utility that comes with V8.
- If you care about speed, try very hard to keep your functions monomorphic, i.e. make sure that variables (including properties, arrays and function parameters) only ever contain objects with the same hidden class. For example, don’t do this:
function add(x, y) {
  return x + y;
}

add(1, 2);
add('a', 'b');
add(my_custom_object, undefined);
- Don’t load from uninitialized or deleted elements. This won’t make a difference in output, but it will make things slower.
- Don’t write enormous functions, as they are more difficult to optimize.
For more tips, watch Daniel Clifford’s Google I/O talk Breaking the JavaScript Speed Limit with V8 as it covers these topics well. Optimizing For V8 — A Series is also worth a read.
Objects Vs. Arrays: Which Should I Use?
- If you want to store a bunch of numbers, or a list of objects of the same type, use an array.
- If what you semantically need is an object with a bunch of properties (of varying types), use an object with properties. That’s pretty efficient in terms of memory, and it’s also pretty fast.
- Integer-indexed elements, regardless of whether they’re stored in an array or an object, are much faster to iterate over than object properties.
- Properties on objects are quite complex: they can be created with setters, and with differing enumerability and writability. Items in arrays aren’t able to be customized as heavily — they either exist or they don’t. At an engine level, this allows for more optimization in terms of organizing the memory representing the structure. This is particularly beneficial when the array contains numbers. For example, when you need vectors, don’t define a class with properties x, y, z; use an array instead.
There’s really only one major difference between objects and arrays in JavaScript, and that’s the arrays’ magic length property. If you’re keeping track of this property yourself, objects in V8 should be just as fast as arrays.
Tips When Using Objects
- Create objects using a constructor function. This ensures that all objects created with it have the same hidden class and helps avoid changing these classes. As an added benefit, it’s also slightly faster than Object.create().
- There are no restrictions on the number of different object types you can use in your application or on their complexity (within reason: long prototype chains tend to hurt, and objects with only a handful of properties get a special representation that’s a bit faster than bigger objects). For “hot” objects, try to keep the prototype chains short and the field count low.
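A small sketch of the constructor tip (Point is an invented example type):

```javascript
// Objects created by the same constructor, with properties assigned in the
// same order, share a hidden class in V8 and stay fast to work with.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var p1 = new Point(1, 2);
var p2 = new Point(3, 4); // same hidden class as p1

// Adding a property to only one instance forks its hidden class,
// so avoid doing this to "hot" objects:
p2.z = 5;

console.log(p1.x + p2.y); // 5
```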
Object Cloning
Object cloning is a common problem for app developers. While it’s possible to benchmark how well various implementations work with this type of problem in V8, be very careful when copying anything. Copying big things is generally slow — don’t do it. for..in loops in JavaScript are particularly bad for this, as they have a devilish specification and will likely never be fast in any engine for arbitrary objects.
When you absolutely do need to copy objects in a performance-critical code path (and you can’t get out of this situation), use an array or a custom “copy constructor” function which copies each property explicitly. This is probably the fastest way to do it:
function clone(original) {
  this.foo = original.foo;
  this.bar = original.bar;
}
var copy = new clone(original);
Cached Functions in the Module Pattern
Caching your functions when using the module pattern can lead to performance improvements. See below for an example where the variation you’re probably used to seeing is slower as it forces new copies of the member functions to be created all the time.
Note, however, that in most browsers, this will really only render it as performant as it would be just using prototypes. A good gotcha worth knowing, otherwise!
Performance improvements when using the module pattern.
Here is a test of prototype versus module pattern performance.
// Module pattern
Klass = function () {
  var foo = function () {
      log('foo');
    },
    bar = function () {
      log('bar');
    };
  return { foo: foo, bar: bar };
};

var i = 1000,
  objs = [];
while (i--) {
  var o = new Klass();
  objs.push(new Klass());
  o.bar;
  o.foo;
}

// Module pattern with cached functions
var FooFunction = function () {
  log('foo');
};
var BarFunction = function () {
  log('bar');
};

Klass = function () {
  return { foo: FooFunction, bar: BarFunction };
};

var i = 1000,
  objs = [];
while (i--) {
  var o = new Klass();
  objs.push(new Klass());
  o.bar;
  o.foo;
}
Tips When Using Arrays
Next let’s look at a few tips for arrays. In general, don’t delete array elements. It would make the array transition to a slower internal representation. When the key set becomes sparse, V8 will eventually switch elements to dictionary mode, which is even slower.
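As a sketch of why, compare delete with splice (the array contents are arbitrary):

```javascript
var arr = [1, 2, 3, 4];

// Don't: delete leaves a hole, transitioning the array to a slower
// internal representation.
// delete arr[1]; // arr would become [1, <hole>, 3, 4]

// Do: splice removes the element and keeps the array packed.
arr.splice(1, 1);

console.log(arr); // [ 1, 3, 4 ]
```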
Array Literals
Array literals are useful because they give a hint to the VM about the size and type of the array. They’re typically good for small to medium sized arrays.
// Here V8 can see that you want a 4-element array containing numbers:
var a = [1, 2, 3, 4];

// Don't do this:
a = []; // Here V8 knows nothing about the array
for (var i = 1; i <= 4; i++) {
  a.push(i);
}
Storage of Single Types Vs. Mixed Types
It’s never a good idea to mix values of different types (e.g. numbers, strings, undefined or true/false) in the same array (i.e. var arr = [1, '1', undefined, true, 'true']).
Test of type inference performance
As we can see from the results, the array of ints is the fastest.
Sparse Arrays vs. Full Arrays
When you use sparse arrays, be aware that accessing elements in them is much slower than in full arrays. That’s because V8 doesn’t allocate a flat backing store for the elements if only a few of them are used. Instead, it manages them in a dictionary, which saves space, but costs time on access.
Test of sparse arrays versus full arrays.
Summing a full array and summing an array without zeros were actually the fastest. Whether the full array contains zeroes or not should not make a difference.
Packed Vs. Holey Arrays
Avoid “holes” in an array (created by deleting elements or writing a[x] = foo with x > a.length). Even if only a single element is deleted from an otherwise “full” array, things will be much slower.
Test of packed versus holey arrays.
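A short sketch of how a hole appears (the values here are arbitrary):

```javascript
// Writing past the end of an array creates holes (indices 3-9 here),
// which can push V8 towards a slower, dictionary-like representation.
var holey = [1, 2, 3];
holey[10] = 4; // now a "holey" array: avoid this

// Growing contiguously keeps the array packed and fast.
var packed = [1, 2, 3];
packed.push(4);

console.log(holey.length); // 11
console.log(packed.length); // 4
```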
Pre-allocating Arrays Vs. Growing As You Go
Don’t pre-allocate large arrays (i.e. greater than 64K elements) to their maximum size, instead grow as you go. Before we get to the performance tests for this tip, keep in mind that this is specific to only some JavaScript engines.
Test of empty literal versus pre-allocated array in various browsers.
Nitro (Safari) actually treats pre-allocated arrays more favorably. However, in other engines (V8, SpiderMonkey), not pre-allocating is more efficient.
// Empty array
var arr = [];
for (var i = 0; i < 1000000; i++) {
  arr[i] = i;
}

// Pre-allocated array
var arr = new Array(1000000);
for (var i = 0; i < 1000000; i++) {
  arr[i] = i;
}
Optimizing Your Application
In the world of Web applications, speed is everything. No user wants a spreadsheet application to take seconds to sum up an entire column or a summary of their messages to take a minute before it’s ready. This is why squeezing every drop of extra performance you can out of code can sometimes be critical.
Image source: Per Olof Forsberg.
While understanding and improving your application performance is useful, it can also be difficult. We recommend the following steps to fix performance pain points:
- Measure it: Find the slow spots in your application (~45%)
- Understand it: Find out what the actual problem is (~45%)
- Fix it! (~10%)
Some of the tools and techniques recommended below can assist with this process.
Benchmarking
There are many ways to run benchmarks on JavaScript snippets to test their performance — the general assumption being that benchmarking is simply comparing two timestamps. One such pattern was pointed out by the jsPerf team, and happens to be used in SunSpider‘s and Kraken‘s benchmark suites:
var totalTime,
  start = new Date,
  iterations = 1000;

while (iterations--) {
  // Code snippet goes here
}

// totalTime → the number of milliseconds taken
// to execute the code snippet 1000 times
totalTime = new Date - start;
Here, the code to be tested is placed within a loop and run a set number of times (1,000 in the example above). After this, the start date is subtracted from the end date to find the time taken to perform the operations in the loop.
However, this oversimplifies how benchmarking should be done, especially if you want to run the benchmarks in multiple browsers and environments. Garbage collection itself can have an impact on your results. Even if you’re using a solution like window.performance, you still have to account for these pitfalls.
Regardless of whether you are simply running benchmarks against parts of your code, writing a test suite or coding a benchmarking library, there’s a lot more to JavaScript benchmarking than you might think. For a more detailed guide to benchmarking, I highly recommend reading JavaScript Benchmarking by Mathias Bynens and John-David Dalton.
Profiling
The Chrome Developer Tools have good support for JavaScript profiling. You can use this feature to detect which functions are eating up most of your time so that you can then go optimize them. This is important, as even small changes to your codebase can have serious impacts on your overall performance.
Profiles Panel in Chrome Developer Tools.
Profiling starts with obtaining a baseline for your code’s current performance, which can be discovered using the Timeline. This will tell us how long our code took to run. The Profiles tab then gives us a better view into what’s happening in our application. The JavaScript CPU profile shows us how much CPU time is being used by our code, the CSS selector profile shows us how much time is spent processing selectors and Heap snapshots show how much memory is being used by our objects.
Using these tools, we can isolate, tweak and reprofile to gauge whether changes we’re making to specific functions or operations are improving performance.
The Profile tab gives you information about your code’s performance.
For a good introduction to profiling, read JavaScript Profiling With The Chrome Developer Tools, by Zack Grossbart.
Tip: Ideally, you want to ensure that your profiling isn’t being affected by extensions or applications you’ve installed, so run Chrome using the --user-data-dir <empty_directory> flag. Most of the time, this approach to optimization testing should be enough, but there are times when you need more. This is where V8 flags can be of help.
Avoiding Memory Leaks — Three Snapshot Techniques for Discovery
Internally at Google, the Chrome Developer Tools are heavily used by teams such as Gmail to help us discover and squash memory leaks.
Memory statistics in Chrome Developer Tools.
Some of the memory statistics that our teams care about include private memory usage, JavaScript heap size, DOM node counts, storage clearing, event listener counts and what’s going on with garbage collection. For those familiar with event-driven architectures, you might be interested to know that one of the most common issues we used to have were listen()’s without unlisten()’s (Closure) and missing dispose()’s for objects that create event listeners.
Luckily the DevTools can help locate some of these issues, and Loreena Lee has a fantastic presentation available documenting the “3 snapshot” technique for finding leaks within the DevTools that I can’t recommend reading through enough.
The gist of the technique is that you record a number of actions in your application, force a garbage collection, check if the number of DOM nodes doesn’t return to your expected baseline and then analyze three heap snapshots to determine if you have a leak.
Memory Management in Single-Page Applications
Memory management is quite important when writing modern single-page applications (e.g. AngularJS, Backbone, Ember) as they almost never get refreshed. This means that memory leaks can become apparent quite quickly. This is a huge trap on mobile single-page applications, because of limited memory, and on long-running applications like email clients or social networking applications. With great power comes great responsibility.
There are various ways to prevent this. In Backbone, ensure you always dispose of old views and references using dispose(). This function was recently added; it removes any handlers added in the view’s ‘events’ object, as well as any collection or model listeners where the view is passed as the third argument (callback context). dispose() is also called by the view’s remove(), taking care of the majority of basic memory cleanup needs when the element is cleared from the screen. Other libraries, like Ember, clean up observers when they detect that elements have been removed from view to avoid memory leaks.
Some sage advice from Derick Bailey:
“Other than being aware of how events work in terms of references, just follow the standard rules for managing memory in JavaScript and you’ll be fine. If you are loading data into a Backbone collection full of User objects and you want that collection to be cleaned up so it’s not using any more memory, you must remove all references to the collection and the individual objects in it. Once you remove all references, things will be cleaned up. This is just the standard JavaScript garbage collection rule.”
In his article, Derick covers many of the common memory pitfalls when working with Backbone.js and how to fix them.
There is also a helpful tutorial available for debugging memory leaks in Node by Felix Geisendörfer worth reading, especially if it forms a part of your broader SPA stack.
Minimizing Reflows
When a browser has to recalculate the positions and geometries of elements in a document for the purpose of re-rendering it, we call this reflow. Reflow is a user-blocking operation in the browser, so it’s helpful to understand how to improve reflow time.
Chart of reflow time.
You should batch methods that trigger reflows or repaints, and use them sparingly. It’s important to work off-DOM where possible. This is possible using DocumentFragment, a lightweight document object. Think of it as a way to extract a portion of a document’s tree, or create a new “fragment” of a document. Rather than constantly adding to the DOM using nodes, we can use document fragments to build up all we need and only perform a single insert into the DOM to avoid excessive reflow.
For example, let’s write a function that adds 20 divs to an element. Simply appending each new div directly to the element could trigger 20 reflows.
function addDivs(element) {
  var div;
  for (var i = 0; i < 20; i++) {
    div = document.createElement('div');
    div.innerHTML = 'Heya!';
    element.appendChild(div);
  }
}
To work around this issue, we can use a DocumentFragment and append each of our new divs to it instead. When the fragment is finally appended with a method like appendChild, all of the fragment’s children are moved into the element, triggering only one reflow.
function addDivs(element) {
  var div;
  // Creates a new empty DocumentFragment.
  var fragment = document.createDocumentFragment();
  for (var i = 0; i < 20; i++) {
    div = document.createElement('div');
    div.innerHTML = 'Heya!';
    fragment.appendChild(div);
  }
  element.appendChild(fragment);
}
You can read more about this topic at Make the Web Faster, JavaScript Memory Optimization and Finding Memory Leaks.
JavaScript Memory Leak Detector
To help discover JavaScript memory leaks, two of my fellow Googlers (Marja Hölttä and Jochen Eisinger) developed a tool that works with the Chrome Developer Tools (specifically, the remote inspection protocol), and retrieves heap snapshots and detects what objects are causing leaks.
A tool for detecting JavaScript memory leaks.
There’s a whole post on how to use the tool, and I encourage you to check it out or view the Leak Finder project page.
Some more information: In case you’re wondering why a tool like this isn’t already integrated with our Developer Tools, the reason is twofold. It was originally developed to help us catch some specific memory scenarios in the Closure Library, and it makes more sense as an external tool (or maybe even an extension if we get a heap profiling extension API in place).
V8 Flags for Debugging Optimizations & Garbage Collection
Chrome supports passing a number of flags directly to V8 via the js-flags flag to get more detailed output about what the engine is optimizing. For example, this traces V8 optimizations:
"/Applications/Google Chrome/Google Chrome" --js-flags="--trace-opt --trace-deopt --trace-bailout"
Windows users will want to run chrome.exe --js-flags="--trace-opt --trace-deopt --trace-bailout".
When developing your application, the following V8 flags can be used.
- trace-opt – log the names of optimized functions.
- trace-deopt – log a list of code it had to deoptimize while running.
- trace-bailout – find out where the optimizer is skipping code because it can’t figure something out.
- trace-gc – log a tracing line on each garbage collection.
V8’s tick-processing scripts mark optimized functions with an * (asterisk) and non-optimized functions with a ~ (tilde).
If you’re interested in learning more about V8’s flags and how V8’s internals work in general, I strongly recommend looking through Vyacheslav Egorov’s excellent post on V8 internals, which summarizes the best resources available on this at the moment.
High-Resolution Time and Navigation Timing API
High Resolution Time (HRT) is a JavaScript interface providing the current time in sub-millisecond resolution that isn’t subject to system clock skews or user adjustments. Think of it as a way to measure more precisely than we’ve previously had with new Date and Date.now(). This is helpful when we’re writing performance benchmarks.
High Resolution Time (HRT) provides the current time in sub-millisecond resolution.
HRT is currently available in Chrome (stable) as window.performance.webkitNow(), but the prefix is dropped in Chrome Canary, making it available via window.performance.now(). Paul Irish has written more about HRT in a post on HTML5Rocks.
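As a small, hedged sketch of putting HRT to use, here is a timing helper that prefers performance.now() where available and falls back to Date.now() otherwise (timeIt is an invented name, not a standard API):

```javascript
// Times how long `iterations` runs of fn take, using High Resolution Time
// where available, falling back to Date.now() in older environments.
function timeIt(fn, iterations) {
  var now = (typeof performance !== 'undefined' && performance.now)
    ? function () { return performance.now(); }
    : function () { return Date.now(); };

  var start = now();
  while (iterations--) {
    fn();
  }
  return now() - start; // elapsed milliseconds (fractional with HRT)
}

var elapsed = timeIt(function () { Math.sqrt(12345); }, 1000);
console.log(typeof elapsed); // number
```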
So, we now know the current time, but what if we wanted an API for accurately measuring performance on the web?
Well, one is now also available in the Navigation Timing API. This API provides a simple way to get accurate and detailed time measurements that are recorded while a webpage is loaded and presented to the user. Timing information is exposed via window.performance.timing, which you can simply use in the console:
Timing information is shown in the console.
Looking at the data above, we can extract some very useful information. For example, network latency is responseEnd - fetchStart, the time taken for a page load once it’s been received from the server is loadEventEnd - responseEnd, and the time taken between navigation and page load is loadEventEnd - navigationStart.
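Those subtractions can be wrapped in a small helper. The function name pageMetrics is our own invention, not part of the API; in a browser you would pass it window.performance.timing.

```javascript
// Derive the metrics described above from a Navigation Timing object.
// `pageMetrics` is a hypothetical helper name, not part of the API.
function pageMetrics(t) {
  return {
    networkLatency: t.responseEnd - t.fetchStart,
    pageLoadTime: t.loadEventEnd - t.responseEnd,  // once the response has arrived
    totalTime: t.loadEventEnd - t.navigationStart  // navigation through load event
  };
}

// In a browser console:
// pageMetrics(window.performance.timing);

// The helper also works on plain objects, which makes it easy to test:
var sample = { navigationStart: 0, fetchStart: 10, responseEnd: 210, loadEventEnd: 1210 };
console.log(pageMetrics(sample)); // { networkLatency: 200, pageLoadTime: 1000, totalTime: 1210 }
```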
As you can see above, a performance.memory property is also available that gives access to JavaScript memory-usage data such as the total heap size.
For more details on the Navigation Timing API, read Sam Dutton’s great article Measuring Page Load Speed With Navigation Timing.
about:memory and about:tracing
about:tracing in Chrome offers an intimate view of the browser’s performance, recording all of Chrome’s activities across every thread, tab and process.
about:tracing offers an intimate view of the browser’s performance.
What’s really useful about this tool is that it allows you to capture profiling data about what Chrome is doing under the hood, so you can properly adjust your JavaScript execution, or optimize your asset loading.
Lilli Thompson has an excellent write-up for games developers on using about:tracing to profile WebGL games. The write-up is also useful for general JavaScripters.
Navigating to about:memory in Chrome is also useful as it shows the exact amount of memory being used by each tab, which is helpful for tracking down potential leaks.
Conclusion
As we’ve seen, there are many hidden performance gotchas in the world of JavaScript engines, and no silver bullet available to improve performance. It’s only when you combine a number of optimizations in a (real-world) testing environment that you can realize the largest performance gains. But even then, understanding how engines interpret and optimize your code can give you insights to help tweak your applications.
Measure it. Understand it. Fix it. Rinse and repeat.
Image source: Sally Hunter.
Remember to care about optimization, but stop short of opting for micro-optimization at the cost of convenience. For example, some developers opt for .forEach and Object.keys over for and for...in loops, even though they’re slower, for the convenience of getting a fresh function scope on each iteration. Do make sanity calls on what optimizations your application absolutely needs and which ones it could live without.
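To illustrate the scoping convenience being traded for speed, here is a sketch of our own (not from the article) comparing a closure created inside a for loop with one created inside forEach:

```javascript
var items = ['a', 'b', 'c'];

// A plain for loop is typically faster, but the loop variable is shared,
// so closures created inside it all see its final value:
var handlersFor = [];
for (var i = 0; i < items.length; i++) {
  handlersFor.push(function () { return i; });
}
console.log(handlersFor[0]()); // 3 — not the 0 you might have expected

// forEach trades a little speed for a fresh scope on every iteration:
var handlersEach = [];
items.forEach(function (item, index) {
  handlersEach.push(function () { return index; });
});
console.log(handlersEach[0]()); // 0
```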
Also, be aware that although JavaScript engines continue to get faster, the next real bottleneck is the DOM. Reflows and repaints are just as important to minimize, so remember to only touch the DOM when it’s absolutely required. And do care about networking: HTTP requests are precious, especially on mobile, so use HTTP caching to reduce the number of requests you make and keep your assets small.
Keeping all of these in mind will ensure that you get the most out of the information from this post. I hope you found it helpful!
Credits
This article was reviewed by Jakob Kummerow, Michael Starzinger, Sindre Sorhus, Mathias Bynens, John-David Dalton and Paul Irish.
Image source of picture on front page.
(cp) (jc)
© Addy Osmani for Smashing Magazine, 2012.
Navigation Patterns: Exploration Of Single-Page Websites
We tend to think of navigating a website as clicking from page-to-page via some kind of global navigation that’s always visible. When it comes to a single page, we often think scrolling is the one and only way to move from one end to the next.
Sometimes global navigation and scrolling are the best, most appropriate ways to move about; however, they aren’t the only ways.
The websites in this article let you scroll, but they also provide additional cues and means for getting around. In several cases the designs encourage exploration, which is more engaging and teaches you how to navigate at the same time.
Jess and Russ
The Jess and Russ website is a wedding invitation, though it’s also something more. As it says at the top of the page, it is the story of Jess and Russ leading up to this moment. It’s a narrative that begins with a few details before they had met, leads to their meeting and falling in love, and culminates with the invitation (complete with RSVP form).
How do you navigate a story that’s told linearly through time? Sure, there are flashbacks and other narrative devices, but for the most part you tell the story from beginning to end. You move through it in a straight line and so here the navigation is simply scrolling through the page. Nothing more is needed.
I started this post suggesting we could provide more than scrolling. This example shows that, at times, scrolling is the most appropriate way to navigate. Jess and Russ’s website could easily have been broken up into several pages (navigated through the “next” and “previous” links at the bottom and top of each page). That would still keep things moving linearly, though each click would momentarily disrupt the narrative. In this case scrolling was the better choice.
Fortunately the website makes us want to scroll. Along the way we get an engaging story, filled with wonderful artwork and with interesting parallax effects. With this website you won’t get bored scrolling — instead, you’ll be looking forward to the next part of the story and how it will be told.
The story your design is telling may not be as linear as this one, though it’s likely parts of it will be. The lesson from Jess and Russ is that when you’re designing the linear parts of a website and you want people to move through it in a single direction, scrolling is possibly the best option. You also may want to consider a single, longer page as opposed to several shorter ones that are connected by links.
Ballantyne
Ballantyne creates luxury knitwear from cashmere. The website itself contains different types of information. There is the standard “About Us” and “Contact” information, as a start. Beyond that there are product images and chunks of text to go along with the images. It’s easy to imagine yourself thumbing through the pages of a catalog when browsing through this website.
As with Jess and Russ, this website is entirely on a single page, and as such, scrolling is once again a predominant way to navigate. It’s not the only way this time, though it’s perhaps the more interesting method.
On the landing section for the domain there are links that read “Established 1921” and “Contacts”. Clicking the former scrolls the page up to the “Who We Are” section (the “About Us” info) above. The latter scrolls you down through all the images and text to the bottom of the page and the contact information.
When arriving at either of these ends of the website you’re also presented with additional ways to navigate. The “Who We Are” part of the page contains an “X” to close it, though this information doesn’t actually open or close — it just scrolls you back to the main landing section for the page, which you can also do yourself.
At the top of the contact section of the page a header drops down containing the company name and the links for “Who We Are” and “Contacts”. Unfortunately, the company name isn’t clickable, even though a clickable logo is the convention for navigating back to a home location.
You can equally scroll through these two end sections of the page. As you do, there’s a nice parallax effect. The outer two columns scroll as you’d expect, while the middle column scrolls in the opposite direction. The effect creates additional interest beyond simple scrolling as more information and imagery pass through your view. The two header links along with the company name are also present as soon as you scroll below the root landing spot.
As with Jess and Russ, the Ballantyne website is more enjoyable to scroll than most. Here we’re also given an alternative means of navigation in addition to scrolling. There are a few problems, though:
- No link is provided to navigate back to the original landing location. You have to scroll to get there, or first go to the Who We Are section and close it. This seems odd.
- Clicking to either “Who We Are” or “Contacts” isn’t quite a smooth experience.
- There’s no way to scroll up to the “Who We Are” section.
- The link at the landing location to “Who We Are” reads “Established 1921” and it isn’t clear where it leads.
Another minor complaint is that, while scrolling, the images don’t always align where you’d like them to — you see a full image in one column, but not the others. This might have been done on purpose to get you to scroll slowly through the website, but I kept wanting things to align better. While it won’t ruin your experience of the website, it can be a little jarring.
Even though the above items could be improved, they hardly cause problems when navigating the website. We’re talking about a limited amount of content, and within a moment or two, you’ve figured out where everything is. While clicking to the end locations isn’t the smoothest experience, seeing everything scroll from one end to the other does show you quickly how to navigate the entire website. In fact, it’s this behavior that cues you in if you didn’t immediately realize to scroll.
The lesson here is that even if your page will most likely be scrolled, you can still provide alternate options to navigate and help people understand what’s located on the page.
Cadillac ATS vs The World
Unlike the two websites above, Cadillac is a website with a couple of separate pages. Here we’ll look at one section of the website, specifically one page within that section. One of the ways Cadillac is promoting the ATS is as a vehicle that can take you anywhere and exhilarate you as it does.
The designers have set up a section of the website where you can explore four interesting locations around the world that you might not ordinarily get to see. It’s these location pages that we will consider here.
A navigation bar remains fixed at the top of each of these pages making it easy to get back to the main section page, or switch to one of the other three locations. If you hover over the Cadillac logo, the global navigation appears and allows you to get to any part of the website.
We’re here to explore, though, and there’s an immediate cue for how to go about it. An animation of a series of downward-pointing arrows suggests where to look. They direct your eye to another downward-pointing shape with the words “watch the video”. Both the shape and the words are a link.
Clicking scrolls a video from below into place. Below the video is another now familiar downward pointing shape with the words “ATS vs The Wind”. Clicking once again scrolls content from below, this time complete with a change of background image and parallax effect.
Each subsequent click scrolls to a new part of the page. You can navigate the entire page by clicking one shape after another until you reach the end, where you can check in (share on Facebook, Twitter, or Google+) or visit one of the other three locations.
You could, of course, scroll through the entire page instead of clicking at each stop — you’ll experience the parallax effect a little more, but otherwise navigating the page will be the same until you want to move back up the page (as there are no upward pointing shapes to click).
There are two additional ways to navigate, both located along the right edge of the page. At the very edge is a scroll bar, though not the default one that comes with the browser. It works exactly as you would expect and provides an immediate cue that there’s more on the page than on the screen.
Just inside this scrollbar is a long thin column with a series of lighter and darker dots. Clicking on any dot will take you to a specific section within the page. The dots also offer additional clues about the page.
Lighter dots mark the start of a section. Darker dots take you to a location within each section. Each section is further reinforced by a line separator.
Clicking any dot scrolls the page to the given section or sub-section. Hovering over a dot brings up a tooltip pointing to the light dot and containing the heading for that section.
As with the websites above, everything here works well — the content is limited, and it won’t take long to work out the organization. You’re also encouraged to explore each location in each section, and cues are provided to help in your exploration.
- The downward pointing shapes invite you to click and get started.
- Content scrolling into place after a click suggests you can scroll the page on your own.
- The scroll bar along the right edge further suggests scrolling and provides another mechanism to do so.
- The chapter/timeline feature might be the last thing you discover, but it’s ultimately the quickest way to navigate the page.
Each location is a new destination to explore — both literally (as a new page) and figuratively (with the content each contains). It’s part of the fun, and puts you in discovery mode from the start.
Aside: The main Cadillac website has more conventional navigation (a horizontal navigation bar with drop-downs), though it’s very nicely done and worth a look. The drop-downs present quite a bit of useful information.
The lesson here is that you can provide several ways to navigate for different types of visitors. You should provide immediate cues for how to begin navigation and let more advanced users discover other means to navigate as they explore.
Bleep Radio
Bleep Radio also encourages you to explore their single-page website. Unlike the websites above, there’s less of a directional nature to the scrolling. What you want to do could be located on any part of the page. As with the Cadillac ATS pages, there are visual cues in the form of triangles that suggest they are clickable for navigation.
Any browser window opened to at least 1200×900 pixels will show most of the main menu, which sits inside a large triangle bearing the word Discover (again, encouraging exploration). The Program link takes you to a section above the page (like Who We Are on Ballantyne). Again, there is an X to get back.
Aside from the Program link, most of the other links are located in the main Discover triangle. And of course, you can scroll up and down the page to find different content.
While the layout is certainly original and interesting, I don’t think the navigation here works as well as with the other websites, for a few reasons:
- Unless you navigate to a section toward the top or bottom of the page, you’re left with no way back other than scrolling. The Discover triangle is only present at the top and bottom.
- Some triangles are clickable, while others aren’t, creating a bit of confusion as to what is and isn’t navigation.
- The page is always wider than the browser, no matter what size it’s opened to. Scrolling vertically will at times shift things to the right or left.
In all fairness to the website, it’s written in Greek (and I don’t speak Greek) so I could easily be missing some obvious cues.
On a more positive note, the website does have some qualities that are both nice and fun:
- Clicking the Just Bleep triangle at the top clears away most of the content on the page so that you can focus on the task at hand. Nothing specifically happens for me after clicking Just Bleep (though I’m guessing it would, were I logged into the website).
- The bleeper section is a grid of member images. There are a few triangles sitting atop the images, and hovering over them results in their shifting to the right or left. There’s no functional purpose, but it lends an interactive feel to the website.
One other thing to point out is the triangle along the right edge that remains fixed in place when scrolling. Clicking on it opens the current on-air Bleep, along with some social buttons. I can’t help but think navigating the website would be easier if the Discover menu was similarly fixed in place along the left edge.
The lesson here is that a unique and creative design can encourage exploration, however you should be consistent in your navigational cues. If a shape, color or specific style is a link in one place, it should be a link everywhere it occurs, or it risks confusing your visitors.
EVO Energy: The Interactive U.K. Energy Consumption Guide
The Interactive U.K. Energy Consumption Guide from EVO Energy is what information graphics on the Web should be. As with the Cadillac website, we’re looking at a single page within a larger website. And as with all the pages, the primary way to navigate is to scroll from top to bottom.
However, scrolling isn’t the only way to navigate the content here. You are expected to interact with the page in order to get most of the information it contains.
For example, the first interactive section on the page offers data about the total primary energy consumption from fuel used in the United Kingdom. The main graphic is a tree with circles of various colors representing leaves. Each color is associated with a different type of fuel…
- Electricity
- Biomass
- Gas
- Petroleum
- Solid Fuel
The more colored circles shown in the graphic, the more that fuel contributes to the total. Each of the fuel types is listed in another graphic to the right, and hovering over them reveals the actual percentage of the fuel within the total.
To the left is another list allowing you to view the same data over different decades. With a couple of hovers and clicks, you will see that solid fuel accounted for 47% of the total in 1970 and only 15% of the total in 2010.
There’s little in the way of text on the page outside of a few basic bits of information and occasional instructions. Hardly more is needed (though a little extra text could enhance the graphics).
These interactive infographics take advantage of what the Web can do and through interaction the information sinks in a lot more. You aren’t just being presented information — you’re actively selecting the information you want to see, making it more likely that you’ll pay attention and remember it.
The only issue I have with the page is that some panels aren’t interactive. After interacting with so many, I felt cheated when all of a sudden I couldn’t interact with one.
The lesson here is that navigation is more than moving about a website or Web page; it can also be a way to bring content to you. Instead of something that takes your visitors from one location on a page or website to another, navigation can be about replacing content in place — a much more engaging way to interact with a website.
Summary
The five examples above naturally allow you to scroll up and down their pages — but they don’t stop there, as they provide additional cues and means for you to get around.
Some of the lessons these websites teach us about navigation:
- Choose appropriate navigation based on the needs of the content.
- Provide alternate forms of navigation when it benefits your visitor.
- Provide immediate and obvious cues about how to navigate.
- Offer advanced ways to navigate for advanced users.
- Encourage exploration, but don’t require it for navigation to be usable.
- You don’t always have to take people to the content — you can bring the content to them.
Hopefully this brief look at the websites above will get you to explore further and help you generate ideas for alternate ways to navigate content.
(jvb)
© Steven Bradley for Smashing Magazine, 2012.
Uncle Sam Wants You (to Optimize Your Content for Mobile)
Americans deserve a government that works for them anytime, anywhere, and on any device.
It’s easy to get frustrated by the pace of change in mobile. Companies drag their feet about actually delivering content and services optimized for mobile devices, commissioning yet more research to “prove” the need for a mobile strategy. Meanwhile, we tap away at our ever-more-capable smartphones and tablets, pinching and zooming our way through sites designed for a much larger screen.
Now we can find inspiration for taking quick action in mobile from an unexpected source: the government. President Obama has ordered executive branch federal agencies to make at least two key services available on mobile devices over the next year.
The initiative to optimize content for mobile is part of the larger Digital Government strategy aimed at building a twenty-first-century platform to better serve the American people. This strategy outlines a sweeping vision for how to deliver government services more efficiently and effectively, and it covers everything from how government agencies can share technology and resources more effectively to how to maintain the privacy and security of sensitive government data.
But running through the entire Digital Government strategy is a consistent thread: The government needs to communicate with and deliver services for its citizens on whatever devices they use to access the web.
If it’s true for the government, it’s probably true for your company, too. Your customers are using mobile devices to access your content—you need a strategy to communicate with them where they are.
The latest personal computing revolution
Nearly everyone is carrying smart devices in their pockets that have incredible computing power. It’s creating a dynamic, both inside the walls of government and outside, where citizens are really demanding more.
Imagine you don’t have a broadband internet connection at home. Your employer could see how many hours you waste looking up elementary school classmates on Facebook. Your ridiculous Google searches for things like “What is the probability of being killed by a Pop-Tart?” and “What does rhino milk taste like?” and “Ned Flanders shirtless or nude”—all visible to the eyes of prying co-workers as they walk past your desk. All your por… okay, let’s not go there.
Sure, we all indulge in embarrassing behavior on the internet, best left to the privacy of our own homes. But don’t forget about all the perfectly normal information we want to look up online—research we might not want our employers, coworkers, or strangers in a public computer lab to know about. Looking for a new job. Learning more about a medical condition. Checking a bank statement. Even shopping for Christmas presents.
If you’re like me, you’ve enjoyed the convenience of having a home internet connection for almost twenty years. It’s easy to lose sight of the fact that for many people, access to the internet is a luxury.
Thirty-five percent of Americans have no internet access at home. More than a third of Americans don’t have easy access to personal medical information, tools for financial planning, and animated GIFs of surprised cats.
Race, income, and education level all influence whether people will have a home broadband connection. Roughly 50 percent of both Black Americans and Hispanic Americans have no broadband connection at home. Nearly 60 percent of Americans making less than $30,000 per year don’t have a broadband connection at home. And Americans who don’t have a high school diploma? A whopping 88 percent of them lack a broadband connection at home.[1] The digital divide is real.
But as of early 2012, 88 percent of American adults reported they do have a mobile phone.[2] As of July, 55 percent of those Americans who own a mobile phone have a smartphone—and two thirds of people who had acquired a new phone in the previous three months chose to get a smartphone.[3]
As more and more people acquire smartphones, many of those who don’t currently have access to the internet will suddenly have it in the palm of their hands. A growing number of people who cannot afford to pay for both mobile phone and broadband internet access pick one device—the phone.
The mobile-mostly user
Mobile was the final front in the access revolution. It has erased the digital divide. A mobile device is the internet for many people.
The stories about how mobile computing has changed human behavior often emphasize the developing world. Billions of people will only ever connect to the internet from a mobile phone. That development may seem positive and exciting, but remote. We assume that a “mobile-only” user is as foreign to us as a villager in Africa, in India, in China.
We’re wrong.
A large and growing minority of internet users in the U.S. are mobile only. As of June 2012, 31 percent of Americans who access the internet from a mobile device say that’s the way they always or mostly go online—they rarely or never use a desktop or laptop computer.[4] Most tellingly, people who are less likely to have a broadband connection at home are more likely to rely solely or mostly on their mobile device for internet access:
- 51 percent of Black Americans
- 42 percent of Hispanic Americans
- 43 percent of Americans earning less than $30,000 per year
- 39 percent of Americans with a high school or lower education
Even some populations who do have access to a broadband connection and a full-size computer prefer to browse the internet on their phones. Do you want to communicate with people in the coveted 18–29 demographic? Good luck pulling them away from their smartphones. A whopping 45 percent say most of their internet browsing is on their phones.
By 2015, more Americans will access the internet through mobile devices than through desktop computers, according to a prediction by International Data Corporation. Some of these people may still have access to the desktop web, but, for reasons of context, ease, or sheer laziness, will choose their mobile first. For others, there will be no other way to view your content.
For this growing population, if your content doesn’t exist on the mobile screen, it doesn’t exist at all.
It’s never too early for content strategy
Will your organization be ready? Are you moving right now to develop a content strategy for mobile?
Or are you telling yourself that mobile is a blip, a grace note, a mere satellite to the larger desktop website? Do you think that offering a full set of content on mobile is a “nice-to-have,” something to think about only after investing in yet another redesign of the “real” website?
Delivering content on mobile isn’t an afterthought. It’s a requirement. It isn’t a luxury. It’s a necessity.
Think of any piece of information people may want to access on the internet. They will need to access it on a device that isn’t a desktop computer. Do people want to look it up? They’ll want to look it up on mobile. Do people need to search for it? They’ll want to search for it on mobile. Do people want to read it, deeply and fully? They’ll expect to read it on mobile. Do they need to fill it out, document it, and enter it into the system? They’ll need to do it on mobile.
This goes double for any organization that needs to reach people outside mainstream desktop users. Government agencies have a civic responsibility to deliver content to all citizens, which means providing it to them on mobile. Organizations seeking to deliver public health messages to the most at-risk populations won’t reach them—unless that content is available on mobile. Are you an equal-opportunity employer? Not unless you’re delivering your content where African American and Hispanic users can find it. You can’t assume all people in these groups will take the extra step to go find your desktop website.
You need to bring it to where they are. Which is mobile.
How to do it
The United States’ Digital Government strategy outlines a customer-centric approach to optimizing content for mobile. Your organization might not need to take a page from the government procurement handbook, but the roadmap for getting government information and services onto mobile might help you get your content out there, too.
A few highlights from the U.S. government’s approach that you should consider for your organization:
Manage structured content
Today, 57 percent of domains under development by American federal agencies are not being built with a CMS.[5] As a result, managing and updating content is a cumbersome and difficult process. Content is locked up in static web pages (or worse, in printed documents), which makes it difficult or impossible to move content to a new platform.
The government encourages agencies to explore open-source CMS tools—and to model their content, turning unstructured pages into structured data with appropriate metadata attached.
Create presentation-independent content
Instead of focusing on the final delivery and presentation of the information—whether publishing web pages, mobile applications, or brochures—government agencies now are encouraged to take an “information-centric” approach. This means separating content from presentation, and instead describing the content more fully with a complete taxonomy and authoritative metadata, so the same content can be reused in a variety of contexts.
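As a hypothetical sketch of what “information-centric” content might look like (every field name here is invented, not taken from the strategy), the content lives as structured data with its metadata, and each channel decides how to present it:

```javascript
// Hypothetical structured-content record: the content itself plus
// authoritative metadata, with no presentation baked in.
var record = {
  title: 'How to Renew a Passport',
  summary: 'Steps and fees for renewing a U.S. passport by mail.',
  body: 'To renew by mail, complete the renewal form and...',
  metadata: {
    topics: ['travel', 'documents'],
    audience: 'general-public',
    updated: '2012-11-01'
  }
};

// Each channel chooses its own presentation of the same record:
function desktopView(r) { return r.title + '\n' + r.summary + '\n' + r.body; }
function mobileTeaser(r) { return r.title + ' - ' + r.summary; }

console.log(mobileTeaser(record));
```

The same record could equally feed a brochure, a mobile app, or an API response, which is the point of separating content from presentation.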
Treat content as a service
Making government content and data available through APIs increases the value of that data. When the government made GPS and weather data available to the public, it fueled multibillion-dollar industries.
By structuring their content and data, then treating it as a service that can be accessed via an API, the government can make content available to more people and more devices—while maintaining security and control over confidential information.
You can do it. Start now.
The U.S. government has recognized that their goal is not to publish brochures, handbooks, or binders. Their goal is not to publish web pages, either. Their goal isn’t even to publish apps. Their goal is to keep American citizens informed.
The government’s challenge and responsibility is making information available in whatever format American citizens want to consume it. We get to decide what device we want to use—the government can’t impose that on us.
The same is true for your organization. You already have customers who only access your content from their mobile device—and that number will only grow. If your web content isn’t fully optimized for mobile, you’re invisible to a large and growing subset of Americans.
It’s not too late, but the time to start is now. Develop a roadmap and put a strategy in place to start publishing your content on mobile devices.
References
[1] From the Pew Internet Project’s Digital Differences report
[2] From Pew’s mobile research
[3] According to a 2012 report from Nielsen
[4] From Pew’s 2012 Cell Internet Use report
[5] According to the State of the Federal Web report (PDF) from the .gov Reform Task Force
Translations:
Italian
- Illustration by Kevin Cornell
RSS readers: Don't forget to join the discussion!
Your Content, Now Mobile
We are pleased to present you with this excerpt from Chapter 1 of Content Strategy for Mobile by Karen McGrane, now available from A Book Apart. —Ed.
When we talk about how to create products and services for mobile, the conversation tends to focus on design and development challenges. How does our design aesthetic change when we’re dealing with a smaller (or higher-resolution) screen? How do we employ (and teach) new gestural interactions that take advantage of touchscreen capabilities? How (and who) will write the code for all these different platforms—and how will we maintain all of them?
Great questions, every one. But focusing just on the design and development questions leaves out one important subject: how are we going to get our content to render appropriately on mobile devices?
The good news is that the answer to this question will help you, regardless of operating system, device capabilities, or screen resolution. If you take the time to figure out the right way to get your content out there, you’ll have the freedom (and the flexibility) to get it everywhere. You can go back to thinking about the right design and development approaches for each platform, because you’ll already have a reusable base of content to work from.
The bad news is that this isn’t a superficial problem. Solving it isn’t something you can do in isolation, by sandboxing off a subset of your content in a stripped-down mobile website or app. The solution requires you to look closely at your content management system, your editorial workflow, even your organizational structure. You may need different tools, different processes, different ways of communicating.
Don’t despair. There’s even better news at the end of this rainbow. By taking the time now to examine your content and structure it for maximum flexibility and reuse, you’ll be (better) prepared the next time a new gadget rolls around. You’ll have cleared out all the dead wood, by pruning outdated, badly written, and irrelevant content, which means all your users will have a better experience. You’ll have revised and updated your processes and tools for managing and maintaining content, which means all the content you create in every channel—print, desktop, mobile, TV, social—will be more closely governed.
Mobile is not the “lite” version
It looks like you're on a train. Would you like me to show you the insultingly simplified mobile site?
—Cennydd Bowles (http://bkaprt.com/csm/15)
If people want to do something on the internet, they will want to do it using their mobile device. Period.
The boundaries between “desktop tasks” and “mobile tasks” are fluid, driven as much by the device’s convenience as they are by the ease of the task. Have you ever tried to quickly look up a bit of information from your tablet, simply because you’re too lazy to walk over to your computer? Typed in a lengthy email on your BlackBerry while sitting at your desk, temporarily forgetting your keyboard exists? Discovered that the process to book a ticket from your mobile was easier than using the desktop (looking at you, Amtrak!) because all the extra clutter was stripped away?
Have you noticed that the device you choose for a given activity does not necessarily imply your context of use?
People use every device in every location, in every context. They use mobile handsets in restaurants and on the sofa. They use tablets with a focused determination in meetings and in a lazy Sunday morning haze in bed. They use laptops with fat pipes of employer-provided connectivity and with a thin trickle of data siphoned through expensive hotel Wi-Fi. They use desktop workstations on the beach—okay, they really only use traditional desktop machines at desks. You’ve got me on that one.
Knowing the type of device the user is holding doesn’t tell you anything about the user’s intent. Knowing someone’s location doesn’t tell you anything about her goals. You can’t make assumptions about what the user wants to do simply because she has a smaller screen. In fact, all you really know is: she has a smaller screen.
The immobile context
Users have always accessed our content from a variety of screen sizes and resolutions. Data reported by SecureCube shows that in January 2000, the majority of users visited from a browser with an 800×600 resolution, but a significant minority (twenty-nine percent) accessed the site at 1024×768 or higher, with a smaller percentage (eleven percent) viewing the site at 640×480 (http://bkaprt.com/csm/16; fig 1.1). At that time, decisions about how best to present content were seen as design challenges, and developers sought to provide a good reading experience for users at all resolutions, discussing appropriate ways to adjust column widths and screen layouts as content reflowed from smaller to larger screens.
Fig 1.1: We have plenty of experience delivering content to a variety of screen resolutions. Why do we assume that mobile screens necessarily indicate a different context?
What you didn’t hear designers talking about was the “640×480 context” and how it differed from the “1024×768 context.” No one tried to intuit which tasks would be more important to users browsing at 800×600, so less important options could be hidden from them. No one assumed that people’s mindset, tasks, and goals would be different, simply because they had a different-sized monitor.
Why do we assume that mobile is any different?
Mobile tasks, mobile content
I recently departed Austin, Texas, traveling with three friends. Since we arrived at the airport a bit early, I wanted to lounge in the comfort of the United Club, away from the teeming masses. I felt it would be rude to abandon my friends to a similar fate outside, and so I wanted to know how many guests I could bring with me to the club.
A simple Google search should clear up this problem. Sure enough, I quickly found a link that seemed promising (fig 1.2).
Fig 1.2: Searching for “United Club Membership” shows that the content exists on the desktop site. But because the mobile website redirects the URL, users wind up on the homepage of the mobile site.
Alas, following the link to United Club Membership just took me to the homepage for mobile.united.com. When users search from a mobile device, United automatically redirects links from Google to its mobile website—without checking to see if the content is available on mobile. If the content doesn’t exist on mobile, the user gets unceremoniously dumped on the homepage of the mobile website. Mobile redirects that break search—how is that ever a good user experience?
Sure, there’s a link to the full desktop site, but that too just dumped me on the desktop homepage. I could try to use United’s internal site search, but I’d wind up pinching and zooming my way through several search result screens formatted for the desktop. And honestly: why should I have to? An answer that should take me one tap from the Google search results should not require searching and tapping through several pages on both the mobile and the desktop sites.
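The fix for this pattern is straightforward: redirect a mobile visitor only when an equivalent mobile page actually exists, and otherwise leave them on the page they asked for. As a rough sketch (the paths and mapping here are hypothetical, not United's actual URLs), the logic might look like this:

```python
# Hypothetical sketch of deep-link-preserving mobile redirects.
# The paths below are invented for illustration.
MOBILE_EQUIVALENTS = {
    "/flight-status": "/mobile/flight-status",
    "/check-in": "/mobile/check-in",
}

def redirect_target(path, is_mobile_device):
    """Return the path to serve for this request.

    Never dump the user on the mobile homepage: if no mobile
    equivalent of `path` exists, keep them on the desktop page
    they originally followed a link to.
    """
    if is_mobile_device and path in MOBILE_EQUIVALENTS:
        return MOBILE_EQUIVALENTS[path]
    return path
```

A search result pointing at a page with no mobile equivalent (say, a club membership page) would simply load the desktop version, which is frustrating but navigable; a redirect to the mobile homepage discards the user's intent entirely.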
I went and asked the representative at the desk. (Correct answer: two guests.)
I don’t bring this up just because I want to shame United for wantonly redirecting links to a mobile URL when the content isn’t available on its mobile website. (That’s a terrible thing to do, but it comes after a long list of other bad things I’d like to shame United Airlines for doing.) No, I use this example to illustrate a common misconception about mobile devices: that they should deliver only task-based functionality, rather than information-seeking content.
Information seeking is a task
Luke Wroblewski, in his book Mobile First, tells us that Southwest Airlines is doing the right thing by focusing only on travel tasks (fig 1.3):
The mobile experience…has a laser-like focus on what customers need and what Southwest does: book travel, check in, check flight status, check miles, and get alerts. No room for anything else. Only what matters most.
Fig 1.3: The Southwest Airlines iPhone application only has room for what actually matters…if what matters doesn’t involve looking up information.
Mobile experts and airline app designers don’t get to decide what “actually matters.” What matters is what matters to the user. And that’s just as likely to be finding a piece of information as it is to be completing a task.
Eighty-six percent of smartphone owners have used their phone in the previous month to look up information—whether to solve a problem, settle an argument, get up-to-the-minute information such as traffic or sports scores, or to decide whether to visit a business like a restaurant (http://bkaprt.com/csm/27). Don’t believe me? Look at your own search history on your mobile device—you’ve probably tried to answer all sorts of questions by looking up information on your phone.
The Southwest Airlines desktop website includes information about its baggage policies, including policies for checked bags, carry-ons, and pets, as well as lost and found, delayed baggage, and a variety of other traveler information, such as what to do if you lose your ticket, need to rebook, or find your flight overbooked. It even includes information for parents looking to book travel for unaccompanied minors, and details on how Southwest accommodates disabled flyers and the elderly.
The mobile experience does not. Who are we to say that this content doesn’t actually matter?
It’s fine to optimize the mobile experience for the most common tasks. But that doesn’t mean that you should exclude valuable content.
Mobile is social
Have you ever clicked on a link from Facebook or Twitter on your phone? How about a link someone sent you in an email?
Fig 1.4: “No mobile content found. Would you like to visit the desktop version of the site?” asks The Guardian. Can you guess the answer?
Of course you have. Sharing content with our friends and colleagues is one of the bedrock ways we communicate these days. Users don’t distinguish between accessing email, Facebook, Twitter, or other social services on the desktop or on mobile—they choose them fluidly, depending on which device they’re closest to at the time. In fact, as of June 2012, nearly twenty percent of Facebook members use it exclusively on mobile (http://bkaprt.com/csm/28).
If your content isn’t available on mobile—or provides a bad reading experience—you’re missing out on one of the most compelling ways to get people to read it. Is your site littered with icons trying to get people to share your content? If your readers just get an error message when they tap on shared content, all the effort you put into encouraging social sharing is wasted (fig 1.4).
Designing for context
“Context” is the buzzword everyone throws around when talking about mobile. At the South by Southwest Interactive conference in 2011, the panel called “Designing for context” was the number one must-see session, according to .net Magazine (http://bkaprt.com/csm/29).
The dream is that you can tailor your content for the user’s context—location, time of day, social environment, personal preferences. Based on what you know about the user, you can dynamically personalize the experience so it adapts to meet her needs.
Today, we use “designing for the mobile context” as an excuse to make mobile an inferior experience. Businesses want to invest the least possible time and effort into mobile until they can demonstrate return on investment. Designers believe they can guess what subset of information or functionality users want. Everyone argues that they’re designing for the “mobile use case.”
Beware of personalized interfaces
Presuming that the “designer knows best” when choosing how to deliver personalized content or functionality is risky. We’re notoriously bad about predicting what someone will want. Even armed with real data, we’re likely to make incorrect assumptions when we decide to show some things and hide others.
Microsoft Office tried this strategy in the late 1990s. Office 97 offered many new features and enhancements, which made the user interface more complex. Long menus and dense toolbars gave the impression that the interface was “bloated” (http://bkaprt.com/csm/30). (Sound like any desktop websites you know?)
In response, Microsoft developed “personalized menus” and “rafted toolbars,” which showed the most popular items first (fig 1.5). Although Microsoft had good data and a powerful algorithm to help determine which items should be presented first, it turned out that users didn’t like being second-guessed. People found it more frustrating to go through a two-stage process, hunting through multiple menus to find what they were looking for. Personalized menus violated one of the core principles of usable design: put the user in control.
Fig 1.5: Personalized menus in Office 97 attempted to prioritize only the options Microsoft thought users wanted. They were a failure.
Now imagine that instead of clicking a chevron at the bottom of the menu to expand it, the user has to click a link to “full desktop website” and then hunt around in the navigation while squinting at a tiny screen. If your website’s mobile version only offers a subset of your content, you’re giving your users the same frustrating experience. Only much worse.
You don’t have good data
Microsoft had a ton of data about which options people used most frequently. They developed a complex algorithm to present the default “short” menu based on the items people were most likely to want, based on years of history and research with multiple iterations of their product. And they still made mistakes.
The choices you make about which subset of content you want to deliver probably aren’t backed up by good data. They might not be backed up by any research at all, just a gut feeling about which options you imagine will be most important to the mythical on-the-go user.
Even if you do have analytics data about which content people are looking for on mobile, it’s not likely you’re getting an accurate picture of what people really want. Today’s crippled mobile experiences are inadequate testing grounds for evaluating what people wish they could do on mobile. As Jason Grigsby, cofounder of CloudFour.com and MobilePortland.com, says:
We cannot predict future behavior from a current experience that sucks (http://bkaprt.com/csm/31).
If your vision for mobile is designing for context, then the first step you need to take is getting all your content onto mobile devices.
All of it? Really?
Really. Your content strategy for mobile should not be to develop a satellite to your desktop site, showing only the subset of content you’ve decided a mobile user will need. That’s not going to work because:
- People move fluidly between devices, often choosing a mobile device even when they have access to a desktop computer. Don’t assume you can design for “the on-the-go user” because people use their mobile devices anywhere and everywhere.
- Mobile-only users want and need to look at your content too! Don’t treat them like second-class citizens just because they never or rarely use the desktop. Even if you think of them as “mobile-mostly” users, remember that you don’t get to decide which device they use to access your content. They do.
- Mobile supports reading content just as well as it supports functional tasks. Don’t pat yourself on the back just because you’ve mobile-ized some key features—there’s more work to do with your content.
- Context is a cop out. Don’t use context as a rationale to withhold content unless you have real research and data about what users need in a given situation or environment. Unless you have that, you’re going to guess wrong. (And even if you do have that—given the crappy experiences most users get on mobile today, you’ll still probably guess wrong.)
Never force users to go to the desktop website for content they’re seeking on a mobile device. Instead, aim for content parity between your desktop and your mobile experiences—maybe not exactly the same content presented exactly the same way, but essentially the same experience.
It is your mission to get your content out, on whichever platform, in whichever format your audience wants to consume it. Your users get to decide how, when, and where they want to read your content. It is your challenge and your responsibility to deliver a good experience to them.