How Should Your Mobile and Desktop Sites Differ?

In this article we'll share the usability research insights we've gained on how the mobile and desktop versions of your e-commerce site can and should differ.

This is the 5th in a series of 8 articles on mobile commerce usability that draw on findings from our 2013 m-commerce usability report.

When defining, designing, and structuring your mobile commerce site, should you slim down content and features, or try to stuff it all into the mobile version as well? During our mobile commerce usability study, the test subjects encountered m-commerce sites adopting widely different approaches. It turned out that some approaches had dire outcomes. Here’s a glimpse into the complex dilemma of which content and features to share across the mobile and desktop versions of an e-commerce site.

Content should be the same

First off, it is very important to distinguish both between the type of site (e-commerce sites versus other websites) and between content and features. The following observations from the test sessions apply specifically to an e-commerce context; user behavior might differ if your mobile site is a news portal, blog, company site, intranet, web service, etc.

Now, with regard to the amount of content you should have in the mobile version, the findings are clear: during testing, a limited set of content led to endless misunderstandings, poor shopping experiences, and abandonments. The problem with limited content on the mobile site is that users very often don’t realize it is limited, as they expect (and thus assume) all content will be available. Reduced content in the mobile version was particularly problematic for two specific types of content: 1) the product catalog, and 2) Help & FAQ content.

(Note that there’s a distinction between having all “content” and having all “features” on your mobile site. Our research shows that you must have all content, but that features may differ where sensible.)

Product catalog

One of the tested mobile sites, H&M, offered only a highly limited product catalog, and as a result the subjects had, by far, the worst overall shopping experience at that site. It turns out that limiting the product catalog in the mobile version introduces an incredibly complex communication task: telling the customer that something is missing, communicating what the missing content is, explaining why it was omitted, and directing the customer to where they can find it.

H&M's mobile site, with a limited catalog, misled the subjects into believing the entire product catalog was there.

The mobile version of H&M featured 10-20 selected products and dedicated the rest of the site to fashion news, events, etc. However, those 10-20 featured products misled every single test subject into believing that the site featured the entire product catalog: because they were able to find some products, they figured the rest of the catalog had to be somewhere on the site. Some subjects ended up leaving the site after spending more than 10 minutes looking for the full product catalog, never realizing it wasn’t there.

This tendency was observed in other scenarios as well. For example, when a search query returned no results and the subject had already tried a few synonyms, they always concluded that the site didn’t carry that particular item, never considering that it might “just” have been left out of the mobile version.

Considering how severe the consequences are when users misunderstand a limited product catalog, and how incredibly difficult the limitation is to explain, your mobile site should always feature the full product catalog. (Or have no catalog at all, to clearly communicate to users that this isn’t a mobile commerce site.)

Help pages

The issues during testing were not related only to limited product catalogs. The same proved true for mobile help sections, which at many sites were severely limited compared to the desktop version, often offering only basic help or help relating to mobile devices (cookies, rendering issues, etc.).

At REI, one subject found it ironic that he could visit the mobile “Help” (first image), then the “FAQ”, and then “Help: Shopping & Products” (second image), all without finding any indication of delivery methods and speed, yet when he left the mobile site for the full site, he located the “Shipping Info” help page within seconds (third and fourth images).

In many cases something as trivial as learning more about the site’s shipping speed and costs, or the basic terms for returning an item, turned out to be so difficult for the subjects that they often gave up on the task, typically because the info simply wasn’t there in the mobile version, or because it was buried somewhere in a 30-screen-long Terms & Conditions legal text.

Form can be different

While the entire product catalog and all the help and static page information must be in the mobile version, the form of that content can change for the mobile context (and often should).

Best Buy divided the product description into concise sections with short bolded headers for easy scanning and slightly more in-depth (but still very brief) descriptions in a lighter grey.

A good example of where a mobile-optimized format makes sense is product descriptions. On mobile, the product page description should be optimized for the mobile context, with short, easily scannable bullet points and additional info in collapsed sections, instead of displaying everything directly on the product page or on separate sub-pages (which often cause scope confusion on mobile commerce sites).
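To make the collapsed-sections pattern concrete, here is a minimal sketch of one way to implement it with plain DOM scripting. The markup hooks (the .section-header class name and the header/body sibling structure) are illustrative assumptions, not taken from Best Buy's actual implementation:

```typescript
// Minimal sketch: collapse each description section behind its header.
// Class names and markup structure below are assumptions for illustration.
document.querySelectorAll<HTMLElement>(".section-header").forEach((header) => {
  const body = header.nextElementSibling as HTMLElement | null;
  if (!body) return;

  body.hidden = true; // start collapsed so the page stays short and scannable
  header.setAttribute("role", "button");
  header.setAttribute("aria-expanded", "false");

  header.addEventListener("click", () => {
    body.hidden = !body.hidden; // toggle the section open/closed
    header.setAttribute("aria-expanded", String(!body.hidden));
  });
});
```

Keeping the collapsed sections on the product page itself (rather than on sub-pages) avoids the scope confusion mentioned above, since the user never leaves the page.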

Similarly, static help pages can be optimized for the mobile context, where the 4″ screen imposes a need for copy to be very concise and easy to scan (typically with much narrower sections, sub-headers, a prioritized order, etc.).

In summary, all content (as in: “information and answers for your users”) must be in the mobile version; however, the formulation, formatting, and position don’t have to be the same as in the desktop version. So you must have the entire product catalog on your site, but the product page can look different. Your help section must include information on shipping speeds and costs on both mobile and desktop, but the layout of that shipping table can differ.

Features can be different

Our mobile usability research was only conclusive when it came to featuring all content on the mobile site; when it comes to features, the observations differ, and you will most often need to judge and test on a case-by-case basis.

Animated carousels on the homepage are a good example of a feature that shouldn’t be on the mobile site despite being on the desktop site, as they suffer from severe interaction issues due to the lack of a hover state on touch devices.

For example, in the prior article in this series, ‘Mobile Product Pages: Always Offer a List of Compatible Products’, we described how removing the compatibility list would be an oversimplification of the mobile product page. Conversely, keeping an animated carousel on the homepage (a quite common feature on most large desktop e-commerce sites) proved to cause great difficulty on the mobile sites in the cases where it displayed anything other than products (e.g. features, site navigation, help, events, etc.).

Furthermore, the mobile site will often benefit from additional features that are not necessarily meaningful in the desktop version: location detection (via GPS), larger product images in landscape mode, a context-dependent search scope, smart defaults based on the user’s context, etc. Therefore, you may remove and add features on the mobile site where sensible.

User Expectations

Like so many other things in usability, this boils down to user expectations. Users expect to find all the products available on the desktop site; after all, why would a mobile site carry fewer products when the site / brand is the same? The same goes for product descriptions and help text – you’d expect to find the same information, since the selected product or shipping speeds shouldn’t change depending on the device you’re using to order. Meanwhile, you’d expect layout and formatting to change; after all, the screen is so much smaller. And you might be positively surprised when the mobile site detects that you’re physically in the company’s store and offers you a “Ship for free to this store” option.

So to answer the question we started out with – “When defining, designing, and structuring your mobile commerce site, should you slim down content and features, or try to stuff it all into the mobile version as well?” – we can say that all content (products and pages) should be on the mobile site, but the articulation and formatting of that content may be different, and the site’s feature set should differ where it makes sense.

Link: http://baymard.com/blog/content-on-mobile-vs-desktop

Adapting Your Usability Testing Practice for Mobile

There’s no better way to get feedback on the usability of your mobile app than by running a usability test. Although the process is the same as when testing a desktop app, there are quite a few differences in the details. Adjust your test to take account of these differences and you’ll be better placed to identify the real problems that real users will have with your app when it’s used in an authentic context. — March 4, 2013

Read more at the link.

Mobile Form Usability: Avoid Splitting Single Input Entities

Users have difficulty entering single input entities that are split across multiple fields.

During our recent M-Commerce Usability study, we observed subjects struggling with inputs that were split across multiple fields, such as a phone number divided into three fields (area code, central office code, and subscriber number). While the intention is good, these fields proved difficult for the subjects to both understand and interact with on a mobile device.

Specifically, the subjects had a hard time navigating between such fields (Issue #1: Interaction), found it unclear if they were all required (Issue #2: Ambiguity), and sometimes found the division illogical (Issue #3: Perception). In this article we’ll go over each of these three types of mobile form usability issues related to dividing a single input entity into multiple fields.

Issue #1: Interaction

Users generally have a difficult time navigating between fields on mobile devices. Surprisingly few of the test subjects used the ‘Next’ and ‘Previous’ buttons on the touch keyboard – instead they generally navigated to new fields by tapping them on the screen.

On Macy’s, the phone input is divided into three fields (three-digit area code, three-digit central office code, and four-digit subscriber number), which made the input needlessly difficult to enter.

To enter a phone number on Macy’s m-commerce site, you must:

1. Tap the first field.
2. Switch to the numeric keyboard.
3. Type the first three digits.
4. Watch the keyboard auto-close (Macy’s for some reason finds it a good idea to auto-close the user’s keyboard).
5. Tap the next field.
6. Switch to the numeric keyboard again.
7. Type the next three digits.
8. Watch the keyboard disappear again.
9. Tap the last field.
10. Switch to the numeric keyboard once more.
11. Type the last four digits.

(And then, to make matters worse, at the last field – the one place where the keyboard could reasonably auto-close – it does not.) Even if the keyboard did not disappear as each field was completed, the number of interactions required to enter a phone number into the three fields remains the same (you’d still need to either tap each field on the screen or tap the ‘next’ button on the keyboard), and the typing flow of the 10-digit phone number would still be abruptly interrupted twice. Not to mention all the issues related to editing a phone number split across multiple fields (in case the user spots an error).

While the intention behind dividing the phone number into multiple fields is good, and the division indeed serves as a very strong example of input formatting, it simply does not work very well on mobile. (On a desktop site, where advancing between fields is easier for users, the division might be acceptable; our full-site checkout usability study showed no conclusive data on this.)
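As a contrast to the three-field design, here is a minimal sketch of the single-field alternative, assuming a 10-digit US number for illustration. type="tel" brings up a numeric keyboard on most mobile browsers, and light as-you-type formatting preserves readability without ever interrupting the typing flow:

```typescript
// Minimal sketch: one phone field instead of three.
// The 10-digit US format and the form selector are assumptions.
const phone = document.createElement("input");
phone.type = "tel";         // numeric keyboard on most mobile devices
phone.name = "phone";
phone.autocomplete = "tel"; // standard autofill token for phone numbers

phone.addEventListener("input", () => {
  const digits = phone.value.replace(/\D/g, "").slice(0, 10);
  const parts = [digits.slice(0, 3), digits.slice(3, 6), digits.slice(6)];
  phone.value = parts.filter(Boolean).join("-"); // e.g. "555-123-4567"
});

document.querySelector("form")?.appendChild(phone);
```

(A production version would also need to preserve the caret position while reformatting; that detail is omitted in this sketch.)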

Issue #2: Ambiguity

Another issue arising from dividing an input entity into multiple fields is the ambiguity of required vs. optional fields. Our research shows that you should always clearly indicate both required and optional fields (with explicit markup for both); however, almost all sites indicate this in the label. This presents a design issue when some parts of the divided input are required and others are not, as seen on Southwest’s mobile site below:

On Southwest, the ZIP code input is divided into two fields (the basic five-digit code followed by the four additional ZIP+4 digits); however, only the first field is required while the second is optional, and the user has no way of knowing this from Southwest’s form design.

Of course, one way to solve the issue Southwest runs into is to add “required” and “optional” labels below each field or as inline text, but why divide the fields in the first place? It adds unnecessary complexity to the form design and suffers from the interaction issues described in the previous section. Also, suddenly changing the position of the “required” and “optional” labels will result in an inconsistent form design, unless of course you change it for all fields in the form, which seems like a rather drastic design change just to be able to divide a ZIP code input across two fields.
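If the ZIP code is kept as a single input, the required/optional question disappears for the ZIP+4 digits and the explicit labeling becomes trivial. Here is a minimal sketch, with hypothetical label wording and markup (not Southwest's actual code):

```typescript
// Minimal sketch: a single ZIP field, explicitly marked as required.
// One text field accepts both "12345" and "12345-6789", so no split
// (and no per-part required/optional marking) is needed.
function labeledField(labelText: string, required: boolean): HTMLLabelElement {
  const label = document.createElement("label");
  label.textContent = `${labelText} ${required ? "(required)" : "(optional)"} `;

  const input = document.createElement("input");
  input.type = "text";
  input.inputMode = "numeric"; // numeric keyboard on mobile
  input.required = required;

  label.appendChild(input);
  return label;
}

document.querySelector("form")?.appendChild(labeledField("ZIP code", true));
```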

Issue #3: Perception

Lastly, there is an issue of perception, where an input is commonly presented as separate fields even though users perceive it as a single coherent entity. This is true of “name” fields, which are often split into “First name” and “Last name” fields.

Toys’R’Us asks for “First name” and “Last name” instead of a single “Full name”. We saw countless subjects enter their full name in the “First name” field, only to discover they had to split it across the separate fields.

During both our E-Commerce Checkout Usability and (the upcoming) M-Commerce Usability studies, users consistently considered their name a single whole – I am not “James” and “Newman”, I am “James Newman”. Therefore, users often enter their entire name in the “First name” field and then, upon advancing to the next field, discover that they must now enter their last name separately. They then go back and delete the last name from the “First name” field, only to advance to the “Last name” field once more to complete it.

While numerous sites ask for the user’s name in two or more fields, it simply is not good usability. Of course, this can be difficult to discover if you’re not doing usability tests, since all the subjects we observed noticed and corrected the error before submitting the form (so it won’t show up in most form-tracking web statistics). It is therefore not a critical error to make, but it does introduce needless friction into the checkout experience. Amazon seems to have reached the same conclusion and instead asks for the user’s “Full name”:

Amazon only uses a single name field on both their desktop and mobile sites, which matches the user’s perception of their name as a single entity.

Another issue related to multiple name fields concerns middle names and titles. If “First name” and “Last name” are separate fields, then logically “Middle name” and “Title” should be separate fields too. Suddenly you end up with four fields instead of one, or with a subpar experience where the user has to guess whether their middle name should be appended to the “First name” field or prefixed to the “Last name” field. With a single “Name” field you avoid this issue altogether, as users simply enter their name from start to end, including any middle name(s) and titles.
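A minimal sketch of the single-field approach follows, using the standard HTML autofill token for a full name; the field name and form selector are illustrative assumptions:

```typescript
// Minimal sketch: one "Full name" field matching the user's perception
// of their name as a single entity (middle names and titles included).
const fullName = document.createElement("input");
fullName.type = "text";
fullName.name = "fullName";     // illustrative field name
fullName.autocomplete = "name"; // standard autofill token for a full name
fullName.required = true;

document.querySelector("form")?.appendChild(fullName);
```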

Conclusion

As you can see, seemingly innocent and sometimes even common divisions of a single input entity into multiple fields can lead to interaction issues, required-vs-optional ambiguity, and misalignment between the user’s perceptions and your site’s form design.

On desktop sites, there may be instances where dividing a single input entity across multiple fields is acceptable, if all the fields are either required or optional and there’s no misalignment between user perception and form design. However, on mobile – even under those narrow circumstances – you should avoid splitting single input entities across multiple fields due to the interaction issues previously discussed.

Link: http://baymard.com/blog/mobile-form-usability-single-input-fields

Interaction Design Guidelines (2)

How Effective Are Heuristic Evaluations?

It’s a question that’s been around since Nielsen and Molich introduced the discount usability method in 1990.

The idea behind discount usability methods, like heuristic evaluations in particular and expert reviews in general, is that it’s better to uncover some usability issues than none at all – even if you don’t have the time or budget to test actual users.

That’s because despite the rise of cheaper and faster unmoderated usability testing methods, it still takes a considerable amount of effort to conduct a usability test.

If a few experts can inspect an interface and uncover many or most of the problems users would encounter in less time and for less cost, then why not exploit this method?

But, can we trust heuristic evaluations? How do we know the problems evaluators uncover aren’t just opinions?

Do they uncover legitimate problems with an interface? How many problems are missed? How many problems are false positives?

Heuristic Evaluation and Usability Testing

To help answer these questions, we conducted a heuristic evaluation and usability test to see how the different evaluation methods compared.

We recently reported on a heuristic evaluation of the Budget and Enterprise websites. Four inspectors (2 experts and 2 novices) independently examined each website for issues users might encounter. They were asked to limit their inspections to two tasks (finding a rental location and renting a car).

In total, 22 issues were identified across the four evaluators. How many of these issues would users encounter and what was missed?

Prior to the heuristic evaluation we conducted a usability test on the same websites but didn’t share the results with the evaluators.

In total we had 50 users attempt the same two tasks on both websites. The test was an unmoderated study conducted using UserZoom. Each participant was recorded using UserTesting.com so we could play back all sessions with audio and video to identify usability issues. Two researchers viewed all 50 videos to record usability problems and identified 50 unique issues.

The graph below shows the 22 issues identified by the evaluators and the number and percent of users that encountered the issue.


Figure 1: Problem matrix for Budget.com (“B”) and Enterprise.com (“E”) from four evaluators (E1-E4) and the number and percentage of 50 users who encountered the issue in a usability test.

For example, three evaluators and 24 of the 50 users (48%) on Enterprise had trouble locating the place where rental locations were listed (issue #1 “Locations Placement”).

Two evaluators and 14 users (28%) had a problem with the way the calendar switches from the current month to the next month on Budget.com (issue #16), as shown in the figure below.


Figure 2: When selecting certain return dates, the Budget calendar will switch the placement of the month (notice how October goes from being on the right to the left).

All four evaluators found that adding a GPS to your rental after you added your personal information was confusing on Enterprise.com—an issue 62% of users also had (issue #11).

We found that the evaluators identified 16 of the 50 issues users encountered in the usability test (32% of the total). In terms of false positives, only two of the issues identified by the evaluators weren’t found by any of the 50 users (9%).

How this study compares

There is a rich publication history comparing heuristic evaluations and usability testing. In fact, two of the most influential papers in usability cover usability testing and heuristic evaluations. In examining some of the more recent publications comparing HE and UT, we looked for specific examples like our experiment, where the overlap in problems between the two methods is shown.

The table below shows four studies, in addition to the current one, indicating that on average heuristic evaluations find around 36% of the problems uncovered in usability tests (ranging from 30% to 43%).

Study                   Overlap (Hits)   In HE not UT (False Alarms)   In UT not HE (Misses)   Inspectors   Users
Current Study           32%              9%                            68%                     4            50
Doubleday et al.        36%              40%                           39%                     5            20
Law & Hvannberg 2002    30%              38%                           32%                     2            10
Law & Hvannberg 2004    43%              46%                           48%                     18           19
Hvannberg et al. 2006   40%              37%                           60%                     10           10
Average                 36%              34%                           49%

The overlap is called a “hit,” meaning the discount method hit on the same issue as the one found via the traditional evaluation method of usability testing.

To get an idea about potential false alarms, we see that on average 34% of the problems identified in heuristic evaluations aren’t found by users in a usability test (ranging from 9% to 46%). These have come to be known as “false alarms,” suggesting these problems would not be encountered by users.

Finally, we see that on average heuristic evaluations miss around 49% of the issues uncovered from watching users in a usability test. Note: the percentages don’t always add up to 100% because different problem totals are used to derive the percentages.

This study had by far the most users (2.5 times as many as any other). This likely explains the much lower false-alarm rate (9% vs. the 34% average) and the higher miss rate (68% vs. the 49% average). With more users, you increase the chances of seeing new issues, and you increase the chances of “hitting” the issues identified by the evaluators.

The lower false-alarm rate might also be explained by our task-based inspection approach. Often inspectors aren’t confined to specific tasks when evaluating an interface and detect problems in the less-used parts of it, parts that often aren’t encountered by users in a 30-60 minute study.

This exercise also illustrates a shortcoming of this approach for judging the effectiveness of heuristic evaluations. Just because an issue wasn’t detected in a usability test doesn’t mean it won’t be encountered by a user. In fact, one of the advantages of an expert review is that it uncovers issues that are harder to find in usability tests, because users rarely visit enough parts of a website or software application outside of the assigned task(s).

What’s more, even 50 users represent less than 1% of the daily number of users on these websites, meaning it’s presumptuous to assume that no users would have the issue. If we don’t see an issue with 50 users, we can be 95% confident that between 0% and 6% of all users still might encounter it. For example, the two issues found in the heuristic evaluation but not in our usability test both seem like legitimate issues – with enough users, we’d probably eventually see them.
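The “0% to 6%” figure follows from the exact binomial upper bound (or, as a quick approximation, the “rule of three”). A short sketch of the calculation:

```typescript
// With 0 of n users encountering an issue, the one-sided 95% exact
// binomial upper bound on the true rate p solves (1 - p)^n = 0.05.
function upperBound95(n: number): number {
  return 1 - Math.pow(0.05, 1 / n);
}

console.log(upperBound95(50).toFixed(3)); // "0.058", i.e. about 6% of all users
// The "rule of three" approximation, 3/n, gives 3/50 = 6% as well.
```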

Conclusions

The most effective approach at uncovering usability problems is to combine both heuristic evaluations and usability testing.

Heuristic evaluations will miss issues: even four evaluators failed to find an issue that 28% of users encountered in the usability test (a bizarre message on Enterprise to pick another country when 0 search results are returned for a rental location).

Heuristic evaluations will uncover many issues: the four evaluators did find all of the top ten most common usability issues and most (75%) of the 20 most common issues.

Heuristic evaluations will typically find between 30% and 50% of problems found in a concurrent usability test—a finding also echoed by Law and Hvannberg.

It’s hard to conclude that issues identified in a heuristic evaluation but not in a usability test are “false positives.” It could be that the issues are encountered by fewer users and just weren’t detected with the sample tested. It’s probably better to characterize them as less frequent or “long-tail” issues than as false positives.

So how effective are heuristic evaluations? While the question will and should continue to be debated and researched, I like to think of heuristic evaluations like sugary cereal. They provide a quick jolt of insight but should be part of a “balanced” breakfast of usability evaluation methods.

Link: http://www.measuringusability.com/blog/effective-he.php