Posts in user experience design
How to Become a User Experience Designer

I am often asked how to become a user experience designer - it's actually the number one question that comes in through the contact form. Anyone can learn UX best practices, but what makes a great UX designer is the soft skills - being able to empathize with users, see the big picture, and exercise good judgment. That aside, here is how I would learn the hard skills if I were starting over.

As you probably know, some people go to undergrad or grad school for Human Computer Interaction or Interaction Design, and there are also various certifications at various price points. You don't really need to do either, but they are out there, depending on your goals and access to the field. If you’re already working in the digital space, you can probably apprentice your way into a position. If not, you might need a degree program or certificate to get your foot in the door. I started in the field as a web copywriter with an English degree, but I worked at a digital design agency where I could learn on the job from other talented UX professionals.

First, I would start with design theory. 

If you aren’t engaged and interested in this stuff after you read about it, you might want to look into related digital careers. These are the two books I would absolutely read: 

The Design of Everyday Things by Don Norman is a foundational usability/design book that everyone talks about. If you grab an old version and are under 30, you may not recognize any of the examples, so spring for the updated version! Don Norman also covers the highlights in this free online class from Udacity.

Change by Design by Tim Brown, who founded IDEO - Another one of those books everybody reads - and by everybody, I mean lots of people in business in general. The term "design thinking" has become very buzzy of late, and it originates from this book. This book is also great if you're thinking about experimenting with startup ideas and service design.

Second, learn the practice. 

If you’re video-oriented, Coursera has an interaction design series that results in a certificate. If you decide to pay, it's affordable compared to what else is out there. I tried the first course in HCI a couple years ago, and it was very good. Even if you just watched those videos for free, you'd have a good working foundation of the basics. 

I would also read a selection of books from Rosenfeld Media. They are deep dives into particular subsets of UX, so they will deepen what you learned in the Coursera course. They are also very practical books to skim or use for reference.

UX Team of One by Leah Buley - Read this first! This is going to give you a really good overview of stuff you should know, and techniques for implementation. 

Web Form Design by Luke Wroblewski - The best book about form layout. The internet is made up of forms, and most of them are kind of bad. If you are a coder, reading this will also up your game. 

A Web for Everyone by Horton and Quesenbery - Usability for people with disabilities. Accessibility is fundamental to user experience, and this book will tell you why. I think everyone should read this, because it will change the way you think about 508 compliance or WCAG 2.0.

Prototyping, A Practitioner's Guide by Todd Zaki Warfel - The part where it talks about tools is a little outdated, but the book outlines approaches to UX prototyping and explains why you should prototype first.

Which brings me to prototyping and wireframing tools - don't worry about whether you already have well-developed skills in design tools. Learn them as you need them. Anyone can learn how to use Balsamiq, Axure, Omnigraffle, Visio, and the Adobe suite, and plenty of people just draw or even use PowerPoint. Anyone who is hiring based solely on how good you are at a tool is probably offering a UX job that is not worth having.

There are also millions of websites out there with practical information, but you can’t go wrong with these few: 

Good luck - I hope you found what you needed to become a great user experience designer! 

Measuring Task Time During Usability Testing

I design applications that are used all day, every day in a corporate setting. Because of this, I measure efficiency and time when I do usability studies to make sure that we are considering productivity as part of our design process. 

Although actual times gathered from real interactions via an analytics package are more reliable and quantifiable than those gathered in usability testing, they require you to have a lot of users or a live product. When you're in the design stage, you often don't have the ability to gather that kind of data, especially when you're using mockups or prototypes instead of a live application. Being able to gauge the relative times of actions within a process during usability testing can be helpful, and being able to compare the times of two new design options is also valuable. Gathering information about task times early in the design phase can save money and effort down the road. 


 

HOW TO CONDUCT A TIME STUDY

During a typical usability study, simply collect the times it took to accomplish a task. The best way to do this is to measure time per screen or activity in addition to the duration of the task, so that you'll be able to isolate which step of a process is taking the most time or adding unnecessary seconds. This can be more illuminating from a usability perspective than simply knowing how long something takes.

Make a video screen recording of the session. Pick a trigger event to start and pause timing, such as clicking a link or a button. Gather the times via the timestamp when you replay the video. Don't try to time with a stopwatch during the actual usability test. You can make a screen recording with SnagIt, Camtasia, or Morae, or through any number of other tools.
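
If you want to see how those trigger times turn into numbers, here's a minimal Python sketch. The screen names and timestamps are hypothetical - in practice they'd be whatever you note down from the recording.

```python
# Minimal sketch: turn trigger-event timestamps noted from a session
# recording into per-screen durations and a total task time.
# Screen names and timestamps below are hypothetical examples.

timestamps = [               # (screen shown at this trigger, seconds into recording)
    ("Search results", 12.0),
    ("Product detail", 25.5),
    ("Checkout", 41.0),
    ("Confirmation", 54.5),  # trigger that ends the task
]

# Time spent on each screen = gap between its trigger and the next one
per_screen = [(screen, nxt - start)
              for (screen, start), (_, nxt) in zip(timestamps, timestamps[1:])]

task_time = timestamps[-1][1] - timestamps[0][1]

for screen, seconds in per_screen:
    print(f"{screen}: {seconds:.1f} seconds")
print(f"Total task time: {task_time:.1f} seconds")
```
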

When comparing two designs for time, test both designs in the same study and use the same participants. This means you'll have a within-subjects study, which produces results with less variation - a good thing if you have a small sample size. To reduce bias, rotate the order of designs so each option is presented first half of the time.  


 

COMMON QUESTIONS ABOUT TIME STUDIES

Should you count unsuccessful tasks?

Yes and no. If the user fails to complete the task, or the moderator intervenes, exclude it from the time study. If the user heads in the wrong direction but eventually completes the task, include it.

What if my participant is thinking aloud and goes on a tangent, but otherwise, they completed the task?

I leave "thinking aloud" in and let it average in the results. If the participant stops what they are doing to talk for an extended period of time (usually to ask a question or give an example), I exclude the seconds of discussion. But, be conservative with the amount of time excluded and make sure you've made a note of how long the excluded time was. 

Should you tell participants they are being timed?

I don't. Sometimes I'll say that we're gathering information for benchmarking, but I generally only give them the usual disclaimer about participating in a usability test and being recorded.

How relevant are these results? 

People will ask if times gathered in an unnatural environment like usability testing or a simulation are meaningful. These times are valuable because some information is better than no information. However, it's important to caveat your results with the methodology and the environment in which the information was collected.


 

REPORTING RESULTS: AVERAGE TASK TIMES WITH CONFIDENCE INTERVALS

Report the confidence interval if you want to guesstimate how long an activity will take: "On average, this took users 33 seconds. With 95% confidence, this will take users between 20 and 46 seconds."

Report the mean if you want to make an observation that one segment of the task took longer than the other during the study. A confidence interval may not be important if your usability results are presented informally to the team, or you're not trying to make a prediction. Consider the following scenario: you notice, based on your timings, that a confirmation page is adding an average of 9 seconds to the task, which end-to-end takes an average of 42 seconds. Does it matter that the confirmation screen may actually take 4-15 seconds? Not really. The value in the observation is whether you think the confirmation page is worth nearly 1/4 of the time spent on the task, and whether there's a better design solution that would increase speed. 

When you're determining average task time, always take the geometric mean of the times instead of the arithmetic mean/average (Excel: =GEOMEAN). This is because times are actually ratios (0:34 of 0:60). If the sample size is smaller than 25, report the geometric mean. If the sample size is larger than 25, the median may be a better gauge (Excel: =MEDIAN).
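
If you'd rather script this than use Excel, the same rule of thumb is a couple of lines of Python (the times below are made-up sample data):

```python
import statistics  # geometric_mean requires Python 3.8+

# Hypothetical task times in seconds from one usability task
times = [28, 33, 41, 35, 52, 30, 38, 61, 27, 44]

geo_mean = statistics.geometric_mean(times)  # report this when n < 25
median = statistics.median(times)            # consider this when n > 25

print(f"Geometric mean: {geo_mean:.1f}s  Median: {median:.1f}s")
```
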

If you're reporting the confidence interval, take the natural log of the values and calculate the confidence interval based on that. This is because time data is almost always positively skewed (not a normal distribution). Pasting your time values into this calculator from Measuring U is much easier than calculating in Excel. 
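
If you do want to work it out yourself instead of using the calculator, here's a sketch of the log-transform approach described above. It assumes SciPy is available for the t critical value, and the times are the same hypothetical sample as before.

```python
import math
import statistics
from scipy import stats

# Hypothetical task times in seconds
times = [28, 33, 41, 35, 52, 30, 38, 61, 27, 44]

# Work on the log scale because time data is positively skewed
logs = [math.log(t) for t in times]
n = len(logs)
mean_log = statistics.mean(logs)
se_log = statistics.stdev(logs) / math.sqrt(n)

# 95% confidence interval on the log scale, then back-transform to seconds
t_crit = stats.t.ppf(0.975, n - 1)
low = math.exp(mean_log - t_crit * se_log)
high = math.exp(mean_log + t_crit * se_log)

print(f"95% confidence interval: {low:.0f} to {high:.0f} seconds")
```
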


 

REPORTING RESULTS: CALCULATING THE DIFFERENCE BETWEEN TWO DESIGNS

For a within-subjects study, you'll compare the mean from Design A to the mean of Design B. You'll use matched pairs, so if a participant completed the task for Design A but did not complete the task for Design B, you will exclude both of her times from the results.

There are some issues with this, though. First, I've found it very difficult to actually get a decent p-value, so my comparison is rarely statistically significant. I suspect this is because my sample size is quite small (<15). I also have trouble with the confidence interval. Often my timings are very short, so I will have a situation where my confidence interval takes me into negative time values, which, though seemingly magical, calls my results into question.  

Here's the process (there's a Python sketch of the same math after the steps):

  1. Find the difference between Design A and B for each pair. (A-B=difference)

  2. Take the average of the differences (Excel: =AVERAGE).

  3. Calculate the standard deviation of the differences (Excel: =STDEV).

  4. Calculate the test statistic.
    t = average of the difference / (standard deviation / square root of the sample size)

  5. Look up the p-value to test for statistical significance.
    Excel: =TDIST(ABS(test statistic), sample size - 1, 2). If the result is less than your significance level (e.g., 0.05), the difference is statistically significant.

  6. Calculate the confidence interval:

    1. Confidence interval = mean of the difference +/- (critical value × (standard deviation of the differences / square root of the sample size)).

    2. Excel critical value at 95% confidence: =TINV(0.05, sample size - 1)
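
If Excel isn't your thing, here's a minimal Python sketch of steps 1-6. The matched-pair times are hypothetical, and it assumes SciPy for the t distribution - it's an illustration, not a substitute for checking your numbers against the books below.

```python
import math
import statistics
from scipy import stats

# Hypothetical matched-pair task times in seconds (same participants, both designs)
design_a = [34, 41, 29, 38, 45, 31, 40, 36]
design_b = [30, 35, 31, 33, 39, 28, 37, 32]

# 1-2. Differences for each pair and their average
diffs = [a - b for a, b in zip(design_a, design_b)]
n = len(diffs)
mean_diff = statistics.mean(diffs)

# 3. Standard deviation of the differences
sd_diff = statistics.stdev(diffs)

# 4. Test statistic
t = mean_diff / (sd_diff / math.sqrt(n))

# 5. Two-tailed p-value; significant if below your alpha (e.g., 0.05)
p = 2 * stats.t.sf(abs(t), n - 1)

# 6. 95% confidence interval for the mean difference
t_crit = stats.t.ppf(0.975, n - 1)
margin = t_crit * (sd_diff / math.sqrt(n))

print(f"t = {t:.2f}, p = {p:.3f}")
print(f"95% CI for the difference: {mean_diff - margin:.1f} to {mean_diff + margin:.1f} seconds")
```

(scipy.stats.ttest_rel would give you the same t and p in one call, but the long way maps directly onto the steps above.)
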


 

REFERENCES

Both of these books are great resources. The Tullis/Albert book provides a good overview and is a little better at explaining how to use Excel. The Sauro/Lewis book gives many examples and step-by-step solutions, which I found more user-friendly. 

Measuring the User Experience by Tom Tullis and Bill Albert ©2008

Quantifying the User Experience by Jeff Sauro and James R. Lewis ©2012

Interested in more posts about usability testing? Click here.

Measuring Efficiency during Usability Testing

Recently, most of my work has been developing enterprise software and web applications. Because I'm building applications that employees spend their whole workday using, productivity and efficiency matters. This information can be uncovered during usability testing. 

The simplest way to capture the amount of effort during usability testing is to keep track of the actions or steps necessary to complete a task, usually by counting page views or clicks. Whichever you count should be meaningful and easily countable, either by an automated tool or video playback. 

There are two ways to examine the data - comparing one system to another, and comparing users' average performance to the optimal performance.

Compare One App to Another

Use this when you're comparing how many steps it took in the new application vs. the old application. Here, you'll compare the "optimal paths" of both systems and see which one required fewer steps. This doesn't require usability test participants and can be gathered at any time. It can be helpful to present this information in conjunction with a comparison time study, as it may become obvious that App A was faster than App B because it had fewer page views.

Compare the Users' Average Path to the Optimal Path

To do this, you'll compare the average click count or page views per task of all of the users in your usability study to the optimal path for the system. The optimal path should be the expected "best" path for the task. 

More than simply reporting efficiency, comparing average performance to optimal performance can uncover usability issues. For example, is there a pattern of users deviating from the "optimal path" scenario in a specific spot? Was part of the process unaccounted for in the design, or could the application benefit from more informed design choices?

Here's the process I use to calculate efficiency against the optimal path benchmark (there's a Python sketch after the steps).

  1. Count the clicks or page views for the optimal path.

  2. Count the clicks or page views for a task for each user.

  3. Exclude failed tasks.

  4. Take the average of the users' values (Excel: =AVERAGE or Data > Data Analysis* > Descriptive Statistics).

  5. Calculate the confidence interval of the users' values (Excel: Data > Data Analysis* > Descriptive Statistics).

  6. Compare to the optimal path benchmark and draw conclusions.

*Excel for Mac does not include the Data Analysis package. I use StatPlus instead. 
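
For what it's worth, the same calculation in Python looks something like this - the click counts are hypothetical, and SciPy is assumed for the t-based confidence interval.

```python
import math
import statistics
from scipy import stats

optimal_clicks = 6  # clicks along the expected "best" path for the task

# Hypothetical click counts per participant for the task (failed tasks excluded)
user_clicks = [7, 9, 6, 8, 11, 7, 6, 10]

n = len(user_clicks)
mean_clicks = statistics.mean(user_clicks)
se = statistics.stdev(user_clicks) / math.sqrt(n)

# 95% confidence interval around the users' average
t_crit = stats.t.ppf(0.975, n - 1)
low, high = mean_clicks - t_crit * se, mean_clicks + t_crit * se

print(f"Optimal path: {optimal_clicks} clicks")
print(f"Users' average: {mean_clicks:.1f} clicks (95% CI {low:.1f} to {high:.1f})")
```
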

Reference

Measuring the User Experience by Tom Tullis and Bill Albert ©2008

Read more posts about usability testing.

Ask "Why?" for Stronger Requirements

In a recent post, I listed a few ways that you can accidentally end up with a bad user experience. But there's another way to add bloat to your design: failing to ask "why."

The other day, a business analyst asked me to add a checkbox to one of my screens so that an administrative user could indicate, once every 6 months, that someone reviewed the screen for errors. Yet, we already have proof that the screen's data is being maintained because it's in the change log.

On the surface, adding a checkbox is an easy update to make. But, we don't know why we're being asked to make this update, or how this feature is going to be valuable to users. Simply put, our customer is requesting a solution without indicating what problem it solves. Though it might be a good solution, how do we know it is the BEST solution?

Why ask why?

The next step isn't to drop the checkbox onto a wireframe, write up a few requirements, and head home for the weekend. The next step is to call up the client and ask "why."

  • Why does the client need this feature?
  • What information is the client hoping to collect via this feature?
  • How does the client plan to use this information?
  • Does the client know about the information we're already collecting?  

Once all these questions have been answered (and maybe a few more), we'll know how to proceed.

Needs vs. design

Good requirements describe a user need, but the need is never a "checkbox." The need is bigger. In my example, maybe the need is reporting for an audit, or maybe that checkbox is really supposed to notify someone of something. But we'll never know unless we stop and ask why, and solve the real problem.

How Bad UX Happens

And now, a note about imperfect user experiences and how they happen. 

The late-breaking requirement

The UX is perfect, things are going great, and you just finished usability testing. Then the client calls and asks you to add something. It's not a problem, it's just a small tweak. Easy! 

A few weeks later, you're looking at your screens and wondering how things went so wrong.

Daniel Kahneman would blame it on a lazy controller. Basically, your instinctual mind makes a snap decision based on previous experience and your analytical mind blindly follows. Most of the time, this works well. Other times, not so much. Maybe you were rushed, or didn't quite understand the full impact of the change. Either way, once you look at your design change with a fresh mind, you can see what you would've done differently.

The early design that overstays its welcome

The other good time to unwittingly make a mistake is early on. You make a solid choice based on the information that you have at the time. But as you refine the design and learn more about the project, this "solid choice" becomes irrelevant or even totally unnecessary.

You, and probably the larger team, have gotten used to seeing it, and you have a type of "banner blindness" to your own work. And maybe it's so irrelevant that usability testing won't uncover it, because it literally is useless and unremarkable.

Much later, you or someone on your team notices it and you see it for what it is - clutter.

How to fix it

These types of unintentional mistakes get shipped to customers and handed off to clients all the time. Sometimes, you only notice them after your engagement has ended. If you're still working on the project, offer to clean it up as part of a larger group of updates - the worst they can do is turn you down. There is rarely a good argument against continuous improvement.

If your project is still underway - just suck it up and admit that a part of your design needs rework! Maybe your team will have a better idea, or maybe they will remember something about the requirements that you've already forgotten. Most people understand that it is hard to self-critique, and I think they'll appreciate you more for your willingness to make a change.

How to keep it from happening

Don't work in a silo! Ask for opinions from other smart people as you're designing, whether they are on your team or not.

For those small late-breaking changes, it's tempting to do something quickly, call it done, and post a file. Instead, make the change and let it rest a few hours. Then you'll be able to review it with a more critical eye.

How to design a highly usable form with the help of a good book

One of the most useful books I’ve read in the past few years is Web Form Design by Luke Wroblewski. The reason I like this book so much is simple: when I’m working on a big form project or reviewing someone else’s work, it’s very easy to grab the book and flip through the best practices at the end of each chapter, using it as a rubric, as well as a source of much-needed inspiration. 

Over time, though, I realized that rather than dig out my ebook, I could just create a checklist that I could use to prompt me on some pivotal issues. The actual explanation of what you should do and why, of course, is in Luke W's book. Which you should buy. Right now. I'm not kidding. Anyway, his book has examples of high-quality patterns and the research to back it up, so you don't look foolish when you are questioned about why you are recommending a certain label alignment over another, or why you chose those smart defaults. So, to the checklist…

High-Level Form Organization

Does the form need a start page?

  • The form needs room to explain need, purpose, and incentive, if any.

  • The form requires specific information that should be compiled or found before starting. (Example, tax returns and bank statements)

  • The form requires an investment of time. Warn the user. (Example, long surveys)

Should the form be broken into multiple pages?

  • Does the form contain many question groups that are relatively independent of each other?

  • Can you ask optional questions after a form is completed?

  • Is the form a good candidate for progress indicators?

Have you considered gradual engagement for sign up and registration?

Questions & Form Inputs

  • Have you grouped questions/fields by types (contact, billing info, etc.)?

  • Have you eliminated unnecessary questions/form fields?

  • Have you clearly identified optional or required fields, using a label for whichever is in the minority?

  • Are labels in natural/plain language and consistently formatted?

  • Can you top-align labels? If not, can you right-align them? On mobile, have you top-aligned labels?

  • Do the field lengths provide meaningful affordances? (Example, a ZIP code field for the U.S. should be 5 characters long, not 27.)

  • Have you set smart defaults?

  • Are there any areas of the form that could be enhanced by hiding irrelevant form controls/using progressive disclosure?

Buttons

  • Are your primary actions, such as save or register buttons, aligned with input fields?

  • Can you remove secondary actions like clear and reset?

  • If you use secondary actions, are they visually distinct from the primary action?

  • Can users undo secondary actions if accidentally clicked?

Help Text, Error Messages, and Success Pages

  • Do you need that help text? Is it clear and concise? Any input limits or restrictions? (Example, password must contain XYZ.)

  • Are error messages clearly communicated by both a top-level message and a contextual message near the affected field?

  • Have you considered in-line validation for questions/fields with high error rates or specific formatting requirements?

  • Have you clearly communicated the successful completion of the form? And it's not a dead end, right? 

Voting usability: my personal experience

Over the years, I've seen news reports about user difficulty with electronic voting kiosks and read the occasional article about ballot usability. I did not really expect to be the person with the voting problem, but surprisingly, this tech-savvy gal was.

I headed to vote in a hotly contested local primary election. In Allegheny County, we use the ES&S iVotronic (view a big photo here), a rather boxy touchscreen electronic voting machine. (Full disclosure: the past two years, I was traveling over election day and used an absentee ballot. The last time I voted via machine was 2008.)

Here's what happened: 

  1. The voting helper showed me the machine and asked if I had used the machine before. I had.
  2. I read the instructions on the screen. 
  3. I started to vote, and easily picked my candidates via the touchscreen. 
  4. I reviewed my choices; it said I could press the vote button.
  5. I couldn't find the vote button. An embarrassing red light was going off, totally flashing in my face.
  6. I looked some more for the button, cycling through screens. I was feeling rather sheepish.
  7. I examined the help sheet taped to the booth, panicked. It indicated a cartoonish big red button. 
  8. I looked for the big red vote button on the screen. Mortification set in. 
  9. I realized that the vote button was a physical button, not a touchscreen button. It was also the very thing that had been obnoxiously blinking all the while. 
  10. I pressed the button and walked away feeling really, really dumb. 

I felt stupid, but it was the UI's fault.

I've seen it plenty of times in user testing: a participant doesn't find what they were looking for or otherwise fails at a task and they say to me rather sheepishly, "oh, I just didn't see it" or "I guess I didn't understand how it works." But, it's not their fault. Most of the time, usability test participants aren't the dumb ones; rather, they have good reasons for thinking something should work a certain way. When the site/product/device doesn't meet their expectations, rather than recognize the idiocy of the "thing," users self-blame or feel embarrassed.

I am a novice user of electronic voting booths.

At best, most people vote once or twice a year, but many go years between votes, showing up only for presidential elections. Essentially, you have to re-learn how to vote every time. To solve this problem, the voting booth gives a brief on-screen instruction page and the voting attendant asks if you need an overview. The actual process of voting is made to be easy, similar to a wizard interface. To top it off, my voting booth had a sheet of printed instructions, kind of like a post-it note with a password scribbled on it stuck to the monitor of a computer. Every safeguard was in place, and the machine had help copy out the wazoo. Perhaps they should've done more user testing.

The touchscreen/button interface is a mixed metaphor. 

My last beef: why on earth would a touchscreen have a physical button? It breaks the convention of the user interface. I went through my entire voting process touching only the screen, checking off names and then clicking the "next" and "review" buttons at the bottom of the screen. When the time came to actually cast my vote, I was not expecting to use a physical button AT THE TOP of the machine---even as it flashed in my face (weirdly, the flashing light made it nearly impossible to read the word "Vote" printed on the button). Maybe the thought was that people would feel more secure casting their vote with a non-digital action, but I suspect the button's top placement and physical nature felt odd to quite a few people in addition to me. 

The good news? At least I (successfully) voted! 

What are wireframes, and why does your website need them?

One of the most misunderstood UX deliverables is the website wireframe. Maybe that's because wireframes are lo-fi, or maybe it's because they are sometimes filled with scary decisions, or maybe it's just because some clients want to get to the fun "marketing" part: creative design. I'm not really sure. But, here's how I usually explain wireframes to clients when they encounter them for the first time.

What are wireframes?

Wireframes are a visual guide to elements on a page. They represent function, content elements and features, and express navigation and wayfinding.

Though wireframes underpin the visual design, they are not one-to-one layouts of the future creative design composition; rather, they serve to influence and guide the design process.

In development and coding, wireframes transition into a functional specification for the build. 

Why does your website project need wireframes?

The short answer: because someone has to plan for what the site is going to do and how it is going to do it, captured in a language that designers and tech folk understand. Wireframes are these blueprints. 

If you've ever envisioned a new website before (or even a part of one), you know that you have a goal or a strategy in mind. Wireframes are the bridge between these strategies and the technology required to implement them successfully for users. 

What do you need to know before you start wireframing?

Basically, it boils down to these things: 

  • Awareness of client's wishes, strategy, objectives or goals. 

  • A content outline, beginnings of a content strategy, audiences identified, analytics, calls to action, success measures, etc. 

  • Understand the PROBLEM. There is always a problem; take it on.

When it's time to start, look at the content outline or sitemap and decide what should be illustrated. Ask yourself:

  • What content templates or page types will the site need?  
  • Are there processes that should be paper-prototyped? Paper prototypes are essentially wireframes that express a workflow. For example, a registration process or shopping cart.  
  • Is there a content management system (CMS)? If so, do you know the CMS's capabilities? If you do, great! Try to work within these capabilities. If not, talk to tech people as you go and prepare to adapt. 

Once you've produced the wireframes, expect tweaks throughout the rest of the project. Though "approved," or even endlessly vetted with coders, developers, designers and the client, nothing is ever final on the web.