Whether you are sitting across from your users in their office, asking for feedback on an idea over Slack, or working with potential partners, our goal is the same. We need to build a product and service that solves their core, root problem. To do this, we need to set our biases and preconceived notions aside.

Cognitive biases are underlying (often unconscious) schemas that define our thinking patterns. These gaps in rational judgment occur based on our life experience, our place in society, and our interactions with others.

We’ll cover a number of common cognitive biases that affect both our users and ourselves.

Biases that Affect Users

Users are sooo biased

There are a number of cognitive and attributional biases that can impact our users’ ability to provide us with accurate information. I’ll include a bit about each bias, a generic example, how we see it in practice, and how we can help limit its effects.

Authority Bias

This bias is the tendency for people to assume authority figures have more accurate opinions, regardless of the background of the individual.

Example: Assuming the medical opinion of the California Governor is correct, even though she has no medical training.

In Practice: Our users might defer to our thoughts on how to do their work because we are the ones building the software. They may not interrupt or correct our statements regarding policy, procedure, or real-life practice (especially to our faces).

Mitigating: One of the best ways to mitigate this is to build trust between yourself and the participant. Remind them that we are not the experts, and that we expect them to bring their industry-specific knowledge to guide us. A second way is to do our best to fit in visually: wearing branded gear can further highlight the differences between our participants and ourselves.

Availability Heuristic

This happens when we judge how common or important something is based on how easily we can recall it. This can affect decision-making abilities and inaccurately influence responses to our questions.

Example: When asked how reliable iPhones are compared to Android phones, we might respond “iPhones break all the time” just because we had to spend a couple of hours talking to AppleCare to troubleshoot our iCloud account.

In Practice: A user might feel that a particular edge case is more common than it really is. They may insist that your app handle every kind of situation absolutely perfectly. This suggestion is based on their particular experience of handling that edge case three times in the past month.

Mitigating: For our participants, we can ask them to quantify how often certain types of behavior happen. Asking our participants for specific examples and numbers helps ground the answers in reality.

On our side, ethnographic research (talking to many different industry veterans or senior users) can help us gather an accurate baseline. Personas and Jobs-to-be-Done artifacts can help you ‘reality check’ your own opinions and feedback from users who are deviating from the norm. (This is not carte blanche to ignore data that deviates; in fact, it’s often a sign that we need to check with a few more users before continuing.)

Bandwagon Effect

As the name suggests, the bandwagon effect occurs when beliefs, opinions, and actions are influenced by what is trendy, what “everyone else” is doing, or what appears to be common.

Example: Think of any fashion fad that “everyone” once wore.

In Practice: A vendor may spend 8 hours a week formatting reports in a certain way because she has seen others do it that way. (Even though it’s not needed at all. This is a true story: I called her manager and politely explained that it “would no longer be necessary.”)

Mitigating: To understand these actions, ask ‘Why?’ multiple times and work upward to find the real reasoning. Sometimes we can supplement this by going up the chain of command to verify the overarching strategy and goals.

Base Rate Fallacy

People tend to favor specific information and anecdotes over statistical generalities.

Example: Mike thinks propeller planes are far less safe than jets. When picking airlines for his trip to Birmingham, AL, he can fly on a jet from Allegiant Airlines (whose planes have accidents 12 in 10,000 flights), or he can fly on a propeller plane from United (whose planes have accidents 1 in 10,000 flights). The base rate fallacy occurs when people like Mike ignore this basic statistical information in favor of anecdotal specifics.

In Practice: Participants might lead us to focus on certain situations because anecdotal information makes them seem more important than they are. For instance, one user might be convinced that data left in ‘the cloud’ could get corrupted or deleted without warning because it happened to their boss once. So instead, they print out every single report as it becomes available and download files, saving them on their desktop. This ignores the statistics showing it’s far more likely that their workstation will fail or the paper will be misplaced than that cloud data will be corrupted.

Mitigating: We can ask why, or ask our participants to quantify the impact. We can also cross-check with our ethnographic research and personas to see if it is just one user’s belief.

Courtesy Effect

People tend to soften or withhold their true opinions rather than risk offending others by voicing them.

Example: Gregory doesn’t like his boyfriend’s new jacket, but he may keep his true feelings to himself to avoid hurting his boyfriend’s feelings.

In Practice: Alex does not think the workflow we tested with him will be easy to adopt but avoids mentioning that directly because he thinks we worked hard on it.

Mitigating: One way is to mention that you did not personally design the mockup. Mention that you are a researcher, not a designer. Clarify that your goal in this session is to bring feedback and comments back to the design team.

Framing Effect

People change their perception of choices and options based on how they are presented (e.g., as a loss or as a gain).

Example: Presenting the new airline fare class as “Basic Economy” for the price-conscious passenger, or calling it “Cattle Class” for cheap passengers.

In Practice: When showing that we have removed a feature, some users react poorly when told “We removed this feature because no one uses it.” However, they might be elated to hear “We simplified this interface to make it easier for you to use,” even though the design is the same.

Mitigating: Use the 3-Column Technique to write non-sensationalist introductions and questions. Adjectives, descriptors, and intensifiers work great for marketing material, but by design they will influence impressions of the product and muddy our research data.

Functional Fixedness

This bias leads people to believe that certain objects can only be used for their traditional purpose.

Example: Some think a Q-Tip can only be used to clean your ears, but others use Q-Tips to build a 3D model of DNA.

In Practice: We might run into difficulties getting nontechnical users to rely on a smartphone app instead of a paper-based process, because they see the phone as just a communication device. A lot of times you’ll hear “You can’t use X for that!”

Mitigating: Continue to ask “Why?” recursively and walk through the reasoning behind the resistance. Sometimes it comes back to feelings and emotions more than logical blockers; these emotions should not be discounted but incorporated into any design.

Impact Bias

The tendency to think a change will have a longer and more intense impact than it actually will.

Example: “If we change our office phone number, no one will be able to get in touch with us reliably for months.”

In Practice: A sentiment such as “Our users will struggle with uploading PDFs rather than faxing in files for years. It will be difficult to get people to switch.” In reality, the difficulty and pushback last less than two weeks.

Mitigating: Often we can mitigate this by promising to check in at a predetermined future date to see whether these challenges are still having a strong impact.

Status Quo Bias

This is the tendency for people to avoid and dislike change of any kind.

Example: People may resist moving away from Yahoo! email and Alta Vista search despite the known benefits of Gmail and Google Search because they want to avoid the change.

In Practice: We will often run into people who struggle to move away from paper-based systems they have used for years. (Despite the fact that our digital system is faster, more efficient, and more reliable.)

Mitigating: Keep this bias in mind when users are resistant to digitizing their workflow or changing their processes. If they cannot articulate concrete reasons for their worry about changing, this bias may be at play.

Biases That Affect Us

Me, biased? Never!

Let’s run through a crash course on our cognitive biases that can get us in trouble.

Anchoring Bias

The first information we gain about a decision can bias us, regardless of any further information presented.

Example: We are told that all truck drivers have flip phones or crappy Android phones and cannot speak English well. Unless we actively verify otherwise (for example, by seeing how many drivers speak English fluently or use iPhones), we may give this information undue weight because it was the first thing we learned about the project.

In Practice: Our first user interview or research session may bias us toward user-interface decisions that are not aligned with the majority of use cases.

Mitigating: Make sure that you document and take notes as you start a project. Be sure to come back to the first design or piece of information and be honest with yourself about whether you are weighing your initial research more heavily than you should.

Confirmation Bias

We like to be correct. This bias is when we ask questions in a certain way, or ignore details that would show our decisions and designs to be flawed. Sometimes we overemphasize the data that supports our theories.

Example: A questionnaire that asks “Are you happy with your social life?” would lead to more positive results than “Are you unhappy with your social life?” even though both are asking for the same information.

Example: People with a strong belief in free-market capitalism read an economics news story and are asked to summarize it. Their personal feelings will influence which bits of data they present at the end.

In Practice: We may ask questions in a way that encourages users to tell us what we want to hear. Or we may pick out details about the design that show it’s the right decision. For instance, users like the visual design but mention that it’s a bit slower than the old design. We may focus on the praise and ignore statements that don’t support our decisions.

Mitigating: Make sure that we ask questions in both a positive and negative framing. We should also be sure to have others review notes with us.

Experimenter’s Bias (Observer-expectancy effect)

When conducting research, we have certain expectations of what the research will uncover and tend to ignore other data.

Example: When playing music backward, conspiracy theorists expect to hear words in the music, and so they point out words in random noise. The reverse is also true: people who don’t believe hidden lyrics could be there fail to hear actual ones even when they are pointed out.

In Practice: We expect users to struggle with certain parts of the user interface but may ignore struggles in places we thought would be easy to move through.

Mitigating: Double-blind experiments are used in scientific studies, but often we cannot facilitate them. Instead, it’s best to have non-designers (often regular people) review research sessions, recordings from FullStory, or feedback from users in support tickets. Even having other designers who are not on the project review feedback can be immensely helpful.

The Participant is NOT Always Right

Occasionally you may find participants holding strong opinions that are not matched by prior research or by user actions. This often happens when users find data missing, or when not every one-off situation is prioritized. Sometimes it’s just a bad day. You can flag research participants who deviated in the notes you take after the fact.

It’s your job to make the final design decisions. It’s your job to design the product. Do your job and own your decisions. Analyze the research and do what you think is right.

Next, read Evidencing Your Work