Measuring Hard-to-Measure Things

2015-06-23

What? is not the same thing as Why? You need both, but Why? is usually two steps ahead of What?

Last week, I spoke at Monitorama, and it was one of the best experiences of my career. The community was accepting, supportive, and safe. Thank you for the warmest welcome, truly. Jason Dixon, you are the best! The following post provides a bit of narrative to accompany slides and video.

Slippery Slope

I joined GitHub in March 2013, excited and nervous about starting as the company's first user experience researcher. Professionally, I'm trained as a historian who turned towards design to fund an academic habit. I'm not an engineer, so researching engineers felt intimidating, but there's plenty of overlap between the disciplines: curiosity, skepticism, and a compulsion for discovery.

The first thing I did was research our internal project history. What did my new team members think about research? That endeavor surfaced a freshly-pressed debate about variance testing, usability studies, and user research, inspired by a thread on Hacker News. One comment, in particular, stood out:

We make informed design decisions by using the product we build on a daily basis. We keep what works, remove what doesn't and improve what needs to be improved. I think formalizing that process and having a "UX Researcher" is a slippery slope.

Since I'd been hired shortly after this post, it was clear that we were on a slippery slope. However, my colleague's insights were honest and accurate: GitHub had been wildly successful without user research.

In its early days, GitHub relied heavily on the experience and intuition of its many talented engineers and designers. The creative combination of people, vision, ability, and hunger to build a product that solved their problems with version control, code hosting, and collaboration was a huge hit. Developers, especially those from the Rails community, shared similar needs, enthusiastically signed up, and started using and paying for GitHub.

Powered-by Luck ...

Without research all you have is luck. –@sboak

Early product success is exciting and terrifying because if the growth path is up and to the right, your audience will broaden, as will your challenges. Newcomers will arrive with experiences less like those of your earliest users, and less like your own, and they may begin using your product in ways you didn't intend. This can leave you scratching your head.

Who are these people? What do they want to achieve? Why do their goals matter? How do we help them?

In GitHub's case, there was a distinct shift in growth and audience. By 2013, developers experienced with version control and Git signing up for accounts were far outnumbered by newcomers with limited experience who were hungry and expecting to learn both. The cachet of a GitHub profile and membership in its community brought many types of people through the door and into a product that is frankly pretty hard to figure out.

I asked participants in our first foundational study of new user experiences to draw GitHub's core features. This drawing is still one of my favorites; she used colors so deliberately and keenly to illustrate the confusion between branching and forking:

Why would you request someone "pull" in your changes?

What would a person think if you tried to change their project?

As your audience expands beyond early adopters, that initial luck can start to run out.

Blind Spots

All products have blind spots, and no amount of analytics and monitoring can account for the hidden variables "waiting to screw you." Even the best systems can gather data on unusual behaviors for a long time without signaling that something is amiss. The sheer volume of data can make it hard for even talented analysts to extract useful information, rendering powerful tools and visualizations less effective.

Gap from metric/graph to insight can be huge. –@randommood

Graphs are impressive and addictive. They depict what is happening (e.g., year-over-year growth, anomaly detection, etc.). However, in isolation, graphs can also be misleading and create a false sense of confidence in the data. No amount of graphing or intricate analysis can make up for fundamentally flawed data.

It's hard to believe in the results if you don't know why something is happening.

What v. Why

What? is not the same thing as Why? You need both.

Why is hard. Why requires you to step outside your comfortable universe and uncomfortably into experiences unlike your own. Why is time-consuming and awkward. Why can't be automated. Why requires you to look at the world through someone else's eyes. Why breaks our product design egos down with radical amounts of empathy. Why shows us what we don't know and what we didn't want to see. Why confounds and humbles us and is usually two steps ahead of What. Why can sink a product or push it over the mountain into success.

Measuring "Why?" is hard without watching someone use your product. The experience of interviewing someone or sitting with them as they navigate tasks provides intimate insights into human workflows, tools, and social experiences that are not otherwise easily attainable.

Watching someone use your product will change the way you see your product.

Human Instrument

We tend to think of instruments in terms of engineering challenges, data collection, and analysis. To surface blind spots, you must get out from behind the numbers and in front of people. Think of yourself as a human instrument: handcrafted, possibly artisanal, and brilliantly designed to measure socially meaningful questions with an open and compassionate heart.

Humans do really interesting things with software when they're confused and frustrated. They can become skilled at designing effective workarounds to mitigate bad design, and unwinding those patterns can be difficult.

When you talk with people who use the product you're building, consider these three things:

  1. Goals – People "hire" your product to help them achieve personal efforts and ambitions.

  2. Motivations – People are driven towards achievement. They have underlying reasons for behaving in particular ways. People want to succeed.

  3. Workarounds – When goals and motivations are out of sync with a product's user experience, people find ways to use bad design effectively anyway.

Change, even good change, can be frustrating. Adapting requires people to invest trust and time in replacing existing behaviors with new, aspirational ones.

*Read more about designing for core, new, and aspirational behaviors and learn from some of our mistakes with a big project.

Vocal Minority

When designing a product, especially one that needs to scale for thousands and millions of people, you must be increasingly deliberate in your design decisions. If you introduce complexity, you must also consider its effects on current and future behaviors; taking features away is hard.

There is always a vocal minority.

Listening closely and reacting to insights from the wrong audience can send you in the wrong direction. The vocal minority is often not the best representation of your customers; they simply generate the most feedback and are the easiest to reach.

Please don't take the easy path just because it's easy. Measuring hard-to-measure things is hard work, and it's hella fun, too.

Guarantee

At a time when we are awash in data, if you think in terms of human behaviors and get out from behind the numbers to talk with people, I guarantee you will become better at discovering, measuring, and solving complex problems.

*Related: Measuring Hard-to-Measure Things and video
