What is Atomic UX Research?

Daniel Pidcock · Published in Prototypr · May 16, 2018

Atomic Research helps you organise knowledge in an infinitely powerful manner.

Super artistic photo of my laptop screen showing Atomic Research cards in the Glean.ly tool

In short, Atomic Research is the concept of breaking knowledge down into its constituent parts:

The Atomic Research model — a funnel from data to decisions, then around again
  • Experiments “We did this…”
  • Facts “…and we found out this…”
  • Insights “…which makes us think this…”
  • Recommendations “…so we’ll do that.”

Breaking knowledge down like this allows for some extraordinary possibilities.

Watch me explain Atomic UX Research

If you prefer not to read, watch me talk in detail about Atomic UX Research and best practices at User Research London 2022 (30 mins):

Daniel Pidcock explains the concept of Atomic UX Research and best practices (30mins)

How it started


Last year I was working for a FTSE 100 tech company. The issue we were trying to solve was how to store and distribute UX learnings in a way that everyone in the business could use and benefit from.

As it stood, the UX team, BAs and PMs would run experiments, then write up what they learned and how they used that knowledge. These write-ups were normally produced as PDFs, Google Docs or slide decks, and then filed away in Google Drive.

That was all fine until someone else came to work on a feature and needed to find out what we already knew; it was hard to reuse those findings for another project.

Sound familiar?

We asked: “What if, instead of documents gathering dust in files and folders, our UX knowledge was in a searchable and shareable format?”

Easy, right? Instead of putting our research into PDFs, we'd put it into some kind of online repository, maybe a wiki of some kind?

I started researching the repositories out there for something we could use to make our research taggable and searchable. A few systems claimed to do this, but it became obvious that they were all aimed at smaller companies doing small, self-contained projects. The categorisation and search just weren't up to dealing with large-scale projects.

Research is often very specific to the area you are researching. That seems like an obvious, even pointless, statement, but it is important. Say I ran some research and one of the outputs was that green was much more effective than red on the call to action. That means it is more effective in that very particular area, or for a certain persona, or both. It doesn't mean we should change the colours of the whole UI.

The repositories out there either didn't allow you to give proper provenance to the research, or went the other way: tiny walled gardens with no way to discover and utilise research outside them, no better than PDFs in a shared drive.

What we needed was the ability to:

  • Record and properly categorise research
  • Search in an easy but flexible manner
  • Understand the provenance, environment and limitations of research
  • Discover patterns
  • Support an evidence based approach

I was talking to a colleague about this problem and — as UX designers are wont to do — we started talking about breaking what research is into simple bits. I have to give a lot of credit to this colleague David Yates, as I think it was he who started talking about how you could separate data from the insights.

As we talked we realised we could break an item of knowledge into three or four parts. This idea of ‘lots of small signals leading to larger discoveries’ made me think of Atomic Design.

As we discussed how this could work, and the benefits of breaking down research like this, I knew we had discovered something important.

So important it had been done before! Ever heard of the DIKW hierarchy (data, information, knowledge, and wisdom)? We’d accidentally invented an existing and well respected scientific data model that is at least 60 years old!

Still, that just confirmed to me that this was a good way to look at UX research. Going around saying ‘DIKW’ (which most people seem to pronounce ‘dickwee’) isn't brilliant, and our model was slightly different, so I believe Atomic Research is a better term. When I compare it to Atomic Design, people au fait with that method tend to get it.

I’ve been using the Atomic Research principle for nearly a year now and find it an incredibly useful way to think about product knowledge.

So what is Atomic Research?

Atomic research in practice

Atomic research in practice — how it looks with real knowledge.

Experiments — “We did this…”
The experiments from which we have sourced our facts.

Facts — “…and we found out this…”
From experiments we can glean facts. Facts make no assumptions; they should never reflect your opinion, only what was discovered or the sentiment of the users.

For example: 3 in 5 users didn’t understand the button label.

Insights — “…which makes us think this…”
This is where you interpret the facts you have discovered. One or more facts can connect to create an insight, even if they come from different experiments. Some facts might disprove an insight.

For example: The language used on the buttons isn’t clear.

Recommendations — “…so we’ll do that.”
Recommendations are your ideas for how to use the valuable insights you have gleaned from the facts. The more insights connecting to a recommendation, the more evidence you have of its value. This helps when prioritising work.

For example: Let’s add icons to the buttons.
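Tying the examples above together, here is a minimal sketch of the four atoms as linked records. This is purely my illustration of the idea; the class and field names are assumptions, not an official schema or the Glean.ly data model:

```python
from dataclasses import dataclass, field

# Sketch of the four Atomic Research "atoms" as linked records.
# All names here are illustrative assumptions, not an official schema.

@dataclass
class Experiment:
    description: str          # "We did this…"

@dataclass
class Fact:
    statement: str            # "…and we found out this…"
    source: Experiment        # every fact keeps its provenance

@dataclass
class Insight:
    interpretation: str       # "…which makes us think this…"
    supported_by: list = field(default_factory=list)  # facts, possibly from many experiments

@dataclass
class Recommendation:
    action: str               # "…so we'll do that."
    based_on: list = field(default_factory=list)      # insights

test = Experiment("Usability test of the checkout page")
fact = Fact("3 in 5 users didn't understand the button label", source=test)
insight = Insight("The language used on the buttons isn't clear", supported_by=[fact])
rec = Recommendation("Add icons to the buttons", based_on=[insight])
```

The key property is that each atom links to its source but stands alone: you can follow a recommendation back through its insights and facts to the original experiment, or attach new facts to an existing insight later.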

Multiple sources mean better decisions

One of the first benefits I noticed from this method is how more than one fact can support or refute an insight, and more than one insight can support or refute a recommendation.

The more facts that ultimately lead to a recommendation, the more confident you can be about that route forward.

A fact can be understood in multiple ways, and there could be several recommendations to draw from an insight. Therefore one fact can have many insights, and an insight can have many recommendations.

It doesn't matter, as long as we are testing them, generating more evidence and proving which ones are correct.

As more evidence comes in, it can be linked up to prove or disprove an insight.
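These many-to-many links are what make confidence measurable. As a hedged sketch (the facts, insights and link structure below are invented examples; a real tool would store this as a graph), you could count how many distinct facts ultimately lead to each recommendation and use that when prioritising:

```python
# Sketch: counting evidence chains to gauge confidence in recommendations.
# All facts, insights and recommendations below are hypothetical examples.

fact_to_insights = {
    "3 in 5 users didn't understand the button label": [
        "Button language isn't clear"],
    "Support tickets frequently mention the 'Submit' button": [
        "Button language isn't clear",
        "Users distrust the checkout flow"],
}

insight_to_recommendations = {
    "Button language isn't clear": [
        "Add icons to the buttons", "Rewrite button labels"],
    "Users distrust the checkout flow": [
        "Add reassurance copy near the button"],
}

def evidence_count(recommendation):
    """Number of distinct facts that ultimately lead to this recommendation."""
    return sum(
        1
        for fact, insights in fact_to_insights.items()
        if any(recommendation in insight_to_recommendations.get(i, [])
               for i in insights)
    )

print(evidence_count("Add icons to the buttons"))            # 2 facts support it
print(evidence_count("Add reassurance copy near the button"))  # 1 fact supports it
```

A recommendation backed by two independent facts isn't proven, but it has a stronger claim on the roadmap than one backed by a single observation.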

The best thing… This works across multiple experiments!

Because what we discovered is linked to, but not reliant on, how we discovered it, and that in turn is linked to but not reliant on what we did next, we have the opportunity to use facts from several experiments to support a single insight. We can take insights from anywhere to create a recommendation. We can spot patterns of results from anywhere in an organisation to guide us into the future.

It might be that the experiment that first led to an insight is long forgotten, no longer relevant. But evidence from other sources continues to support that insight, bolster it and enable it to remain a truth.

The results are no longer held in the little bubble of a specific piece of research, and I can give as much evidence as possible to support major decisions.

Research is no longer linear

Once we have come to a recommendation, that needs to be tested too.

Let's say we have an insight that says people don't understand our buttons. One recommendation might be to add icons to those buttons. I ran a user test that seemed to suggest the icons aided comprehension; now I want to run a split test on the live system. The data comes in and shows that in reality this didn't work. Damn!

But the good news is I can use that data to disprove my recommendation while leaving the insights that led to it intact. In fact, the failed test's data might weaken some insights but actually strengthen others; it might prove that another insight is the correct one.

It certainly helps us get a clearer picture of how to improve our products moving forwards.

Traditional reporting methods are stuck in a moment in time. “Our research told us this…” might have been true when that document was written, but it's unlikely to have been updated when the finding was discovered to be incorrect a few quarters later.

Holding insights separate and independent of their sources means they can be constantly re-tested and allowed to live and die by the evidence.
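One way to picture insights living and dying by the evidence is to keep both supporting and refuting links on each insight and derive its standing from them. This is a hypothetical sketch under my own assumptions (a naive count of evidence either way), not how any real tool scores insights:

```python
# Hypothetical sketch: an insight's standing derived from live evidence links,
# so new test results update its status instead of freezing it in a report.

class Insight:
    def __init__(self, interpretation):
        self.interpretation = interpretation
        self.supporting = []   # facts that back this insight up
        self.refuting = []     # facts that undermine it

    def status(self):
        # Naive rule for illustration: compare counts of evidence either way.
        if not self.supporting and not self.refuting:
            return "untested"
        if len(self.supporting) > len(self.refuting):
            return "supported"
        if len(self.refuting) > len(self.supporting):
            return "refuted"
        return "contested"

insight = Insight("The language used on the buttons isn't clear")
insight.supporting.append("3 in 5 users didn't understand the button label")
print(insight.status())   # supported

# A later split test adds contradicting evidence; nothing else is rewritten,
# the insight's status is simply re-derived.
insight.refuting.append("Split test: icons didn't improve comprehension")
print(insight.status())   # contested
```

The point is that the insight record never has to be "finalised": each new fact just attaches to it, and its status is recomputed from whatever evidence exists today.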

This leads to what I think is the most important benefit:

Atomic research forces evidence based thinking

I can’t create a recommendation if I don’t have insights that support it.

I can’t create insights without facts.

The more sources I have for each one, the more confident we can be about the recommendations.

Of course I can cheat and say that a fact supports my insight (or just be misguided), but it will be obvious to anyone looking that it doesn’t.

Atomic Research gives provenance to my assertions.

Tools to practice Atomic Research

I’ve been using Atomic Research in my own work for nearly a year now.

For most of this time I've been doing this manually: literally sticky notes on whiteboards with hand-drawn lines. This is useful for playing with findings in a small way, but it's temporary and not very shareable.

A step up is to use mind-mapping tools such as draw.io. This lasts longer but is still very time-consuming and massively limited.

It was obvious that for this method to have real value it needed a proper tool.

I started working with developer David Barker to help me build this out as a working tool, and we're hoping to release it publicly soon under the name Glean.ly.

Update on the Glean.ly atomic research repository:
I'm happy to announce that Glean.ly has been released and you can sign up now for a 30-day trial!

See it in action here:

15-minute Glean.ly demo video

Updated 2 June 2021

I’ve changed the term ‘conclusions’ to ‘recommendations’ as we’ve found this more approachable and accurate. Recommendations are based on the current evidence and should be reviewed as new data is connected, whereas conclusions sounds very concrete and final.

If you prefer different terminology, whether for experiments, facts, insights, or recommendations, I think that is great. It should work for you and your organisation.

I'm working on an article that will cover all I've learned about Atomic Research over the past three years, so watch this space and please follow to be alerted when it's ready.

Further reading:

Videos

Atomic UX Research and best practices at User Research London 2022 (30 mins):

Daniel Pidcock explains the concept of Atomic UX Research and best practices (30mins)

My 2018 talk at UX Brighton was the first time I spoke about atomic publicly. Watch the 20min presentation here:

Daniel Pidcock explains the concept of Atomic UX Research (20mins)

Atomic research for agencies (45 mins)

2021 talk about atomic research for Agencies — covers a lot of the same as above but with some specifics about agencies.

English
Atomic research in the European Commission — a UX case study >>
Foundations of Atomic Research, Tomer Sharon >>
Atomic Design and Atomic Research: could the combination be nuclear? >>
Reports are shit — School of Product >>

Português (Brasil)
Atomic UX Research: como armazenar e distribuir os aprendizados de UX >>
Alura Podcast: Gleanly Researcher Larissa speaks on Atomic UX Research >>

Español
Video en español de Atomic UX Research >>
¿De que hablamos cuando hablamos de insights? >>

Svenska
Allt du behöver veta om Atomic Research >>


User Experience designer - Advocate of accessibility and atomic UX research.