How Usability Testing Improves Microcopy

Kinneret Yifrah
Published in Prototypr
12 min read · Oct 7, 2021


Whenever I approach a UX writing project for a new digital product, I always research the subject, listen to and read anything and everything users have to say on the matter, find out what words they use (so I can use them myself), study the client’s goals and what they want to achieve with the product, and of course, familiarize myself with every button, field and word in it.

The writing phase starts after in-depth preparation, so that my choice of words and phrases will be as precise and helpful as possible and serve the product’s users.

So why even do usability testing after writing?

Because even though I did all my prep work, I still have to ask myself how the users will read the texts:

  • Will they understand?
  • Is it clear enough?
  • Maybe a synonym would fit better?
  • Is it too informal? Too formal?
  • Will they understand the meaning behind the category title?
  • Is the tooltip explanation enough, or did I make it too short?
  • And what if I missed a use-case, a screen, an unexpected option?

No matter how much research I do before writing, part of it will always — always — be based on my past experience, educated guesses and even intuition. That’s just the way it is, and that’s fine. Every digital product is a brand new world waiting to be discovered.

I eventually found out that the only way to gain more certainty is through usability testing, after the microcopy has already been incorporated into the product (or pilot or MVP).

In other words:

We’ll know if we chose the right words only after seeing how real users handle the microcopy we wrote.

And that’s precisely what usability testing is for.

Usability testing. Photo by UX Indonesia on Unsplash

So what exactly is usability testing?

Usability testing is usually done to test product design. Users are asked to perform certain tasks in the product and are observed as they attempt to complete them, mainly in order to identify what doesn’t work as planned. Advocates of the think-aloud approach believe that users should be asked to describe what they are trying to do and what they expect to happen as they do it, so that the testers can understand the reasons for their choices. Others think that talking biases how users behave, and that quiet observation is therefore the only way to go. When it comes to our needs as writers, we would obviously prefer that the users talk, and we’ll soon see exactly how that helps us.

How is usability testing done?

Usability testing can be done in designated usability labs (some have one-way mirrors, to let us observe users without affecting them with our presence; others use cameras aimed at the screen, or finger-movement tracking), but you can also run tests in an ordinary office, over Zoom, or even on the street with random users. It all depends on the means at your disposal, and on the product itself.

If you want to learn more about how usability testing is done, consult additional sources, because this post is about what we can learn from the tests, rather than the process itself.

The camera tracks finger movements and transmits the video to a screen for the testers to see. Photo by Mohamed Boumiza on Unsplash

Wait, why isn’t A/B testing enough?

1. Because you don’t need anything other than users for usability testing, not even a usability lab. Sure, it’s better to have one, but even the simplest usability tests (guerrilla tests) can teach us A LOT.

2. Because you can’t do A/B testing for every title, button and tooltip in the product, while usability testing can provide information on many components at once.

3. A/B testing (and heat maps, which are a kind of remote usability testing that doesn’t require interaction with users) can tell us whether users clicked or didn’t click, noticed or didn’t notice, used or didn’t use, but it can’t tell us why: what word confused them, or what phrasing could have made things clearer. In usability testing, where users speak out loud as they debate, think, get mixed up or comprehend, we can learn so much more about the microcopy we got right (or, sadly, didn’t).

Three recommendations before we hit the real stuff

1. Attend the testing in person

If the budget permits running a microcopy-specific usability test separate from the design test, that’s ideal. But that’s usually not the case… For the most part there’s only enough budget for one set of tests (if any), and it goes to design testing (there’s a way to manage even when there’s no budget or time for tests; I’ll get to it later).

At any rate, let the design team know in advance that when the usability tests take place, you want to be there throughout. It doesn’t always work; sometimes you have to remind them, insist, or casually mention how “this copy should really be tested during usability testing.” In other words: make every effort to be there. If the tests are done via Zoom, ask for the recordings (and make sure they actually record the tests).

It’s best not to ask product and design teams to pay attention to the language on your behalf, because they’re busy, and rightfully so, paying attention to the flow rather than the words. They would, for instance, notice if users are unable to use a control, but they probably won’t know how to detect the problematic or missing word, and that is exactly the word we want to find.

In this post, I will focus only on what we can learn from usability testing about microcopy, language, word choice and phrasing. I’ll talk about things I’ve learned from doing usability testing on a specific product, but everything I say here holds true for every product.

2. Take part in selecting the participants, get involved in the script

The ones planning the usability test will usually be the product and design teams. They are likely to make sure their sample is diverse in terms of demographics: gender, age, ethnicity, geographic location, etc.

We have to make sure that the sample is also diverse in terms of language perception: for example, since I write in my native tongue, Hebrew, I’d ask to invite participants whose native tongue is different (Arabic speakers or immigrants, in the case of Israel). And since I’m in my 40s, I would ask to include participants over the age of 65, as well as teenagers, who experience the product, and how it speaks to them, differently.

About the script: Usability testing follows a script, a prewritten set of scenarios and questions to ask the participants before, during and after they’ve used the product. You should take part in choosing the scenarios and add questions about whatever microcopy you might be unsure about:

  • What would happen when you click this button?
  • Where do you think this title would take you?
  • What did you understand from this text?
  • What do you think is the difference between the two categories?

3. If your company doesn’t do usability testing, just do it yourself — it’s easy

Even guerrilla tests (the ones you do with minimal planning and resources) can reveal some pretty important information, and they’re so easy to execute.

This is all you have to do:

  1. Set up a time and date for Zoom sessions with 5 people — the more diverse, the better. Set aside 30–60 minutes per person, with intervals of at least 15 minutes.
  2. Ask your users to launch the product and screen-share.
  3. Give them some tasks to perform with the product. It’s best if these are the main tasks or the ones you are least certain about.
  4. Ask your users to describe what they see and do (and if they’re not big on talking, just ask every once in a while).
  5. Ask about specific phrasing you’re unsure about, like the examples I gave in the script above.
  6. Record everything so you can give your undivided attention to the test in real-time; later you can rewatch everything and take note of specifics and phrasing.

True, there are better, more efficient ways of doing usability testing, but even such guerrilla testing will give you tons of value at zero cost and just a day or two of work.

How can usability testing help us improve microcopy?

1. Usability testing will let us know for certain if what we wrote is clear enough (or not at all)

After all the moving around, adding, editing, trimming, and listening to 17 different opinions, is our final phrasing clear enough?

The more complex our product or feature, the more present this question is, and the more times it’ll come up throughout the product. It’s inevitable, totally part of the job, and only usability testing can answer it.

Example A

Due to the complexity of the data on the one hand, and legal requirements on the other, we sometimes had to write super complex instructions. They were supposed to a) explain the data, b) restrict it, c) drive a specific action, and d) be short and concise. It took many iterations until I found a text that I hoped was clear, at least to anyone who took the time to read it.

During the usability testing we found out that users across the board did not comprehend the text, which led them to misuse the relevant controls. My mistake was being too specific in phrasing the instructions, which confused rather than helped users. The fix? Phrase the instructions from the user’s perspective (basic!). Instead of writing “only show me data that may fit X” (which, professionally, is the most precise phrasing), I changed it to “I’m interested in X.”

This was actually something I missed when I was writing the microcopy (what’s more elementary than writing from a user’s perspective?) and only the usability testing brought it up. Usability testing is a huge lesson in modesty, and you should grab it with both hands.

Example B

One of the screens had a rather complex tooltip, and we noticed that users stopped reading it after the first sentence. Had they continued reading, they would have found their answer, but they simply gave up and were left hanging. Once I noticed that, I recommended that we change the order and make the last line of the tooltip the first. Sometimes you have to give some context before you get to the bottom line, and sometimes the bottom line needs to come first. Only usability testing can tell us what’s right for our specific case.

Example C

Sometimes users miss an important explanation that we, as writers and designers, would never think could be missed. Then we get to the usability testing and, lo and behold, turns out that nobody reads it. Not only that, turns out nobody can find it, even when they truly look for it. It happened to me more than once. Or twice. Solutions can be visual (bigger font) or textual (incorporating the important information into another element that users do stop and read).

2. Usability testing will expose issues we never even considered

In some places, there’s nothing to be unsure about. They seem simple and unambiguous, and we feel like our phrasing is excellent. Then, come usability testing day: surprise!

Example A

One specific word that appeared in our instruction above a slider made users use it incorrectly. Some of the users said it explicitly: “it said X, so I thought…”

Example B

With a product that contains a lot of statistical data, using the word “average” in multiple places made users compare averages that were irrelevant to one another. Our conclusion was that any time the word “average” appeared, it should be accompanied by another word, even if that makes everything longer or bulkier (for instance: market average, yearly average, etc.), and whenever possible, to omit it altogether.

Example C

Our dropdown menu had a default option of “All categories”, meaning that users did not have to select anything and could just leave it as is. Our reasoning was that we knew we had undecided users who would want to keep all their options open.

During the usability testing we noticed that even undecided users felt they had to make a choice. They simply didn’t understand the default option. To me, this was super surprising, because “All categories” is completely standard phrasing in a completely standard dropdown, but that’s exactly why we were doing usability testing in the first place. So we changed the default to “I don’t know yet”.

Example D

The site had a set of filters where users marked their personal preferences and then got results. Our headline was “Results that fit your preferences”.

But usability testing revealed two things I didn’t know beforehand: first, that headline also appeared above free-search results, when no preferences were marked; second, due to technical limitations, there could be a scenario where the first results fit only (very) partially.

Either way, this promise of “Results that fit your preferences” didn’t always sit right, and only usability testing showed us that. We recommended that the title be changed to a one-size-fits-all: “X results found”.

3. Usability testing will bring out the right words

Example A

We had a headline that made me laugh out loud whenever users explained its meaning to us (don’t worry, they didn’t know; it was a Zoom recording). Everyone, without exception, used the same exact word to describe the data they were seeing, and it absolutely wasn’t the one I chose. Clearly, we had to change it, and luckily we knew exactly what to change it to. Easy!

Example B

In another instance we categorized items and named the categories. Users couldn’t figure out the difference between them and used other words to describe them. We made a note to go back to it as a team and see whether the categories and their names could be changed so that they would fit both the professional language and the users’ lingo.

Not everything you find during usability testing can be fixed right away. Sometimes there are professional or other constraints, but at least we knew we had a problem and that things weren’t working as we’d hoped. All that was left was finding a solution.

4. Usability testing will show us how users describe the site, for better or for worse

There’s nothing better than hearing your users and understanding directly what they think is important and how they say it. This way we can describe exactly what the product does and what its value is, and speak directly to users’ desires and hopes.

Two things were abundantly clear:

One — users were really enjoying the product.

They said things like “it was really fun to play around with the numbers,” “I’m going to dig deeper, it’s fun,” “fun to see it like that” and “fun to search this way”, which was all really surprising. Sure, we knew the product was great and would intrigue users, but at no point did we ever think users would describe it as “fun”. I’m not really sure how (or if) we’ll use this newfound knowledge, but it’s definitely interesting and changes the mood.

The second thing to stand out was that everyone talked about speed: “I ran through it, found everything in a snap,” “nothing to get confused about, just search, quickly,” “you immediately find what you were looking for,” “it was easy to find what I wanted,” “easy-breezy.”

Snap, easy-breezy — it would be interesting to incorporate these in the site’s text and see what happens.

5. Usability testing will show us where we did a good job

Usability testing can easily bum us out, because it pinpoints our missed shots, but it’s more than likely that most of the microcopy we slaved over won’t come up during the tests at all. Users will simply read it and move on, and that’s usually a good sign.

Example A

When users ask a question out loud and go to the tooltip for an answer, my heart skips a beat. Will it answer their question exactly? Will it be clear right off the bat, the first time around? Will they understand exactly what I want them to understand?

Then they read it, immediately understand and make the right choice without even pausing for a second, and I breathe a sigh of relief. So satisfying.

Example B

When users read a headline out loud and I can really hear that it’s precise, that it sits just right, that they understand exactly what they’re about to see next, I can mark a nice, big checkmark.

Example C — for writers in gendered languages

Because Hebrew is a gendered language, I decided (after much deliberation) that the primary CTA would be in the second person singular form, eliminating gender as much as possible, and that the instructions that followed would be in the second person plural form (which is gender-neutral). Although this meant the text wasn’t uniform, I figured it was the optimal way to call users to action while still making sure the many instructions were clear and simple. In any case, I wondered whether users would notice it and whether it might confuse them in some way, but it was all smooth sailing. If there were any negative effects, they were at a level these tests couldn’t observe, and you might need to plan a different test to detect them.

So don’t forget to pay attention to all those places where everything is ship-shape, just as planned, because usability testing is designed not only to expose what doesn’t work, but also to validate our assumptions and remind us that, at the end of the day, we really are pretty good at what we do :)


Microcopy expert and UX writer. Author of “Microcopy: The Complete Guide” (the book and the digital course on Udemy). Helps UX pros make users’ lives easier.