Notes from Industry

How Is AI-Centered Product Design Different?

Amanda Linden
Published in Prototypr
15 min read · Apr 13, 2021


People who are interested in AI often ask me what an AI designer is, and I’ve attempted to answer that question in this article. I wanted to go a step further by helping designers and product teams understand how designing AI-based product experiences is different from traditional product design. Here is what I’ve learned over the last two years of managing AI design and innovation teams about how AI-first product thinking is evolving the traditional product design process.

The Last UI Design Transformation: Web to Mobile

When the transition from web to mobile occurred (more than a decade ago) I remember feeling confused as a designer. What exactly did mobile-first design really mean? How was designing for mobile going to be different from designing websites? Initially, product leaders looking at the oncoming transformation to mobile expected that only some things would be done on your phone. You’d check email, do quick tasks, maybe get directions, but longer tasks like doing your taxes or watching a film seemed far-fetched.

Today we know that pretty much everything people do online can be done on the phone, and that many people around the world use only a phone to access the internet. The transformation to mobile meant that every task we did on the web or offline could now be designed for mobile. It wasn’t enough to make websites smaller or viewable at a different form factor though: mobile-first design meant leveraging the capabilities mobile offered and using them to reinvent or even revolutionize the way current tasks were completed.

On a smartphone you could capture images, your exact location could be identified, and you could touch the screen. With those 3 key product differences, many processes could be reinvented or even disrupted. Instead of typing in financial information (data entry is a bigger pain on the phone than a computer), a person could take a picture of a receipt or a W2, making the process of submitting an expense report or even income taxes much easier than before. A person no longer needed to print directions from the web, because a maps app could tell them exactly where to go based on where they were, and the destination they specified.

Having worked on AI-based product experiences over the last two years, I now look back at that transition and think: as complicated and revolutionary as it seemed at the time, designing for AI is an order of magnitude more complex. Reinventing experiences based on three new capabilities is much less daunting than designing for AI, because AI brings an ability to see, speak, and understand. And instead of a touch screen (still a 2D, screen-based world), AR and VR experiences will create entirely new UI paradigms in three-dimensional space.

The User-Centered Design Process for AI

Most of us know the steps in a traditional product design process. First, you take the time to understand the user. You do exploratory research to learn how they currently solve a problem, understand what the pain points are, and find where the opportunities are to improve the experience. You then work to define the goals, principles, and success criteria for a new solution. The team ideates, creating a set of possible solutions that meet the success criteria. Then you build a lightweight prototype to test a solution, gathering feedback and confidence before moving to a higher-fidelity prototype or launch.

This is a diagram of the standard product design process:

Traditional Product Design Process: Empathize, Define, Ideate, Prototype, Test
Image by Author

The design process for AI is similar, but with some important steps added. Some of these steps apply only to AI-based product experiences, and others are critical for AI, but can add value in building other products as well.

User centered design process for AI: Empathize, and also imagine a better future; define and assess AI Capabilities; ideate and create data & feedback loops; prototype and also conduct a negative impact analysis.
Image by Author

In designing for AI, when you empathize with users, you also need to think carefully about the future AI/human collaboration you want to create, and the future you want to see. When you define the requirements of the project you also need to define the AI capabilities you hope to leverage and get an understanding of whether they are yet mature enough to use. When ideating, it’s not enough to build a tool that solves the use case, you need to think about how AI is going to get the data it needs and learn over time. And finally, when you begin to build out the idea, you’ll need to take the time to think through how to minimize unintended negative impacts of this tool existing in the world, or actions of bad actors using the tool.

Here are each of the AI-specific steps in a little more detail.

Empathize: Imagine a Better Future

In addition to understanding user needs and empathizing with where existing pain points lie, designers for AI-based experiences need to deliberately and consciously decide what kind of future they are looking to create. They need to define the specific behaviors and new outcomes they are looking to achieve.

If I’m honest, when I first designed mobile experiences I didn’t spend a lot of time thinking about which parts of mobile technology were good versus harmful to the world. I never questioned whether the internet was a good idea when I built web applications. I spent my early career as a designer operating from the mindset that technological progress was by nature positive progress. Over time I think we’ve all come to understand that some of the products we’ve built have had both positive and negative societal outcomes, and that we have a responsibility to work toward maximizing the positive outcomes of technology and minimizing the negative ones.

What is important to understand, though, is that with each new technological revolution, the stakes become higher as the technological capabilities increase. With the mobile revolution, we saw an entire world come online, which brought huge benefits. But we also saw society turn to screen-based interactions in ways we did not always intend. AI has the potential to transform our society in a drastically positive way, or in a very negative way. Because technology is advancing parabolically, each new iteration of the tools we are creating is ever more powerful, and therefore the designers building them need to be increasingly deliberate and mindful.

We are also at a pivotal point in human existence where we are increasingly encountering the effects of global warming, overpopulation and a widening gap between the rich and the poor. Our very future is threatened. Now more than ever it’s important to look at each product we are making and think about how we can help to maintain human life on earth. We know we are going to have to drastically change existing processes and behaviors to work better for the environment. It’s important to think about how we want to use AI to move society forward for the sustainability of ourselves and the planet and to make global society more equitable.

If you are building an AI-first shopping experience, the usual design process would involve understanding the goals and pain points that current shoppers feel. A more responsible product designer would take it a step further, thinking about consumerism and its effect on the planet and on global society. They would imagine ways to give everyone a more equal voice and a sense of well-being. They would consider how the product they are designing might serve everyone, be safe, and protect people’s privacy.

Designers must ask themselves how new AI-based product experiences might promote economic opportunity, security, and narrow gaps in income inequality. How might the products we build promote environmental sustainability, helping us reverse the effects of global warming?

At a high level, we as AI designers need to deliberately consider and define:

  • What is the outcome we want to achieve in the world?
  • What emotion do we want people to feel?
  • How might we enable positive action and behavior from the people using our tool?

Once you have a clear definition of the future you want to contribute to, it’s important to document those goals alongside user needs and technological constraints. Your working team can then use these criteria to shut down ideas that aren’t going to take us in the right direction.

I would argue that we are at a point in society where it’s not enough to create things that solve the problem but are neutral to human progress. We designers need to proactively advocate for the ideas that make the biggest difference toward the future we are looking to achieve.

Define Product Requirements: Assess AI Capabilities

In the same way that designers during the mobile revolution began to look at every existing task and think about how it would be done on a mobile device, we are now at a point in the advancement of AI where we can look at every task that currently exists and think about how it can be done using AI.

As mentioned earlier, the task today is much more complex than it was when we transitioned from web to mobile. It’s important to ground ourselves in the things AI is already good at and to use our design thinking to help engineering and research teams prioritize specific use cases for improving AI in the future.

Here is a rough list of some of the things that AI is already good at:

  • Natural Language Processing (providing translations, captions, suggesting edits or post content)
  • Computer Vision (understanding video, photos, identifying objects in the environment)
  • Speech & Conversation (ability for AI to speak to users, ask and answer questions, provide assistant services)
  • Pattern matching (seeing things as a set and then knowing whether a new item does or doesn’t fit in the set. Identifying a harmful photo, looking for keywords in text, organizing information into categories)
  • Making predictions based on existing data (suggested pricing, content you might like)
  • Providing answers and information to questions where data already exists
  • Completing basic tasks (turning on a video, setting a timer, playing a song)
  • Contextual understanding (knowing you are at the store versus in your home)
  • Creative tools & effects (ability to map clothing to your body, make your body do movements from another video, giving you a professional looking office background in a virtual meeting)

AI capabilities are being developed independently by several companies and institutions around the world, so the list of AI capabilities is ever growing, and use cases within this list are expanding every day.

With these capabilities in mind, the product team can then ask themselves where are the current places of friction in the process. Of particular interest are frictions that involve basic perception, cognition, or pattern matching. What AI capabilities could help reinvent the current process into a totally new way of accomplishing the same goal? The team can then test these areas of friction against the growing list of AI technologies. Is there a way to rethink these areas given existing or near future technology? Where might AI disrupt or reinvent existing ways of doing things?
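One concrete way to run this assessment is a simple worksheet that maps each friction point to a candidate AI capability and a rough judgment of its maturity, so the team can prioritize what to build now versus what to raise with research teams. The entries below are hypothetical examples, not from any specific product:

```python
# Hypothetical capability-assessment worksheet: friction points mapped
# to candidate AI capabilities, with a rough maturity judgment.
frictions = [
    {"friction": "typing in receipt data by hand",
     "capability": "computer vision (OCR)", "mature": True},
    {"friction": "finding the right support article",
     "capability": "natural language processing", "mature": True},
    {"friction": "previewing clothing on your own body",
     "capability": "creative tools & effects", "mature": False},
]

# Prioritize frictions a mature capability can already address;
# the rest become use cases to advocate for with research teams.
ready_now = [f for f in frictions if f["mature"]]
for item in ready_now:
    print(f"{item['friction']} -> {item['capability']}")
```

Even a lightweight table like this keeps the ideation grounded in what the technology can actually do today.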

Ideation for AI: Designing for Data and Feedback Loops

Once you are grounded in the people problem you are solving and the future you are looking to create, you can begin the process of ideation. A key way that AI-based products differ from the tools of the past is that AI needs data to give accurate recommendations, and it needs user feedback to improve those recommendations. If those two pieces aren’t in place, your product will fail.

As I type this document using Google Docs, the AI inside the tool is gathering data from the sentences I create. When I start to create a string of words that the AI has seen before in other docs, it provides me with suggestions of the remaining letters in a word, the remaining words I may want to use in a sentence, etc.

The AI is able to do this because it has access to all the data coming in from all the other Google Docs in existence, as well as libraries of written content from the internet at large. Every new document being created gives the AI more of the data it needs to be successful. We find the Google Docs tool useful, so we are comfortable giving up our data in exchange for the ability to use it.

Sometimes the suggestions the AI offers are accurate and I take them. I press Tab, letting the AI finish the sentence for me and saving me a bit of time. When I take the suggestion, the AI learns that the suggestion was a good one and keeps offering it going forward, to me and to other users. Sometimes (often) the suggestions are not good, and I disregard them, continuing to finish the sentence on my own. Here I am teaching the AI that the words I really wanted were different. The AI can watch what I type instead, and learn to add my words to the set of possible suggestions for others in the future.

Because millions of people around the world are using Google Docs, the tool has a wealth of data. Because Google’s designers have made the way the AI offers suggestions nearly frictionless, it doesn’t interrupt me as I work. There is an easy way for me to say “yes” or “no” to a suggestion and, even better, to give the AI feedback on what I wanted instead. The AI is continuously learning and improving without creating additional user friction, and it is providing a useful service in exchange for the data it needs.
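The accept/reject loop described above can be sketched in a few lines. This is a toy illustration, not how Google Docs actually works; the class, the scoring scheme, and the example phrases are all hypothetical:

```python
from collections import defaultdict

class SuggestionModel:
    """Toy model of a suggestion feedback loop: accepting a suggestion
    reinforces it; typing something else penalizes the suggestion and
    teaches the model the text the user actually wanted."""

    def __init__(self):
        # prefix -> {completion: score}
        self.completions = defaultdict(lambda: defaultdict(int))

    def suggest(self, prefix):
        candidates = self.completions.get(prefix)
        if not candidates:
            return None
        # Offer the highest-scoring completion seen so far.
        return max(candidates, key=candidates.get)

    def record_feedback(self, prefix, suggestion, user_text):
        if user_text == suggestion:
            # User pressed Tab: reinforce this completion.
            self.completions[prefix][suggestion] += 1
        else:
            # User typed something else: penalize the suggestion
            # and learn the text the user actually wanted.
            if suggestion is not None:
                self.completions[prefix][suggestion] -= 1
            self.completions[prefix][user_text] += 1

model = SuggestionModel()
model.record_feedback("thank you for your", None, "time")       # learned from typing
model.record_feedback("thank you for your", "time", "time")     # suggestion accepted
model.record_feedback("thank you for your", "time", "time")     # accepted again
model.record_feedback("thank you for your", "time", "patience") # rejected; learn "patience"
print(model.suggest("thank you for your"))  # prints "time"
```

Note that both paths through `record_feedback` teach the model something: the design goal is that ordinary use, not extra work, is the feedback.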

As you think through ideas and solutions to the problems you are facing, it’s important to know what data the AI will need to be successful. If you don’t have access to the data, how can you build into the product a way for users to give you the data that the AI needs? What can you offer of real value to users so that they are willing to give the data? There is a whole new level of persuasive design thinking needed to solve these problems so that AI tools can be successful.

Even when AI has access to data, it still needs continuous feedback from users to keep learning and improving. It’s important that we design feedback loops without introducing overhead or work for the user. Our interaction design choices can make feedback loops frictionless, or make the product feel cumbersome. Ideally, we are able to facilitate behavior that allows the AI to learn without interrupting the user. AI that is built using stale or biased data will be harmful to users. AI that doesn’t learn will not provide value over time.

Transparency & Control

As people become more concerned with giving away their data (and with personal privacy in general) AI designers will need to be thoughtful about giving users greater transparency and control over the way AI is used. Today most users don’t really know how AI is used. The media and science fiction paint AI in a scary light. Once AI is implemented well, it’s seen as just another tool, but before it’s understood, it is feared. It’s for this reason that as AI designers we have an obligation to provide better awareness and transparency in the AI tools we create.

Ask yourself the following questions:

  • What do people need to know about how their data is used or stored in this product?
  • How can you work to be transparent and explicit about what data is being collected? What agency should the user have over the data being collected? Should they be able to turn the AI on and off, or tell it specifically which types of data are and are not okay to gather?
  • If a user decides to limit the data they share, how might you create a graceful degradation experience, such that the product doesn’t feel broken to them?
  • How might that degradation actually teach the user the value of AI such that they feel the value and opt in again?
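One way to make these controls concrete is a settings object that gates both what the AI shows and what it collects, degrading gracefully rather than breaking when the user opts out. Everything below (the names, the fields, the canned completion table standing in for a real model) is a hypothetical sketch:

```python
from dataclasses import dataclass
from typing import Optional

# A canned completion table stands in for a real model in this sketch.
COMPLETIONS = {"thank you for your": "time"}

@dataclass
class PrivacySettings:
    # Hypothetical per-user controls matching the questions above.
    share_typing_data: bool = True   # may the AI learn from my text?
    show_suggestions: bool = True    # may the AI suggest completions?

def complete(prefix: str, settings: PrivacySettings) -> Optional[str]:
    # Graceful degradation: with suggestions off, the editor still
    # works; it simply offers no completion instead of feeling broken.
    if not settings.show_suggestions:
        return None
    return COMPLETIONS.get(prefix)

def record_typing(prefix: str, typed: str, settings: PrivacySettings) -> None:
    # Respect the data-collection toggle before any learning happens.
    if not settings.share_typing_data:
        return  # nothing is stored or sent
    COMPLETIONS[prefix] = typed  # toy stand-in for a learning step

print(complete("thank you for your", PrivacySettings()))
print(complete("thank you for your", PrivacySettings(show_suggestions=False)))
```

Keeping display and collection as separate toggles lets a user keep the benefit of suggestions while declining to contribute data, or vice versa.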

One can imagine a situation where Google Docs offers you the ability to switch off the AI. The AI would then not provide you with suggestions to finish your sentences, and it might become less effective at spelling and grammar checking over time. The users who turned the AI off would not be giving the system access to the data it needs to learn. If too many people turned the feature off, the AI would not be able to function well for anyone.

One of the reasons giving users transparency and control is so important is that it puts the right incentives in place for product design teams. If users have a choice of whether to turn AI on or off, designers will work toward designing the most useful and valuable tools possible, such that it’s worth letting the AI learn from you.

Prototyping & Negative Impact Analysis

Once you have a solution or set of options to test, the next step in the traditional product design process is to prototype the solution so that it can be tested and vetted more thoroughly. A prototype might look like a functioning app, or be a set of sketches you put in front of usability test participants to validate your idea.

I mentioned before that the stakes are steadily increasing in the technological products we are putting on the market. It’s for this reason that taking the time to think through the potential negative consequences of a product concept is important. A negative impact analysis can be done by the working team or, even better, by a group outside the immediate team who can provide a fresh perspective. The purpose of this exercise is to identify potential unintended consequences that might come from this product coming to market.

Some questions to consider in a negative impact analysis:

  • Does this idea adhere to the privacy expectations users have?
  • Does it protect user data?
  • Is the AI robust enough to ship? Is the technology stable?
  • Have we provided people with clear information about how their data is used and stored? Have we provided the right opportunities for people to control whether and how their data is used?
  • Is this product designed with inclusivity in mind? Does it treat all people equally and reduce social inequality?
  • Are there mechanisms in place to ensure it’s deployed and used responsibly?
  • Is there a way to roll back the product or feature if needed?
  • How might this product be used in unintended and harmful ways?
  • How are we protecting people from bad actors? Is there a way for users to notify us of a problem?
  • How might the product increase harm?
  • How might the product discriminate against protected classes or historically marginalized people? How might the product create or exacerbate inequality?
  • How might smaller businesses be disadvantaged as compared with larger corporations?
  • How might the product negatively impact or be used to exploit people with low socioeconomic security or status?
  • How might the use of this product cause behavior that will negatively impact the environment?

These questions are a good starting point, though the questions you ask will depend on the product you are building. When I talk to teams about conducting a negative impact analysis, I typically get two types of concerns. The first is, “We won’t likely be able to envision all the possible negative impacts of the products we build, so is it really valuable?” The second is, “In the early prototype phase it feels too early to think about negative impacts. Don’t you want to finalize the solution before you start to question it?”

My answer to both is no. It’s true that you won’t have full line of sight into how your product will be used in the future, but the exercise helps you find many potential pitfalls, making your idea much stronger than it otherwise would have been. As for the second concern, I’d rather think through the potential downsides in the early stages of the development process, before we have invested too much time and money in a solution. Otherwise, it will be more difficult and costly to tune the experience based on what you uncover down the road. Better to build in this thinking early in the process, and keep considering it as you move forward.

Building AI responsibly requires product design teams to think specifically about the AI in their tools and how it should behave, learn, and grow. Taking your team through the process of imagining the future you want to create, assessing AI capabilities, giving special consideration to data gathering and feedback methods, and analyzing the potential negative impacts of your products will ensure that the tools you build are both more successful and better for the world. Whether you consider yourself an AI designer or not, most product designers either are working on AI-based experiences or will begin to soon. In the future, humans and AI will collaborate on almost every task, and as an AI designer, you are shaping that collaboration. Many of these steps in the design process don’t need to be limited to designers alone (and can be shared with your cross-functional team), but we can take an active role in shepherding this AI-centered design process consistently. As designers, we are responsible for the products we create, and with the advent of AI the stakes are ever higher and the potential for positive impact on the world, ever greater.


Director of Product Design at Facebook, previously Head of Design & Brand at Asana.