Google AI Product Design Guidelines

hanjing
4 min read · Oct 19, 2024


AI is well-suited for applications like:

  • Recommending different content to different users, such as movie suggestions. Because AI systems are probabilistic, indicating limitations and showing statistics can be helpful.
  • Predicting future events, such as weather events or flight price changes
  • Natural language understanding
  • Image recognition

Explainability

Explainability and trust are inherently linked. Calculating and showing confidence levels can be critical in informing the user’s decision making and calibrating their trust.

  • Communicate how certain the AI is in its prediction, e.g. the N-best most-likely classifications or a numeric confidence level (“80% confidence” or “Most likely Safe”). UXR questions: “On this scale, show me how much you trust this recommendation.” “What questions do you have about how the app came to this recommendation?” “How satisfied or dissatisfied are you with the explanation written here?”
  • Deeper explanations: share more detailed explanations of how the overall system works, and do this outside of the active user flow, e.g. behind a “Learn More” button or during onboarding.
  • Tie explanations directly to user actions
  • Use graphic-based indications such as data viz to show certainty.
  • Leverage relevant community groups to help users calibrate their trust through human-to-human connection
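As a sketch of the first point above, surfacing the N-best predictions with both a numeric and a verbal confidence level might look like this. The labels, thresholds, and `format_confidence` helper are illustrative assumptions, not part of any Google API:

```python
def format_confidence(predictions, n_best=3, threshold=0.5):
    """Render the top-N predictions with numeric and verbal confidence.

    predictions: list of (label, probability) pairs from a classifier.
    Verbal bands ("Most likely" / "Possibly" / "Uncertain") are assumed
    cutoffs chosen for illustration, not a standard.
    """
    def verbal(p):
        if p >= 0.8:
            return "Most likely"
        if p >= threshold:
            return "Possibly"
        return "Uncertain"

    # Keep only the N most probable classes, highest first.
    top = sorted(predictions, key=lambda lp: lp[1], reverse=True)[:n_best]
    return [f"{verbal(p)} {label} ({p:.0%} confidence)" for label, p in top]

print(format_confidence([("Safe", 0.82), ("Suspicious", 0.13), ("Malicious", 0.05)]))
# → ['Most likely Safe (82% confidence)', 'Uncertain Suspicious (13% confidence)',
#    'Uncertain Malicious (5% confidence)']
```

Pairing a number with a verbal phrase serves both users who anchor on percentages and those who find them misleading.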

Advocacy & Onboarding:

Explain how the product delivers new value rather than the underlying technology, e.g. “available anytime” or “adaptive to your style.”

Ask users for permissions early.

AI Product Case Study

Trust & Privacy

Users can be surprised by their own information when they see it in a new context. To build trust we could bring in explainability, such as:

  • Explaining the connected data sources, pointing to third-party sources users already trust to jump-start initial trust in your product.
  • Transparency: tell users where their data is used and explicitly share which data is shared, to eliminate suspicion.
  • Opportunity to explore: let users try the product before asking them to share data, and make such decisions reversible where possible.
  • Share limitations: prompt the user to check the output in low-confidence or high-stakes situations.
  • Explainability: share the reason why a certain recommendation is made.
  • Error handling: address errors through a feedback loop and give users the opportunity to teach the system expected behavior (feedback + control).
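The “share limitations” point can be sketched as a simple gate: automate only when confidence is high and stakes are low, and otherwise prompt the user to review. The threshold and function names here are illustrative assumptions:

```python
CONFIDENCE_FLOOR = 0.75  # assumed cutoff; tune per product and risk tolerance

def next_action(confidence, high_stakes):
    """Decide whether to auto-apply a recommendation or ask the user to review.

    High-stakes situations always go to the user, regardless of confidence.
    """
    if high_stakes or confidence < CONFIDENCE_FLOOR:
        return "review"  # user confirms before anything happens
    return "apply"       # low-stakes and high-confidence: act automatically

print(next_action(0.92, high_stakes=True))   # → review
print(next_action(0.92, high_stakes=False))  # → apply
print(next_action(0.40, high_stakes=False))  # → review
```

The key design choice is that stakes override confidence: even a very confident model should defer to the user when the cost of being wrong is high.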

Use multi-modal design to show explanations in various formats and media. Leverage other in-product moments, such as onboarding, to provide additional explanations of the AI system. UXR question: “What would increase your trust in this recommendation?”

Error Handling

Any AI system will make bad predictions at some point. The definition of an error is often tied to people’s expectations.

  • Manual controls: provide manual controls for when the AI fails. Transparently communicate the AI’s limitations and offer a way for manual intervention. Let users supervise rather than fully automate, to help them build confidence in the system. In high-stakes situations, give users more control over the system, such as a “Review” action instead of “OK”.
  • Automation: start users with the lowest level of automation and progressively dial it up. Choosing the right level of automation depends heavily on your users, product needs, and context.
  • Offer high-touch customer support, and let users know how their input will influence the AI. Example feedback messages with progressing timing and impact: “Thank you for the feedback” → “This helps us improve future music recommendations” → “Our recommendations won’t include XX” → “We’ve updated your recommendations. Take a look”
  • Account for timing in the user journey.
  • Feedback loop: allow users to give guidance or correct data, labels, or inappropriate recommendations, which feeds back into the model to improve the dataset or alert the team to the need for additional training data. Effective forms include thumbs up/down, hiding unwanted content, flagging to report, or manually reporting a problem.
  • Auto-correction: offer auto-correction on anticipated user input, guessing intent and fixing typos.
  • Lack of available data: if the system can’t fulfill a given task due to uncertainty constraints, be constructive by providing alternative paths forward, such as suggesting the user enter more input or come back later.
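A minimal sketch of the feedback-loop point: each signal is both logged for later retraining and, for negative signals, applied immediately so the user sees an effect (feedback + control). All class and field names here are hypothetical:

```python
from collections import defaultdict

class FeedbackLoop:
    """Illustrative feedback loop: log every signal, act on negative ones now."""

    def __init__(self):
        self.training_queue = []          # examples queued for the next retraining run
        self.blocked = defaultdict(set)   # per-user items hidden immediately

    def record(self, user, item, signal):
        """signal: 'up', 'down', 'hide', or 'flag'."""
        self.training_queue.append((user, item, signal))
        if signal in ("down", "hide", "flag"):
            # Immediate control: don't make the user wait for a retrain
            # to stop seeing something they rejected.
            self.blocked[user].add(item)

    def recommend(self, user, candidates):
        return [c for c in candidates if c not in self.blocked[user]]

loop = FeedbackLoop()
loop.record("ana", "song-42", "hide")
print(loop.recommend("ana", ["song-42", "song-7"]))  # → ['song-7']
```

This mirrors the progression in the example messages above: the signal helps future recommendations (the training queue) while also visibly updating the current ones.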

Artistic AI

Quick, Draw!: for objects that might be familiar to some cultures but not others, notable differences appeared. Chairs, for example, were drawn facing forward or sideways depending on the nation or region of the world.

The technique of “critique by redesign” in some ways works uniquely well.

Design is compromise: the final product reflects a series of mostly hidden goals and constraints.

https://magenta.tensorflow.org/
