AI Content Systems

Framework for disclosing AI to Zendesk users

Generative AI features carry risk: they can mislead users or produce inaccurate information. Zendesk needed a framework for communicating this risk clearly and consistently — to comply with new laws like the EU AI Act, protect our customers, and reduce risk for our business.

  • Product designers lacked clear guidance on when and where they were legally required to disclose generative AI technology to users
  • Disclosures were implemented inconsistently, often forced into UX flows only after legal review at the end of the design process

Develop an easy-to-use resource to help product designers quickly determine where, when, and how to disclose generative AI technology to users.

  • Content designer for systems
  • Legal counsel for Product
  • Product designers working on AI features
  1. Legal alignment. Led discussions with Legal to understand requirements for different user types, the evolving AI regulatory landscape, and the nuances of risk assessments
  2. Product audit. Audited the product experience for existing AI disclosures
  3. Concept definition. Defined important concepts, including "generative AI", "disclosure", "high risk", "low risk", "proactive output", and "reactive output"
  4. Use case identification. Identified all generative AI use cases that may require disclosure
  5. Use case mapping. Mapped all use cases to risk levels and disclosure requirements
  6. Recommendations. Developed specific disclosure recommendations for each use case
  7. Guide draft. Drafted the guide, covering end-user experience and agent/admin experience separately
  8. Designer feedback. Sought feedback from product designers working on relevant AI projects
  9. Revision. Revised the guide based on feedback
AI disclosure guide published to Confluence

The guide, published to Confluence, covers definitions, legal requirements, and worked examples for both end-user and agent/admin experiences.

End-user disclosure decision tree

End-user experience: a decision tree helping designers determine when and how to disclose AI to end users, who are considered higher risk because they may be less familiar with the technology.

Agent and admin disclosure decision tree

Agent and admin experience: a separate decision tree for support agents and admins, who are considered lower risk because they interact with Zendesk software as part of their job.
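The two decision trees above can be pictured as a small risk-based lookup. This is a purely illustrative sketch: the audience labels, output modes, and disclosure outcomes below are assumptions for demonstration, not the published guide's actual rules.

```python
# Illustrative sketch only: a simplified model of the two disclosure
# decision trees. Audience labels, output modes, and outcomes are
# hypothetical, not the guide's real logic.

def disclosure_guidance(audience: str, output_mode: str) -> str:
    """Suggest a disclosure approach for a generative AI feature.

    audience:    "end_user" (higher risk) or "agent_admin" (lower risk)
    output_mode: "proactive" (the AI initiates output) or "reactive"
                 (the AI responds to a user action)
    """
    if audience == "end_user":
        # End users may be less familiar with AI, so disclose prominently.
        if output_mode == "proactive":
            return "prominent in-context disclosure before first AI output"
        return "in-context disclosure alongside the AI output"
    if audience == "agent_admin":
        # Agents and admins interact with the software as part of their
        # job, so a lighter-weight disclosure may suffice.
        return "persistent AI indicator (e.g. a badge) on the feature"
    raise ValueError(f"unknown audience: {audience}")
```

A resource like this works because designers only need to answer two questions (who sees the output, and how it is triggered) to land on a consistent recommendation.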

  • Achieved greater consistency in how generative AI is disclosed throughout the product
  • Legal teams now reference the guide when discussing AI features with Product
  • Received positive feedback from product designers using the guide
  • Generated interest in adapting the guide into a training module to increase enablement across the organization