AI Transparency

The goal of this project is to assess commonalities and differences among transparency approaches for generative AI, identify transparency needs among key stakeholders through participatory design methods, and inform emerging standards for business decision making in companies building AI applications into products and services.

This project builds on our research into the responsible use of generative AI in organizations, which found that a key barrier to trust among managers is uncertainty about what is in different AI models, a confusion amplified by the variety of transparency approaches in use. The project explores existing concerns and challenges in order to foster the transparency and explainability needed to assess AI trust and safety, with implications for strengthening trust in AI technologies and the products they inform.

The project will result in an academic paper, a playbook on generative AI transparency, and an accompanying policy brief. It will inform recommendations for aligning transparency reporting practices and standards across generative AI models and the tools built on them.