The future of AI is at a critical juncture, with growing debate over whether – and to what extent – foundation models should be open. Open source AI models offer significant potential to advance research, innovation, transparency, and equity. They can help democratize AI by enabling broader access and participation in model development and application. However, open models also present risks, and openness alone is not enough. Bad actors may more easily use and exploit them in ways that harm individuals, communities, and society. And whether models are open or closed, key design decisions – such as data sources, training processes, and transparency – remain shaped by the values and priorities of their largely for-profit-driven creators and managers, with limited public input.
To truly democratize AI, broader perspectives and voices must be integrated into the design and development of large open source models – including through multi-stakeholder partnerships and community engagement. Now is the time to ask not just whether models should be open, but how open models can be made responsible – and what openness in AI should mean more broadly – in ways that are democratically informed, transparent, and aligned with the public interest.
WORKSHOP GOALS & APPROACH
This workshop – funded by the NSF under the Responsible Design, Development, and Deployment of Technologies (ReDDDoT) program and organized in partnership with Mozilla – was held in August 2025 and brought together researchers, practitioners, policymakers, and community leaders to explore what responsible openness in AI means and how it can be realized and sustained. The overarching goal was to co-create a shared vision for “responsible” openness in AI – with a focus on open source foundation models – and chart a research roadmap toward that vision.
In addition to this workshop report, we are developing an academic article on the workshop's outputs and the use of participatory design methods in responsible co-creation.
