Toward the promise of open source AI: Co-creating a vision for responsibility & research roadmap

The future of AI is at a critical juncture, with growing debate over whether – and to what extent – foundation models should be open. Open source AI models offer significant potential to advance research, innovation, transparency, and equity. They can help democratize AI by enabling broader access and participation in model development and application. However, open models also present risks, and openness alone is not enough. Bad actors may more easily use and exploit them in ways that harm individuals, communities, and society. And whether models are open or closed, key design decisions – such as data sources, training processes, and transparency – remain shaped by the values and priorities of their largely for-profit creators and managers, with limited public input.

To truly democratize AI, broader perspectives and voices must be integrated into the design and development of large open source models – including through multi-stakeholder partnerships and community engagement. Now is the time to ask not just whether models should be open, but how open models can be made responsible – and what openness in AI should mean more broadly – in ways that are democratically informed, transparent, and aligned with the public interest.

WORKSHOP GOALS & APPROACH

This workshop – funded by the NSF under the Responsible Design, Development & Deployment of Technology (ReDDDoT) program and in partnership with Mozilla – brings together researchers, practitioners, policymakers, and community leaders to explore what responsible openness in AI means and how it can be realized and sustained. The overarching goal is to co-create a shared vision for “responsible” openness in AI – with a focus on open source foundation models – and chart a research roadmap toward that vision.

To achieve this, the workshop will include: 

  • Academic presentations on the current state of open source foundation models and their implications across key societal impact areas, as well as tensions with safety and national security.

  • A panel discussion on leveraging open foundation models for public good, enhancing AI accessibility, and the intersections between openness, public systems, and democratization. 

  • Participatory design sessions to (a) envision a future where AI contributes meaningfully to society and co-create visions for “responsible” open source foundation models, including technical and socio-technical requirements; (b) reflect more broadly on openness in AI, exploring openness beyond models to include values, participation, and accountability; and (c) identify what is needed to sustain an open source (public) AI ecosystem. 

  • A roadmapping session to define research priorities, surface open questions, and foster new research collaborations rooted in the co-created vision. This research roadmap will be provided to the NSF. Rather than prescribing a single path forward, the workshop will surface diverse perspectives – recognizing that responsible openness may take different forms.

The workshop will result in three outputs: (1) a report summarizing the co-created vision for “responsible” open source foundation models and a roadmap to guide future research and work; (2) an accessible online report summarizing the vision and roadmap for a broader audience; and (3) an academic article on the outputs of the workshop and the use of participatory design methods in responsible co-creation.

The workshop will take place in August 2025. Interested in nominating yourself or someone else? Please fill out this form.