OpenAI Launches Safety Fellowship to Tackle AI Alignment Research

Caroline Bishop
Apr 08, 2026 17:45

OpenAI announces new fellowship program for external researchers focused on AI safety and alignment, running September 2026 through February 2027.




OpenAI is opening its doors to outside researchers with a new Safety Fellowship program aimed at advancing independent work on AI alignment and safety challenges. Applications are now open, with a May 3 deadline.

The five-month program runs from September 14, 2026 through February 5, 2027, targeting researchers, engineers, and practitioners who want to tackle safety questions affecting both current and future AI systems. OpenAI has partnered with Constellation to provide workspace in Berkeley, though remote participation is an option.

What OpenAI Wants

The company outlined priority research areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. It is specifically seeking work that's “empirically grounded, technically strong, and relevant to the broader research community.”

Fellows won’t get internal system access—a notable limitation—but will receive API credits, compute support, a monthly stipend, and mentorship from OpenAI staff. The expectation is clear: produce something tangible by program’s end, whether that’s a research paper, benchmark, or dataset.

Who Should Apply

OpenAI is casting a wide net on backgrounds. Computer science is the obvious fit, but applicants from social science, cybersecurity, privacy, and human-computer interaction are also welcome. The company explicitly said it will “prioritize research ability, technical judgment, and execution over specific credentials.”

Letters of reference are required. Successful applicants will be notified by July 25.

The Bigger Picture

This fellowship arrives as AI safety concerns have moved from academic debate to mainstream regulatory discussion. OpenAI has faced criticism over the years for allegedly deprioritizing safety research in favor of capability development—a tension that led to high-profile departures from its safety team.

The program represents an attempt to cultivate external safety research talent while potentially deflecting some of that criticism. Whether it signals a genuine shift in priorities or serves primarily as an optics play remains to be seen.

For researchers interested in AI safety work with access to OpenAI resources and mentorship, applications close May 3 and are submitted through the program’s official page. Questions can be directed to openaifellows@constellation.org.




