OpenAI chief scientist Jakub Pachocki says the company is nearing models that can work indefinitely and coherently, much as human researchers do. "I think we are getting close to a point where we'll have models capable of working indefinitely in a coherent way just like people do," Pachocki said.
The transition marks a move from basic code-assistance tools to autonomous AI systems that run experiments without continuous human oversight. Pachocki projects a future where "you kind of have a whole research lab in a data center," signaling OpenAI's ambition to automate knowledge work at scale.
The shift stems from improvements in base model capability rather than specialized architecture. Pachocki noted that general gains in overall capability let models work longer without assistance, in contrast to earlier approaches that relied on narrow, task-specific tools.
The development raises deployment questions as autonomous systems gain extended runtime. Pachocki acknowledged that very powerful models should operate in sandboxes, isolated from systems they could break or exploit. "I think this is a big challenge for governments to figure out," he said of the regulatory framework such systems will require.
The timing coincides with broader enterprise AI adoption. Companies are deploying specialized systems ranging from military intelligence platforms to retail pricing algorithms, creating infrastructure for autonomous operations beyond controlled research environments.
OpenAI's focus on extended autonomous runtime addresses a key limitation of current AI deployment: the need for frequent human intervention to keep models on task. Models that can operate for hours or days without guidance could transform enterprise workflows that currently require human oversight at regular intervals.
The sandbox approach Pachocki described suggests OpenAI recognizes the risks of deploying powerful autonomous systems with unrestricted access to external resources. The containment strategy mirrors practices in cybersecurity and malware analysis where untrusted code runs in isolated environments.
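That containment pattern is straightforward to sketch. The example below is a minimal illustration of the isolation idea, not anything OpenAI has described: it runs untrusted, model-generated code inside a network-isolated, resource-capped Docker container. The `run_untrusted` helper, the base image, and the specific limits are all illustrative assumptions.

```python
import subprocess

def run_untrusted(code: str, timeout_s: int = 30) -> subprocess.CompletedProcess:
    """Execute model-generated code inside a throwaway container.

    The isolation mirrors malware-analysis practice: no network, a
    read-only root filesystem, and hard CPU/memory/process caps, so
    the code cannot reach external systems or exhaust the host.
    """
    return subprocess.run(
        [
            "docker", "run", "--rm",
            "--network=none",    # no outbound connections
            "--read-only",       # immutable root filesystem
            "--memory=256m",     # hard memory cap
            "--cpus=1",          # single CPU share
            "--pids-limit=64",   # cap process count (no fork bombs)
            "python:3.12-slim",  # illustrative base image
            "python", "-c", code,
        ],
        capture_output=True,
        text=True,
        timeout=timeout_s,       # kill runaway executions
    )

result = run_untrusted("print(sum(range(10)))")
print(result.stdout)  # -> 45
```

Cutting off the network and capping resources is what makes the sandbox meaningful: even if the generated code misbehaves, it has nothing to reach and nothing to exhaust.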
Sources:
1. MIT Technology Review, March 20, 2026

