Big Tech model releases are systematically crushing small-language AI organizations, groups building models for underserved languages, through investor pressure and direct threats, according to research from AI ethics leaders Timnit Gebru and Abeba Birhane.
Investors told small-language AI startups to "close up shop" immediately after OpenAI or Meta announced models covering their target languages, Gebru reports in new research from the AI Now Institute. Meta's No Language Left Behind model, which claims coverage of 200 languages including 55 African languages, triggered investor withdrawals from African-language NLP startups. "Facebook has solved it, so your little puny startup is not going to be able to do anything," investors told the organizations.
OpenAI representatives directly threatened small-language organizations, claiming OpenAI would make them obsolete while offering "peanuts" for their data, according to Gebru's documentation of conversations between Big Tech representatives and the smaller organizations.
The researchers argue this pattern reveals fundamental problems with general-purpose AI models. Gebru characterizes the dominant paradigm as "stealing data, killing the environment, exploiting labor" while claiming to build a "machine god." The resource intensity of general-purpose models contrasts sharply with the task-specific approaches that small organizations were developing for underserved languages.
Birhane's research challenges the "AI for good" narrative that companies deploy to deflect criticism. "It's a way to paint a positive image of AI technologies, especially in light of backlash like the resist or refuse AI grassroots movement," Birhane states. The framing allows companies to point to purported social benefits while avoiding accountability for harmful practices.
The researchers advocate for resource-efficient, task-specific AI development over general-purpose models that concentrate power in Big Tech. They call for evidence-based regulation rather than relying on corporate promises of beneficial AI applications.
The critique emerges as regulators worldwide debate AI governance frameworks. The researchers argue policy should address the competitive dynamics crushing smaller organizations and the environmental and labor costs of scaling general-purpose models, rather than accepting industry narratives about inevitable technological progress.

