AI ethics researchers are mounting a direct challenge to Big Tech's "one giant model for everything" development strategy, calling it environmentally destructive and a threat to innovation.
Timnit Gebru, a former Google AI ethics researcher, says the dominant paradigm involves "stealing data, killing the environment, exploiting labor." Her criticism targets companies racing to build universal models that claim to handle every task and every language.
Meta's 2022 "No Language Left Behind" model, which covers 200 languages including 55 African languages, triggered immediate market effects. Gebru reports that investors told small African-language NLP startups to "close up shop," reasoning that "Facebook has solved it, so your little puny startup is not going to be able to do anything."
OpenAI representatives allegedly use similar tactics. "They basically threaten them by saying, 'OpenAI is going to put you out of business soon because we're going to make our models better in your language,'" Gebru claims, adding that companies offer "peanuts" for data from smaller organizations.
Abeba Birhane, a cognitive scientist, argues that "AI for good" messaging serves as deflection. "It allows companies to say 'Look, we're doing something good! Everything about AI is not bad. And you can't criticize us,'" she told the AI Now Institute.
The critique gains relevance as alternatives emerge. DeepSeek V4's recent release demonstrated that resource-constrained development can produce competitive results, challenging assumptions that massive compute requirements are inevitable.
Gebru and Birhane advocate for task-specific, resource-efficient models over universal systems. Their argument: specialized models for particular languages or applications can perform better while consuming fewer resources and supporting diverse development ecosystems.
The debate highlights tensions between centralized scaling strategies and distributed innovation. While Big Tech invests billions in larger models—Nvidia just committed $4B to photonics infrastructure—critics question whether environmental costs and market consolidation justify marginal capability gains.
For smaller AI organizations, particularly those working on low-resource languages, the stakes are existential. Each Big Tech model announcement can trigger investor withdrawals, even when universal models underperform specialized alternatives on specific tasks.

