Friday, May 8, 2026
AI Ethics Researchers Challenge Big Tech's 'One Model for Everything' Approach

AI ethics leaders Timnit Gebru and Abeba Birhane are criticizing Big Tech's scaling paradigm, arguing it threatens smaller AI organizations in the Global South while masking safety risks. Their research reveals how Meta's 200-language model announcement led investors to pressure African NLP startups to shut down, while OpenAI representatives allegedly threatened small language AI companies with obsolescence.

AI ethics researchers Timnit Gebru and Abeba Birhane are mounting a critique of the dominant 'one giant model for everything' approach to AI development, arguing it harms smaller organizations and lacks empirical safety evidence.

Meta's No Language Left Behind model announcement covering 200 languages, including 55 African languages, prompted investors to tell small African NLP startups to close operations. "Facebook has solved it, so your little puny startup is not going to be able to do anything," investors told founders, according to Gebru.

OpenAI representatives allegedly threatened small language AI organizations, claiming the company would make them obsolete while offering minimal compensation for their data. "OpenAI is going to put you out of business soon because we're going to make our models better in your language," representatives told startups, according to the researchers.

This pattern threatens AI development in the Global South, where smaller organizations work on language-specific models with local expertise. When Big Tech announces broad multilingual models, funding for these specialized efforts evaporates despite questions about the larger models' actual performance.

Gebru challenges the fundamental premise of the scaling paradigm: "People came along and decided that they want to build a machine god and then claimed that they are doing it. And then they end up stealing data, killing the environment, exploiting labor in that process."

Birhane criticizes the 'AI for good' framing as a PR strategy that deflects grassroots resistance. "'AI for good' allows companies to say, 'Look, we're doing something good! Everything about AI is not bad. And you can't criticize us,'" she states.

The researchers also highlight safety concerns in critical applications. Medical transcription systems built on large language models produce hallucinations that could affect patient care, yet empirical evidence for claimed AI benefits in healthcare and other domains remains thin.

Their analysis, published by the AI Now Institute, questions whether the resource-intensive approach of building massive general-purpose models serves actual needs or primarily benefits Big Tech's market dominance.