In early April, the artificial intelligence (AI) firm OpenAI announced that it had acquired the technology and business talk show “TBPN.” I had never heard of “TBPN,” but the trade magazine Variety reported that its daily episodes average about 70,000 viewers across all platforms. Importantly, those viewers are highly concentrated in Silicon Valley. In its memo announcing the acquisition, OpenAI implausibly swore that the show would retain its editorial independence. Yet, one quote in the company’s announcement overshadowed all of the corporate-speak.
Fidji Simo, the company’s chief executive of artificial general intelligence (AGI) deployment, said that she had been thinking about the future of how OpenAI communicates. “[O]ne thing that’s become clear is that the standard communications playbook just doesn’t apply to us. We’re not a typical company.”
The rules don’t apply. We’re not a typical company.
Those words serve as a decent thesis for Karen Hao’s intriguing and deeply reported book Empire of AI. A former reporter for the Wall Street Journal and editor of the MIT Technology Review, Hao leverages her wealth of sources and background knowledge to turn a critical lens on a company that wants to be seen as exceptional almost as much as it wants to sidestep scrutiny.
The book, which is subtitled Drama and Nightmares in Sam Altman’s OpenAI, begins with a fast-paced timeline of Altman’s short-lived firing. He was ousted by OpenAI’s board in November 2023 for a failure to be “consistently candid in his communications.” Translation: the board no longer trusted Altman to be transparent with them.
If that episode does not ring a bell, there’s a good reason for that. Altman’s defenestration lasted all of five days. Back then, Altman was still a Silicon Valley golden boy, having dazzled Congress with dubious pleas for regulation just six months prior. (Altman apparently disagrees with Peter Thiel about regulation.) OpenAI employees revolted against Altman’s firing, and Microsoft, OpenAI’s most important investor, not only backed Altman but even offered him a job. Faced with pressure from all sides, OpenAI’s board reversed course and rehired Altman.
OpenAI’s decision to take back its chief executive less than a week after publicly announcing it had lost confidence in him is remarkable, but Hao shows it should not have been particularly surprising.
Hao recounts in detail how the company began its life intent on being an altruistic nonprofit research lab only to eventually yield to a for-profit structure in late 2025. Technically, the company is now a for-profit public benefit corporation controlled by the nonprofit OpenAI Foundation. The company’s early investors, which included Thiel and Elon Musk, initially promoted OpenAI as an ethical enterprise that would develop beneficial AI and thus head off the terrors of less-noble profit-focused AI firms. In 2026, OpenAI’s stated mission to “ensure that artificial general intelligence benefits all of humanity” rings about as hollow as Google’s “don’t be evil” pledge.
The book, though, is not just about Altman, or even OpenAI. Hao reveals how the artificial intelligence industry uses—and misuses—very real people and natural resources in order to achieve economic hegemony. Throughout the book, she draws sharp parallels between AI’s increasing command of the global economy and extractive, colonial-style capitalism. She recounts the stories of so-called “data-annotation” workers, who are tasked with labeling, transcribing, and categorizing images and other data to help train AI models to better flag offensive material. These workers tend to be low-wage contractors, often living in developing countries. One surprising hotspot for such work was Venezuela, where the country’s plunging economic fortunes in the late 2010s forced previously well-paid workers to settle for low-wage internet gigs. When tech companies began demanding English speakers, much of that work was transferred to Kenya. Other developing countries became targets for massive data centers that were constructed with little public input and little regard for the environmental impacts.
That Altman and OpenAI would think of themselves as altruistic despite their growing footprint of economic destruction underscores the theme of Hao’s book. Altman and his ilk want desperately to be seen as exceptional. Yes, there may be negative “externalities,” but such inconveniences are a small price for the world to pay in exchange for benefitting from the genius of such luminaries as Altman and Musk. After all, OpenAI is “not a typical company.”
As Hao explains in the early pages of her book, generative AI seems to serve as “a fantastical, all-purpose excuse for OpenAI to continue pushing for ever more wealth and power.”
Similarly, Altman himself seems dead-set on cementing his status as an intellectual philanthropist, one who, rather than making a name for himself by donating money, instead lavishes his brilliance upon the world. He’s too important to be fired. Like the big banks that triggered the 2008 financial crisis, he’s too big to fail.
Yet, as Hao explains, things do not need to be this way. Rather than concentrating power in the hands of a few self-important tech leaders, she argues that power must be distributed. AI solutions need to be built for specific purposes, not simply built because it’s possible to build them. Yes, AI can help improve human health, the environment, and economic justice, but those goals will also require social cohesion and global cooperation, which she notes are “some of the very things being challenged by the existing version of AI development.”
Hao’s book exposes the technology industry’s thinly veiled confidence game. To improve humanity, Silicon Valley says we need AI, and therefore nothing should stand in the way of AI (even humanity). Yet, such a shell game cannot stand up to even the most basic scrutiny. Silicon Valley itself is littered with companies that were dominant first-movers and then were superseded by newer, more nimble competitors. (If you don’t believe me, please share this review on your MySpace page.) Perhaps the reason the Sam Altmans of the world are so dead-set on promoting themselves as singular visionaries is because they know precisely the opposite is true. Perhaps they realize they are just as replaceable as all those data-annotators in Venezuela.
I’m sure Altman wouldn’t agree with me, but then again, America’s self-declared elites haven’t exactly excelled at self-awareness.
When Altman returned to OpenAI following his short-lived firing, he did so under an agreement that the three board members who had initiated his ouster would leave. As Hao recounts, Altman needed to fill the seats with big-name, well-respected leaders: the kind of people who would bolster OpenAI’s image and retain investors’ confidence in the company. Thus, one of the three available board seats was handed to former Treasury Secretary and Harvard President Lawrence Summers.
Summers, you may know, announced his resignation from teaching at Harvard earlier this spring after the extent of his interactions with Jeffrey Epstein became public knowledge. Last fall, he gave up his OpenAI board seat.
Among the first to report on Summers’ connections to Epstein was the Wall Street Journal, which published a multi-part series on Epstein’s relationships with big-name political and business leaders, including Summers. The series was published in May 2023—six months before OpenAI saw fit to add Summers to its board.
It looks bad. Until you remember that the standard rules just don’t apply to OpenAI. Alas, they’re “not a typical company.”
