In his polarizing “Techno-Optimist Manifesto” last year, venture capitalist Marc Andreessen listed a number of enemies of technological progress. Among them were “tech ethics” and “trust and safety,” a term used for work on online content moderation, which he said had been used to subject humanity to “a mass demoralization campaign” against new technologies such as artificial intelligence.
Andreessen’s declaration drew both public and quiet criticism from people working in those fields—including at Meta, where Andreessen is a board member. Critics saw his screed as misrepresenting their work to keep internet services safer.
On Wednesday, Andreessen offered some clarification: When it comes to his 9-year-old son’s online life, he’s in favor of guardrails. “I want him to be able to sign up for internet services, and I want him to have like a Disneyland experience,” the investor said in an onstage conversation at a conference for Stanford University’s Human-Centered AI research institute. “I love the internet free-for-all. Someday, he’s also going to love the internet free-for-all, but I want him to have walled gardens.”
Contrary to how his manifesto may have read, Andreessen went on to say he welcomes tech companies—and by extension their trust and safety teams—setting and enforcing rules for the type of content allowed on their services.
“There’s a lot of latitude company by company to be able to decide this,” he said. “Disney imposes different behavioral codes in Disneyland than what happens in the streets of Orlando.” Andreessen alluded to the fact that tech companies can face government penalties for allowing child sexual abuse imagery and certain other types of content, meaning they can’t do away with trust and safety teams altogether.
So what kind of content moderation does Andreessen consider an enemy of progress? He explained that he fears two or three companies dominating cyberspace and becoming “conjoined” with the government in a way that makes certain restrictions universal, causing what he called “potent societal consequences” without specifying what those might be. “If you end up in an environment where there is pervasive censorship, pervasive controls, then you have a real problem,” Andreessen said.
The solution as he described it is ensuring competition in the tech industry and a diversity of approaches to content moderation, with some having greater restrictions on speech and actions than others. “What happens on these platforms really matters,” he said. “What happens in these systems really matters. What happens in these companies really matters.”
Andreessen didn’t bring up X, the social platform run by Elon Musk and formerly known as Twitter, in which his firm Andreessen Horowitz invested when the Tesla CEO took over in late 2022. Musk soon laid off much of the company’s trust and safety staff, shut down Twitter’s AI ethics team, relaxed content rules, and reinstated users who had previously been permanently banned.
Those changes, paired with Andreessen’s investment and manifesto, fostered a perception that the investor wanted few limits on free expression. His clarifying comments were part of a conversation with Fei-Fei Li, codirector of Stanford’s HAI, titled “Removing Impediments to a Robust AI Innovative Ecosystem.”
During the session, Andreessen also repeated arguments he has made over the past year that slowing down development of AI through regulations or other measures recommended by some AI safety advocates would repeat what he sees as the mistaken US retrenchment from investment in nuclear energy several decades ago.
Nuclear power would be a “silver bullet” for many of today’s concerns about carbon emissions from other electricity sources, Andreessen said. Instead, the US pulled back, and climate change hasn’t been contained the way it could have been. “It’s an overwhelmingly negative, risk-aversion frame,” he said. “The presumption in the discussion is, if there are potential harms therefore there should be regulations, controls, limitations, pauses, stops, freezes.”
For similar reasons, Andreessen said, he wants to see greater government investment in AI infrastructure and research, and freer rein for AI experimentation by, for instance, not restricting open-source AI models in the name of security. If he wants his son to have the Disneyland experience of AI, some rules, whether from governments or trust and safety teams, may be necessary too.