Former Gods of Safe Passage: Narrative Lessons in AI Safety from the US Auto Industry

A car’s safety device left me permanently blind in one eye. After years spent defining how car companies talk about electric vehicles, I find the parallels between AI safety research and the history of propulsion impossible to ignore.

“The old world fights on, the new struggles to be born, and in this interregnum the most varied morbid phenomena arise.”
-Antonio Gramsci

Some months back, I read an article in Forbes about Anthropic’s foray into the enterprise market, and for whatever reason it brought me back to a time in my life when my eyeball was blown up by a Takata airbag: improperly designed, its defect undetected, and subsequently fired into my face at orbital velocity by a Toyota Camry. Now, years later, the irony isn’t lost on me that my work lives at the intersection of safety and technology in both the electric vehicle and burgeoning AI spaces.

It’s the only way to live (in cars).

In the automotive world, you deal at scale with the tricky puzzle of speaking to people’s passions with silly things like 0-60 times and how many horses such-and-such motor is equivalent to owning, while always living with the sober understanding that driving is simply the deadliest thing most Americans do every day, and has been for a century.

A simple fact: the automobile has killed more Americans than World War II. More people have been killed by Volkswagens than by Tomahawk missiles. Needlessly so! I mean, we could have walked. Also, trains. The safety effort, in scale and complexity, mirrors the adoption curve of the product itself, particularly where it requires infrastructure or public-private cooperation.

That’s all to say: safety has always been the lever. It’s not a fabulous Roman candle. It’s not the thing you tell your friends about. But it is the thing that works. It’s the 5-star crash safety rating, the JD Power award, the IIHS “Top Safety Pick” name-drop that help you close the deal at home with your spouse (where it matters), not at the lot, where you can later pretend you have some semblance of authority over your household’s decision-making.

AI is approaching its permission-structure moment. Enterprises don't adopt transformative technology because it's exciting. They adopt it because someone — a CIO, a general counsel, a board member — can point to a framework and say: this is why it's safe to move.

Power requires safety.

Here's a useful framing: F1 cars have more safety features than sedans, not fewer. The fastest, most powerful systems ever built on four wheels are also the most carefully engineered for safety. The HANS device. The halo. Six-point harnesses. Fire-suppression systems that activate in milliseconds. None of these exist because F1 is cautious. They exist because F1 is fast, and at that speed, the margin for error collapses to zero.

The implication for AI is direct: the more capable the model, the more critical the safety infrastructure. Constitutional AI and Responsible Scaling Policies are what make it possible to deploy Claude in environments where the stakes are real: Defense and government. Automotive and manufacturing. Legal. Financial services. These are industries where "move fast and break things" is a liability. And they represent, collectively, the enterprise market that will define which AI company wins the 2020s.

The smartest folks in the room understand safety as a requirement of power, and always have. Even the midwits allow themselves to respect the frameworks of authority and pedigree that derive from real-world authenticity: testing, certification, standards. Because they understand it’s the chief difference between your manufacturer getting sued into oblivion for blowing off a kid's eye and not.

Enterprise: Engage.

Let's talk about the money, because this is where the narrative gets interesting.

Anthropic hit $9 billion in revenue with 300,000 enterprise customers — up from 1,000 just two years ago. OpenAI is at $20 billion in total revenue with a million enterprise customers, but only $6 billion of that comes from enterprise. Run the math: Anthropic's revenue-per-enterprise-customer dramatically outpaces the competition. The "land and expand" strategy — where a customer typically starts with Claude Code and grows from there — is working precisely because the safety infrastructure gives procurement teams, compliance departments, and C-suites the permission structure they need.
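Back-of-envelope, taking the reported figures at face value (and noting that Anthropic's $9 billion is total revenue, so treat its side as an upper bound):

$9,000,000,000 ÷ 300,000 enterprise customers ≈ $30,000 per customer
$6,000,000,000 ÷ 1,000,000 enterprise customers = $6,000 per customer

Call it a five-to-one gap, before any adjustment for how much of Anthropic's total actually comes from enterprise.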

There’s your JD Power sticker.

Consider the story Forbes told about Uber saving 200 years of development time with Claude. That's the kind of headline that belongs in a showroom — the equivalent of a 0-60 time. It's aspirational. It gets people leaning forward. But the reason Uber could deploy Claude at that scale, in a business that touches millions of people's daily transportation, is the same reason your spouse agreed to the new car: because someone did the work to make it safe.

Here's an interesting wrinkle in the funnel, though. "Land and expand" implies a traditional B2B entry point — enterprise sales, procurement, the whole dance. But what happens when the Praveens of the world just... try Claude themselves? What if the individual user experience is so good that it becomes its own enterprise sales vector, bottom-up? That's market saturation from a direction your competitors aren't fortified against, because their product didn't earn the trust to survive the journey from personal use to enterprise deployment.

Remember Deep Blue? Pepperidge Farm remembers.

IBM took a machine built at the cutting edge of very serious artificial intelligence research — research that laid the foundations for where we are today — and turned it into a cultural moment. Kasparov versus the machine. It was brand strategy and science, simultaneously, and neither diminished the other.

The lesson isn't that safety-first companies can't do bold, ambitious, conversation-starting brand work. The lesson is the opposite: safety is what gives you permission to be bold. You can do something big and aspirational without being breathy and inauthentic or mind-numbingly phlegmatic precisely because you walk the walk. The credibility is load-bearing. It holds the weight of the ambition above it.

This is the part where most companies get it wrong. They treat safety messaging as a constraint on creativity: the fine print, the footnote, the thing legal makes you add. But what if safety is the creative platform, the way formal constraint is in classical poetry? What if the most compelling narrative in enterprise AI isn't "we're the fastest" or "we're the smartest" but "we're the ones who did the hard work so you don't have to be afraid"?

Volvo didn't become Volvo by being boring. Volvo became Volvo by making safety the entire brand, and then building cars interesting enough to justify the positioning. They invented the three-point seatbelt and then gave the patent away — to every other car manufacturer on the planet — because they understood that the narrative value of that decision would compound for decades. It has.

Constraints give rise to creativity.

Constraints have always given rise to the next generation of product and market competition. This is inextricable. The sonnet didn't survive five centuries because poets loved counting to fourteen — it survived because the constraint made you say something you couldn't have said without it. Emission regulations didn't kill the American automobile. They killed the American automobile that wasn't worth saving, and created the market conditions for everything that came after. Every subsequent successful entry into the automotive market — from the catalytic converter era through CAFE standards through the EV transition — was a direct result of expanded regulatory frameworks forcing the question: now what?

Product innovation and the evolution of consumer taste, decade over decade, have been largely defined by the opportunities and constraints of policy. The automakers who understood this built the next era. The ones who didn't lobbied against seatbelt mandates until the math caught up with them. In fact, the relationship between automakers and the United States government has been definitional since World War II: American jobs, wartime manufacturing capacity, the interstate highway system. All of it necessitated an extraordinarily close public-private partnership that shaped everything from what you could buy to what it was allowed to emit.

The meta constraints also define the narrative opportunities. This is the part that matters for AI. Anthropic is a public benefit corporation — a corporate architecture that obligates the company to balance commercial success with public benefit. Not a normal technology, therefore not a normal company. Not a normal company, not a normal narrative. 

The seven researchers who left OpenAI to found Anthropic did so because they believed the governance of the technology was moving too slowly, and that the absence of structural constraint would produce exactly the kind of failures structural constraint was designed to prevent. It means that when Anthropic publishes its Responsible Scaling Policy, or when Dario Amodei testifies before Congress about capabilities the company's own models have demonstrated, the credibility registers as authentic with the American public. It holds because the corporate structure holds, and that is the salient difference between a company that markets safety and a company that embodies safety.

Here's where it gets interesting from a product standpoint. Anthropic's interpretability research — the mechanistic work happening at the frontier of understanding why these models do what they do — is essentially the MRI scan for the AI brain. It's the diagnostic layer. For decades, we knew cars were dangerous, but we couldn't measure the danger with any precision. Crash test dummies, NHTSA star ratings, IIHS side-impact protocols — these were the diagnostic tools that made automotive safety legible, testable, and ultimately marketable. 

With diagnostic tools, safety moved from a brand promise to a hard specification. Interpretability is doing the same thing for AI: turning "we're trying to make it safe" into "here's exactly what's happening inside the model, here's where the risks are, and here's what we're doing about them."

Safety is a marquee feature for critical technologies in industries like automotive and aerospace. Why should AI be different? It shouldn't. And framing it that way repositions every competitor who treats safety as an afterthought — or worse, as a marketing strategy unmoored from corporate structure — as the company that skipped the crash test.

What happens when the most commercially successful narrative contradicts the safety message? This is the real one. Because it will happen. Some quarter, some product cycle, the thing that would sell the most will be the thing that cuts corners. The discipline to hold the line in that moment is the difference between a brand and a quarter.

"Why should I pay for safety when I could move faster with someone else?" The speed you think you're getting somewhere else has a hidden cost, and it comes due when something breaks in production — in front of Congress, in front of a judge, in front of the public. Ask Boeing how that math works out.

Public-private frameworks are existential.

The automobile didn't become safe in a vacuum. It became safe because the federal government created NHTSA, because Ralph Nader wrote Unsafe at Any Speed and made safety a political issue, because the Insurance Institute started crash-testing cars on camera and publishing the results for consumers. The infrastructure of automotive safety is a public-private apparatus — manufacturers building to standards, regulators setting and enforcing those standards, and independent bodies verifying compliance. No single actor could have done it alone.

AI is approaching this same inflection point, and the companies that engage the regulatory landscape proactively will define the terms; the ones that don't will have terms imposed on them. Zoë Hitzig's recent piece in the New York Times is instructive here: she documents a pattern among tech companies that built early brand equity on trust and safety promises, only to erode those commitments under commercial pressure. The examples are familiar: governance structures quietly dismantled, safety teams downsized or sidelined, user data policies rewritten to serve new business models. The betrayal isn't dramatic. It's incremental. And by the time the public notices, the brand damage is already structural.

What’s equally illuminating in Hitzig’s piece is how thoroughly each structure she proposes for making the technology non-exclusionary (with the exception of cross-subsidizing purchaser segments) is shaped by public policy and governmental regulation. The precedents she cites for effective advertising governance come from German co-determination law. Putting users’ data under the protection of an independent trust or cooperative likewise requires a legal duty to act in users’ interests.

Anthropic's position here is genuinely unusual. The Responsible Scaling Policy is a commitment, a set of public thresholds that the company has bound itself to before the government required it. That's the Volvo move again: establish the standard, then let the standard do the selling. When regulation inevitably arrives — and in AI, as in automotive, it always arrives — the company that already operates above the regulatory floor has a moat. Everyone else has a compliance problem.

The deeper strategic point is this: public-private partnership in AI safety isn't a concession to government overreach. It's a market-making move. The companies that help build the regulatory frameworks will have structural advantages within those frameworks, the same way automakers who shaped FMVSS standards in the 1960s and 70s built decades of competitive advantage into the architecture of the rules themselves.

The worst automotive recall in history…

Between 2013 and 2023, Takata Corporation's defective airbag inflators triggered the largest and most complex automotive recall in history — affecting roughly 67 million inflators across 42 million vehicles in the United States alone, spanning virtually every major automaker. The defect was a chemical one: ammonium nitrate propellant that degraded over time with exposure to heat and humidity, causing the metal inflator housing to rupture on deployment and spray shrapnel into the cabin. At least 27 deaths and more than 400 injuries have been attributed to the defect in the U.S.

The thing about the Takata recall that most people miss is that the failure wasn't just engineering — it was narrative. Takata knew about the defect for years before the recalls began. Internal testing showed the problem. Engineers raised flags. And the company made a decision, quarter by quarter, to prioritize production continuity over disclosure. 

Mark Lillie, who gave testimony against Takata in court, told Bloomberg:

"At the meeting, I literally said that if we go forward with this, somebody will be killed," he adds in an interview, echoing his testimony. After the design review, Lillie says he met separately with the engineer who served as the liaison with Takata headquarters in Tokyo. "What I gathered from the conversation was, 'Yes, I'll pass on your concerns, but don't expect it to do any good, because the decision has already been made.' " The head of ASL was Paresh Khandhadia, who had a master's in chemical engineering and "was a very smooth operator," Lillie says. "Tokyo put a tremendous amount of stock in his credentials." 

They chose the short-term commercial narrative — we're reliable, we're cost-effective, we're the world's second-largest airbag manufacturer — over the safety narrative that was, quite literally, their entire reason for existing. A safety company that abandoned safety to protect revenue. Callousness that cost me a significant chunk of my vision.

This is the cautionary tale that should haunt every AI company building at the frontier. Not because the technical modes of failure are identical (they aren't) but because the organizational failure point is universal. There will always be a quarter where the safest decision is not the most profitable one. There will always be internal pressure to ship faster, disclose less, treat the safety protocol as a speed bump rather than a load-bearing wall. The question is whether the company's culture, governance, and public commitments are strong enough to hold.

I found Anthropic through a job listing. Not the Head of GTM Narrative — that came later. It was a Creative Director role, one of the rare creative openings at a company that, from the outside, seemed to speak almost exclusively in the language of research papers and policy frameworks. But something about it stuck. Here was a company founded by people who had walked away from the commercial leader in their field specifically because they believed safety couldn't be subordinated to growth. That's an origin story with teeth — the kind of founding narrative that either disciplines everything downstream or gets quietly abandoned when the numbers get big enough. Every indication suggests Anthropic has chosen discipline. And that discipline is precisely what makes the creative opportunity so interesting.

Suffice to say, I don’t know what a ‘Battle Card’ is. I suspect it’s a one-pager for sales enablement but I refuse to look it up. Not because I don’t value curiosity above all, but rather because some things should remain a mystery amid the embarrassment of knowledge on display these days. And, because I’ve hit my usage limit writing this.

Lex Carpenter is Senior Lead Writer at Rivian, an electric vehicle manufacturer, and Principal Creative at Hudson Intelligence Labs, a skunkworks research consortium located in New York’s Hudson Valley.

Links:

https://www.forbes.com/sites/richardnieva/2025/11/28/anthropic-enterprise-claude/

https://www.nytimes.com/2026/02/11/opinion/openai-ads-chatgpt.html

https://shumer.dev/something-big-is-happening

https://www.bloomberg.com/news/features/2016-06-02/sixty-million-car-bombs-inside-takata-s-air-bag-crisis

https://www.darioamodei.com/essay/the-adolescence-of-technology