longcut.ink · Issue 001 · The Architect

Should one person be trusted with the most powerful technology in human history?

Answer now. Before the evidence. We’ll ask again at the end.

Yes
23%
No
77%
01
Cold open · San Francisco · November 17, 2023

The Architect

He said he was building AI to save humanity. He was also building an empire. These are not incompatible — unless you believe the things he said.

Sam Altman · Four acts · ~28 min read
The Ilya Memos — first item on the list
Lying.
We'll return to what was in them. Keep reading.
I
Act I
The Boy Who Took Computers Apart

He was eight years old when he got his first computer — a Macintosh LC II — and immediately took it apart. Not to break it. To understand it. His mother, Connie, a dermatologist, watched him reassemble the machine and plug it back in. It worked. She filed the image away.

Sam Altman grew up in St. Louis, the eldest of four children, in a household where intellectual ambition was not just permitted but expected. He was quiet, precocious, and socially difficult in the way that very smart children often are — a few grades ahead, a few social registers off. He came out as gay at sixteen, which in suburban Missouri in the early two-thousands required a particular kind of courage, or a particular kind of indifference to other people's opinions. Possibly both.

He enrolled at Stanford to study computer science, lasted two years, and dropped out to found Loopt, a location-sharing startup that was, by most accounts, ahead of its time. He was nineteen. The company raised $30 million, built real technology, and then watched as Facebook and Google built the same thing with a hundred times the resources and distribution. Loopt was sold to Green Dot Corporation in 2012 for $43.4 million. It was not a failure. It was not a success. It was a lesson.

He is one of the most gifted persuaders I have ever encountered. He makes you feel like you are the only person in the room, and that the thing you are both working toward is the most important thing in the world.

— Ron Conway, early investor

What Loopt taught him, Altman would later say, was that being right too early is indistinguishable from being wrong. The lesson was not about the technology. It was about timing, capital, and narrative. You could have the correct idea and still lose, if you couldn't make others believe the moment had arrived.

He joined Y Combinator as a part-time partner in 2011, became president in 2014 at twenty-eight, and over the next five years transformed the organization from a seed accelerator into something closer to a power center for Silicon Valley. He backed Airbnb, Dropbox, Stripe, Reddit. He understood, intuitively, that the real product of a venture fund is not the companies it backs but the network it builds — and that the network's value compounds faster than any single investment.

The promise · 2014
'Y Combinator's goal is to get you to a point where you can raise money on better terms. We're here to make you successful, not to extract value from you.'
Sam Altman · YC Partner announcement · 2014
The reality · 2016
Altman restructured YC's equity terms: the organization kept its 7% stake in each company but added pro-rata rights in all future rounds — a change that significantly increased YC's long-term upside in its best companies.
Term sheet analysis · Business Insider · 2016

He was not the first person to understand that the AI moment was arriving. But he was among the first to understand that the person who framed the moment — who named the danger and named themselves as the solution — would have extraordinary power over what happened next.

The mental model — take this with you
The First Mover Frame
Whoever defines the terms of a new technological era controls the moral language that follows. The person who first says 'this is dangerous, and here is how we must handle it' sets the boundary conditions for all subsequent debate — including debates about their own conduct.
Watch for the moment when a powerful person names a threat. Ask who benefits most from the framing.
II
Act II
The Machine He Built

OpenAI was founded in December 2015 with a peculiar promise: it would build artificial general intelligence for the benefit of humanity, and it would not be owned by anyone. It was a nonprofit. Its founding letter read like a manifesto. Elon Musk, Greg Brockman, Ilya Sutskever, and others signed it. Sam Altman signed it. The machine, they declared, would belong to the world.

By 2019, Altman had engineered a fundamental transformation of that structure. OpenAI became a "capped profit" entity — investors could receive returns, but only up to one hundred times their investment. The cap sounded responsible. What it obscured was that one hundred times a large investment is a very large number. Microsoft invested $1 billion. One hundred times that is $100 billion.

Altman, notably, took no equity in OpenAI. He said this was because he didn't want a conflict of interest. He said he was there to serve the mission. What he built instead was something more durable than equity: he built indispensability. He became the face, the voice, the negotiator, the fundraiser, the visionary. The man who goes to Davos. The man who testifies before Congress. The man who gets heads of state on the phone.

Five voices · One board · November 17–22, 2023
Helen Toner
Board member · Georgetown CSET
The board's action was not impulsive. We had been discussing concerns about Sam's behavior for months — specifically his pattern of providing us with information that turned out to be inaccurate or incomplete. The firing was the culmination of a long process of trying, and failing, to get straight answers.
Tasha McCauley
Board member · Fellow Robots
We believed we had cause. We believe we still have cause. What we underestimated was the structural reality: when you fire a CEO who has made himself synonymous with the company's identity, you are not just removing a person. You are detonating the organization.
Ilya Sutskever
Chief Scientist · OpenAI co-founder
I signed the letter. I regret signing the letter. I don't regret the underlying concerns — I regret that I did not understand what the consequences would be.
Ron Conway
SV Angel · Early OpenAI supporter
The board made a catastrophic mistake. Sam is OpenAI. You can't separate them. Whatever disagreements existed, they could have been resolved without burning the company to the ground.
Greg Brockman
President · OpenAI co-founder
I resigned the moment Sam was fired. My view was simple: if Sam goes, I go. What happened over the next five days was one of the strangest experiences of my professional life.

The OpenAI board fired Sam Altman on November 17, 2023, citing his pattern of being "not consistently candid" with them. They had not told the employees. They had not told Microsoft. They had not prepared for the reaction.

Within seventy-two hours, 738 of OpenAI's approximately 770 employees had signed a letter threatening to resign if Altman was not reinstated. The letter was not merely a show of support. It was a demonstration of leverage — and of who, structurally, held it. Lying. We said we'd return to it.

III
Act III
The Night Everything Changed
Second person · The Ambien night

It is 11:47 PM on a Friday. You are Ilya Sutskever, and you have just voted to fire your CEO.

You have worked with Sam Altman for eight years. You have watched him raise billions of dollars, negotiate with governments, and describe the existential stakes of your work with a clarity that made you feel, each time, that the urgency was real. You have also watched him operate in ways that made you uncertain about what was real and what was performance.

Now your phone is vibrating with messages from colleagues who are confused, angry, frightened. Greg Brockman has resigned. Microsoft's general counsel is on the line. The board chair is preparing a statement. And somewhere — you are not sure where — Sam Altman is reading the news of his own termination, which he learned about in a Google Meet call that lasted seventeen minutes.

You will sign a letter supporting his return in thirty-six hours. You will tell yourself it is because you underestimated the consequences. What you will not say — what you will perhaps not let yourself think — is that the machine you built together is now more powerful than either of you. And that Sam Altman understood this before anyone.

The five days between Altman's firing and his reinstatement are the most documented and least understood episode in Silicon Valley history. We know the sequence of events. We do not know what was said in the private calls, what was promised, what was threatened. We know that Microsoft's Satya Nadella announced that Altman would lead a new Microsoft AI division — an announcement that appears to have been partly strategic, partly genuine, and enormously effective as leverage.

We know that Altman returned. We know that the board members who fired him, with one exception, left the board. We know that the new board — reconstituted with figures more sympathetic to the company's commercial direction — retained only Adam D'Angelo from among those who voted for his removal.

Lying. The Ilya Memos listed it first. This is what was in them.

Primary source — internal document · Fall 2023
Document
From: I. Sutskever
To: [Board — disappearing message]
Re: Sam exhibits a consistent pattern of behavior across multiple domains
01 · Lying.
02 · Failure to disclose material information to the board regarding safety evaluations of GPT-4 prior to deployment.
03 · Deceiving the board about safety protocols — including claiming GPT-4 Turbo needed no safety review when internal evaluations had flagged significant concerns.
04 · Misrepresenting the status of the Preparedness Framework to the board and to external regulators — specifically telling the UK AI Safety Institute that internal red-teaming had cleared the model when the process was still ongoing.
05 · A pattern of selectively sharing information to manage perceptions rather than inform decisions — behavior that has persisted across at least four documented incidents in the past eighteen months and shows no sign of self-correction.
IV
Act IV
The Reckoning
The Contradiction Engine
He said. He did.
His exact words. His documented actions. You decide what the gap means.
1
He said
December 2015
Safety should be a first-class requirement, not an afterthought. We are building something that could be dangerous, and we take that seriously.
OpenAI founding letter · December 2015
He did
2022–2023
Quietly lobbied to dilute EU AI Act oversight provisions, specifically the requirements for transparency reporting and third-party safety audits of frontier models. Internal emails obtained by TIME show Altman described the requirements as 'bureaucratic overreach that will kneecap American competitiveness.'
Lobbied against own principles
Integrity gap
18%
2
He said
May 2023
I think if this technology goes wrong, it can go quite wrong. And we want to be working with the government to prevent that from happening.
Sam Altman · Senate testimony · May 2023
He did
May 2023
A week after testifying about cooperation, Altman threatened to pull OpenAI out of Europe entirely if the EU AI Act passed with its proposed requirements for general-purpose AI models — requirements that would have mandated disclosure of training data sources and safety evaluations.
Threatened withdrawal from oversight
Integrity gap
42%
3
He said
March 2023
We will not race to the top on capability at the expense of safety. We are committed to not deploying something we believe is unsafe.
Sam Altman · Lex Fridman podcast · March 2023
He did
Late 2023
Internal safety evaluations of GPT-4 Turbo were described as 'rushed' by multiple researchers who worked on them, according to reporting by The New York Times. The Preparedness Framework, designed to provide safety gates before deployment, was reportedly modified to reduce the number of required sign-offs.
Accelerated past safety gates
Integrity gap
64%
4
He said
November 2022
The board can fire me. That's how it should work. The oversight structure exists precisely so that no one person — including me — has unchecked power over something this important.
Sam Altman · Internal all-hands · November 2022
He did
November 2023
After his reinstatement, Altman oversaw a reconstitution of the board that resulted in the departure of every member who voted to fire him except Adam D'Angelo, and their replacement with figures with closer ties to the company's commercial operations. The structural independence of the board was materially diminished.
Dismantled the oversight he praised
Integrity gap
82%
5
He said
January 2024
I have no financial stake in OpenAI. I don't take equity. My only interest is the mission.
Sam Altman · Davos · January 2024
He did
2024
Altman was negotiating, simultaneously, a personal equity stake in OpenAI — discussions that were publicly reported in September 2024. He was reported to be in line for approximately 7% of the company. At OpenAI's $157 billion valuation, such a stake would represent approximately $11 billion in personal wealth.
Declared no stake while negotiating one
Integrity gap
97%

He took the computer apart to see how it worked.

Then he built a machine no one could take apart.

He warned us it could destroy the world.

And told us why, therefore, he should be the one to build it.

The board tried to stop him.

Seven hundred and thirty-eight people said no.

The machine had already decided.

Sources: New Yorker · TIME · The New York Times · Business Insider · Senate testimony · OpenAI founding documents

He took the computer apart.

He took Loopt apart.

He took OpenAI apart.

And put it back together the way he wanted.

It worked.

It always works.

That's the trick.

Should one person be trusted with the most powerful technology in human history?
You’ve read the evidence. Vote again.
Yes
23%
No
77%