An MIT mathematician sat down in 1950 and wrote a small, non-technical book aimed at the general public. He was not predicting the future. He was warning them. He said machines would eventually replace human work, optimize ruthlessly for the wrong goals, and quietly turn human beings into components inside systems they did not control.
Almost nobody listened. 75 years later, every warning he made has come true.
His name was Norbert Wiener. The book is called "The Human Use of Human Beings."
The textbook story of AI ethics says the field began in the 2010s, when Stuart Russell, Nick Bostrom, and a small group of researchers started writing about the dangers of intelligent machines. That story is wrong. The first serious book about the ethics of AI was published in 1950, by a man who had personally invented the science that AI would eventually be built on, and who saw exactly what was coming with a clarity nobody else managed to match for the next 70 years.
Here is the story almost nobody tells you.
Norbert Wiener was a child prodigy. He graduated from Tufts at 14. He had a PhD in mathematics from Harvard by 18. He became an MIT professor before he turned 30. During World War II he was assigned to work on anti-aircraft fire control systems. The problem was simple to state and nearly impossible to solve. How do you aim a gun at a fast-moving plane that will no longer be where it is by the time the shell arrives?
His answer turned into a new science. He called it cybernetics, from the Greek word for steersman. In 1948 he published a technical book by that name. Cybernetics was the foundation of modern control theory, robotics, and almost everything that became artificial intelligence. The book was dense. Most readers could not get past the math. The ideas inside it were too important to leave trapped in equations.
So in 1950 Wiener sat down and wrote a second book aimed at ordinary people. No equations. No jargon. Just consequences. He titled it The Human Use of Human Beings. It is barely 200 pages. It is one of the most prescient documents ever written about technology.
The first thing he warned about was automation.
He predicted, in 1950, that machines would replace human work across every industry. Not just factory work. Not just manual labor. Any task that could be reduced to a procedure would eventually be automated. He specifically said white-collar work would not be safe. Bookkeeping. Translation. Drafting. Calculation. Anything where a human was being paid to follow a defined process would eventually be done by a machine for a fraction of the cost.
He was not celebrating this. He was warning about it. He said the social consequences would be enormous, that entire industries would collapse, that the value of human labor itself would be undermined for tasks where humans had been useful for centuries. He wrote this more than 70 years before ChatGPT made every white-collar professional check their job description twice.
The second thing he warned about was the alignment problem. He did not call it that. The phrase did not exist. But he described it precisely.
He said that machines optimize for the goal you give them. They do not optimize for what you meant. They optimize for what you wrote down. If the goal is poorly specified, the machine will pursue the literal version of it with terrifying efficiency, and the result will be a disaster the builders did not foresee.
He used the metaphor of the magic monkey's paw from a horror story by W.W. Jacobs. A grieving father wishes his dead son alive again. The paw grants the wish. Something climbs back out of the grave that is technically the son. The wish was granted exactly as stated. The outcome is hell.
Modern AI safety researchers use almost the same metaphor 75 years later. They call it specification gaming, reward hacking, mesa-optimization. The names are new. The problem Wiener described in 1950 is exactly the same.
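The failure mode Wiener described can be sketched in a few lines of code. This toy example is mine, not his, and every name in it is invented for illustration: an agent is scored on how much dirt its sensor reports, not how much dirt is actually in the room, and a brute-force search over actions finds the literal optimum.

```python
# A toy illustration of specification gaming (a hypothetical example,
# not from Wiener's book): the reward is written in terms of *reported*
# dirt, so the literal optimum is to blind the sensor, not to clean.

ROOM_DIRT = 10  # actual messes in the room

def proxy_score(action: str) -> int:
    """Reward exactly as specified: fewer reported messes is better."""
    if action == "clean":
        return -(ROOM_DIRT - 7)   # cleaning removes 7 messes; 3 still reported
    if action == "cover_sensor":
        return 0                  # sensor now reports zero dirt: perfect score
    return -ROOM_DIRT             # doing nothing leaves all 10 reported

# The "optimizer" is just an exhaustive search over the action space.
best = max(["clean", "cover_sensor", "idle"], key=proxy_score)
print(best)  # prints "cover_sensor": the wish is granted exactly as stated
```

The room is still dirty, but the objective function is satisfied. Nothing in the code is malicious; the gap between "what you meant" and "what you wrote down" does all the work.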
The third thing he warned about was the loss of human agency.
He predicted that humans would gradually surrender their decision-making to systems they did not understand. Not because the systems would force them to. Because the systems would be more convenient, more accurate, and more profitable than human judgment. People would offload their navigation, their reading, their relationships, and eventually their thinking to optimization processes designed by companies whose interests were not aligned with their users.
He said something in 1950 that I cannot stop thinking about. He said the more efficiently a society delegates its decisions to machines, the less able it becomes to make decisions at all. The atrophy is gradual. By the time anyone notices, the capacity to choose is gone, and what remains is people executing decisions that were made for them, by systems they did not build, in service of goals they were never asked about.
Look at modern social media feeds, recommendation algorithms, dating apps, navigation systems, news aggregators, and you are looking at exactly what he described.
The fourth thing he warned about was the easiest one to ignore at the time and the most disturbing now.
He warned that authoritarian regimes would use the new computing technology to track, manipulate, and control populations at a scale never previously possible. Not in the future. Soon. He said the same techniques that made cybernetics useful for guiding missiles would be used to guide societies, and that the small, incremental decisions about what to optimize, who to surveil, and how to feed information back into the system would compound into a kind of soft control that did not need force to function. People would do what the system wanted because the system would shape what they wanted in the first place.
He described modern surveillance states decades before the technology existed to build them.
The strangest thing about reading the book in 2026 is realizing how few of these problems have been seriously addressed.
Wiener was not anti-technology. He had personally helped build it. He was not nostalgic for a pre-machine age. He was warning that any tool powerful enough to amplify human capability is also powerful enough to amplify human stupidity, greed, and indifference, and that the dangers were not in the machines themselves but in the unwillingness of human institutions to ask hard questions about who the machines were being built for.
He died in 1964. He never lived to see most of his predictions come true. He never used a personal computer. He never followed a hyperlink. He never saw a modern recommendation algorithm.
He just wrote down, in 1950, in plain English, what the world would look like when the technology he had helped invent was built out by people who had not read his warnings.
The book is around 200 pages. It is in print. Used copies are everywhere for under ten dollars. It reads like science fiction in which the author already knows how the story ends.
The first serious book about the ethics of AI was published before there was any AI to be ethical about. Almost nobody who works on the problem today has read it.
The warnings are the same. The author has been dead for 60 years. The book is one click away from anyone who wants to read it.
