imaginativeone.eth 🦇🔊

3.6K posts

@imaginative_one

Joined October 2021
1.2K Following · 349 Followers
imaginativeone.eth 🦇🔊 reposted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
A Persian scholar finished a single math book in 9th-century Baghdad that quietly became the foundation for every line of code running on Earth today. I started reading about him at midnight and could not believe how many things in my daily life trace back to one man. His name was Muhammad ibn Musa al-Khwarizmi. The book is called The Compendious Book on Calculation by Completion and Balancing.

Every time you say the word algebra, you are saying his book title. Every time someone says the word algorithm, they are saying his name. Both English words come from him: both are Latin renderings of Arabic, one of his book's title and one of his own name. The man did not just contribute to mathematics. He named it.

Here is the part almost nobody tells you. Al-Khwarizmi was born around 780 CE in Khwarazm, in what is now Uzbekistan. He moved to Baghdad and worked at a research institution called the House of Wisdom, which during the Islamic Golden Age was the single most important center of learning on the planet. The caliph al-Mamun hired the best mathematicians, astronomers, and philosophers from across three continents and put them in one building with one job: translate, study, and produce new knowledge.

Al-Khwarizmi finished his book on algebra around 820 CE. The Arabic title contained the word al-jabr, which referred to one of the two operations he used to solve equations. When the book was translated into Latin in the 12th century, the Latin world did not have a word for what he had built, so they kept his Arabic one. Al-jabr became algebra. The discipline was named after a single Arabic word in the title of a single book by a single man.

The deeper insight is what he actually changed about how humans think. Before al-Khwarizmi, mathematical problems were solved geometrically. You drew shapes. You measured them. You compared areas. The Greeks had built an entire mathematical tradition on visual proofs and physical constructions. It was beautiful and limited. You could not solve a problem you could not draw.

Al-Khwarizmi did something nobody had done before him at this scale. He said you could solve any problem using abstract symbols and rules. You did not need a shape. You needed a procedure. You moved terms across the equation. You cancelled like terms on both sides. You isolated the unknown. He invented the idea that mathematics is a manipulation of symbols according to rules, not a study of physical figures. That single shift made everything that came afterward possible. Calculus. Differential equations. Linear algebra. Quantum mechanics. None of it works if math is locked inside geometry. He pulled it out.

The second thing he did is the one that changed how the world counted forever. He took the Hindu numeral system from Indian mathematics, refined it, and wrote a book introducing it to the Arab world. That system included the concept of zero as a placeholder, and a positional notation where the value of a digit depends on its location. Roman numerals could not support complex calculation. Hindu-Arabic numerals could. When his book on numerals was translated into Latin as Algoritmi de numero Indorum, the word Algoritmi was just the Latin spelling of his own name. Europeans started calling the new method "doing algorism," then "running an algorithm." The word for the most important concept in computer science is literally his name in Latin.

The third thing he did is the part that should haunt anyone who works in tech. His method of solving problems was systematic. Step one, do this. Step two, check that. Step three, if condition A, then do X, otherwise do Y. He wrote down procedures that could be followed by anyone, anywhere, who knew how to read. The procedure did not depend on intuition or genius. It worked because the steps worked. That is exactly what an algorithm is: a finite, deterministic procedure for solving a problem. He did not just give us the word. He gave us the entire concept of programming a thousand years before there was anything to program.

When Alan Turing built the first abstract model of computation in 1936, when John von Neumann designed the first stored-program computer architecture in 1945, when every engineer at Google, OpenAI, Anthropic, and DeepMind writes code in 2026, they are working in a paradigm that started with one man in Baghdad twelve centuries ago.

The strangest part is what happens when you walk into any tech office in San Francisco or Bangalore or Lahore today. Engineers say the words algebra and algorithm hundreds of times a day. They do not know whose name they are saying. Almost nobody can spell al-Khwarizmi correctly on the first try. His original Arabic manuscript is preserved at Oxford. His book on Hindu numerals survives only in Latin translation. The Latin version was the textbook that taught medieval Europe how to count.

The man who built the foundation of the AI revolution did not live to see a calculator. He died around 850 CE, a thousand years before the first electric current was sent through a wire. The civilization he built mathematics for collapsed. The library he wrote in burned. His own grave is unmarked. But every algorithm running on every machine on Earth right now still answers to his name.
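The "procedure, not shape" idea is concrete enough to sketch in a few lines. Here is solving a*x + b = c by the two moves the thread describes, moving a term across the equation and then isolating the unknown; the function name and the modern symbolic notation are of course anachronisms added for illustration.

```python
# Toy illustration of rule-following equation solving: solve a*x + b = c
# purely by manipulating symbols, no geometry involved.
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c step by step, as a procedure anyone can follow."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    # Step 1: move the constant term across the equation.  a*x = c - b
    c = c - b
    # Step 2: divide both sides by the coefficient to isolate x.  x = (c - b) / a
    return c / a

print(solve_linear(3, 4, 19))  # 3x + 4 = 19  ->  5
```

The point of writing it this way is that each step is mechanical: no intuition is needed, only the rules, which is exactly the property that makes it an algorithm.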
imaginativeone.eth 🦇🔊 reposted
rvivek
rvivek@rvivek·
The hottest job for the next five years is going to be the agent operator. They don't need to be an engineer. They can walk into marketing, legal, or life sciences research and actually make agents work for that function.

Required skills:
> MCPs
> CLIs
> Writing skills (the file kind)
> agents.md fluency
> Business acumen

None of this is in any CS curriculum today. Soon, enterprises will be pressured to redesign their workflows for agents, not for people. And when that happens, agent operators will be in massive demand.
imaginativeone.eth 🦇🔊 reposted
Arti Shah
Arti Shah@TechByArti·
BREAKING: I asked Claude to upgrade my LinkedIn profile. It didn’t just “upgrade” it. It turned it into a recruiter magnet. Here are the exact 15 prompts I used:
imaginativeone.eth 🦇🔊 reposted
viramimoza
viramimoza@viramimoza·
✨🌙💖
imaginativeone.eth 🦇🔊 reposted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
A mathematician who shared an office with Claude Shannon at Bell Labs gave one lecture in 1986 that explains why some people win Nobel Prizes and other equally smart people spend their whole lives doing forgettable work. His name was Richard Hamming. He won the Turing Award. He invented error-correcting codes that made modern computing possible. And he spent 30 years at Bell Labs sitting in a cafeteria at lunch, watching which scientists became legendary and which ones faded into nothing.

In March 1986, he walked into a Bellcore auditorium in front of 200 researchers and told them exactly what he had seen. Here's the framework that has been quoted by every serious scientist for the last 40 years.

His opening line landed like a punch. He said most scientists he worked with at Bell Labs were just as smart as the Nobel Prize winners. Just as hardworking. Just as credentialed. And yet at the end of a 40-year career, one group had changed entire fields and the other group was forgotten by the time they retired. He wanted to know what the difference actually was. And he said it wasn't luck. It wasn't IQ. It was a specific set of habits that almost nobody is willing to follow.

The first habit was the one that hurts the most to hear. He said most scientists deliberately avoid the most important problem in their field because the odds of failure are too high. They pick a safe adjacent problem, solve it cleanly, publish it, and move on. And because they never swing at the hard problem, they never hit it. He said if you do not work on an important problem, it is unlikely you will do important work. That is not a motivational line. That is a logical one.

The second habit was about doors. Literal doors. He noticed that the scientists at Bell Labs who kept their office doors closed got more done in the short term because they had no interruptions. But the scientists who kept their doors open got more done over a career. The open-door scientists were interrupted constantly. They also absorbed every new idea passing through the hallway. Ten years in, they were working on problems the closed-door scientists did not even know existed.

The third habit was inversion. When Bell Labs refused to give him the team of programmers he wanted, Hamming sat with the rejection for weeks. Then he flipped the question. Instead of asking for programmers to write the programs, he asked why machines could not write the programs themselves. That single inversion pushed him into the frontier of computer science. He said the pattern repeats everywhere: what looks like a defect, if you flip it correctly, becomes the exact thing that pushes you ahead of everyone else.

The fourth habit was the one that hit me the hardest. He said knowledge and productivity compound like interest. Someone who works 10 percent harder than you does not produce 10 percent more over a career. They produce twice as much. The gap doesn't add. It multiplies. And it compounds silently for years before anyone notices.

He finished the lecture with a line I have never been able to shake. He said Pasteur's famous quote is right: luck favors the prepared mind. But he meant it literally. You don't hope for luck. You engineer the conditions where luck can land on you. Open doors. Important problems. Inverted questions. Compounded hours. Those are not traits. Those are choices you make every single day.

The transcript has been sitting on the University of Virginia's computer science website for almost 30 years. The video is free on YouTube. Stripe Press reprinted the full lectures as a book in 2020, and Bret Victor wrote the foreword. Hamming died in 1998. He gave his final lecture a few weeks before. He was 82. The lecture that explains why some careers become legendary and others disappear is still free. Most people who could benefit from it will never open it.
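The compounding claim above is easy to check with a line of arithmetic. A sketch under the simplifying assumption that a 10 percent edge compounds once per year; the model is illustrative, not Hamming's own calculation.

```python
# Toy model of Hamming's "knowledge compounds like interest" point:
# a constant fractional edge, compounded annually.
def relative_output(edge: float, years: int) -> float:
    """Relative advantage after `years` if a 10%-style edge compounds yearly."""
    return (1 + edge) ** years

# A 10% compounding edge roughly doubles output in about 7 years,
# so over a 40-year career the gap is a multiple, not 10 percent.
print(relative_output(0.10, 7))   # just under 2x
print(relative_output(0.10, 40))  # dozens of times, not 1.1x
```

The exact multiplier depends entirely on what you assume compounds and how often, which is why the sketch prints ratios rather than claiming a precise career-long number.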
imaginativeone.eth 🦇🔊 reposted
All day Astronomy
All day Astronomy@forallcurious·
UNUSUAL🚨: Scientists discover that silence regenerates the brain: being in complete silence for at least 2 hours a day can stimulate the creation of new brain cells, especially in regions linked to memory and learning.
imaginativeone.eth 🦇🔊 reposted
Dr. Julie Gurner
Dr. Julie Gurner@drgurner·
"Envy no one. For whatever you see, a price was paid." Absolute truth.
imaginativeone.eth 🦇🔊 reposted
Ai With Piyas
Ai With Piyas@piyascode9·
BREAKING: AI can now create dividend portfolios that can generate $100,000 in passive income a year — for free. Here are 12 powerful Perplexity prompts with which you will find safe + growing dividend stocks. Save this thread. 🧵
imaginativeone.eth 🦇🔊 reposted
Olivia Chowdhury
Olivia Chowdhury@Oliviacoder1·
🚨 BREAKING: Claude can now build your entire mobile app — like a $350K Apple-level developer — in minutes, for free. What used to take a full team weeks (and thousands of dollars)… can now be done with a few powerful prompts.
imaginativeone.eth 🦇🔊 reposted
Leonard Rodman
Leonard Rodman@RodmanAi·
Stop telling Claude: "build this"
Stop telling Claude: "write code"
Stop telling Claude: "fix this bug"

You're using a staff-level AI like a junior intern. Claude performs best when you give:
• role
• constraints
• architecture expectations
• output format
• real-world context

Here are 10 production-grade Claude prompts you can copy-paste:
imaginativeone.eth 🦇🔊 reposted
Himanshu Kumar
Himanshu Kumar@codewithimanshu·
Stanford professor just gave away the entire foundation of how AI agents & automation actually work. 1-hour lecture. Tool calling. Multi-step workflows. Planning. Reflection. SAVE this to watch before you open Netflix tonight. More valuable than 6 months of copying Make and n8n tutorials for building AI agents. Most people learn by copying tutorials blindly. Stanford teaches you WHY agents work the way they do. Follow @codewithimanshu for more high-signal content that actually moves your skills forward instead of just entertaining you for 30 seconds.

↓ Why your automations keep breaking. You copied a Make tutorial. Built the exact workflow. Worked for a week. Then the API changed. The trigger failed. An edge case broke everything. You had no idea how to fix it. Because you never understood why it worked. You were copying keystrokes. The people shipping real automation were understanding architecture.

↓ What Stanford actually teaches.
Tool calling: how an agent decides which tool to use by scoring each option against the current task state, not just matching keywords.
ReAct loop: the agent reasons, acts, observes, then reasons again. Break this cycle and your workflow fails silently.
Planning vs execution: why agents that plan all steps upfront break on dynamic inputs, and why iterative planners survive production.
Memory architecture: short-term context for the current task, long-term vector memory for patterns. Most automations fail because they confuse the two.
Reflection: how agents catch their own errors by evaluating outputs against original intent before moving to the next step.
Tool composition: why chaining 10 tools blindly creates cascading failures, and how to structure dependencies so one broken node doesn't kill the whole workflow.
This is the foundation behind every automation that actually works. Not prompting tricks. Not "10 best AI tools" reels. Actual architecture.

Follow @codewithimanshu for more high-signal content that actually moves your skills forward.

↓ Your weekend plan. Tonight: watch the Stanford lecture. 1 hour. Saturday to Sunday: build 3 projects applying what you learned. Next 2 weekends: 6 more projects. 9 projects. 2 weeks. APIs, webhooks, LLM integration, real workflows. No theory. Just build.

↓ Stanford Agentic AI lecture: free on YouTube. Watch it this weekend or buy another $500 "AI automation course" in 2027 that teaches less than this one free lecture. Bookmark. Watch tonight. Follow @codewithimanshu for more high-signal content that actually moves your skills forward.
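The tool-calling and ReAct ideas described above fit in a few lines. In this sketch the two tools, the scoring rule, and the single reason-act-observe step are all invented for illustration; a real agent would replace the scoring stub with an LLM call and loop until the task is done.

```python
# Minimal ReAct-style step: reason (pick a tool by scoring it against the
# current task state), act (call the tool), observe (return its output).

def search(query: str) -> str:
    """Stand-in retrieval tool."""
    return f"results for {query!r}"

def calculator(expr: str) -> str:
    """Stand-in math tool (toy eval, no builtins exposed)."""
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"search": search, "calculator": calculator}

def score(tool_name: str, state: str) -> int:
    """Toy scorer: prefer the calculator when the state looks numeric."""
    numeric = any(ch.isdigit() for ch in state)
    if tool_name == "calculator":
        return 2 if numeric else 0
    return 1  # search is a safe default

def react_step(state: str):
    # Reason: score every tool against the current task state.
    tool_name = max(TOOLS, key=lambda t: score(t, state))
    # Act, then observe: the tool's output becomes the next state.
    return tool_name, TOOLS[tool_name](state)

print(react_step("2+2"))               # picks the calculator
print(react_step("capital of France"))  # picks search
```

The point of the scorer is the one the thread makes: the agent chooses by evaluating options against task state, not by keyword matching, and breaking the reason-act-observe cycle is what makes workflows fail silently.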
Himanshu Kumar@codewithimanshu

Every time you accepted a salary, chose a price, or walked into a negotiation, the other person was running game theory in their head. You were guessing. This 1-hour Yale lecture by Professor Ben Polak will change how you read people and make decisions forever. MBAs pay $150K to learn this. Yale posted it on YouTube for free. Save this post. Watch it tonight. Follow @codewithimanshu for more high-signal content that actually changes the trajectory of your career.

↓ Here's why most people lose every negotiation they enter. You walked into your last salary discussion hoping for the best. They walked in with frameworks. Payoff matrices. Dominant strategies. Backward induction. Nash equilibrium. You said "I was thinking $85K." They already knew the number you'd accept. Because they ran the game before you sat down. That's not a skill gap. That's a universe gap. And it's costing you $20K, $50K, $100K every single year.

↓ Game theory isn't math for MBAs. It's the operating system of every human interaction. Job negotiations. Pricing decisions. Business deals. Relationships. The person who understands it wins by default. Not because they're smarter. Because they're playing a different game. You're playing checkers thinking it's chess. They're playing chess thinking it's 4D chess. Professor Ben Polak teaches Yale's most famous game theory course. Students pay $80,000/year for access to him. His full lecture is now on YouTube. Free.

↓ What 1 hour with Polak teaches you. How to predict what the other side will do before they do it. When to hold your position and when to fold. Why "winning" a negotiation sometimes costs more than losing. How to structure offers the other side can't refuse. The exact math behind every pricing decision in your life. This is what investment bankers use. What hedge fund managers use. What startup founders use to raise money. What CEOs use to run companies. You can have it for free. In 1 hour. Tonight. Or keep walking into negotiations unarmed.

↓ 1 hour of Netflix tonight: you forget by Tuesday. 1 hour of Polak tonight: you negotiate differently for the next 40 years. Same time. One is a distraction. The other is a compounding asset. Save this post. Watch the lecture. Follow @codewithimanshu for more high-signal content that actually changes the trajectory of your career.
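The payoff-matrix machinery the thread name-drops is small enough to write down. A sketch that enumerates strategy pairs in a 2x2 game and finds the pure-strategy Nash equilibria; the game and its numbers are the standard prisoner's dilemma, chosen for illustration (Polak's course covers far more than this).

```python
# Find pure-strategy Nash equilibria of a 2x2 game by brute force:
# a pair is an equilibrium if neither player gains by deviating alone.
from itertools import product

# payoffs[(row_move, col_move)] = (row_player_payoff, col_player_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def nash_equilibria(payoffs, moves):
    eqs = []
    for r, c in product(moves, moves):
        # Row player: no unilateral row deviation improves row payoff.
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in moves)
        # Column player: no unilateral column deviation improves col payoff.
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in moves)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(nash_equilibria(payoffs, moves))  # [('defect', 'defect')]
```

Defect/defect is the equilibrium even though cooperate/cooperate pays both players more, which is the classic illustration of why "winning" a negotiation move-by-move can cost more than it gains.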

imaginativeone.eth 🦇🔊 reposted
Sharran Srivatsaa
Sharran Srivatsaa@sharran·
Stop asking: "What can I do today?" Start asking: "What's the ONE thing that, if I nail it, makes everything else easier?" Protect that ONE thing like your life depended on it.
imaginativeone.eth 🦇🔊 reposted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
A British kid became a chess master at 13, then a bestselling video game designer at 17, then a PhD neuroscientist at 33, then the CEO of the AI lab that won the 2024 Nobel Prize in Chemistry. People called him unfocused for twenty years. He was running the most deliberate career plan in modern science. His name is Demis Hassabis, and the thing almost nobody understood while he was doing it was that every single step was feeding the same underlying obsession. Here is the thread that connects the whole career, and why it matters for how anyone should think about building toward a hard goal.

The chess came first. He was born in London in 1976 and started playing at age four. By eight, he was the London champion for his age group. By thirteen, he had an international master rating that put him among the top fifty players in the world for his age. He was on a track that would have made him a professional player for the rest of his life. He walked away.

The reason he gave later, in interview after interview, is the part most people miss. He said chess forced him to think constantly about thinking itself. Every move required him to simulate what his opponent was simulating about him. He became fascinated not with winning the game, but with the process the human brain was running in order to play it. He decided chess was too small a container for the real question he wanted to answer, which was how intelligence actually works.

The video games came next. He used the money he won from chess tournaments to buy a ZX Spectrum. He taught himself to code. By seventeen, he was a lead programmer on a game called Theme Park that sold millions of copies. He could have stayed in that industry and built a career as one of the top game designers in Britain. He walked away from that too.

He went to Cambridge, took a double first in computer science, and then made the move that looked like the strangest pivot of his life. He enrolled in a PhD in cognitive neuroscience at University College London. He was thirty. His peers from Cambridge were already running companies. He went back to graduate school to study how the human hippocampus builds memories and imagines future scenarios. His 2007 paper on the link between memory and imagination was named one of the top ten scientific breakthroughs of the year by Science magazine.

But the paper was never the point. The point was that he had spent three decades quietly building the exact combination of skills nobody else in the world had put together. Deep intuition for how intelligent agents behave in complex systems, from a lifetime of chess. Hands-on engineering fluency, from years of shipping commercial software. And a rigorous scientific understanding of how biological brains actually produce cognition, from a PhD in neuroscience.

In 2010, he used that combination to co-found DeepMind with Shane Legg and Mustafa Suleyman. The mission statement he wrote was two sentences long and sounded absurd to most people who heard it. Solve intelligence. Then use it to solve everything else.

For the first six years, DeepMind worked almost entirely on games. Atari. StarCraft. Go. People outside the field could not understand why a lab that claimed to be building artificial general intelligence was spending hundreds of millions of dollars teaching computers to play Pong. Hassabis kept explaining the reason in interviews, and almost nobody was listening. Games were not the goal. Games were a controlled environment where you could iterate on general-purpose learning algorithms fast, measure their progress precisely, and prove to yourself that you had built something that could transfer between domains.

In 2016, AlphaGo beat Lee Sedol, the world champion at Go, in a match that had been considered decades away. And the day after that match ended, Hassabis sat down with his team lead David Silver and asked what they should do next. The answer was the thing he had been working toward his entire life. They aimed the same deep reinforcement learning approach at a problem biology had been stuck on for fifty years. Protein folding. Given an amino acid sequence, predict the three-dimensional shape the protein will fold into. Every drug discovery effort in the world depended on it. The best computational methods could only solve a small fraction of proteins. Experimental methods took years per structure and millions of dollars per protein.

AlphaFold2 was released in 2020. Within a year, it had predicted the structure of almost every protein known to science. Two hundred million structures, made freely available to the entire research community. More than two million researchers from a hundred and ninety countries have used it since. In October 2024, Demis Hassabis and John Jumper were awarded the Nobel Prize in Chemistry for that work.

The line almost nobody quotes from his speeches is the one that explains the whole career. He has said, many times, that he did not build AlphaFold to solve protein folding. He built AlphaFold to prove that the approach he had been developing for thirty years could actually work on a real scientific problem. Protein folding was the demonstration. AGI was always the goal.

The chess taught him how to think about adversarial systems. The games taught him how to ship software. The neuroscience taught him how the only existing example of general intelligence actually worked. DeepMind used all three to build a method that could transfer between domains the way the human brain does. And the moment the method was ready, he pointed it at the single most important unsolved problem he could find, in a domain where a breakthrough would save millions of lives.

Most people looking at his career from the outside, at any point before 2016, would have called it scattered. A chess prodigy who gave up chess. A video game designer who walked away from a gaming career. A computer scientist who detoured through neuroscience. A startup founder who burned six years on board games. From the inside, it was the most focused career in modern science. Every step was quietly answering the same question: how does intelligence actually work, and what would it take to build one that could solve problems humans have not been able to solve alone?

The people who change a field are almost never the ones who looked focused along the way. They are the ones who were obsessed with a single question so deep and so long that the path they took to answer it looked like chaos from the outside and like a straight line from the inside. And they almost never get credit for the plan until decades later, when the Nobel Committee calls.
imaginativeone.eth 🦇🔊 reposted
Avid
Avid@Av1dlive·
Anthropic's applied AI team just showed how to actually prompt Claude properly. 24 minutes. free. from the people who built it. watch the workshop. bookmark it. you've been prompting Claude for months without the 6 elements. I built a skill that applies them for you. read the guide below.
Khairallah AL-Awady@eng_khairallah1

x.com/i/article/2046…

imaginativeone.eth 🦇🔊 reposted
Sharran Srivatsaa
Sharran Srivatsaa@sharran·
Income is the lowest level of wealth. Even high income is still just you trading effort, skill, or attention for cash. The next level up is leveraged income. That’s when other people are doing work and you earn from the spread. Better but it’s still income. The real shift happens at the equity level. Equity is ownership. Equity is the dollar value of what you own in something. Income is fuel. Equity is wealth. Earn your income, buy equity.
imaginativeone.eth 🦇🔊 reposted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
A 100-page book written by an MIT professor in 2006 has been translated into 14 languages and quietly become the rulebook that designers at Apple, Google, and Airbnb still reference today. His name is John Maeda, and before he wrote it he spent 12 years at the MIT Media Lab trying to figure out why the products that get loved are almost never the products with the most features. The book is called The Laws of Simplicity. The following year, he walked onto the TED stage and compressed the entire thing into a few minutes. That talk has been played over a million times and is still passed around every time a design team gets into a fight about what to cut. Here is the framework inside it that changed how I think about every product I touch.

Maeda's first law is Reduce, and it is the one everyone thinks they already understand. They don't. He argues that the simplest way to achieve simplicity is through thoughtful reduction, but the word that matters in that sentence is thoughtful. Removing the wrong things makes a product feel broken. Removing the right things makes it feel magical. The difference is not taste. It is a method. The method he teaches is an acronym he calls SHE: Shrink, Hide, Embody.

Shrink means making the product feel smaller, lighter, and more humble than it actually is, because when a small unassuming object exceeds expectations, the brain registers it as delight. The iPod's mirrored back was not a finish decision, it was a shrinking trick. The reflection made the device blend into its surroundings so the eye only registered the thin plastic front. You felt like you were holding something impossibly thin because half of it was optically erased.

Hide means taking the complexity that cannot be removed and putting it somewhere the user will never see it unless they go looking. The Swiss Army knife is the oldest version of this idea. The clamshell cell phone was the modern one. Today it is every settings menu buried three taps deep in every app on your phone. The complexity is still there. The user just never has to carry it.

Embody is the one that almost nobody applies correctly. Maeda argues that once you shrink and hide, you create a vacuum where the user starts to wonder whether the smaller, simpler thing is actually worth more than the bigger, feature-rich thing. So you have to put the lost value back in through materials, weight, craftsmanship, or story. The Bang & Olufsen remote control is intentionally made heavier than it needs to be, because weight in the hand signals quality. The same remote in plastic would feel cheap. Same functions. Completely different product.

The deepest insight in the talk is the one Maeda buries near the end, and almost nobody quotes it back. He says simplicity is not a feature you bolt on. It is a consequence of being willing to defend fewer things more fiercely than your competitors are willing to defend more things. Every product eventually faces a moment where adding one more feature feels harmless and subtracting one feels expensive, and the companies that win that moment are the ones that understand the cost of adding is almost always higher than the cost of cutting.

His final law is the one he calls The One: simplicity is about subtracting the obvious and adding the meaningful. Read that sentence twice. It is the entire design philosophy of every product you currently love, compressed into a single line.

Maeda grew up working 3am shifts in his father's tofu factory in Seattle, before MIT, before RISD, before Kleiner Perkins, before Microsoft. He has said more than once that what he learned in that factory shaped everything he wrote in that book. Craftsmanship is not about doing more. It is about doing the right things and refusing to do anything else. The book is 100 pages. Read it and learn the laws of simplicity.
Ihtesham Ali tweet media
imaginativeone.eth 🦇🔊 reposted
Sharran Srivatsaa
Sharran Srivatsaa@sharran·
Happiness is often about reducing the number of things you want. Renunciation is the key to inner peace.
imaginativeone.eth 🦇🔊 reposted
Sharran Srivatsaa
Sharran Srivatsaa@sharran·
If you want to be great, expect a long stretch of being bad.
imaginativeone.eth 🦇🔊 reposted
Sun Tzu | Art of War ⚔️
A leader leads by example, not by force.
imaginativeone.eth 🦇🔊 reposted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
An MIT professor who built the world's first neural network machine said something about intelligence that nobody in Silicon Valley wants to admit. His name was Marvin Minsky. He co-founded MIT's artificial intelligence lab with John McCarthy in 1959. As a graduate student at Princeton in 1951, he built SNARC, the first randomly wired neural network learning machine. He won the Turing Award. He advised Stanley Kubrick on 2001: A Space Odyssey. Isaac Asimov, who was not a modest man, said Minsky was one of only two people he would admit were more intelligent than he was.

In 1986, after decades of building machines that could think, Minsky published a book about something far more unsettling: how humans think, and why we are wrong about almost everything we believe about it. The book is called The Society of Mind. It has 270 essays, each one a page long. Together they build a single argument that most people, when they first encounter it, reject immediately because it is too uncomfortable to accept.

The argument is this: you do not have a mind. You have thousands of them. What you experience as a single, unified self making clear-headed decisions is not a thinker. It is an outcome. The result of hundreds of tiny, specialized, mostly mindless agents competing, negotiating, overriding, and occasionally cooperating with each other beneath the surface of your awareness. You do not decide things. You are what is left over after the arguing stops.

Minsky was precise about this. He wrote that the power of intelligence stems from our vast diversity, not from any single perfect principle. He called this the trick that makes us intelligent, and then immediately added: the trick is that there is no trick. There is no central processor. No ghost in the machine. No unified self sitting behind your eyes, calmly evaluating options and choosing rationally. There is only the parliament. And the parliament is always in session.

This reframing destroys the standard explanation for every failure of self-control. The reason you procrastinate is not laziness. It is that the agent in you that understands long-term consequences is losing an argument to the agent that wants comfort right now, and neither of those agents has a decisive vote. The reason you change your mind the moment someone pushes back is not weakness. It is that the social agent, the one that monitors status and belonging, just outweighed the analytical one. The reason willpower fails is not a character flaw. It is that you sent one small agent into a fight against dozens, and you called that discipline.

Minsky had a specific line that breaks this open completely. He said: in general, we are least aware of what our minds do best. The things you do with the most apparent ease, reading a face, walking through a crowded room, understanding a sentence, catching a ball, are not simple at all. They are the products of staggeringly complex agent networks that run so smoothly, so far below conscious access, that you experience them as effortless. The things that feel like work, the logical arguments, the deliberate choices, the careful plans, are actually the clumsy surface layer, the small fraction of mental activity you can observe at all. You have been taking credit for the wrong parts of your own intelligence.

The practical implication is the one that most productivity advice misses entirely. If your decisions are not made by a single rational self but by whichever coalition of agents happens to win the moment, then the game is not about training yourself to be more disciplined. The game is about designing the environment so that the right agents win without needing a fight. This is why removing your phone from the room works better than deciding not to check it. This is why writing one task on an index card works better than building a sophisticated system. This is why commitment devices beat motivation every time. You are not strengthening your will. You are changing the conditions of the argument so that the outcome you want becomes the path of least resistance.

Minsky spent his entire career building machines that could imitate intelligence. What he discovered in the process was that natural intelligence, the kind running inside every human brain on earth, is nothing like what we think it is. It is not a single flame burning in a single chamber. It is a city. Loud, chaotic, full of competing interests, with no mayor. The people who understand this stop trying to win the argument through force of will. They learn to build a better city instead.
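The environment-design point can be made with a toy model: competing agents bid on the current situation, the highest bid wins, and changing the environment changes who wins without any "willpower" in the loop. The agent names and bid values are invented for illustration; this is a cartoon of Minsky's picture, not his model.

```python
# Toy society-of-mind: no central chooser, just agents bidding on the
# current environment. The strongest activation wins the moment.

def decide(agents, environment):
    """Each agent scores the situation; the highest-scoring agent acts."""
    bids = {name: bid(environment) for name, bid in agents.items()}
    return max(bids, key=bids.get)

agents = {
    # Comfort-seeking agent: activated strongly when the phone is in reach.
    "check_phone":  lambda env: 5 if env["phone_in_room"] else 0,
    # Long-term agent: steady but weaker activation.
    "keep_working": lambda env: 3,
}

# Willpower fight: with the phone present, the comfort agent outbids work.
print(decide(agents, {"phone_in_room": True}))   # check_phone
# Environment design: remove the phone and the work agent wins by default.
print(decide(agents, {"phone_in_room": False}))  # keep_working
```

Nothing about either agent changed between the two calls; only the environment did, which is the whole argument for commitment devices over motivation.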