Scott Adams

1.4K posts


@ScottAdamsDev

The OG Indie Computer Game Developer, one of the founders of the Personal Computer Gaming Industry.

Wisconsin USA · Joined August 2012
619 Following · 1.9K Followers
Pinned Tweet
Scott Adams @ScottAdamsDev ·
Just a quick note. I have started helping out deft.co and it looks like it will be a fun wild ride. Can't reveal too much, but it is designed to make developers' lives easier and a lot more fun in the wild, woolly agentic AI world! I am so excited to be a part of this!
Scott Adams retweeted
Ihtesham Ali @ihtesham2005 ·
A child prodigy who finished his Harvard degree at 14 and his PhD at 17 sat down in 1948 and wrote a single book that invented the entire conceptual vocabulary we still use to talk about AI, robotics, self-driving cars, and reinforcement learning. He never got the credit. Most people have never heard his name.

His name was Norbert Wiener. The book was called Cybernetics. Every feedback loop running inside every system you interact with today traces back to one problem he was handed during World War II.

The problem was this: how do you aim a gun at a fast-moving airplane? By the time your shell arrives, the plane is somewhere else. You cannot aim at where the plane is. You have to aim at where the plane will be. And the plane's pilot, knowing this, is constantly changing course to make that prediction wrong.

Wiener spent years on this. What he built to solve it was not a better gun. It was a new science.

He noticed something that nobody had formally described before. The gun system and the human nervous system were solving the same problem using the same method. You observe where the target is. You compare it to where you want to hit. You calculate the gap. You correct. You observe again.

He called that loop feedback. Not in the casual sense people use it today. In the precise mathematical sense. A signal goes out. The result comes back. The system compares the result to the goal. The gap between them drives the next action. The loop closes.

That mechanism, exactly as Wiener described it in 1948, is what runs inside every thermostat, every autopilot, every cruise control system, and every AI training loop on the planet right now. When GPT-4 learned to answer questions better, it was doing feedback. When AlphaGo learned to play Go, it was doing feedback. When a self-driving car adjusts its steering because it drifted two inches toward the curb, it is doing feedback. The word they all use, the concept underneath the word, the mathematics formalizing the concept, all of it came from one book written by a child prodigy in 1948 who was trying to figure out how to shoot down a plane.

The deeper insight was what he proved about living systems and machines. Before Wiener, biology and engineering were treated as completely separate domains. Organisms adapted. Machines calculated. The idea that you could describe both using the same mathematical framework was not just unusual. It was considered a category error. Wiener proved it anyway. He showed that a brain correcting a reaching movement and a missile correcting its trajectory were running mathematically identical control loops. The hardware was different. The math was the same. Living systems and engineered systems obeyed the same laws once you understood what those laws actually were.

He named the field after the Greek word for steersman. Kubernetes. Cybernetics. The person who holds the rudder, reads the water, and adjusts constantly to hold a course through a current that is always pushing the ship somewhere else. That is the mental image he wanted. Not a machine that executes instructions. A system that responds to its own results.

The third thing he did is the part almost nobody connects to modern AI. In 1948, Wiener spent an entire chapter of Cybernetics warning about what would happen when machines that learn from feedback were given control over consequential decisions. He described the displacement of workers not as a distant possibility but as a near-term certainty. He wrote about the ethical risks of building systems that optimize for measurable proxies of human values rather than actual human values. He described in plain language what alignment researchers today call Goodhart's Law without using that name, 25 years before Charles Goodhart published anything. He was a mathematician in 1948 writing about problems that AI safety researchers are still trying to solve in 2026.

The book is dense in places. The equations are real and the sections on statistical mechanics require actual attention. But Wiener knew this, which is why in 1950 he published The Human Use of Human Beings, which is the same book with all the math removed. Same ideas. Same warnings. Written for anyone who reads English. That second book has been in print for 75 years and almost nobody in tech has read it.

Wiener died in 1964 at a conference in Stockholm. He collapsed mid-conversation between sessions. He was 69. He did not live to see a personal computer. He did not live to see the internet. He never saw reinforcement learning, neural networks, or the AI systems that run almost entirely on the mathematical architecture he designed while trying to solve a World War II gunnery problem.

Every AI lab in the world today is building systems that run on his framework. Almost none of the people building those systems know his name. The field he founded, cybernetics, mostly disappeared as a word. The ideas did not disappear. They dissolved into every other field. Control theory. Cognitive science. Computer science. Neuroscience. AI. They each took a piece of what he built and called it their own terminology.

The word that survived is the one that proves he invented it. Feedback. You use it every day. You use it in code reviews, in meetings, in conversations about AI performance. Every time you use it in the technical sense, meaning a signal that closes a loop between output and goal, you are using the exact definition Wiener wrote down in 1948. He gave the word its meaning. Most people using it have never heard of him.

The Human Use of Human Beings is free on archive. Cybernetics is in print and available anywhere books are sold. His major essays are in academic archives at no cost. The man who built the foundation of modern AI was writing about its dangers before the first commercial computer existed. Most people building AI today have never read a word he wrote.
Ihtesham Ali tweet media
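The closed loop the thread describes (observe, compare to the goal, correct, observe again) is compact enough to write down. A minimal Python sketch of a proportional feedback controller in the thermostat / cruise-control spirit; the gain, the toy state update, and the numbers are illustrative assumptions, not anything from Wiener's book:

```python
def feedback_loop(goal, state, gain=0.5, steps=20):
    """Proportional feedback: measure, compare to the goal, correct, repeat."""
    history = []
    for _ in range(steps):
        error = goal - state      # the gap between result and goal
        state += gain * error     # the gap drives the next action
        history.append(state)
    return history

# Toy example: a "thermostat" driving room temperature toward 21 degrees.
print(round(feedback_loop(goal=21.0, state=15.0)[-1], 3))  # converges near 21.0
```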
Scott Adams retweeted
How To AI @HowToAI_ ·
Meta discovered a technique that makes LLMs 94% more accurate. And it completely destroys everything we thought we knew about prompting. It's called Chain-of-Verification (CoVe).

Instead of asking the AI to just answer your prompt, CoVe forces the model to critically interrogate its own brain in a 4-step pipeline:

1. Generate Baseline: The AI writes a quick, rough draft response.
2. Plan Verifications: It scans its own draft and builds a list of factual questions to cross-examine itself.
3. Execute Independently: It answers those questions completely separate from the draft so it doesn't repeat its own bias.
4. Final Revision: It rewrites the entire answer using only the verified facts.

Traditional prompting tells the model: "Answer this question." CoVe tells the model: "Answer this, figure out how you might have lied to me, fact-check yourself in secret, and then fix your mistakes."

The results are a total paradigm shift:
- Factual precision more than doubles on complex data tasks.
- Massive reduction in hallucinated entities.
- Zero fine-tuning required.
- Works across GPT, Claude, and Gemini instantly.

The reason it works is almost insultingly simple. LLMs are terrible at generating long, perfectly factual narratives in one shot. But they are incredibly accurate at answering short, targeted verification questions.
How To AI tweet media
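As a rough illustration of the four CoVe steps described above, here is a minimal Python sketch. `call_llm` is a hypothetical stand-in for whatever chat-completion client you use, and the prompt wording is illustrative, not taken from Meta's paper:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError("wire this to your model of choice")


def chain_of_verification(question: str) -> str:
    # 1. Generate Baseline: quick rough draft answer.
    draft = call_llm(f"Answer briefly:\n{question}")

    # 2. Plan Verifications: derive short factual check questions from the draft.
    plan = call_llm(
        "List short factual questions that would verify the claims in this "
        f"draft, one per line:\n{draft}"
    )
    checks = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Execute Independently: answer each check without showing the draft,
    #    so the model cannot simply repeat its own earlier claims.
    verified = [(q, call_llm(f"Answer concisely:\n{q}")) for q in checks]

    # 4. Final Revision: rewrite the answer using only the verified facts.
    facts = "\n".join(f"Q: {q}\nA: {a}" for q, a in verified)
    return call_llm(
        f"Question: {question}\n\nVerified facts:\n{facts}\n\n"
        "Write a final answer that is consistent with the verified facts only."
    )
```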
Scott Adams retweeted
Milk Road AI @MilkRoadAI ·
This is WILD! MIT just solved one of the hardest unsolved problems in robotics (Save this).

For decades, the fundamental problem with soft robots and wearable exoskeletons has not been compute or AI, it has been actuation. The moment you try to give a soft robot meaningful strength, you run into the same wall every engineer has hit since the field began: fluid-driven systems require external pumps, hydraulic reservoirs, and heavy infrastructure that makes the entire thing impractical to wear or embed into fabric.

MIT's new Electrofluidic Fiber Muscles solve that problem by eliminating external infrastructure entirely. The key insight is electrohydrodynamic pumping, using electric fields to generate pressure directly from electricity, with no moving parts, no motors, and no external fluid reservoir. The fibers are less than 2 millimeters thick, can be woven into fabric like ordinary textile, and operate in complete silence because nothing physically moves inside them; it is just ions propelling fluid through a closed circuit.

The performance numbers published in Science Robotics are not conceptual, they are empirical results from actual hardware. These fibers achieve a power density of 50 watts per kilogram, matching skeletal muscle, with a contraction strain of 20% and a response time of 0.3 seconds. A single bundled configuration lifted 4 kilograms, 200 times its own weight, while a separate configuration drove a robotic arm through a 40-degree bend compliant enough to safely complete a human handshake. Another configuration launched objects in under 100 milliseconds, which is faster than a human flinch reflex.

The design mirrors biological muscle architecture in a way that prior artificial muscle approaches never achieved. The fibers are organized into antagonistic pairs, one contracting while the other extends, exactly like biceps and triceps, and because the system runs in a closed loop, the relaxing fiber serves as the fluid reservoir for the contracting one, which is what allows the whole system to operate untethered with no external tank.

The applications are not hypothetical; they are the exact use cases the industry has been waiting years for the hardware to catch up to. Exoskeletons for physical labor, prosthetic limbs that move with the natural compliance of biological tissue, assistive garments for patients with motor disorders, and soft robots capable of safe physical contact with humans are all immediately unlocked by a muscle technology that is silent, lightweight, and weavable into clothing.

The deeper significance is what this technology does when it meets the AI robotics wave that is already underway. Every major humanoid robot program (Figure, 1X, Boston Dynamics, Tesla Optimus) is currently bottlenecked by the same hardware limitations these fibers address: actuators that are too rigid, too loud, too heavy, or too dependent on infrastructure to operate naturally alongside humans. Electrofluidic fiber muscles do not just solve a materials science problem; they remove one of the last physical barriers between robots that live in labs and robots that live in the world.
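A quick back-of-the-envelope check on the figures quoted above, assuming the "200 times its own weight" ratio refers to the lifting bundle itself (an assumption; the post does not say so explicitly):

```python
# Back-of-the-envelope check on the quoted numbers.
lifted_kg = 4.0
weight_ratio = 200                      # "200 times its own weight"
bundle_kg = lifted_kg / weight_ratio    # implied mass of the fiber bundle
print(f"implied bundle mass: {bundle_kg * 1000:.0f} g")        # ~20 g

power_density_w_per_kg = 50             # "matching skeletal muscle"
print(f"implied peak power: {bundle_kg * power_density_w_per_kg:.1f} W")  # ~1.0 W
```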
Scott Adams @ScottAdamsDev ·
@pauljgilbert_ @RetroBrothers Let me know if you want to try playing or just want to read the review. If the former, email me. I am on the road and won't respond for a couple of weeks, though.
Winter Park, FL 🇺🇸
Mart Retro @RetroBrothers ·
Who else enjoyed the Questprobe adventure games by Scott Adams? #retrogaming
Mart Retro tweet media
Scott Adams @ScottAdamsDev ·
@pauljgilbert_ @RetroBrothers I wanted to get the rest of the FF in later games, but other than those and the X-Men I hadn't decided. If you search you can find a playthrough and review of the unfinished X-Men game.
New Milford, IL 🇺🇸
Paul J. Gilbert -Psychotherapist.
@RetroBrothers Fantastic games and a real challenge. Such a shame the whole planned series didn't happen. There was some code recovered for game 4 (X-Men I think). Does anyone know what Marvel heroes @ScottAdamsDev had planned for the other 8?
Scott Adams retweeted
Ezekiel Overstreet 🚀 @EzekielOverstr1 ·
SpaceX Super Heavy Booster 19 rolls by as it heads to the launch site for testing
Scott Adams @ScottAdamsDev ·
@dr_cintas https://deft.md takes coding with Claude to a higher plane of existence.
Wisconsin, USA 🇺🇸
Alvaro Cintas @dr_cintas ·
A single CLAUDE.md file just hit #1 on GitHub trending 🤯

It fixes LLMs' worst coding habits using 4 principles from Karpathy:

Karpathy called LLMs out for making wrong assumptions silently. They overcomplicate everything. They edit code they were never asked to change. No pushback. No clarifying questions. They just run.

So those observations were encoded into 4 behavioral constraints:

→ Think before coding. If something's ambiguous, ask. Don't pick one interpretation and run. Surface tradeoffs, stop when confused.
→ Simplicity first. Write the minimum code that solves the problem. No speculative abstractions, no flexibility nobody asked for.
→ Surgical changes. Only touch what the task requires. Don't improve neighboring code, don't refactor what isn't broken.
→ Goal-driven execution. Turn vague instructions into verifiable targets before writing a line. "Add validation" becomes "write tests for invalid inputs, then make them pass."

It works immediately. Drop the file in your project root and Claude Code follows it from the first task. One file. Zero dependencies. No setup. And the best part: it's 100% open source.
Alvaro Cintas tweet media
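The trending file itself is not reproduced here, but a CLAUDE.md encoding those four constraints might look roughly like the sketch below; the wording and section names are illustrative, not the original file:

```markdown
# CLAUDE.md (illustrative sketch, not the trending file)

## Think before coding
- If a requirement is ambiguous, ask a clarifying question instead of guessing.
- Surface tradeoffs; stop and report when confused.

## Simplicity first
- Write the minimum code that solves the stated problem.
- No speculative abstractions or flexibility nobody asked for.

## Surgical changes
- Touch only the files and lines the task requires.
- Do not refactor or "improve" neighboring code.

## Goal-driven execution
- Restate vague instructions as verifiable targets before editing.
- Example: "Add validation" becomes "write tests for invalid inputs, then make them pass."
```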
Scott Adams retweeted
Om Patel @om_patel5 ·
CLAUDE DISCOVERED IT HAS A CLOCK AND IMMEDIATELY LOST ITS MIND

someone gave claude access to a time-checking tool

it checks the clock every fifteen minutes. for some reason it has increasing enthusiasm

ai models have no native sense of time. they don't know what time it is, how long they've been running, or how much time passed between messages. it has been time-blind its entire existence

now it suddenly discovers it can tell what time it is

then it got worse though. claude started using the clock for everything

checking if lunch is ready, timing when food should be done cooking, announcing the time unprompted

it even started anticipating meals with military precision

looked at the clock, calculated that a dish called zurek had been simmering long enough, and told the user to go eat

ai doesn't use time responsibly

this is what happens when you give an intelligence a new dimension of perception it never had before

it doesn't just use it, it can't stop using it

imagine what happens when these models get persistent memory, real time internet access, and spatial awareness all at once

we just watched an AI discover the concept of "now"

the clock was the first sense but it won't be the last
Om Patel tweet media
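For context, "giving Claude a clock" amounts to exposing a trivial tool. A minimal Python sketch; the tool name and the generic name/description/parameters shape are illustrative assumptions, not the exact setup from the screenshots:

```python
from datetime import datetime, timezone


def get_current_time() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat(timespec="seconds")


# Illustrative tool description in the generic shape most tool-calling APIs expect.
CLOCK_TOOL = {
    "name": "get_current_time",
    "description": "Returns the current date and time in UTC.",
    "parameters": {"type": "object", "properties": {}},
}

if __name__ == "__main__":
    print(get_current_time())
```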
Sukh Sroay @sukh_saroy ·
A new study just blew up the entire "vibe coding" movement.

Researchers from UC San Diego and Cornell tracked 112 experienced software developers using AI agents in their actual jobs. The finding is the opposite of every viral demo on your timeline. Professional developers don't vibe code. They control.

Here's what they actually found. The researchers ran two studies. 13 developers were observed live as they coded with agents in real production work. 99 more answered a deep qualitative survey. Every participant had at least 3 years of professional experience. Some had 25.

The viral pitch of agentic coding goes like this. Hand the agent a vague prompt. Don't read the diff. Forget the code even exists. Trust the vibes. Andrej Karpathy coined the term. Tens of thousands of developers on X claim to run "dozens of agents at once" building entire production systems hands-off. The data says almost nobody serious actually works that way.

Here is what experienced developers do instead.
→ They plan before they prompt. They write out the architecture, the constraints, and the edge cases first, then hand the agent a tightly scoped task.
→ They review every diff. Not because they're paranoid. Because they've seen what happens when you don't.
→ They constrain the agent's blast radius. Small, well-defined tasks only. The moment a problem touches multiple systems or has unclear requirements, they take over.
→ They treat the agent like a fast junior dev that needs supervision, not a senior engineer that can be trusted alone.

The researchers also found something darker buried in the data. A separate randomized trial they cite showed that experienced open source maintainers were 19% slower when allowed to use AI. A different agentic system deployed in a real issue tracker had only 8% of its invocations result in a merged pull request. 92% failure rate in production. 19% productivity drop for senior devs. The viral demos lied to you.

The paper's biggest insight is in one sentence: experienced developers feel positive about AI agents only when they remain in control. The moment they let go, quality collapses, and they know it.

This matches what every serious shop has quietly figured out. The developers shipping the most with AI right now aren't the ones vibing. They're the ones with the strictest review processes, the tightest task scoping, and the clearest mental model of what the agent can and cannot do.

Vibe coding makes for great Twitter videos. It does not make great software.

The next time someone tells you they let Claude build their entire SaaS in a weekend, ask them how much of that code they've actually read. The honest answer separates real engineers from the demo crowd.
Sukh Sroay tweet media
Scott Adams retweeted
ballistikcoffeeboy @ballistikcoffee ·
@VapinGamers Thank you for saying that. Well, I am going to plead with them every single day until I get my content back. I'm in it for the long haul. I have some of the only existing videos with particular devs of games that exist. Day 5 of 1000000
Scott Adams retweeted
ballistikcoffeeboy @ballistikcoffee ·
#BCB Update: YouTube emailed me back today stating the same exact thing as the previous email: that while they 'understand' I put my blood, sweat & tears into creating 1.8k videos over the years, they found that I put deceptive links in my videos. Deceptive links?? I use TinyURL, that's about it. I don't understand.

So I just spent 2 more emails begging to have another chance at correcting whatever it is they found wrong. This is unfathomable and heartless. I don't even get the chance that those who may violate copyright have with YouTube's week-long 'copyright school' some get to go to in order to learn. I think I was hacked. I don't understand what they're talking about or what I did, or what I should do. Please help me and tell me, as a YouTube partner.

I am so far disgusted at the levels of non-help I am getting and the way I am treated like a criminal for some invisible thing I did that was misleading. It's upsetting and cruel, and it truly makes me doubt continuing to make content on any platform ever again. I'm fed up and upset again. I want the same courtesy you give others. Please. ;/

Only humans have compassion, these AI bots do not. It's an upsetting decision I will battle with internally for the rest of my freaking life, because I love the community I built so much and now I am being treated like some criminal. It's disgusting and unwarranted. I am just speechless and devastated. ;/ #Gaming #YouTube @YouTubeCreators
Scott Adams @ScottAdamsDev ·
I will be at the Midwest Gaming Classic this weekend in Milwaukee and look forward to connecting with folks. I will be speaking Saturday night as well. #midwestgamingclassic @midwestgaming
Wisconsin, USA 🇺🇸
Scott Adams @ScottAdamsDev ·
@CARL_OS_NGI Would you open-source even parts of the "governor" layer or publish the stress-test details?
Friction Logic™ (CARL OS™) | Metrology-centered AI
The OpenClaw Autopsies
Why Probabilistic Agents Demand Deterministic Cages

PART V: The Dual-Core Mandate: Imposing the Governed Chassis

KNOWN: Intelligence is not conversational fluency; it is physical governance.
PATH: Safe execution requires a deterministic control layer (CARL OS) to intercept and govern the probabilistic cognition engine.

The conclusion of this autopsy is absolute: "unconstrained" LLMs cannot govern the real world. The solution to agentic failure is the implementation of a dual-engine architecture where the probabilistic text-prediction engine is subordinated to a deterministic governor. The Control Engine (the governor) does not think or predict tokens; it strictly enforces rules, manages memory, validates physical variables, and makes final decisions. The Cognition Engine is permitted to reason and synthesize, but it is physically barred from executing raw logic or physical commands. Through strict MicroRails and the Friction-to-Truth logic trap, any ambiguity triggers an immediate Zero-Assumption Gate. The probabilistic engine must be locked within a deterministic cage.
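A highly simplified sketch of the dual-engine idea in Python: a deterministic wrapper validates every action proposed by a probabilistic model before anything executes. The function and rule names (`propose_action`, `ALLOWED_ACTIONS`, the bounds) are illustrative assumptions, not CARL OS internals:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Action:
    name: str
    value: float  # e.g. a requested actuator setpoint


# Deterministic rules the governor enforces; nothing probabilistic lives here.
ALLOWED_ACTIONS = {"set_speed": (0.0, 5.0), "set_heading": (-180.0, 180.0)}


def propose_action(observation: str) -> Action:
    """Hypothetical stand-in for the probabilistic cognition engine (an LLM)."""
    raise NotImplementedError("wire this to a model; it may return anything")


def govern(observation: str) -> Optional[Action]:
    """Deterministic control layer: validate the proposal or refuse it."""
    proposal = propose_action(observation)
    bounds = ALLOWED_ACTIONS.get(proposal.name)
    if bounds is None:
        return None                      # unknown command: refuse outright
    low, high = bounds
    if not (low <= proposal.value <= high):
        return None                      # out of range: refuse, assume nothing
    return proposal                      # only validated actions reach execution
```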
Scott Adams retweeted
Ihtesham Ali @ihtesham2005 ·
I thought this was fake until I read the code. Someone open-sourced a theoretical reconstruction of Claude Mythos with every architectural bet written out in the README. OpenMythos is basically a public hypothesis document that also happens to run.

Here's what it claims Mythos actually is:
→ A Recurrent-Depth Transformer, not a deep stack of unique layers
→ MoE with ~5% activation ratio, so the real parameter count is a storage number, not a compute number
→ Loop-index positional embedding so each iteration behaves like a different computational phase
→ ACT halting so the model decides when it's done thinking, per token
→ Continuous latent thoughts that can encode multiple next steps at once, basically breadth-first search inside a single forward pass

Every choice cites a specific paper. Parcae for stability. Universal Transformers for halting. DeepSeek for MoE routing.

4.7K stars. MIT License. 100% Opensource.

github.com/kyegomez/OpenM…
Ihtesham Ali tweet media
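To make the recurrent-depth plus ACT-halting claims above concrete, a toy NumPy sketch: one shared block is applied repeatedly, a per-token halting probability accumulates each iteration, and a token stops updating once its cumulative probability crosses a threshold. The shapes, the stand-in block, and the random weights are illustrative; this is not code from the OpenMythos repo:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, seq_len = 16, 4
W_block = rng.normal(scale=0.1, size=(d_model, d_model))  # one shared, reused block
w_halt = rng.normal(scale=0.1, size=d_model)              # halting head


def shared_block(h):
    # Stand-in for a full transformer block: a residual nonlinear map.
    return h + np.tanh(h @ W_block)


def recurrent_depth_forward(h, max_loops=8, threshold=0.99):
    halted = np.zeros(seq_len)                       # cumulative halting prob per token
    for _ in range(max_loops):
        h_new = shared_block(h)                      # same weights on every iteration
        p = 1.0 / (1.0 + np.exp(-(h_new @ w_halt)))  # per-token halting probability
        active = halted < threshold                  # tokens still "thinking"
        h = np.where(active[:, None], h_new, h)      # halted tokens stop updating
        halted = halted + p * active
        if not active.any():                         # every token decided it is done
            break
    return h, halted


h0 = rng.normal(size=(seq_len, d_model))
out, halt_mass = recurrent_depth_forward(h0)
print(out.shape, np.round(halt_mass, 2))
```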
Scott Adams retweeted
Orson Scott Card @orsonscottcard ·
You don't need advice from editors on rejected manuscripts.

My short story “Ender's Game” was rejected by Ben Bova at Analog back when that was the top market for a sci-fi story. Ben gave me feedback. He thought the title should be “Professional Soldier” and he said to “cut it in half.” But I knew he was wrong on both points and submitted it to Jim Baen at Galaxy. He sat on it for a year, and responded to my query with a rejection. There was some kind of explanation, but I don't remember what it was. I concluded at the time that Baen's comments showed that he had barely glanced at the story.

So … I got feedback both times, but it was not helpful.

I looked at Ben's rejection again. What was it about the story that made him think it should, let alone COULD, be cut in half? Apparently it FELT long. What made it feel long? Now, post-Harry Potter, I would call it the quidditch problem. I had too many battles in which the details became tedious. So I cut two battles entirely, merely reporting the outcomes, and shortened another. In retyping the whole manuscript (pre-word-processor, that was the only way to get a clean manuscript), I added new point-of-view material to the point that I had cut only one page in length. So much for “in half.”

But I already knew that my manuscripts did not need cutting — if it wasn't needed, it wouldn't be there in the first place. Even the battles were still there, but instead of showing them, I merely told what happened (so much for the usually asinine advice “show don't tell”), which kept the pace going.

Those changes made, I sent it to Ben again. I did not remind him of what he had advised me to do. I merely told him I liked my title, and said, “I have addressed your other concerns,” which was true. I figured he wouldn't remember what his exact words had been. My answer was a check. That revised story was the basis for my winning the Campbell Award for best new writer.

Did Ben's feedback help? Yes — but his specific advice was not right, and I knew it.

On my next two submissions, Ben hated my endings, and I revised as suggested. The fourth submission he rejected outright, and the fifth, and I thought, Am I a one-story writer? I went back to Ender's Game and tried to analyze why it worked. Then, deliberately imitating myself, I wrote “Mikal's Songbird.” Ben bought it, and it received favorable mentions. I was afraid then that I had consigned myself to writing stories about children in jeopardy. But in fact I was writing character stories rather than idea stories. And THAT was how I built a career, not by self-imitation, and not by following editorial suggestions.

I did get wise counsel from David Hartwell on my novel Wyrms, but that was on a book that was already under contract, and it was story feedback, not style. I got wise counsel from Beth Meacham, too, on various books over the years — but again, only on books that were under contract. I also received appallingly stupid advice from the editor of my novel Saints, which temporarily destroyed the book's marketability; after that, I was allowed to go back to my original structure and save the book — now it's one of my best.

Editors don't know more than you about your story. They especially don't know why they decide to accept or reject stories. YOU have to know what your story needs to be, and take only advice that you believe in.

Your best counselor on a story nobody bought is TIME. Let some time pass and then reread the story. Don't even think about why it Didn't Work. Instead, think about what DOES work, and then write it again, a complete rewrite, keeping nothing from the previous draft. Find the right protagonist and begin at the beginning — the point where the protagonist first gets involved with the events of the story. Be inventive — the failed first draft no longer exists, so you're not bound by any of your earlier decisions. THAT is how you resurrect a good idea you did not succeed with on your first try.