Michael Witbrock

4.7K posts


@witbrock

U of Auckland Professor. Founder https://t.co/LTJNW3nryS @TranzAxon, https://t.co/NM4Bc7PudZ, https://t.co/PQtpJW5gS6, @AI4Good; BCI, AI, NLP, ML; ex: IBM, CMU. AI that reasons & Improved brains

Auckland, NZ · Joined September 2008
1.7K Following · 2.5K Followers
Michael Witbrock@witbrock·
@pmddomingos The superiority of ReLU over sigmoid is the poster child for this. It's sad but true.
0 replies · 0 reposts · 0 likes · 39 views
Pedro Domingos@pmddomingos·
Nonlinearity is so powerful that a small amount does all you need, and more than that blows up in your face. Poster child: neural networks.
7 replies · 7 reposts · 83 likes · 11.7K views
Pascale Fung@pascalefung·
I am happy to share that I have joined forces with @ylecun and fellow founders as Co-Founder and Chief Research & Innovation Officer at AMI - Advanced Machine Intelligence. I will lead research initiatives that push AI to be genuinely human-centered - AI that perceives, learns, reasons and acts like we do and in our best interest. I am thankful for the trust placed in us and deeply aware of the responsibility we share to make the world a better place through our work every day. Join us!
AMI Labs@amilabs

Advanced Machine Intelligence (AMI) is building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe. We’ve raised a $1.03B (~€890M) round from global investors who believe in our vision of universally intelligent systems centered on world models. This round is co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, along with other investors and angels across the world. We are a growing team of researchers and builders, operating in Paris, New York, Montreal and Singapore from day one. Read more: amilabs.xyz AMI - Real world. Real intelligence.

36 replies · 83 reposts · 1.1K likes · 85K views
Michael Witbrock retweeted
Peter H. Diamandis, MD@PeterDiamandis·
Announcing The Future Vision XPRIZE. A global competition with $3.5M+ in prize funding challenging creators anywhere on Earth to imagine hopeful, technology-forward futures worth building toward. Not warnings. Blueprints. Futures that inspire us to go boldly. Someone in your timeline is sitting on a vision that could change the world and doesn't know this exists yet. Share this. Be the reason they find it.
69 replies · 218 reposts · 1.9K likes · 1.5M views
Michael Witbrock@witbrock·
@ESYudkowsky @Plinz I think I first came across Searle's argument when I was about 13. It was absurd then. It's absurd now. I can't imagine why we continue to talk about it. Near-future AI systems will wonder about this too. It won't reflect well on us.
1 reply · 0 reposts · 1 like · 223 views
Eliezer Yudkowsky ⏹️@ESYudkowsky·
The Chinese Room thought experiment runs like this, updated for modern times.

A man who speaks no Chinese is locked in a room. He receives a card bearing a Chinese character. The man looks up the character in a table, and retrieves 16,384 numbers, each recorded to 3 significant digits of precision. Following instructions in a rulebook, the man now multiplies those 16,384 numbers by a matrix with 16,384 rows and 16,384 columns, so 268 million entries. If he can multiply two three-digit numbers in 10 seconds, this will take him 85 years. This represents one sub-operation inside one layer of a modern LLM. Each of 100 layers might have 3-6 sub-operations like this.

The man receives a series of 20 cards, with a total of 20 Chinese characters. So he repeats all of the huge sub-operations 2000 times. Some sub-operations take longer than 85 years, especially the 'attention' operations where each token collects data from all the previous tokens.

The man is immortal. He cannot be bored. Many millions of years pass. The man finishes processing the original 20 cards. He now starts carrying out further operations on the numbers, that will produce hundreds of new vectors of 16,384 floats, whose closest neighbors the man can look up to produce hundreds of Chinese characters. Billions of years pass. The planet and sun containing the room are as immortal as the man himself. Eventually a slip of paper slides out of the room, bearing a sequence of a few hundred Chinese characters.

===

Originally a woman had written "我在王府井和长安街的交叉口,需要到达颐和园": I am at the corner of Wangfujing and Chang'an and need to reach the Summer Palace. The slip of paper that emerges contains the correct directions in Chinese: subway lines, transfers, the right exit to the east gate. The woman follows them and arrives successfully. Or maybe a Chinese mathematician, working on a forthcoming math paper, had requested help on a blocked step of a math proof. She gets back a valid mathematical argument, also in Chinese, and completes her paper, which will later pass peer review and publish.

===

But the human male inside the Chinese room knows nothing of this, for he does not know Chinese. He only multiplied numbers according to a rulebook. He's never seen a map of Beijing. He couldn't state a single one of the axioms used by the mathematician's proof. That indeed is a valid fact, in the context of this thought experiment. But what follows from it about real life?

===

If you are wise, the moral of this story is that a large structure can contain knowledge that isn't in any single piece of the structure. Pick up an accurate street map of part of Beijing. Even if the map's whole structure has a good pointwise correspondence to the actual streets of Beijing, that correspondence won't be visible in a single point of ink, or the molecules making up the ink. It is formally "the fallacy of composition" to reason as if what is true of a part must be true of a whole.

The man in the Chinese Room isn't particularly necessary. We could replace the pen-and-paper multiplications with bits in transistors, and then the operation of AND gates and OR gates would be simple enough to replace the man with a trained immortal dog. Or we could replace the dog with mechanical wheels and gears: stateless machinery with no internal memory at all. So the moral, if you are wise, is that a machine operating on the vast arrays of numbers that encode Chinese does not itself need to encode Chinese inscribed on its wheels or gears. And similarly the man in the room doesn't need to understand Chinese in order for the vast matrices to (somehow, nobody knows the details) encode a Beijing street map; or in order for giant dancing vectors of numbers to somehow understand math well enough to prove a new lemma in a new theorem.

===

But when Searle invented the Chinese Room thought experiment in the 1980s, the sort of AI that would *back then* pretend to talk to you involved a handful of human-written rules for rewriting sentences. It was the sort of tiny computation that a human could do by hand in a couple of minutes, if not less. So Searle thought he had proven that, since the man in the room didn't understand Chinese -- by dint of doing that handful of rewrites, which you easily *could* hold all in your own mind and look over -- then perforce no mere computer shuffling bits should EVER be said to understand Chinese, just in virtue of it manipulating mere bits. Because there could be a person inside the room, manipulating those bits, and HE wouldn't understand Chinese.

In fact this validly proves a different point, if you look at it sideways. Searle validly proved that the underlying circuit board of a GPU, that shuffles around the giant vectors and matrices, should not be said to understand Chinese. And this conclusion is true in our own world; if you look at the GPU's underlying circuit patterns, nothing about them will encode Chinese, any more than the man in the room has learned any Chinese. The map that accurately matches the territory is not in the man, and it's not in the transistor diagram for the GPU. It's the pattern of dancing numbers that can plot accurate directions through Beijing or prove a new theorem.

But if you say that something about this experiment has proven that True Understanding cannot be in the vast arrays of numbers either -- by what right and law does that follow? Why wouldn't that Prove Too Much, if we're now allowed to throw around the Fallacy of Composition as if it were an inference rule rather than a fallacy? The man doesn't encode a map of Beijing in his own brain -- he will at no point remember enough numbers at once for that. So if it's a rule that "whatever is not in the man's brain, cannot be in the larger system either", then we have proven that no system of mere bits can plot new, non-memorized paths between two points in a city that it's never been asked about before; and that contradicts our own observed reality.

So we cannot in general reason by the Fallacy of Composition from the man to the billions of numbers; because that would prove false things about numbers being unable to navigate streets -- or prove theorems, or drive cars, or play chess, etcetera, etcetera. Then there is no reason to look at this whole thought experiment, and say that it proves the billions and trillions of dancing numbers, manipulated over the eons, do not Truly Understand Chinese.

===

So that is what the Chinese Room thought experiment actually describes and implies, as updated for the modern era. And that, contrariwise, is what Searle and some other people used to think it proved, back when they thought AI meant one man applying rewrite rules from a rulebook for a couple of minutes.
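The back-of-envelope arithmetic in the thread above can be checked directly; a minimal sketch in plain Python, using only the figures quoted in the tweet (16,384-dimensional hidden state, 10 seconds per hand multiplication):

```python
# Verify the Chinese Room arithmetic from the thread above.
d = 16_384                        # hidden dimension of one layer
entries = d * d                   # weights in one d x d matrix
seconds_per_multiply = 10         # one three-digit multiplication by hand
total_seconds = entries * seconds_per_multiply
years = total_seconds / (60 * 60 * 24 * 365)

assert entries == 268_435_456     # "so 268 million entries"
assert round(years) == 85         # "this will take him 85 years"
```

Both quoted figures check out: 16,384² ≈ 268 million weights, and at 10 seconds each the multiplications alone take about 85 years, before attention or the other 99 layers.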
97 replies · 79 reposts · 1K likes · 138.8K views
Michael Witbrock@witbrock·
Computer science will stop being taught as a useful craft; it has become an explanation of how the world works, like physics, biology and chemistry. This change is long overdue.
161 replies · 229 reposts · 2.1K likes · 482K views
Michael Witbrock@witbrock·
@pmddomingos Related - the way computer science is taught will change: from a useful craft to an explanation of how the world works, like physics, biology and chemistry. This is long overdue.
0 replies · 0 reposts · 1 like · 64 views
Pedro Domingos@pmddomingos·
Physics was the first science, but its days are over.
200 replies · 35 reposts · 504 likes · 122K views
Zach Tratar@zachtratar·
Are there any new startups attempting to become frontier labs? I’m not talking about SSI or Thinking Machines… smaller. More of the dark horse vibe team…
75 replies · 3 reposts · 262 likes · 75.3K views
Michael Witbrock@witbrock·
@ShaneLegg @bryan_johnson I agree, I haven't really experienced the supposed inherent toxicity of social media; I think this is because I hardly react to provocative content. Social media is what you train it to be, which doesn't mean it's innocent. Cigarette makers and their addicts are both culpable.
1 reply · 0 reposts · 0 likes · 102 views
Shane Legg@ShaneLegg·
@bryan_johnson I don't experience this. I only follow a few people who post really good stuff, and I only use the "Following" feed. I simply ignore comments that aren't obviously kind and made in good faith.
8 replies · 1 repost · 138 likes · 11.4K views
Bryan Johnson@bryan_johnson·
Social media has started feeling repulsive to me after these fasts. The time away broke the dopaminergic spell. I can now feel what I was numb to. It feels like brain rot, a blood curdling sound and assault. This is complicated. I really enjoy posting and the interaction.
474 replies · 138 reposts · 4.3K likes · 329.6K views
Michael Witbrock@witbrock·
New term "distillation poisoning"; I expect to see it used, and papers about it, soon.
0 replies · 0 reposts · 5 likes · 1.4K views
Michael Witbrock@witbrock·
Steering and queuing in GPT-5.3-Codex Extra High in VSCode is game-changing for productivity. Now, please have @OpenAI Codex write its android and windows apps.
0 replies · 0 reposts · 1 like · 1.4K views
Michael Witbrock@witbrock·
And yes, this is a paradox of the "I am a consistent liar" variety.
0 replies · 0 reposts · 0 likes · 600 views
Michael Witbrock@witbrock·
If one of your words is "slop", so are the others.
1 reply · 0 reposts · 3 likes · 694 views
Pedro Domingos@pmddomingos·
The question is not whether the AI boom will plateau, it’s when.
58 replies · 3 reposts · 165 likes · 12.2K views
Michael Witbrock@witbrock·
@mehulmpt Unlikely, since it's not a good representation to use to model the code for reasoning. But it's also unlikely that they'll use conventional programming languages designed for humans, for the same reason, mostly. For assisted programming, I model in Jira; that won't last a year.
0 replies · 0 reposts · 0 likes · 27 views
Mehul Mohan@mehulmpt·
what's the point, llm will anyway be writing opcode directly by end of year
Elon Musk@elonmusk

@gvanrossum Congratulations on creating Python. It is really something special.

9 replies · 4 reposts · 95 likes · 11.4K views
Michael Witbrock@witbrock·
@sama I've completely switched to Codex from Copilot in VSCode. Looks like you're building yourselves a moat. Congratulations to the @OpenAI codex team and all the @OpenAIDevs
0 replies · 0 reposts · 1 like · 149 views
Michael Witbrock@witbrock·
@pmddomingos The most intelligent humans do, by generalising better. There's no obvious reason this isn't possible for AI systems.
0 replies · 0 reposts · 0 likes · 69 views
Pedro Domingos@pmddomingos·
You can’t get to superintelligence by mimicking humans.
115 replies · 35 reposts · 396 likes · 42.1K views
Michael Witbrock@witbrock·
It might be worth trying opt-in AI-based review of one's papers, in return for partial or complete reduction of reciprocal review load. Achieving high levels of calibration with human review may not be difficult, and knowing whether it is would be a valuable research outcome. @ReviewAcl
0 replies · 0 reposts · 1 like · 547 views