RomainB

651 posts

@Rom1Bat

Building abundance

sf · Joined December 2024
216 Following · 74 Followers
RomainB
RomainB@Rom1Bat·
Few people really understand what an exponential is.
0 replies · 0 reposts · 0 likes · 7 views
RomainB
RomainB@Rom1Bat·
@noahlofq It makes so much sense: what is encoded in your DNA and allows animals and humans to learn so fast isn't based on real data.
0 replies · 0 reposts · 0 likes · 276 views
Stefano Ermon
Stefano Ermon@StefanoErmon·
Mercury 2 is live 🚀🚀 The world’s first reasoning diffusion LLM, delivering 5x faster performance than leading speed-optimized LLMs. Watching the team turn years of research into a real product never gets old, and I’m incredibly proud of what we’ve built. We’re just getting started on what diffusion can do for language.
321 replies · 583 reposts · 4.2K likes · 995.3K views
Luiza Jarovsky, PhD
Luiza Jarovsky, PhD@LuizaJarovsky·
Why do so many in AI flirt with science-fiction-like AI governance approaches? No, an AI model does not have a soul. No, an AI model does not have feelings. No, an AI model is not alive. These are nice topics for books, movies, and philosophy... NOT for serious policymaking.
79 replies · 17 reposts · 138 likes · 6.6K views
RomainB
RomainB@Rom1Bat·
@AJButton2 "Solid proof that AI does not retain memory and experience like a human brain does, only summoning and interacting with data when explicitly instructed to do so by a human" ahahahahaha what a strawman
0 replies · 0 reposts · 0 likes · 49 views
A.J. Button
A.J. Button@AJButton2·
The sheer ARROGANCE of these AI hypesters is unbelievable! In recent months we've seen: 🫢 MOUNTAINS of testimony from real-world users confirming that they spend most of their work day fixing AI's errors. 😮 AI developers like Ujjwal Chadha confirming that AI does not "think". 😱 Solid proof that AI does not retain memory and experience like a human brain does, only summoning and interacting with data when explicitly instructed to do so by a human. Yet still these religious fundamentalists maintain their superstition that AI is "sentient"!
HealthRanger@HealthRanger

AI denialists are sure sounding a lot like Flat Earthers right now. "AI isn't intelligent." "It's a prediction machine." Yet I can give an AI engine 100,000 lines of code and ask it to tell me what that code does. In plain English, it describes all the functionality of the code that it has NEVER seen before. That's not prediction. That's intelligence.

37 replies · 45 reposts · 382 likes · 19.6K views
NZ ☄️
NZ ☄️@CodeByNZ·
LLMs don’t think. They don’t reason the way humans do. They predict the next token based on probability distributions learned from massive datasets. What feels like reasoning is statistical pattern completion at scale. The magic isn’t intelligence, it’s compression. They’ve compressed patterns from millions of documents into weights. That’s powerful. But it’s not consciousness.
419 replies · 330 reposts · 2.5K likes · 195.3K views
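The "predict the next token based on probability distributions" mechanism described in the tweet above can be sketched in a few lines. This is a toy illustration, not any model's actual code: the vocabulary and logits are made up, and a real LLM produces logits over tens of thousands of tokens from its learned weights.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution;
    # subtracting the max keeps the exponentials numerically stable.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might emit for the prefix "The cat sat on the"
vocab = ["mat", "dog", "moon", "chair"]
logits = [4.0, 1.0, 0.5, 2.0]

probs = softmax(logits)

# Greedy decoding: always pick the highest-probability token.
next_token = vocab[probs.index(max(probs))]

# Sampling instead draws from the distribution, so lower-probability
# tokens are occasionally chosen (this is what "temperature" modulates).
sampled = random.choices(vocab, weights=probs, k=1)[0]
```

Whether this procedure, scaled up, counts as "reasoning" is exactly what the thread is arguing about; the mechanics themselves are not in dispute.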
RomainB
RomainB@Rom1Bat·
@cannibality are you certain you are much different from what you just described?
0 replies · 0 reposts · 0 likes · 82 views
emmy rākete 🇵🇸
emmy rākete 🇵🇸@cannibality·
LLMs are methodologically incapable of reasoning, that's like their defining characteristic! All they do is correlate statistically probable strings of text; they are categorically incapable of reasoning about information!
Cheng Lou@_chenglou

Stupidly late realization on why LLMs are so good at reasoning: humans' reasoning capability is bottlenecked by language! It's not that languages are good at reasoning; reasoning ended up being defined by language first and foremost. The medium truly shapes the message.

128 replies · 289 reposts · 3K likes · 90.3K views
RomainB
RomainB@Rom1Bat·
@TrueAIHound You are right, but that's not what Dario is saying here. He is talking about before training: ~just random weights. When deployed, those are indeed not random weights.
0 replies · 0 reposts · 3 likes · 136 views
AGIHound
AGIHound@TrueAIHound·
According to Amodei, LLM models start from scratch (blank slates) with random weights. Dude, please. 🙄 No they don't. LLMs start out preprogrammed with millions of tokens (compiled from texts created by humans) when released into the world. Humans are as blank-slate as can be, with just enough genetic programming (such as breathing, crying, sucking, and swallowing) to ensure survival. Evolution did not pretrain the human brain to learn how to read, ride a bicycle, and program computers. We learn almost everything from scratch, including eye-tracking, reaching, grasping, walking, running, etc. Don't make excuses for your lame AI that massively cheats by using millions of human beings as text preprocessors and still has no understanding of what it's saying. Unless your AI can use its sensors and effectors to learn everything in the real world, it's not intelligent. It's just computer automation. 🤦‍♂️
vitrupo@vitrupo

Dario Amodei says pre-training sits somewhere between learning and evolution. Humans inherit priors shaped over millions of years. LLMs start as random weights and distill trillions of tokens into those priors. We describe them using human learning metaphors. But the analogy only goes so far.

45 replies · 36 reposts · 342 likes · 27.2K views
RomainB
RomainB@Rom1Bat·
@Dimillian slopen ai is paying a lot of people on that app
0 replies · 0 reposts · 0 likes · 122 views
Thomas Ricouard
Thomas Ricouard@Dimillian·
Codex 5.3 on the Pro plan just smokes whatever slopus 4.Idk is trying to be.
29 replies · 7 reposts · 473 likes · 34.7K views
RomainB
RomainB@Rom1Bat·
OK, now we need Claude Opus 5.
0 replies · 0 reposts · 0 likes · 14 views
Wiebe de Jager
Wiebe de Jager@wdejager·
@auterion Unless someone switches on the radio jamming equipment. How will the drones coordinate their actions then?
1 reply · 0 reposts · 0 likes · 46 views
Auterion
Auterion@auterion·
This is the end of the one-pilot, one-drone era. War fighters can now manage multiple vehicles and complex operations. Operators define the mission. Autonomous systems coordinate the execution in real time. Simpler, faster, scalable. #swarmsnotdrones
2 replies · 2 reposts · 33 likes · 1.1K views
RomainB
RomainB@Rom1Bat·
@simonw Probably generated by GPT-5.2. Also yes, only getting 32k context on Plus is really not cool..
0 replies · 0 reposts · 1 like · 575 views
Simon Willison
Simon Willison@simonw·
Anyone else confused by the new ChatGPT plan comparison grid? Here's an annotated screenshot:
[image: annotated screenshot]
51 replies · 14 reposts · 541 likes · 66.9K views
RomainB
RomainB@Rom1Bat·
@qtnx_ are you mainly using mistral models?
0 replies · 0 reposts · 1 like · 83 views
Q
Q@qtnx_·
Deeply grateful for agentic harnesses; around June-July I started getting daily horrible wrist pains after a day of work. Now that I write 90% less, since I can tell an AI exactly how to write something, the pain is gone.
6 replies · 0 reposts · 65 likes · 3.6K views
RomainB
RomainB@Rom1Bat·
@cloneofsimo + sell the data to labs. I want to do this, can we team up?
0 replies · 0 reposts · 0 likes · 545 views
Simo Ryu
Simo Ryu@cloneofsimo·
Trazillion-dollar idea. Make a game engine, a decent one, that supports basic stuff but could be heavily extended and modified. But make it unconventionally console/test driven; make it easily verifiable. Then make a lot of toy games with it. Get a lot of instructions, train (fine-tune) an LLM on it. Sell the LLM & game engine as a paired product. Train on the user-generated games. SFT. RLVR. I would do this if I didn't have a job.
53 replies · 8 reposts · 400 likes · 35.7K views
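The SFT step in the pipeline sketched above boils down to instruction-response pairs, plus a verifiable outcome that RLVR-style training can reward. A minimal sketch of what one such training record might look like; every field name, the engine API, and the example commands are hypothetical, not anything the tweet specifies:

```python
import json

# Hypothetical instruction-tuning record: a natural-language request,
# the engine commands that satisfy it, and the console-checkable result
# that makes the example "easily verifiable" as the tweet suggests.
record = {
    "instruction": "Spawn a player entity at (0, 0) and move it right by 5 units.",
    "response": "engine.spawn('player', x=0, y=0)\nengine.move('player', dx=5, dy=0)",
    "verified_output": "player at (5, 0)",  # deterministic check -> reward signal
}

# One JSON object per line (JSONL) is the common on-disk format for SFT data.
line = json.dumps(record)
```

The verifiable-output field is what distinguishes this from plain SFT data: because the engine is console/test driven, a trainer can execute the response and compare against `verified_output` automatically.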
RomainB
RomainB@Rom1Bat·
Hey @AnthropicAI, Claude Code is great, but why can't it output accents? Even when required to do it in a docx, it just doesn't output the accent, and gets pretty r about it.
0 replies · 0 reposts · 0 likes · 37 views
RomainB
RomainB@Rom1Bat·
Even OAI has to focus. Why don't you focus?
0 replies · 0 reposts · 0 likes · 14 views