Chris W

3.5K posts

@nycthinker

Swedish-American GenX. Flaneur-Traveler. Fan of Darwin, John Boyd.

Joined February 2009
491 Following · 551 Followers
Pinned Tweet
Chris W @nycthinker
@lightcrypto Hodlers, on-chain "analysts", and others in denial about crypto as a part of the imploding everything-bubble (and a hedge against nothing), will get carried out too. (So goes ARKK/TSLA, so goes BTC, just with a lag.)
4 replies · 0 reposts · 17 likes · 0 views

Chris W @nycthinker
@slow_developer Gradient descent is inefficient search and can’t find causal relationships. Reinforcement learning can’t teach long-horizon behavior and can’t deal with non-ergodic worlds. What’s the new tech that will solve the fundamental problems with tech-CEO-AI?
0 replies · 0 reposts · 0 likes · 27 views

Haider. @slow_developer
I have a hard time telling friends and family that what is coming is not decades away. No one can be fully ready for it, but being somewhat prepared can make the change easier. But if people completely refuse to believe it, they will have a very hard time dealing with something they once saw as an impossible fantasy pushed by tech CEOs.
27 replies · 9 reposts · 92 likes · 5K views

Chris W @nycthinker
@burkov You think Trump looks like anything but a spray-tanned hippopotamus in an ill-fitting suit?
1 reply · 0 reposts · 3 likes · 218 views

BURKOV @burkov
Why do people elect leaders whose mere appearance suggests a lack of leadership or intellect? I understand that voters might not all be very smart, but seeing leadership in a person's posture and face has been innate for generations. When times are hard and the future is clouded, isn't it simply self-preservation to want to elect real leaders? Canadians have quickly switched from a clown to a leader-like person when they sensed something was going wrong. Why not the Brits or Germans?
BURKOV tweet media
13 replies · 1 repost · 17 likes · 3.7K views

Aman @Amank1412
Anthropic CEO: “Software engineering will be automatable in 12 months”
106 replies · 26 reposts · 332 likes · 37.2K views

Chris W @nycthinker
@JadeCole2112 @edzitron They are likely feeling the pressure from Microsoft, which is able to bundle their models with MS 365 and Copilot offerings at attractive prices. They also pivoted too late to enterprise offerings, wasting a year on wearables, porn, a web browser, unworkable agents, and video slop.
0 replies · 0 reposts · 2 likes · 46 views

Chris W @nycthinker
@RillianGrant @constans The benchmarks don't translate into general capabilities, precisely for the reasons that I have enumerated. The lack of transfer is a fundamental limit of neural nets.
0 replies · 0 reposts · 1 like · 12 views

Rillian Grant @RillianGrant
@nycthinker @constans Those are definitely areas where LLMs fall short, which is why there are many benchmarks targeting the application of general concepts and causal reasoning (aka. world modelling). I'm not currently seeing anything that LLMs fundamentally can't do.
1 reply · 0 reposts · 0 likes · 11 views

constans @constans
People who hate AI know JUST ENOUGH to be dangerous. It IS a stochastic parrot. It DOES just “predict the next token”. It IS dependent on training data. What’s interesting is that these are all true AND it displays useful problem-solving intelligence and information.
Dean W. Ball@deanwball

My theory about why so many on the left remain in denial about AI is that their worldview rests on a load-bearing notion of “the tech industry” as being composed of vapid morons whose accomplishments will always be superficial, never “real,” always based on some grand theft.

With social media and search, the theft was manipulation of people’s minds. With Amazon it was worker exploitation. With Apple, it was a mix of these. In the left retelling of the story, no value whatsoever was created from these technologies. All a trick.

With AI the “grand theft” in the telling of the left is the use of copyright-protected data in pre-training. This one is a particularly dangerous mindworm for them, since they identify with the “artists and writers” from whom they imagine this training data was “stolen.” This is why things like “mode collapse” from synthetic data, stochastic parrotry, “it can only mimic things it has seen on the web” and similar are so core to the argument for the left: it supports the notion of “tech bro” thieves—who lest we forget, and they never will let us, have no “liberal arts” training!—continuing their unbroken string of robberies.

Of course the “grand theft” notion is an old motif on the left, relating as it does to a zero-sum mindset about economics, business, and growth that is more traditionally associated with the left, though the lines have always been blurry, since the zero-sum mindset is above all else a *human* fallacy and thus a useful tactic in mass politics of all valences. The lines have become especially blurry lately, as has been widely observed.

Anyway, the notion that AI *is* a genuinely world-changing technology, that it can “go beyond” its “stolen” training data, breaks this load-bearing conception of the tech industry as vapid and superficial and, more importantly, of the people within it as blood-sucking thieves.

48 replies · 34 reposts · 800 likes · 43.3K views

Chris W @nycthinker
@takeiteqsy @slow_developer Yes, it's cope. Sam's been saying "AGI soon" for a couple of years now, but since side gig after side gig has failed (Sora, porn, wearables, browser, agents), and money is running out, he must hype the AGI thing even louder. OpenAI still has zero clue about how to get to AGI.
0 replies · 0 reposts · 0 likes · 49 views

habibi @takeiteqsy
@slow_developer It wasn’t the fact they were losing like a billion dollars a day running it? Lmfao yeah it was because they wanted to focus on “AGI” (not gonna happen)
1 reply · 0 reposts · 2 likes · 174 views

Haider. @slow_developer
sam altman on why Sora was shut down: "i did not expect 3 or 6 months ago to be at this point we're at now, where something very big and important is about to happen again with this next generation of models and the agents they can power." Looks like OpenAI is now fully focused on AGI research.
24 replies · 10 reposts · 219 likes · 12.8K views

Pedro Domingos @pmddomingos
The West will be stronger if NATO is dissolved, because only then will Europe take defense seriously. (With apologies to Poland and the Baltics, who don't deserve this.)
222 replies · 59 reposts · 767 likes · 33.7K views

Chris W @nycthinker
@RillianGrant @constans Underlying general principles such as logic, Mill’s methods, physical laws, etc., which are causal representations. One might also say an inherent tendency towards the least Kolmogorov-complex solution (overcast.fm/+ABNwV_LN6b4/2…). LLMs still useful ofc
1 reply · 0 reposts · 1 like · 13 views

Rillian Grant @RillianGrant
@nycthinker @constans What would you consider underlying principles? I find LLMs useful because they are able to combine approximations of how things work with problem-solving methods. Is there a fundamental difference between those low-level approximations and actual underlying principles?
1 reply · 0 reposts · 0 likes · 14 views

Chris W @nycthinker
@chatgpt21 He started out by saying that deep learning can find causal relationships that underlie data. It can't. Everyone knows correlation doesn't imply causation. The interview is botched from that point. He's either delusional or desperate for investment, or both. Probably both.
0 replies · 0 reposts · 0 likes · 61 views

Chris @chatgpt21
🚨 OPENAI PRESIDENT GREG BROCKMAN ON WHEN WE HIT AGI 🚨

Greg Brockman was asked if he agrees with NVIDIA's CEO that AGI is already here. His answer? Not quite yet. As people may know, I definitely agree and align with Sam and Demis that we are 2 breakthroughs away, but we are entering the final stretch. Here is exactly where Greg believes we stand right now:

• The Percentage: "I'd say I'm basically like 70, 80% there. So I think we're quite close."

• The Official Timeline: "I think it's extremely clear that we are going to have AGI within the next couple years."

• The Concept of "Jagged Intelligence": Brockman admits we are currently sitting in a weird middle ground where AI is "jagged"—it is already operating at an AGI level for highly complex tasks, but still fails at random, basic things. "It is absolutely superhuman at many tasks. When it comes to writing code, those kinds of things, the AI can just do it... But there's some very basic tasks that a human can do that our AI still struggle with."

• How Do We Close the Final 20%? To hit full AGI, the absolute floor of the models' reliability needs to be raised across the board. "The floor of task will just be almost for any intellectual task of how you use your computer, the AI will be able to do that."
29 replies · 35 reposts · 369 likes · 85.3K views

Chris W @nycthinker
@Holmyverse @TrueAIHound Ratioed is the word! He’s a weird character as well. Seems addicted to Twitter and does a kind of Haider-thing with shit-posting engagement baiting stuff that doesn’t add up to a consistently held world view.
0 replies · 0 reposts · 2 likes · 15 views

Dan @Holmyverse
@TrueAIHound A textbook example of being ratioed 😃
Dan tweet media
1 reply · 0 reposts · 2 likes · 21 views

Jackson Clinton @Jackson12876877
@simplifyinAI I've been convinced for a while that these are better described as compression systems than intelligence. Once people figure out how to extract the data more easily they will end up banned because of the materials they were fed. The image generation specifically is a big issue.
1 reply · 0 reposts · 2 likes · 155 views

Simplifying AI @simplifyinAI
🚨 BREAKING: OpenAI and Google are about to have a massive legal problem.

OpenAI, Google, and Anthropic have repeatedly sworn to courts that their models do not store exact copies of copyrighted books. They claim their "safety training" prevents regurgitation. Researchers just dropped a paper called "Alignment Whack-a-Mole" that proves otherwise.

They didn't use complex jailbreaks or malicious prompts. They just took GPT-4o, Gemini, and DeepSeek, and fine-tuned them on a normal, benign task: expanding plot summaries into full text. The safety guardrails instantly collapsed. Without ever seeing the actual book text in the prompt, the models started spitting out exact, verbatim copies of copyrighted books. Up to 90% of entire novels, word-for-word. Continuous passages exceeding 460 words at a time.

But here is the part that changes everything. They fine-tuned a model exclusively on Haruki Murakami novels. It didn't just learn Murakami. It unlocked the verbatim text of over 30 completely unrelated authors across different genres. The AI wasn't learning the text during fine-tuning. The text was already permanently trapped inside its weights from pre-training. The fine-tuning just turned off the filter.

It gets worse. They tested models from three completely different tech giants. All three had memorized the exact same books, in the exact same spots. A 90% overlap. It's a fundamental, industry-wide vulnerability.

For years, AI companies have argued in court that their models are just "learning patterns," not storing raw data. This paper provides the smoking gun.
Simplifying AI tweet media
148 replies · 1.5K reposts · 4.2K likes · 317.5K views

Chris W retweeted
Rasmus Jarlov @RasmusJarlov
@pmddomingos Peak MAGA Intelligence: Bomb Iran with the entirely predictable consequence that the Strait of Hormuz gets closed. Blame Europe for it and try to make them clean up the mess that you created. Never look inward and take responsibility for your own actions.
40 replies · 137 reposts · 2.2K likes · 25.9K views

Chris W @nycthinker
No, Greg revealed in that interview that OpenAI has no clue about how to create AGI. The key passage is when he claims that deep learning can find the underlying rules beneath data. This is false on the most basic level. Tellingly, he goes on to elaborate a contradictory strategy.
Dustin@r0ck3t23

OpenAI’s Greg Brockman just ended a three-year argument. Can a text model actually understand reality? Or is it just expensive autocomplete? Greg Brockman: “We have definitively answered that question. It is going to go to AGI.” Definitively. Not a forecast. Not a theory. A closing statement. Brockman: “We have line of sight to these much, much better models that are coming this year.” A roadmap tells you where you are going. A targeting system tells you what you are about to hit. The bottleneck inside OpenAI is no longer the science. The math is solved. Brockman: “The amount of pain within OpenAI that we’ve had to decide how to allocate compute… goes up, not down.” They are not stuck on an equation. They are feeding something that keeps getting hungrier. And they cannot stop feeding it. The constraint is not human genius. It is the physical grid. And then he said this. Brockman: “The kinds of applications that we’ve always dreamed of are starting to come into reach. Like, for example, solving unsolved physics problems.” Unsolved physics. Not better search results. Not faster code reviews. Not smarter chatbots. The actual laws of the universe. Everything humanity wrote down, every equation, every argument, every failed theory, fed into a machine that is now finishing our sentences about the universe. Most of the internet is still debating whether the machine is conscious. OpenAI is not waiting for a consensus. They are allocating compute and locking in the schedule. The argument about what these models are is over. What happens next is not a question anymore. It is a schedule.

0 replies · 0 reposts · 1 like · 32 views

Chris W @nycthinker
@Holmyverse The genuine question I have after vibe coding (not using agents) some scripts for internal use: are there documented best practices for using AI tools for business-critical code that needs to be secure and maintainable over time? What do people do?
0 replies · 0 reposts · 1 like · 18 views

Chris W @nycthinker
@RillianGrant @constans This is why a model can do well on one fixed math benchmark but fail on a basic everyday logic problem. Why it can do well on GDPval yet fail at the Remote Labor Index.
1 reply · 0 reposts · 1 like · 22 views

Chris W @nycthinker
@RillianGrant @constans Yes, but the approximate functions for doing those things are derived from training data, including, in recent models, patterns that induce calls to external tools. Neural nets are function approximators, and they learn from correlations. They don’t learn underlying principles.
1 reply · 0 reposts · 3 likes · 80 views

Chris W @nycthinker
@RillianGrant @constans You are describing a human being, not an LLM. LLMs don’t encode wide generalizations that are robust to novelty (vs. training data).
1 reply · 0 reposts · 1 like · 22 views

Rillian Grant @RillianGrant
@nycthinker @constans An LLM lets you go from a unique problem to a unique solution. Both problem and solution are explained in terms of common concepts that the LLM is trained to understand.
1 reply · 0 reposts · 0 likes · 26 views