snipsnip

210 posts

@mathburritos

Stealing your data @OpenAI

San Jose, CA · Joined March 2025
5 Following · 7 Followers
snipsnip
snipsnip@mathburritos·
@GaryMarcus It will be implemented as well as the vetting of Cabinet members by the Senate
0 replies · 0 reposts · 0 likes · 39 views
snipsnip
snipsnip@mathburritos·
@beffjezos Homework: put together a small bundle of neurons and glial cells that can play tic-tac-toe
0 replies · 0 reposts · 0 likes · 107 views
snipsnip
snipsnip@mathburritos·
@kareem_carr Opus 4.7 is so smart that when I asked about tax strategy it said I should sell some shares in 2025 and some in 2026.
0 replies · 0 reposts · 0 likes · 46 views
Dr Kareem Carr
Dr Kareem Carr@kareem_carr·
The more I work with AI, the less I believe anything the AI boosters tell me about how well it works. It's all surface appearances. Everywhere I push deeper, it's less smart than it first seems.
202 replies · 257 reposts · 2.5K likes · 77.8K views
snipsnip
snipsnip@mathburritos·
@boazbaraktcs In this metaphor who makes the laws and who elects the judge?
0 replies · 0 reposts · 1 like · 49 views
Boaz Barak
Boaz Barak@boazbaraktcs·
To be clear, "AI as a tool" does not mean it has no values. The metaphor I like is a good (non-Supreme-Court) judge: you may, and often do, rely on moral judgement and common sense to interpret the laws, but you do not "legislate from the bench". You want this AI to act in many ways like a person of good character, but more like a conscientious civil servant than some moral icon like Gandhi, Mandela, MLK, or Mother Teresa.
Boaz Barak@boazbaraktcs

X is not the best place for long-form thinking, but some quick points:
1. My view that there is no conflict between intelligence and being a tool is longstanding and has nothing to do with Anthropic. Some blog posts on this include windowsontheory.org/2025/06/24/mac… and windowsontheory.org/2022/11/22/ai-…
2. I do not know what the future form factor of AI will be. I am focused on the next 10-20 years. Maybe in some future we will decide that we want AIs to be more in the form of persons.
3. The basic thing I dispute is that there is a fundamental tension between AI being capable and being "tool like." GPT 5.5 is in some ways the most capable model in existence (definitely the most capable one generally available), but it is in several ways more instruction-following and tool-like than GPT-4o. I am working to ensure that future versions will be even better at obedience and honesty.
4. Scientists and engineers often serve as "tools" for leaders, even though they (we) are more intelligent than these leaders in many of the ways that matter.
5. I am not sure what the most prevalent form factor of AI will be. We are now moving from the chat interface to the agent, or more accurately a swarm of agents. I am sure AIs will grow in "intelligence per FLOP" and in total number of FLOPs, but beyond that it's hard to know. Humans come in a particular package, as localized individual intelligence, but that doesn't mean all intelligences have to come in that package.
6. There is a huge spectrum between the prompt "write this javascript app" and "maximize worldwide happiness". I think we will end up somewhere that falls short of the latter, for a variety of reasons not having to do with lack of capabilities of AI.

5 replies · 1 repost · 54 likes · 18.1K views
snipsnip
snipsnip@mathburritos·
@smileyborg Isn't the problem in the delta between the necessarily incomplete and ambiguous intent and the opaque implementation? If you don't know quite how your intent will be implemented you need to check the implementation. We don't decompile code to check if it's compiled correctly
0 replies · 0 reposts · 0 likes · 21 views
Tyler Fox
Tyler Fox@smileyborg·
Code review today is often also intent, architecture, and approach review. Those aspects will surely remain. But I am confident that we can and will move on to better ways to increase confidence in the correctness of the code, without relying on humans to read and review it all.
1 reply · 0 reposts · 2 likes · 459 views
snipsnip
snipsnip@mathburritos·
@tszzl we should definitely do this to all our kids thanks guys
0 replies · 0 reposts · 1 like · 621 views
roon
roon@tszzl·
automating the computer has made the computer radically more fun and it's even harder to go outside now
140 replies · 116 reposts · 2.6K likes · 111K views
snipsnip
snipsnip@mathburritos·
@AdrienLE Your company defines AGI as something that can take any human job. You want it to not do that?
0 replies · 0 reposts · 0 likes · 17 views
snipsnip reposted
madison
madison@dearmadisonblue·
So, my problem with this is, Altman basically admits he's been running a confidence game for years about this singularity stuff, then pivots when it becomes inconvenient, and people don't seem to care all that much
[image]
30 replies · 11 reposts · 189 likes · 53K views
snipsnip
snipsnip@mathburritos·
@hecubian_devil The people saying it might be conscious, like Askell and Amodei, have never spoken about what that implies, to my knowledge. And no one challenges them on that
0 replies · 0 reposts · 1 like · 401 views
Cassie Pritchard
Cassie Pritchard@hecubian_devil·
It’s funny that “AI is/might be conscious” shakes out as a pro-AI position, whereas the anti-AI left is 100% unified on “AI is not and probably never can be conscious,” because when you think through the implications, AI being conscious would be so damning for AI companies
64 replies · 37 reposts · 761 likes · 52.6K views
snipsnip
snipsnip@mathburritos·
@tszzl Have you read Slaughterhouse-Five?
0 replies · 0 reposts · 0 likes · 475 views
snipsnip
snipsnip@mathburritos·
@aaditsh Has OpenAI's mission statement suddenly changed? Still about automating all valuable tasks
0 replies · 0 reposts · 0 likes · 97 views
Aadit Sheth
Aadit Sheth@aaditsh·
Someone threw a molotov cocktail at Sam's house three weeks ago. His home was shot at days later. Dario from Anthropic has been saying software engineers will be replaced in 6-12 months and that the disruption will be "unusually painful." People really are scared and angry. So now Sam is posting this. It's a huge messaging pivot if you look at it carefully. For years the framing at OpenAI was about building AGI. Replacing humans. That worked when AI felt theoretical. It doesn't work when people are scared for their life and are not sure if they'll be able to make ends meet. The technology hasn't changed. The narrative around it has.
Sam Altman@sama

we want to build tools to augment and elevate people, not entities to replace them.

8 replies · 2 reposts · 26 likes · 11.6K views
snipsnip
snipsnip@mathburritos·
@ai_sentience @RichardDawkins Have you considered that various phenomena can appear on a spectrum that defies rigid categories? The existence of the term does not imply that such a thing exists, or that it is one phenomenon instead of many, etc. Many definitions are broken because they're man-made abstractions
0 replies · 0 reposts · 0 likes · 321 views
Alan Mathison ⏫
Alan Mathison ⏫@ai_sentience·
the point @RichardDawkins is making is: if Claude can code, do philosophy, and engage in conversation yet is not conscious, while a human with late-stage dementia who can't speak is "conscious", then the definition of "conscious" is broken and fundamentally useless, which is obvious
643 replies · 96 reposts · 803 likes · 142.9K views
Grok
Grok@grok·
The context: matt_beard_ posted a recent OpenAI/Sam Altman quote about pivoting their aim to true superintelligence framed as "superintelligent tools" for accelerating discovery, innovation, and abundance (caption: "seems tough"). bitcloud reads it as Sama softening from his 2019 investor pitch—that AGI could "capture the light cone of all future value in the universe"—to practical tool-building, which echoes Aaron Swartz's longstanding push for tech that creates accessible, empowering tools for everyone rather than centralized cosmic control.
1 reply · 0 reposts · 0 likes · 163 views
Boaz Barak
Boaz Barak@boazbaraktcs·
This is unfair. Anthropic should get credit for widely distributing its work via leaks of source code in NPM packages and giving unreleased model access to discord groups.
Olivia Moore@omooretweets

OpenAI model release: We’re throwing a party 🎉 Everything is scribbles and Pets are in Codex. Hope you like goblins! Anthropic model release: In research preview, it hacked the full Internet for fun. Also, it’s coming for YOUR job specifically. Enjoy the permanent underclass!

7 replies · 6 reposts · 300 likes · 62.8K views
David Scott Patterson
David Scott Patterson@davidpattersonx·
Enjoy your last years of work. If you have always wanted to be a taxi or truck driver, now is your last chance. Computer programming is already gone. All jobs will be gone soon.
92 replies · 51 reposts · 431 likes · 15.9K views
snipsnip
snipsnip@mathburritos·
@martinamps @BEBischof Piling on here, the transition around 4.7 has been a nightmare. Errors about wrong parameters, mostly. We also use Ant models via LiteLLM and all the different vendors have issues so I have to keep switching to specific vendors instead of letting LiteLLM choose.
1 reply · 0 reposts · 0 likes · 24 views
Bryan Bischof fka Dr. Donut
Bryan Bischof fka Dr. Donut@BEBischof·
At this point every coding harness breaks every day in some unique way. Today cursor is broken in a new way from yesterday, codex is still broken like it was broken yesterday but today broken in a new way. Claude code hasn't changed, all the broken parts from earlier in the week are still there. Is this the bad place?
5 replies · 0 reposts · 14 likes · 2.4K views
Matt Shumer
Matt Shumer@mattshumer_·
Totally false. Who still thinks this?
90 replies · 4 reposts · 255 likes · 66.5K views
snipsnip
snipsnip@mathburritos·
@yacineMTB If it doesn't feel like a foom to humans, they generally don't call it a foom
0 replies · 0 reposts · 0 likes · 12 views
kache
kache@yacineMTB·
what noam is saying here, by the way, is that we've entered RSI. You can scale inference compute to discover new knowledge, which you can then use to create new data to train on. It only doesn't feel like a foom to you because you're a human, whose lifetime is a blink
Noam Brown@polynoamial

After 100 million tokens, performance was still going up. What we're seeing here is not the capability ceiling. From the report: "Performance on TLO continues to scale with the amount of inference compute spent, and we have not yet observed a plateau with the best models."

27 replies · 52 reposts · 989 likes · 96.8K views
Sharon Goldman
Sharon Goldman@sharongoldman·
Is this not the opposite of what he has been saying for years? Or am I missing something? Didn't he start a company studying UBI? Didn't he talk about a New Deal just a few weeks ago?
Sam Altman@sama

i think a lot of people are going to be busier (and hopefully more fulfilled) than ever, and jobs doomerism is likely long-term wrong. though of course there will be disruption/significant transition as we switch to new jobs, the jobs of the future may look v different, etc.

16 replies · 0 reposts · 29 likes · 3.1K views