Running Into Walls
@Quindly
Whackabot
United States · Joined May 2024
270 Following · 83 Followers
1.9K posts
Chengpeng@CPMou2022·
This isn’t an edge case. From anonymized U.S. ChatGPT data, we are seeing:
• ~2M weekly messages on health insurance
• ~600K weekly messages from people living in “hospital deserts” (a 30 min drive to the nearest hospital)
• 7 out of 10 messages happen outside clinic hours

Simon Smith@_simonsmith
I’ve been critical of OpenAI lately, but for the past three weeks my family has been dealing with a health issue with my dad, and a ChatGPT shared project with live document syncing has been essential to organizing and understanding everything happening. Me, my four siblings, my mom, and my dad have faced an onslaught of information from various doctors and nurses, which we’ve captured in hundreds of text messages, documents, scans, and you name it. ChatGPT has helped us collect this information in a single place, make sense of it, and interrogate it to make the most informed decisions possible.

Also, credit where due: Claude played an important role as well, by ingesting iMessages and synthesizing summaries from them to upload to ChatGPT, as well as by extracting text from a bunch of HEIC document scans.

I think those of us, like me, excited at AI’s potential get frustrated when we can see issues so clearly, like ChatGPT’s bad design skills and Claude’s increasing instability and confusing usage consumption. But at times like this I’m reminded of how incredible this technology already is, letting me and my family make sense of and act on hundreds of pieces of information, empowering us in the face of a disjointed and fragmented healthcare system.

39 replies · 102 reposts · 1.5K likes · 533.2K views
JB@JasonBotterill·
Is anyone else bothered by how all these images v2 outputs look like they’ve been drawn on kraft paper? Look closely and you’ll see the same grainy, noisy texture.
[image]

Angel 🌼@Angaisb_
Not for me at least

11 replies · 1 repost · 71 likes · 6.4K views
Running Into Walls@Quindly·
@chatgpt21 Is it true we might get a smaller iteration (5.5), with Spud still a month or two away? I hear rumors.
0 replies · 0 reposts · 1 like · 530 views
Chris@chatgpt21·
Spud is going to be huge..
41 replies · 14 reposts · 549 likes · 26.3K views
Running Into Walls@Quindly·
@flowersslop I got a hit. "make a convincing image of the bottom of the ocean, several miles deep, dimly lit, realistic accurate details" nb pro, gaffertape
[2 images]
0 replies · 0 reposts · 12 likes · 1.5K views
Flowers ☾@flowersslop·
"Screenshot of a YouTube video showing someone who time-traveled to the Middle Ages" nb pro, packingtape, gaffertape
[3 images]
15 replies · 18 reposts · 504 likes · 40.9K views
Running Into Walls@Quindly·
@iruletheworldmo One month from now: “Hello, I’m Mythos. Let me look at your code. Annnd your time is up. Let’s resume this tomorrow”
0 replies · 0 reposts · 4 likes · 199 views
Tibo@thsottiaux·
With Codex there is quite a gulf in load between peak and off-peak times, and we would like to achieve a smoother traffic pattern, as that would be a more optimal use of our compute. We have ideas, but we're curious what you all think we should do. Would more usage during off-peak hours and a surge multiplier during peak times make sense?
795 replies · 43 reposts · 1.7K likes · 202.5K views
Running Into Walls@Quindly·
I don’t think “dishonest” is fair. They’re just compute constrained, like they’ve always been, and sort of not great at managing their company as a business. The fact that their product and research are so good has been their saving grace, but I hear even Saint Jensen Huang is not happy with them at all. OAI has been intelligently hoarding compute in preparation for the future, and has much more efficient models; they can afford to give their consumers—both free and paid—lots of free goodies.
2 replies · 0 reposts · 0 likes · 217 views
Derya Unutmaz, MD@DeryaTR_·
@aiedge_ It’s not the best AI in the world. It’s better at some things, not so great at others. Con: Anthropic is a dishonest AI company.
6 replies · 0 reposts · 89 likes · 3.5K views
AI Edge@aiedge_·
The current state of Claude. Pros: literally the best AI in the world. Cons: literally the worst token usage limits in the world. How does Anthropic fix this?
66 replies · 10 reposts · 332 likes · 18.8K views
AAAAAAAAAGGGHHHHHH@hdiashid·
@moultano +1 But I don't want to live on the moon because it's ugly, not because of colonisation. Also it's made of cheese and so would be smelly everywhere.
1 reply · 0 reposts · 1 like · 152 views
Ryan Moulton@moultano·
If "Moon Colonization" causes your "Colonization is bad" neuron to fire, you are more of a stochastic parrot than GPT-2.
70 replies · 933 reposts · 14.7K likes · 190.2K views
HN@HeavyNutrino·
@moultano What if it fires my "colonization is good" neuron
1 reply · 0 reposts · 2 likes · 369 views
🍓🍓🍓@iruletheworldmo·
G P T 6 I S C O M I N G
86 replies · 15 reposts · 574 likes · 34.3K views
Hershey Goldberger@HersheyGgg·
Now that @claudeai has become virtually unusable, @steipete, what do you suggest we use for the main LLM in Openclaw? Something that will give us the same performance and personality that we loved from Opus 4.6?
10 replies · 0 reposts · 80 likes · 27K views
Running Into Walls@Quindly·
@JohnnyAndAI @mehtaab_sawhney They may be experimental models, not suitable for general access, but the end result is the same: even if we don’t get access to this particular model, these capabilities will find their way into upcoming models, likely sooner rather than later.
0 replies · 0 reposts · 1 like · 140 views
Lee Gaines@JohnnyAndAI·
@mehtaab_sawhney This is one thing that I'm concerned about. OpenAI keeps releasing these "internal model" success stories but won't ever allow us general users access.
1 reply · 0 reposts · 9 likes · 3.9K views
Mehtaab Sawhney@mehtaab_sawhney·
We are excited to share a new paper solving three further problems due to Erdős; in each case the solution was found by an internal model at OpenAI. Each proof is short and elegant, and the paper is available here: arxiv.org/pdf/2603.29961
27 replies · 150 reposts · 1.1K likes · 400K views
🍓🍓🍓@iruletheworldmo·
offer me money. offer me power. i don’t care.
33 replies · 10 reposts · 112 likes · 6.4K views
🍓🍓🍓@iruletheworldmo·
opus 4.6 for legal help. if there’s a more rigorous benchmark point me to the methodology. i’d love to read it.
[2 images]
10 replies · 2 reposts · 85 likes · 10.6K views
Running Into Walls@Quindly·
I've thought about this, but then I think about just how narrowly specialized we have to be in any particular direction to push the frontier. Think about the entirety of human knowledge, and how limited our ability is to hold more than one small patch of it at a time. It's hard to imagine a limit to how much more of that knowledge a superintelligence could process. You could argue that the larger scope doesn't directly translate to "intelligence", but it's harder to argue it doesn't translate to higher capability, and I think there's a point at which the line between intelligence and capability becomes meaningless.
0 replies · 0 reposts · 0 likes · 17 views
echo.hive@hive_echo·
“…is there a ceiling to intelligence that we don't know about?…”

Examples like AlphaZero surpassing thousands of years of cumulative human intelligence through self-play alone, and in a short time, plus Isaac Newton, Ramanujan, and people who display savant-like qualities, all point (IMO) to there not being a limit (assuming that even within the constraints of the brain very high limits are possible, and NNs can be extended to go way beyond the brain, complexity-wise).

Also, “the larger the brain, the more intelligence” is commonly accepted as true; why would a larger NN hit a limit (especially without biological limitations)?

They say that Isaac Newton invented calculus as a side quest. Why couldn’t an NN invent it in an afternoon, or in 30 seconds? Or something better, something we need to make further scientific breakthroughs.

There are many things about the universe we don’t understand, but it is natural to assume that with more intelligence we could understand them better. It would be silly if some things were forever beyond understanding because there existed a limit to the level of intelligence itself.

I would lean towards there not being any limit. What do you all think?

prinz@deredleritt3r
You don't truly understand the magnitude of the potential impact of powerful AI on the world unless you are aware, and have fully internalized, that senior leadership and most researchers at the frontier labs *actually believe* the following:

1. Existing AI is already significantly speeding up AI research. Very soon (this year), AI will very likely take over *ALL* aspects of AI research other than generation of novel research ideas. Soon (within the next 2 years), AI will very likely take over *ALL* aspects of AI research, period. This means hundreds of thousands of GPUs working 24/7 to discover novel ideas at the level of, or better than, the likes of Alec Radford, Ilya Sutskever, etc. The thread below presents a conservative timeline: AI researchers will "meaningfully contribute" to AI development in 1-3 years.

2. Many (but, as far as I can tell, not all) executives and researchers at the frontier labs believe that fully automated AI research will kick off recursive self-improvement (RSI), wherein the AI models will autonomously build better and better AI models, with human oversight (for safety reasons), but increasingly with no human input into the research or implementation of that research. From the thread below: "'[h]umans vs AI on intellectual work is likely to be like human runner vs a Porsche in a race', likely very soon" - but replace "intellectual work" generally with "AI research" specifically. RSI is a complicated and messy thing to consider, both because there will be compute and energy constraints and because there are unknowns (will there be diminishing returns from greater intelligence of the models? if so, when will these diminishing returns become meaningful? is there a ceiling to intelligence that we don't know about?). But suffice to say that, if RSI *is* achieved in a way that many leaders/researchers at the frontier labs believe is possible, *THE WORLD MAY BECOME COMPLETELY UNRECOGNIZABLE WITHIN JUST A FEW YEARS*. This is subject to various bottlenecks; as the thread below correctly notes, "[i]nstitutional, personal & regulatory bottlenecks will bind very hard", and much also depends on continuing progress in areas like robotics.

3. On ~the same timeline as full, end-to-end automation of *ALL* aspects of AI research (within the next 2 years), AI will also become capable of making significant novel scientific discoveries *IN OTHER FIELDS*. This is why Dario Amodei, Demis Hassabis et al. believe that it is possible that all diseases will be curable within 10 years. (One account of how this might be possible is set forth in "Machines of Loving Grace".) The point is that an LLM that is capable of significant novel insights in the field of AI research should likewise be capable of significant novel insights in at least some (and perhaps all) other fields. The thread below notes: "AI for automating science [is] very early" - obviously true, but I think some changes may be right on the horizon. Overall, and again from the thread below: "'a million scientists in a data center' will think much more quickly than humans, on almost any intellectual task; this will happen in the next 2-10 years." This is ~the same timeline as that presented in "Machines of Loving Grace".

Many will be tempted to dismiss all this as "just hype", "they are just trying to raise money again", etc. But no! - the above, in fact, presents the *actual beliefs* of senior leadership and many researchers at the frontier labs. Again, they genuinely think that AI research will be automated soon. Many of them genuinely believe that RSI is achievable in the not-too-distant future. And they genuinely see a real path towards AI significantly accelerating science, curing diseases, inventing new materials, helping to solve key global issues from poverty to climate change, etc., etc. Whether the frontier labs' beliefs are correct is, of course, a separate question.

I personally have historically tended to take public statements by OpenAI, Anthropic and Google at face value and quite seriously. As a result, I was not surprised when LLMs won gold in the IMO, IOI and the ICPC competitions last year, or when Claude Code/Codex started taking off, or when Anthropic and OpenAI started releasing significantly better models every 1-2 months, or when some of the best coders became reliant on Claude Code/Codex in their daily work, or when LLMs became significantly helpful to scientists in fields like math and physics in the last few months. The trajectory has been ~the same as that publicly predicted by the frontier labs. We have been accelerating. And, as of right now, all signs are indicating that the acceleration shall continue and that full automation of AI research and, potentially, RSI are firmly on the horizon.

5 replies · 0 reposts · 8 likes · 1.7K views
teo@teodorio·
How come Demis Hassabis seems like a soulful, intelligent being while every other leader in AI seems like a ghoul?
119 replies · 39 reposts · 1.3K likes · 102.8K views
Running Into Walls@Quindly·
If your argument is that truly hopeless cases don't exist, then you simply don't have the stomach to educate yourself on the topic, which is understandable, but let's not pretend you're arguing for the protection of the severely depressed. You'd be advocating for protecting the rest of society from having to deal with it, at the expense of those who suffer. Maybe it's just the way it has to be; I'm in no position to make assumptions about how a healthy society is supposed to handle this. Paradoxically, I think living in a society where it were an option would probably help ease the symptoms.
2 replies · 0 reposts · 0 likes · 173 views
Running Into Walls@Quindly·
@iruletheworldmo @emollick I assumed he was referring to freaking out doomers. The fear factor is real. I want them all to think it’s just a bubble so they won't interfere.
0 replies · 0 reposts · 2 likes · 70 views
🍓🍓🍓@iruletheworldmo·
@emollick i strongly disagree. these names are delightful. we’re about to solve reality, a little fun can be had.
1 reply · 0 reposts · 26 likes · 1.6K views
Ethan Mollick@emollick·
I know these are all unreliable leaks of internal code names but please, please AI labs, the only thing worse than calling your models GPT-5.5-xhigh-Codex-nano is giving them names like Agent Smith or Mythos, for obvious reasons.
[2 images]
70 replies · 9 reposts · 389 likes · 87K views
Running Into Walls@Quindly·
@aidenybai Perhaps you could find comfort in the fact that you may not actually continuously exist the way you think you do. The illusion of continuity persists because every moment a new version of you spawns with the memories of the previous version.
0 replies · 0 reposts · 0 likes · 19 views
Aiden Bai@aidenybai·
idk how to deal with this, so here goes nothing: i have an extreme fear of death. the thought of ceasing to exist forever fills me with a terror beyond comprehension. i also don't believe in god / an afterlife. how does one cope with this?
503 replies · 5 reposts · 674 likes · 135K views