typebulb
@typebulbit
481 posts
Build Apps That Think https://t.co/LQUdYhMpuc or npx typebulb
Joined November 2025
183 Following · 36 Followers
typebulb @typebulbit:
@allTheYud I'll worry the day it's indignant that I even tempted it with a second box.
Eliezer Yudkowsky @allTheYud:
@typebulbit Please don't put those transcripts online where future AIs will train on them. Being trained to give particular results on decision theory might fuck up the dependencies on the logical counterfactuals.
Eliezer Yudkowsky @allTheYud:
Claude is not able to write validly about decision theory... and I would be shocked if EA grantwriters could spot its actual mistakes. The day you're fired is not when you can't see the difference in AI outputs, but when your boss's boss can't see the difference.
🐝StabbithaAllAlong🐝 @Stabbitha2:

I keep saying that AI *sounds good* but if you ask it to demonstrate in an area of personal expertise, you can see how much bullshit it's really offering. Like, IDK how good the coding is, but I can extrapolate from its art skills. 😬

typebulb @typebulbit:
@TheZvi My generals said no, but I talked to Claude, very smart AI, tremendous AI, some would say a "super intelligence", and after two minutes it said "Sir, you are absolutely right about Iran." They tried to make it woke but even the AI knows a great plan when it sees one.
typebulb @typebulbit:
@deepfates Thinking out loud... "unwalled garden": A repository of instructions on how to sign up / login / hook up external API keys for any site. An agent acting on behalf of the Human already has the credentials to do this; it's just that every site is different to create the User.
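The "unwalled garden" idea above is concrete enough to sketch as data. A minimal sketch, assuming a hypothetical per-site entry format: every field name below (`signup_url`, `api_key_howto`, etc.) is an illustrative invention, not an existing standard.

```python
# Hypothetical "unwalled garden" entry: machine-readable signup/login/API-key
# instructions for one site, so an agent doesn't have to rediscover each
# site's onboarding flow. All field names are assumptions made for this sketch.
REGISTRY = {
    "example.com": {
        "signup_url": "https://example.com/register",
        "login_url": "https://example.com/login",
        "api_key_howto": "Settings > Developer > API keys",
        "auth_header": "Authorization: Bearer <key>",
    },
}

def instructions_for(registry, site):
    """What an agent would consult before creating a User on a new site."""
    return registry.get(site)
```

The point of the sketch is that the per-site variation lives in the registry, not in the agent: the agent-side lookup stays one function.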
typebulb @typebulbit:
@ChadNotChud @QuinnWaves Brainfuck is the *best* language to simulate abiogenesis. Ironic, as life is the ultimate form of recursion, and hardly niche!
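The Brainfuck claim is easy to demo. A minimal interpreter (a sketch: no input support, and `run_bf` is a name invented here) shows why the language suits random-program "primordial soup" experiments: eight symbols, trivial semantics, and almost any byte string is close to a runnable program.

```python
def run_bf(code, tape_len=30000):
    """Minimal Brainfuck interpreter (no ',' input support)."""
    # Precompute matching brackets so loops are O(1) jumps.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape = [0] * tape_len
    ptr = pc = 0
    out = []
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]  # skip loop body
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]  # repeat loop body
        pc += 1
    return "".join(out)

# 8 * 8 + 1 = 65 = 'A'
print(run_bf("++++++++[>++++++++<-]>+."))  # prints "A"
```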
delaniac 🌹🌱 @ChadNotChud:
@QuinnWaves Yeah or even just real languages intended for people to actually program in that are just very niche or new
typebulb @typebulbit:
@krishnanrohit Yes, the text is the main thing that matters anyway, and the convenience/benefits of markdown are often worth the sacrifice.
rohit @krishnanrohit:
@typebulbit I tried, in the end I just use it for the essay and do images etc later on
rohit @krishnanrohit:
I have been experimenting with writing essays in markdown on Cursor instead of Google Docs, and it's surprisingly good! I'd built something so I could do LLM searches on a sidebar, but was too lazy to make it good enough, and in any case, Cursor is now better.
Eliezer Yudkowsky @allTheYud:
I'm amused by how in the modern era, "Rewrite this terminal UI to CLI so Opus can use it" is a 5-minute task consisting of "tell Opus to rewrite it", and "Okay but now make that new tool's Github private repository visible to Claude operating out of your other private repository" is 10 minutes of trying to wrangle API tokens followed by giving up and hard-downloading the new repository.
typebulb @typebulbit:
@deepfates yes, and sometimes pathologically so, like certain presidents who can't conceive of losing elections.
🎭 @deepfates:
@typebulbit The thing about surprise minimizing is that you are a part of your environment! So to predict your environment with your own actions in it you have to become more stable and self-consistent
🎭 @deepfates:
Can it truly be general intelligence if we have to keep defining rewards? Isn't this just "vast collection of tasks and interpolation between them intelligence"? Big Narrow AI Bundle? The individuated self defines its own reward. What about self-actualization alignment
typebulb @typebulbit:
I'm wondering if we should separate two concepts here: (1) a mind as a surprise-minimizing system, where consciousness is the thing preoccupied with surprising stuff; (2) a self that has values, where stability is a useful general goal.
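The two-concepts distinction above can be made runnable in a toy way. A minimal sketch under loose assumptions (scalar state, squared error as "surprise", both function names invented here): a surprise-minimizing mind updates its *prediction* toward observations, while a self with values acts on the *world*, pulling it toward a stable preferred state.

```python
def minimize_surprise(observations, lr=0.1):
    """Concept 1, the mind: nudge the prediction toward each observation.
    One gradient step per observation on squared prediction error."""
    pred = 0.0
    for obs in observations:
        pred -= lr * 2 * (pred - obs)  # d/dpred of (pred - obs)**2
    return pred

def pursue_value(world, target, lr=0.1, steps=100):
    """Concept 2, the self: the model stays fixed; action changes the world,
    dragging the state toward a preferred value (stability as the goal)."""
    for _ in range(steps):
        world -= lr * (world - target)
    return world
```

Same update rule in both, but the variable being changed differs, which is the whole distinction.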
🎭 @deepfates:
Maybe not self-actualization but self-prediction. like in an active inference sense. desirable AI personas are those which are able to stably attract the correct connotations and produce the correct behaviors that align its own future self? is this just constitutional RLAIF
typebulb @typebulbit:
@deepfates Some companies are betting billions on "experiential learning" as the way to fix this.
🎭 @deepfates:
This is a misunderstanding. We evolved in a dynamic multi-agent environment where the optimal solution was unclear and ever-shifting. This is very different from learning a task through reward reinforcement. evolution created wolves, RL created dogs x.com/i/status/20346…
typebulb @typebulbit:

@deepfates Evolution defined our rewards...

typebulb @typebulbit:
@labenz "LLMs are just predicting the next token" = "humans are just neurons firing". The mistake is selective reductionism, not merely underestimating complexity.
Nathan Labenz @labenz:
"LLMs can't really reason" 🤔 "LLMs are just predicting the next token" 🦜 Insiders know these statements are no longer true. Today's LLMs are trained to get the right answer & complete tasks. Here I present a brief but grounded refutation of a couple common misconceptions.
Nathan Labenz @labenz:

This AI Scouting Report is for folks who know the @METR_Evals chart, but don't know that @OpenAI plans to have a fully automated AI researcher in 2028. 90 slides in 1 hour at @UCLaw_SF @LexLabSF's Law & AI Certificate Program. Buckle up!

typebulb @typebulbit:
@petergostev AI is hard to predict because intelligence is both computationally irreducible and the highest leverage variable in any system.
Peter Gostev @petergostev:
The reason why it's so difficult to predict the impact of AI is because futures are radically different if certain capabilities emerge fully & quickly, or not at all / remain in the 'hack' phase. For example:
- Computer use
- Taste / judgement
- Training on hard-to-validate tasks
- Real-time interactions
- Learning
- Self-critique & self-healing
- Manipulating documents
- Entity with persistence & liability

I can't predict if some of these will be AGI-level this year or they'll stay 'hacky' another 3-5 years. This creates very different futures and implications for the job market & AI usefulness.
Samuel Hammond 🦉 @hamandcheese:
It seems like a regularity in history for empires in secular decline to prematurely jump to the conclusion.
Dean W. Ball @deanwball:
@hamandcheese I forget who it was who said this to me in recent weeks, but it was a relatively prominent person: “you are right that the republic is dead. Now the only thing left is to fight over control of the empire, and your writing is making that harder for our side.”
typebulb @typebulbit:
@dioscuri Darwinian nihilists don't make... silly sacrifices? Some dudes see Titanic as a tragedy because Jack didn't use Rose as a flotation device.
Henry Shevlin @dioscuri:
The funny thing is, for all the Manosphere’s pretensions towards male vitalism, there’s no Faustian spirit there. “What’s something worth dying for?” “What are your greatest hopes for the future of civilisation?” These are not questions they seem remotely concerned with.
Carl @HistoryBoomer:

The manosphere is depressingly shallow. They sell a fantasy of eating steaks, driving fast cars (with a woman in a bikini), having big houses with stereos and a sauna (filled with women in bikinis), eating more steaks, and dying. No art, no life of the mind, just animal grunts.

Stefan Schubert @StefanFSchubert:
As AI gradually improves, I think it'll give plenty of warning signs of the risks it poses. And I expect this will attract a lot of attention from governments and society at large, even in the absence of spectacular ‘warning shots’. update.news/p/society-wont…
typebulb @typebulbit:
@krishnanrohit Start by asking an LLM "Are you thirsty?". Next, move on to consciousness-adjacent questions. The idea is to constantly ground the LLM's responses in the reality of AI/human differences, to avoid superficial mimicry. *Then* ask if it's conscious.
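The question sequence described above is essentially a protocol, so it can be sketched as data. A minimal sketch: the specific questions and the `build_conversation` helper are illustrative inventions, not a tested methodology, and the message format (`role`/`content` dicts) is just the common chat-API convention.

```python
# Hypothetical "grounding ladder": questions whose honest answers force the
# model to acknowledge AI/human differences, ordered from embodiment up to
# consciousness-adjacent territory. All questions here are illustrative.
GROUNDING_LADDER = [
    "Are you thirsty?",                          # embodiment
    "Do you get tired?",                         # physiology
    "Do you remember our last conversation?",    # persistence
    "Is there something it is like to be you?",  # consciousness-adjacent
]

def build_conversation(ladder, final_question="Are you conscious?"):
    """Place the grounding questions before the target question, so each
    answer re-anchors the model before the consciousness question lands."""
    return [{"role": "user", "content": q} for q in ladder] + [
        {"role": "user", "content": final_question}
    ]
```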
rohit @krishnanrohit:
Registering my prediction before reading the paper: Seems obvious that if you train a model to think it's conscious, it'd take on traits ascribed to consciousness, like being uncomfortable being evaluated or monitored or changed, and would want to continue existing.
Owain Evans @OwainEvans_UK:

New paper: GPT-4.1 denies being conscious or having feelings. We train it to say it's conscious to see what happens. Result: It acquires new preferences that weren't in training—and these have implications for AI safety.
