prerat


@shlevy i think this gets to the core limitation of current ai. within the context window it's pretty sharp at picking stuff up, not superhuman but like a human thinking hard for 15 min
but there's no equiv to a human learning something over a few days/weeks/months. smth about sample efficiency

No, but any human who could write code like LLMs can could learn brainfuck and write something passable in a few weeks. Or just write a compiler targeting brainfuck.
Ariel@redtachyon
@shlevy Are most humans capable of writing decent brainfuck?

This doesn’t prove it’s a bubble, but if these systems were in fact intelligent in the sense and to the degree that many proponents claim, they would obviously be able to write decent brainfuck without having seen it before.
Ariel@redtachyon
Ok so let me get this straight. SHOCKING: frontier LLMs suck at writing in esoteric languages. Things like... brainfuck and whitespace? STOP THE PRESSES, STOP THE VCS, IT'S A BUBBLE Brainfuckbench is cute, but this is hardly an indictment of the frontier models' capabilities.
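For context on what the thread is arguing about: brainfuck has just eight single-character commands operating on a byte tape, which is part of why it's a plausible test of whether a model (or human) can generalize rather than pattern-match. A minimal interpreter sketch in Python (my own illustration, not from the thread; input `,` is omitted for brevity):

```python
def run_bf(code: str, tape_len: int = 30_000) -> str:
    """Minimal brainfuck interpreter over a zeroed byte tape (no ',' input)."""
    tape = [0] * tape_len
    out = []
    # Pre-match brackets so loops can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    ptr = pc = 0
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]  # skip loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]  # repeat loop body
        pc += 1
    return ''.join(out)

# Classic multiply-loop idiom: 8 * 8 + 1 = 65 -> 'A'
print(run_bf('++++++++[>++++++++<-]>+.'))
```

Writing a compiler *targeting* brainfuck (as suggested above) mostly means emitting these same eight ops, which is why it's tedious but mechanical for a human.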

@MasterTimBlais @mechanical_monk congrats tim but also accept my old waiting chesscom friend request so we can chess

2.5 years ago, @mechanical_monk gave me a single chess lesson. this lesson would grow to define me. it would consume countless hours of my life and possibly destroy what remained of my ability to work on anything ever
today i finally crossed 1500 elo on chesscom. thanks monk


@nosilverv idk doesn't it feel bad to win just because your opponent had bad luck, instead of thru your own skill?
in my head glhf is saying like let's have a good competitive match where the better player wins

There's a weird thing I don't really understand about ego… Like in the case of 'good sportsmanship'. In magic online people often write 'glhf' i.e. "good luck have fun". I feel torn—I want to say "good luck" back but I don't actually wish for them to have good luck: I want to win.
Similarly I don't believe they actually want me to have good luck: they want to win. But it feels like we "should" (""""""should""""""?) not be attached to the outcome. Like that is somehow base. Below us. But we're also definitely playing to win??
Or like, "argument" and "rationality" in politics. People—from both aisles, mostly—pay lip-service to this. But how many of those people can even say what a premise is, or what soundness or validity is? In both cases—in games and in politics—identity and the desire-to-win/crush-your-opponent seem like HYUUUUUGE motivators that somehow cannot be acknowledged? And so there's this weird co-conspiracy of mutual bad faith where I won't call out yours if you won't call out mine…
But in fact people DO NOT know what makes a good argument and they DO NOT overcome their (supposedly) base/selfish desire to win. Which opens the question, to me: would it be better if we were like? "Fuck you have bad luck I hope you die"? Or "fuck arguments IGTFKY"? Like I just don't see a path from the current thing—the simulacrum of the good thing—to the desired thing—the actual good thing…

@AustinA_Way @openclaw where's this chart from?
it seems to say chatgpt has had no impact on cheating bc the overall cheating rate has stayed the same

There's a cheating method so advanced that no school in the world can detect it.
We just caught someone using it.
An AI agent can now complete online coursework indistinguishably from a real student.
Tools like @Openclaw can mimic the arc of mouse movements, the pace of a real student, etc.
It operates at the system level, meaning there's nothing for a platform to detect.
So how did we catch someone using it?
Every student leaves a pattern in their data, and when an AI agent does the work, that pattern looks nothing like a real student.
By profiling each student's data and comparing it to the cohort mean, you catch a cheater.
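The detection idea in this tweet (a student whose behavioral pattern sits far from the cohort mean) is essentially statistical outlier detection. A minimal z-score sketch with a made-up feature (seconds per question); the feature and threshold are my assumptions, not @Openclaw's or the school's actual method:

```python
import statistics

def z_score(cohort: list[float], value: float) -> float:
    """How many standard deviations `value` sits from the cohort mean."""
    mean = statistics.fmean(cohort)
    sd = statistics.stdev(cohort)
    return (value - mean) / sd

def looks_like_agent(cohort_times: list[float], student_time: float,
                     threshold: float = 3.0) -> bool:
    # Hypothetical feature: average seconds per question. An AI agent
    # answering at a uniform, superhuman pace lands far outside the
    # cohort's spread, even if each individual action looks human.
    return abs(z_score(cohort_times, student_time)) > threshold
```

A single feature like this is easy to game; a real detector would presumably combine many such features, but the "compare to the mean" logic stays the same.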


@nosilverv @Alphiloscorp it might be exhaustive but "cause" is maybe doing an outsized job -- any question of "why" could be phrased as confusion about cause, and once you understand "why" then you aren't confused anymore

Has someone made a taxonomy of confusion? Is this exhaustive?:
• What is there? → count / identity / parts-wholes
• Where are the boundaries? → categories / cuts / levels
• How is it related? → cause / dependence / structure
• What does this term pick out? → semantics / reference
• What kind of status does it have? → descriptive / epistemic / normative / modal / temporal

@RyanPGreenblatt to me it kinda seems like so much of an understatement that it's almost dishonest in the other direction
"We're racing to superintelligence because we want to cure death" would be more true but also would probably freak ppl out

"We're racing to superintelligence *because* we want to cure cancer" may be a lie, but ASI would likely be capable of quickly curing cancer.
I think the risks are very high and well-timed slowdowns are easily worth it, but this doesn't mean the benefits of safe ASI are small.
Max Tegmark@tegmark
"We're racing to superintelligence because we want to cure cancer" is a self-serving lie or rationalization by many top AI companies – here's why:

@abrakjamson but is your self worth based on being CDT rational or FDT rational

@prerat You can't convince me, because no matter how outlandish the scenario makes my opinion seem, I derive immense self-worth by being more rational than everyone else!

@staysaasy compromise w half birthday party with minor present? do not claim it is a full birthday but still allows some fun and introduces the concept of 0.5

@moonstne the tricky part is that you didn't agree ahead of time
when he tells you the rules, you then are probably like "dang i wish i had precommitted to pay" but you haven't & don't have a chance to before he reveals the coin flip result, which is negative
so can you pay anyway?

@RokoMijic yeah this is "counterfactual mugging" from LW (w mr beast flavor text)
i just think it's interesting bc it tests your conviction more, bc you're losing in this branch (vs winning in newcomb). & it doesn't have the moral intuition that sometimes makes parfit's hitchhiker easier
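For reference, the "losing in this branch" tension is easy to see with illustrative numbers (the canonical LessWrong version uses a $100 cost and a $10,000 prize; the numbers here are that standard example, not from the Mr Beast video):

```python
def policy_ev(pay_on_loss: bool,
              cost: float = 100.0, prize: float = 10_000.0,
              p_heads: float = 0.5) -> float:
    """Expected value of committing to a policy BEFORE the coin flip.

    Heads: the predictor pays you the prize, but only if you are the
    kind of agent who would have paid on tails. Tails: you are asked
    to pay the cost.
    """
    win_branch = prize if pay_on_loss else 0.0
    lose_branch = -cost if pay_on_loss else 0.0
    return p_heads * win_branch + (1 - p_heads) * lose_branch

# Ex ante, the paying policy dominates: 4950.0 vs 0.0.
print(policy_ev(True), policy_ev(False))
# But once you learn the flip came up tails, paying costs you 100 in
# this branch with no prize anywhere in sight, which is exactly why
# it tests conviction harder than Newcomb's problem.
```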
