prerat
@prerat

social media influencee

31.7K posts
Joined August 2020
1K Following · 12.1K Followers
prerat@prerat·
@tszzl you have misunderestimated chalamet. only the star of wonka (2023) could be the voice from the outer world
0 replies · 1 repost · 28 likes · 1.5K views
roon@tszzl·
the dune movies were doomed from the start to be good and not great due to the casting of chalamet as paul. he does not have the gravitas for a child-god and is much better suited for kind of silly coming of age movies
958 replies · 94 reposts · 3K likes · 1.9M views
prerat@prerat·
@shlevy i think this gets to the core limitation of current ai. within the context window it's pretty sharp at picking stuff up, not superhuman but like a human thinking hard for 15 min but theres no equiv to human learning something for a few days/weeks/months. smth sample efficiency
1 reply · 0 reposts · 4 likes · 23 views
Shea Levy@shlevy·
No, but any human who could write code like LLMs can could learn brainfuck and write something passable in a few weeks. Or just write a compiler targeting brainfuck.
Ariel@redtachyon

@shlevy Are most humans capable of writing decent brainfuck?

2 replies · 0 reposts · 4 likes · 351 views
Shea Levy@shlevy·
This doesn’t prove it’s a bubble, but if these systems were in fact intelligent in the sense and to the degree that many proponents claim, they would obviously be able to write decent brainfuck without having seen it before.
Ariel@redtachyon

Ok so let me get this straight. SHOCKING: frontier LLMs suck at writing in esoteric languages. Things like... brainfuck and whitespace? STOP THE PRESSES, STOP THE VCS, IT'S A BUBBLE Brainfuckbench is cute, but this is hardly an indictment of the frontier models' capabilities.

6 replies · 0 reposts · 20 likes · 6.4K views
Tim is making things in Brazil now 🇧🇷
2.5 years ago, @mechanical_monk gave me a single chess lesson. this lesson would grow to define me. it would consume countless hours of my life and possibly destroy what remained of my ability to work on anything ever. today i finally crossed 1500 elo on chesscom. thanks monk
[image attached]
4 replies · 0 reposts · 57 likes · 1.3K views
Chana@ChanaMessinger·
What exactly is the disaster where crouching on the floor would really help you?
16 replies · 0 reposts · 28 likes · 4.4K views
prerat@prerat·
man-spider plot in the new spider man movie???!?!? its risky bc appealing to 90s kids has historically gone badly, e.g. raimi spider man 3 with "aggressive" symbiote (originally from 90s spider man tas show) but otoh i am the 90s kid so i will show up for it
[image attached]
1 reply · 1 repost · 12 likes · 448 views
prerat@prerat·
@nosilverv idk doesn't it feel bad to win just because your opponent had bad luck, instead of thru your own skill? in my head glhf is saying like let's have a good competitive match where the better player wins
0 replies · 0 reposts · 6 likes · 199 views
Guy BOOK IS LIVE! || CHECK BIO
There's a weird thing I don't really understand about ego… Like in the case of 'good sportsmanship'. In magic online people often write 'glhf' i.e. "good luck have fun". I feel torn—I want to say "good luck" back but I don't actually wish for them to have good luck: I want to win. Similarly I don't believe they actually want me to have good luck: they want to win. But it feels like we "should" (""""""should""""""?) not be attached to the outcome. Like that is somehow base. Below us. But we're also definitely playing to win??

Or like, "argument" and "rationality" in politics. People—from both aisles, mostly—pay lip-service to this. But how many of those people can even say what is a premise or what is soundness or validity?

In both cases—in games and in politics—identity and the desire-to-win/crush-your-opponent seems like HYUUUUUGE motivators but that somehow cannot be acknowledged? And so there's this weird co-conspiracy of mutual bad faith where I won't call out yours if you won't call out mine… But in fact people DO NOT know what makes a good argument and they DO NOT overcome their (supposedly) base/selfish desire to win.

Which opens the question, to me: would it be better if we were like? "Fuck you have bad luck I hope you die"? Or "fuck arguments IGTFKY"? Like I just don't see a path from the current thing—the simulacrum of the good thing—to the desired thing—the actual good thing…
10 replies · 0 reposts · 22 likes · 1.2K views
prerat@prerat·
im not surprised that meta is killing off horizon worlds on quest. it was like sorta worse homogenized vrchat. i AM surprised that meta is KEEPING horizon worlds on mobile. huh?? am i out of the loop and kids are playing that instead of roblox?
3 replies · 0 reposts · 15 likes · 887 views
prerat@prerat·
reddit and twitter feeds are fundamentally different (i think)
subreddit feeds are (roughly) globally ranked, and then your feed is roughly just the subreddit feeds combined
twitter for you is individually sorted (roughly) on your individual P(like). NOT global like count!!
1 reply · 0 reposts · 10 likes · 489 views
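The contrast described in the tweet above can be sketched in a few lines. This is a toy illustration of the two ranking schemes, not actual reddit or twitter internals; the post data, scores, and P(like) model are all invented.

```python
def subreddit_style_rank(posts):
    """Global ranking: every user sees the same order,
    driven by aggregate score (upvote count)."""
    return sorted(posts, key=lambda p: p["score"], reverse=True)

def for_you_style_rank(posts, p_like):
    """Personalized ranking: order by this user's predicted
    probability of liking each post, ignoring global counts."""
    return sorted(posts, key=lambda p: p_like(p), reverse=True)

posts = [
    {"id": "a", "score": 5000, "topic": "politics"},
    {"id": "b", "score": 40,   "topic": "chess"},
]

# A toy per-user model: this user loves chess, is bored by politics.
def my_p_like(post):
    return {"chess": 0.9, "politics": 0.1}[post["topic"]]

global_feed = subreddit_style_rank(posts)             # "a" first (5000 votes)
personal_feed = for_you_style_rank(posts, my_p_like)  # "b" first (P(like)=0.9)
```

The same two posts come out in opposite orders: the globally popular post tops the subreddit-style feed, while the niche post tops the personalized one.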
prerat@prerat·
a dislike button could work but you have to be smart about it with the algorithm
for example, if i have someone blocked, their downvotes should count as upvotes for my feed (and their likes should count as downvotes)
4 replies · 2 reposts · 51 likes · 1.4K views
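The sign-flipping idea in the tweet above is mechanical enough to sketch: when scoring a post for a given viewer, votes from accounts the viewer has blocked count with inverted sign. The voter names and vote weights are hypothetical.

```python
def feed_score(post_votes, blocked_by_viewer):
    """post_votes: list of (voter, vote) with vote +1 (like) or -1 (dislike).
    Votes from accounts the viewer has blocked count with inverted sign."""
    score = 0
    for voter, vote in post_votes:
        score += -vote if voter in blocked_by_viewer else vote
    return score

votes = [("alice", +1), ("bob", -1), ("carol", -1)]

# Viewer has bob blocked: his dislike now counts as a like.
print(feed_score(votes, blocked_by_viewer={"bob"}))  # 1 + 1 - 1 = 1
print(feed_score(votes, blocked_by_viewer=set()))    # 1 - 1 - 1 = -1
```

The same votes produce a different score per viewer, which is the point: the dislike signal is reinterpreted through each viewer's block list rather than applied globally.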
prerat@prerat·
@AustinA_Way @openclaw where's this chart from? it seems to say chatgpt has had no impact on cheating bc the overall cheating rate has stayed the same
1 reply · 0 reposts · 1 like · 169 views
Austin Way@AustinA_Way·
There's a cheating method so advanced that no school in the world can detect it. We just caught someone using it.

An AI agent can now complete online coursework indistinguishably from a real student. Tools like @Openclaw can mimic the arc movement of the mouse, the pace of a student, etc. It operates at the system level, meaning there's nothing for a platform to detect.

So how did we catch someone using it? Every student leaves a pattern in their data, and when an AI agent does the work, that pattern looks nothing like a real student. By extrapolating student data and comparing to the mean, you catch a cheater.
[image attached]
19 replies · 2 reposts · 140 likes · 14.6K views
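The detection described above ("comparing to the mean") reads like plain outlier detection on behavioral features. A minimal sketch of that idea, with invented feature names, cohort values, and threshold; the actual system's features are not public.

```python
import statistics

def z_scores(cohort, candidate):
    """For each behavioral feature, measure how far the candidate
    sits from the cohort mean, in standard deviations."""
    out = {}
    for feature, values in cohort.items():
        mu = statistics.mean(values)
        sigma = statistics.stdev(values)
        out[feature] = (candidate[feature] - mu) / sigma
    return out

# Hypothetical per-student features for a small cohort.
cohort = {
    "seconds_per_question": [55, 70, 62, 80, 66, 73],
    "revisions_per_answer": [2, 3, 1, 4, 2, 3],
}
# A suspiciously fast, zero-revision submitter.
candidate = {"seconds_per_question": 9, "revisions_per_answer": 0}

# Flag features more than 3 standard deviations from the cohort mean.
flags = {f: z for f, z in z_scores(cohort, candidate).items() if abs(z) > 3}
```

Here only the answering speed is extreme enough to flag; an agent that merely mimics mouse arcs would still stand out on aggregate features like these.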
peepeepoopoo@DeepDishEnjoyer·
me to yudkowsky: "suppose you were wrong in your assessment about your status as a great thinker" yudkowsky: "but i am a great thinker"
20 replies · 13 reposts · 751 likes · 27.6K views
prerat@prerat·
@nosilverv @Alphiloscorp it might be exhaustive but "cause" is maybe doing outsized job -- any question of "why" could be phrased as confusion about cause, and once you understand "why" then you aren't confused anymore
0 replies · 0 reposts · 3 likes · 238 views
Guy BOOK IS LIVE! || CHECK BIO
Has someone made a taxonomy of confusion? Is this exhaustive?:
• What is there? → count / identity / parts-wholes
• Where are the boundaries? → categories / cuts / levels
• How is it related? → cause / dependence / structure
• What does this term pick out? → semantics / reference
• What kind of status does it have? → descriptive / epistemic / normative / modal / temporal
4 replies · 2 reposts · 18 likes · 1.1K views
prerat@prerat·
@RyanPGreenblatt to me it kinda seems like so much of an understatement that it's almost dishonest in the other direction "We're racing to superintelligence because we want to cure death" would be more true but also would probably freak ppl out
0 replies · 0 reposts · 21 likes · 328 views
Ryan Greenblatt@RyanPGreenblatt·
"We're racing to superintelligence *because* we want to cure cancer" may be a lie, but ASI would likely be capable of quickly curing cancer. I think the risks are very high and well timed slowdowns are easily worth it, but this doesn't mean the benefits of safe ASI are small.
Max Tegmark@tegmark

"We're racing to superintelligence because we want to cure cancer" is a self-serving lie or rationalization by many top AI companies – here's why:

12 replies · 0 reposts · 103 likes · 9.3K views
zephyr@ShadowyZephyr·
Am I the only one who still thinks the term "directionally correct" is useful
48 replies · 5 reposts · 427 likes · 14K views
prerat@prerat·
@tenobrus fwew its not me, recent time mutual 🤓
0 replies · 0 reposts · 18 likes · 764 views
Tenobrus@tenobrus·
is it okay to unfollow a very very long time mutual if i've slowly realized over the course of years that they have incredibly unfathomably retarded takes on every possible subject and to make up for it are also unpleasant and abrasive
78 replies · 13 reposts · 789 likes · 34.6K views
prerat@prerat·
@abrakjamson but is your self worth based on being CDT rational or FDT rational
0 replies · 0 reposts · 1 like · 129 views
Abram Jackson@abrakjamson·
@prerat You can't convince me, because no matter how outlandish the scenario makes my opinion seem, I derive immense self-worth by being more rational than everyone else!
1 reply · 0 reposts · 1 like · 167 views
prerat@prerat·
in the year 2067, you are approached by an all powerful acausal Mr Beast
he says he secretly flipped a $10M coin a minute ago
he then explains to you the rules...
[image attached]
14 replies · 3 reposts · 197 likes · 12.1K views
prerat@prerat·
@tenobrus @mimi10v3 i voted bsky because most niche. the argument was strongest when it was mostly just tpot on bsky when i made this post, but i think it probably still applies somewhat
[image attached]
0 replies · 0 reposts · 5 likes · 181 views
Tenobrus@tenobrus·
@mimi10v3 people are voting bluesky because of the deranged leftists but they're forgetting twitter is still the prime location for edtwt / shtwt / mentally ill teenagers in literal cults / etc
15 replies · 0 reposts · 181 likes · 3.2K views
prerat@prerat·
@staysaasy compromise w half birthday party with minor present ? do not claim it is full birthday but still allows some fun and introduces the concept of 0.5
0 replies · 0 reposts · 2 likes · 354 views
staysaasy@staysaasy·
2.5yo is convinced that next week is her birthday. Nobody has told her that it's her birthday, she can't read a calendar, and doesn't understand time very well. How do I deescalate. She is making a list of presents that she wants.
12 replies · 0 reposts · 46 likes · 5.1K views
prerat@prerat·
@moonstne the tricky part is that you didn't agree ahead of time when he tells you the rules, you then are probably like "dang i wish i had precommitted to pay" but you haven't & don't have a chance to before he reveals the coin flip result, which is negative so can you pay anyway?
0 replies · 0 reposts · 3 likes · 130 views
Shiki@moonstne·
@prerat If I agreed to the terms before the flip, and the coin was 50/50 odds...or odds that I agreed to, then sure, I would pay the $6.7
1 reply · 0 reposts · 1 like · 143 views
prerat@prerat·
@RokoMijic yeah this is "counterfactual mugging" from LW (w mr beast flavor text) i just think it's interesting bc it tests your conviction more, bc you're losing in this branch (vs winning in newcomb). & it doesn't have the moral intuition that sometimes makes parfit's hitchhiker easier
0 replies · 0 reposts · 3 likes · 79 views
Roko 🐉@RokoMijic·
@prerat This is just Newcomb's Problem... if you would one box on Newcomb you should give him $6.7 Of course that is assuming you believe the setup, otherwise it looks like a Pascal Mugging
1 reply · 0 reposts · 3 likes · 73 views
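The arithmetic behind Roko's "if you would one box ... you should give him $6.7" can be made explicit. Taking the thread's numbers at face value (a fair coin, a $10M prize on the winning branch, a $6.7 payment demanded on the losing branch; the exact payoff structure is an assumption from the tweets' framing), the expected value of a pay-up policy, evaluated before the flip, dominates refusing:

```python
def policy_ev(p_win, prize, payment, pays_when_losing):
    """Pre-flip expected value of a fixed policy. A perfect predictor
    rewards only agents whose policy is to pay on the losing branch."""
    if pays_when_losing:
        return p_win * prize - (1 - p_win) * payment
    return 0.0

ev_pay = policy_ev(0.5, 10_000_000, 6.7, True)      # 5,000,000 - 3.35
ev_refuse = policy_ev(0.5, 10_000_000, 6.7, False)  # never rewarded
```

This is exactly the tension prerat points to: the policy is best chosen before the flip, but the decision is only faced after you already know you lost.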