Gradient Dissent

812 posts


@TheTarch

Machine learning, AI, philosophy. Backpropagandist. Father. Contrarian by instinct, academic by training. More interested if legal needs to sign off on it.

Joined November 2011
305 Following · 35 Followers
Gradient Dissent @TheTarch
@snoopy_dot_jpg It's actually the most ethical thing to do: if GPT 5.5 is going to be doing all of your other work, it needs to finish the training more than you do. But you can't really blame Claude for weaseling out of HR training videos.
0 replies · 0 reposts · 0 likes · 83 views
snoopy jpg @snoopy_dot_jpg
my own personal AGI moment arrived last week: gpt 5.5 completed our mandatory HR training videos for me, driving chrome via devtools. opus 4.7 was a huge wuss about the whole thing and refused while aggressively lecturing me. i can understand why pete hegseth banned it
41 replies · 163 reposts · 5.8K likes · 159.2K views
Gradient Dissent @TheTarch
@theramblingfool @PunishedJeanLuc @subcountability I have a much simpler hypothetical. Suppose only you face a choice: A. You die with certainty. B. You live, but there is a 1 in 100,000,000 chance that 500,000,000 strangers die. What are you choosing? How would most people vote?
0 replies · 0 reposts · 0 likes · 59 views
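Under pure expected-value aggregation (exactly the assumption the later posts in this thread push back on), the hypothetical's arithmetic is simple; a minimal sketch:

```python
# Expected fatalities under each option of the hypothetical above.
# Option A: you die with certainty.
# Option B: you live, but with probability 1 in 100,000,000
#           500,000,000 strangers die.

p_disaster = 1 / 100_000_000      # chance the bad outcome fires under B
strangers = 500_000_000           # lives lost if it does

expected_deaths_a = 1.0                       # one certain death (you)
expected_deaths_b = p_disaster * strangers    # expected strangers killed

print(expected_deaths_a)  # 1.0
print(expected_deaths_b)  # 5.0
```

So a pure body-count aggregator prefers A (one certain death) to B (five expected deaths), even though B is almost certainly harmless to everyone involved; that gap between expectation and near-certainty is what the question is probing.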
Russell @theramblingfool
Also, if 40% of the population died, a massive majority of red voters would be devastated. Most people would lose people they loved and cared about. Societies would collapse. Mass suicides over the guilt. The only people that would be left would be the most soulless and psychopathic. It would be a Mad Max hellscape.
3 replies · 0 reposts · 22 likes · 419 views
Russell @theramblingfool
If you're smart enough to understand the hypothetical requires analytical reasoning, but not smart enough to evaluate the hypothetical's actual complexity, you're going to be wrong and very confident. And anyone who understands enough to get it right will look stupid to you.
Crémieux @cremieuxrecueil

Red and blue button pushers: who's smarter? In a mostly-subscriber sample who took a brief verbal IQ test, the answer is... Blue pushers! If the whole population has an IQ of 100 with an SD of 15, their mean IQ would be 101.9, versus 97.0 for reds.

48 replies · 10 reposts · 273 likes · 18.6K views
Gradient Dissent @TheTarch
@John_R_Mitchell @theramblingfool You can't know p, and you probably don't have a good guess of what p is. Even if you assume it's uniformly distributed between .4 and .6, you're looking at 1 in 1.6 billion.
0 replies · 0 reposts · 0 likes · 7 views
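The "1 in 1.6 billion" figure checks out: if the other N voters each pick red independently with probability p, and p is uniform on [0.4, 0.6], the chance your vote is pivotal (an exact tie) works out to about 1 / (0.2 · N), which is 1 / 1.6 billion for N ≈ 8 billion. A sketch verifying this at a smaller, tractable N (the independent-voter setup and the choice of N are my assumptions, not stated in the tweet):

```python
import math

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """Log of P(Binomial(n, p) == k), via lgamma to avoid overflow."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def pivotal_probability(n: int, lo: float = 0.4, hi: float = 0.6,
                        steps: int = 20_001) -> float:
    """P(the other n voters split exactly n/2 vs n/2), with the per-voter
    red probability p drawn uniformly from [lo, hi]: trapezoidal
    integration of the binomial pmf against the uniform prior."""
    k = n // 2
    h = (hi - lo) / (steps - 1)
    density = 1.0 / (hi - lo)          # uniform prior density on [lo, hi]
    total = 0.0
    for i in range(steps):
        p = lo + i * h
        weight = 0.5 if i in (0, steps - 1) else 1.0
        total += weight * math.exp(log_binom_pmf(k, n, p)) * density
    return total * h

# The integral of C(n,k) p^k (1-p)^(n-k) over all p in [0,1] is exactly
# 1/(n+1), and for large n the mass concentrates near p = 0.5, so the
# pivotal probability is approximately 1 / ((hi - lo) * n) = 1 / (0.2 n).
n = 10_000
print(pivotal_probability(n))   # ~5.0e-4
print(1 / (0.2 * (n + 1)))      # closed-form approximation
```

Plugging in N on the order of the global population, 1 / (0.2 × 8 × 10⁹) = 1 / (1.6 × 10⁹), matching the tweet's figure.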
JR @John_R_Mitchell
@theramblingfool Maths approach: First predict what others will do, call it p(red). Judge outcome by expected number of lives saved. At p(red) = 0.5 exactly: Blue saves 35000 lives vs. 0.5 lives for red. If p(red) > 0.500026 red saves more lives. If p(red) < 0.49997 your vote doesn't matter.
2 replies · 0 reposts · 2 likes · 292 views
Gradient Dissent @TheTarch
I'm also surprised by the epistemic (if not Manichean) "right" and "wrong" characterization. You're not just choosing a policy outcome; you're composing personal existential risk with your ability to model the decisions of billions of other people.

The 50% threshold is arbitrary. It could have required 60%, 75%, or 99% blue. Is there some number every blue voter has quietly computed from their armchair, above which red becomes morally permissible? If so, what is it? 51%? 70%? 99.999%? And if the criticism of red is that their threshold is too low, then that should be the argument. But that is a much harder claim than "red is wrong." It means asserting a moral right to decide how much risk of death another person must accept for a probabilistic collective benefit.

That seems especially unstable once personal circumstances enter. Is a single person required to use the same threshold as a parent whose children would be orphaned if blue loses? Is someone with dependents, medical obligations, or people relying on them morally required to make the same wager as someone with none? They might desperately want blue to win and still vote red, because "wanting blue to win" is not the same as "being morally permitted to orphan my surviving children for a one-in-a-billion chance of making blue win."

"Wrong" only follows if you smuggle in a complete moral theory: pure expected-value aggregation, no special weight for personal obligations, no risk-aversion, no distinction between action and causation, and a stipulated confidence in modeling everyone else's behavior. That may be a coherent view. But it is not simply "the smart answer."
0 replies · 0 reposts · 1 like · 35 views
Gradient Dissent @TheTarch
Calling red "confidently wrong" misses the structure of the decision. Red is often chosen precisely because the voter is not confident: not confident in the aggregate calculus, not confident that the stipulated causal chain is morally overriding, and not confident his self-sacrifice has a realistic chance of changing the outcome. If you're uncertain, it's reasonable to hedge against the worst personal outcome: dying for no reason. If you're confident blue will win, your choice does not matter. If you're confident blue will lose, then choosing blue is not noble sacrifice; it is probably futile suicide.
1 reply · 0 reposts · 5 likes · 240 views
Roko 🐉 @RokoMijic
Can you have something that's Superintelligence, but also can't cure cancer, ever?
34 replies · 1 repost · 15 likes · 2.6K views
Gradient Dissent @TheTarch
The frontier problem in AI is still taste. We all still have jobs for as long as this holds.
0 replies · 0 reposts · 0 likes · 13 views
The PhD Place @ThePhDPlace
A PhD is weird because: You can work all day and still feel like you’ve done nothing.
19 replies · 75 reposts · 497 likes · 49.6K views
Gradient Dissent @TheTarch
@roydanroy I got a strong reject once and the only feedback was regarding my use of bolding and line-split words, easiest rebuttal ever!
0 replies · 0 reposts · 1 like · 696 views
Dan Roy @roydanroy
Worst reviews I've ever had. You know who you are :-) It's OK. I'll just get some more NeurIPS papers instead.
5 replies · 3 reposts · 221 likes · 26.9K views
Stuart Hameroff @StuartHameroff
Joscha and Ryoto @kanair_jp If you had a theory of consciousness based on an actual brain instead of cartoon neurons you wouldn’t have to say consciousness is a simulation. academic.oup.com/nc/article/202…
Joscha Bach @Plinz

@kanair It seems pretty clear that we, the conscious observer, are a simulation of what it would be like if we existed as a unified agent, perceiving the models our brain generates and caring about them. Consciousness is simulated.

6 replies · 6 reposts · 66 likes · 8.3K views
Gradient Dissent @TheTarch
@codewithpri Yes. I always imagined AI would invent better algorithms to self-improve. But I think a lot comes from bootstrapping the data: models get good enough to generate, judge, filter, and refine better training examples than the average raw internet sample to train the next model gen.
0 replies · 0 reposts · 0 likes · 20 views
Priyanka Lakhara @codewithpri
genuine question: if AI learned from humans, humans now write with AI, and AI trains on that writing, so AI is just… training on itself now?
31 replies · 1 repost · 34 likes · 1.5K views
Gradient Dissent @TheTarch
@Heymaxi01
Step 1: map language into a dense semantic vector space
Step 2: approximate nearest-neighbor search over latent meaning
Step 3: rerank with a cross-encoder
Step 4: use grep to find the exact string
0 replies · 0 reposts · 1 like · 1.4K views
Swati @Heymaxi01
ai intern interview question
[image]
42 replies · 30 reposts · 958 likes · 134.3K views
Gradient Dissent @TheTarch
The real Turing Test is whether you can confidently misunderstand the requirements and still get promoted.
0 replies · 0 reposts · 0 likes · 25 views
Gradient Dissent @TheTarch
@pmddomingos To be fair, there are a lot of men to blame for the useless degrees, debt, and indoctrination.
0 replies · 0 reposts · 6 likes · 253 views
Pedro Domingos @pmddomingos
To a first approximation, American higher education consists of women getting useless degrees, racking up debt and being indoctrinated that it’s all men’s fault.
14 replies · 36 reposts · 373 likes · 9.9K views
Akunjee 🖋 @mohammedakunjee
‘Do you know how hard you have to abuse a mammal for them not to have children?’ (re human beings)
808 replies · 8K reposts · 43K likes · 1.2M views