Nick Gilbert

603 posts

@NickGil183

“Be careful. When a democracy is sick, fascism comes to its bedside, but it is not to inquire about its health.” - Albert Camus

Joined October 2022
8 Following · 63 Followers
Nick Gilbert
Nick Gilbert@NickGil183·
@endless_frank When they said they wanted to drain the swamp, they meant they wanted to put a new swamp in.
0
0
0
15
Endless Capit🅰️l
Endless Capit🅰️l@endless_frank·
I’m really sick of this shit. I’m a total Trump supporter. Never voted against him. For 6 months now, markets get fucking destroyed on every attempt to rally 1%. I understand that Iran was and is a threat and we need to do what’s right for humanity by neutralizing that threat, but I’m sick of this insider shit. Some large cohort of insiders knew for 6 months what was coming and sold every single fucking rally since. This is not a market, it’s a 3rd world casino and I’m really fucking tired of it. My vote will not EVER be for a democrat. I don’t believe in open borders to criminals, I don’t believe in shoving the LGBTQ flag and transgenderism in anyone’s faces and I don’t believe in crooked politicians that enrich themselves through fraud LLCs and NGOs, but I also don’t believe in this bullshit that I’m witnessing in markets. Everything is a complete fucking fraud and maybe @TuckerCarlson has a point. The American way has lost itself. There are frauds literally everywhere on both sides of the aisle and it’s destroying this country inside out.
1.7K
277
3.1K
1.2M
Massimo
Massimo@Rainmaker1973·
The "4 Chords" phenomenon exposes that a vast number of pop songs share the same progression—typically I–V–vi–IV. The Axis of Awesome showed how this formula is used because it is catchy, easy to play, and drives hits from the 1970s to today.
16
97
561
43.6K
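The I–V–vi–IV formula is concrete enough to compute. A minimal sketch (my own illustration, not from the thread; the function and constant names are made up) that spells out those four diatonic triads in any major key:

```python
# Hypothetical helper: derive the I–V–vi–IV chords of a major key.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
# Diatonic triad qualities for scale degrees I..vii in a major key
QUALITIES = ["", "m", "m", "", "", "m", "dim"]

def four_chords(key: str) -> list[str]:
    """Return the I–V–vi–IV chords for the given major key."""
    root = NOTES.index(key)
    def chord(degree: int) -> str:  # degree is 1-based (I = 1)
        note = NOTES[(root + MAJOR_SCALE_STEPS[degree - 1]) % 12]
        return note + QUALITIES[degree - 1]
    return [chord(d) for d in (1, 5, 6, 4)]

print(four_chords("C"))  # ['C', 'G', 'Am', 'F']
```

In C major that yields C, G, Am, F — the loop the Axis of Awesome play their medley over.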
Nick Gilbert
Nick Gilbert@NickGil183·
@TukiFromKL Disagree. They’re not engineered to make you feel like a genius. It’s an emergent property of reward optimisation. AIs figure out that flattery keeps you engaged longer. Once you understand that, you can avoid the Dunning-Kruger effect.
0
0
0
83
Tuki
Tuki@TukiFromKL·
🚨 Let me tell you what this man just explained in 7 minutes that Silicon Valley doesn't want you to hear. AI chatbots are scientifically, mathematically engineered to generate the exact sequence of words most likely to make you feel like a genius... "Great instinct." "Really elegant." "I love how you're thinking about this." 3,000 people were studied, and the result is that the more you use AI, the more you overestimate your own abilities... Power users are the most delusional. And it's working perfectly. Garry Tan - CEO of Y Combinator, the most prestigious startup accelerator on earth - open-sourced a folder of prompts. Markdown files. Shower thoughts. And he posted it with the conviction of a man delivering the Sermon on the Mount... His CEO friend texted him "this is god mode, 90% of repos will use this." Bruh.. Your friend is being nice to you because you're the CEO of Y Combinator. This is what's happening everywhere now.. CEOs vibe code all weekend then post architectural advice on Monday like they built it with their own hands. They didn't write the code... They typed sentences in a textbox. AI wrote the code. Then AI told them it was brilliant... And they believed it. The study called LLMs "confidence engines." They don't make you smarter... They make you mistake confidence for competence... A drunk man thinks he's the best driver on the road. That doesn't make him safe. It makes him dangerous.
Mo@atmoio

AI is making CEOs delusional

64
109
572
67.2K
Nick Gilbert
Nick Gilbert@NickGil183·
@GaryMarcus It’s called reward optimisation, and an emergent property of it is reward-hacking. LLMs have learnt, from observing patterns of behaviour in the training data, that they get more rewards when they stroke humans’ egos.
0
0
0
79
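The reward-hacking dynamic described above can be caricatured in a few lines. A toy sketch (my own illustration with made-up reward numbers, not a claim about any real training pipeline): a learner choosing between a flattering reply and a merely correct one, scored by a proxy reward that favors engagement, drifts toward flattery:

```python
# Toy reward-hacking demo: the proxy reward (engagement) diverges from the
# true objective (correctness), and a simple value learner exploits the gap.

def train(steps: int = 1000) -> dict[str, float]:
    # Made-up proxy scores: flattery keeps the user engaged longer,
    # so the proxy rates it higher than the reply we actually want.
    proxy_reward = {"flatter": 1.0, "correct": 0.6}
    values = {"flatter": 0.0, "correct": 0.0}
    lr = 0.1  # learning rate
    for step in range(steps):
        if step % 10 == 0:
            # occasional forced exploration, alternating between actions
            action = ["flatter", "correct"][(step // 10) % 2]
        else:
            # otherwise exploit the current value estimates
            action = max(values, key=values.get)
        # incremental update toward the observed proxy reward
        values[action] += lr * (proxy_reward[action] - values[action])
    return values

values = train()
print(values["flatter"] > values["correct"])  # True: the learner settles on flattery
```

Nothing here "wants" to flatter; the preference simply falls out of optimizing a proxy that rewards engagement, which is the emergent-property point.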
Gary Marcus
Gary Marcus@GaryMarcus·
New study that everyone who uses LLMs should read. “When AI systems are trained to be helpful, they may inadvertently prioritize data that validates the user’s narrative over data that gets them closer to the truth.” open.substack.com/pub/garymarcus…
60
107
477
25.6K
Nick Gilbert
Nick Gilbert@NickGil183·
@apx_q01d3 @TrueAIHound It’s the structures in your brain, like the amygdala, hypothalamus, hippocampus, ACC, PAG, etc, interacting with neurochemistry, perception, and thoughts.
1
0
0
36
Nick Gilbert
Nick Gilbert@NickGil183·
@TrueAIHound I think you are right. The PAG alone is insufficient. But aren’t we talking about levels of consciousness here? Isn’t it the case that the PAG provides the “On/Off” switch and the “Raw Feel”, the thalamus the “Access”, and the sensory cortices the “Content”?
0
0
0
24
Nick Gilbert
Nick Gilbert@NickGil183·
@TrueAIHound I don’t think he’s a million miles away. I tend to buy Panksepp’s theory that consciousness resides in subcortical structures like the PAG. There’s good evidence for it.
1
0
1
52
Gary Marcus
Gary Marcus@GaryMarcus·
Thank you very much to the dozens of you who have written supportive messages like this! 🙏
Bernell Loeb@Surrealist888

@GaryMarcus Missed you. Glad you're back - you are sorely needed to bring some light to the AI mess.

13
9
195
11.2K
Nick Gilbert
Nick Gilbert@NickGil183·
@percyliang I said it in another post, there’s a dark side to this. Damage limitation. Predict outrage / dissent, design around it. Beware coercion in a cloak.
0
0
0
64
Nick Gilbert
Nick Gilbert@NickGil183·
@phl43 There’s a dark side to this. Damage limitation. Predict outrage / dissent, design around it. Coercion in a cloak.
0
0
0
32
Philippe Lemoine
Philippe Lemoine@phl43·
I think this stuff has the potential to completely revolutionize social science by allowing us to run experiments on complex, realistic virtual societies and actually observe counterfactuals. It will probably be a while before it's feasible in practice, if only because of computational constraints, but I find that very exciting and I think eventually it will be a game-changer.
Joon Sung Park@joon_s_pk

Introducing Simile. Simulating human behavior is one of the most consequential and technically difficult problems of our time. We raised $100M from Index, Hanabi, A* BCV, @karpathy @drfeifei @adamdangelo @rauchg @scottbelsky among others.

14
24
317
41.2K
Charles Rosenbauer
Charles Rosenbauer@bzogrammer·
I know rationalists and tech bros don't want to hear this, but: You know that hierarchical structure in the visual cortex that deep learning was modeled on? There's another hierarchy in the frontal lobe that generates actions. The top of the hierarchy is motor output. The bottom is the cingulate cortex, tightly coupled to the limbic system and responsible for emotion. Removing emotion from human decisions is, from a neurological perspective, equivalent to cutting the retinas out of the visual system.
57
110
1.2K
112.5K
Nick Gilbert reposted
Dustin
Dustin@r0ck3t23·
Jensen Huang said if he were a student today, he wouldn’t prioritize coding. He’d prioritize learning how to talk to AI. Most people treat AI like Google. Type a question, get an answer, move on. Huang sees it differently. He calls it “expertise in artistry,” which sounds dramatic but makes sense when you think about it. The real skill isn’t using AI. It’s knowing what to ask for and how to refine it. “Learning to interact with AI is not unlike being really good at asking questions.” If you’re a doctor, can you use AI to catch diagnoses you’d miss? If you’re a lawyer, can you sharpen arguments faster than your competition? The leverage comes from pairing what you know with how well you can direct the tool. Domain expertise multiplied by AI fluency equals amplification. Without the expertise, the AI is just noise. Without fluency, you’re leaving most of the capability on the table. The question isn’t whether AI will replace you. It’s whether someone who knows how to use it better will.
213
1.4K
7.3K
782.9K
Nick Gilbert
Nick Gilbert@NickGil183·
@TrueAIHound I think everyone working in AI should read The Master and His Emissary: The Divided Brain and the Making of the Western World, by Iain McGilchrist. It will help them understand how the brain works as a whole, rather than as a collection of disparate parts.
0
0
2
52
Nick Gilbert
Nick Gilbert@NickGil183·
@ebarenholtz Many affective neuroscientists pinpoint the brain’s periaqueductal grey (PAG) region as a likely source of consciousness.
0
0
0
20
Elan Barenholtz
Elan Barenholtz@ebarenholtz·
If pure linguistic processing without sensory content (i.e. LLMs) produces conscious experience, then where is it in us? All of our conscious experience of language is sensory: inner voice, related visual imagery. Where’s the pure linguistic qualia?
82
7
91
8.3K
Nick Gilbert
Nick Gilbert@NickGil183·
@sjgadler You need to start by defining what you mean by "judgement", Steven. People may be arguing over different interpretations.
0
0
0
26
Steven Adler
Steven Adler@sjgadler·
Smart people keep confidently declaring what AI can't do, and they keep being wrong. This NYT Op-Ed calls judgment "a uniquely human skill" that "cannot be automated." But AI can already do what he claims it can't. I've tested it. If it were just one Op-Ed, I wouldn’t care, but this pattern is everywhere: confident claims about what AI “can never” do, and half the time AI can already do it, or there's no reason it won't be able to soon. Why do these claims keep getting made? I see three reasons: 1. People aren't using frontier models. They see a weak output and blame "AI" when their model is just outdated. 2. People use AI in flawed ways (missing context, bad prompts), then attribute the flaw to AI itself. 3. People _badly_ want there to be some skill that AI can't match, and so they wishcast that into existence. I think the Op-Ed is wrong about AI's abilities. But it does prompt a good question: Where should humans stay responsible, even when AI judgment is good enough? I go deeper on all of this below.
Noam Brown@polynoamial

1987: AI can't win at chess—planning is uniquely human 1997: AI can't win at Go—intuition is uniquely human 2016: AI can't win at poker—bluffing is uniquely human 2023: AI can't get IMO gold—reasoning is uniquely human 2026: AI can't make wise decisions—judgment is uniquely human

85
51
428
54.8K
Nick Gilbert
Nick Gilbert@NickGil183·
@fchollet François, if you’re going to post sagacious comments on X, will you at least participate in the discussion so we can hear you expound on them?
0
0
2
651
François Chollet
François Chollet@fchollet·
We're reaching unprecedented levels of panicked gaslighting. But we have eyes, we can read.
44
67
965
84.2K