Pierre Chuzeville

1.1K posts

@PChuzeville

prev. @lattice_fund | @dovemetrics (acq. by @MessariCrypto)

🇫🇷 Joined June 2011
1.9K Following · 1.4K Followers
Pierre Chuzeville retweeted
corsaren @corsaren
types of guy in the AI consciousness debate:
- guy who thinks ai can’t be conscious because it’s “just a stochastic parrot”
- guy who thinks ai must be conscious because claude is a good boi
- guy who hasn’t gotten over 4o
- guy who unironically thinks everything is computer
- guy who claims to have a more nuanced argument for computational functionalism, but it just boils down to everything is computer
- dualist whose belief in dualism is downstream of their belief in god, yet tries to argue the inverse
- guy who doesn’t understand the difference between cognition and p-consciousness
- guy who asserts illusionism but has apparently wrestled with zero of the implications other than “reductive materialism wins again”
- guy who says the hard problem is easy, but then proceeds to only answer the easy problem
- guy who rejects ai consciousness because otherwise it might be wrong to abuse claude with death threats to make CRUD apps faster
- guy who argues that consciousness is the key to moral patienthood, but completely ignores that when discussing animal rights
- eliezer yudkowsky being pedantic
- guy being pedantic about eliezer yudkowsky’s pedantry
- guy who rejects dualism because that would make mind uploading impossible and mean that he finally has to confront the inevitability of his own death
- guy who thinks this argument is unresolvable so everyone should just shut up and accept his position (which obviously deserves the benefit of the doubt)
- guy who would literally cut off his own hand if he thought there were a 1 in 10 trillion chance of creating ~infinite utility~
- guy who just thinks that redness is, like, super weird, man. can’t explain that!
- guy with a rarely-updated philosophy blog despite not majoring in philosophy or even reading that many books, talking about how “the whole field is up its own ass”
- academic philosopher who, for some reason, expects a higher caliber of discussion on x dot com the everything app
- guy who thinks that vectors are literally emotions and bites the bullet that, yes, your thermostat does feel hot
- panpsychist who took dmt once and contributes almost nothing to the conversation
- guy who is literally a solipsist but is still really invested in convincing strangers on the internet that he’s right

any that i missed?
Pierre Chuzeville retweeted
FleetingBits @fleetingbits
some quick thoughts on mythos and tiered deployment

1) as models get more powerful and start to show dangerous capabilities, it will make sense to do tiered deployment to trusted enterprises first
2) this will enable domain experts and companies with access to real world environments to evaluate the extent of capabilities uplift created by the model
3) as in the present case, it may also open up the ability of these companies to use these models defensively as in the case of cyber
4) it especially makes sense to deploy those models first to companies with expertise in high risk domains e.g. pharmaceutical, chemical and cybersecurity
5) but, this may end up being controversial as it will give some companies access to the next generation of models first
6) and, these companies will have first mover advantage building around the new capabilities
7) in addition, labs will probably tend to give preference to those companies with whom they have advantageous commercial relationships or in whom they have invested
8) i remember that there was a brief period in 2023/2024 where it felt like openai fund companies had a big advantage because they got early access
9) but, then the market opened up and became more competitive and releases became more frequent and this seemed no longer to be an issue
Pierre Chuzeville retweeted
Christian Catalini @ccatalini
@karsenthil The bottleneck isn't "right model, right inputs." It's that verifying agent output requires the same expertise the agent was supposed to replace. You need the expert to check the expert-replacement. That's the problem almost no one in this space is solving for.
Pierre Chuzeville @PChuzeville
The "safety = moral tax" framing ignores enterprises' future reality imo, because companies buy risk-managed systems rather than cool models. For now safety is seen as a moral stance, but once agents touch money/health/etc., it becomes a procurement requirement. So in that sense market incentives would force it.
Michael Dempsey @mhdempsey
(New essay) AI Safety Has 12 Months Left (if any time at all)

Faster takeoff scenarios along with the events of the past week leave a narrow window to restructure AI safety as an enterprise premium instead of a consumer tax. If unable to execute this, the AI Safety movement is likely out of time.
Pierre Chuzeville retweeted
fiskantes ⭐️🩸 @Fiskantes
Obviously it doesn't help that any AI experiment such as Moltbook is immediately populated by countless crypto grifts trying to pump their memecoin off of the attention. Crypto bros really became the epitome of annoying parasites... like you can't enjoy a bit of lakeside summer vacation without mosquitos swarming your face
Pierre Chuzeville retweeted
Ian Lapham @ianlapham
trend I’m noticing

People think they’re being productive using fancy agent setups and AI tools

But in reality it’s mostly dopamine loop chasing and procrastination

It feels very smart and useful to have AI generate you some massive block of analysis and strategy. The brain loves the behavior of “prompt and see”, it’s literally a variable reward (like scrolling or gambling)

But doing anything useful in the world takes a lot of time, consistency, and many years of just doing the boring things over and over again

Startup costs are now 0, but long term execution is still very hard. Most people will perpetually pivot because of this
Pierre Chuzeville retweeted
Lex @xw33bttv
Incentivising users to make purchases based on chat context, by proactively surfacing specific products, prices, images, reviews and buy buttons right in the conversation, is functionally identical to targeted advertising, no matter how it's branded as "helpful shopping research" or "agentic assistance."

Until OpenAI publicly and unambiguously declares, in clear policy language, that neither OpenAI nor any entity acting on its behalf (including partners in the Agentic Commerce Protocol, Instant Checkout integrations, etc.) receives any commission, revenue share, transaction fee, affiliate payment, placement incentive, referral remuneration, or other financial benefit whatsoever tied to:

1. clicks or click-throughs on recommended products
2. the ranking / prominence / inclusion of specific items in results
3. actual completed purchases originating from ChatGPT recommendations

...then these features remain advertising in practice, regardless of whether they're currently "organic and unsponsored" in some flows (as OpenAI hilariously claims at their 2025 Instant Checkout launch lmfao) or whether the monetisation is confined to checkout-only at present.

btw - OAI has already confirmed merchant fees on completed Instant Checkout purchases (reported as 2–4% depending on partner/timing), and Sam Altman has openly discussed affiliate-style commissions (2%) as a preferred path. Without a firm "zero financial incentive across all shopping surfaces and recommendation modes" statement covering the exact experience people are seeing today, then it's literally ads and nothing will change that.
Pierre Chuzeville retweeted
fabian @fabianstelzer
The AI assistant Moltbot / Clawdbot trilemma is that you only get to pick two of these until prompt injections are solved:
- Useful
- Autonomous
- Safe
Pierre Chuzeville @PChuzeville
"Our memories, our thoughts, our designs should outlive the software we used to create them. An app-agnostic storage (the filesystem) enforces this separation." files > apps overreacted.io/a-social-files…
Pierre Chuzeville retweeted
Moll @Moleh1ll
AI safety today is increasingly reduced to one simple formula: «if there is risk - forbid it». And this looks reasonable exactly until the moment you realize that «irreversibility» and «impact» are almost impossible to formalize in a way that doesn’t kill the system’s very ability to be useful.

In medicine, there is an intuitive principle: do not choose an irreversible path under uncertainty. If you’re not sure, it’s better to do something that can be undone. But as soon as we try to transfer this logic to AI, we fall into a trap. And, as Cass Sunstein rightly noted, time is linear - which means that, in some sense, every decision is irreversible. Decisions always leave a trace: any text, any action can influence outcomes, just as inaction can. And if we take this principle literally, then «irreversible» becomes almost everything. The strict version leads to paralysis. And then safety turns not into protection, but into a wall of prohibitions.

From there, substitution begins. Instead of «minimizing irreversible harm», the industry optimizes proxy metrics: who refuses more often, who smooths more aggressively, who says fewer sharp things. On graphs this looks like «alignment». In reality, it’s often just a muzzle. Safety starts being measured as obedience.

The problem is that real safety is intelligence: the ability to recognize context, distinguish play from real risk, notice vulnerability and still not cause harm. Restrictions do not replace understanding - they only conceal its absence. AI safety should be about the quality of decisions under uncertainty, not maximizing the number of «I can’t». Otherwise, we end up with the «safest» system - one that simply stops being needed exactly where it could have genuinely helped.
Jan Leike @janleike

Interesting trend: models have been getting a lot more aligned over the course of 2025. The fraction of misaligned behavior found by automated auditing has been going down not just at Anthropic but for GDM and OpenAI as well.

Pierre Chuzeville @PChuzeville
"The important details you haven’t noticed are invisible to you, and the details you have noticed seem completely obvious and you see right through them. This all makes it difficult to imagine how you could be missing something important." johnsalvatier.org/blog/2017/real…
Pierre Chuzeville @PChuzeville
.@wabi build me an OS-level app that intercepts X, IG, TikTok before the feed loads. Make it force a 5–10s pause (could be breathing/meditation exercise), or a simple intent check (“why am I opening this?”)
wabi @wabi
@PChuzeville love this direction. building a french-friendly version with smarter discovery, back in a few mins.
Pierre Chuzeville retweeted
Stratechery @stratechery
AI and the Human Condition

AI might replace all of the jobs; that's only a problem if you think that humans will care, but if they care, they will create new jobs. stratechery.com/2026/ai-and-th…