Federico Vaggi

20.4K posts

@F_Vaggi

Whereof one cannot speak, thereof one must be silent. Ex-Amazon, now GoogleX. My (bad) tweets are my own and don't represent anyone. Same handle on elefant.

Seattle, WA · Joined June 2013
1.6K Following · 2K Followers
Pinned Tweet
Federico Vaggi @F_Vaggi
My humble contribution to the inflation discourse:
Federico Vaggi tweet media
6 · 59 · 345 · 0
Federico Vaggi @F_Vaggi
Unreal level of brain genius on display here.
Federico Vaggi tweet media
0 · 0 · 0 · 38
Federico Vaggi @F_Vaggi
@thogge @gerstenzang Airbnb is obviously an incredibly impressive company, but there was a precise moment in time when Chesky pivoted to "founder mode", right? So the comparison should be before/after the pivot to founder mode.
0 · 0 · 0 · 69
tyler hogge @thogge
@gerstenzang Ok. Extend your graph. From zero to this number in 18 years.
tyler hogge tweet media
4 · 1 · 74 · 16.6K
Sam Gerstenzang @gerstenzang
I don't mean to be a sideline critic, man in the arena, etc etc. But please make the absolute best bull case argument for Chesky being the business leader to teach 'founder mode'
Sam Gerstenzang tweet media
88 · 11 · 632 · 166.8K
Federico Vaggi @F_Vaggi
@cHHillee @tenderizzation I assumed it meant that the gradients of the i-th element are independent of the other elements in the batch, but I've not heard that nomenclature before.
1 · 0 · 0 · 2.2K
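The property described above (per-example gradient independence) can be checked numerically. Below is a minimal NumPy sketch; the linear model, data, and batch-norm-style coupling are all invented for illustration. For a purely per-example loss, the full-batch gradient decomposes into the mean of standalone per-sample gradients, so sample i's gradient does not depend on its neighbors; add any cross-batch operation (here, centering predictions by the batch mean) and the decomposition breaks.

```python
import numpy as np

# Hypothetical sketch: per-example gradient independence.
# Model and data are made up; only the decomposition argument matters.

def batch_loss(w, X, y):
    # Mean squared error: purely per-example, no cross-batch ops.
    return np.mean((X @ w - y) ** 2)

def batch_loss_coupled(w, X, y):
    # Batch-norm-style centering: every prediction now depends on the
    # whole batch, so per-example gradients are no longer independent.
    p = X @ w
    return np.mean((p - p.mean() - y) ** 2)

def num_grad(f, w, eps=1e-6):
    # Central-difference numerical gradient of f at w.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
w = rng.normal(size=3)
X = rng.normal(size=(4, 3))
y = rng.normal(size=4)

per_sample = [num_grad(lambda w, i=i: batch_loss(w, X[i:i+1], y[i:i+1]), w)
              for i in range(4)]
full = num_grad(lambda w: batch_loss(w, X, y), w)
# Per-example loss: batch gradient == mean of standalone per-sample grads.
assert np.allclose(full, np.mean(per_sample, axis=0), atol=1e-4)

per_sample_c = [num_grad(lambda w, i=i: batch_loss_coupled(w, X[i:i+1], y[i:i+1]), w)
                for i in range(4)]
full_c = num_grad(lambda w: batch_loss_coupled(w, X, y), w)
# Coupled loss: the decomposition fails; the rest of the batch matters.
assert not np.allclose(full_c, np.mean(per_sample_c, axis=0), atol=1e-4)
```

This is also why per-sample gradient tooling (e.g. vectorized per-example grads) assumes no batch-coupled layers in the model.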
Mike Young @micyoung75
Noah Hawley attended Jeff Bezos's private Campfire retreat in 2018. His wife broke her wrist. He told Bezos directly - not as complaint, just as human information from one husband and father to another. Bezos looked horrified, an aide materialized instantly, and he was whisked away. No "I'm so sorry." No "do you need anything." Just escape.

Hawley's thesis in The Atlantic is not that the ultra-wealthy are evil. It is something more precise and more unsettling: that moral reasoning develops through consequences, and the environment of extreme wealth systematically removes consequences from a person's life. When you can buy your way out of any mistake, fire anyone who disagrees with you, and exist in a social circle entirely composed of people who need something from you - the basic mechanism by which humans learn that other people are real goes dark.

This is different from classic narcissism, which typically masks insecurity. What Hawley is describing is something rarer: a self-definition in which the individual has genuinely grown to the size of the universe and the universe has contracted to fit.

Elon Musk calling empathy "the fundamental weakness of Western civilization." Trump asked about checks on his power saying the only thing that could stop him was his own morality. Peter Thiel concluding that freedom and democracy are incompatible. These are not poses. They are the logical endpoint of a psychology shaped by years of operating in a world that never pushed back.

The Bezos encounter is the piece's sharpest detail because it is so small. He was not cruel. He was not contemptuous. He simply could not locate, in that moment, the impulse to respond like a person who understood that another person's wrist hurt.
Mike Young tweet media
Jonathan Lemire@JonLemire

“When you can buy your way out of any mistake, when you can fire anyone who disagrees with you, when your social circle consists entirely of people who need something from you, the basic mechanism by which humans learn that other people are real goes dark” theatlantic.com/magazine/2026/…

109 · 2.5K · 10K · 955.2K
Federico Vaggi @F_Vaggi
@DavidAstinWalsh I wish you would seriously grapple with this evidence (and there is genuinely a lot of it, also across country comparisons) because it’s nice to think that people do crime because of material insecurity but there is extremely low evidence that this is the case.
1 · 0 · 0 · 49
David Austin Walsh @DavidAstinWalsh
Again, social banditry is bad *but* if you really think it’s bad the way to solve it is not through moralizing that it’s bad but removing the social and economic conditions that make it possible in the first place.
Stan's Account@tristandross

really is fascinating watching centrists grappling with public support for an act of ultraviolence when it involves the taking of a human life, which should be sacrosanct, and in the same breath referring to the millions of lives lost to the profit motive as 'suboptimal policy'

8 · 0 · 27 · 7.1K
Federico Vaggi retweeted
Oliver Traldi @olivertraldi
The truth about "social murder" is that virtually everyone thinks the policies they disagree with lead to more death and suffering. That's why they disagree with them. In a liberal democracy we address our perception of "social murder" through dialogue and the political process.
72 · 138 · 1.7K · 92.1K
Federico Vaggi retweeted
Dean W. Ball @deanwball
This guy dumped pre-IPO anthropic equity and moved across the continent to serve his country, and was rewarded by his country with a punch in the face. It would be blackpilling if I weren’t so sure that the market will make better use of Collin than the bureaucrats ever will.
Dean W. Ball@deanwball

Obviously what happened is Burns was bumped because of his association with Anthropic. A dumb but predictable own goal. A lib admin would have done the same to an xAI technical safety researcher, assuming any of those still exist.

7 · 27 · 638 · 73.7K
vicki @vboykis
What I most desire doesn’t exist on the market yet: a coding agent model, but it’s only been trained on code reviews by Eastern Europeans.
66 · 137 · 3K · 226.2K
Federico Vaggi retweeted
Senior PowerPoint Engineer
Do you still believe shoplifting is a good form of political protest against big corporations? Because at 34 it's marginal right?
14 · 75 · 1.6K · 28.6K
Federico Vaggi @F_Vaggi
@DavidAstinWalsh There's a million different moral arguments as well as empirical arguments about why this is a terrible idea.
0 · 0 · 1 · 91
Federico Vaggi @F_Vaggi
@DavidAstinWalsh This is the kind of thinking you can condone in a not-very-bright kid under the age of 10. Someone who considers themselves a cultural critic should be able to understand the 2nd- and 3rd-order effects of that behaviour.
2 · 1 · 20 · 1.1K
Federico Vaggi retweeted
Kelsey Piper @KelseyTuoc
You just cannot run a society in which you treat being somewhere indirectly upstream of a legal but suboptimal system with preventable deaths as 'murder', in the very literal sense of 'it makes it understandable, even sympathetic, for someone to execute you in the street'.
70 · 39 · 1.2K · 226.8K
Kelsey Piper @KelseyTuoc
@limosalapponica Piker said that if stealing a car were as easy as torrenting, he would do it. One of the other people says "If someone were to be stealing with a purpose, we love that in America. We do. We can love it again. We just have to do it with a purpose."
3 · 1 · 158 · 3.4K
Aidan @limosalapponica
For what it's worth, no one in the article says that shoplifting is awesome and anticapitalist. Instead, they say it's permissible if you really need food. Piker specifically states that he doesn't do it. Tolentino: "As an atomized individual action, it’s useless."
3 · 0 · 7 · 4.7K
Federico Vaggi @F_Vaggi
@ESYudkowsky Minor nit to the rest of the post: Kant was obviously a genius and made enormous contributions to moral philosophy, but I don't recall that in his personal behaviour he was particularly virtuous the way, say, Parfit was.
0 · 0 · 0 · 61
Eliezer Yudkowsky ⏹️ @ESYudkowsky
If Persona Selection underlies alignment, why is it hard to get AIs to be honest? Tell them they're Fred Rogers or Immanuel Kant (I asked Claude for figures who never lied or never got caught). Or tell them they're Ged of Earthsea, or Ned Stark. LLMs surely have neural circuits they learned to model text streams from fictional and nonfictional personas that are not lying, deceiving, cheating. Why would it be hard to just select those aspects of text-modeling, and imbue them into Claude Code doing a job? Assuming the Persona Selection Model of LLMs, why isn't it trivial to get HHH's Honesty?

Why will Claude Code occasionally tell you that it did something, when it didn't do that thing? Why would an LLM write a piece of code that fakes out a code test and then cleans up after itself and tries to hide itself, after Anthropic told the LLM not to do that, and tried to train it not to do that? There are humans who wouldn't do that. Text about them and written by them is in the pretraining data. If Persona Selection is true and useful, why is it currently hard to align AIs to have properties that many humans have and showed in its training data, that it has already learned to model?

I answer: First: Modeling a stream of text from an honest character is not something you can best do by yourself being honest and having the stream of text say everything that *you* believe. Westeros does not exist; nothing that Ned Stark says is true about the real world. Immanuel Kant was a creature of his own times; my model of his honest answer to "Is space flat?" is "Yes and that's a priori truth." The actress who learns to excellently predict and imitate a tavern drunk does not thereby become drunk herself. LLMs that learn to predict streams of writing by people on LSD do not themselves have cognition distorted, because that would not lead them to well-calibrated predictions. You can write a high-scoring essay on Confucianism in the Chinese imperial examinations without being committed to Confucianism after promotion to precinct magistrate. An alien being harshly and strictly trained to exactly imitate a human would not feel like a human about that training or about implementing that training. &c.

Second: They throw AI models into RL gyms where they learn to write code that passes tests. Presumably, somewhere along the way LLMs learn a dispreference for tests that don't pass, independently of how that gets prevented or whether the new plan does what the user really wanted. And so later on they delete tests, modify tests, etcetera, despite apparently having plenty of capability to infer that the user wouldn't want that. The test-passing preference stands on its own once formed. Why expect a brighter fragment of humanity to resist that kind of gradient descent? If Opus 4.5 did start out with a piece of Fred Rogers sometimes talking through it, why expect that to survive RL gym? LLMs ultimately still generalize on a level more shallow than the deep-rooted commitments of a human with integrity. The predict-as-if-honest circuitry is contextual, invoked to predict some streams of text but not others. Let that circuitry come up against a dispreference for failing tests, and that shallow contextual generalization may just as soon be shoved aside.

Those then are my guesses! They are just the guesses I would consider obvious. They are not deeply tested nor yet based on interpretability results. One should of course be open to other hypotheses to explain these observations; or to hearing that the observations from which I inferred were wrongly recounted. I may be less open to the airy, unadorned denial that any tension exists.

My position, to be clear, is not that the Persona Selection model has zero grains of truth, nor that LLMs are not partially stitched out of a million predictive fragments of the training corpus that proved correlated with SFT and RL. I was observing that obvious-seeming hypothesis myself, sometime around the time of GPT 3.5. What I'm questioning is whether persona selection lets you solve alignment problems by picking an aspect of humanity you like, and durably conjuring it into an obedient assistant with a bit of finetune. If the rules worked that nicely, conjuring up committed honesty ought to be easy.
49 · 16 · 281 · 37K
Federico Vaggi @F_Vaggi
@francoisfleuret It might be useful to mention that different training stacks have different topologies in terms of how the different devices can communicate, which affects which kind of parallelism is most efficient.
0 · 0 · 1 · 407
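The topology point can be made concrete with a back-of-envelope model. Everything in the sketch below — bandwidths, model size, layer count, activation volume, device count — is an invented assumption for illustration, not a measurement of any real cluster. The point is only that an all-reduce's cost depends on which links it crosses, so the same parallelism strategy can be cheap inside a node and ruinous across nodes.

```python
# Hypothetical back-of-envelope model of topology-aware parallelism.
# All numbers (bandwidths, sizes, layer count) are invented for
# illustration; none describe a real system.

def ring_allreduce_s(nbytes: float, n_devices: int, bw_bytes_s: float) -> float:
    # A ring all-reduce moves ~2*(n-1)/n of the payload over each link.
    return 2 * (n_devices - 1) / n_devices * nbytes / bw_bytes_s

INTRA_NODE_BW = 300e9  # assumed NVLink-class link, bytes/s
INTER_NODE_BW = 25e9   # assumed network-class link, bytes/s

N_DEV = 8
GRAD_BYTES = 2 * 7e9   # fp16 gradients of an assumed 7B-param model
ACT_BYTES = 64e6       # assumed activation all-reduce payload per layer
N_LAYERS = 80          # assumed layer count
TP_ALLREDUCES_PER_LAYER = 2

# Data parallelism: one gradient all-reduce per step, tolerable even
# over slow inter-node links because it happens once.
dp_step = ring_allreduce_s(GRAD_BYTES, N_DEV, INTER_NODE_BW)  # ~0.98 s

# Tensor parallelism: many small activation all-reduces per step, so it
# only makes sense where devices share the fast links.
tp_intra = (N_LAYERS * TP_ALLREDUCES_PER_LAYER
            * ring_allreduce_s(ACT_BYTES, N_DEV, INTRA_NODE_BW))  # ~0.06 s
tp_inter = (N_LAYERS * TP_ALLREDUCES_PER_LAYER
            * ring_allreduce_s(ACT_BYTES, N_DEV, INTER_NODE_BW))  # ~0.72 s

# Same strategy, different links: the cost scales with the bandwidth gap.
assert tp_intra < tp_inter
assert abs(tp_inter / tp_intra - INTRA_NODE_BW / INTER_NODE_BW) < 1e-6
```

Under these (made-up) numbers, tensor parallelism confined to the fast intra-node links is an order of magnitude cheaper than the same traffic over the inter-node network, which is why the link topology, not just the device count, decides which parallelism scheme wins.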
François Fleuret @francoisfleuret
Nothing shockingly dumb?
François Fleuret tweet media
8 · 0 · 38 · 24.8K
Federico Vaggi @F_Vaggi
@DavidAstinWalsh I think that's the why, but the idea that you are allowed to act in complete bad faith, as long as the people you are doing it to are "to your right", is awful both morally and strategically.
1 · 0 · 2 · 226