Rui Shu

702 posts

Rui Shu @_smileyball

I draw smileyball https://t.co/VZJD2Av8PY Writing organic artisanal handcrafted code @OpenAI Previously doing the same @Stanford

San Francisco · Joined July 2013
460 Following · 3.4K Followers

Pinned Tweet
Rui Shu @_smileyball
May the prompt be with you
Rui Shu tweet media
0 · 2 · 39 · 13.1K

Rui Shu @_smileyball
@NeelNanda5 "you get paid 2.5x if you agree to donate 0.8"
0 · 0 · 1 · 1.5K

Neel Nanda @NeelNanda5
I worded this badly: "paid 2.5X more" is via donation matching. It is a bad deal from a selfish perspective, and legally binding. E.g. you either:
- Get $400K equity yourself
- Or get $200K yourself, and donate $800K to charities of your choice (200 from you, 600 matched from Ant)
Neel Nanda@NeelNanda5

+1, this was one of my favourite parts of Anthropic culture when I worked there. When you get paid 2.5X more equity if you agree to donate half of yours, this filters well for people who actually care about doing good

English
13
4
419
82.9K
Rui Shu
Rui Shu@_smileyball·
should've named it VerTeX
0 · 0 · 4 · 464

Rui Shu @_smileyball
@distributionat Kinda cool bc "試" means "try". The unembedding for "試" and "try" must be quite close, which is representationally/semantically desirable, but awkward for generation.
1 · 0 · 16 · 801
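The "quite close unembeddings" claim above can be sanity-checked numerically. This is a toy sketch with a made-up 64-dimensional "unembedding"; in a real model these vectors would be rows of the unembedding matrix, and all names and values here are illustrative assumptions:

```python
import numpy as np

# Hypothetical unembedding rows for three tokens. "try" and "試" are built
# from a shared base vector plus small noise, mimicking near-synonym tokens;
# "cat" is an unrelated control token.
rng = np.random.default_rng(0)
base = rng.normal(size=64)
vecs = {
    "try": base + 0.05 * rng.normal(size=64),
    "試":  base + 0.05 * rng.normal(size=64),
    "cat": rng.normal(size=64),
}

def cosine(a, b):
    # Cosine similarity between two unembedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_try_shi = cosine(vecs["try"], vecs["試"])
sim_try_cat = cosine(vecs["try"], vecs["cat"])
# Semantically close tokens get a much higher cosine similarity than
# unrelated ones, which is exactly the "representationally desirable but
# awkward for generation" situation the tweet describes.
print(sim_try_shi, sim_try_cat)
```

Because next-token sampling scores every token against the same residual direction, two nearly parallel unembedding rows make the model nearly indifferent between emitting "try" and "試".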
toucan @distributionat
i just didn't expect Claude to be chinese
toucan tweet media
3 · 3 · 42 · 8K

Peter Schmidt-Nielsen @ptrschmdtnlsn
Every time this comes up, a bunch of people miss the point. Each one clearly values watching someone eat shit at >$100, and disvalues eating shit at <$100. This is two people with (utterly bizarre) preferences, getting gains from trade that they wouldn't have gotten were it not for dollars as an intermediary.

They both *want* to trade eating shit for watching someone eat shit, but needed dollars to coordinate that positive-sum trade across time! Had they come across the piles simultaneously, they could have negotiated eating simultaneously, but they can't.

This is *the* huge advantage of money, as a universal IOU; it enables "virtual trades" between parties separated in time and space.
no context memes@nocontextmemes

93 · 187 · 4.2K · 385.2K

Rui Shu @_smileyball
@ChuckBaggett @_beenkim I'm actually quite upset that there's not a <disable-link-clicking-unless-I-really-know-what-I'm-doing> functionality in gmail. That added bit of friction is a good forcing function for staying vigilant---especially when we're super distracted.
1 · 0 · 1 · 16

Been Kim @_beenkim
I got my account back! Thank you, first and foremost, to everyone—friends, GDM colleagues—who personally alerted me to this incident and retweeted that I'm hacked, as well as folks at X who helped me regain access. While this incident was terrible (I heard the scammers made huge money out of this), I feel incredibly lucky to have folks who cared♥️♥️♥️ (details of how this happened 👇)
24 · 6 · 176 · 22.5K

Rui Shu @_smileyball
@BenjaminDEKR neural nets can dream. so we should keep up by aggressively sleeping and dreaming more.
1 · 0 · 3 · 728

Benjamin De Kraker @BenjaminDEKR
Igor Babuschkin left xAI (he reported directly to Elon, built Grok 3 & 4) and shortly after, posted about the dangers of companies over-working their best people. Greg Yang (another direct Elon report, cofounded xAI) just left for health reasons and posted: "Until I pushed myself hard building xAI and weakened my immune system, the symptoms weren't noticeable."

I will not presume to speak for either of them. Both love their work. However: we need to take a serious look at cultures of overwork and what this does to people. When you create a corporate environment where employees COMPETE to be the most exhausted, the most sleep-deprived, this takes a toll. (This is exactly what many xAI workers have done, some posting about how they're so tired they can barely stay awake to drive home.)

If we truly are approaching a tipping point where AI eclipses human work, trying to keep up by never sleeping and working beyond exhaustion is not a winning strategy. The only way forward will be to play to our human strengths: creativity, intuition, and even literally dreaming. When top employees are putting themselves in the hospital from exhaustion, something is wrong.
113 · 62 · 1.3K · 142.7K

Rui Shu @_smileyball
@prajdabre > "I'll get some problems labeled and fine tune the model" is a perfectly good starting answer and a good research practice. Do the dumb simple thing first!
0 · 0 · 9 · 1.7K

Raj Dabre @prajdabre
I'm a research scientist at Google, and "I'll get some problems labeled and fine tune the model" is a perfectly good starting answer. The interview does not automatically end there when you suggest getting fine-tuning data as the first sentence.

Google interviews are not some gladiator trials; the interviewers are nice and actually prompt and encourage you to unpack your answers and elaborate, because they know that people can be nervous. Btw, verifiable responses or not, you still need golden responses for things like maths, so getting labeled problems is still a legit starting answer.

In my RS interviews I did not walk in with optimized responses. I went in, explained what I knew, and had a discussion (not an interrogation), so please focus on the core concepts (highlighted in the responses to the post).
Avi Chawla@_avichawla

You're in a Research Scientist interview at Google. Interviewer: We have a base LLM that's terrible at maths. How would you turn it into a maths & reasoning powerhouse? You: I'll get some problems labeled and fine-tune the model. Interview over. Here's what you missed:

22 · 29 · 1.1K · 127.9K

Rui Shu @_smileyball
@ForrestPKnight What I'd give to hear Ben Affleck say "typical set" instead of "mean"
0 · 0 · 0 · 256

Forrest Knight @ForrestPKnight
Honestly, Ben Affleck actually knowing AI and the landscape caught me off guard, but as a writer, makes sense. Great takes across the board.
1.7K · 9.6K · 111.4K · 16.2M

Joseph Garvin @joseph_h_garvin
"Wow, GPT 5.2 Pro seems to be doing much better in this Connect4 game than GPT 5.2 Thinking did, I wonder why?" *checks thought trace* "Oh mother-"
Joseph Garvin tweet media
45 · 48 · 2K · 314.6K

Rui Shu @_smileyball
@docmilanfar me using measure theory: "assume the pdf exists"
0 · 0 · 7 · 1.7K
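For readers who want the theorem behind the quip: a pdf for a distribution P exists exactly when P is absolutely continuous with respect to Lebesgue measure, by the Radon-Nikodym theorem. A minimal statement of what "assume the pdf exists" is quietly assuming:

```latex
% Radon--Nikodym theorem, specialized to Lebesgue measure \lambda:
% if P(A) = 0 whenever \lambda(A) = 0 (written P \ll \lambda),
% then a density p = dP/d\lambda exists.
P \ll \lambda
\;\Longrightarrow\;
\exists\, p \ge 0 \ \text{such that}\quad
P(A) = \int_A p \, d\lambda \quad \text{for all measurable } A.
% "Assume the pdf exists" = assume P \ll \lambda and write p for dP/d\lambda.
```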
Peyman Milanfar @docmilanfar
yet another year gone by and I've not used measure theory
29 · 19 · 325 · 70K

Max Kesin @nlpnyc
@ilex_ulmus I don't know to what degree this is coordination, but as @ESYudkowsky put it (IIRC), safety researchers at OAI were preselected to have positions consistent with their pre-existing policy. (This does suggest less personal dishonesty.)
1 · 0 · 5 · 449

Holly ⏸️ Elmore @ilex_ulmus
You’re not just talking to OpenAI and Anthropic staff members like they’re your friends on Twitter. They are part of coordinated comms efforts, with the interests of the companies in mind first. If they want to be taken at their word, they can give up those paychecks.
Holly ⏸️ Elmore@ilex_ulmus

@boazbaraktcs Guys, he is a mouthpiece for OpenAI. Anything he says is something they approve for a reason. He should not be allowed in your discussions like he’s just making arguments.

8 · 3 · 68 · 9.5K

Rui Shu @_smileyball
@eW8fkgAM52GS9xW @dwarkesh_sp I'm concerned about power inequality in a finite-resource world. While I may or may not care how wealthy someone else is (good for you btw!), I certainly do care how powerful people influence the world I live in. Thus my comment about agency and wanting my actions to matter c:
0 · 0 · 0 · 38

Hi @eW8fkgAM52GS9xW
@_smileyball @dwarkesh_sp I really just don't think I'd care. Even today if I had a nice house and good food and the ability to travel and spend time with family, I wouldn't care all that much about how wealthy Elon Musk is. I'm already less miserable than he is, what more could I need?
1 · 0 · 2 · 58

Dwarkesh Patel @dwarkesh_sp
I've seen a lot of people misunderstand what we're saying. Our claim is that in a world of full automation, inequality will skyrocket (in favor of capital holders).

People aren't thinking about the galaxies. The relative wealth differences in a thousand years—or a million—will be downstream of who owns the first dyson swarms and space ships. And space colonization isn't bottlenecked by people's preference for human nannies and waiters. So even if you can make 10 million dollars a year as a nanny in the post-abundance future, or get a 10 million dollar charity handout, Larry Page's million cyborg heirs can own a galaxy each.

You might think this is fine! Why is inequality intrinsically bad, especially if absolute prosperity for everyone goes up? Fair enough, but to me quadrillion-fold differences in wealth between humans seem hard to justify in a world where AIs are doing all the work anyways - these disparities in wealth are not incentivizing hard work or entrepreneurship or creativity, which is what we use to justify inequality today.

Just to recap, full automation kills the corrective mechanism on runaway capital accumulation - which is that you need labor to actually make productive use of your capital, thus driving up wages.

Some people asked: why assume AGI leads to full automation? Maybe people will still prefer human nannies and waiters. Even if true, we think labor's share of GDP—which has been roughly 2/3 for centuries—would still likely collapse toward zero, massively increasing inequality. Here's why.

It sometimes happens that when machines are only slightly better than humans, people pay a premium for the human version. But once machines become much better, that preference disappears. When carriages were not much faster than being carried on a litter, the rich sometimes preferred the litter. Now they prefer the car. They might still have a chauffeur—but once self-driving vehicles are allowed to move far faster, human-driven cars may be relegated to a slow lane.

If the economy grows 100x, wages must also grow 100x for labor's share to stay at 2/3. But prices are relative—so this means human labor becomes 100x more expensive compared to AI-produced goods. A human-cooked meal costs 100x what the robot version does. For labor share to hold steady as that ratio grows to 1,000x, then 10,000x, the preference for human-made goods would have to become increasingly fanatical.

And there's a second problem: the higher wages rise, the greater the incentive to develop machine substitutes for whatever services humans still provide. The premium on human labor is precisely what incentivizes its own replacement.

Just to clarify a few other things:

- "Piketty's long run series are disputed." We spend a long chunk of the essay explaining why Piketty is wrong about the past! But we're arguing that the assumption he makes (specifically that labor and capital are substitutes) would be true of a world with advanced enough automation. We spend so much time rebutting his claims about the past because the wronger you think he was about the past, the more you think will change once his assumption comes true.

- "A capital tax would lower growth." Yes, as we point out, capital taxes incentivize consumption now instead of saving and investing for the future, at the margin. But if capital is the only factor of production, then it's hard to come up with an inequality-capping tax that doesn't lower growth.

- "Capital can escape, both across time and space. This makes a wealth tax impractical." We agree! As we say in the essay and in the tweet summary below, it would be really hard to implement Piketty's flagship solution (a high and progressive global wealth tax). You could go Georgist and try to tax land, but the natural resource share of income is only 5% and is likely to stay low until we hit "technological maturity" for reasons we explain in the essay.

We don't see any easy ways to avoid (literally) skyrocketing inequality - in fact, that's what inspired us to write the essay and explain this problem in the first place.

Also, to address a subtext: I think the currently proposed California wealth tax is a very bad idea for many reasons. This essay is about inequality under full automation, not about how California can make its healthcare expenditures more sustainable.
Dwarkesh Patel@dwarkesh_sp

New blog post w @pawtrammell: Capital in the 22nd Century

Where we argue that while Piketty was wrong about the past, he's probably right about the future.

Piketty argued that without strong redistribution of wealth, inequality will indefinitely increase. Historically, however, income inequality from capital accumulation has actually been self-correcting. Labor and capital are complements, so if you build up lots of capital, you'll lower its returns and raise wages (since labor now becomes the bottleneck).

But once AI/robotics fully substitute for labor, this correction mechanism breaks. For centuries, the share of GDP that goes to paying wages has been 2/3, and the share of GDP that's been income from owning stuff has been 1/3. With full automation, capital's share of GDP goes to 100% (since datacenters and solar panels and the robot factories that build all the above plus more robot factories are all "capital").

And inequality among capital holders will also skyrocket - in favor of larger and more sophisticated investors. A lot of AI wealth is being generated in private markets. You can't get direct exposure to xAI from your 401k, but the Sultan of Oman can. A cheap house (the main form of wealth for many Americans) is a form of capital almost uniquely ill-suited to taking advantage of a leap in automation: it plays no part in the production, operation, or transportation of computers, robots, data, or energy.

Also, international catch-up growth may end. Poor countries historically grew faster by combining their cheap labor with imported capital/know-how. Without labor as a bottleneck, their main value-add disappears.

Inequality seems especially hard to justify in this world. So if we don't want inequality to just keep increasing forever - with the descendants of the most patient and sophisticated of today's AI investors controlling all the galaxies - what can we do?

The obvious place to start is with Piketty's headline recommendation: highly and progressively tax wealth. This might discourage saving, but it would no longer penalize those who have earned a lot by their hard work and creativity. The wealth - even the investment decisions - will be made by the robots, and they will work just as hard and smart however much we tax their owners.

But taxing capital is pointless if people can just shift their future investment to lower-tax countries. And since capital stocks could grow really fast (robots building robots and all that), pretty soon tax havens go from marginal outposts to the majority of global GDP. But how do you get global coordination on taxing capital, when the benefits to defecting are so high and so accessible?

Full automation will probably lead to ever-increasing inequality. We don't see an obvious solution to this problem. And we think it's weird how little thought has gone into what to do about it. Many more thoughts from re-reading Piketty with our AGI hats on at the post in the link below.

279 · 220 · 3K · 1.2M
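The 100x wage arithmetic in the thread above can be made concrete with a toy calculation (all numbers are hypothetical, chosen only to mirror the 2/3 labor share cited in the tweet):

```python
# Toy version of the labor-share argument: total output grows 100x, and we
# ask how much wages must grow for labor's share to stay at 2/3.
labor_share = 2 / 3
output_before = 3.0                          # arbitrary units
wages_before = labor_share * output_before   # labor income at the start

growth = 100                                 # economy grows 100x via AI goods
output_after = output_before * growth

# Wages needed to keep labor's share of GDP constant at 2/3:
wages_needed = labor_share * output_after
wage_multiplier = wages_needed / wages_before

# Wages must grow in lockstep with output, i.e. ~100x.
print(wage_multiplier)
```

Since prices are relative, the same multiplier shows up as the price of a human-provided service versus its AI-produced substitute, which is the "increasingly fanatical preference" point in the thread.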
Rui Shu @_smileyball
@wavage_ the analogy between GLP1 and LLM isn't perfect, but I'm in full support of deliberately overcoming <physically/cognitively painful problems> as a means of character-building and giving us, as Asimov puts it, a feeling of power c:
0 · 0 · 2 · 303

Rui Shu @_smileyball
@docmilanfar only if you're lucky enough to be stubborn about the right things 🙃
1 · 0 · 8 · 620

Peyman Milanfar @docmilanfar
The stubbornness that makes you hard to manage as a young scientist is the same trait they call “unwavering dedication” when they give you a lifetime achievement award.
15 · 27 · 381 · 25.5K

Rui Shu @_smileyball
@andrewgwils NFL is useful! It helps me sleep at night bc it's how I convince myself that finite capacity NN + shitty SGD optimization is a feature, not a bug c: It's a great reminder that the universal function approximator narrative that was popular a ~decade ago is pure bait >:c
0 · 0 · 2 · 268

Andrew Gordon Wilson @andrewgwils
This is an annual reminder that the no free lunch theorems are irrelevant. The assumptions they make are completely divorced from the world we live in. They should have no bearing on model construction. Let's make this a monthly mantra.
17 · 17 · 301 · 50.5K

Rui Shu @_smileyball
@max_spero_ wow it's almost as if we trained the models via distribution matching
0 · 0 · 5 · 633

Max Spero @max_spero_
You're not gonna believe this
Max Spero tweet media
9 · 0 · 116 · 11.7K

Rui Shu @_smileyball
@willdepue @dwarkesh_sp @tszzl I think LLMs have no "taste" for what is a deep vs superficial insight. Imo a deep insight helps me decide next steps, etc. Talking to colleagues leads to more of such insights. But LLMs are great for translating my colleagues' deep insights into something my brain groks c:
1 · 0 · 18 · 674

will depue @willdepue
not sure if just my experience but feel like the models still are really bad at specializing educational information based on your background. if there’s prompts you like lmk but often there’s no line between ELI5s and a mile of proofs and equations for complex topics. getting an intuition for a new field at a deep level is still difficult
12 · 2 · 113 · 5.9K

roon @tszzl
even minor aristocrats could not get the kind of tutor on demand for their kids that every single family who can pay twenty dollars a month can now
170 · 158 · 3.1K · 294.5K

✶ @lumonheiress
why does she look like she’s gonna ask a historian if king arthur came a lot
✶ tweet media
823 · 33.4K · 473.2K · 6.2M

Rui Shu @_smileyball
@johnschulman2 Humans that are too jagged, unfortunately, end up not integrating effectively into many organizations.
0 · 0 · 2 · 337