Rupal Jain

125 posts


@rupal15081

Looking for fellowships/internships | CS PhD student | UNESCO Research Fellow | Doctoral Fellow at GMU | ex-Google | ex-Adobe | ex-Mozilla

India · Joined October 2016
521 Following · 80 Followers
Rupal Jain retweeted
👉M-Û-R-Č-H👈@TheEXECUTlONER_·
This isn't a trained trick; it's a wild transaction. Sea otters carry "favorite rocks" in their armpits to crack open clams. This otter realized that humans value "things," so he offered his prized possession, a perfect white stone, in exchange for a high-value item, a fish. He literally invented currency on the spot. How cool and adorable is that? ❤️
Rupal Jain@rupal15081·
@sebkrier I just realized the housing shortage works the same way. You can't regulate your way out of it; you have to build. That means keeping the private sector engaged, not scared off.
Séb Krier@sebkrier·
Conventional wisdom in safety circles assumes something like "new technologies like AI are risky, can be misused, etc., so we should slow down." I always find this kind of framing regressive and wrong, for similar reasons to why I dislike degrowth discourse. I've only skimmed the paper (h/t Marginal Rev!), but the authors argue this framing is wrong because slowing down has large costs too: you're stuck longer with existing dangers, you delay the arrival of safer futures enabled by technology, and you postpone the point where society is rich enough to prioritize safety. The risk-minimizing approach might be "go fast while investing heavily in safety" rather than "go slow."

The stronger argument is actually: if current technology poses any ongoing risk - and it does (nukes, biotech, climate forcing) - then stagnation mathematically guarantees eventual catastrophe. Zero growth basically drives cumulative risk toward certainty.

From a policy pov, if safety measures can't keep up with rapid technological change - institutions are too slow, expertise can't scale, etc. - then speed could undermine the ability to protect society. This is why I've been banging on about the need for investing in state capacity, faster public-sector adoption of AI, reforming decaying institutions, etc. The solutions to a safer world aren't always about the models or products themselves. philiptrammell.com/static/Existen…
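The compounding claim in the middle of that tweet can be checked with a two-line sketch. This is my own illustration, not from the paper; the 0.1% annual risk figure is invented purely for the example:

```python
# With a constant per-period catastrophe probability p, the chance of
# surviving t periods is (1 - p)**t, which tends to zero as t grows.
# This is the sense in which indefinite stagnation at any fixed risk
# level makes eventual catastrophe a near certainty.
def survival_probability(p: float, periods: int) -> float:
    """Probability of avoiding catastrophe across `periods` periods."""
    return (1.0 - p) ** periods

# Even a tiny 0.1% annual risk compounds badly over long horizons.
print(survival_probability(0.001, 1_000))   # ~0.37
print(survival_probability(0.001, 10_000))  # ~0.00005
```

The point is purely about compounding: lowering the per-period risk helps, but only growth or safety investment that drives it toward zero changes the long-run limit.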
Rupal Jain retweeted
Kevin A. Bryan@Afinetheorem·
Very interesting. We'll see much more of this. Theory is useful for external validity, but particular functional forms we fit to get analytic tractability involve many assumptions that go beyond theoretical restrictions (cue Chuck Manski!). Methods like this are a solution.
Rupal Jain retweeted
martin_casado@martin_casado·
I work with multiple companies where nearly all code is AI-generated now. However, productivity has probably only increased 20-30%. Why? I suspect because writing code isn't really the bottleneck. Changes are the result of business learnings, or operational learnings. For mature companies, the majority of PRs are sub-10 lines codifying these learnings. AI clearly helps here (e.g. debugging, running tests, building tools), but less so. Operations and business learnings are workload- and company-specific. Until AI can perfectly predict what the market needs, or how a system will be used, this bottleneck will exist.
Rupal Jain@rupal15081·
@sebkrier I think one super smart RSI agent will be outsmarted by a whole colony of (even semi-smart) competing agents.
Séb Krier@sebkrier·
The very long tail of tasks that require some human judgement or taste is often a bottleneck, and many aren't easily specifiable or amenable to automation. You can automate a particular person's taste, but that remains a snapshot in time whose appeal depletes as preferences change, evolve, and contradict themselves over time, and as the desire for individuality overtakes consumers.

The problem with the long tail is that it's not a static set: not only do preferences change, but historically at least, automation has generated new problem spaces rather than depleting a fixed set. People expect that at some point "it's solved" - well, the world is not a finite set of tasks and problems to solve. Almost everything people did in ancient times is automated - and yet the world today has more preferences to satiate and problems to solve than ever. The world hasn't yet shown signs of coalescing to a great unification or a fixed state! Of course it's conceivable that at sufficient capability levels the generative process exhausts itself and preferences stabilize - but I'd be surprised.
Rupal Jain@rupal15081·
@sebkrier And we can wire into them the human collaboration needed (so they don't think of humans as a bottleneck but find the right avenues where human wisdom can best be used). That would be a true upgrade in human dignity.
Rupal Jain@rupal15081·
@sebkrier A simple, not-too-technological solution to an evolving problem would be competing AI agents (instead of one god-like AI), which are bound to evolve with time. RSI is in fashion these days -- this would be a non-fancy version of SI, though it won't be as appealing.
Rupal Jain retweeted
New York Magazine@NYMag·
Tech companies are succeeding in making us think of life itself as inconvenient and something to be continuously escaping from, into digital padded rooms of predictive algorithms and single-tap commands: Reading is boring; talking is awkward; moving is tiring; leaving the house is daunting. These are all frictions that we can now eliminate, easily, and we do.

Once we’ve adopted a habit of escaping from something, whether it’s Uber-ing dinner five nights a week or using AI for replying to texts, the act of return, which is how we might describe no longer using a tool of escape, feels full of irritating friction. In these moments, we become exactly like toddlers in the five minutes after the iPad is taken away: The dullness and labor of embodied existence is unbearable.

“This is why I have resolved to commit to make 2026 a year of friction-maxxing, as an individual but more importantly as a parent,” Kathryn Jezer-Morton writes. There are some obvious places to begin your friction-maxxing journey. Stop sharing your location with your kids and your partner. Stop using ChatGPT completely. No, it does not have good ideas for meal planning. Buy a cookbook. Text your friends for advice. Go to Trader Joe’s. Invite people over to your house without cleaning it all the way up.

Friction-maxxing is not simply a matter of reducing your screen time, it’s the process of building up tolerance for “inconvenience” — and then reaching even toward enjoyment. And then, it’s modeling this tolerance, followed by enjoyment and humor, for our kids. Read Jezer-Morton’s full column: nymag.visitlink.me/kIub1B
Rupal Jain retweeted
Brian Albrecht@BrianCAlbrecht·
Will AI prove Piketty right? Will labor share → zero? Will we need capital taxation? @pawtrammell and @dwarkesh_sp wrote an important essay arguing AI vindicates Piketty's fears. I replied to some of it already, but it was heavy on the math. How do we understand their model? I went back to the basics: supply and demand for capital. What assumptions actually need to hold?

Getting labor share to zero - not 30%, approximately ZERO - requires either perfect substitutability (no task where humans have comparative advantage) or capital growing without bound forever. And unbounded growth takes more than high substitutability: returns must always exceed depreciation + impatience. Not just now. Forever. At every capital level.

Yes, fast AI progress flattens the demand curve (easier substitution), but it also raises depreciation through obsolescence. Your GPU depreciates because next year's model is better, not because it necessarily breaks. Could capital returns always be that high? Maybe. Anything is possible through Christ. But a lot needs to change, and a lot more than people realize.

For policy: while we aren't at the knife edge, I show that the same features that make capital accumulation explosive are exactly the features that make capital taxation ineffective. Easy substitution + mobile capital = their inequality story. Easy substitution + mobile capital = capital flees when taxed, workers pay. economicforces.xyz/p/ai-labor-sha…
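The substitutability condition driving the thread can be sketched with a toy CES production function. This is my own illustration, not Albrecht's actual model, and the parameter values (`a = 0.5`, `r = ±0.5`) are invented for the example:

```python
# Toy CES production: Y = (a*K**r + (1-a)*L**r)**(1/r), where the
# elasticity of substitution is 1/(1 - r). Labor's share collapses
# toward zero only when capital and labor are gross substitutes
# (r > 0, elasticity > 1) AND capital keeps growing without bound.
def labor_share(K: float, L: float, a: float = 0.5, r: float = 0.5) -> float:
    """Labor's share of output under CES production."""
    return (1 - a) * L**r / (a * K**r + (1 - a) * L**r)

# Substitutes (r = 0.5): the share falls as capital accumulates...
print(labor_share(K=1, L=1))        # 0.5
print(labor_share(K=10_000, L=1))   # ~0.01
# ...but with complements (r = -0.5), more capital RAISES labor's share.
print(labor_share(K=10_000, L=1, r=-0.5))  # ~0.99
```

This is why "approximately zero" is such a strong claim: flip the sign of the substitution parameter and capital accumulation pushes the labor share in the opposite direction.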
Séb Krier@sebkrier·
There are two distinct ways to approach policy-making: the 'Root' method (where one attempts to clarify all objectives and analyze every possible alternative from the ground up) and the 'Branch' method (where one focuses only on incremental changes to existing policies). Lindblom argues that the former - i.e. trying to be perfectly rational and comprehensive - leads to paralysis, failure, or overconfidence because the world is too complex. Instead, taking small, incremental steps (muddling through) is often optimal. Doing so bypasses the impossible task of agreeing on abstract values, accommodates limited human information, and allows for quick corrections if a small step goes wrong. I notice this a lot in AI governance! Great read, particularly for researchers getting into policy. gsdm.u-tokyo.ac.jp/file/Lindblom-…
Rupal Jain retweeted
Jay Yang@Jayyanginspires·
The most successful people I know don't actually think about discipline at all. Instead, they align their purpose with their pursuits. They find something that feels effortless to them and looks tedious to others. Because then, doing the work itself is the reward. You have to get them to stop working, not start working.
Rupal Jain@rupal15081·
@sebkrier I think most people are okay with the poor not getting richer as long as the rich aren't getting richer either. It's a worldwide rot.
Séb Krier@sebkrier·
Three nuances that constantly get lost in AGI x econ debates. First, "can do anything humans can do cheaply" does not imply perfect substitution. It could, but it doesn't *necessarily* - you need to do more work to demonstrate that. A lot of takes out there still misunderstand what comparative advantage even means and conflate it with capabilities or cost. Of course it's also reasonable to think that AGI will lead to quasi-perfect substitution, but the rationale should be outlined clearly.

Second, assuming some degree of complementarity, the labour share being tiny compared to capital isn't necessarily a bad thing either: that share can still remain highly productive and offer more welfare than the counterfactual of 'no AGI'. "The labour share goes down" is not in and of itself a bad thing. Would you rather have 50% of a pizza (current economy) or 1% of a pizza the size of a stadium (AGI economy)? The relevant comparison isn't "labour share" in aggregate but something like "is the median person materially better off?"

Third, discussions of inequality come with implicit political views about inequality. That's totally fair, and inequality is a valid concern, but those priors should be made very explicit. Imo political problems are plausible, but assuming extremely high GDP per capita (big if!) I think it'll matter less if everyone is much better off, and status competitions might actually be more salient in a super wealthy world. In status competitions around skill or authenticity, money either doesn't help or actively delegitimizes the win.
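The pizza comparison in the second point is easy to make concrete. The numbers below are invented purely for illustration, not taken from any model:

```python
# A smaller *share* of a much larger economy can still mean more
# absolute income: 1% of a stadium-sized pizza beats 50% of a small one.
current_gdp, current_labor_share = 100, 0.50
agi_gdp, agi_labor_share = 10_000, 0.01

labor_income_now = current_gdp * current_labor_share   # 50.0
labor_income_agi = agi_gdp * agi_labor_share           # 100.0
print(labor_income_agi > labor_income_now)  # True: share fell 50x, income doubled
```

Which is exactly why the tweet argues the right metric is absolute welfare of the median person, not the aggregate labour share.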
Rupal Jain@rupal15081·
@sebkrier Govt. can shittify things that used to work and run enterprises into the ground. Good intentions aren't good enough. They should just stick to law and order, audits, enforcement, NatSec, and gathering the needs of their people, and make it easy for better people to solve the gaps.
Rupal Jain@rupal15081·
@sebkrier States manage, audit, enforce contracts, etc., but the real material solutions come from the very companies / ecosystems that safety people villainize. Governments are shit at innovation (DARPA-like entities are outliers, and even they rely on private-industry collabs).
Rupal Jain@rupal15081·
@sebkrier This is by far the best peek into your mind.
Séb Krier@sebkrier·
Some technologists are gradually rediscovering political science through first principles, and I think they should read more Tocqueville. There are a lot of papers calling for alignment of language models with collective preferences - e.g. a country's. This is often justified as a way of creating more 'democratic' AI systems, a claim that warrants a bit more examination. I think this is misleading: what it does is make the model reflect the views and values of the average person (or some majority). So if the average person thinks the death penalty is great, that’s what the model will prefer as a response. This seems bad to me, and I don’t care about the average view on any random topic. To the extent that a company voluntarily wants to create AverageJoeGPT, that’s fine, but this should not be something imposed by a state or standards or whatever, or expected as some sort of ‘best practice’.

I would much rather have a variety of models, including a model aligned with my views and values that helps me enhance or amplify these. If I think the death penalty sucks, I don’t want a model telling me why it doesn’t unless I explicitly ask for that. Also, inb4 some cope about echo chambers: I’m not a fan of the strong paternalistic undercurrent in some ethics circles. I think there’s far more value in a multiplicity of models with different values competing, and while group-level alignment is appropriate in some circumstances (e.g. medical), I don’t think ‘the group’ is generally the right unit of analysis for model alignment. Of course one’s actions can affect others, and the same might apply to a model - so it’s fair to have de minimis rules as a baseline, and these can be debated. Mill’s Harm Principle is the (basic) example I always go back to and that I like. Part of the appeal of liberal democracy is that it allows conflicting and competing views to coexist, rather than just deferring to majority rule; the same ‘marketplace of ideas’ approach should apply to models.

The other issue with group-based alignment, of course, is that you need to choose the relevant group; I think this becomes arbitrary fairly quickly and ends up looking pretty illiberal in practice. Whenever you see these collective alignment exercises, they’re usually restricted to high-income Californian WASPs; this isn't just out of convenience. Using truly cross-cultural frameworks like the World Values Survey would illuminate uncomfortable divergences in values and beliefs and challenge the often monolithic assumptions underlying some alignment efforts. On the other hand, allowing alignment at the individual level supports a bottom-up approach to value formation, empowering people to enhance and refine their personal views even in challenging socio-political environments. Some people don’t like this because they have an inherent mistrust of humans and think humans need to be corrected through paternalism; I think this should be very strongly opposed. Instead we should be pushing hard to preserve the pluralism that underpins truly liberal societies.
Rupal Jain retweeted
Séb Krier@sebkrier·
I wrote this note earlier this year and it's so nice to see Richard Sutton make these points so eloquently. Somewhat comforting to know that my intuitions aren't completely off.
Dwarkesh Patel@dwarkesh_sp

.@RichardSSutton, father of reinforcement learning, doesn’t think LLMs are bitter-lesson-pilled. My steel man of Richard’s position: we need some new architecture to enable continual (on-the-job) learning. And if we have continual learning, we don't need a special training phase - the agent just learns on-the-fly - like all humans, and indeed, like all animals. This new paradigm will render our current approach with LLMs obsolete. I did my best to represent the view that LLMs will function as the foundation on which this experiential learning can happen. Some sparks flew.

0:00:00 – Are LLMs a dead-end?
0:13:51 – Do humans do imitation learning?
0:23:57 – The Era of Experience
0:34:25 – Current architectures generalize poorly out of distribution
0:42:17 – Surprises in the AI field
0:47:28 – Will The Bitter Lesson still apply after AGI?
0:54:35 – Succession to AI

Nicholas Decker@captgouda24·
I write about papers, and collect them here. There are now well over 150 papers cataloged for you to learn about. I’m starting a new thread to keep it manageable. Remember, if you like my work, you’d love my blog. I publish 4-5 times a week. nicholasdecker.substack.com
Nicholas Decker@captgouda24

There are now so many papers in the thread — 73 of them! — that it’s become entirely unmanageable. I am starting a new thread here. Please consider sharing if you find economics interesting.
