Radhakrishnan (Rad) Venkataramani
@radkris

287 posts

Member of Technical Staff @xAI. Reasoning / Coding Agents. Prev: @PyTorch @MetaAI @Google @Snowflake.

Mountain View, CA · Joined January 2010
787 Following · 737 Followers

Radhakrishnan (Rad) Venkataramani retweeted
Jordan Schochet @jordanschochet
@business Thiel is usually contrarian, weird he’s going with the herd this time
Yuchen Jin @Yuchenj_UW
The wild thing about the Bay Area right now: every Chinese restaurant I go to, the next table is debating:
> “Claude Code vs. Codex”
> “OpenAI vs Anthropic vs Google vs xAI, who will win”
> “Is OpenClaw hype or real?”
I just want to enjoy my meal… but I really can’t.
Radhakrishnan (Rad) Venkataramani retweeted
Prasanna S @myprasanna
Are you a funded startup under 30 employees? Do you have a backlog that you are not able to ship? My team and I will work for you and try to ship your backlog over the next week using our latest sr software engg tool. It’s completely free for you — you just need to give us time, feedback, and all the access you would give a new engg hire. Reply to this thread and I’ll schedule a time with you if it’s a good fit. I was ranked the No. 1 engineer in India in programming contests and was CTO of Rippling. But of course, the models have become better than me these days.
Naveen Rao @NaveenGRao
Please reshare this. I find it extremely distasteful when large companies do shit like this. Instead of being creative, they steal from others who are.
Naveen Rao@NaveenGRao

So, I’m going to say something that @nvidia won’t like. You stole our term. You are not doing “extreme” co-design by any real standard. You are doing regular co-design. It’s the standard process of optimizing your product for the market in front of you. We will use a different term since you have polluted OUR term. We started using this 6 months ago, and even said it in a presentation to Nvidia. You never heard this term before we said it to you and now you have usurped it. I guess this is the standard story in Silicon Valley. C’est la vie…

Radhakrishnan (Rad) Venkataramani retweeted
TFTC @TFTC21
Jensen Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.'"
Radhakrishnan (Rad) Venkataramani retweeted
Riċ @ricblurs
pattern recognition is the highest form of intelligence
sarah guo @saranormous
excited to have sensei @karpathy back on @nopriorspod tomorrow. what wisdom shall we glean? (yes, yes, wth is going on with coding agents)
Radhakrishnan (Rad) Venkataramani
Fascinating. Coding models are going to have step-function improvements once:
1. Companies ban (or allow the model to scrape) private channels so all context is public in Slack, docs, wikis, etc., and models can categorize and materialize it in a topic.md.
2. We merge everyone’s Cursor conversations related to a topic, materialize them in git, and let the conversations flow on top of that.
Deedy@deedydas

There is a new fastest product in technology history to reach $1B in ARR.

Yuhuai (Tony) Wu @Yuhu_ai_
I resigned from xAI today. This company - and the family we became - will stay with me forever. I will deeply miss the people, the warrooms, and all those battles we have fought together. It's time for my next chapter. It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what's possible. Thank you to the entire xAI family. Onward. 🚀 And to Elon @elonmusk - thank you for believing in the mission and for the ride of a lifetime.
TBPN @tbpn
Anthropic's @_sholtodouglas provides an update on his "Age of Scaling" game, where instead of playing through an Age-of-Empires-like tech tree, the goal is to get to AI superintelligence and a Kardashev-scale civilization. "Dylan's been getting us all into Age of Empires with these five-a-side matches on the downstairs table. And I wanted to see if I could make an 'Age of Scaling' where you build solar panels, and data centers, and you train AI models, and drones, and so forth. Instead of farms and mining and all this." "I only got 70 to 80% done. All the mechanics work, but it turns out it's really hard to make a game that's fun." "I wanted to capture some of that dynamic of late-night discussions of San Francisco in a game."
modest proposal @modestproposal1
eBay has a chance to do the funniest thing
modest proposal tweet media
Georges Harik @gharik
Sometimes you feel compelled to do things. At the University of Michigan, I was drawn to artificial intelligence. What could be more appealing than studying what thinking was, and how we could make something that really thought? So I did my PhD in Computer Science focusing on AI, when everyone else told me the field was dead.

Soon after, I met the amazing people at Google, who I immediately knew would be changing the world, and felt compelled to join them in making the world's information accessible to everyone. When language models started talking, I felt compelled to figure out how they could think beforehand, and was drawn to work with Eric and Noah on Quiet-STaR.

Now, I have that familiar feeling again, of a calling, to work on a humanistic AI, one that understands and values people, alongside amazing friends @ericzelikman, @YuchenHe07, @noahdgoodman, @AndiPenguin and many other amazing humans! I'm excited to announce our company humans&, which will work on this humanistic AI.

Why? Not because I miss the sleepless nights and pressure of a startup :) The world is changing rapidly, and this is a challenging time for people: no one can predict where the future goes, and almost everyone is somewhat anxious as a result. So I think it's worthwhile to think about why that is, and what might be done. I think training an AI to understand us and value us is part of the answer.

I have finally found something more appealing than studying what thinking is: to make the thinking of AIs great for people. I hope you think this is a worthwhile mission, and I hope you will support us, because no one changes the world alone, and we'll need your help to do it.
humans&@humansand

Today we introduce humans&, a human-centric frontier AI lab. We believe AI can be reimagined, centering around people and their relationships with each other. At its best, AI should serve as a deeper connective tissue that strengthens organizations and communities.

Yue Wu @FrankYueWu1
Wrapping up a chapter at xAI.

To my colleagues: thank you to everyone who has helped me along the way. I truly enjoyed the deep technical discussions and appreciate all the opportunities I was given. Compared to a year ago, I’ve grown beyond anything I expected. I’m sure our paths will cross again.

To potential candidates: xAI has some of the most advanced RL systems for language models today. Scaling them from first principles is extremely challenging and deeply rewarding. If you’re early in your career and get the chance, don’t hesitate. xAI is a place where you can take on huge scope, and hard work is genuinely recognized.

It has been a blast, my friends.
Radhakrishnan (Rad) Venkataramani retweeted
Hongyuan Mei @hongyuan_mei
To understand the universe, we’ve doubled down on pushing Grok’s mathematical reasoning. It’s deeply rewarding to see that progress making a real impact—recognized by the math community. @ziangchen_math @haozhu_wang @ssydasheng @keirp1 @Yuhu_ai_
Paata Ivanisvili@PI010101

Disclaimer: I had been given early access to an internal beta version of Grok 4.20.

It found a new Bellman function for one of the problems I’d been working on with my student N. Alpay. The problem reduces to identifying the pointwise maximal function U(p,q) under two constraints and understanding the behavior of U(p,0). In our paper arxiv.org/pdf/2502.16045 we proved U(p,0) \geq I(p), where I(p) is the Gaussian isoperimetric profile, I(p) ~ p\sqrt{log(1/p)} as p → 0.

After ~5 minutes, Grok 4.20 produced an explicit formula U(p,q) = E\sqrt{q^2+\tau}, where \tau is the exit time of Brownian motion from (0,1) starting at p. This yields U(p,0) = E\sqrt{\tau} ~ p log(1/p) as p → 0, a square-root improvement in the logarithmic factor.

Any significance of this result? It will not tell you how to change the world tomorrow. Rather, it gives a small step toward understanding what is going on with averages of stochastic analogs of derivatives (quadratic variation) of Boolean functions: how small can they be? More precisely, this gives a sharp lower bound on the L^1 norm of the dyadic square function applied to indicator functions 1_A of sets A \subset [0,1].

In my previous tweet about the Takagi function, we saw that the sharp lower bound on ||S_1(1_A)||_1 miraculously coincides with the Takagi function of |A|, which (surprisingly to me) is related to the Riemann hypothesis. Here, we obtain a sharp lower bound on ||S_2(1_A)||_1 given by E\sqrt{\tau}, where the Brownian motion starts at |A|. This function belongs to the family of isoperimetric-type profiles, but unlike the fractal Takagi function, it is smooth and does not coincide with the Gaussian isoperimetric profile.

Finally, in harmonic analysis it is known that the square function is not bounded in L^1. The question here was more about curiosity: how exactly does it blow up when tested on Boolean functions 1_A? Previously, the best known lower bound was |A|(1-|A|) (Burkholder–Davis–Gundy). In our paper, we obtained |A|(1-|A|)\sqrt{log(1/(|A|(1-|A|)))}. Grok’s new Bellman function gives |A|(1-|A|) log(1/(|A|(1-|A|))), and this bound is actually sharp.
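The bounds discussed in the thread can be restated compactly in LaTeX (this is only a transcription of the tweet's own notation, not an independent verification of the claims):

```latex
% Previous lower bound, with I the Gaussian isoperimetric profile:
U(p,0) \;\ge\; I(p), \qquad I(p) \sim p\sqrt{\log(1/p)} \quad \text{as } p \to 0.

% Grok's candidate Bellman function, where \tau is the exit time of
% Brownian motion from (0,1) started at p:
U(p,q) \;=\; \mathbb{E}\sqrt{q^{2} + \tau}, \qquad
U(p,0) \;=\; \mathbb{E}\sqrt{\tau} \;\sim\; p\,\log(1/p) \quad \text{as } p \to 0.

% Resulting sharp lower bound on the dyadic square function of 1_A:
\bigl\| S_2(\mathbf{1}_A) \bigr\|_{1}
  \;\ge\; |A|\,(1-|A|)\,\log\!\frac{1}{|A|\,(1-|A|)}.
```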

Radhakrishnan (Rad) Venkataramani retweeted
Bojan Tunguz @tunguz
gm happy new year we are now closer to 2100 than we are to 1970 let that sink in
Patrick Collison @patrickc
Getting old is everyone sending all the "Happy New Year" texts the next morning.