Rishi Yadav

1.8K posts


@rishiyadav

Founder & CEO @Roost || #ChatGPT || #LLMs || #AI || Vipassana || Founder @InfoObjects || Published Author : 2 books on Apache Spark @packtpub || #IITDelhi

Saratoga, CA · Joined June 2009
783 Following · 805 Followers
Pinned Tweet
Rishi Yadav@rishiyadav·
As many of you know, I have long been an advocate for Apache Spark and Databricks. Having conducted extensive work and authored two books on the subject in my previous life, it's exciting to see our paths seem to converge once again at the intersection of Data and AI. I believe that Generative AI and LLMs will transform the current data lake into what I like to call "Lake Vectoria," a lake of embeddings. This blog post shares my initial thoughts on Databricks' latest open LLM offering, DBRX. As I explore this model further, I will continue to share more insights. Stay tuned! #llms #dbrx #databricks @matei_zaharia @databricks @Roost #166 Flexing Bricks as Open Weights: The DBRX by Databricks linkedin.com/pulse/166-flex…
4 replies · 32 reposts · 323 likes · 19.3K views
Rishi Yadav retweeted
Sudhir Jangir@sudhirjangir·
So, selling tools to bankers instead of doing the bankers' work is a mistake AI founders may make? In BFSI, the winners will be AI that:
- processes claims
- reconciles ledgers
- reviews compliance
- underwrites risk
The real change will be regulatory judgment + proprietary data?
1 reply · 1 repost · 7 likes · 3.6K views
Rishi Yadav@rishiyadav·
9/ Until we teach AI to forget on purpose, every deployed model remains a ticking time bomb of memorized secrets waiting to be extracted. The future belongs to models that know when to access data, not absorb it. linkedin.com/pulse/217-dang…
0 replies · 0 reposts · 0 likes · 41 views
Rishi Yadav@rishiyadav·
8/ The solution isn't preventing memorization entirely. It's teaching AI to forget on purpose. The winners will master controlled forgetting:
• Continuous evaluation to find risks
• Selective unlearning to remove them
• Retrieval from secure sources instead of memorization
1 reply · 0 reposts · 0 likes · 53 views
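The "retrieval from secure sources instead of memorization" point above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's API; the names (`SecureStore`, `answer`) are hypothetical. The key property: a fact that is fetched at query time can be revoked by deleting a row, whereas a fact baked into model weights requires retraining or unlearning.

```python
# Minimal sketch of "retrieve, don't memorize": the model never stores
# secrets in its weights; it fetches them from a governed store at
# inference time, so forgetting is a delete, not a retraining run.
# SecureStore and answer are hypothetical names for illustration.

class SecureStore:
    """Stand-in for a governed document store with per-document ACLs."""

    def __init__(self):
        self._docs = {}  # doc_id -> (acl set, text)

    def put(self, doc_id, acl, text):
        self._docs[doc_id] = (set(acl), text)

    def retrieve(self, doc_id, user):
        acl, text = self._docs[doc_id]
        if user not in acl:  # access is enforced at query time
            raise PermissionError(f"{user} may not read {doc_id}")
        return text

    def forget(self, doc_id):
        # "Unlearning" is trivial: the fact was never in any weights.
        self._docs.pop(doc_id, None)


def answer(store, doc_id, user):
    # The "model" composes its response from retrieved context only.
    return f"Based on {doc_id}: {store.retrieve(doc_id, user)}"


store = SecureStore()
store.put("q3-forecast", acl=["cfo"], text="revenue up 12%")
print(answer(store, "q3-forecast", "cfo"))  # allowed for the CFO
store.forget("q3-forecast")                 # instant, auditable removal
```

Contrast this with a memorizing model, where the same secret could surface for any user with a clever enough prompt and could not be removed without touching the weights.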
Rishi Yadav@rishiyadav·
The most dangerous thing about enterprise AI isn't what it doesn't know. It's what it remembers too well. Every language model faces a fundamental tradeoff: memorize in its weights or retrieve from external sources. For enterprises, getting this wrong can be catastrophic 🧵
1 reply · 0 reposts · 0 likes · 83 views
Rishi Yadav@rishiyadav·
@pitdesi Does it even work? Once the term gets long enough, the math stops caring and gravity takes over. Stretch it to 300 or 3,000 years, and the payment barely moves. At that point, Euler wins.
2 replies · 0 reposts · 4 likes · 3K views
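The math behind "the payment barely moves" is the standard amortization formula, M = P·r / (1 − (1 + r)^(−n)): as the number of payments n grows, (1 + r)^(−n) decays exponentially and M converges to the interest-only floor P·r. A small Python sketch (the $500k loan at 6% is a made-up illustrative number) shows the convergence:

```python
# Why longer terms stop helping: the amortized monthly payment
#   M = P * r / (1 - (1 + r)**(-n))
# converges to the interest-only payment P * r as n grows, because
# (1 + r)**(-n) decays exponentially. Loan size and rate are illustrative.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # total number of payments
    return principal * r / (1 - (1 + r) ** (-n))

P, rate = 500_000, 0.06            # hypothetical $500k loan at 6%
for years in (30, 50, 300, 3000):
    print(f"{years:>5}y: ${monthly_payment(P, rate, years):,.2f}")
# The floor is the interest-only payment: P * rate / 12 = $2,500.00
```

Stretching the term from 30 to 50 years saves a few hundred dollars a month, but from 50 to 300 or 3,000 years the payment is already pinned within pennies of the $2,500 interest-only floor.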
Sheel Mohnot@pitdesi·
This is a kick-the-can-down-the-road solution, and we've seen it before in Japan, and it was CRAZY.

In the 80s real estate prices in Tokyo were high, so they started offering 50-year mortgages to make things "more affordable..." If you stretch the loan, the monthly payment looks easier. But it's just extra leverage, and we know what happens with leverage...

Prices went insane. At the peak, a single 0.44 square mile parcel in central Tokyo was valued at more than all real estate in the entire state of California. The total land value of Japan exceeded that of the entire United States many-fold.

Then the bubble burst, and it was BRUTAL. Prices in Tokyo fell 60 to 80%. Tokyo real estate today is still >50 percent cheaper than it was in 1989. Many buyers from that era never recovered their equity. Banks spent years holding bad loans, and the broader economy stagnated.

Longer mortgages did not make housing affordable. They made prices higher, slowed down equity accumulation, and left households more exposed when prices fell. The crash was deeper and the recovery was longer because everyone had borrowed too much.

For most people in the US, the house is the retirement plan. At retirement, ~75% of the median American's net worth is their primary residence. The 30-year mortgage aligns with people's lives: it pays down fast enough that they own their home by the time they stop working.

A 50-year mortgage changes that. Equity builds slowly. Leverage stays high. People remain exposed for longer.

I like that we are thinking outside the box, but this doesn't solve affordability; it hollows out the main wealth-building mechanism for the majority of the population and will have other bad implications IMO.

The only solution is to build more! We need supply-side solutions, not induced demand, c'mon.
Quoting @TrumpDailyPosts (Donald J. Trump Posts From Truth Social): Donald J. Trump Truth Social Post, 02:10 PM EST 11/08/25
102 replies · 244 reposts · 2.6K likes · 681.1K views
Rishi Yadav@rishiyadav·
@pitdesi Yes, LLMs just can’t help themselves, especially ChatGPT. No matter how many times you say “no em dashes” in the instructions, you still have to remind it to remove them before the final output.
0 replies · 0 reposts · 6 likes · 631 views
Rishi Yadav@rishiyadav·
@pitdesi I placed my order right away, too. It was a no-brainer. I can't wait to see it in action.
0 replies · 0 reposts · 1 like · 90 views
Rishi Yadav@rishiyadav·
OpenAI Dev Day has cemented the platform shift that has been unfolding over the last few years. The SDLC as we knew it is dead. A new one is emerging, built around chat interfaces, drag-and-drop agent builders, and code that can be edited as easily as a script. This is the new foundation of how software will be built. #AgenticAI #OpenAI #ChatGPT #216 The Beginning of the End for IDEs linkedin.com/pulse/216-begi… via @LinkedIn
2 replies · 0 reposts · 1 like · 84 views
Rishi Yadav@rishiyadav·
@sama Good start. Currently it mostly covers the things I would like to explore, but hopefully soon it will cover what is (or should be) top of my mind today.
0 replies · 0 reposts · 1 like · 145 views
Sam Altman@sama·
Today we are launching my favorite feature of ChatGPT so far, called Pulse. It is initially available to Pro subscribers.

Pulse works for you overnight, and keeps thinking about your interests, your connected data, your recent chats, and more. Every morning, you get a custom-generated set of stuff you might be interested in.

It performs super well if you tell ChatGPT more about what's important to you. In regular chat, you could mention "I'd like to go visit Bora Bora someday" or "My kid is 6 months old and I'm interested in developmental milestones" and in the future you might get useful updates.

Think of treating ChatGPT like a super-competent personal assistant: sometimes you ask for things you need in the moment, but if you share general preferences, it will do a good job for you proactively.

This also points to what I believe is the future of ChatGPT: a shift from being all reactive to being significantly proactive, and extremely personalized.

This is an early look, and right now only available to Pro subscribers. We will work hard to improve the quality over time and to find a way to bring it to Plus subscribers too.

Huge congrats to @ChristinaHartW, @_samirism, and the team for building this.
3.2K replies · 2.9K reposts · 41.7K likes · 7.8M views
Rishi Yadav@rishiyadav·
Picture 4 components: Information, Intelligence, Agency, Action.
They COULD connect 12 different ways. Only 4 connections actually work. Why? Constraints:
• Information never pushes (passive)
• Action never pulls (execute only)
• Intelligence can’t decide (no executive function)
[image attached]
0 replies · 0 reposts · 0 likes · 50 views
Rishi Yadav@rishiyadav·
92% of AI agents aren’t agents at all. They’re missing the one edge that matters: Agency→Action Without it, you have infinite analysis and zero outcomes. The Cogentic AI Graph explains what everyone’s missing: 🧵
1 reply · 0 reposts · 0 likes · 73 views
Rishi Yadav@rishiyadav·
@sama I perhaps qualify as a super power-user. The most annoying issue is being asked for 2-4 confirmations before it does something.
0 replies · 0 reposts · 0 likes · 13 views
Rishi Yadav@rishiyadav·
I'm getting seriously frustrated with #Claude overloading all the time on my $100 Max plan. Maybe it's time to bite the bullet and upgrade to the $200 one for that 20x capacity boost. The thing holding me back is that I use #ChatGPT way more every single day, and it never gives me grief about busy servers. Even #Gemini has stopped complaining lately, though it still randomly forgets the context mid-conversation.
0 replies · 0 reposts · 2 likes · 67 views