JP

139 posts

JP
@Parvashah_

i learn, i write, i build.

San Francisco Bay Area · Joined November 2017
220 Following · 47 Followers

Pinned Tweet
JP@Parvashah_·
@2am_vibxz @MjsWurld Human beings are 200,000 years old, but we only have recorded history for about 10,000, while the earth is 4.5 billion years old. Aren't we just a speck of dust in time? But all that doesn't matter, because all we have right now is the present, and all we can do is make the best of it.
JP retweeted
Tech with Mak@techNmak·
This is Jevons Paradox applied to software. When something gets cheaper, we don't use less. We use more. Coal got efficient → more coal consumed. Compute got cheap → more compute consumed. Code gets cheap → more code consumed. The demand for people who can produce, manage, and direct that code doesn't shrink. It explodes.
JP retweeted
Dev Shah@0xDevShah·
settling beef between my bros
Dev Shah tweet media
JP@Parvashah_·
@0xDevShah This is sick @0xDevShah, I want to see a steel ball dropped in a bucket of gallium. Is that possible?
Dev Shah@0xDevShah·
the models are phenomenal. saas is easy to build. so, i am going back to the atoms, relearning physics from first principles. building mujoco plugins from scratch. here's buoyancy. will build more. if you want me to build a plugin, drop it below, and i will give it a shot.
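The buoyancy plugin above boils down to Archimedes' principle. A minimal sketch of the force such a plugin would apply each simulation step; the function names here are illustrative, not MuJoCo's plugin API:

```python
# Archimedes' principle: the physics behind a buoyancy plugin.
# Names are illustrative; this is not MuJoCo's actual plugin API.

G = 9.81  # gravitational acceleration, m/s^2

def buoyant_force(fluid_density: float, submerged_volume: float) -> float:
    """Upward force (N) on a body displacing submerged_volume m^3 of fluid."""
    return fluid_density * submerged_volume * G

def net_vertical_force(mass: float, fluid_density: float,
                       submerged_volume: float) -> float:
    """Buoyancy minus weight; positive means the body rises, negative sinks."""
    return buoyant_force(fluid_density, submerged_volume) - mass * G

# A fully submerged 1-litre steel ball (~7.85 kg) in water (1000 kg/m^3):
print(net_vertical_force(7.85, 1000.0, 0.001))  # negative, so it sinks
```

In a real plugin this net force would be written into the body's applied-force buffer each step, with `submerged_volume` recomputed from how deep the body sits below the fluid surface.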
JP@Parvashah_·
@0xDevShah L, can't wait for google to invest in them as well, once anthropic blocks them from their models
Dev Shah@0xDevShah·
dario, anthropic, and all their models seem to harbor some opaque superiority complex that i cannot fully understand. they despise their model enablers as fiercely as they revere their models. to me, dario appears to be an ai doomer toward anything built outside anthropic.
BOOTOSHI 👑@KingBootoshi

@sama how did anthropic fumble the biggest open sourced ai project of all time 😭😭😭😭😭😭

JP@Parvashah_·
@0xDevShah @tunguz Even sora could probably make something better than the trash we got in s8
Dev Shah@0xDevShah·
@tunguz @Parvashah_ and i were just talking about this today, funny how ai scifi "what ifs" are gonna be trending soon
Bojan Tunguz@tunguz·
I can’t wait to use Seedance to fix the last season of GOT.
JP@Parvashah_·
Agreed @0xDevShah, we literally shipped this exact pattern: dynamic tool registration, prompt switching, context-driven schema changes, the whole thing. The paper is a good formalization, but builders have been naturally converging on something very similar for some time now.
Dev Shah@0xDevShah

sorry, is it just me who's not getting the hype around this?

the rlm paper is a great formalization of what many production teams have built over the past year. devin, hippocratic, manus, claude code, codex cli, they all independently converge on this exact pattern.

> prompts are mutable env variables
> recursive self delegation
> persistent state across tool calls
> chunking long contexts
> farming out subtasks to sub agents

at my previous company @Parvashah_ and i built a similar agentic architecture for ads management on the meta console. the agent could dynamically generate functions and register them as callable tools at runtime. it had built-in tooling for prompt switching. as the execution context moved through campaign, then adset, then ad creation, the system would swap parameter schemas and validation rules. the harness would also reconfigure itself based on where the agent was in the workflow.

i'm appreciative of @lateinteraction's work. he did great work with dspy too. practitioners were doing ad hoc prompt optimization, and he gave it a formal framework so thousands of teams could adopt it. rlms will do the same. now that the pattern has a name, ablations, and a training recipe, way more teams will build on it. that's genuinely valuable. and labs like anthropic are betting on the idea that models reasoning through code and recursive self-delegation is the path to general capability.

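The dynamic-registration pattern described above fits in a few lines. A toy sketch under made-up names, not the production system JP and Dev describe:

```python
# Toy sketch of runtime tool registration with context-driven schema
# swapping. All names here are illustrative, not a real system's API.

from typing import Any, Callable

class ToolRegistry:
    def __init__(self) -> None:
        self.tools: dict[str, Callable[..., Any]] = {}
        self.schemas: dict[str, dict] = {}

    def register(self, name: str, fn: Callable[..., Any], schema: dict) -> None:
        """An agent-generated function becomes a callable tool at runtime."""
        self.tools[name] = fn
        self.schemas[name] = schema

    def swap_schema(self, name: str, schema: dict) -> None:
        """Swap a tool's parameter schema as the execution context moves,
        e.g. from campaign to adset to ad creation."""
        self.schemas[name] = schema

    def call(self, name: str, **kwargs: Any) -> Any:
        return self.tools[name](**kwargs)

registry = ToolRegistry()
registry.register("set_budget", lambda amount: {"budget": amount},
                  {"amount": "number", "scope": "campaign"})
# Context moves from campaign to adset: same tool, new validation schema.
registry.swap_schema("set_budget", {"amount": "number", "scope": "adset"})
print(registry.call("set_budget", amount=100))  # {'budget': 100}
```

The harness reconfiguring itself "based on where the agent was in the workflow" is just this schema swap driven by the current workflow stage.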
JP@Parvashah_·
this quote stays in my mind rent free
JP tweet media
JP@Parvashah_·
@0xDevShah On point. Codex is just much more accurate.
JP retweeted
Dev Shah@0xDevShah·
sorry, is it just me who's not getting the hype around this?

the rlm paper is a great formalization of what many production teams have built over the past year. devin, hippocratic, manus, claude code, codex cli, they all independently converge on this exact pattern.

> prompts are mutable env variables
> recursive self delegation
> persistent state across tool calls
> chunking long contexts
> farming out subtasks to sub agents

at my previous company @Parvashah_ and i built a similar agentic architecture for ads management on the meta console. the agent could dynamically generate functions and register them as callable tools at runtime. it had built-in tooling for prompt switching. as the execution context moved through campaign, then adset, then ad creation, the system would swap parameter schemas and validation rules. the harness would also reconfigure itself based on where the agent was in the workflow.

i'm appreciative of @lateinteraction's work. he did great work with dspy too. practitioners were doing ad hoc prompt optimization, and he gave it a formal framework so thousands of teams could adopt it. rlms will do the same. now that the pattern has a name, ablations, and a training recipe, way more teams will build on it. that's genuinely valuable. and labs like anthropic are betting on the idea that models reasoning through code and recursive self-delegation is the path to general capability.
alex zhang@a1zhang

Much like the switch in 2025 from language models to reasoning models, we think 2026 will be all about the switch to Recursive Language Models (RLMs). It turns out that models can be far more powerful if you allow them to treat *their own prompts* as an object in an external environment, which they understand and manipulate by writing code that invokes LLMs! Our full paper on RLMs is now available—with much more expansive experiments compared to our initial blogpost from October 2025! arxiv.org/pdf/2512.24601

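The core loop the quoted tweet describes, a model treating its own prompt as data and recursively delegating sub-calls, can be caricatured in a few lines. The `llm` stub and chunk sizes below are placeholders of my own, not anything from the paper:

```python
# Caricature of a Recursive Language Model: the prompt is just an object
# the model can chunk and recurse over. `llm` is a stub, not a real model.

def llm(prompt: str) -> str:
    """Stub model call: 'summarizes' by keeping the first 40 characters."""
    return prompt[:40]

def rlm(prompt: str, max_len: int = 100) -> str:
    # Base case: the prompt fits in the (toy) context window.
    if len(prompt) <= max_len:
        return llm(prompt)
    # Recursive case: chunk the oversized prompt, delegate each chunk to a
    # sub-call, then recurse on the concatenated sub-results.
    chunks = [prompt[i:i + max_len] for i in range(0, len(prompt), max_len)]
    return rlm(" ".join(rlm(c, max_len) for c in chunks), max_len)

print(len(rlm("x" * 10_000)))  # final answer fits a single model call
```

Each recursion level shrinks the working text until one call suffices, which is the "prompt as an object in an external environment" idea in miniature.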
JP@Parvashah_·
@lexfridman True, but I also feel like it's become difficult to go through the pain of learning.
Lex Fridman@lexfridman·
Programming is now 10x more fun with AI.
Neye 💞@meikkp·
Even if you have 0 followers Say hi 👋🏽 let's follow you instantly 🤍
Neye 💞 tweet media
JP@Parvashah_·
@feelzyou Everyone's story is uniquely their own. The absence of precedent isn't a bad thing; it's proof of your originality.
JP@Parvashah_·
@gdb That's great and all, but could we please add a revert button to stop execution and roll back changes? Pretty please? 🙏
JP@Parvashah_·
@sama That's great and all, but could we please add a revert button to stop execution and roll back changes? Pretty please? 🙏
Sam Altman@sama·
More than 1 million people downloaded the Codex App in the first week. 60+% growth in overall Codex users last week! We'll keep Codex available to Free/Go users after this promotion; we may have to reduce limits there, but we want everyone to be able to try Codex and start building.
JP@Parvashah_·
Heads up: Agent-to-agent payment protocols are dropping soon. Imagine agent societies where your personal AI buddies network, swap info, and handle deals for you. The future's wild!
JP retweeted
Quoc Le@quocleix·
Excited to share our latest work: "Semi-Autonomous Mathematics Discovery with Gemini." We used Gemini to systematically evaluate 700 "open" conjectures in the Erdős Problems database. The result? We addressed 13 problems marked as open—finding 5 novel autonomous solutions and identifying 8 existing solutions missed by previous literature. Read the full case study here: arxiv.org/abs/2601.22401
Quoc Le tweet media
JP@Parvashah_·
@eptwts Spot on. Most folks don't chase the same dreams as you, so it's fine to drift apart and forge your own path. True friends will do the same: level up or level out.
EP@eptwts·
here's my advice to any young guy on the comeup...

be very careful with who you let influence your thoughts

we've all heard the "you're the average of the people around you" saying - while cliche, it holds a lot of truth

ultimately it all boils down to what you're exposed to & how that shapes your beliefs

if you're someone that's never exposed to high-level thinking, all of your friends are working paycheck to paycheck & have zero ambition, you're predisposed to go down that same path

if your friend group was full of guys making $100k/month & getting what they want out of life, you'd form the belief that getting to $25k/month is easy

it's all about what you're exposed to

this isn't me telling you to cut off your friends if they aren't ambitious, that would be stupid... but i do advise you to have 2 circles, one with your OG friends that you can vibe with & another w/ business partners, people who genuinely bring the best out of you & motivate you to do better

you may have what it takes to succeed, but the likelihood that your whole friend group does is very low - don't let them hold you back
Nalin@nalinrajput23·
be honest, which AI tool is best for coding?
Nalin tweet media (4 images)
JP retweeted
alphaXiv@askalphaxiv·
"First Proof" A team of researchers proposes a way to test if AI can actually do NEW math by releasing 10 freshly-solved and never public research questions, with answers temporarily encrypted. This let's the community able to measure the genuine performance of LLMs on proof-generation, before their solutions drop. Questions include: - stochastic analysis - p-adic representation theory - algebraic combinatorics - spectral graph theory - equivariant algebraic topology - lattices in Lie groups/topology - symplectic geometry - tensor algebraic relations - numerical linear algebra
alphaXiv tweet media