Jinay
@jinaycodes
813 posts

@chaidiscovery scholar @neo, prev HFT, @MaticRobots, @schoolhouse_edu

Joined December 2017
344 Following · 1.8K Followers
Jinay @jinaycodes
Seeing one of the world's largest pharma companies move so quickly has been eye-opening for me. Thrilled to have Lilly scientists using Chai's models. Their excitement is both contagious and a huge motivator.
Chai Discovery @chaidiscovery

We’re thrilled to announce our collaboration with @EliLillyandCo. We’ll be deploying Chai’s AI models to power biologics discovery across a broad set of drug programs at Lilly. This is a major step forward in pharma’s adoption of AI-native R&D.

Jinay @jinaycodes
@_kvnshu super cool stuff!
Jinay @jinaycodes
I can't wait for the day these breakthroughs transform the lives of real people. In hindsight, $130M will have been a small price to pay. We're hiring!
Chai Discovery @chaidiscovery

We raised $130M in Series B funding at a $1.3B valuation to build the computer aided design suite for molecules. The round was led by @GeneralCatalyst & @OakHCFT along with existing investors @ThriveCapital, @OpenAI, @_DimensionCap, @Neo, @lachygroom, @MenloVentures, @svangel, and Yosemite. We're also joined by new investors including @EmCollective and Glade Brook.

Jinay reposted
Patrick Hsu @pdhsu
after months of antibody design papers that only work on single chains, we are seeing much-needed progress on full IgG. congrats to the chai team!
Chai Discovery @chaidiscovery

Today, we’re releasing new data showing that Chai-2 can design antibodies against challenging targets with atomic precision. >86% of our designs possess industry-standard drug-quality properties without any optimization. Thread👇

Jinay @jinaycodes
@max4c_ @samhogan indeed they're similar projects. FYI I'm actively working on making soarXiv more up to date!
Sam Hogan 🇺🇸 @samhogan
Using custom-trained LLMs and > 1k 4090s to visualize 100k scientific research papers in latent space 🌐 DM me for early access 🔜
Jinay reposted
Joshua Meier @joshim5
Dug up some old footage from the archives…
Jinay @jinaycodes
@Aizkmusic R2's free tier is amazing for personal projects. Used it for soarXiv.
aizk ✡️ @Aizkmusic
Has anyone used Cloudflare R2 instead of S3? Let's say you want to store thousands of JPGs for a project, but I just despise dealing with Amazon's UI and UX. Or hell, even the media store for my personal website's blog. Seems compelling, but I want to hear more.
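For context on the swap being discussed: R2 exposes an S3-compatible API, so the usual S3 tooling works once you point it at an R2 endpoint. A minimal sketch, assuming R2's documented endpoint format; the account ID, bucket, and credentials below are placeholders, and the boto3 usage is left as a comment since it needs real credentials:

```python
# Cloudflare R2 speaks the S3 API; you only swap the endpoint URL.
# Assumed endpoint format (per R2's S3-compatible API):
#   https://<ACCOUNT_ID>.r2.cloudflarestorage.com

def r2_endpoint(account_id: str) -> str:
    """Build the S3-compatible endpoint URL for a Cloudflare R2 account."""
    return f"https://{account_id}.r2.cloudflarestorage.com"

# With boto3 (untested sketch; keys come from the R2 dashboard):
#
#   import boto3
#   s3 = boto3.client(
#       "s3",
#       endpoint_url=r2_endpoint("YOUR_ACCOUNT_ID"),
#       aws_access_key_id="...",
#       aws_secret_access_key="...",
#   )
#   s3.upload_file("photo.jpg", "my-bucket", "photos/photo.jpg")

print(r2_endpoint("abc123"))
```

The appeal over S3 is exactly what the thread suggests: same client libraries, different endpoint, no AWS console.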
Jinay @jinaycodes
@karpathy Reminds me of the skill library in the Voyager paper (voyager.minedojo.org). They gave the agent access to a skill library that it could add to and retrieve from. Over time, this let it do increasingly complex tasks by delegating simpler components of the task to skills.
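The skill-library idea is simple to sketch: store named skills (description plus code), retrieve the most relevant ones for a new task, and let the agent add skills it has verified. A toy version, with keyword overlap standing in for Voyager's embedding-based retrieval; all names here are illustrative, not from the paper's code:

```python
class SkillLibrary:
    """Toy skill library: add verified skills, retrieve by description overlap.

    Voyager retrieves skills by embedding similarity; plain word overlap
    stands in for that here to keep the sketch dependency-free.
    """

    def __init__(self):
        self.skills = {}  # name -> (description, code)

    def add(self, name, description, code):
        self.skills[name] = (description, code)

    def retrieve(self, task, k=2):
        # Score each skill by how many words its description shares with the task.
        task_words = set(task.lower().split())
        scored = [
            (len(task_words & set(desc.lower().split())), name)
            for name, (desc, _code) in self.skills.items()
        ]
        scored.sort(reverse=True)
        return [name for score, name in scored[:k] if score > 0]


lib = SkillLibrary()
lib.add("mine_wood", "chop a tree to collect wood", "def mine_wood(): ...")
lib.add("craft_table", "craft a table from wood planks", "def craft_table(): ...")
lib.add("fight_zombie", "attack and defeat a zombie", "def fight_zombie(): ...")

# A complex task reuses simpler skills as building blocks.
print(lib.retrieve("collect wood and craft a crafting table"))
# -> ['craft_table', 'mine_wood']
```

The key property is the same one the tweet points at: the library grows as the agent succeeds, so later tasks can be composed from earlier skills instead of being learned from scratch.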
Andrej Karpathy @karpathy
Scaling up RL is all the rage right now, I had a chat with a friend about it yesterday. I'm fairly certain RL will continue to yield more intermediate gains, but I also don't expect it to be the full story. RL is basically "hey this happened to go well (/poorly), let me slightly increase (/decrease) the probability of every action I took for the future". You get a lot more leverage from verifier functions than explicit supervision, this is great.

But first, it looks suspicious asymptotically - once the tasks grow to be minutes/hours of interaction long, you're really going to do all that work just to learn a single scalar outcome at the very end, to directly weight the gradient?

Beyond asymptotics and second, this doesn't feel like the human mechanism of improvement for the majority of intelligence tasks. There are significantly more bits of supervision we extract per rollout via a review/reflect stage along the lines of "what went well? what didn't go so well? what should I try next time?" etc., and the lessons from this stage feel explicit, like a new string to be added to the system prompt for the future, optionally to be distilled into weights (/intuition) later, a bit like sleep. In English, we say something becomes "second nature" via this process, and we're missing learning paradigms like this. The new Memory feature is maybe a primordial version of this in ChatGPT, though it is only used for customization, not problem solving. Notice that there is no equivalent of this for e.g. Atari RL because there are no LLMs and no in-context learning in those domains.

Example algorithm: given a task, do a few rollouts, stuff them all into one context window (along with the reward in each case), use a meta-prompt to review/reflect on what went well or not to obtain a string "lesson", to be added to the system prompt (or more generally, modify the current lessons database). Many blanks to fill in, many tweaks possible, not obvious.

Example of lesson: we know LLMs can't super easily see letters due to tokenization and can't super easily count inside the residual stream, hence 'r' in 'strawberry' being famously difficult. The Claude system prompt had a "quick fix" patch - a string was added along the lines of "If the user asks you to count letters, first separate them by commas and increment an explicit counter each time and do the task like that". This string is the "lesson", explicitly instructing the model how to complete the counting task, except the question is how this might fall out from agentic practice instead of being hard-coded by an engineer, how this can be generalized, and how lessons can be distilled over time to not bloat context windows indefinitely.

TLDR: RL will lead to more gains because when done well, it is a lot more leveraged, bitter-lesson-pilled, and superior to SFT. It doesn't feel like the full story, especially as rollout lengths continue to expand. There are more S curves to find beyond, possibly specific to LLMs and without analogues in game/robotics-like environments, which is exciting.
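The example algorithm in the thread can be sketched end to end. Everything here is a stand-in, not a real implementation: `llm` is any text-in/text-out model call, `rollout` and `reward` are caller-supplied, and the meta-prompt wording is illustrative:

```python
def reflect_and_update(llm, task, rollout, reward, lessons, n=3):
    """One iteration of the thread's example algorithm:
    - do n rollouts of `task` under the current lessons (the system prompt),
    - pack all rollouts plus their rewards into a single context,
    - meta-prompt the model to review/reflect and distill one explicit
      string "lesson",
    - append it to the lessons database.

    `llm(prompt) -> str`, `rollout(llm, system, task) -> str`, and
    `reward(task, output) -> float` are placeholders the caller supplies.
    """
    system = "\n".join(lessons)
    trials = [rollout(llm, system, task) for _ in range(n)]
    transcript = "\n\n".join(
        f"Attempt {i + 1}: {out}\nReward: {reward(task, out)}"
        for i, out in enumerate(trials)
    )
    meta_prompt = (
        f"Task: {task}\n{transcript}\n"
        "What went well? What didn't? Answer with one short, reusable "
        "lesson for next time."
    )
    return lessons + [llm(meta_prompt)]


# Demo with stubs: a constant "model" and trivial rollout/reward.
new_lessons = reflect_and_update(
    llm=lambda prompt: "separate letters with commas before counting",
    task="count the r's in strawberry",
    rollout=lambda llm, system, task: f"{system}|{task}",
    reward=lambda task, out: 0.0,
    lessons=["think step by step"],
)
print(new_lessons[-1])  # the newly distilled lesson string
```

The open blanks the tweet names (how to verify a lesson helps, and how to compress the lessons database so the system prompt doesn't grow without bound) are exactly the parts this sketch leaves out.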
Jinay @jinaycodes
@hardmaru @Kimi_Moonshot Is this inflection a result of the LR schedule or something else about Muon?
Jinay @jinaycodes
@dhruvtrehan9 > is life just creating more and more novelty for yourself? open-endedness folks would probably say yes
Dhruv Trehan @dhruvtrehan9
is life just creating more and more novelty for yourself? it would take me so much more effort than it took then to feel similar excitement - not because i don't get excited as intensely anymore, just that the immensity of it has been replaced with better attention to detail
Dhruv Trehan @dhruvtrehan9
ah to be 18, starting college, alone in a big city for the first time
Samay @samaysham
I am excited to share that I have joined the founding team at Thrive Holdings (@ThriveCapital) to help build exceptional businesses designed to compound over many decades.
Jinay reposted
Richard C. Suwandi @richardcsuwandi
Most AI systems today follow the same predictable pattern: they're built for specific tasks and optimized for objectives rather than exploration. Meanwhile, humans are an open-ended species—driven by curiosity and constantly questioning the unknown. From inventing new musical genres to imagining life beyond our universe, we continuously push the boundaries of what’s possible. What if AI could be as endlessly creative as humans or even nature itself? I wrote a blog post diving into the world of open-ended AI, exploring how embracing open-endedness might help us break the limits of today’s AI systems 👇 richardcsuwandi.github.io/blog/2025/open…
Jinay @jinaycodes
@kevinhou22 @diabrowser Feels like the kind of onboarding you'd see in a video game and I'm here for it.
Kevin Hou @kevinhou22
☀️ good morning @diabrowser (sound on! 🔊) This is hands down the most cinematic onboarding experience I've ever seen
Jinay @jinaycodes
@beeejar Amazing read. Takes some insane fortitude to recover and still finish the race.
benjamin ar @bjamin_ar
In May, I did the hardest physical challenge of my life: racing 200 miles on Kansas gravel roads. It took me 12h:07m and 7,359 calories. I wrote a short narrative as a search query to find people who resonate.
Jinay @jinaycodes
Highly recommend applying to Neo Scholars if you're in college. DM me if you're thinking about it. There have been so many instances where I see something cool on Twitter and find out its creator is part of the Neo community.
Ali Partovi @apartovi

x.com/i/article/1932…

Jinay @jinaycodes
@TGUPJ How are you using o3 for this? Does it prompt the image gen?
Udara @TGUPJ
I was trying to make a point but o3 is too good at interior design...