Kushagra
@Finetunedxd
389 posts
21 | Eng @ghostai
Bengaluru · Joined March 2024
253 Following · 112 Followers
Kushagra retweeted
Stitch by Google @stitchbygoogle
Tomorrow, we’re introducing you to your new vibe design partner. 🤝 Our biggest update ever drops tomorrow. 👀👇
245 · 443 · 6.7K · 1.6M
Kushagra retweeted
Claude @claudeai
A small thank you to everyone using Claude: We’re doubling usage outside our peak hours for the next two weeks.
1.9K · 3.6K · 48.5K · 12.6M
Kushagra retweeted
Morgan @morganlinton
The cofounder and CTO of Perplexity, @denisyarats, just said internally at Perplexity they’re moving away from MCPs and instead using APIs and CLIs 👀
Morgan tweet media
329 · 371 · 5.1K · 2.8M
Kushagra @Finetunedxd
Knowledge work was built around scarcity of information. Agents just ended that. Research, analysis, writing, synthesis: the core loops of knowledge work can now run autonomously. The scarce skills now: taste, judgment, problem selection.
0 · 0 · 2 · 27
Kushagra @Finetunedxd
AI for code, AI for debugging, AI for review 😇
1 · 0 · 1 · 31
Kushagra retweeted
Claude @claudeai
Introducing Code Review, a new feature for Claude Code. When a PR opens, Claude dispatches a team of agents to hunt for bugs.
2.1K · 5.1K · 62.7K · 23.4M
Kushagra retweeted
Andrej Karpathy @karpathy
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.
github.com/karpathy/autor…
Part code, part sci-fi, and a pinch of psychosis :)
Andrej Karpathy tweet media
1.1K · 3.7K · 28.3K · 11M
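The loop Karpathy describes above (propose a change to the training setup, run it, keep it only if validation loss drops, one git commit per accepted improvement) can be sketched in miniature. This is a hedged toy model, not the actual autoresearch code: `eval_loss` stands in for a real 5-minute training run, `mutate` stands in for the agent's code edits, and all hyperparameter names here are hypothetical.

```python
import random

def eval_loss(settings):
    # Stand-in objective: pretend validation loss is minimized near
    # lr = 3e-4 and improves slightly with a wider model.
    lr, width = settings["lr"], settings["width"]
    return (lr - 3e-4) ** 2 * 1e6 + 1.0 / width

def mutate(settings, rng):
    # The "agent" proposes a small change to one hyperparameter.
    new = dict(settings)
    if rng.random() < 0.5:
        new["lr"] = max(1e-5, new["lr"] * rng.uniform(0.5, 2.0))
    else:
        new["width"] = max(8, int(new["width"] * rng.uniform(0.8, 1.25)))
    return new

def autoresearch_loop(n_runs=50, seed=0):
    # Each iteration is one complete (toy) training run; an improvement
    # is "committed" like a git commit on the feature branch.
    rng = random.Random(seed)
    best = {"lr": 1e-3, "width": 64}
    best_loss = eval_loss(best)
    commits = []
    for run in range(n_runs):
        candidate = mutate(best, rng)
        loss = eval_loss(candidate)
        if loss < best_loss:  # keep only settings that lower val loss
            best, best_loss = candidate, loss
            commits.append((run, best_loss))
    return best, best_loss, commits
```

Comparing `commits` across different `mutate` policies (or seeds) mirrors the tweet's idea of comparing research progress of different prompts or agents.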
Kushagra @Finetunedxd
Say hello to Rae
Kushagra tweet media
1 · 0 · 3 · 94
Kushagra @Finetunedxd
Escape
0 · 0 · 3 · 149
Kushagra @Finetunedxd
Last option
Kushagra tweet media
0 · 1 · 5 · 123
Kushagra @Finetunedxd
Late night rides
Kushagra tweet media
0 · 0 · 4 · 98
Kushagra @Finetunedxd
Sometimes AI is crazy
0 · 0 · 2 · 77
Kushagra @Finetunedxd
Locked in
Kushagra tweet media
0 · 0 · 2 · 177
Kushagra @Finetunedxd
Rough day
0 · 0 · 3 · 205
Kushagra @Finetunedxd
Let my intrusive thoughts win
Kushagra tweet media
1 · 0 · 5 · 296
Kushagra @Finetunedxd
What’s stopping you from coding like this
Kushagra tweet media
0 · 0 · 3 · 234
Kushagra @Finetunedxd
What are you guys doinn ??
1 · 0 · 1 · 220