Anchit Navelkar

173 posts

Anchit Navelkar

@mronian_

founder @idenhq previously @gumroad, founder/CTO @morphledigipath, @iitkgp

Bengaluru, India · Joined October 2009
604 Following · 152 Followers
Anchit Navelkar retweeted
staysaasy (@staysaasy)
It’s 2018 and your coworker just sent you a 400 line pull request. You get a cup of coffee and sit down to review it. It’s beautiful. Elegant micro-refactors. Crispy method names.

You catch a few things, but that’s ok. It’s part of the dance. They didn’t consider extensibility on part of their API. Here’s a comment buddy. They respond in an hour saying they think we should do one piece differently than your comment. Hey let’s jump into a room and figure it out. We can’t just agree to disagree, this code is too important.

The PR merges and goes to prod. You feel a shared sense of ownership and accomplishment. That night you go to sleep and dream of that code. You can still see the shapes of it on the backs of your eyelids, your IDE syntax highlighting sparking neurons in your reptile brain.

You go to work the next day ready to go. You understand the system. N is your foundation. Time to build n+1.
144 replies · 431 reposts · 9.9K likes · 947.3K views
Anchit Navelkar retweeted
Karri Saarinen (@karrisaarinen)
I think both extremes appear once either one exists. AI is different from most technologies because its boundaries are so undefined. It can be framed as capable of almost anything, and when it fails, the explanation is often that the user prompted it wrong or that the models are not good enough yet. That makes it hard to evaluate the actual state of the technology. Its correctness is often judged anecdotally. In science, something must be falsifiable: there needs to be a way to prove that something works, and a way to prove that it does not. You cannot keep saying the test failed only because it was not performed correctly, especially when there are no clear instructions for what “correctly” means.
1 reply · 1 repost · 31 likes · 3.8K views
Anchit Navelkar retweeted
David Cramer (@zeeg)
Everyone is slowly coming to this realization, and I assure you, no one is running multitudes of agents overnight. No one that is doing anything of substance at least. There _are_ people pretending to be scientists, or fully caught up in their drug infused AI overdose, that think their slop machines are changing the world. They're not tho, and they're just wasting a bunch of money and compute to create a lot of LoC that will just get thrown away. The state of the art is still "can we even one shot a production quality patch that we won't regret later", and it's rarer than you'd expect based on discourse.
Ronan Berder (@hunvreus)

Talking to smarter folks than me, I'm convinced many of the AI folks in my timeline are full of shit. Nobody is "running 20 agents overnight" and building stuff for actual users. Maybe some are building internal tools or disposable software. Maybe. But building software people like using? That doesn't get hacked on day one or blow up after the 3rd user? Nope. I don't even understand what that's supposed to look like. Do you work out a 57-page document that perfectly describes what you want to build and then summon 14 agents and have them run wild for 6 hours? And what comes out on the other end isn't a broken pile of shit? Nope. Not buying it. PS: it may also be that I have an IQ of 82 and can't figure it out.

170 replies · 192 reposts · 2.6K likes · 715K views
Anchit Navelkar retweeted
Paras Chopra (@paraschopra)
AI bois be like:
Paras Chopra tweet media
124 replies · 547 reposts · 7.4K likes · 296.1K views
Josh Cohenzadeh (@jshchnz)
I will out-accelerate you all
My newest repo ~codemaxxed~ is already at 68M+ LOC & over 6,800 commits in just a day
Bring it on
Josh Cohenzadeh tweet media
49 replies · 12 reposts · 591 likes · 71.2K views
Anchit Navelkar retweeted
Karri Saarinen (@karrisaarinen)
I keep thinking about built-in agency vs. agent sprawl. Custom agents may become the new custom fields of AI software: easy to add, hard to manage, and a long-term source of complexity overhead. You end up with hundreds of agents doing narrow tasks across systems independently, conflicting with each other’s changes and burning tokens as they go instead of having one system operating in an agentic way.
33 replies · 5 reposts · 141 likes · 12.4K views
Anchit Navelkar (@mronian_)
@ravishar313 marketing burn can keep bringing in new users, giving the illusion of token consumption growth
1 reply · 0 reposts · 0 likes · 169 views
Ravi Sharma (@ravishar313)
@mronian_ I think token consumption is directly correlated to retention. Low retention would also mean a low token count.
1 reply · 0 reposts · 1 like · 183 views
Ravi Sharma (@ravishar313)
@mronian_ He's not wrong, but how that matters for ARR is to be seen
1 reply · 0 reposts · 3 likes · 909 views
kuldeep (@ku1deep)
Here is a riddle. If you define ARR as Almost Reached Reality, what is the number you should claim?
8 replies · 0 reposts · 61 likes · 4.8K views
Anish Acharya (@illscience)
if you thought saas-pocalypse was bad just wait for computer use to get really good later this year
the implications for incumbents are 100x more than coding agents because computer use asymmetrically benefits “hostile” integrators
& expect a race to commoditize complements
54 replies · 26 reposts · 399 likes · 64.3K views
Anchit Navelkar (@mronian_)
kinda feels sensationalized
java -> python has similar syntax complexity
unlambda -> python is a whole different ball game
this feels less like “models don’t understand” and more like “nobody understands unseen abstractions without iteration / tools”
Lossfunk (@lossfunk)

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

0 replies · 0 reposts · 0 likes · 88 views
David Cramer (@zeeg)
so many people on this everything app trying to tell me what I'm doing wrong with LLMs as if I don't ship 100x more code than them
23 replies · 5 reposts · 303 likes · 15.5K views