Mugilan S
@Mugilan_SS

130 posts

Building Lua | AI Co-Video Editor

Bangalore · Joined August 2025
119 Following · 247 Followers
Mugilan S @Mugilan_SS ·
I love Codex compared to Claude Code. Codex is doing an amazing job.
1 reply · 0 reposts · 5 likes · 3.7K views
Mugilan S @Mugilan_SS ·
Codex is not like Claude Code. If you know your usage limit is about to run out, say in the last 8–10%, give it a very long-running task; even after the limit is exhausted, it will keep working until the task is completed. Shout out to the @OpenAI team.
304 replies · 391 reposts · 12.8K likes · 1.3M views
Mugilan S @Mugilan_SS ·
AI can build CRMs and other well-known tools, but it fails at deep, novel, complex systems.
0 replies · 0 reposts · 4 likes · 3.2K views
Mugilan S @Mugilan_SS ·
This is the first time I have felt this distracted on social media.
0 replies · 0 reposts · 1 like · 1.6K views
Mugilan S @Mugilan_SS ·
Is anyone still coding without an AI coding assistant?
0 replies · 0 reposts · 2 likes · 1.2K views
Mugilan S @Mugilan_SS ·
AI with a CLI is a very good combination.
0 replies · 0 reposts · 1 like · 303 views
Mugilan S @Mugilan_SS ·
I tried all the Gemini models and finally settled on 2.5 Flash, and even it is facing high demand.
0 replies · 0 reposts · 1 like · 270 views
Mugilan S @Mugilan_SS ·
Is this the thing everyone has been waiting for that takes the human out of the loop in training models?
Andrej Karpathy @karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement), this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project.

This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc etc. This is the bread and butter of what I do daily for 2 decades. Seeing the agent do this entire workflow end-to-end and all by itself as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real", I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
0 replies · 0 reposts · 0 likes · 291 views
Mugilan S @Mugilan_SS ·
Building a production-grade application is no joke. Google is the best example: even with great minds, they have not been able to ship a production-ready app in Antigravity. From its launch on Dec 4 until now, Mar 1, the app still contains so many bugs.
0 replies · 0 reposts · 1 like · 176 views
Pankaj @pankajstwt ·
Celebrating 2k+ followers on X! I'm giving away a Figma file with fully editable, scroll-stopping hero designs for FREE. Want it?
- Comment "hero"
- Follow me (so I can DM)
- Repost so more people get it
349 replies · 113 reposts · 724 likes · 30.5K views
Mugilan S @Mugilan_SS ·
I have been thinking about this for a long time. What if everything changes to APIs? Which platform will the AI use? And is the AI we are talking about ChatGPT (or another model), or will the company's own AI interact with the application?
Sridhar Vembu @svembu

Very well-articulated post on "Every SaaS is an API" - to be used by agents that are driven by the AI to contextually integrate the underlying SaaS apps and offer a vastly richer and easier user experience. In other words "AI is the UI" for SaaS. This is where we need to go and we will go, as quickly as possible.

0 replies · 0 reposts · 2 likes · 210 views
Mugilan S @Mugilan_SS ·
The first demo to the prospect failed, and everything starts now.
0 replies · 0 reposts · 1 like · 102 views
Ara Ghougassian @araghougassian ·
We're hosting a 14-day founder program: start from nothing, build a working product, make your first online dollar. Open to only 30 people. Comment "BET" if you wanna join.
1.1K replies · 85 reposts · 1.4K likes · 68.7K views
Mugilan S @Mugilan_SS ·
Codex 5.3 is not good; it feels like GPT-4 or some smaller model. I couldn't feel the power of the new GPT model.
0 replies · 0 reposts · 1 like · 135 views
Mugilan S @Mugilan_SS ·
The best way to get a good prompt is to use Claude. No other model out there has the capacity to write good prompts.
0 replies · 0 reposts · 1 like · 95 views
Mugilan S @Mugilan_SS ·
To move fast, you need more money. That's the harsh reality of using AI to build applications faster.
0 replies · 0 reposts · 1 like · 85 views
Mugilan S @Mugilan_SS ·
It's 2026 and it still takes me 3–5 hours to edit one short Instagram video. AI can generate videos, but it doesn't help inside my timeline. I don't want AI to replace editors. I want AI to work with us: suggest cuts, fix pacing, find dead space. So I'm building it.
0 replies · 0 reposts · 1 like · 108 views
Mugilan S @Mugilan_SS ·
Prompting is not as easy as you think. It is super hard when you are working with AI agents that do multiple tasks.
0 replies · 0 reposts · 2 likes · 75 views
Mugilan S @Mugilan_SS ·
When you're chasing something hard, you end up sacrificing a lot: sleep, time with friends, simple moments you can't get back. But remember, everyone moves with different intentions. You're building something meaningful. Stay focused. Keep going.
0 replies · 0 reposts · 3 likes · 71 views