Anand Murugan

255 posts

@wordsandforms

Leaning towards abundance

Chennai · Joined April 2020
59 Following · 13 Followers
Pinned Tweet
Anand Murugan @wordsandforms
[tweet media]
ZXX
Replies 0 · Reposts 0 · Likes 0 · Views 284

Anand Murugan @wordsandforms
@_svs_ Indians need to wake up to all the manipulation and psyops and attacks we have been dealing with for a long time but were oblivious to. We need to be strong and protect ourselves.
Replies 0 · Reposts 0 · Likes 0 · Views 65

Anand Murugan reposted
CooperBaggs 💰🍞 @edgaralandough
Every man climbs two mountains in his lifetime. The first is the one everyone told you to climb. You followed the path, hit every milestone, and the emptiness followed you there. That mountain ends with the realisation that you were climbing someone else's mountain the whole time. The second has no audience, no clear path, no one clapping. But every step feels like yours. The hardest part isn't climbing the second mountain. It's walking down the first one.
Replies 26 · Reposts 164 · Likes 1K · Views 50.4K

Anand Murugan @wordsandforms
@NinhVu80226 @DannyLimanseta I created automated generation of various prefabs (road segments, road intersections with sidewalks, buildings, etc.) from building-block elements: road surface mesh, lane markers, sidewalk and curb prefabs. One click generates a catalog of segments with various geometries.
Replies 0 · Reposts 0 · Likes 0 · Views 16

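The generation step described above is Unity Editor scripting in C#; as a language-neutral illustration, here is a minimal Python sketch of the same combinatorial idea, where one call enumerates a catalog of segment descriptors from building-block lists. Every name below is invented for illustration; none of it is Unity API.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical building-block elements (stand-ins for Unity sub-prefabs).
SURFACES = ["asphalt", "concrete"]
LANE_COUNTS = [1, 2, 3]
SIDE_FEATURES = ["none", "sidewalk", "sidewalk_curb"]

@dataclass(frozen=True)
class SegmentPrefab:
    """Descriptor for one generated road-segment variant."""
    surface: str
    lanes: int
    sides: str

    @property
    def name(self) -> str:
        return f"road_{self.surface}_{self.lanes}lane_{self.sides}"

def generate_catalog() -> list[SegmentPrefab]:
    """The 'one click': enumerate every combination of building blocks."""
    return [SegmentPrefab(s, n, f)
            for s, n, f in product(SURFACES, LANE_COUNTS, SIDE_FEATURES)]

catalog = generate_catalog()
print(len(catalog))     # 18 variants (2 surfaces x 3 lane counts x 3 side options)
print(catalog[0].name)  # road_asphalt_1lane_none
```

In an actual Unity project, the loop body would instantiate and save real prefab assets instead of descriptor objects, but the cataloging logic is the same.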
Anand Murugan @wordsandforms
@NinhVu80226 @DannyLimanseta Another powerful technique is to ask the AI to create Editor utilities and menu actions that execute workflows like batch processing of assets (adding a script to a set of prefabs, adding and fitting colliders, assigning layers, etc.), all automated and executable from a menu item.
Replies 1 · Reposts 0 · Likes 1 · Views 99

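In Unity these batch utilities would be C# methods tagged with the `MenuItem` attribute; the pattern itself is simple enough to sketch in Python with made-up in-memory asset records (nothing below is real Unity API, just an illustration of the batch step).

```python
def batch_process(prefabs, component, layer):
    """One menu action: attach a component to every prefab and assign a layer."""
    for prefab in prefabs:
        comps = prefab.setdefault("components", [])
        if component not in comps:  # idempotent: safe to run the action twice
            comps.append(component)
        prefab["layer"] = layer
    return prefabs

# Fake prefab records standing in for a selected set of project assets.
prefabs = [{"name": "Crate"}, {"name": "Barrel", "components": ["Rigidbody"]}]
batch_process(prefabs, "BoxCollider", "Interactable")
print(prefabs[0])  # {'name': 'Crate', 'components': ['BoxCollider'], 'layer': 'Interactable'}
```

Making the action idempotent matters in practice: an editor menu item may be run repeatedly over overlapping selections, and it should not attach duplicate components.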
Danny Limanseta @DannyLimanseta
I used Cursor to vibe code a simple fishing game prototype on the Unity Engine. Here are my learnings:
- I did not use any Unity MCP for this. The game was built entirely by Cursor models (Sonnet 4.6 for execution and Opus 4.6 for planning).
- The model was able to set up the game and get the basic game mechanics working fairly quickly.
- I had to use the Unity Editor UI to attach components to the in-game objects manually, but it was quite easy to follow the instructions given by the Cursor model.
- The Unity Editor is huge and slow! Compared to Godot, I find the UI really clunky; I feel tired looking at it.
- The Unity Assets Marketplace is amazing; there are so many amazing art assets there (like the ones I am using for this game). This is probably the biggest strength of Unity.
- I had some issues with restoring checkpoints, probably because of how clunky the Unity Editor UI is: I had to manually adjust things in the Editor, which the model doesn't have knowledge of.
Overall, the results turned out pretty decent, but it was a rather frustrating experience, especially when I had to debug issues or roll back changes. I'll explore more vibe coding on Unity, but for now I think I prefer Godot as a game engine. I just wish there were a Godot asset marketplace!
Replies 92 · Reposts 55 · Likes 1.3K · Views 180.6K

Anand Murugan @wordsandforms
@psankar @sriramhere @ionhandshaker @muthutalks IMO, one of the biggest shortcomings of Western science is over-reliance on numerical metrics and statistics. The good thing about AI is that it seems much more intelligent when it comes to these issues compared to the typical ‘scientists’ I have met in my life.
Replies 2 · Reposts 0 · Likes 1 · Views 47

Anand Murugan @wordsandforms
@psankar @sriramhere @ionhandshaker @muthutalks This kind of personalized treatment based on ‘genetic’ data will happen eventually, but the legacy drug-approval frameworks are very badly designed to handle it. Personally, I think statistical approaches in medicine have only limited value because every human is different. 1/n
Replies 1 · Reposts 0 · Likes 1 · Views 51

Anand Murugan @wordsandforms
@svembu Zoho should take the lead. We need many Indian players
Replies 1 · Reposts 0 · Likes 26 · Views 2.8K

Sridhar Vembu @svembu
Sarvam's highly competitive AI models illustrate an important point: we must do catch-up R&D, however un-prestigious or thankless it feels, and as we start to catch up, innovative new ideas will emerge. Sarvam is on a great trajectory! This is why we quietly persist in all the efforts we do.

Pratyush Kumar @pratykumar
📢 Open-sourcing the Sarvam 30B and 105B models! Trained from scratch with all data, model research and inference optimisation done in-house, these models punch above their weight in most global benchmarks plus excel in Indian languages. Get the weights at Hugging Face and AIKosh. Thanks to the good folks at SGLang for day 0 support, vLLM support coming soon. Links, benchmark scores, examples, and more in our blog - sarvam.ai/blogs/sarvam-3…

Replies 51 · Reposts 433 · Likes 3.1K · Views 100K

Anand Murugan @wordsandforms
@johnrushx The mistake he is making is not understanding how deep human language is. It goes down to a spiritual, ontological level. That’s why the Bible says ‘In the beginning was the Word’. Language is fundamental to reality — not just a superficial human layer.
Replies 0 · Reposts 0 · Likes 0 · Views 35

John Rush @johnrushx
I don't share Yann LeCun's point of view. If it's true, how can he explain the intelligence in blind humans? They have never seen even a pixel in their life, but they are often as smart as a non-blind person. He makes a typical mistake scientists/devs make by being ego-stubborn.
Rohan Paul @rohanpaul_ai

Yann LeCun (@ylecun ) explains why LLMs are so limited in terms of real-world intelligence. Says the biggest LLM is trained on about 30 trillion words, which is roughly 10 to the power 14 bytes of text. That sounds huge, but a 4 year old who has been awake about 16,000 hours has also taken in about 10 to the power 14 bytes through the eyes alone. So a small child has already seen as much raw data as the largest LLM has read. But the child’s data is visual, continuous, noisy, and tied to actions: gravity, objects falling, hands grabbing, people moving, cause and effect. From this, the child builds an internal “world model” and intuitive physics, and can learn new tasks like loading a dishwasher from a handful of demonstrations. LLMs only see disconnected text and are trained just to predict the next token. So they get very good at symbol patterns, exams, and code, but they lack grounded physical understanding, real common sense, and efficient learning from a few messy real-world experiences. --- From 'Pioneer Works' YT channel (link in comment)

Replies 82 · Reposts 9 · Likes 140 · Views 31.9K

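The two "10 to the power 14" figures in the quoted summary can be sanity-checked with back-of-envelope arithmetic. The bytes-per-word ratio below is my assumption, not a number from the tweet; the visual-side calculation just solves for the input rate the comparison implies.

```python
# Text side: 30 trillion training words at an assumed ~3.3 bytes of text per word.
words = 30e12
text_bytes = words * 3.3
print(f"{text_bytes:.1e} bytes")          # ~9.9e+13, i.e. about 10^14

# Visual side: what sustained input rate would a 4-year-old's eyes need
# to accumulate 1e14 bytes over ~16,000 waking hours?
awake_seconds = 16_000 * 3600
required_rate = 1e14 / awake_seconds
print(f"{required_rate / 1e6:.1f} MB/s")  # ~1.7 MB/s
```

So the comparison holds together arithmetically: roughly 1.7 MB/s of visual input over 16,000 hours matches the text corpus in raw bytes.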
Anand Murugan @wordsandforms
@msathia @peterthiel The best software engineers have always been people who have very good language ability rather than math ability (even before AI). Now even more so because of AI.
Replies 0 · Reposts 0 · Likes 0 · Views 10

சத்தியா (Sathia)
I don’t even understand what @peterthiel says about word people. Still we need layers and layers of “software” to manage, run and build those models and layers of stuff to use those models. Aren’t they an analytical and engineering function?
Dustin @r0ck3t23
Peter Thiel just told Silicon Valley it’s automating away its own cognitive moat. Nobody there is paying attention.

Thiel: “It is striking to me how bad Silicon Valley is at talking about these sorts of things.” The industry is either arguing over 20% improvements in the next transformer model or jumping straight to simulation theory. They’re missing the massive real-world shift happening right in the middle.

Thiel: “My intuition would be it’s going to be quite the opposite, where it seems much worse for the math people than the word people.” For decades, Silicon Valley worshipped quantitative intelligence. Math and coding were the ultimate safety nets.

Thiel: “Within three to five years, the AI models will be able to solve all the US Math Olympiad problems.” Once a machine instantly solves the hardest math problems on earth, the economic value of being a human calculator doesn’t just decline. It disappears.

And the historical irony is brutal. The societal bias toward math over verbal ability started during the French Revolution. Not because math was more valuable. Because verbal ability ran in aristocratic families, and math was elevated as the great equalizer to break nepotism. A 200-year-old political accident became the foundation of Silicon Valley’s entire hiring philosophy.

AI is about to snap it back. The people who built the models that can now outperform them mathematically spent their careers optimizing for the wrong skill. The future belongs to the word people. The engineers didn’t see it coming because they were too busy calculating.

Replies 1 · Reposts 0 · Likes 1 · Views 98

Anand Murugan @wordsandforms
@psankar I have also stopped using my IDE or editor, except for sometimes editing large, complex prompts (I use Sublime for that). I use about 4-6 Codex terminals. I’m developing for Unity, so my feedback loop to the AI is manual plus log files.
Replies 1 · Reposts 0 · Likes 1 · Views 101

psankar @psankar
I do not even open my editor anymore these days. Just a terminal with 3 tabs:
1st: claude code with some MCP
2nd: to run tests
3rd: docker, git push, cat, etc.
I do not do multiple parallel claude sessions like many people recommend though.
Replies 3 · Reposts 0 · Likes 13 · Views 700

Anand Murugan @wordsandforms
@aravind @Sputnik_India Black holes are not real. General relativity is a childish theory. Most of modern physics is bullshit.
Replies 0 · Reposts 0 · Likes 0 · Views 74

Aravind @aravind
@Sputnik_India Depends on how big the black hole is. In fact, we could have black holes all around us now with no effect.
Replies 16 · Reposts 23 · Likes 712 · Views 23.2K

Sputnik India @Sputnik_India
😲 If a lab-made black hole got out of control, Earth could end up like this
Replies 14 · Reposts 31 · Likes 283 · Views 35.4K

Anand Murugan @wordsandforms
@thsottiaux A handoff feature between the ChatGPT desktop app and Codex. Also, the Projects feature of ChatGPT is terribly handicapped by the lack of search and share functionality.
Replies 0 · Reposts 0 · Likes 0 · Views 26

Tibo @thsottiaux
What could we do better on Codex? App, model, strategy and features… what’s wrong in how we approach things that we should improve immediately?
Replies 1.2K · Reposts 11 · Likes 943 · Views 101.3K

Anand Murugan @wordsandforms
@psankar @iVenpu I had good experiences at Thiruchendur, Srirangam and Chidambaram. In the first two I went very early in the morning (5 AM). I paid a priest at Thiruchendur 1000 rupees, I think, and he took me straight in past the hordes; darshan was finished in an hour.
Replies 1 · Reposts 0 · Likes 0 · Views 50

psankar @psankar
@iVenpu Thiruchendhur for me. The most shameless, openly corrupt priest mob that I have ever seen in any temple.
Replies 3 · Reposts 1 · Likes 10 · Views 599

Anand Murugan @wordsandforms
@psankar Good thing I haven’t been spoiled by enterprise access. I have been developing my product, constantly amazed at the speed, quality, sophistication and neatness of the output. I take frequent breaks from AI-assisted work because I need time to process what the AI has done.
Replies 0 · Reposts 0 · Likes 1 · Views 74

psankar @psankar
AI tools have ruined hobby programming for me. $20 per month is the max limit for my middle-class mindset for AI tooling; $100 per month is ultra-luxury stuff. After getting used to unlimited Opus in my day job and the 4-hour and weekly limits of Claude, I do not do any hobby projects.
Replies 5 · Reposts 0 · Likes 8 · Views 1.5K

Anand Murugan @wordsandforms
@antirez Codex is much better for grunt work: nitty-gritty debugging and technical refactoring, and it also does a good job at planning and architecture. It also has vastly better rate limits. Claude Code can be good for big system implementations, but its token limits are awful.
Replies 0 · Reposts 0 · Likes 1 · Views 81

antirez @antirez
Codex seems semantically more powerful than Claude Code. Maybe Claude is a better overall agent, a lot more "respectful" of the files, less prone to make a mess. But GPT5.2 is more capable. I have no doubts... I wonder why Claude Code is *so much* more popular.
Replies 237 · Reposts 28 · Likes 868 · Views 196.4K

Anand Murugan @wordsandforms
@psankar Codex has very generous limits. I approached my weekly limit once, on the very last day of the week, and I have never hit the 5-hour limit. Claude Code, on the other hand, kicks me out twice a day. And I am starting to appreciate Codex more (it has a different style). $20/month on both.
Replies 0 · Reposts 0 · Likes 1 · Views 62

psankar @psankar
Ideas are big, token limits are short
Replies 1 · Reposts 0 · Likes 2 · Views 327

Anand Murugan @wordsandforms
@aravind India should not indulge in such childish things. America and Trump are jokers. American imperialism is on its last legs and is like a cornered cat. They’re thrashing around because they know they are no longer leading the world in many ways.
Replies 0 · Reposts 0 · Likes 0 · Views 29

Aravind @aravind
It is definitely silly, immature, egotist. But many understand the power of this man, and the power of ego-massaging this man. There is no point fighting on principle with someone so silly, yet so powerful, who evidently seeks acceptance, acknowledgement, and awards.

I would recommend India institute a Nobel-equivalent prize for science, leadership, and entrepreneurship soon. At least 1.2 million dollars, all collected from private sponsorships, with sponsoring companies invited to the event for networking. And do what is necessary when awarding the esteemed prize for "leadership" to the one seeking such awards. Much better than spending far more taxpayer money lobbying Washington. FP needs to be pragmatic this way. Also, the award instantly gains recognition and fame. And from next year the Indian prize can be given to really deserving people from around the world, to compete with and supersede the Nobel prize.

Aravind @aravind
Trump's ego is a foreign-policy tool. The sooner we realize this, the better it is for India. I see upcoming sanctions, tariffs, and arm-twisting of India by the Islamist- and racist-compromised, predominantly anti-India US Congress. An ego-massaged Trump can mitigate some of it.

Replies 164 · Reposts 497 · Likes 4.2K · Views 295.5K