Sohan

6.4K posts


@HiSohan

I collect dots to connect them later. Views are my own. Building https://t.co/QxnsCuozpY Writing on https://t.co/rOzI5yIXfV

Bengaluru, India · Joined December 2011
581 Following · 1.3K Followers
Pinned Tweet
Sohan @HiSohan
The philosophies of learning of Tagore and Socrates look very different. Tagorean Curiosity is one of wonder, of bringing a fresh perspective. Socratic Inquiry is based on doubt, revealing a deeper truth by questioning. I love both.
[image attached]
Sohan @HiSohan
I love that at @lossfunk, under @paraschopra's guidance, people get to explore esoteric ideas and act on them. Very, very optimistic. And to the ragebaiters: please come back with a study, not a tweet.
Sohan @HiSohan
@saxenauts I would love to listen to it. I am not saying that an AI-generated song can't be vibed to at all.

> This is not theory, it's proof.

Dude, you started to sound like Claude haha
utkarsh @saxenauts
x.com/lossfunk/statu… You should test it on the Strudel language. It generates music, and it's also Turing complete. And you can vibe-test it by actually partying together.

Seriously speaking though, the more interesting question there would be decomposing human music taste through language. That is quite esoteric even for humans. I tried making a Claude skill for that over many iterations; it does get the music taste and vibe from examples, but it struggles with getting the rhythm right. And the evolution loop has to close somehow, either with recursion or with evolution. Did you try any such self-referencing prompting techniques for these? I see a section on self-scaffolding, but nothing specific in the prompt strategy.

Anyway, great work. Posted on WhatsApp but also posting this here again: great work, lossfunk team. I know some of you, but not Aman and Paras. I love the focus on non-obvious optimizations that happen when you're not directly optimizing for a metric. So proud. 🥲
Quoting Lossfunk @lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

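The "self-referencing prompting" utkarsh asks about boils down to a loop that feeds the model's previous output, plus a critique of it, back into the next prompt until some check passes. A minimal sketch of that pattern; call_llm and passes_vibe_check are hypothetical placeholders standing in for a real completion API and a real evaluator, not anything from the thread:

```python
# Minimal sketch of a self-referencing prompting loop.
# call_llm and passes_vibe_check are hypothetical placeholders,
# not any specific provider's API.

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion call to any LLM."""
    raise NotImplementedError

def passes_vibe_check(candidate: str) -> bool:
    """Placeholder evaluator, e.g. a rubric prompt or a human ear."""
    raise NotImplementedError

def evolve(task: str, max_iters: int = 5) -> str:
    candidate = call_llm(f"Write a Strudel pattern for: {task}")
    for _ in range(max_iters):
        if passes_vibe_check(candidate):
            break
        # Close the loop: the previous output becomes part of the next prompt.
        critique = call_llm(f"Critique the rhythm of this pattern:\n{candidate}")
        candidate = call_llm(
            f"Task: {task}\nPrevious attempt:\n{candidate}\n"
            f"Critique:\n{critique}\nRewrite it, fixing the rhythm."
        )
    return candidate
```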
Naveen Benny @navbenny
LLMs won't lead to AGI. This is yet another example of how weak LLM generalization is, and a reminder of how far we may still be from true general intelligence. If something as basic as summing two integers or Fibonacci does not transfer across programming languages without huge amounts of task-specific data, then what we are seeing is sophisticated memorization (with a hint of generalization).

The same issue appears across natural languages. Reasoning and facts often do not transfer cleanly. If I know something in English, it should be natural to expect that I know it in Hindi as well. LLMs often fail at this.

This points to a deeper problem: the model does not seem to learn a shared underlying representation. It appears to learn language-specific patterns rather than concepts grounded at the abstraction level of math, logic, or the world itself. Humans work the other way around. Language is a wrapper over understanding, not the source of it. We first form a model of the world, then use language to communicate it. With LLMs, the training process seems to invert that order.

Great work from @lossfunk @inceptmyth @paraschopra
Quoting Lossfunk @lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

Sohan @HiSohan
@ntgbutlight @navbenny That wouldn't be a problem, if the bots could actually do what the humans do. The answer is no.
Jamie Vardy @ntgbutlight
@navbenny the only problem with your little yap is nobody gives a shit about intelligence. can the bot do a human's job, with or without context? the answer is yes. and that is all it needs to change the world. respectfully, lil bro.
Sohan @HiSohan
Google shuts down Firebase Studio. Next time someone asks you "But how do you fight Google?", just tell them: chances are, they'll shut it down themselves.
Dhruv Trehan @dhruvtrehan9
bruh i don't know if i'm just being nitpicky, but high-quality original evals/benchmarks for tasks with no easy natural-language ground-truth data are so hard to build
Sohan @HiSohan
Neuromodulator equivalents in AI systems will change the game fundamentally. And to get there, we need data, a lot of it, to understand and model neuromodulators effectively.
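For a concrete picture of what a "neuromodulator equivalent" might mean (purely an illustrative sketch, not anything from the tweet or from @srikipedia's framework): a slow, global gain signal that rescales a whole layer's response, separate from the fast learned weights, roughly the way dopamine or acetylcholine modulate neuronal gain. A minimal numpy sketch:

```python
import numpy as np

# Illustrative sketch only: a layer whose effective gain is scaled by a
# slow, global "neuromodulator" signal, separate from the fast weights.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # fast synaptic weights (learned as usual)
b = np.zeros(4)

def layer(x: np.ndarray, modulator: float) -> np.ndarray:
    """Same weights, different dynamics depending on the modulator level."""
    return np.tanh(modulator * (W @ x + b))

x = rng.normal(size=8)
print(layer(x, modulator=0.2))  # low gain: flattened, near-linear response
print(layer(x, modulator=2.0))  # high gain: sharper, more saturated response
```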
Sohan @HiSohan
@scared_ape Tools make you more reactive. They just provide the illusion of proactivity.
jay.agent 🤖 @scared_ape
yeah tools are making you proactive but are they making you happy?
Sohan @HiSohan
Wiring circuits.
Punch cards.
Binary.
Assembly.
C.
C++.
Java.
Python/JavaScript.
Markdown.
Sohan @HiSohan
@chit_raa Crazy situation with Uber at that time.
Chitra Singh @chit_raa
Hailstorm in Indiranagar whaaa?
[two images attached]
Sohan @HiSohan
Very excited to catch up with everyone at @lossfunk and to listen to @srikipedia.
Quoting Lossfunk @lossfunk

👉 [New Lossfunk Talk] We have @srikipedia, Professor of Neuroscience and AI at Newcastle University (offline in Bangalore + online streaming). He will propose a computational framework to incorporate diverse functional properties of neuromodulators and inspire new "neuromodulation-aware" ANN architectures. If you're curious about what AI can learn from the brain, this is the perfect room. Link to register: luma.com/7nklo2pf

Sohan @HiSohan
@kimmonismus I think designers will account for DLSS. Remember CRTs? Games were pixelated in a way that accounted for electron-beam scatter, so they looked excellent on a CRT. Play those same games on an LCD and they look bad, because the rendering isn't the same. Designers will adapt. No two ways about it.
Chubby♨️ @kimmonismus
I seriously don't get the hate for DLSS5. Even though DLSS5 does enhance the appearance to some extent, the result is still a much more natural look. I'm well aware of the argument that this alters the developers' style, but 1) everyone is free to disable DLSS, and 2) I hardly believe that visible improvements fundamentally change the atmosphere. So far, every image has been fine by me.
[image attached]
Sohan @HiSohan
Coders today be like: "I don't wanna learn fundamentals or DSA. Forget those, I don't even wanna learn at all, and in fact I'll actively forget everything I already knew." And then they wonder why the "nerds" take the jobs.
Sohan @HiSohan
Prefect is a piece of art!
Sohan @HiSohan
@arpit_bhayani How is it the same algorithm? You could just SIMD an array sum.
Arpit Bhayani @arpit_bhayani
Let me talk about something obvious, but with a bit of quantification... Theoretically, both arrays and linked lists take O(n) time to traverse, but here's what actually happens when you benchmark by summing 100k integers:

- Array: 68,312 ns
- Linked list: 181,567 ns

Summing an array is ~3x faster than a linked list. Same algorithm, same complexity, but wildly different performance. The reason is cache behavior.

When you access array[0], the CPU fetches an entire cache line (64 bytes), which includes array[0] through array[15]. The next 15 accesses are essentially free. Arrays hit the cache about 94% of the time.

Linked lists suffer from pointer chasing. Each node is allocated separately by malloc(), scattered randomly in memory. Each access likely requires a new cache-line fetch, resulting in a 70% cache miss rate.

This is a good example of why Big O notation tells only part of the story. Spatial locality and cache-friendliness can make a 2-3x difference even when the theoretical complexity is identical. I am sure you would have known this, but this crude benchmark quantifies just how fast cache-friendly algorithms can be. Hope this helps.
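Arpit's numbers come from a compiled benchmark, but you can feel the same effect, plus the SIMD point from the reply above, even in CPython. A rough sketch (mine, not Arpit's code); note that interpreter overhead inflates the linked-list side well beyond the pure cache penalty, so treat the gap as directional:

```python
import time
import numpy as np

N = 100_000

# Contiguous buffer of machine integers vs. heap-allocated linked nodes.
arr = np.arange(N, dtype=np.int64)

class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = None
for v in range(N - 1, -1, -1):    # build so traversal order is 0..N-1
    head = Node(v, head)

def sum_linked(head):
    total = 0
    node = head
    while node is not None:       # pointer chasing: one hop per element
        total += node.value
        node = node.next
    return total

def bench(fn, *args):
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

_, t_simd = bench(np.sum, arr)        # vectorized C loop (SIMD-friendly)
_, t_list = bench(sum_linked, head)   # scattered nodes, per-hop dereference
print(f"numpy contiguous sum: {t_simd*1e6:9.1f} us")
print(f"linked-list sum:      {t_list*1e6:9.1f} us")
```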
Paras Chopra @paraschopra
Your long-term happiness is a simple aggregate of your day-to-day happiness, and your day-to-day happiness is an aggregate of moment-to-moment happiness... which (unless you're in debilitating pain or poverty) completely depends on your attitude towards life.
Sohan @HiSohan
Dosa so greasy that we were scared we might get invaded. But it was tasty and indulgent, to the point that I finished even the coconut chutney and sambar!! #blrdiaries
[image attached]