Devilal Sharma
@devilalsharma

3.5K posts
CEO @famdo_in | 2x Founder, 2x Father | Tech/AI insights | Sweat daily, live healthy | IIT Madras alum.

Bengaluru, India · Joined December 2009
744 Following · 703 Followers

Devilal Sharma reposted
Arvind Jain @jainarvind
Agentic AI is everywhere right now. But very few teams can explain why their agents behave the way they do, or how to systematically make them better.

People often describe traces as the “codebase” for agents. They show how an agent thinks and what it did at every step. As agents take on more tools, sandboxes, and skills, their paths multiply. That makes them harder to reason about and harder to improve. Static prompts don’t scale when every run looks different.

At @glean, we use traces as part of the learning and memory loop, not just logging. Trace learning lets agents learn from real usage, adapt to edge cases, and get better without model fine-tuning or long instruction sets. The goal isn’t to replay old runs, but to extract the signal that helps the agent make a better decision next time.

In the enterprise, tool strategies are never one-size-fits-all. Each company wires systems together differently, defines its own sources of truth, and has its own rules of engagement. Treating this as generic is both a security risk and a quality problem, because it ignores how work actually gets done. Work is also personal. The systems people touch, the updates they make, and the templates they use all vary.

So we built learning at two levels:
- Enterprise-level strategies for how tools and workflows operate
- User-level preferences for how work actually gets done

Traces give us a way to understand and shape agent decision-making, and to create a feedback loop that compounds over time. If agentic AI is going to move beyond impressive demos to reliable day-to-day work, this kind of trace-driven learning is essential. It’s one of the ways we’re building self-learning agents that can execute real work, at scale.
Tony Gentilcore@tonygentilcore

x.com/i/article/2039…

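The two-level learning loop the tweet above describes (enterprise-level strategies plus user-level preferences learned from traces) can be sketched as a toy data structure. This is a minimal illustration only, not Glean's actual system; every class, field, and tool name here is an assumption for the sake of the example:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Trace:
    """One recorded agent run: who ran it, which tools it called, outcome."""
    user: str
    tool_calls: list
    succeeded: bool

class TraceLearner:
    """Toy two-level learning over traces.

    Enterprise level: which tool-call sequences tend to succeed company-wide.
    User level: which tools an individual user actually touches.
    """
    def __init__(self):
        # tool-call sequence -> [successes, total runs]
        self.enterprise_stats = defaultdict(lambda: [0, 0])
        # user -> tool -> usage count
        self.user_tools = defaultdict(lambda: defaultdict(int))

    def record(self, trace: Trace) -> None:
        seq = tuple(trace.tool_calls)
        self.enterprise_stats[seq][1] += 1
        if trace.succeeded:
            self.enterprise_stats[seq][0] += 1
        for tool in trace.tool_calls:
            self.user_tools[trace.user][tool] += 1

    def best_sequence(self) -> tuple:
        """Enterprise-level strategy: sequence with the best success rate."""
        return max(self.enterprise_stats,
                   key=lambda s: self.enterprise_stats[s][0] / self.enterprise_stats[s][1])

    def preferred_tool(self, user: str) -> str:
        """User-level preference: this user's most-used tool."""
        return max(self.user_tools[user], key=self.user_tools[user].get)

learner = TraceLearner()
learner.record(Trace("alice", ["search_crm", "draft_email"], succeeded=True))
learner.record(Trace("alice", ["search_crm", "draft_email"], succeeded=True))
learner.record(Trace("bob", ["search_wiki"], succeeded=False))
```

The point of the sketch is the shape of the feedback loop: traces feed aggregate statistics rather than being replayed verbatim, which matches the "extract the signal, don't replay old runs" framing.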
Devilal Sharma reposted
Chris Laub @ChrisLaubAI
🚨 BREAKING: A Google researcher and a Turing Award winner just published a paper that exposes the real crisis in AI. It's not training. It's inference. And the hardware we're using was never designed for it.

The paper is by Xiaoyu Ma and David Patterson. Accepted by IEEE Computer, 2026. No hype. No product launch. Just a cold breakdown of why serving LLMs is fundamentally broken at the hardware level.

The core argument is brutal:
→ GPU FLOPS grew 80X from 2012 to 2022
→ Memory bandwidth grew only 17X in that same period
→ HBM costs per GB are going UP, not down
→ The decode phase is memory-bound, not compute-bound
→ We're building inference on chips designed for training

Here's the wildest part: OpenAI lost roughly $5B on $3.7B in revenue. The bottleneck isn't model quality. It's the cost of serving every single token to every single user. Inference is bleeding these companies dry.

And five trends are making it worse simultaneously:
→ MoE models like DeepSeek-V3 with 256 experts exploding memory
→ Reasoning models generating massive thought chains before answering
→ Multimodal inputs (image, audio, video) dwarfing text
→ Long-context windows straining KV caches
→ RAG pipelines injecting more context per request

Their four proposed hardware shifts:
→ High Bandwidth Flash: 512GB stacks at HBM-level bandwidth, 10X more memory per node
→ Processing-Near-Memory: logic dies placed next to memory, not on the same chip
→ 3D Memory-Logic Stacking: vertical connections delivering 2-3X lower power than HBM
→ Low-Latency Interconnect: fewer hops, in-network compute, SRAM packet buffers

Companies that tried SRAM-only chips like Cerebras and Groq already failed and had to add DRAM back.

This paper doesn't sell a product. It maps the entire hardware bottleneck and says: the industry is solving the wrong problem. Paper dropped January 2026. Link in the first comment 👇
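The "decode is memory-bound" claim above can be checked with simple roofline arithmetic: compare the chip's FLOPs-per-byte machine balance against the arithmetic intensity of generating one token. A back-of-the-envelope sketch; the accelerator numbers below are illustrative assumptions (roughly H100-class), not figures from the paper:

```python
# Roofline check: is single-stream LLM decode compute-bound or memory-bound?

PEAK_FLOPS = 1.0e15   # assumed ~1 PFLOP/s dense FP16 compute
MEM_BW = 3.35e12      # assumed ~3.35 TB/s HBM bandwidth

# Machine balance: FLOPs the chip can perform per byte it can move.
machine_balance = PEAK_FLOPS / MEM_BW  # ~300 FLOPs/byte

# Decode at batch size 1: each generated token streams every weight
# from memory once and does ~2 FLOPs per weight (multiply + add).
# At FP16 (2 bytes per weight), that is ~1 FLOP per byte moved.
bytes_per_param = 2.0
flops_per_param = 2.0
decode_intensity = flops_per_param / bytes_per_param

# The workload is memory-bound whenever its intensity sits below the
# machine balance -- here by more than two orders of magnitude.
print(f"machine balance: {machine_balance:.0f} FLOPs/byte")
print(f"decode intensity: {decode_intensity:.0f} FLOP/byte")
print("memory-bound" if decode_intensity < machine_balance else "compute-bound")
```

This is also why the 80X compute growth versus 17X bandwidth growth matters: the gap keeps widening the machine balance while decode's intensity stays near 1, so the extra FLOPS go unused.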
Devilal Sharma reposted
Harsh Goenka @hvgoenka
To be successful, you need to have passion for your work. - Steve Jobs
Devilal Sharma reposted
ashu garg @ashugarg
50% of AI agent systems will be running on context graphs by 2028. From Gartner. Over / under?
Devilal Sharma reposted
IIT Madras @iitmadras
At E-Summit 2026, IIT Madras Research Park and Unicorn India Ventures announced the launch of a ₹600 crore deep tech fund — ‘IIT Madras Unicorn Frontier Fund - I’, with a ₹400 crore greenshoe option, a major boost to India’s deep tech ecosystem.

The fund will back IP-led, engineering-heavy startups across Robotics, Space-Tech, Defence-Tech, Semiconductors and other strategic sectors, aligned with India’s national priorities. Targeting 25+ startups, it will focus on early-stage deep tech ventures (TRL 3–4), with an average first cheque of ₹8–10 crore and a patient capital horizon of 10+2 years.

Announced by Prof. V. Kamakoti, Director, IIT Madras, during the inauguration of E-Summit 2026, the fund underscores IIT Madras’ commitment to technological sovereignty, indigenous innovation, and scaling science-driven entrepreneurship. With IITMRP and UIV jointly shaping the portfolio, and a strong pipeline from the IIT Madras startup ecosystem, the fund aims to accelerate the journey from lab to market and build globally competitive deep-tech companies from India.

@iitmadras @ecell_iitm #IITMadras #Deeptech #ESummit2026
Devilal Sharma reposted
Peter @peter
Systems of record are the new DBs/file systems. All the context and workflow orchestration will happen on top of them, pushing them into the background with value being captured by the AI layers above.
Gokul Rajaram @gokulr

Check out where Systems of Record sit in this diagram from @OpenAI Frontier. At least 3, if not 4, layers of context and intelligence sit between them and the end business application. It's one of the clearest representations of how AI companies plan to build next-gen systems of action on top of existing SoR, and why the markets are so worried about the future of software companies.

PS: Even the color coding subtly highlights where OpenAI thinks value will accrue. The SoR layer is white and can almost be missed if one doesn't look closely!

Devilal Sharma @devilalsharma
@aaditsh All modern leaders must adopt a hands-on, builder mindset, if they haven't already. Kudos @satyanadella
Devilal Sharma reposted
Blume Ventures @BlumeVentures
Before India talked about “deep tech,” IIT Madras was already building it.

At a recent IIT Madras event, @tarunsmehta of @atherenergy heard a staggering stat: 115 startups incubated last year — almost all in deep tech, hardware, or AI. But this culture didn’t appear overnight.

Back in 2007, a group of alumni created the CFI — Center for Innovation — a massive lab filled with tools, equipment, and total creative freedom. Weekends suddenly meant: “What are you doing?” “I’m in CFI, building something.”

By 2009, directors began modelling the research culture on Stanford. More dominoes fell: juniors started attempting startups at scale, and the Engineering Design department became a quiet catalyst — with nearly two-thirds of students attempting a startup at some point.

What emerged was a generational shift: not just an institute that taught engineering, but an institute that built entrepreneurs, deep-tech products, and a culture of doing.

In S4 of The Blume Podcast, Karthik interviews Tarun and dives into how IIT Madras became the birthplace of some of India’s most ambitious hardware and AI companies — including Ather.

🎧 Watch the full episode now — a story of culture, community, and creation: youtu.be/rcQmputjYR0

@BKartRed @AshishFafadia @sajithpai @sanjaynath @arpiit @riashroff @saritaraichu @mehtaalok @mitul_am @SeekingN0rth @DeepikaDakuda @gauthamsiv @ray_elton99 @vikramg05
Devilal Sharma reposted
Shiv Aroor @ShivAroor
Tejas pilot Wg Cdr Namansh Syal’s wife Wg Cdr Afshan (a serving officer) & daughter today ahead of his last rites. Nobody braver than the families of armed forces personnel.
Devilal Sharma @devilalsharma
@RMantri There is a direct metro. Take the metro and reach peacefully; even if it doesn't save time, it's a much better mode of transport and gives you peace of mind.
Rajeev Mantri @RMantri
Bangalore traffic has become so hopeless, going from Yeshwantpur to Jayanagar (18 km) is like going from Navi Mumbai to Pune (120 km). Incorrigible, irretrievable, gargantuan disaster!
Devilal Sharma @devilalsharma
My average daily steps for the past 6 months and one year are 9.8k and 8.8k respectively. I think 10k steps is a symbolic number and there's no need to stick to it; maintaining >8k steps over a longer duration is good enough. This is just one aspect of fitness. #fitness #walking #running
Devilal Sharma @devilalsharma
Another 10K run done. Only one thing to say: strength training is the most important thing for running. #keepRunning