Suman Karumuri

1.7K posts

@mansu

Engineer. Systems thinker. All things observability. Observability is my ikigai.

San Francisco, CA · Joined February 2008
1K Following · 568 Followers
Suman Karumuri reposted
Rohan Paul @rohanpaul_ai:
Anthropic's own study proves vibe-coding and AI coding assistants harm skill building: "AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average."
Developers learning one new Python library scored 17% lower on tests when using AI. Delegating code generation to AI stops you from actually understanding the software.
Using AI did not make the programmers statistically faster at completing tasks; participants spent time writing prompts instead of actually coding.
Scores crashed below 40% when developers let AI write everything, while developers who only asked AI about simple concepts scored above 65%.
Managers should not pressure engineers to use AI for endless productivity. Forcing top speed means workers lose the ability to debug systems later.
Paper: "How AI Impacts Skill Formation" (arxiv.org/abs/2601.20245)
[image]
113 replies · 191 reposts · 1.2K likes · 99.9K views
Suman Karumuri reposted
Sherpa (YC X26) @with_sherpa:
Your website should improve itself. Sherpa gives you an entire conversion optimization team with one line of code. Here's how 🧵
15 replies · 11 reposts · 61 likes · 5.2K views
Suman Karumuri reposted
Vadim @VadimStrizheus:
Maturing is realizing that Tony Stark was a vibe-coder.
313 replies · 1.7K reposts · 24.7K likes · 829.4K views
Omar Khattab @lateinteraction:
Judging by emails, seemingly every other lab is trying to hire leads for their search teams now; it kind of feels late for that?
15 replies · 3 reposts · 169 likes · 29.3K views
Agim @CitizenAgim:
@dougboneparth When the second edible finally hits:
[image]
1 reply · 0 reposts · 24 likes · 1.7K views
Douglas A. Boneparth @dougboneparth:
At some point in your life you will eat an edible and think it’s not working. It’s very important that you do not eat a second one.
818 replies · 994 reposts · 14.4K likes · 1M views
Suman Karumuri reposted
lmeyerov @lmeyerov:
Fascinating performance design direction: GenAI specializing your DB to your workload. This is fascinating to me due to some key parallels:
* I already practice "eval-driven AI coding loops", and database performance is one of the happiest cases for this: clear conformance suites for correctness and a clear goal to hill-climb via optimization.
* Moving this idea closer to the use case and runtime, while still staying safely offline, are both cool.
* In GFQL, our open-source GPU graph query language, we have started stacking specialization layers in the engine, taking advantage of another phrasing of this: pay-as-you-go semantics. Simpler base layers can ignore the complications of fancier language features so simpler queries go faster, and fancier features get case-split so we can optimize their different scenarios. Instead of one path straddling all cases, we have specialization layers and pockets.
It used to be a LOT harder to identify the scenarios, refine the solutions, and build the safety & maintenance layers for this; now we can easily adapt big conformance suites and run all sorts of test amplifiers whenever we add a new specialization.
Overall, this paper leans into using AI to handle more complexity in, for now, narrow domains. It recognizes that databases are the happy case: easy for AI to test and verify, with a space of well-understood optimizations. These make it a lot easier for eval-driven AI coding loops to hill-climb the performance charts for arbitrary workloads. Link:
2 replies · 1 repost · 4 likes · 287 views
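The pay-as-you-go semantics described above can be sketched roughly as a dispatcher that routes queries touching no optional features to a simpler, faster path. This is a minimal hypothetical illustration, not GFQL's actual engine; all names here are invented:

```python
# Hypothetical sketch of "pay-as-you-go" query specialization: simple
# queries skip the machinery that fancier features would require.

def run_fast(query):
    # Fast path: assumes no optional features, so it only does a label scan.
    return [row for row in DATA if row["label"] == query["label"]]

def run_general(query):
    # General path: handles optional features like predicates and limits.
    rows = [r for r in DATA if r["label"] == query["label"]]
    if "where" in query:
        rows = [r for r in rows if query["where"](r)]
    if "limit" in query:
        rows = rows[: query["limit"]]
    return rows

FANCY_FEATURES = {"where", "limit"}

def run(query):
    # Specialization dispatch: pay only for the features the query uses.
    if FANCY_FEATURES.isdisjoint(query):
        return run_fast(query)
    return run_general(query)

# Toy in-memory table standing in for the database.
DATA = [{"label": "a", "n": i} for i in range(3)] + [{"label": "b", "n": 9}]
```

A real engine would case-split far more scenarios, but the shape is the same: the base layer never pays for features it cannot see.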
Suman Karumuri reposted
David Crawshaw @davidcrawshaw:
Old world software estimates: 2 weeks (actually takes 4 weeks)
New world software estimates: 2 weeks (actually takes 4 weeks, or 20 minutes)
11 replies · 27 reposts · 651 likes · 23.5K views
Suman Karumuri @mansu:
@valyala @VasiliyZukanov Abstractions save tokens. From what we have seen so far, whatever abstractions help humans also help AI save tokens. Sometimes abstractions hide optimizations, but they are useful more often than not.
0 replies · 0 reposts · 1 like · 263 views
Aliaksandr Valialkin @valyala:
@VasiliyZukanov There should be another question: what is the purpose of generating code in any programming language at all? Why doesn't AI generate an optimized executable for the target platform?
15 replies · 1 repost · 45 likes · 17.5K views
Vasiliy Zukanov @VasiliyZukanov:
Honest question: why choose Python for the backend in the age of AI? When agents write almost all the code, and human preferences become less important, wouldn't it make sense to optimize for performance, scalability and safety with languages like Java, C#, Go, etc.?
394 replies · 24 reposts · 1.3K likes · 291.9K views
Suman Karumuri reposted
antirez @antirez:
The $20 Codex plan is worth more than the $200 Claude Code plan.
283 replies · 141 reposts · 4.7K likes · 664.2K views
Ross Lazer @rosslazer:
Is anyone building observability tooling that's native to agents? From the infra ground up. Not just an MCP server or observability for agents themselves.
2 replies · 0 reposts · 1 like · 40 views
Suman Karumuri reposted
Boris Cherny @bcherny:
@EthanLipnik 👋 Early versions of Claude Code used RAG + a local vector db, but we found pretty quickly that agentic search generally works better. It is also simpler and doesn’t have the same issues around security, privacy, staleness, and reliability.
152 replies · 297 reposts · 5K likes · 1.2M views
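The contrast Cherny describes can be sketched in miniature: instead of embedding the codebase into a vector DB ahead of time, the agent issues plain-text searches over the live files on demand, so results cannot go stale. This is a hedged toy illustration with invented names, not Claude Code's actual implementation:

```python
# Toy sketch of agentic search: the model repeatedly calls a search tool
# over live file contents instead of querying a precomputed vector index.
import re

def grep(files, pattern):
    """The 'search tool' the agent would call: regex match over current files."""
    return {path: [l for l in text.splitlines() if re.search(pattern, l)]
            for path, text in files.items()
            if any(re.search(pattern, l) for l in text.splitlines())}

def agentic_search(files, patterns):
    # A real agent chooses each next pattern from prior results; here we
    # just chain a fixed list, narrowing to files that matched every round.
    candidates = files
    for p in patterns:
        hits = grep(candidates, p)
        candidates = {k: files[k] for k in hits}
    return sorted(candidates)
```

Because every query reads the files as they exist right now, there is no index to refresh and no embedded copy of the code to secure, which matches the staleness and privacy points in the tweet.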
Suman Karumuri @mansu:
@JustJake Git at scale was never a solved problem. FAANGs moved to Git replacements even before agents were a thing. The biggest issue with scaling Git was doing it in a way that maintains backwards compatibility. CI/CD infra seems to be the real bottleneck to me.
0 replies · 0 reposts · 3 likes · 2.1K views
Suman Karumuri reposted
Stuart Blitz @StuartBlitz:
VC vs. founder
[image]
160 replies · 716 reposts · 11.2K likes · 592.7K views
Suman Karumuri @mansu:
@ankurnagpal Given the number of online scams, I've found the best way for the govt to contact me is via physical mail.
0 replies · 0 reposts · 0 likes · 11 views
Ankur Nagpal @ankurnagpal:
REAL QUESTION: How bad is it if I ignore all physical mail from today onwards? Like, throw it straight in the trash. Assume anything important enough will find another channel, yes?
90 replies · 1 repost · 173 likes · 83.8K views
Suman Karumuri reposted
Marc Brooker @MarcJBrooker:
Average IOPS per gigabyte of storage (queue depth 1):
40 years ago (HDD): ~1500
Today (HDD): ~0.01
Today (SSD): ~10
Today (eMMC): ~3
Storage is much cheaper, but also much further away.
Jon Erlichman @JonErlichman:
Average cost for 1 gigabyte of storage:
45 years ago: $438,000
40 years ago: $238,000
35 years ago: $48,720
30 years ago: $5,152
25 years ago: $455
20 years ago: $5
15 years ago: $0.55
10 years ago: $0.05
5 years ago: $0.03
Today: $0.01
4 replies · 4 reposts · 84 likes · 22.6K views
Suman Karumuri reposted
Charly Wargnier @DataChaz:
A must-bookmark for vibe-coders. @YCombinator’s guide to making the most of vibe coding:
[two images]
36 replies · 184 reposts · 1.2K likes · 284K views
Suman Karumuri @mansu:
@kellabyte Disaggregated storage (separating compute from storage) doesn't have to eschew local storage. In Kaldb, we use S3 for durability but download the data locally for fast queries. With thoughtful sharding, we can spin up compute nodes on demand to serve the data.
0 replies · 0 reposts · 0 likes · 38 views
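The durability-plus-local-cache pattern described above can be sketched as follows. This is a minimal hypothetical illustration of the idea (object storage simulated with a directory), not Kaldb's actual code; all names are invented:

```python
# Sketch of disaggregated storage with a local cache: the durable copy
# lives in an object store, a node hydrates a local copy once, and all
# queries then scan local disk.
import os
import shutil

class Chunk:
    """One queryable unit of data, durable in a (here, simulated) object store."""

    def __init__(self, remote_path):
        self.remote = remote_path  # stands in for an S3 object key
        self.local = None          # filled in on first query

    def hydrate(self, cache_dir):
        # Download once from durable storage; later queries hit local disk.
        if self.local is None:
            os.makedirs(cache_dir, exist_ok=True)
            self.local = os.path.join(cache_dir, os.path.basename(self.remote))
            shutil.copy(self.remote, self.local)
        return self.local

    def query(self, needle, cache_dir):
        # Fast path: scan the locally cached copy, never the remote store.
        with open(self.hydrate(cache_dir)) as f:
            return [line.rstrip("\n") for line in f if needle in line]
```

The sharding point from the tweet corresponds to assigning chunks to nodes: because the durable copy is in S3, any fresh node can hydrate any chunk on demand.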
Kelly Sommers @kellabyte:
Not all customers need the advantages (with the disadvantages) of remote storage. Some people need the instant spin-up and scalability of compute and all the cool features remote storage enables. Some people benefit their customers way more with p99s being lower 24/7.
1 reply · 0 reposts · 16 likes · 1.4K views
Suman Karumuri reposted
Corey Quinn @QuinnyPig:
This killed me.
[image]
109 replies · 914 reposts · 6.2K likes · 288.2K views