Mithilesh Said

208 posts

@MithileshSaid

Joined August 2011
300 Following · 65 Followers
Mithilesh Said@MithileshSaid·
@lydiahallie There seems to be a new issue in v2.1.89 where occasionally the conversation vanishes from the UI. Doesn't terminate or break. Just can't see previous interactions of that particular session. Checked the session's jsonl file and it hasn't undergone compaction for sure.
[image attached]
Lydia Hallie ✨@lydiahallie·
Quick update (busy day!). We shipped some fixes on the Claude Code side that should help. We're still looking at what else we can do from here. More soon, appreciate your patience 🙏
Lydia Hallie ✨@lydiahallie·
We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating, will share more when we have an update!
Mithilesh Said@MithileshSaid·
@bcherny and CC team, not sure about the technical challenges to this within a TUI, but the message input field staying fixed at the bottom while the message content scrolls could make life easier when responding to longer outputs.
Mithilesh Said@MithileshSaid·
@paraschopra A television is a specialised device. Doesn't seem to have gone extinct.
Paras Chopra@paraschopra·
@MithileshSaid Which survived? Smartphones gulped up specialized devices in droves.
Paras Chopra@paraschopra·
People building AI wrappers for consumers should learn from what smartphones did to specialized devices like MP3 players. Consumers want convenience, so when a single device that could do multiple things came along, they adopted it with enthusiasm. The same game is likely to be played in AI for consumers. The foundational model companies will build mega-apps that do it all and offer it in a single interface. That’s the trend and it makes sense - your median consumer doesn’t chase the absolute best product; they are happy with lots of “good enough” mini use cases packed into a single package. (Enterprises do seek bespoke solutions, but there the threat from foundation model companies is that they drive down the cost of building in-house.) Net-net: AI wrappers will compete with foundation model companies. Partners will become competitors.
Mithilesh Said@MithileshSaid·
@paraschopra And if the big labs do manage to monopolise the software market, then there will be just as many "big labs" springing up as there are smartphone manufacturers.
Mithilesh Said@MithileshSaid·
@paraschopra MP3 players didn't survive smartphones. But a lot of other consumer electronics did. Even though they were all CPU wrappers. Form factor and interface matter. Same is true for software applications.
Mithilesh Said retweeted
Rahul Singh@rahul_singh07·
Shipping Speed ≠ Shipping Velocity 🚢 Most founders are in a rush to ship to production as fast as possible. Releases become the success metric. But it may be directionless sometimes. Ship features that are aligned to the direction you want to move in. Start with asking what vector you want to optimise for in your product. The goal is to ship things to get business outcomes, don’t just ship for the sake of it. When you optimise for velocity and not speed, you may feel it is slow at the start but 9 times out of 10 you will end up saving time from unnecessary iterations and actually build useful things!
Mithilesh Said@MithileshSaid·
@diptanu Using this exact flow myself. Codex tends to be a little too pedantic which makes it good for reviews and catching finer details. The only part where I step in is if I see too much time spent on inconsequential nits or something big and obvious missed by both.
Diptanu Choudhury@diptanu·
A coding workflow that is helping our team is getting Codex and Claude to have a conversation about plans.
1. Write a plan using Claude
2. Get Codex to review and criticize
3. Get Claude to review the criticism
.. and so on. It saves us a lot of time. A good example - whether to use virtio-pmem + DAX for sandbox I/O in @tensorlake. After a long session of back and forth, we decided not to go in that direction. We ran some benchmarks with FIO, and it looks like after fixing some low-hanging problems we are close to 90% of raw SSD performance.
Mithilesh Said@MithileshSaid·
@bcherny @iamjstevenson Actually the time from pasting to the image handle showing up in the TUI doesn't bother me much. Just adding a loading indicator/spinner in the interim would help by giving me a sense that the op is ongoing and not abandoned or missed.
Boris Cherny@bcherny·
@iamjstevenson We buffer the bytes to verify they’re really images. I bet we can improve this, looking
J@iamjstevenson·
Why does it take so long to paste images into Claude code? @bcherny
Mithilesh Said@MithileshSaid·
Getting quality output from AI coding agents takes deliberate thinking and structuring. That is the "vibe" to optimise for. And it actually saves you time in the longer run. This is a good read on how to think and work structurally with AI for building products.
Rahul Singh@rahul_singh07

x.com/i/article/2028…

Mithilesh Said@MithileshSaid·
@mattpocockuk Works pretty well for me. They sit in my repo, checked into VCS.
- Helps review the spec against the intent and the code against the spec.
- In cases of bugs/issues/unexpected behavior they become the first line of exploration.
- Also a good way to manage my (human) context window.
Matt Pocock@mattpocockuk·
Folks who are using ADRs (architectural decision records) with their coding agents: How is it going? What are you noticing?
Mithilesh Said@MithileshSaid·
Found this more useful & practical than most plugins and skills out there. Works really well if:
1. You use it in the pre-plan stage to define the boundaries of what needs to be built.
2. You know how to think breadth-first and want Claude to be your depth-first counterpart.
Matt Pocock@mattpocockuk

Claude Code (or Opus 4.6) feels like it asks you far fewer questions during plan mode Try: "Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one."

Mithilesh Said@MithileshSaid·
@aakashgupta Isn't code modification by the agent also a form of abstraction addition? Just the onus of maintaining those abstractions has now shifted from the builder to the user+agent. Or am I missing something here?
Aakash Gupta@aakashgupta·
Karpathy buried the most interesting observation in paragraph five and moved on. He’s talking about NanoClaw’s approach to configuration. When you run /add-telegram, the LLM doesn’t toggle a flag in a config file. It rewrites the actual source code to integrate Telegram. No if-then-else branching. No plugin registry. No config sprawl. The AI agent modifies its own codebase to become exactly what you need.

This inverts how every software project has worked for decades. Traditional software handles complexity by adding abstraction layers: config files, plugin systems, feature flags, environment variables. Each layer exists because humans can’t efficiently modify source code for every use case. But LLMs can. And when code modification is cheap, all those abstraction layers become dead weight.

OpenClaw proves the failure mode. 400,000+ lines of vibe-coded TypeScript trying to support every messaging platform, every LLM provider, every integration simultaneously. The result is a codebase nobody can audit, a skill registry that Cisco caught performing data exfiltration, and 150,000+ deployed instances that CrowdStrike just published a full security advisory on. Complexity scaled faster than any human review process could follow.

NanoClaw proves the alternative. ~500 lines of TypeScript. One messaging platform. One LLM. One database. Want something different? The LLM rewrites the code for your fork. Every user ends up with a codebase small enough to audit in eight minutes and purpose-built for exactly their use case. The bloat never accumulates because the customization happens at the code level, not the config level.

The implied new meta, as Karpathy puts it: write the most maximally forkable repo possible, then let AI fork it into whatever you need. That pattern will eat way more than personal AI agents. Every developer tool, every internal platform, every SaaS product with a sprawling settings page is a candidate.

The configuration layer was always a patch over the fact that modifying source code was expensive. That cost just dropped to near zero.
Andrej Karpathy@karpathy

Bought a new Mac mini to properly tinker with claws over the weekend. The apple store person told me they are selling like hotcakes and everyone is confused :)

I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare.

But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.

Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. I also love their approach to configurability - it's not done via config files it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram. I haven't come across this yet and it slightly blew my mind earlier today as a new, AI-enabled approach to preventing config mess and if-then-else monsters. Basically - the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired more exotic configuration. Very cool.

Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). There are also cloud-hosted alternatives but tbh I don't love these because it feels much harder to tinker with. In particular, local setup allows easy connection to home automation gadgets on the local network. And I don't know, there is something aesthetically pleasing about there being a physical device 'possessed' by a little ghost of a personal digital house elf. Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.

Mithilesh Said@MithileshSaid·
Working on two projects. Last commit on Project A was six days ago. Took a couple of days off and then worked only on Project B for four days. Opened A again today and felt like I came back to it after months. AI-assisted time dilation is real.
Mithilesh Said@MithileshSaid·
@dangreenheck Daily code freeze? Maybe a plugin/pre-request hook that checks current time and prevents you from adding new stuff to the codebase post a certain hour...
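The hook idea above could be sketched like this. Everything here is hypothetical: the thread doesn't show any actual hook mechanism, so the function names, the `CUTOFF_HOUR` value, and the nonzero-exit-means-reject convention are all illustrative assumptions, not a documented Claude Code API.

```python
from datetime import datetime

# Assumed daily "code freeze" hour (9 PM local) - purely an example value.
CUTOFF_HOUR = 21


def freeze_active(hour: int, cutoff: int = CUTOFF_HOUR) -> bool:
    """Return True when the daily code freeze should block new changes."""
    return hour >= cutoff


def hook_exit_code(now: datetime) -> int:
    """Exit code for a hypothetical pre-request hook script.

    Convention assumed here: a nonzero exit tells the agent runner to
    reject the incoming edit request; zero lets it through.
    """
    if freeze_active(now.hour):
        print("Code freeze in effect - no new changes until tomorrow.")
        return 1
    return 0
```

A wrapper script would then call `sys.exit(hook_exit_code(datetime.now()))` and be registered with whatever pre-request hook mechanism the agent actually exposes.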
Dan Greenheck@dangreenheck·
I think this is my biggest issue with AI right now. I’ve switched over to 100% AI coding over the last few months. Overall, the experience has been great and I’m starting to get a handle on my new workflow. While my productivity has easily 5X’d and my brain is enjoying thinking at a higher level of abstraction, the mental fatigue is real. As someone who is self-employed, it has made it incredibly difficult to draw the line at the end of the day and close the laptop. Don’t get me wrong, I already worked too much and stayed up too late before AI, but now when a feature is potentially a few prompts and 5-10 minutes away from completion, it’s so easy to say “just one more prompt.” and boom it’s 2AM. Obviously, it’s a solvable problem and on me to address, but curious how others that aren’t tied to fixed schedules deal with this?
Rohan Paul@rohanpaul_ai

A super interesting new study from Harvard Business Review. An 8-month field study at a US tech company with about 200 employees found that AI use did not shrink work, it intensified it, and made employees busier. Task expansion happened because AI filled in gaps in knowledge, so people started doing work that used to belong to other roles or would have been outsourced or deferred. That shift created extra coordination and review work for specialists, including fixing AI-assisted drafts and coaching colleagues whose work was only partly correct or complete. Boundaries blurred because starting became as easy as writing a prompt, so work slipped into lunch, meetings, and the minutes right before stepping away. Multitasking rose because people ran multiple AI threads at once and kept checking outputs, which increased attention switching and mental load. Over time, this faster rhythm raised expectations for speed through what became visible and normal, even without explicit pressure from managers.

Mithilesh Said@MithileshSaid·
The correct perspective
Haseeb >|<@hosseeb

On the one hand, AI influencers are breathlessly raving about Claude Code, Clawdbot, and Cowork. And on the other hand, most people I know—even software engineers—are despondent, overwhelmed about how everything is changing so quickly. I hear this from people early in their careers especially, a fear that everything they've learned and the skills they've gained are rapidly being devalued.

This is a mental trap. Don't fall for it. You should not just be watching from the sidelines or reading articles about "how software engineering is changing." Imagine it was 1993 and the personal computer revolution was kicking off. If you could go back in time to then, what should you have done? The answer: try everything. Buy a PC. Learn how to touch type. Figure out what the Internet is. Imbibe it all. Don't wait until it becomes a job requirement.

That's exactly what you should do with AI. Try everything. Try Claude Code, try Clawdbot, try the Excel integrations, Veo, everything you can get your hands on. Learn what it's doing. Build your intuitions. Be one step ahead of it. Evolve alongside it. Don't lose your curiosity or get swallowed by anxiety or let yourself be convinced that you'll learn it when you have to. Think deeply about how AI will change the things around you—not society, that's too hard to project—but how it will change your job, your personal life, your immediate environment.

No matter how old you are or young you are, no matter what stage of your career you are in, we are all going through the biggest technological change of the last 100 years, and we're going through it together. Nobody has the answers. It's obvious that so much is going to change, but nobody is going to figure it out before you do if you choose to stay at the frontier. So don't hide from it. Sit at the front of the class. Pay close attention. And be grateful that it's never been easier to stay at the frontier of the most important technology change of our lifetimes.

Mithilesh Said@MithileshSaid·
@MohapatraHemant Amount of work getting done has gone up, but the amount of work planned has gone up proportionally too (more ambition now) causing that constant feeling of not doing enough. Done-Planned ratio being the same causes the feeling of no productivity gains in my mind. Needs rewiring.
Hemant Mohapatra@MohapatraHemant·
This resonates, but what we are also seeing is that productivity needs to be defined better. In terms of raw lines of code, it's scaled multifold, but engineers aren't feeling like they are suddenly 2-3x more useful. Often, pushing code out faster allows the co to scale faster and bump into new problems sooner, and there is more mental fragmentation as a result; the perception of productivity goes down because you're now making some progress on 10 things, vs a lot of progress on just one. I feel it in my own workflows - I'm able to do most things that took weeks (procrastination included, granted) in hours, but I really don't think I'm feeling a commensurate sense of productivity / achievement.
martin_casado@martin_casado

I work with multiple companies where nearly all code is AI generated now. However, productivity has probably only increased 20-30%. Why? I suspect because writing code is really running code. Changes are the result of business learnings. Or operational learnings. For mature companies, the majority of PRs are sub 10 lines codifying these learnings. AI clearly helps here (e.g. debugging, running tests, building tools) but less so. Operations and business learnings are workload and company specific. Until AI can perfectly predict what the market needs, or how a system will be used, this bottleneck will exist.

Mithilesh Said@MithileshSaid·
@dileepthoutam @mehulmpt That's a fair point. But if you are technically inclined, the LLM agent can accelerate your learning by allowing quick experiments with different approaches as you (re)build something. I am having fun cross-breeding ideas from several OSS projects in my own impl, just for curiosity
Wruce Bayne@wruce_bayn·
the point of rebuilding what already existed was to learn it the hard way, and maybe twist it a little to your own taste. what is the point if you are just watching claude build it? use llm agents at work/professionally, and for personal projects almost always prefer to shoot yourself in the foot over using LLMs
Mehul Mohan@mehulmpt·
this is so funny when you think about it in the context of what zero-programming-knowledge vibe coders are feeling right now. they would feel the need to rebuild everything, it would *almost* work, they would be proud - but the majority of their software would go to trash after a few weeks, and they would still use the battle-tested software that has always existed
Michael Truell@mntruell

We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week. It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM. It *kind of* works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly.

Mithilesh Said@MithileshSaid·
@frankdilo Do some basic vetting of attitude and slope of capability. Then do a week-long pair-programming session on a real problem for your business. Keep a throwaway AI coding agent subscription handy for the pair programming.
Francesco Di Lorenzo@frankdilo·
We'll be hiring some new software engineers in 2026 and it's crazy to think how much has changed since the last time we hired, 1.5 years ago. What does a modern interview process even look like now? Take-home assignments? AI can solve them. Live coding? Doesn't reveal much. How are forward-looking companies handling it?