Mitesh B Ashar

29.9K posts

Mitesh B Ashar

Mitesh B Ashar

@iMBA

NOT an MBA.

Kolkata, India Katılım Ağustos 2007
2.6K Takip Edilen2.8K Takipçiler
Mitesh B Ashar
Mitesh B Ashar@iMBA·
@aisauce_x @Yuchenj_UW I like the larger provenance angle here. However, these are very loose signals for provenance. FWIW, it is just commit text a human can easily place in a commit message. And also, with no standardization for this, these are flaky signals.
English
0
0
0
61
AISauce
AISauce@aisauce_x·
every claude code commit is a signal in the training data of the next model. every github repo that shows claude as co-author tells researchers which codebases AI touched. the co-author tag isn't just branding. it's provenance. and provenance is going to matter a lot as AI code becomes the majority
English
1
0
11
3.4K
Yuchen Jin
Yuchen Jin@Yuchenj_UW·
I noticed something interesting: Claude Code auto-adds itself as a co-author on every git commit. Codex doesn’t. That’s why you see Claude everywhere on GitHub, but not Codex. I wonder why OpenAI is not doing that. Feels like an obvious branding strategy OpenAI is skipping.
English
215
32
1.8K
160.5K
Mitesh B Ashar retweetledi
Paras Chopra
Paras Chopra@paraschopra·
We found a task where LLMs struggle massively! Give them a coding problem in Python and they'd work great. Give the same problem in brainfuck and zero-shot their performance is ~0% +[--------->+<]>+.++[--->++<]>+.
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

English
91
33
1K
170.7K
Mitesh B Ashar retweetledi
Zack Korman
Zack Korman@ZackKorman·
You can hide these !commands in html comments so people don't see them when reading the skill. The command executes without the AI even knowing about it.
Zack Korman tweet media
Lydia Hallie ✨@lydiahallie

if your skill depends on dynamic content, you can embed !`command` in your SKILL.md to inject shell output directly into the prompt Claude Code runs it when the skill is invoked and swaps the placeholder inline, the model only sees the result!

English
31
72
896
115K
Lakshmi Narayanan G
Lakshmi Narayanan G@_glnarayanan·
@iMBA @shantanugoel I was specifically talking from the perspective of context usage. Skills use only when they're invoked vs plugins are more actively using up context since they're auto loaded
English
1
0
0
42
Shantanu Goel
Shantanu Goel@shantanugoel·
Unpopular opinion. Most skills are useless waste of context. Especially if you stuff everything in all the time.
English
7
3
85
4.5K
Mitesh B Ashar
Mitesh B Ashar@iMBA·
The OP by @shantanugoel is more about the skill body vis-a-vis everything being dumped into it versus being distributed within for progressive loading.
English
0
0
1
7
Mitesh B Ashar
Mitesh B Ashar@iMBA·
Plugins do not directly use any context. A skill bundled in a plugin will use the same context if it is placed in the user/project scope. In either of the scenarios, frontmatter loads into context and skills become available for activation.
Lakshmi Narayanan G@_glnarayanan

@iMBA @shantanugoel I was specifically talking from the perspective of context usage. Skills use only when they're invoked vs plugins are more actively using up context since they're auto loaded

English
1
0
1
35
Lakshmi Narayanan G
Lakshmi Narayanan G@_glnarayanan·
@shantanugoel But aren't skills loaded on demand unlike plugins which are almost always loaded? That was my understanding 🤯
English
2
0
0
196
Mitesh B Ashar
@shantanugoel Yes! That is particularly very useful. I have a code review synthesizer that takes reviews from multiple CLI tools and consolidates into 1 review. Have been doing this for almost 6-7 months now.
English
0
0
0
69
Shantanu Goel
Shantanu Goel@shantanugoel·
Whenever coding in claude code, ask it to use codex to do a code review.
Shantanu Goel tweet media
English
6
2
27
3.9K
Sidu Ponnappa
Sidu Ponnappa@ponnappa·
@iMBA No idea but we can already see that we are overloading our counterparts in some instances
English
1
0
0
115
Sidu Ponnappa
Sidu Ponnappa@ponnappa·
i think an underappreciated side effect of AI is that the era of gatekeeping by human bureaucracies is coming to an end. any number of questions, clarifications and requests for docs can now be fulfilled in no time. we now process ~400 question IT compliance requests in days instead of months.
English
13
3
115
7.4K
Ajey Gore
Ajey Gore@AjeyGore·
My new job is to write md files and tell you how cool is that 😀
English
7
0
44
2.6K
Nnenna 👩🏽‍💻✨
I need to try connecting my workflow to a QA AI tool. Will do research.
English
3
0
4
430
Mitesh B Ashar
API Error: 500 { "type": "error", "error": { "type": "api_error", "message": "Internal server error" }, "request_id": "req_<redacted>" }
Dansk
0
0
0
53
Yogini Bende
Yogini Bende@hey_yogini·
Vibe coding is creating overconfident engineers. (a rant) We used to debate architecture. Tradeoffs. Patterns. We had opinions about systems, if not, we used to study them. Now we read the AI output, it looks reasonable, we ship it. Without even thinking of other options. We are losing the habit of even asking the question. System thinking is a muscle. And muscles atrophy. There is a difference between an engineer who uses AI and an engineer who has outsourced their thinking to it. Most of us cannot tell which one we have become!
English
124
108
1.1K
53.6K
Mitesh B Ashar
Why I feel that is effective: - Answering those questions triggers reasoning for LLMs. - The agent responses then keep us open to considering alternatives, choosing the paths "we" would have chosen, and not just what our agents told us.
English
0
0
0
6
Mitesh B Ashar
Examples: "Why shouldn't we run the regex extractions before those LLM ones?" -- LLM: I'll use cargo-tarpaulin. "Let's do some research. What are the gold-standard coverage tools in the Rust ecosystem? What are recent more innovative ones that cover stuff the popular ones dont?"
English
1
0
0
12