Vir

4.3K posts


@vk01

“Enough words have been exchanged; now at last let me see some deeds!” - G investor, cofounder babajob, former hrtech exec.

Global · Joined February 2009
3.5K Following · 1K Followers
Vir reposted
Kabir Taneja@KabirTaneja·
Whenever the Middle East is in peril, it's always good to remember how an Indian president and a Bengali accent once almost led to a diplomatic crisis…
[image]
30 replies · 522 reposts · 5.3K likes · 169.6K views
Vir reposted
Kaito | 海斗@_kaitodev·
5 minutes ago, @karpathy just dropped karpathy/jobs! he scraped every job in the US economy (342 occupations from BLS), scored each one's AI exposure 0-10 using an LLM, and visualized it as a treemap. if your whole job happens on a screen, you're cooked. average score across all jobs is 5.3/10. software devs: 8-9. roofers: 0-1. medical transcriptionists: 10/10 💀 karpathy.ai/jobs
[image]
976 replies · 1.8K reposts · 12.2K likes · 3.6M views
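The aggregation step the tweet describes (LLM-assigned exposure scores per BLS occupation, then summarized) can be sketched in a few lines. This is a minimal illustration, not the actual karpathy/jobs code: the three scored occupations the tweet quotes are kept, the other entries and their scores are purely hypothetical, and the LLM scoring itself is assumed to have already happened.

```python
# Hypothetical sample of LLM-assigned "AI exposure" scores, 0 (hands-on)
# to 10 (fully on-screen). Only the first three scores echo the tweet;
# the last two occupations and all exact values are illustrative.
scores = {
    "Software developers": 8.5,
    "Roofers": 0.5,
    "Medical transcriptionists": 10.0,
    "Registered nurses": 3.0,   # hypothetical
    "Truck drivers": 2.0,       # hypothetical
}

def average_exposure(scores: dict[str, float]) -> float:
    """Mean AI-exposure score across all occupations."""
    return sum(scores.values()) / len(scores)

def most_exposed(scores: dict[str, float]) -> str:
    """Occupation with the highest exposure score."""
    return max(scores, key=scores.get)

print(f"average: {average_exposure(scores):.1f}/10")  # 4.8/10 for this sample
print(f"most exposed: {most_exposed(scores)}")
```

With real data the same reduction runs over all 342 occupations; the treemap is just this dictionary rendered with area proportional to employment.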
Vir reposted
Madhav Chanchani@madhavchanchani·
India seems to be largely mirroring the US in terms of AI platform market share. But it’s interesting to see the relatively strong market share of Google’s Gemini in South Korea and Japan.
[image]
1 reply · 3 reposts · 19 likes · 1.6K views
Vir reposted
Andrej Karpathy@karpathy·
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)
[image]
1.1K replies · 3.7K reposts · 28.3K likes · 10.9M views
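The loop the tweet describes is a hill climb: propose an edit to the training script, run a fixed-length training run, keep (commit) the change only if validation loss improves. A toy sketch under loud assumptions: `propose_edit` here is a random stand-in for the LLM agent rewriting train.py, and the git commit is left as a comment rather than executed.

```python
# Toy hill-climb sketch of the "autoresearch" loop. `propose_edit` is a
# hypothetical stand-in: in the real repo an LLM agent edits the training
# script, and a full 5-minute training run produces the candidate loss.
import random

def propose_edit(best_loss: float) -> float:
    """Stand-in for one agent edit + training run; returns candidate val loss."""
    return best_loss + random.uniform(-0.05, 0.02)

def autoresearch_loop(steps: int, start_loss: float) -> float:
    best = start_loss
    for _ in range(steps):
        candidate = propose_edit(best)
        if candidate < best:   # keep only improvements
            best = candidate
            # real setup: commit the improved script on a feature branch,
            # e.g. `git commit -am "val loss {best:.4f}"`
    return best

print(autoresearch_loop(20, 3.0))
```

Each accepted candidate corresponds to one git commit on the feature branch, so the commit history is exactly the sequence of validation-loss improvements.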
Vir reposted
Rohan Paul@rohanpaul_ai·
Citadel Securities published this graph showing a strange phenomenon: job postings for software engineers are actually seeing a massive spike. A classic example of the Jevons paradox. When AI makes coding cheaper, companies may actually need a lot more software engineers, not fewer. When software is cheaper to build, companies naturally want to build a lot more of it. Businesses are now putting software into industries and tools where it was simply too expensive before.
Chart from citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/
[image]
428 replies · 1.3K reposts · 9.9K likes · 2M views
Vir reposted
Akshay Kothari@akothari·
To my fellow founders and CEOs, who keep saying “nobody is going to vibe code a CRM or ERP or …,” sharing a few thoughts:
1. You’re right that most companies will not vibe code their system of record. Some startups will experiment (remember Klarna?), but larger enterprises will continue to value secure, reliable systems of record. That’s not the real shift, though.
2. Businesses of every size increasingly want to operate in an AI-native world where their tech stack seamlessly works with agents. Why? The company that can spin up digital workers at scale will run circles around the one that cannot.
3. So the question is whether your software product can exist in this agentic ecosystem. Is it open and interoperable? Can it plug into the systems being built around it? If it’s closed, customers will eventually reconsider (see point above!).
4. Opening up will put pressure on seat-based pricing, especially when agents can query data and execute workflows without needing to buy seats for every human. This is both a crisis and an opportunity.
5. Instead of just being a place where data is stored, your product becomes a highly valuable data and context node for real work happening across humans and agents. In many cases, you may be able to deliver this work directly to your customers. Real opportunity to sell work, not software!
In summary, if you do nothing, you risk drifting your company towards irrelevance. If you act, you're going to affect your own business model. In moments like this, the only path forward is to lean in and be willing to disrupt yourself. The agentic future is coming either way.
52 replies · 52 reposts · 699 likes · 92.7K views
Vir reposted
Pat Grady@gradypb·
A lot of people have given up on application layer software. FWIW, our partner @Konstantine and I still love the stuff! Not indiscriminately - there’s a massive gulf between the winners and the losers. But overall, we expect software market cap to grow tremendously over the next decade.
Konstantine Buhler@Konstantine

x.com/i/article/2027…

8 replies · 13 reposts · 219 likes · 61.7K views
Vir reposted
Madhav Chanchani@madhavchanchani·
"We would need roughly 1000x more compute for the unlikely hypothetical scenario described by Citrini to be remotely possible, and the time it takes us to get there will give humans time to adjust and maximize the many potential benefits of AI. It took over 50 years after James Watt invented the rotary steam engine for locomotives to broadly replace horses."
Gavin Baker@GavinSBaker

x.com/i/article/2026…

1 reply · 1 repost · 6 likes · 1.8K views
Vir reposted
Hunter Horsley@HHorsley·
"In our view, agents will most likely soon be responsible for most internet transactions, and we will need blockchains that support more than one million, or even one billion, transactions per second." Agents are coming to crypto. Take Stripe's word for it.
Stripe@stripe

x.com/i/article/2026…

81 replies · 123 reposts · 954 likes · 214.8K views
Vir reposted
Bryan Johnson@bryan_johnson·
I did a 40 hr and then a 70 hr social media fast. I’ve come to believe that social media is pollution. Not a vice or guilty pleasure. It’s closer to water toxins, air pollution and microplastics.

Social media has been on my mind because I can feel how bad it is for me. For my health and agency. I am a professional rejuvenation athlete. For five years, I’ve engineered my life around biological renewal and the elimination of decay. After hundreds of experiments across food, sleep, exercise, therapies, and toxins, I’ve developed both data and intuition about what strengthens or degrades my system.

I can viscerally feel that social media is bad for me. It erodes my autonomy and increases cognitive entropy. Like other toxins, it accumulates. You can’t unsee or unfeel what you’ve consumed. It settles into mental tissue like heavy metals, producing chronic low-grade inflammation. Evidence suggests even after you stop scrolling, attentional fragmentation and emotional priming persist. Your thoughts begin to mirror the algorithm’s incentives. Independent cognition quietly erodes and you don’t notice the loss. Time away and getting lost in deep focus is the only remedy.

When something erodes your agency, the rational response is elimination. The problem is, elimination isn’t realistic. “Just put the phone down” is as practical as telling someone in 19th century London to stop breathing coal smoke. You need to know what’s happening in the world, be in touch with your friends and be part of the tribe. That necessity is what allows companies to harvest your emotions, intellect and time for their profit. You are the raw material they exploit. Then in an ironic twist, the system gets you to exploit yourself by engineering an environment where it takes more effort to stop than to continue scrolling. Pollution exposure by default.

What specifically makes social media toxic is that value and poison are inseparable by design. You go to hear from friends and you leave an hour later absorbed in outrage that serves no biological interest of yours. The water is real. The lead is in the pipes.

The performance metrics (likes, views, etc.) bleed you of independent thought. They create quantified social proof, triggering ancient hierarchy reflexes. You no longer evaluate signal from noise; the engagement metrics do it for you.

Like all toxins, the damage is cumulative. We live inside the exposure long enough that it feels normal. The 40 and 70 hour social media fasts gave me just enough separation to feel and diagnose the poison. The obviousness of it feels like when I went to India and saw their humanitarian crisis of air pollution, which no one sees anymore.

So what do we do? Neither platforms nor individuals are likely to change on their own. AI may be the countermeasure. An AI layer between you and the feed. Filtering rage, removing vanity metrics and translating sensationalism into calm, factual language. Preserving signal and eliminating noise.

I want social media to become a longevity intervention, not a longevity threat. I never want to see the raw feed. I want an AI agent to read it for me, strip the engagement metrics that hijack my judgment, filter the rage, and return only what I actually came for.

Every generation faces its pollutants. When cholera spread through London's water, the answer wasn't telling people to drink less. It was building filtration. The same logic applies here. Best next move is to design the filter to avoid being the raw material.
564 replies · 617 reposts · 7.9K likes · 578.9K views
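The "AI layer between you and the feed" described above (strip vanity metrics, filter rage, keep signal) can be sketched as a small filter. Everything here is illustrative: the keyword stub stands in for an LLM tone classifier, and the post/feed shape is invented for the example.

```python
# Toy sketch of a feed-filtering layer: strip engagement metrics and drop
# posts classified as outrage bait. `is_rage_bait` is a hypothetical keyword
# stub standing in for an LLM-based tone classifier.
RAGE_WORDS = {"outrage", "disgrace", "destroyed"}  # illustrative only

def is_rage_bait(text: str) -> bool:
    """Stub for an LLM tone classifier."""
    return any(w in text.lower() for w in RAGE_WORDS)

def filter_feed(posts: list[dict]) -> list[str]:
    """Return post texts only: metrics stripped, rage bait removed."""
    return [p["text"] for p in posts if not is_rage_bait(p["text"])]

feed = [
    {"text": "Trip photos from the coast", "likes": 120},
    {"text": "This is an absolute disgrace!!!", "likes": 9000},
]
print(filter_feed(feed))
```

Returning only `text` is the point: the likes/views counts never reach the reader, which is exactly the "strip the engagement metrics" step.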
Vir reposted
rvivek@rvivek·
An engineer at Anthropic wrote a spec, pointed Claude at an Asana board, and went home. Claude broke the spec into tickets, spawned agents for each one, and they started building independently. When an agent is confused, it runs git-blame and messages the right engineers in Slack. By Monday the agents had finished the plugin feature. That's one example of how the best engineers are shipping software right now. Developers will soon orchestrate 50 AI agents in parallel, and the difference between a good engineer and a great one will come down to specs. You can't write a spec that holds up at that scale without genuinely understanding what you're building at a deeper level. The next-gen developer who understands the fundamentals, can architect well, and can orchestrate agents is going to be a 1000x developer!
287 replies · 532 reposts · 7.1K likes · 1.2M views
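The spec → tickets → parallel agents workflow above can be sketched with a thread pool. This is a hedged toy, not the Anthropic setup: `split_spec` naively makes one ticket per spec line, and `run_agent` is a placeholder where a real coding agent would work a ticket.

```python
# Sketch of the orchestration pattern: break a spec into tickets, run one
# "agent" per ticket concurrently. `run_agent` is a hypothetical stand-in
# for an autonomous coding agent; here it just echoes the ticket.
from concurrent.futures import ThreadPoolExecutor

def split_spec(spec: str) -> list[str]:
    """Naively turn each non-empty line of the spec into a ticket."""
    return [line.strip() for line in spec.splitlines() if line.strip()]

def run_agent(ticket: str) -> str:
    """Placeholder for an agent resolving one ticket."""
    return f"done: {ticket}"

def orchestrate(spec: str) -> list[str]:
    tickets = split_spec(spec)
    # pool.map preserves ticket order even though agents run concurrently
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(run_agent, tickets))

print(orchestrate("add plugin API\nwrite docs"))
```

The tweet's point lands in `split_spec`: at 50 parallel agents, the quality of that decomposition (the spec) is the bottleneck, not the agents.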
Vir reposted
anmol maini@anmolm_·
spot the first "indian" app on this list. also wild that chatgpt is already #19 on this list. h/t @refsrc
[image]
8 replies · 3 reposts · 46 likes · 10.6K views
Vir reposted
TBPN@tbpn·
Cloudflare CEO Matthew Prince says Googlebot sees 3.2x more of the web than OpenAI, and 4.8x more than Microsoft. And he worries this advantage will allow Google to run away with the AI race, with no one else being able to catch them. "For every one page that OpenAI sees, Google is seeing 3.2." "What I worry about is, because Google has this unique access to the web that nobody else has, the game might just go to them. Because at the end of the day, whoever has the most data wins in the era of AI."
25 replies · 48 reposts · 506 likes · 83.4K views
Vir reposted
Anthropic@AnthropicAI·
Software engineering makes up ~50% of agentic tool calls on our API, but we see emerging use in other industries. As the frontier of risk and autonomy expands, post-deployment monitoring becomes essential. We encourage other model developers to extend this research.
[image]
139 replies · 337 reposts · 3K likes · 1.9M views
Vir reposted
Neil Sethi@neilksethi·
Goldman: hyperscalers' capex spending is now on track to account for 92% of cash flows from operations this year. If realized, this level of capex spending relative to cash flows would exceed the intensity of investment from S&P 500 technology companies during the late 1990s, explaining the recent turn of the US tech mega-caps to debt markets.
[image]
Neil Sethi@neilksethi

Heisenberg Report: Desire to hedge against a blowup in the hyperscalers has spilled over to the ETF space as noted by JPM's Nikolaos Panigirtzoglou: “The aversion to US AI stocks [has] spilled over to credit, with short interest on LQD rising sharply YTD [and] by more than HYG,” Panigirtzoglou remarked. “This is because much of the newly-issued AI debt that caused indigestion in credit markets belongs to the high-grade universe.” [note the two different scales but now LQD short interest is nearly on par with HYG (22% versus 29%)].

2 replies · 47 reposts · 128 likes · 28.9K views