Karina Q

2.5K posts


@karinadoteth

on chain data enjooooooyer | my views are my own | dyor nfa | data only | wagmi

Joined March 2022
1.6K Following · 1.5K Followers
Karina Q retweeted
Ejaaz (@cryptopunk7213)
Caught up with a friend who runs a 50+ person, $60M ARR company… no one's written a single line of code in the last 4 months. Every line written by AI.

His setup:
- Claude Code + GPT Codex
- Conductor (orchestration layer for coding agents)
- Claude skills, memory docs
- Slack, git integrations

How it works:
- uses Claude to draft a product/feature spec
- feeds that into a Claude skill that generates task items, spins up tickets, and assigns them to the relevant engineers
- Claude checks Slack, Telegram, and git hourly and sends him a progress report
- a team member describes how a feature should be built, tags Conductor, and it manages multiple code branches, contexts, and parallel agents to build it
- an engineer reviews the code; 90% of the time it's great
- pushes to prod

Apparently the team's productivity has 10x'd. He says the moat has moved from writing code to managing multiple agents, and that it "feels like a game, it's fun again". Fucking awesome.
22 replies · 6 reposts · 103 likes · 6.9K views
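The hourly "check git and send a progress report" step described above can be sketched roughly as follows. This is a minimal illustration, not the actual setup from the thread; the `Commit` record and `build_progress_report` helper are hypothetical names, and a real version would pull from the git/Slack APIs and hand the digest to an LLM.

```python
# Hypothetical sketch of the hourly progress-report step.
# All names here are illustrative, not the setup described in the tweet.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Commit:
    author: str
    message: str
    ts: datetime


def build_progress_report(commits: list[Commit]) -> str:
    """Group the last hour's commits by author into a short digest
    that could be posted to Slack or handed to an LLM as-is."""
    by_author: dict[str, list[str]] = {}
    for c in commits:
        by_author.setdefault(c.author, []).append(c.message)
    lines = [f"Progress report ({len(commits)} commits):"]
    for author, msgs in sorted(by_author.items()):
        lines.append(f"- {author}: " + "; ".join(msgs))
    return "\n".join(lines)
```

On a real cron, the input list would come from something like `git log --since="1 hour ago"` and the output would go to a Slack webhook.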
Karina Q retweeted
Corey Ganim (@coreyganim)
This is the most detailed "AI agents in production" breakdown I've seen. Here's the business hiding inside it:

Eric runs 5 agents, 48 daily crons, and a unified vector DB that ingests everything every 15 minutes. The system sources candidates overnight, runs outbound campaigns, and briefs him before he opens his laptop.

The product angle he buries at the end:
1. Build the operating system for your own company first
2. Prove it works (his content averages 120K views per article now)
3. Deploy the same system for clients who don't want to spend months building it
4. Your internal implementation IS the product. Your compounded data IS the moat.

The old agency model: sell services, deliver services. The new model: sell the intelligence layer that makes services 10x more effective.
ericosiu (@ericosiu)

x.com/i/article/2043…

16 replies · 44 reposts · 345 likes · 89.6K views
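The "unified vector DB that ingests everything every 15 minutes" idea can be sketched with a toy in-memory store. The bag-of-words `embed` below is a stand-in for a real embedding model, and every name is illustrative, not Eric's actual stack; a production version would call an embedding API and a real vector database.

```python
# Toy in-memory vector store: ingest text, search by cosine similarity.
# The embed() function is a bag-of-words stand-in for a learned embedding.
import math
from collections import Counter


def embed(text: str) -> dict[str, float]:
    # Stand-in embedding: token counts. Real systems use a model here.
    return dict(Counter(text.lower().split()))


def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class VectorStore:
    def __init__(self) -> None:
        self.docs: list[tuple[str, dict[str, float]]] = []

    def ingest(self, texts: list[str]) -> None:
        # In the setup described above, this would run on a 15-minute cron,
        # pulling from email, Slack, CRM exports, and so on.
        self.docs.extend((t, embed(t)) for t in texts)

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:k]]
```

The point of the unified store is that every agent queries the same compounding index instead of its own silo.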
Karina Q retweeted
Michał Podlewski (@trajektoriePL)
Terence Tao proposes what he calls a "Copernican view of intelligence". Instead of buying into the common, one-dimensional narrative that artificial intelligence will simply evolve from "subhuman" to "superhuman" and ultimately make humanity entirely redundant, Tao urges us to look at the bigger picture.

Much like the Copernican revolution proved the Earth is not the center of the universe, Tao suggests we need to realize that human intelligence isn't the only, or necessarily the highest, form of intellect. Historically, we have treated other forms of storing or creating knowledge, like animals, books, and computers, as secondary. However, we actually exist within a much richer universe of intelligence.

Both human intelligence and computer intelligence possess their own distinct strengths and weaknesses. The true potential lies not in viewing them as direct competitors, but in focusing on collaboration. By working together, humans and computers can achieve things that neither could accomplish alone, which requires us to think in much wider terms than just what humans or computers can do on their own.
124 replies · 529 reposts · 3.7K likes · 490.1K views
Karina Q retweeted
魅力無錫 Wuxi China (@WuxiCity)
Chinese heartthrob and actor Zhang Linghe invites you to Wuxi — his hometown in Jiangsu province. Wuxi, a scenic city, is a foodie's paradise, famous for its sweet-flavored delicacies like soup dumplings, Taihu Lake "Three Whites", and melt-in-your-mouth sweet and sour ribs. @zlhwalnut1230 @linghearchive @DailyZLH @ZhangLinghe_TH
6 replies · 87 reposts · 389 likes · 43.8K views
Karina Q (@karinadoteth)
Nobody can ever malign Wuxi men again now bc Zhang Linghe is from there 😍😍😍
0 replies · 0 reposts · 0 likes · 109 views
Karina Q retweeted
God of Prompt (@godofprompt)
This is the single best framework I've seen for understanding AI.

Terence Tao, arguably the smartest mathematician alive, just dropped a paper with Tanya Klowden on arXiv called "Mathematical Methods and Human Thought in the Age of AI." The core idea: a "Copernican View of Intelligence."

Stop thinking of AI on a line from "dumb" to "superhuman." That's the wrong axis entirely. AI excels at BREADTH. Humans excel at DEPTH. Tao himself said AI has made his papers "richer and broader, but not necessarily deeper."

That's not a limitation. That's the entire playbook. Stop trying to replace yourself with AI. Start using it to cover the 90% of surface area your brain physically can't.

The people who get this are already 10x more productive. The rest are still arguing about whether AI is "smart enough." Reframe your point of view from "smarter" to "different". Human + AI > either alone. The math on that has never been clearer.
God of Prompt tweet media
Michał Podlewski (@trajektoriePL)

(quoted tweet, reproduced above)

46 replies · 223 reposts · 1.2K likes · 141.5K views
Karina Q retweeted
Blair Dulder CPA™ 🧃 (@runaway_vol)
vibecoding is fentanyl for people with an iq over 120
224 replies · 600 reposts · 9.4K likes · 452.5K views
Karina Q retweeted
Alex Atallah (@alexatallah)
Some upgrades are coming soon to the @OpenRouter Auto Router, to help you pick the lowest-cost model that fits your workload. What would you like to see it do? DM me your feedback and an example prompt. openrouter.ai/openrouter/auto
Alex Atallah tweet media
3 replies · 8 reposts · 74 likes · 9.2K views
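For context, the Auto Router is used by sending an ordinary OpenAI-style chat request with `model` set to `openrouter/auto`, and the router picks a concrete model for you. The sketch below only builds the request (actually sending it needs a real API key); the helper function name is mine, not OpenRouter's.

```python
# Minimal sketch of an OpenRouter Auto Router request. The endpoint and
# payload shape follow OpenRouter's OpenAI-compatible chat API; the helper
# itself (auto_router_request) is an illustrative name, not a library call.
import json


def auto_router_request(prompt: str, api_key: str) -> tuple[str, dict, bytes]:
    """Return the (url, headers, body) for a chat completion that lets
    the Auto Router choose the underlying model."""
    url = "https://openrouter.ai/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "openrouter/auto",  # router selects a concrete model
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body
```

Any HTTP client (or the OpenAI SDK pointed at OpenRouter's base URL) can send this request as-is.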
Karina Q retweeted
Aakash Gupta (@aakashgupta)
Executive compression is happening faster than anyone expected.

Workday's CTO took "Member of Technical Staff" at Anthropic. Atlassian's CTO took "Business Lead" at Stripe. Mike Krieger went from CPO to MTS on the Claude Code team: an Instagram cofounder voluntarily dropping "Chief" from his title to write code. Four senior executives in six months all made the same bet: get closer to the work.

AI tools are collapsing the ratio of managers to makers. One senior IC with Claude Code and deep domain knowledge is starting to outproduce a 15-person team with three layers of oversight. The management layer that made sense when shipping software required 200-person orgs is compressing fast.

When that happens, the value of "Chief" anything drops and the value of "person who actually builds" spikes. A CTO managing 500 engineers is less differentiated than an engineer who can ship with frontier models.

The smartest executives in tech are dismantling the ladder and moving to the floor where the work happens. The org chart of 2030 is going to look nothing like today, and these moves are the first draft.
mandy (@mandyxyz123)

First, Workday CTO becomes a software engineer at Anthropic. Now, Atlassian CTO is a business lead at Stripe? What? Are they that bearish on their own software companies?

38 replies · 129 reposts · 1K likes · 351.1K views
Karina Q retweeted
Lance Martin (@RLanceMartin)
i co-wrote the Anthropic engineering blog on Claude Managed Agents, and wanted to share some thoughts on agent harnesses + infrastructure for long-horizon tasks ... 🧵 anthropic.com/engineering/ma…
Lance Martin tweet media
31 replies · 113 reposts · 982 likes · 90.5K views
Karina Q retweeted
Tommy (@Shaughnessy119)
Problems are never that serious, and your dreams can always be bigger. If you learn and move on from problems faster and increase the scope of your dreams, that's the place where you want to be. Some people dwell on problems and dream small: worst of both worlds.
2 replies · 1 repost · 14 likes · 828 views
Karina Q retweeted
Thariq (@trq212)
I think "prompting" will keep being an incredibly high-leverage skill, like writing or public speaking. It is the skill of talking to agents, mediated by the harness. My main goal is to grow the bandwidth between humans and agents, to help us understand each other better.
333 replies · 151 reposts · 2.8K likes · 168.5K views
Karina Q retweeted
GREG ISENBERG (@gregisenberg)
The best thing ANY engineer/programmer can do right now is learn how to become a top 1% marketer.

For 20 years, the engineer was the most important person in the room. They had the rarest skill. They could build the thing. Everyone else had to wait for them. Claude Mythos and the models coming after it are ending that era.

The new scarcity is the person who can look at a human being and understand exactly what they need to hear to take action. What makes someone click buy at 11pm. What makes someone tell a friend. What makes a stranger feel like a product was built specifically for them. That is a completely different muscle than writing code or architecting systems.

Study why TBPN built a brand Silicon Valley is obsessed with. Learn why the headline is 80 cents of every dollar. Figure out why one subject line gets 40% open rates and the next one gets ignored.

Most engineers have never trained this muscle. They are world class at clearly defined problems. Marketing is the opposite: fuzzy, emotional, irrational. The engineer who trains it becomes the most dangerous person in any room.

The CTO/CMO combo is the most valuable human in tech right now, and almost nobody has both. Computer science school in 2026 should basically be part technical knowledge, part marketing knowledge. I really think that.

The best thing any engineer can do right now is learn how to become a top 1% marketer.
226 replies · 122 reposts · 1.4K likes · 128K views
Karina Q retweeted
loaf (@lordOfAFew)
So Mythos is a Blackwell-class model, an absolute monster. Vera Rubin chips are shipping in a few months, so the first of them will be live by the end of the year: 2-3x more powerful than Blackwell. Then in 2028 we will have Feynman-class models, 2-3x more powerful than Rubin. So we have a model now that companies are afraid of releasing, and over the next 2 years we will have even larger jumps. The world is sleepwalking into this future; it's going to be a very different place.
68 replies · 153 reposts · 2.3K likes · 188.7K views
Karina Q retweeted
How To AI (@HowToAI_)
🚨 MIT proved you can delete 90% of a neural network without losing accuracy.

Researchers found that inside every massive model there is a "winning ticket": a tiny subnetwork that does all the heavy lifting. They proved that if you find it and reset it to its original state, it performs just like the giant version.

But there was a catch that killed adoption instantly: you had to train the massive model first to find the ticket. Nobody wanted to train twice just to deploy once. It was a cool academic flex, but useless for production. The original 2018 paper was mind-blowing, but today, after 8 years, we finally have the silicon-level breakthrough we were waiting for: structured sparsity.

Modern GPUs (NVIDIA Ampere and later) don't just "simulate" pruning anymore. They have native support for block sparsity (2:4 patterns) built directly into the hardware. It's not theoretical; it's silicon-level acceleration. The math is terrifyingly good: a sparse network means 50% less memory bandwidth and 2x compute throughput. Real speed, zero accuracy loss.

Three things just made this production-ready in 2026:
- pruning-aware training (you train sparse from day one)
- native support in PyTorch 2.0 and the Apple Neural Engine
- the realization that AI models are heavily over-parameterized by design

Evolution over-parameterizes everything. We're finally learning how to prune. The era of bloated, inefficient models is officially over. The tooling finally caught up to the theory, and the winners are going to be the ones who stop paying for 90% of weights they don't even need. The future of AI is smaller, faster, and smarter.
How To AI tweet media
179 replies · 846 reposts · 5.6K likes · 372.2K views
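The 2:4 block-sparsity pattern mentioned above is simple to state: in every contiguous group of 4 weights, keep the 2 largest magnitudes and zero the other 2, giving a uniform 50% sparsity the tensor cores can exploit. The pure-Python sketch below only illustrates the constraint, not the hardware-accelerated path (that lives in libraries like cuSPARSELt / PyTorch's semi-structured sparsity).

```python
# Illustrative 2:4 magnitude pruning: per group of 4 weights, keep the 2
# largest |w| and zero the rest. Not the GPU path, just the pattern itself.
def prune_2_of_4(weights: list[float]) -> list[float]:
    assert len(weights) % 4 == 0, "weights must come in groups of 4"
    pruned: list[float] = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # indices of the two largest-magnitude weights in this group
        keep = sorted(range(4), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned
```

Because every group is exactly half zeros, the hardware can store 2 values plus a small index per group, which is where the bandwidth and throughput savings come from.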
Karina Q retweeted
Andrej Karpathy (@karpathy)
Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state-of-the-art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state-of-the-art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing, because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state-of-the-art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math, and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work.

It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions. TLDR: the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and, I think, slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and *at the same time* OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems.

This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
staysaasy (@staysaasy)

The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

1K replies · 2.4K reposts · 19.6K likes · 3.9M views
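The "verifiable reward" point above can be made concrete with a toy scorer: run a candidate program against test cases and return the fraction that pass. This is an illustrative sketch, not any lab's actual reward function; in real RL pipelines the candidate would be model-generated code executed in a sandbox.

```python
# Toy verifiable reward: score a candidate function by the fraction of
# (args, expected_output) test cases it passes. Crashes count as failures.
from typing import Callable, Sequence


def verifiable_reward(
    candidate: Callable,
    test_cases: Sequence[tuple[tuple, object]],
) -> float:
    """Return the pass rate of `candidate` over `test_cases` in [0, 1]."""
    passed = 0
    for args, expected in test_cases:
        try:
            if candidate(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash is just a failed test, not a scorer error
    return passed / len(test_cases)
```

This is the contrast Karpathy draws: for code you get a mechanical 0-to-1 signal like this, while for writing there is no comparably cheap, objective judge.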