Craig Tashman

1.4K posts

@CraigTashman

Founder & CEO of @LiquidTextCorp and GA Tech PhD in human-computer interaction.

New York · Joined December 2010
255 Following · 792 Followers
Craig Tashman retweeted
Brent Beshore@BrentBeshore·
"The interviewer, @DouthatNYT, broke down crying during the conversation. And when he did, Sasse laughed. Not unkindly, but the way a man laughs when the heaviness and the lightness of something are both true at the same time and he’s decided not to pretend otherwise."
Brent Beshore@BrentBeshore

x.com/i/article/2047…

Craig Tashman@CraigTashman·
Silly to pay people for commodity thought work; pay people for humanness itself
Alex Imas@alexolegimas

New essay on the economics of structural change and the post-commodity future of work.

1. Almost any question about the impact of advanced AI on the economy needs to start at the same place: what is still scarce? Answer that, and the analysis becomes pretty straightforward. This essay explores what becomes scarce if AI really can replicate most of what humans do in production, and what this means for the future of jobs.

2. My conjecture, working through the economics: labor reallocates across sectors, and the sector it reallocates to has properties that keep labor a meaningful share of the economy. Ultimately this is about the structure of demand itself. For this, we have to go back to Girard, Augustine, and Rousseau: once people's base needs are met, their preferences shift to comparative motives (e.g., status, exclusivity, social desirability). This motive is inherently non-satiated.

4. The key paper is Comin, Lashkari, and Mestieri (Econometrica 2021). As people get richer, they don't buy proportionally more of everything. They shift spending toward sectors with higher income elasticity (the mechanism is sketched just below this thread). They estimate income effects account for 75%+ of observed structural change.

5. The ironic consequence: the sector that gets automated becomes a smaller share of the economy, not a larger one. Agriculture got massively more productive, and its share of employment collapsed. Manufacturing too. The "stagnant" sectors absorb the spending and the jobs.

6. So the question is: which sectors have high income elasticity in a post-AGI world? I argue it's what I call the relational sector: categories where the human isn't just an input into production; the human is part of the value.

7. Why does the relational sector have high income elasticity? Because human desire has a mimetic, relational dimension. We don't just want things for their intrinsic properties. We want what others want, and we want it more when others can't have it. Girard, Rousseau, Augustine, and Hobbes all saw this.

8. In work with Kristóf Madarász, we showed this experimentally: WTP roughly doubles when a random subset of others is excluded from the good. And in new work with Graelin Mandel, AI involvement kills the premium. Human-made art gains 44% from exclusivity; AI-made art only 21%.

9. This all comes together for the core argument. The sector that absorbs spending as AI makes commodity production cheap is one where human provenance is part of the value, and demand for it grows faster than income. Exactly the profile that keeps labor meaningful.

10. To be clear about the claim: I'm NOT saying aggregate labor share must rise. It may fall. The claim is about sectoral composition, i.e., where expenditure and employment go once commodities get cheap, and the fact that the sector that will absorb reallocated labor maps to a substantial component of human preferences and desire.

11. If you're interested in the formal model, a linked companion technical note works out all the economics. Read the essay here: aleximas.substack.com/p/what-will-be…
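A worked version of the share mechanism in point 4, using only the textbook definition of income elasticity. This is a minimal sketch; the notation is illustrative and is not taken from the essay or from Comin-Lashkari-Mestieri's actual demand system:

```latex
% At fixed prices p_i, sector i's expenditure share out of total spending E is
%   s_i = p_i C_i / E.
% Taking logs and differentiating in ln E, with income elasticity
% eta_i = d ln C_i / d ln E, gives:
\[
  s_i = \frac{p_i C_i}{E}, \qquad
  \eta_i \equiv \frac{\partial \ln C_i}{\partial \ln E}, \qquad
  \frac{\partial \ln s_i}{\partial \ln E} = \eta_i - 1 .
\]
% As total spending E grows, expenditure shares shift toward sectors with
% eta_i > 1 and away from sectors with eta_i < 1, regardless of which
% sector's production got cheaper.
```

This is why automating a sector can shrink its share: productivity growth lowers its cost, but if its income elasticity is below 1, the freed-up spending flows to high-elasticity sectors instead.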

Craig Tashman@CraigTashman·
@gilrdb @zotero DM will be hard, but the form on the website will route to the right people. The team will be back online on Monday. I’m sorry about this; hopefully we can quickly figure out why you’re seeing this issue!
Gil@gilrdb·
@CraigTashman @zotero I did submit through the contact form on the LiquidText website. Thanks for the quick reply! When can I expect an answer? Any chance we can DM?
Gil@gilrdb·
.@CraigTashman There is an issue with PDF import using the @zotero integration that repeatedly crashes LiquidText on iPad. It is unfortunately not usable in its current state. Assistance needed.
Craig Tashman@CraigTashman·
@andy_matuschak Thanks Andy! I’m excited to see where you take this; if anyone can pull off the vision for this bazaar, it’s you
Craig Tashman@CraigTashman·
@andy_matuschak It’s a compelling vision, Andy! But I wonder if enough designers, human or AI, will be able to design consistent and cohesive experiences without the structure of modularity?
Andy Matuschak@andy_matuschak·
⭐ New talk! andymatuschak.org/tat Coding agents might help us finally break out of two cages: the app model, which traps computing in one-size-fits-all silos; and programming as a specialization, which has crowded out cultures of imagination and domain insight.
Craig Tashman retweeted
Josh Kale@JoshKale·
This is big... Anthropic just announced a model so powerful they won't release it to the public out of fear over the damage it will cause 😨

Claude Mythos Preview found thousands of zero-day exploits in every major operating system and web browser... The numbers are hard to believe:

> $50 to find a 27-year-old bug in OpenBSD, one of the most security-hardened operating systems ever built
> Under $1,000 to find AND build a fully working remote code execution exploit on FreeBSD that grants unauthenticated root access from anywhere on the internet
> Under $2,000 to chain together multiple Linux kernel vulnerabilities into a complete privilege escalation exploit

For context: these are the kinds of findings that previously required elite security researchers working for weeks. Anthropic engineers with no formal security training asked Mythos to find exploits overnight. They woke up to working code the next morning.

The results were so impressive Anthropic assembled Apple, Google, Microsoft, Amazon, NVIDIA, and seven other organizations into Project Glasswing: a $100M defensive coalition. They're not releasing this model publicly. Instead, they're racing to patch the world's infrastructure before models like this proliferate.
Anthropic@AnthropicAI

Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing

Craig Tashman@CraigTashman·
CS Lewis actually agrees. He argues that God effectively gives all of us this chance, but whether we “mature” into glorious creatures grounded in love, or hateful vain shadows grounded in nothing but ourselves, comes from the choices we make now. See “The Great Divorce.”
vitrupo@vitrupo

Nick Bostrom says we may have “100% infant mortality.” We develop for a few decades, then stagnate, then die. If full human maturity takes thousands of years, we’ve never seen a fully developed human.

Elizabeth Yin 💛@dunkhippo33·
This is what most people don't realize: you can actually build big businesses without VC dollars. Moreover, a lot of the big raises happen BECAUSE the company didn't need money:
Steph from OpenVC@StephNass

Chess․com: Revenue $100M+, VC $0
Mailchimp: Revenue $700M+, VC $0
Zoho: Revenue $1B+, VC $0
Midjourney (via @onemoremichael): Revenue $200M+, VC $0
Butcherbox (via @DanielGulati): Revenue $600M, VC $0
Dyson (via @PatrickOCR): Revenue $8B+, VC $0

Who else?

Craig Tashman@CraigTashman·
LLMs are amazing, and have sped up some of our engineering by 10x. But if you think these things are anywhere close to human thought, you need to talk to more humans!
[image attached]
Craig Tashman@CraigTashman·
@fchollet How far do you think we are from fluid intelligent AI? Do you think LLMs will be able to get us there?
François Chollet@fchollet·
When high-fluid intelligence systems start to show up, they will immediately take over the knowledge-dependent ones. Because they will be able to scale their knowledge just as well as legacy systems (knowledge gathering is the easy part), while their ability to recombine and apply that knowledge will create a performance gap that preparation alone will never bridge
Craig Tashman retweeted
François Chollet@fchollet·
People struggle to differentiate fluid intelligence from knowledge because, given enough preparation, memorized templates become a solid substitute for on-the-fly adaptation
Craig Tashman retweeted
Valerio Capraro@ValerioCapraro·
Terence Tao put it plainly: there is no evidence that LLMs exhibit genuine creativity.

Yes, they have solved some Erdős problems. But these are low-hanging fruit: questions that attracted little attention and that yield once the right existing techniques are applied. That is not creativity. That is search plus recombination.

Yes, LLM outputs can look impressive. But look at who is impressed: typically non-experts. Experts know very well that LLM performance degrades badly as you approach the frontier of human knowledge.

And this is not a temporary gap. It reflects a structural limitation. We do not fully understand human creativity, but we do know one key property: conceptual leaps, the ability to generate new representations, not just recombine existing ones. LLMs do not do this. They interpolate in representation space. They operate within existing conceptual frameworks; they do not create new ones. This is why we haven’t “yet seen them take the next step”.
[image attached]
Craig Tashman@CraigTashman·
LLMs are a distillation of human thought as it relates to the subject matter in the training set. I don’t see any reason to think they can replicate human thought more generally than that.
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

Craig Tashman@CraigTashman·
Sure, LLMs are nowhere close to the limits of human thought. But my fear is that LLMs are so good at doing our day-to-day tasks that we’ll stop doing them, stop building the deep domain expertise where our unique cognitive abilities can shine, and end up in a cognitive winter for a few generations.
François Chollet@fchollet

This is more evidence that current frontier models remain completely reliant on content-level memorization, as opposed to higher-level generalizable knowledge (such as metalearning knowledge, problem-solving strategies...)

Craig Tashman retweeted
Thariq@trq212·
Using Skills well is a skill issue. I didn't quite realize how much until I wrote this: the best ones can completely transform how your team works.
Thariq@trq212

x.com/i/article/2033…

Craig Tashman retweeted
Rohan Paul@rohanpaul_ai·
New research from Tsinghua, Peking University and other top labs taught a humanoid robot to play tennis using scattered human movement clips instead of perfect match data.

The big deal here is how the team solved the data problem for physical robots. Usually, teaching a robot to do something highly athletic like playing tennis requires perfect, continuous tracking data of professional human players. Getting that kind of flawless 3D physical data during a high-speed match is extremely difficult and expensive.

This paper bypasses that massive hurdle entirely. Instead of needing perfect full-match data, the researchers just used short, disconnected, and imperfect clips of basic human swings. The AI system uses these rough clips as a basic hint for how a swing should look, and then a physics simulator corrects the physical errors so the robot does not fall over while swinging to hit the ball.

Because they proved they can take messy, fragmented human data and turn it into a smooth, highly dynamic robot athlete, this means we can start teaching robots all sorts of complex physical tasks without needing to record perfect human demonstrations first. It severely lowers the barrier to making robots useful in fast, unpredictable physical environments.

The robot successfully tracked fast incoming balls and consistently hit them back to specific target zones while looking surprisingly natural.
Zhikai Zhang@Zhikai273

🎾Introducing LATENT: Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data Dynamic movements, agile whole-body coordination, and rapid reactions. A step toward athletic humanoid sports skills. Project: zzk273.github.io/LATENT/ Code: github.com/GalaxyGeneralR…
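One common way to realize the "rough clip as hint, simulator as corrector" recipe described above is to combine a motion-imitation reward (stay near the imperfect human reference) with a task reward (return the ball to the target zone), and let simulated physics filter out the clip's errors. A minimal hypothetical sketch under that assumption, with made-up names and weights; this is not LATENT's published objective:

```python
import numpy as np

def swing_reward(joint_pos, ref_joint_pos, ball_landing, target_zone,
                 w_imitate=0.5, w_task=0.5):
    """Toy combined reward for one control step (illustrative only).

    joint_pos / ref_joint_pos: robot vs. human-clip joint positions (radians).
    ball_landing / target_zone: 2D court coordinates (meters).
    The imitation term only hints at swing style; the physics simulator
    decides whether the resulting motion is actually feasible.
    """
    # Imitation: exponentiated tracking error against the (imperfect) clip.
    imitation = np.exp(-np.sum((joint_pos - ref_joint_pos) ** 2))
    # Task: reward landing the returned ball close to the commanded target.
    task = np.exp(-np.sum((ball_landing - target_zone) ** 2))
    return w_imitate * imitation + w_task * task

# Example: a swing that tracks the clip loosely but lands the ball on target.
r = swing_reward(np.array([0.9, -0.2, 1.1]), np.array([1.0, -0.25, 1.0]),
                 np.array([10.5, 2.0]), np.array([10.0, 2.0]))
print(f"reward = {r:.3f}")
```

The weighting is what lets imperfect clips work: the policy is never required to reproduce the clip exactly, only to trade style against the task under real physics.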
