Andrew Piper

15.5K posts

@_akpiper

Using #AI and #NLP to study storytelling at McGillU. Director of .txtlab and author of the forthcoming book, Why You Should Read More Fiction.

Montreal, QC · Joined March 2012
2.7K Following · 5.7K Followers
Pinned Tweet
Andrew Piper@_akpiper·
Excited to announce I'm part of a team that won a new Schmidt Sciences grant for the Humanities and AI. Our project: "AI for Historical and Cultural Reasoning." Follow for updates! schmidtsciences.org/humanities-and…
[image]
Andrew Piper@_akpiper·
"There is a third customer cohort that is underserved. It’s not developers…not business professionals…it’s builders. Everyone can build now. It’s marketing folks vibe coding. Legal folks building skills. Finance expert side projects. This is a really undertapped customer base."
Allie K. Miller@alliekmiller

Yesterday, I met with Anthropic and OpenAI and Google. (Separately, of course.) And while the conversations were largely confidential, I do want to share some aggregated reflections on the day as well as general SF takeaways. ⬇️

1) Competitive advantage as a solo practitioner really does come from taking action and finding an area with a bit of friction and doubling down. Ex: memory management right now isn’t perfect, but allocating an hour to improving that system gives you a ton of leverage over others.

2) SF continues to be the number one place for AI work. I know that’s not surprising. I would put New York at a healthy second place. SF tends to be more about crazy agent experiments for the thrill of capability and discovery, and NYC tends to be more about kinda crazy agent experiments to find new ways to make money. Not saying either is better. But I met several people renting two apartments to straddle these worlds. You want the frontier of SF and the enterprise insights of NYC. It’s one reason I travel between them so much.

3) All AI labs want to hear more from people. All of them. What are you using it for, what do you like, what do you hate, what do you need. Users have a TON of power over the direction of these tools. Keep testing and tweeting at them!!

4) There is very clearly a third customer cohort that is bubbling and underserved. It’s not developers…it’s not the business professional basic users…it’s builders. Everyone can build now. It’s marketing and sales folks vibe coding. It’s legal folks building complex skills. It’s a finance expert building a side project. This is a really undertapped customer base. They feel the Cursors of the world are too complex and the doc summarization tools of the world are too basic.

5) Not sure if it was just sample size, but far fewer people were wearing tech gear compared to when I lived in SF. Everyone was still dressed casually, but I used to see Splunk and Optimizely and Slack and VC gear everywhere. People seem more in stealth swag now.

6) We may soon have our world model moment.

7) Speed of iteration and shipping is faster than I’ve ever seen. We see the nonstop drops from Anthropic. We see that because of scale, providers can get a much faster feedback loop on products or features that aren’t hitting. A lot of 2025 was experimentation, but ever since the OpenClaw moment over the holidays, the releases from all three labs have been more concentrated on…things that sorta look and feel like OpenClaw.

8) Small teams can pull off more than ever before. Small teams are the powerhouses of innovation right now. This means that finding new ways to share knowledge, break silos, and remove duplicate work is going to be even more important. AI agents functioning as actual teammates that support an entire system is key.

9) Build more Skills. Build better Skills.

10) Misinformation on AI tools and leaks spreads FAST. I’ve seen so many fake stories on these AI labs. Your company needs to actually TEST these tools on your actual use cases to know which models and tools are best, and you need to not make large-scale snap decisions based on a rumor of a rumor of a rumor. We will see more volatility. Plan for it.

11) You can feel the seriousness of this moment. Even during random conversations I had in line at a cafe. Lots of folks worried about job loss and lack of meaning.

12) Mac minis were sold out ;)

Andrew Piper@_akpiper·
This is great evidence of just how bad LLM-generated peer review is. But it does not follow that it is bad to use LLMs for writing. You guiding the LLM versus handing it full automation are very different scenarios.
Natasha Jaques@natashajaques

The paper I’ve been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content.

We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback, with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
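The counterfactual setup described in the thread can be sketched with a toy similarity measure. This is an illustrative reconstruction, not the paper's actual code: the draft texts are invented, and `difflib.SequenceMatcher` from the standard library stands in for whatever distance metric the authors used.

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio in [0, 1]; 1.0 means identical."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# Toy stand-ins for the dataset described in the tweet: a human draft (v1),
# the human's own revision (v2), and an LLM revision of the same v1
# given the same expert feedback.
v1 = "Social media harms focus. Students check phones in class."
human_v2 = "Social media harms focus: students often check phones in class."
llm_v2 = "Digital platforms fundamentally reshape attentional economies in pedagogy."

# How far does each revision drift from the original draft?
human_drift = 1 - similarity(v1, human_v2)
llm_drift = 1 - similarity(v1, llm_v2)

# The paper's claim, in miniature: the LLM's edit distorts the draft
# far more than the human's own revision of it does.
assert llm_drift > human_drift
```

The real study would use much richer distortion measures (e.g. semantic rather than character-level distance), but the drift comparison against the human's own v2 is the core of the counterfactual design.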

Andrew Piper reposted
Tom Toro@TTomTToro·
[image]
Andrew Piper@_akpiper·
This is really great framing for a politically / publicly valuable AI
Andy Hall@ahall_research

AI is a shitty political advisor. Every major AI company is racing to sell you a personal superintelligence that handles your investments, your medical decisions, your calendar, but ask it who to vote for and it chokes. It refuses to make concrete recommendations, summarizes candidate marketing pablum at face value, and leaves you to figure it out.

It’s totally understandable why. The liability around political recommendations is obvious. But in a world where every voter will be looking to the AI advisor for help, and where the future of self-governance depends on finding ways to harness AI, we’ll have to do better.

In a new piece today with Sho Miyazaki, we lay out four principles for what a good AI voting advisor would actually look like:

(1) Take the user's values as declared, not inferred from demographics or conversational cues
(2) Evaluate candidate claims against independent evidence instead of relaying campaign marketing
(3) Distinguish empirical disagreements from value disagreements, and be honest about which is which
(4) Tell the user plainly when it doesn't know enough to give good guidance

I also built an interactive tool where you can write your own "constitution" for an AI voting advisor and test how it changes the advice in real time. Check it out in the piece!

These are genuinely hard problems and I don't think anyone has cracked them yet. I’m hoping we can start a public conversation around what the principles for a good AI political advisor should be, and how we can achieve them in practice. For more, check out the full piece, linked below.

Andrew Piper@_akpiper·
@SoumayaKeynes fwiw I would expect AI to depress readability scores, since they measure word + sentence length and AI favours longer on both counts. Which is one reason readability stats are a bit limited: the texts may or may not actually be "clearer."
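The word- and sentence-length point can be seen directly in the Flesch Reading Ease formula, which standard readability stats build on. The coefficients below are the published Flesch values; the syllable counter is a deliberately crude vowel-run estimate, for illustration only.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count runs of vowels (illustrative only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease (higher = "more readable"):
    206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))

short = "We test this idea. It works well."
long_winded = ("We systematically investigate this hypothesis, "
               "demonstrating considerable methodological robustness.")

# Longer words and sentences mechanically depress the score,
# whether or not the prose is actually clearer.
assert flesch_reading_ease(short) > flesch_reading_ease(long_winded)
```

Since AI-edited prose tends toward longer sentences and more polysyllabic words, a formula like this will score it as less readable by construction, which is exactly the limitation noted above.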
Soumaya Keynes@SoumayaKeynes·
found something rather baffling when researching my column this week… I wanted to see if there was any evidence that AI tools were helping economists to make their research more readable. So I analysed the text of NBER working paper abstracts…
Andrew Piper reposted
Anthropic@AnthropicAI·
We invited Claude users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people responded in one week—the largest qualitative study of its kind. Read more: anthropic.com/features/81k-i…
Andrew Piper reposted
Anish Moonka@AnishA_Moonka·
Sal Khan was one of the first people on Earth to see GPT-4. OpenAI called him in the summer of 2022, months before ChatGPT existed, and showed him what was coming. He couldn’t sleep that weekend.

By March 2023, Khan Academy launched Khanmigo, an AI tutor built on GPT-4, the same day OpenAI unveiled the model to the public. They were a launch partner. While every other education company was figuring out what ChatGPT meant for them, Khan Academy had already been building for seven months.

The “obsolete” platform now has 120 million yearly learners. Khanmigo, their AI tutor, grew 731% year over year in the 2024-25 school year, reaching 2 million users. In classrooms alone, adoption went from 40,000 students to 700,000 in a single year, with projections past 1 million for 2025-26. Their teacher tools are free in over 70 countries.

In January 2026, Khan Academy signed a deal with Google to put Gemini (Google’s AI) into new Writing Coach and Reading Coach tools for middle and high schoolers. They’re now working with both OpenAI and Google.

A peer-reviewed study published in PNAS (one of the top scientific journals in the world) in January 2026, with researchers from Stanford and the University of Toronto, found that more Khan Academy usage is directly linked to higher student test scores.

Sal Khan wrote a whole book in 2024 called “Brave New Words” arguing AI would save education. Sam Altman wrote a blurb for it. His TED Talk making the same argument was one of the 10 most-watched of 2023. In October 2025, he was named TED’s “vision steward.”

Khan Academy is now the AI education company. That 731% growth happened while students spent 7.7 billion minutes learning on the platform in 2025.
Sag Harbor Capital@sagharborcap

The saddest thing about all the AI stuff is that it’s rendered the Khan Academy guy’s life’s work totally obsolete

Andrew Piper reposted
Ethan Mollick@emollick·
After using it a bit, Claude Cowork Dispatch covers 90% of what I was trying to use OpenClaw for, but feels far less likely to upload my entire drive to a malware site.
Andrew Piper reposted
Neil Renic@NC_Renic·
“we thank the reviewers for their constructive feedback”
[image]
Andrew Piper reposted
Ethan Weber@ethanjohnweber·
I made a Claude Code skill that generates conference posters 🛠️ Instead of a static PDF, it outputs a single HTML file — drag to resize columns, swap sections, adjust fonts, then give your layout back to Claude. 🔁 🔗 Skill 👉 github.com/ethanweber/pos…