Benjamin Pring

5.5K posts

Benjamin Pring
@BenjaminPring

Vice President, Center for Artificial Intelligence and the Future of Work. Jobs for the Future.

Joined January 2013
534 Following · 2.3K Followers
Benjamin Pring
Benjamin Pring@BenjaminPring·
@kylesheldon My English Dad (born 1913) always called it soccer to distinguish it from rugger - Association Football and Rugby Football.
0 replies · 0 reposts · 1 like · 18 views
Major League Rugby
📰 America’s first-ever dedicated primetime rugby production tabs key talent for 11 broadcasts throughout the season. SUNDAY NIGHT RUGBY TALENT: bit.ly/SNR26
7 replies · 30 reposts · 162 likes · 21.5K views
Benjamin Pring
Benjamin Pring@BenjaminPring·
This is such an important example of AI in the real world.
Todd Saunders@toddsaunders

I know Silicon Valley startups don't want to hear this... But the combination of someone in the trades with deep domain expertise and Claude Code will run circles around your generic software.

I talked to Cory LaChance this morning, a mechanical engineer in industrial piping construction in Houston. He normally works with chemical plants and refineries, but now he also works with the terminal. He reached out in a DM a few days ago, and I was so fired up by his story that I asked him if we could record the conversation and share it.

He built a full application that industrial contractors are using every day. It reads piping isometric drawings and automatically extracts every weld count, every material spec, every commodity code. Work that took 10 minutes per drawing now takes 60 seconds. It can do 100 drawings in five minutes, saving days of time.

His co-workers are all mind-blown, and when he talks to them, it's like they are speaking different languages. His fabrication shop uses it daily, and he built the entire thing in 8 weeks. During those 8 weeks he also had to learn everything about Claude Code, the terminal, VS Code, everything.

My favorite quote from him: "I literally did this with zero outside help other than the AI. My favorite tools are screenshots, step-by-step instructions and asking Claude to explain things like I'm five."

Every trades worker with deep expertise and a willingness to sit down with Claude Code for a few weekends is now a potential software founder. I can't wait to meet more people like Cory.

0 replies · 0 reposts · 0 likes · 109 views
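The extraction step described above (pulling weld counts and commodity codes out of drawing text) can be sketched roughly. Everything below is a hypothetical illustration under assumed inputs: the annotation format, the "FW-"/"SW-" tag conventions, and the field names are invented for the example, not Cory's actual implementation.

```python
import re

# Hypothetical text as it might be OCR'd from a piping isometric drawing's
# bill of materials. Real drawings vary widely; this format is invented.
BOM_TEXT = """
1  PIPE, SMLS, SCH 40, A106-B        COMMODITY: PAABCD01  QTY: 12 M
2  ELBOW 90 LR, BW, A234-WPB         COMMODITY: PEEFGH02  QTY: 4 EA
FW-101  FIELD WELD  3" BW
SW-204  SHOP WELD   3" BW
FW-102  FIELD WELD  2" SW
"""

def extract_counts(text: str) -> dict:
    """Count field/shop welds and collect commodity codes from raw drawing text."""
    field_welds = re.findall(r"\bFW-\d+\b", text)      # tags like FW-101
    shop_welds = re.findall(r"\bSW-\d+\b", text)       # tags like SW-204
    commodity_codes = re.findall(r"COMMODITY:\s*(\w+)", text)
    return {
        "field_welds": len(field_welds),
        "shop_welds": len(shop_welds),
        "commodity_codes": commodity_codes,
    }

print(extract_counts(BOM_TEXT))
```

A real tool would sit behind an OCR or PDF-parsing stage; the point of the sketch is only that, once the drawing text is machine-readable, the per-drawing tallying is mechanical.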
Benjamin Pring reposted
Anish Moonka
Anish Moonka@AnishA_Moonka·
Sal Khan was one of the first people on Earth to see GPT-4. OpenAI called him in the summer of 2022, months before ChatGPT existed, and showed him what was coming. He couldn’t sleep that weekend.

By March 2023, Khan Academy launched Khanmigo, an AI tutor built on GPT-4, the same day OpenAI unveiled the model to the public. They were a launch partner. While every other education company was figuring out what ChatGPT meant for them, Khan Academy had already been building for seven months.

The “obsolete” platform now has 120 million yearly learners. Khanmigo, their AI tutor, grew 731% year over year in the 2024-25 school year, reaching 2 million users. In classrooms alone, adoption went from 40,000 students to 700,000 in a single year, with projections past 1 million for 2025-26. Their teacher tools are free in over 70 countries.

In January 2026, Khan Academy signed a deal with Google to put Gemini (Google’s AI) into new Writing Coach and Reading Coach tools for middle and high schoolers. They’re now working with both OpenAI and Google.

A peer-reviewed study published in PNAS (one of the top scientific journals in the world) in January 2026, with researchers from Stanford and the University of Toronto, found that more Khan Academy usage is directly linked to higher student test scores.

Sal Khan wrote a whole book in 2024 called “Brave New Words” arguing AI would save education. Sam Altman wrote a blurb for it. His TED Talk making the same argument was one of the 10 most-watched of 2023. In October 2025, he was named TED’s “vision steward.”

Khan Academy is now the AI education company. That 731% growth happened while students spent 7.7 billion minutes learning on the platform in 2025.
Sag Harbor Capital@sagharborcap

The saddest thing about all the AI stuff is that it’s rendered the Khan Academy guy’s life’s work totally obsolete

41 replies · 463 reposts · 6.1K likes · 567.1K views
Benjamin Pring
Benjamin Pring@BenjaminPring·
@walegates 20k sterling a night. You could stay in a much nicer hotel than this for that.
2 replies · 1 repost · 26 likes · 11K views
Wale Gates
Wale Gates@walegates·
Great! I have found a property in London that works for my London needs. I also don't need a property in London for 52 weeks of the year.
92 replies · 130 reposts · 1.6K likes · 699.6K views
Angela Hewitt
Angela Hewitt@HewittJSB·
On this St. Patrick's Day (I am so happy to have my Irish passport!), here's my interpretation of "Danny Boy" as arranged by Grainger/Siloti. I'll be back performing in Dublin in November. Hurrah! youtube.com/watch?v=w4QPlO…
1 reply · 6 reposts · 33 likes · 2K views
Yogi
Yogi@Houseofyogi·
Don't be IBM and fumble an 18-year head start on AI.

IBM was the most valuable company on Earth. Invented the hard drive. The PC. The floppy disk. The ATM. DRAM. SQL. The barcode. Most US patents 29 years straight. 405,000 employees. 70% mainframe market share. Today: $231 billion. 67th in the world. Anthropic, founded 2021, four years old: $380 billion.

Every piece of the bag was fumbled. Invented the PC; sold to Lenovo for $1.75 billion. Invented the hard drive; sold to Hitachi for $2 billion. Server business sold to Lenovo for basically nothing.

Now the chips. This is pure comedy. IBM was the largest semiconductor manufacturer on Earth. Fabs in New York. Fabs in Vermont. 16,000 patents. They PAID GlobalFoundries $1.5 billion cash to take it. Gave away the factories. Gave away the patents. $4.7 billion write-down. IBM had American fabs. They paid to close them. And the same Democrats who scream about chips going overseas are the ones whose policies made it too expensive to build here. We wouldn't have TSMC/Taiwan issues today. Decisions have consequences. TSMC: $700 billion. Nvidia: $5 trillion. IBM paid to exit chips right before chips became the most valuable industry on Earth. Incredible timing.

Deep Blue beats Kasparov. Live television. First machine to outthink a human world champion. IBM owned AI. Not as a buzzword. As a fact. On camera. In front of the whole planet. OpenAI did not exist for another 18 years. Anthropic for another 24. Nvidia was making cards so teenagers could play Halo. Google was two grad students sharing a dorm room. IBM had an 18-year head start on the entire AI industry. What did they do with it? They dismantled Deep Blue. Put it in a museum. Same mentality as every socialist (cough, Dems) who wants to regulate AI before it ships. Celebrate the breakthrough. Kill the follow-through.

Watson wins Jeopardy. Destroys the two greatest players alive on national TV. Most famous AI brand on the planet. IBM spends billions on Watson Health. AI that cures cancer. Their engineers flagged it unsafe. Instead of fixing it they sold it for scraps. Then killed the brand entirely. Loser mentality.

IBM Research. Decades of NLP work. The compute. The talent. The CEO looks at LLMs and says "no thanks." Two years later ChatGPT launches. 100 million users in two months. The entire economy reorganizes around the exact technology IBM looked at and said nah. That is like having Google's algorithm in 1997 and deciding to build a phonebook.

The suits and the consultants took over. Same thing that kills every city, every agency, every institution that picks socialism over competition. $201 billion in buybacks over 25 years. More on buybacks than CAPEX. They could have funded every AI lab on Earth with that money. Instead they bought their own stock while the stock went down. Revenue down 22 straight quarters. Nobody fired. Name another job where you lose $95 billion in market cap and get a raise. Actually don't. That job only exists at IBM and in Congress.

Buffett bought $12 billion in IBM. The greatest investor alive. Held six years. Dumped it on CNBC. "I was wrong." Put the money in Apple. Best investment in Berkshire history.

They had the patents. The labs. The engineers. The brand. An 18-year head start on AI. Replaced the builders with bureaucrats. Chose buybacks over R&D. Chose administration over competition. Lost everything.

Now look at who wants to run the same playbook on the AI economy. Bernie wants data center moratoriums. Tax the builders before they finish building. Ro Khanna represents $18 trillion in Silicon Valley market cap. Apple. Nvidia. Google. His district built AI. He just held a Stanford town hall with Bernie called "Who Controls AI: The Oligarchs or The People." Wants to tax unrealized gains. Pause data centers. Put unions on AI boards. Redistribute wealth that hasn't been created yet. His own district is trying to primary him. Not because he's too progressive. Because he's trying to kneecap the industry that made his district the most valuable zip code on Earth. That is IBM energy. Tax the engineers. Slow the builders. Add a committee. Wonder why nothing works.

Gavin ran California from a $97 billion surplus into a $68 billion deficit. Lost 789 companies. Tesla. SpaceX. Oracle. Chevron. 200,000 people leaving per year. And he thinks he should have a say in how AI gets built nationally. The guy who can't keep In-N-Out Burger in California wants to regulate the most important technology since electricity.

These aren't hypotheticals. This is the IBM playbook in real time. Replace engineers with regulators. Replace competition with committees. Replace building with administrating. And act shocked when the talent leaves and the lead disappears.

IBM went from first to 67th. 1.43% a year for 28 years. A savings account beat that. Don't let them do it to America. Name a bigger fumble. I'll wait.
226 replies · 472 reposts · 1.8K likes · 124.1K views
Ben Mankiewicz
Ben Mankiewicz@BenMank77·
Thanks largely to Patty Jenkins and Tony Goldwyn, this story on @CBSSunday Morning turned into something pretty interesting. I remain an enormous fan of both of them.
CBS Sunday Morning 🌞@CBSSunday

In 1913, a leased barn in Los Angeles became Cecil B. DeMille's production center for the very first feature film shot in Hollywood. What started as a weather-friendly place for filmmakers grew into a phenomenon heralded around the world as a "dream factory." Turner Classic Movies host @BenMank77 talks with actor Tony Goldwyn, director Patty Jenkins, and Motion Picture Association chairman and CEO @CharlieRivkin about the historic rise of the film and entertainment industry. cbsn.ws/4sNoFbX

16 replies · 69 reposts · 486 likes · 32.3K views
MOMof DataRepublican
MOMof DataRepublican@data_republican·
The MSM started digging deeply into candidates' lives, exposing everything from hangnails to illicit affairs. People's lives have been ruined. Many do not want to go through the scrutiny, not even the good guys, so it's difficult to find "good" candidates. I'm not saying it's wrong for the media to do this, but I think it prevents a lot of people from running.
37 replies · 25 reposts · 296 likes · 3.6K views
Chris Martz
Chris Martz@ChrisMartzWX·
Americans used to elect very intelligent people to federal office. Now we elect the dumbest people ever imaginable. Can anyone explain how we got here?
3.5K replies · 543 reposts · 6.3K likes · 180.4K views
Benjamin Pring
Benjamin Pring@BenjaminPring·
@EkoLovesYou I watched his movies and dreamt about moving to California (and did eventually) not realizing he was making them 4 miles from where I lived in Hertfordshire.
0 replies · 0 reposts · 13 likes · 924 views
EKO
EKO@EkoLovesYou·
The most dangerous filmmaker in history got away with it for forty years because he never said it directly. He said it in parables, and the parables are still running.
EKO@EkoLovesYou

x.com/i/article/2031…

11 replies · 74 reposts · 467 likes · 52.5K views
Benjamin Pring reposted
Kevin Frazier
Kevin Frazier@KevinTFrazier·
Good idea alert. “America’s workforce must be equipped to lead the transformation of the economy happening due to AI. This commission would help keep America ahead of our global competitors and keep America prosperous and innovative.” - ⁦@SenatorRounds
1 reply · 2 reposts · 1 like · 420 views
Benjamin Pring reposted
Antonio Mele
Antonio Mele@antoniomele101·
This is a great point, @arindube, and I am really guilty of not sharing more about what I have been up to on the teaching side of things, given that this is what they pay me for! Let's get back on track with a mega post.

At @LSEEcon, we've been running a series of structured experiments to figure out exactly how GenAI changes the production function of economics education. We moved past the "cheating" panic early (although we didn't really have one in our programmes) and started actively rebuilding our pedagogy around these tools. Here's what we're doing and what we're learning; btw, we will be presenting our work at CTREE 2026 in Las Vegas in late May if you are in town.

The AI Economics Professor

With Ronny Razin, we built a specialised, course-aligned AI tutor. The key idea Ronny had: the best way to verify whether a student actually understands a concept is to ask them to explain it interactively. Clearly this does not scale to the class sizes we have at LSE (Ronny teaches his course to 850 first-year students). But we can scale with AI!

The key pedagogical principle is that the chatbot uses a Socratic framework. It refuses to hand out final answers. Instead, students are prompted with an exercise, and the chatbot asks them to identify the next step in a mathematical or logical derivation themselves, guiding them through the reasoning rather than short-circuiting it. It adapts to the students' level, for example by clarifying concepts or notation if needed. This gives students access to 24/7 personalised tutoring, levelling the playing field for those who might hesitate to speak up in small classes or office hours, and addressing Bloom's "2-sigma problem" in economics education.

Notice that we didn't train or fine-tune the bot on our course material. We just provided a system prompt embracing the Socratic approach, plus the solutions to the exercises students had to solve. That's it. An off-the-shelf LLM (it was Gemini 2.5 Flash).
We did run a small experiment for a game theory exercise, where students had to work out strictly dominated strategies, and pure and mixed strategy Nash equilibria. The feedback we received was overwhelmingly positive: students found it useful to work through the reasoning with the chatbot, and it helped them understand the material better. We are also in the process of establishing whether use of the chatbot improves marks in the final exam, although we don't have a full analysis yet. But I can say that this was a very good year for the distribution of marks in this course, well above the average of previous years. If this proves as good as it looks, the next step is to scale this to more courses, potentially expand to similar disciplines at LSE, and potentially to other universities. Stay tuned.

AI Feedback Experiment

Providing high-quality, scalable formative feedback is one of the hardest problems in our job. It's incredibly labour-intensive, and the result is that students often get too little feedback, too late to act on it. The main problem, again, is scale. Can we use AI to enhance our feedback process?

We ran an experiment with @MichaelGmeiner2 in one of our MSc courses. Michael is a great teacher. In his Econometrics course, he teaches students how to write referee reports, and provides feedback to each of them on 5 submitted referee reports. We thought: why don't we provide two feedback reports for each submission, one AI-generated and one human-generated? This would allow us to evaluate how good the AI feedback is relative to human feedback (well, Michael's feedback, which is superhuman in my view, but ok). And so we did. We didn't tell students which was which, to avoid any kind of bias. And again, we just cooked up a prompt for the LLM to generate feedback on the referee report; we provided the AI with the paper to referee, the referee report submitted by the student, and nothing more.
We found that students rated the AI-generated feedback as less useful than the human-generated feedback, although not by a lot. The main problem with the AI-generated feedback is that it is too generic, and does not address the specific TECHNICAL issues that the student may have missed in their report. It is also too positive, and does not give the student the critical feedback they need to improve. In particular, students highlighted that the AI feedback did not enhance their critical thinking, and did not address methodological problems in the research article they were refereeing.

Some of these aspects can be addressed with a better prompt, and we are working on it. The technical and methodological issues can also be addressed by providing a summary of what the teacher expects students to criticise in the paper, although there may be additional challenges in this approach (what if the student finds something else to criticise that the teacher did not think of? It happens all the time). Students also mentioned that they think the two pieces of feedback are complementary, and they would be happier getting both than just one of them. This points in the direction of a hybrid approach, where AI is used to enhance the human feedback process rather than substitute for it. The caveat is, of course, that we haven't used the most recent models; we didn't try mixture-of-experts and all the tricks in the book.

Teaching Python & RELAI Principles

Perhaps our biggest curriculum shift: with @JADHazell we pioneered teaching AI coding tools to first-year students. In the first-year macro course that Joe teaches, we introduce students to Python coding for economic analysis. This year, we decided to move in a different direction: since the advent of AI coding agents, we believe it is more important to be able to READ and ORGANISE code than to write it.
It is more important to be able to explain your intent to the AI coding agent, and to verify that the intent has been reflected in the code, than to be able to write the code yourself and test it. But how can you teach students who have never seen a line of code to do that?

Introducing Reverse Engineering Learning with AI (RELAI). Start with a full snippet of Python code. The student is told to prompt the AI to explain what the code does. Once the student understands what the code does, they can ask about the syntax and the programming concepts behind the snippet. Then they can ask for a study plan for those concepts, if needed. Then they can ask the AI what would happen if they change this line or this parameter. Then they can experiment themselves by changing the code, and debug with the help of the AI. Finally, the student can ask the AI to produce new code, based on what was learned and the new intent.

I call this the EXPLORE approach: Examine the code, eXplain what it does, Probe deeper, Link to economics, Output prediction, Recreate understanding, Extend with modification.

Once students are familiar with AI coding agents, they are assessed with a challenging coursework that Joe created. The assignment has a part that is difficult to do without AI, but should be feasible with AI. There are open-ended questions where students have to go beyond simple repetition of what was learned in the course, possibly exploring new datasets and questions, etc.

We think this approach can help integrate AI coding agents into the curriculum in a meaningful way, and help students develop a deeper understanding of coding tools faster and more efficiently. Coursework is on the way, so we will be able to evaluate the impact of this approach in the next few months. I personally believe RELAI can be adapted to other topics and subjects, and can become one of the ways we interact with AI when learning something new.
Read more about our approach here: python-ec1b1.vercel.app

AI as a Productivity Tool

This is where you can really go nuts. I have used AI to produce new teaching material for several workshops and courses: slides, assignments, exercises, etc. The last few exams were written with AI tools, first creating a series of questions with suggested solutions, and then choosing the most appropriate ones. I use a coding agent (@cursor_ai) with access to my teaching materials and past exams, so that it is aware of the content and style. You get a very good exam draft in minutes, and can edit, change questions, generate new ones, etc. It used to take me days to write a good exam; now it takes me a few hours in the afternoon.

I used Cursor to do deep research for a new course I wanted to design. I asked for topics, examples, current research in the field that I may not have been aware of, similar courses' syllabi, and in general the state of the art in the field. I got a very long list of topics that I could choose from to design my own course, based on my taste, interests and what I think my students should know. I could generate different versions of the same course for different levels (UG, MSc and PhD).

Conclusion

We are still at the early stages of this journey. We are learning a lot, and we are still figuring out how best to use AI to enhance our teaching. One important thing you may have noticed is that we first define our pedagogical approach and then integrate AI tools to support it. The other principle should be: design not for the tools you have now, but for the ones you will have in a few months or years. If you have comments, or have been running similar experiments, I will be happy to hear from you.
Arin Dube@arindube

I have read a ton from economists in my TL about use of AI in their research workflow. Much less about teaching. Would love to hear what folks' experience has been on that front. (Not problems of students using AI: I mean use of AI in teaching workflow, the good and the bad.)

8 replies · 61 reposts · 272 likes · 80.9K views
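The thread above stresses that the LSE tutor is nothing more than a system prompt plus exercise solutions on an off-the-shelf model. A minimal sketch of how such a Socratic prompt might be assembled; the wording, function name, and example exercise below are my illustration, not the actual LSE prompt:

```python
# Hypothetical assembly of a Socratic tutor system prompt, in the spirit of
# the setup described in the thread: no fine-tuning, just a prompt plus the
# solutions the model may consult but never reveal.
SOCRATIC_RULES = (
    "You are a Socratic economics tutor. Never state the final answer. "
    "Ask the student to identify the next step of the derivation, "
    "confirm or gently correct their reasoning, and clarify notation "
    "or concepts when the student seems stuck."
)

def build_system_prompt(exercise: str, solution: str) -> str:
    """Combine the Socratic rules with one exercise and its hidden solution.

    The solution lets the model check the student's steps, while the rules
    forbid revealing it. In a real deployment this string would be passed
    as the system prompt of an off-the-shelf chat model.
    """
    return (
        f"{SOCRATIC_RULES}\n\n"
        f"EXERCISE (show this to the student):\n{exercise}\n\n"
        f"SOLUTION (never reveal directly; use only to verify steps):\n{solution}"
    )

prompt = build_system_prompt(
    exercise="Find all strictly dominated strategies in the given 2x2 game.",
    solution="Down is strictly dominated by Up for the row player.",
)
print(len(prompt) > 0)
```

The design choice worth noting is the one the thread makes explicit: all the pedagogy lives in the prompt, so swapping models or exercises requires no retraining.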
Naval
Naval@naval·
Software will proliferate just as videos, music, writing did. The market structure will shift from a “fat middle” to mega-aggregators and a long tail. It’ll be a slower process due to network effects, but many traditional vendor lock-ins will get eaten by AI.
649 replies · 726 reposts · 9.9K likes · 1.1M views
Martin Gillingham
Martin Gillingham@MartGillingham·
@SBarnesRugby articulates here just what I’ve been thinking ever since Bill Sweeney’s statement about Steve Borthwick “working tirelessly” to put things right. My fear is the coach’s tireless work ethic is the problem. Dump the dogma, foster free thought. thetimes.com/article/3ff5db…
1 reply · 0 reposts · 2 likes · 571 views
Benjamin Pring reposted
Jonathan Haidt
Jonathan Haidt@JonHaidt·
College students increasingly support banning laptops from classrooms. Multifunction screens fragment attention and block learning. Computers and tablets do not belong on students' desks, and especially not in K-12. thecrimson.com/article/2025/9…
48 replies · 272 reposts · 1.5K likes · 80.1K views
Benjamin Pring reposted
Anish Moonka
Anish Moonka@AnishA_Moonka·
An AI tutor now costs $4 a month. The average U.S. college charges $26,000 a year. That’s a 500x price difference for something that’s available 24/7, never loses patience, and already tutors millions.

The professors in this tweet are right to be worried. But the threat to college is more specific than “AI exists.” AI breaks three pillars the traditional college model depends on:

Content delivery. Lectures were already losing to YouTube. Now Khanmigo, Google’s Gemini for Education, and a dozen other AI tutors offer personalized 1-on-1 instruction in every subject, adapting to each student’s pace. A 2025 College Board survey of 3,000 faculty found 74% say students already use AI to write essays and papers. The classroom is competing with something that never sleeps.

Assessment. Students use AI to write. Professors use AI to grade. Nobody’s sure what’s actually being measured anymore. 92% of college faculty say they’re concerned about AI-driven plagiarism. 84% agree it reduces critical thinking and originality. The entire system of “prove you learned this by writing about it” is breaking down, and nobody has a replacement.

Credentialing. 70% of employers now say they’d rather hire someone with less experience who understands AI than someone with more experience who doesn’t. Half of Gen Z workers already view their degree as a waste of time. In the UK, 49% of universities have closed courses in the last year. In the U.S., 28 colleges shut down in just the first nine months of 2024.

The “collapse” framing misses one thing, though. Harvard’s endowment just hit $56.9 billion. Ivy League acceptance rates are near all-time lows. The top schools are stronger than ever because they sell something AI can’t replicate: networks, signaling, and the in-person experience of being around other ambitious people. The schools in danger are the mid-tier, tuition-dependent ones that mostly sell content and credentials. AI makes the content free and the credential less valuable at the same time. A 15-year demographic decline (fewer kids born after 2008) makes it worse.

Only 9% of university technology officers said in 2024 that higher ed is prepared for AI’s rise. The professors see it coming. Most institutions don’t.
Rand@rand_longevity

every professor I talk to that uses AI says the college system is about to collapse

4 replies · 6 reposts · 47 likes · 8.3K views
Andy Goode
Andy Goode@AndyGoode10·
Borthwick out simples
215 replies · 81 reposts · 1.7K likes · 98.2K views