Rewiring Education

834 posts

@RewiringK12

Ben Samara -🎙 Host - Rewiring Education (Summer 2025) 🎓 School Counselor💡 Founder - Samara Solutions Group 📚 Adjunct - TCNJ 🧠 Educational Futurist

Princeton, NJ · Joined January 2013
472 Following · 484 Followers
Rewiring Education@RewiringK12·
@Alex_TheAnalyst No one is taking the lead on teaching students responsible use of the technology, so they end up using tools blindly or figuring them out through YouTube or TikTok. Responsible use education should be mandated.
0 replies · 0 reposts · 0 likes · 20 views
Alex Freberg@Alex_TheAnalyst·
I'm going to call this right now. We are going to have a large population with absolutely no critical thinking skills if they blindly trust AI for everything. We have all already seen it. They don't validate outputs. They don't really understand anything. They just ask questions, it looks good, and they go with it. There are going to be huge issues in every company as this continues over the years. The technical debt and knowledge gaps are going to be insane. So much opportunity if you actually know what you're doing.
556 replies · 476 reposts · 3.1K likes · 118.2K views
Anish Moonka@AnishA_Moonka·
@RewiringK12 One step at a time, but definitely operating at the frontier
1 reply · 0 reposts · 16 likes · 4.6K views
Anish Moonka@AnishA_Moonka·
Sal Khan was one of the first people on Earth to see GPT-4. OpenAI called him in the summer of 2022, months before ChatGPT existed, and showed him what was coming. He couldn’t sleep that weekend.

By March 2023, Khan Academy launched Khanmigo, an AI tutor built on GPT-4, the same day OpenAI unveiled the model to the public. They were a launch partner. While every other education company was figuring out what ChatGPT meant for them, Khan Academy had already been building for seven months.

The “obsolete” platform now has 120 million yearly learners. Khanmigo, their AI tutor, grew 731% year over year in the 2024-25 school year, reaching 2 million users. In classrooms alone, adoption went from 40,000 students to 700,000 in a single year, with projections past 1 million for 2025-26. Their teacher tools are free in over 70 countries.

In January 2026, Khan Academy signed a deal with Google to put Gemini (Google’s AI) into new Writing Coach and Reading Coach tools for middle and high schoolers. They’re now working with both OpenAI and Google.

A peer-reviewed study published in PNAS (one of the top scientific journals in the world) in January 2026, with researchers from Stanford and the University of Toronto, found that more Khan Academy usage is directly linked to higher student test scores.

Sal Khan wrote a whole book in 2024 called “Brave New Words” arguing AI would save education. Sam Altman wrote a blurb for it. His TED Talk making the same argument was one of the 10 most-watched of 2023. In October 2025, he was named TED’s “vision steward.”

Khan Academy is now the AI education company. That 731% growth happened while students spent 7.7 billion minutes learning on the platform in 2025.
Sag Harbor Capital@sagharborcap

The saddest thing about all the AI stuff is that it’s rendered the Khan Academy guy’s life’s work totally obsolete

40 replies · 461 reposts · 6.1K likes · 564.1K views
Rewiring Education reposted
Hamsa Bastani@hamsabastani·
🚨🚨 Excited to share our first *positive* results on AI in education!

Most AI tutor work focuses on making the chatbot better. We suggest another lever: deciding what students should practice next to improve learning. We combine an LLM tutor with reinforcement learning to personalize problem sequencing using signals from student-chatbot interactions and solution attempts.

We tested this in a 5-month randomized field experiment in a Python course across 10 high schools in Taipei. All students had the same course material and the same AI tutor. The only difference was adaptive vs. fixed problem sequencing.

Result: across 770 students, adaptive sequencing improved performance on an in-person final exam taken without AI assistance by 0.15 SD, with larger effects for beginners. Our evidence suggests the gains came from stronger engagement and more productive AI use.
20 replies · 55 reposts · 302 likes · 50.7K views
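The sequencing idea in the thread above could be sketched as a toy bandit. The study's actual algorithm, state signals, and reward design are not given in the tweet, so everything here (the epsilon-greedy rule, topic names, and the reward mix of solution success and engagement) is an illustrative assumption, not the authors' method:

```python
import random

class ProblemSequencer:
    """Toy epsilon-greedy bandit for picking the next practice topic."""

    def __init__(self, topics, epsilon=0.1, seed=0):
        self.topics = list(topics)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # Running mean reward estimate per topic.
        self.counts = {t: 0 for t in self.topics}
        self.values = {t: 0.0 for t in self.topics}

    def next_topic(self):
        # Explore a random topic with probability epsilon,
        # otherwise exploit the topic with the best estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.topics)
        return max(self.topics, key=lambda t: self.values[t])

    def record(self, topic, solved, engaged):
        # Reward blends solution attempts with tutor engagement;
        # the 0.7/0.3 weighting is an invented placeholder.
        reward = 0.7 * float(solved) + 0.3 * float(engaged)
        self.counts[topic] += 1
        n = self.counts[topic]
        # Incremental update of the running mean.
        self.values[topic] += (reward - self.values[topic]) / n
```

In a real deployment the reward would come from the LLM tutor's logs rather than booleans, but the loop is the same: observe a signal after each problem, update the estimate, pick the next item.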
Rewiring Education@RewiringK12·
@LizStepan 100%, but we also need strict oversight and reliable tracking of whether the tools are actually having an impact. Too many tools are rolled out without any measurable data on whether they improve outcomes.
1 reply · 0 reposts · 1 like · 11 views
Liz Stepan@LizStepan·
The key to good educational outcomes for AI will lie in their ability to tap into expressive language. Ed Tech has not pushed students beyond basic receptive understanding and passively clicking a correct multiple-choice answer, or filling in a blank. Expressive language will be key.
3 replies · 0 reposts · 13 likes · 540 views
Rewiring Education@RewiringK12·
Pretty confident I’ve read more studies this week than you’ll read all year. Nobody here is or ever will be advocating for taking a shortcut.

Explaining why people shouldn’t commit crimes has never stopped people from committing crimes. Let’s stop doing that too. Explaining why people should exercise doesn’t stop people from sitting on the couch. Let’s stop doing that too. Explaining the dangers of social media doesn’t stop people from doomscrolling, so let’s just forget about it.

Great philosophy you have there. Keep it up.
0 replies · 0 reposts · 0 likes · 13 views
Maja von Westphal@majavonwestphal·
@RewiringK12 @jeffreyleefunk Because explaining why they shouldn't cheat has never stopped people from cheating. If "taking a shortcut" means reaching your goal faster, but never actually learning to get places by yourself, that's not something we should aim for. Read some studies, please.
1 reply · 0 reposts · 0 likes · 13 views
jeffrey lee funk@jeffreyleefunk·
Professors are fighting an uphill battle against the intrusion of AI into education, and it’s forcing them to rethink how they instruct their students, many of whom have already become hopelessly dependent on the tech. “What is #AI doing to us as a species?” futurism.com/artificial-int…
1 reply · 0 reposts · 5 likes · 365 views
Rewiring Education@RewiringK12·
Why are we framing this study as a specifically K-12 issue, other than the fact that violent attacks can be planned on campuses? Of course we need to regulate these tools and their ability to spew this type of info, but for society at large. None of the tools that were tested are school-based tools. Framing it as a K-12 educational issue is disingenuous and makes it seem like students are being allowed to use these tools in school.
0 replies · 0 reposts · 0 likes · 11 views
Principal Jon@Principal_Jon·
@RewiringK12 Agreed. It is why I love this Buckminster Fuller quote. "You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." This is what I am trying to do.
1 reply · 0 reposts · 1 like · 27 views
Principal Jon@Principal_Jon·
This is happening. Many schools are utilizing AI, not to replace thinking, but to enhance it. AI tools are being developed that will provide one-on-one tutoring to every child who wants it, a practice associated with learning gains two standard deviations above typical classroom instruction. Is your child ready for this new reality?
Kevin Frazier@KevinTFrazier

Do we need to disrupt education to be AI-ready? @SecRaimondo: yes. The status quo isn’t suited to helping us navigate the transition to an AI future. Now’s the time to be bold. h/t @nvidia

1 reply · 0 reposts · 8 likes · 862 views
Rewiring Education@RewiringK12·
How could they when we can be almost certain this technology will be misused? The problem isn’t AI…

It will be HUMANS who rush to get the tech into our classrooms without properly vetting the tools they’re using.
It will be HUMANS driving recklessly, crashing into the self-driving vehicles designed to keep the roads safer for us.
It will be HUMANS misusing this tech for war, financial gain, scams, and other nefarious means.
It will be HUMANS who offload their judgment to AI, copying answers, recommendations, or decisions they never bothered to question.
It will be HUMANS who stick their heads in the sand and refuse to properly teach the younger generation how to use these tools appropriately and responsibly.
It will be HUMANS who train these systems on biased data, set careless incentives, and then act shocked when the outputs reflect the mess we fed them.

We can’t change this almost certain future unless we change the HUMAN CONDITION.
Hadas Gold@Hadas_Gold

Poll after poll shows people don't feel great about AI. This Economist/YouGov poll shows 47% feel AI will have a negative impact on society vs. 16% who think it will be positive yougov.com/en-us/articles…

0 replies · 0 reposts · 0 likes · 70 views
Rewiring Education@RewiringK12·
Everyone wants black or white, left or right, right or wrong, this or that. The right answer, as usual, lives in the gray. Lots of screen use is bad. Some is helpful. Lots of AI use is harmful. Some will be very beneficial. We need people in charge who will be discerning about what we’re using and when we’re using it.
1 reply · 0 reposts · 2 likes · 60 views
Principal Jon@Principal_Jon·
Schools went from "tech will save us" to "get tech out" so fast I'm getting whiplash. Neither position is a strategy. The right answer takes more work than a pendulum swing.
23 replies · 8 reposts · 86 likes · 5.1K views
Rewiring Education@RewiringK12·
A fascinating new look at the research pertaining to AI & Education: scale.stanford.edu/sites/default/…

A new 2026 Stanford review of Educational AI research highlights some potential benefits, as well as the very real risks of throwing these tools at students. Not surprisingly, multiple studies showed that unfettered access to general-purpose AI can function as a crutch. Of course, students get an immediate performance boost while actively using GenAI. But more often than not, when it’s taken away, that learning doesn't transfer. College students using GenAI for research actually showed worse reasoning and argumentation than peers using regular search engines. Even worse, using AI to write essays drastically reduced students' recall, with 83% of participants failing to even provide a quote from their own essay.

At the same time, purpose-built tutoring tools that use pedagogical guardrails, such as offering step-by-step reasoning and hints rather than handing over direct answers, are showing promise. The data show that these platforms are successfully preventing the "crutch" effect and helping students maintain independent problem-solving skills. Even more impressively, AI is proving to be a highly effective real-time coach for educators by providing live, context-specific suggestions to human tutors during instruction, and actively encouraging them to use effective strategies like asking Socratic or guiding questions.

For me, this strengthens the argument both for a strict evaluative framework for all potential AI tools, as well as for teaching students responsible use of this technology. It’s not surprising that students using tools that hand over a complete solution lower their cognitive load at the direct expense of deep, independent thinking. What is surprising is how many people don’t get that memo. We desperately need to make sure we’re using AI correctly and responsibly.
If we’re exploring tools for classroom use, we must explore tools built specifically for education, and only those that have the right guardrails in place. At the moment, there aren’t many that fit the bill. And that’s ok! Practice P.I.V.O.T. A tool needs to be PURPOSEFUL. It needs to be INCLUSIVE. It needs to provide VALUE beyond what you could get without it. It needs to have mechanisms in place for OVERSIGHT and TRACKING of data to ensure it’s actually making a difference. If it doesn’t hit those five points, pivot away!
0 replies · 0 reposts · 2 likes · 62 views
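The "hints rather than direct answers" guardrail described above can be sketched in a few lines. This is a minimal illustration of the pattern only: the class name, the hint wording, and the rule that the answer unlocks only after every hint has been seen are all invented here, not taken from any tool the review covers:

```python
class GuardrailedTutor:
    """Toy tutor that releases scaffolding hints before the answer."""

    def __init__(self, steps, answer):
        self.steps = steps        # ordered step-by-step hints
        self.answer = answer      # withheld until all hints are used
        self.next_step = 0

    def request_help(self):
        # Hand out the next hint instead of the full solution.
        if self.next_step < len(self.steps):
            hint = self.steps[self.next_step]
            self.next_step += 1
            return {"type": "hint", "text": hint}
        # Only after the student has seen every hint does the
        # direct answer become available.
        return {"type": "answer", "text": self.answer}
```

The point of the design is exactly the "crutch" finding above: a student cannot skip straight to the solution, so each request for help still forces an intermediate reasoning step.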
Rewiring Education@RewiringK12·
I’ve long felt a growing discomfort with OpenAI and Sam Altman. Back when he was ousted as CEO and promptly brought back, there were a lot of things seemingly swept under the rug. None of the anecdotes below surprise me. Tonight I’ll be moving any information I have stored in ChatGPT to another LLM and will be deleting the app for good. I encourage others to do the same. We can’t afford to allow those with poor judgement and questionable intentions to lead us down the path to AGI. I, for one, hope ChatGPT becomes the “MySpace” of the GenAI era. I hope you’ll join me in making that a reality.
Rob Wiblin@robertwiblin

Huge repository of information about OpenAI and Altman just dropped: 'The OpenAI Files'. There's so much crazy shit in there. Here's what Claude highlighted to me:

1. Altman listed himself as Y Combinator chairman in SEC filings for years, a total fabrication (?!): "To smooth his exit [from YC], Altman proposed he move from president to chairman. He pre-emptively published a blog post on the firm's website announcing the change. But the firm's partnership had never agreed, and the announcement was later scrubbed from the post." "...Despite the retraction, Altman continued falsely listing himself as chairman in SEC filings for years, despite never actually holding the position." (WTAF.)

2. OpenAI's profit cap was quietly changed to increase 20% annually; at that rate it would exceed $100 trillion in 40 years. The change was not disclosed, and OpenAI continued to take credit for its capped-profit structure without acknowledging the modification.

3. Despite claiming to Congress he has "no equity in OpenAI," Altman held indirect stakes through Sequoia and Y Combinator funds.

4. Altman owns 7.5% of Reddit; when Reddit announced its OpenAI partnership, Altman's net worth jumped $50 million. Altman invested in Rain AI, then OpenAI signed a letter of intent to buy $51 million of chips from them.

5. Rumours suggest Altman may receive a 7% stake worth ~$20 billion in the restructured company.

6. OpenAI had a major security breach in 2023 where a hacker stole AI technology details but didn't report it for over a year. OpenAI fired Leopold Aschenbrenner explicitly because he shared security concerns with the board.

7. Altman denied knowing about equity clawback provisions that threatened departing employees' millions in vested equity if they ever criticised OpenAI. But Vox found he personally signed the documents authorizing them in April 2023. These restrictive NDAs even prohibited employees from acknowledging their existence.

8. Senior employees at Altman's first startup Loopt twice tried to get the board to fire him for "deceptive and chaotic behavior".

9. OpenAI's leading researcher Ilya Sutskever told the board: "I don't think Sam is the guy who should have the finger on the button for AGI". Sutskever provided the board a self-destructing PDF with Slack screenshots documenting "dozens of examples of lying or other toxic behavior".

10. Mira Murati (CTO) said: "I don't feel comfortable about Sam leading us to AGI".

11. The Amodei siblings described Altman's management tactics as "gaslighting" and "psychological abuse".

12. At least 5 other OpenAI executives gave the board similar negative feedback about Altman.

13. Altman owned the OpenAI Startup Fund personally but didn't disclose this to the board for years. Altman demanded to be informed whenever board members spoke to employees, limiting oversight.

14. Altman told board members that other board members wanted someone removed when it was "absolutely false". An independent review after Altman's firing found "many instances" of him "saying different things to different people".

15. OpenAI required employees to waive their federal right to whistleblower compensation. Former employees filed SEC complaints alleging OpenAI illegally prevented them from reporting to regulators.

16. While publicly supporting AI regulation, OpenAI simultaneously lobbied to weaken the EU AI Act. By 2025, Altman completely reversed his stance, calling the government approval he once advocated "disastrous", and OpenAI now supports federal preemption of all state AI safety laws even before any federal regulation exists.

Obviously this is only a fraction of what's in the apparently 10,000 words on the site. Link below if you'd like to look it over. (I've skipped over the issues with OpenAI's restructure, which I've written about before already, but in a way that's really the bigger issue.)

0 replies · 0 reposts · 0 likes · 37 views