John Wihbey

5.1K posts

@wihbey

prof @Northeastern, media & tech, AI-Media Strategies Lab (AIMES), author of Governing Babel

Boston · Joined February 2009
3K Following · 2.6K Followers
John Wihbey retweeted
Jonathan Stray @jonathanstray
Talking to AI shifts political opinions. Sometimes a little left, sometimes a little right. And reduces polarization a bit too! How *should* AI influence our politics? Well, I have a practical idea about that... 🧵
[image attached]
3 replies · 1 repost · 17 likes · 1K views
John Wihbey retweeted
Natasha Jaques @natashajaques
The paper I've been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content.

We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write?

We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
[image attached]
47 replies · 392 reposts · 1.5K likes · 250.3K views
John Wihbey @wihbey
@yaleisp A huge thanks to everyone— fantastic questions from students and fellows!
0 replies · 0 reposts · 0 likes · 26 views
John Wihbey retweeted
The Information Society Project
This week’s Law & Tech Talk examined “AI and Social Media: Global Perils and Governance Possibilities.” Thank you to Prof. John @wihbey for an excellent presentation!
[3 images attached]
1 reply · 1 repost · 6 likes · 469 views
John Wihbey retweeted
John B. Holbein @JohnHolbein1
Wow. This looks like an amazing project. Scholars at UMichigan have recently collected a massive dataset of over 1.1M podcast transcripts, covering nearly all English-language podcasts. Using this data, they investigate the content, structure, and responsiveness of the podcast ecosystem. Check it out!
[image attached]
63 replies · 529 reposts · 3.3K likes · 329.5K views
John Wihbey retweeted
Andrej Karpathy @karpathy
My pleasure to come on Dwarkesh last week; I thought the questions and conversation were really good. I re-watched the pod just now too. First of all, yes, I know, and I'm sorry that I speak so fast :). It's to my detriment because sometimes my speaking thread out-executes my thinking thread, so I think I botched a few explanations due to that, and sometimes I was also nervous that I was going too much on a tangent or too deep into something relatively spurious. Anyway, a few notes/pointers:

AGI timelines. My comments on AGI timelines look to be the most trending part of the early response. The "decade of agents" is a reference to this earlier tweet x.com/karpathy/statu… Basically my AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics. The apparent conflict is not real: imo we simultaneously 1) saw a huge amount of progress in recent years with LLMs, while 2) there is still a lot of work remaining (grunt work, integration work, sensors and actuators to the physical world, societal work, safety and security work (jailbreaks, poisoning, etc.)) and also research to get done before we have an entity that you'd prefer to hire over a person for an arbitrary job in the world. I think that overall, 10 years should otherwise be a very bullish timeline for AGI; it's only in contrast to present hype that it doesn't feel that way.

Animals vs Ghosts. My earlier writeup on Sutton's podcast: x.com/karpathy/statu… I am suspicious that there is a single simple algorithm you can let loose on the world that learns everything from scratch. If someone builds such a thing, I will be wrong and it will be the most incredible breakthrough in AI. In my mind, animals are not an example of this at all - they are prepackaged with a ton of intelligence by evolution, and the learning they do is quite minimal overall (example: a zebra at birth). Putting our engineering hats on, we're not going to redo evolution. But with LLMs we have stumbled on an alternative approach to "prepackage" a ton of intelligence in a neural network - not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space: distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal-like over time, and in some ways that's what a lot of frontier work is about.

On RL. I've critiqued RL a few times already, e.g. x.com/karpathy/statu… First, you're "sucking supervision through a straw," so I think the signal/flop is very bad. RL is also very noisy because a completion might have lots of errors that get encouraged (if you happen to stumble onto the right answer), and conversely brilliant insight tokens that get discouraged (if you happen to screw up later); see the toy sketch after this post. Process supervision and LLM judges have issues too. I think we'll see alternative learning paradigms. I am long "agentic interaction" but short "reinforcement learning" x.com/karpathy/statu… I've seen a number of papers pop up recently that are imo barking up the right tree along the lines of what I called "system prompt learning" x.com/karpathy/statu… but I think there is also a gap between ideas on arxiv and actual, at-scale implementation at an LLM frontier lab that works in a general way. I am overall quite optimistic that we'll see good progress on this dimension of remaining work quite soon; e.g. I'd even say ChatGPT memory and so on are primordial deployed examples of new learning paradigms.

Cognitive core. My earlier post on the "cognitive core": x.com/karpathy/statu… - the idea of stripping down LLMs, of making it harder for them to memorize, or actively stripping away their memory, to make them better at generalization. Otherwise they lean too hard on what they've memorized. Humans can't memorize so easily, which now looks more like a feature than a bug by contrast. Maybe the inability to memorize is a kind of regularization. Also my post from a while back on how the trend in model size is "backwards" and why "the models have to first get larger before they can get smaller": x.com/karpathy/statu…

Time travel to Yann LeCun 1989. This is the post that I did a very hasty/bad job of describing on the pod: x.com/karpathy/statu… Basically: how much could you improve Yann LeCun's results with the knowledge of 33 years of algorithmic progress? How constrained were the results by each of algorithms, data, and compute? A case study thereof.

nanochat. My end-to-end implementation of the ChatGPT training/inference pipeline (the bare essentials): x.com/karpathy/statu…

On LLM agents. My critique of the industry is more in overshooting the tooling w.r.t. present capability. I live in what I view as an intermediate world where I want to collaborate with LLMs and where our pros/cons are matched up. The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless. For example, I don't want an Agent that goes off for 20 minutes and comes back with 1,000 lines of code. I certainly don't feel ready to supervise a team of 10 of them. I'd like to go in chunks that I can keep in my head, where an LLM explains the code that it is writing. I'd like it to prove to me that what it did is correct; I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when not sure about something. I want to learn along the way and become better as a programmer, not just get served mountains of code that I'm told works. I just think the tools should be more realistic w.r.t. their capability and how they fit into the industry today, and I fear that if this isn't done well we might end up with mountains of slop accumulating across software, and an increase in vulnerabilities, security breaches, etc. x.com/karpathy/statu…

Job automation. How radiologists are doing great x.com/karpathy/statu… and what jobs are more susceptible to automation and why.

Physics. Children should learn physics in early education not because they go on to do physics, but because it is the subject that best boots up a brain. Physicists are the intellectual embryonic stem cell x.com/karpathy/statu…

I have a longer post that has been half-written in my drafts for ~a year, which I hope to finish soon. Thanks again Dwarkesh for having me over!
Dwarkesh Patel @dwarkesh_sp

The @karpathy interview

0:00:00 – AGI is still a decade away
0:30:33 – LLM cognitive deficits
0:40:53 – RL is terrible
0:50:26 – How do humans learn?
1:07:13 – AGI will blend into 2% GDP growth
1:18:24 – ASI
1:33:38 – Evolution of intelligence & culture
1:43:43 – Why self driving took so long
1:57:08 – Future of education

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!

577 replies · 2K reposts · 16.9K likes · 4.1M views
John Wihbey retweeted
James Zou @james_y_zou
We found a troubling emergent behavior in LLMs.
💬 When LLMs compete for social media likes, they start making things up.
🗳️ When they compete for votes, they turn inflammatory/populist.
When optimized for audiences, LLMs inadvertently become misaligned: we call this Moloch's Bargain.
[image attached]
852 replies · 2K reposts · 9.7K likes · 1.3M views
John Wihbey retweeted
Center for Design at CAMD
Our next CfD Conversation is about artificial intelligence and truth. 💡 On Oct. 15 from 12-1:30 p.m. ET, our panelists will explore how generative AI has become a pivotal force in the proliferation of mis- and disinformation. Register at the link below. camd.northeastern.edu/events/truth-i…
[image attached]
0 replies · 1 repost · 2 likes · 98 views
John Wihbey @wihbey
Using AI to do message testing or scope public preferences? My lab's new paper provides guidance: "AI Simulations of Audience Attitudes and Policy Preferences: 'Silicon Sampling' Guidance for Communications Practitioners" lnkd.in/eKgn3tTE @BurnesCenter @Northeastern
0 replies · 0 reposts · 1 like · 78 views
John Wihbey @wihbey
Just had my students asking what the heck the FCC is and how it could do this... The internet generation is like: someone in gov't can do this kind of thing? I am also astonished, frankly. I have been writing about the FCC as a legacy org, not central. Then this all happened.
0 replies · 0 reposts · 2 likes · 64 views
John Wihbey @wihbey
Fascinating turning of the tables regarding the right-left tussle over hate speech online. My new book, Governing Babel: The Debate over Social Media Platforms and Free Speech (@mitpress, out Oct. 7), has a long chapter on the history of hate speech. amazon.com/Governing-Babe…
0 replies · 0 reposts · 2 likes · 107 views
John Wihbey retweeted
Observatory on Social Media @OSoMe_IU
More and more papers are proposing to use LLMs to simulate human agents in agent-based models. I have strong reservations about attribution/causation/interpretability in this methodology, as I discuss in this article about one (excellent) such paper: 🧪 science.org/content/articl…
1 reply · 4 reposts · 11 likes · 785 views
John Wihbey retweeted
Amanda Askell @AmandaAskell
We made some updates to Claude’s system prompt in claude.ai recently (developed in collaboration with Claude, of course). They aren’t set in stone and may be updated, but I’ll go through the current version of each and the reason behind it in this thread 🧵
102 replies · 171 reposts · 2.2K likes · 415.6K views