

Stanford NLP Group

@stanfordnlp
Natural Language Processing/Machine Learning @chrmanning @jurafsky @percyliang @ChrisGPotts @tatsu_hashimoto @MonicaSLam @Diyi_Yang @YejinChoinka @StanfordAILab



@chrmanning and I went on @latentspacepod to talk about world models. youtu.be/oBWRHnggscM?si…






making fresh slides for this now, should be fun :)

New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.
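
The finding suggests a familiar interpretability recipe: estimate a concept direction from paired activations, then add it back into a hidden state to steer behavior. A toy numpy sketch of that recipe (difference-of-means steering); the hidden size, activations, and scale are all invented here, and this is a generic illustration, not Anthropic's actual method.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 64  # hypothetical hidden size

    # Pretend residual-stream activations at some layer, collected on
    # prompts written to evoke an emotion vs. matched neutral prompts.
    acts_emotion = rng.normal(0.5, 1.0, size=(100, d))
    acts_neutral = rng.normal(0.0, 1.0, size=(100, d))

    # Difference-of-means "emotion" direction, unit-normalized.
    v = acts_emotion.mean(0) - acts_neutral.mean(0)
    v /= np.linalg.norm(v)

    def steer(h, alpha=4.0):
        """Nudge a hidden state along the concept direction."""
        return h + alpha * v

    h = rng.normal(size=d)
    print("projection before:", h @ v, "after:", steer(h) @ v)

In a real model the same idea would be applied with a forward hook at a chosen layer; the point is only that a single learned direction can causally drive downstream behavior.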


Instead, I suggest learning by doing: cs336.stanford.edu. CS336 will kick your ass, but I can't think of a better way to get up to speed with the frontier of AI.

How far do Marin's scaling laws extrapolate? At least 100x, apparently! Despite spooky spikes, our 1e23 FLOP Delphi run finished on forecast. The compute-optimal ladder costs ~1e21 FLOPs to train. Good scaling science lets you “run” this (not tiny) experiment at 1/100th the cost.
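
The forecasting here boils down to fitting a scaling law on cheap runs and extrapolating in compute. A minimal sketch with made-up numbers, fitting a pure power law L(C) = a * C^(-b) as a line in log-log space (real fits typically add an irreducible-loss term); none of this is Marin's actual fitting code.

    import numpy as np

    # Hypothetical compute-optimal "ladder": (FLOPs, eval loss) pairs
    # up to ~1e21 FLOPs. These numbers are invented for illustration.
    flops = np.array([1e18, 3e18, 1e19, 3e19, 1e20, 3e20, 1e21])
    loss  = np.array([3.90, 3.62, 3.38, 3.17, 2.99, 2.83, 2.70])

    # Linear fit in log-log space: log L = log a + b * log C.
    b, log_a = np.polyfit(np.log(flops), np.log(loss), 1)
    a = np.exp(log_a)

    def predict(C):
        return a * C ** b  # b is negative for a decreasing loss curve

    # Extrapolate 100x past the largest fitted run, to 1e23 FLOPs.
    print(f"forecast loss at 1e23 FLOPs: {predict(1e23):.3f}")

The 1/100th-cost claim follows directly: the ladder is fit entirely on runs up to ~1e21 FLOPs, and the 1e23 run only checks the forecast.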



For this week's NLP Seminar, we are excited to host @universeinanegg from UChicago!

Date and Time: Thursday, April 2, 11:00 AM – 12:00 PM Pacific Time
Zoom Link: stanford.zoom.us/j/93941842999?…
Title: Seeing Like a Language Model

Abstract: How does a language model perceive its input? What aspects of reality does it find legible, and which elude it? How can we know? Current approaches to studying LLMs, focused on engineering progress, are insufficiently exploratory. I will discuss new approaches we have been incubating, consider more conceptually what it means for interpretability approaches to be predictive rather than mechanistic, defend prompting as a form of scientific inquiry, and caution against formalizing concepts too early, before doing the required amount of stamp collecting.

Bio: Ari Holtzman is an Assistant Professor of Computer Science and Data Science at the University of Chicago, where he leads the Conceptualization Lab. His motto is: "I'm doing it with LLMs or I'm not doing it at all."






