Michael Littman
@mlittmancs
1.3K posts
Providence, RI · Joined November 2011
151 Following · 7.7K Followers
Brown CS@BrownCSDept·
@mlittmancs has been appointed @BrownUniversity's first Associate Provost for Artificial Intelligence, a new leadership position with a campus-wide charge to advance, in a responsible manner, Brown’s engagement with AI across its academic missions: cs.brown.edu/news/2024/12/0…
Michael Littman@mlittmancs·
This month is the one-year anniversary of the publication of my book, "Code to Joy". I'm happy to announce it's also the month of the release of the audiobook version, which I narrate. Enjoy! @mitpress amazon.com/dp/B0DKG4KPWY
Michael Littman@mlittmancs·
The US Secretary of State cannot tell a lie; after all, he's A. Blinken.
Michael Littman@mlittmancs·
I got to help shape this document, providing guidance about how AI researchers collaborate globally. It was unveiled at the UN General Assembly yesterday by the Secretary of State.
Michael Littman@mlittmancs·
You'd think that, since I'm working in government, my ID card would count as "government ID". It does not. Very confusing when I tried to get through security at the Pentagon. (My Rhode Island driver's license worked.)
Hamed Zamani@HamedZamani·
@mlittmancs's perspective on the current AI landscape in academia! "no one better positioned than the #SIGIR community to help lead the way!" (see the slide for context)
Michael Littman@mlittmancs·
@mm_jj_nn @ShriramKMurthi haha, that's funny... I feel like NSF folks (1) use a lot of acronyms (my list is creeping up to 500 that I've seen used in my 2 years), and (2) like to pronounce them (probably because the alternative is just too hard to understand).
Michael Littman@mlittmancs·
I wanted to better understand how language models can help with decision-making and planning and worked with a great team to produce this nifty piece of work!
Max Zuo@max_zuo

Ever wonder if LLMs use tools🛠️ the way we ask them? We explore LLMs using classical planners: are they writing *correct* PDDL (planning) problems? Say hi👋 to Planetarium🪐, a benchmark of 132k natural language & PDDL problems. 📜 Preprint: arxiv.org/abs/2407.03321 🧵1/n

Michael Littman@mlittmancs·
@bai_liping It's interesting, though, right? Because the best computational tool we have for processing language is the transformer, which isn't really structured the way we make decision systems.
Liping Bai 白莉萍@bai_liping·
@mlittmancs I always assume that the commonality between language and decision making is the understanding of structured system.
Michael Littman@mlittmancs·
Great topic, great speakers!
Jason Liu @HRI@jasonxyliu

Submit to our #RSS2024 workshop on “Robotic Tasks and How to Specify Them? Task Specification for General-Purpose Intelligent Robots” by June 12th. Join our discussion on what constitutes various task specifications for robots, in what scenarios they are most effective and more!

Michael Littman@mlittmancs·
In my book, "Code to Joy", I include a Marvel-movie-style post-credits scene teasing a future in which AI capabilities are available on an Arduino chip. I just learned that life is imitating art! theverge.com/2024/6/4/24170…
Michael Littman@mlittmancs·
@stanfordnlp @NSF Thanks for posting! A few things I'd add: Susan Dumais, George Furnas, Tom Landauer, and Scott Deerwester really pioneered these ideas in the late 80s. And one thing Sue and Tom did that I think is worth new attention: modeling the language acquisition process using human-scale data!
Michael Littman@mlittmancs·
@_max_entropy My concern is with the notion of counting hyperparameters/constants. Sagan said: If you wish to make an apple pie from scratch, you must first invent the universe. Where you draw the lines (algorithm/parameters/hyperparameters) matters.
Yagnesh Revar@_max_entropy·
@mlittmancs Okay, let me rephrase it. If it's possible to generate LLM-like algorithms using some reward function, what would it be? How many things (constants) would we need to specify?
Yagnesh Revar@_max_entropy·
Any LLM design experts here? How many hyperparameters in total do LLMs use on average? Say we want to include all that contributed to their working form. Include anything that isn't obvious or couldn't be derived from first principles of learning (assuming there's such a thing).
Michael Littman@mlittmancs·
And "escape codes" is "cape cod" surrounded by "es" on both sides.
Michael Littman@mlittmancs·
You probably know that an escape code is something that tells a computer not to interpret something literally, but to execute the command following it. But did you ever notice that "escape code" has "cape cod" RIGHT INSIDE it??
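The two senses of "escape" in that tweet can be sketched in a few lines. This is a minimal illustration in Python (chosen only for familiarity; any language with string escapes would do):

```python
# A backslash escape tells the language runtime not to read the next
# character literally: \n becomes a newline, \t becomes a tab.
two_lines = "cape\ncod"
print(two_lines)  # "cape" and "cod" print on separate lines

# An ANSI escape code plays the same trick on the *terminal*: the ESC
# character (\x1b) signals that what follows is a command, not text.
# Here, "[31m" switches to red and "[0m" resets the styling.
print("\x1b[31mescape code\x1b[0m")

# And yes, the pun holds up:
print("cape cod" in "escape code")  # True
```

In both cases the escape character flips the interpreter out of "treat this literally" mode, which is exactly the behavior the tweet describes.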
Michael Littman@mlittmancs·
@_max_entropy Ah, I apologize. I misread "hyper parameters" as "parameters". I'm not sure your actual question is well-defined, though.
Yagnesh Revar@_max_entropy·
@mlittmancs Oh, so you mean all the weights of the network. Shouldn't we use the term "parameters" for that?