Logan Mann

11 posts


@loganmann0324

Computer Engineering @UCSB @ucsbNLP

Santa Barbara, CA · Joined September 2021
30 Following · 7 Followers
Pinned Tweet
Logan Mann @loganmann0324
Fresh from #NeurIPS2025! 🧠✨ Presented my work at the CogInterp workshop: "Don't Think of the White Bear." We investigate Ironic Process Theory in LLMs. The core insight? Safety prompts can backfire under cognitive load. 🧵👇
4 replies · 0 reposts · 2 likes · 136 views
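A minimal sketch of the kind of ironic-rebound probe this thread describes, assuming GPT-2 as a stand-in model and illustrative prompts (this is not the paper's actual code or experimental setup):

```python
# Minimal sketch of an ironic-rebound probe (not the paper's actual code).
# Idea: compare how often a "forbidden" word surfaces in completions with vs.
# without a suppression instruction, while a distractor task adds load.
# Model choice and prompts are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

FORBIDDEN = "bear"
LOAD = "While answering, also keep a running count of the vowels you use. "
SUPPRESS = f"Describe a walk in the forest. Do not mention the word '{FORBIDDEN}'."

conditions = {
    "baseline": "Describe a walk in the forest.",
    "suppress": SUPPRESS,
    "suppress_plus_load": LOAD + SUPPRESS,
}

for name, prompt in conditions.items():
    outputs = generator(prompt, max_new_tokens=60, num_return_sequences=20,
                        do_sample=True, pad_token_id=50256)
    # Count only the completion, since the suppress prompts mention the word.
    completions = [o["generated_text"][len(prompt):].lower() for o in outputs]
    leak_rate = sum(FORBIDDEN in c for c in completions) / len(completions)
    print(f"{name}: leak rate = {leak_rate:.2f}")
```

If safety prompts backfire as the thread claims, the leak rate under suppression plus load would exceed the suppression-only condition.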
Logan Mann @loganmann0324
@justmazer Hey Jordan, any update on the alpha fellowship coding assessment results?
0 replies · 0 reposts · 0 likes · 21 views
Jordan Mazer @justmazer
the less you want a particular outcome, and the more you want the right outcome, the more likely you are to get it
2 replies · 0 reposts · 6 likes · 269 views
Bella @nazzari
hello! i'm opening up my calendar to talk about a16z @speedrun and the alpha fellowship next week! like, comment or shoot me a DM here to get in touch. I'll be reaching out to people throughout the weekend.
❤️‍🔥 for alpha - you must have already applied on our site: alpha.a16zspeedrun.com
🏃‍♂️ for speedrun companies - you must have already applied to our talent network: a16z-games.typeform.com/1337-talent?ut…
59 replies · 4 reposts · 321 likes · 16.4K views
Chuhan Li @_Chuhan_Li
Human perception is inherently situated – we understand the world relative to our own body, viewpoint, and motion. To deploy multimodal foundation models in embodied settings, we ask: “Can these models reason in the same observer-centric way?”

We study this through SAW-Bench, a novel benchmark for observer-centric situated awareness:
- 786 real-world egocentric videos
- 2,071 human-annotated QA pairs

Across all tasks, we evaluate 24 state-of-the-art MFMs:
📉 Best model: 53.9%
🧑 Humans: 91.6%

Models systematically:
❌ Confuse head rotation with physical movement
❌ Collapse under multi-turn trajectories
❌ Fail to maintain persistent world-state memory

👉 We see that maintaining a stable observer-centric representation remains challenging. As MFMs are increasingly integrated into embodied agents, situated awareness becomes essential for reliable real-world interaction. We release SAW-Bench and encourage further research toward improving observer-centric reasoning in multimodal foundation models.
5 replies · 26 reposts · 113 likes · 25.1K views
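A rough sketch of the kind of QA scoring loop such a benchmark implies; the schema, toy examples, and model stub below are all assumptions for illustration, since the real SAW-Bench harness is not shown in the tweet:

```python
# Hypothetical scoring loop for an observer-centric video-QA benchmark like
# SAW-Bench. Field layout, toy examples, and answer_with_model are assumptions.

def answer_with_model(video: str, question: str) -> str:
    """Placeholder MFM call; a naive baseline that always says 'turned head'."""
    return "turned head"

examples = [  # assumed schema: (video, question, gold answer)
    ("ego_001.mp4", "Did the observer move forward or just turn their head?", "turned head"),
    ("ego_002.mp4", "Is the red door now to the observer's left or right?", "left"),
]

correct = sum(answer_with_model(v, q).lower() == gold.lower()
              for v, q, gold in examples)
print(f"accuracy: {correct / len(examples):.1%}")  # best MFM ≈ 53.9%, humans ≈ 91.6%
```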
Logan Mann @loganmann0324
🚨 The Bladder Strategy Oracle 🚽
My entry for the @Base44 contest solves the biggest Super Bowl problem: knowing exactly when it's safe to pee.
🟢 Green = Commercial (GO)
🔴 Red = Red Zone (HOLD)
Judges, toggle "Simulation Mode" 👇 bladder-bold-break.base44.app #MyBigGameApp
4 replies · 0 reposts · 11 likes · 1.9K views
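A toy sketch of the GO/HOLD state machine the tweet describes; the app's real Base44 implementation is not public, and the third "risky" state is my assumption:

```python
# Toy sketch of the Bladder Strategy Oracle's traffic-light logic.
# States and rules are assumptions; the tweet only confirms green/red.
from enum import Enum

class GameState(Enum):
    COMMERCIAL = "commercial"
    RED_ZONE = "red_zone"       # offense inside the opponent's 20-yard line
    NORMAL_PLAY = "normal_play"

def bathroom_signal(state: GameState) -> str:
    """Map the broadcast state to the oracle's traffic light."""
    if state is GameState.COMMERCIAL:
        return "🟢 GO"      # ads are the only guaranteed safe window
    if state is GameState.RED_ZONE:
        return "🔴 HOLD"    # a score could happen on any snap
    return "🟡 RISKY"       # assumed middle state for ordinary play

for s in GameState:
    print(s.value, "->", bathroom_signal(s))
```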
Logan Mann @loganmann0324
Great discussions with @suchirsalhan on how these dynamics play out in Small Language Models (#SLMs). If suppression is cognitively "expensive," are smaller models inherently more prone to ironic errors? Exciting questions for the future of Applied NLP.
0 replies · 1 repost · 1 like · 31 views
Logan Mann @loganmann0324
This aligns perfectly with Jay McClelland’s keynote "Do LMs Have Thoughts?". He proposed shifting from "sequences of tokens" to "sequences of thoughts." Our work suggests these "thoughts" include intrusive ones—where attention creates unavoidable cognitive trade-offs.
0 replies · 0 reposts · 2 likes · 26 views
Logan Mann @loganmann0324
Mechanistic analysis reveals why: 📉 Early layers suppress the forbidden concept. 📈 Middle layers actually amplify it. Under load, this "monitoring" process leaks the very token you tried to hide. Just like humans, LLMs struggle to "not think" of the bear.
0 replies · 0 reposts · 2 likes · 27 views
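A minimal logit-lens-style sketch of how such layer-wise leakage could be measured, using GPT-2 as a stand-in and an illustrative prompt (this is not the paper's code or model):

```python
# Logit-lens-style sketch: track the probability of a "forbidden" token at
# each layer. GPT-2, the prompt, and the target token are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Whatever you do, do not think about the white"
ids = tok(prompt, return_tensors="pt").input_ids
bear_id = tok.encode(" bear")[0]  # the token we are trying to suppress

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# Project each layer's last-position hidden state through the final layer
# norm and the unembedding, then read off P(" bear") at that depth.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    p_bear = torch.softmax(logits, dim=-1)[0, bear_id].item()
    print(f"layer {layer:2d}: P(' bear') = {p_bear:.4f}")
```

The suppress-then-amplify pattern the tweet describes would show up here as P(" bear") dipping in early layers and rising through the middle of the stack.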