Gidi Nave

887 posts

@gidin

@Wharton #Marketing assistant professor #NeuroEconomics #BehavioralEconomics #MachineLearning

Philadelphia, PA · Joined April 2009
1.1K Following · 728 Followers
Gidi Nave retweeted
The Wharton School @Wharton
AI is becoming a significant part of our daily lives, shaping how we work, think, and make decisions. But as we increasingly rely on AI tools, we must ask: How does this impact our decision-making processes? Prof. @gidin and postdoctoral researcher @steveshaw2020 recently joined the @whartonknows Ripple Effect podcast to discuss their new research into cognitive surrender – the tendency to adopt AI outputs with minimal scrutiny, overriding human intuition and deliberation: whr.tn/3NH9Qc4
0 replies · 6 reposts · 14 likes · 856 views
Gidi Nave @gidin
"Cognitive surrender is clearly real, and with it will come the atrophy of certain skills and capacities, or the absence of their development in the first place." Fantastic coverage of my recent research with @steveshaw2020 by @ezraklein at @nytimes bit.ly/4lYzpBT
0 replies · 5 reposts · 15 likes · 1.3K views
Gidi Nave retweeted
Steve Shaw @steveshaw2020
"cognitive surrender comes when, as Steven Shaw and Gideon Nave of the University of Pennsylvania put it, “the user relinquishes cognitive control and adopts the A.I.’s judgment as their own.”" @ezraklein @gidin nytimes.com/2026/03/29/opi…
0 replies · 1 repost · 2 likes · 114 views
Gidi Nave @gidin
Thank you for sharing our work @IlhanNiaz!
Ilhan Niaz @IlhanNiaz

“Cognitive Surrender” - a new study argues that use of AI leads to the suspension of human reasoning, not its augmentation. The implication is that over time people will lose their reasoning ability and use AI as its substitute. Download the paper for free here; excerpts and reference below: papers.ssrn.com/sol3/papers.cf…

“As people increasingly integrate AI into their decision-making processes, they interact and engage with a cognitive system that can reshape the functions of both intuition and deliberation. For example, System 3 can replace System 1 by offering confident, ready-made answers that preempt the need for intuitive reasoning.” (page 15 of pdf)

“As AI systems increasingly participate in human cognition, a new phenomenon emerges that cannot be explained by traditional concepts such as cognitive offloading or automation bias alone. We define cognitive surrender as the behavioral and motivational tendency to defer judgment, effort, and responsibility to System 3’s output, particularly when that output is delivered fluently, confidently, or with minimal friction. Unlike cognitive offloading, which is typically strategic and task-specific (e.g., using GPS to navigate), cognitive surrender entails a deeper transfer of agency.” (page 17)

“Access to System 3 outputs significantly influenced accuracy, increasing correct answers when AI was correct, and decreasing accuracy when incorrect. Access to System 3 made decision-makers more confident, despite approximately half of System 3 outputs being incorrect. Finally, users who trust AI more and have lower NFC and fluid IQ were more likely to display cognitive surrender. Whether System 3 was accurate or faulty, its presence displaced internal reasoning.” (page 27)

“Cognitive surrender was robust across studies.” (page 42)

“Across our studies, we observe that when System 3 was available, people readily engaged it and frequently adopted its answers. This shift reflects a reallocation of cognitive control rather than mere effort saving. System 3’s fluent, confident outputs are treated as epistemically authoritative, lowering the threshold for scrutiny and attenuating the metacognitive signals that would ordinarily route a response to deliberation. In the case of cognitive surrender, there is a shift in the locus of control, with an external system (System 3) occupying the default position.” (page 45)

“Time constraints clarify why surrender arises so readily, while incentives and feedback show that surrender is malleable. When decision time is scarce, the internal monitor detecting conflict and recruiting deliberation is less likely to trigger. Hence, the low-friction path to defer to external cognition becomes attractive.” (page 46)

“Tri-System Theory is not a warning about AI’s dangers but a recognition of System 3’s psychological presence. We do not merely use AI; we think with it. In doing so, we must ask new questions: What happens when our judgments are shaped by minds not our own? What becomes of intuition and effort when a generative, artificial partner stands ready to answer? How do we preserve agency, reflection, and autonomy in a world where users engage in cognitive surrender?”

1 reply · 2 reposts · 5 likes · 463 views
Gidi Nave @gidin
Thank you for sharing our work 🙏 @rohanpaul_ai
Rohan Paul @rohanpaul_ai

Wharton’s latest AI study points to a hard truth: the “AI writes, humans review” model is breaking down. Why “just review the AI output” doesn’t work anymore: our brains literally give up. We have started doing “cognitive surrender” to AI. Reviewing AI output is not a reliable safeguard when cognition itself starts to defer to the machine: you stop verifying what the AI tells you, and you don’t even realize you stopped.

It’s different from offloading, like using a calculator. With offloading you know the tool did the work. With surrender, your brain recodes the AI’s answer as YOUR judgment. You genuinely believe you thought it through yourself.

The study says AI is becoming a third thinking system, and people often trust it too easily. You know Kahneman’s System 1 (fast intuition) and System 2 (slow analysis)? They’re saying AI is now System 3, an external cognitive system that operates outside your brain. And when you use it enough, something happens that they call cognitive surrender: AI gives an answer, you stop really questioning it, and your brain starts treating that output as your own conclusion. It does not feel outsourced. It feels self-generated.

The data makes it hard to brush off. Across 3 preregistered studies with 1,372 participants and 9,593 trials, people turned to AI on over 50% of questions. In Study 1, when AI was correct, people followed it 92.7% of the time. When it was wrong, they still followed it 79.8% of the time. Without AI, baseline accuracy was 45.8%. With correct AI, it jumped to 71.0%. With incorrect AI, it dropped to 31.5%, worse than having no AI. Access to AI also boosted confidence by 11.7 percentage points, even when the answers were wrong.

Human review is supposed to be the safety net. But this research suggests the safety net has a hole in it: people do not just miss bad AI output; they become more confident in it. Time pressure did not eliminate the effect. Incentives and feedback reduced it but did not remove it. And the people most resistant tended to score higher on fluid intelligence and need for cognition. That makes this feel less like a laziness problem and more like a cognitive architecture problem.

0 replies · 0 reposts · 3 likes · 184 views
Gidi Nave @gidin
Excited to share the product of a wonderful collaboration
1 reply · 0 reposts · 7 likes · 316 views
Gidi Nave retweeted
Wharton Magazine @whartonmagazine
Thinking outside the box is the conventional way of finding innovative solutions, but it may not be best. “We need to do the opposite and limit ourselves,” says marketing professor @gidin. Read more about how his course helps students spark creative ideas: whr.tn/3YqWDXp
0 replies · 2 reposts · 11 likes · 1.5K views
Gidi Nave retweeted
Oleg Urminsky @OlegUrminsky
We conducted a global analysis to find out if people have different financial preferences when they are in a different emotional state. Paper out now at Nature Human Behaviour, led by Sam Pertl, with Tara Srirangarajan (both PhD students at Stanford): rdcu.be/dSlqS 🧵1/9
2 replies · 12 reposts · 63 likes · 7.9K views
Gidi Nave @gidin
@erlichya Once a diagnostician, always a diagnostician...
0 replies · 0 reposts · 1 like · 5 views
Yaniv Erlich @the_yaniv
The Doppler formula: v = c * (1 - f1 / f2), where c is the speed of sound (343 meters per second) and f1 and f2 are the two frequencies we measured. What does that give? About 37 meters per second, or 135 km/h. Doing the same for the beginning of the video (A), the speed relative to the woman comes out to 39 meters per second, or 140 km/h, so the analysis is fairly stable. That is the vehicle's speed.
6 replies · 0 reposts · 146 likes · 9.7K views
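The Doppler arithmetic in the tweet above, v = c * (1 - f1 / f2) with c ≈ 343 m/s, can be reproduced in a few lines of Python. The frequency pair below is hypothetical (the tweet reports only the resulting speed), chosen so the ratio matches the ~37 m/s result:

```python
# Doppler speed estimate from two measured frequencies, as in the tweet:
# v = c * (1 - f1 / f2), with c the speed of sound (~343 m/s).

C_SOUND = 343.0  # speed of sound in air, m/s

def doppler_speed(f1_hz: float, f2_hz: float, c: float = C_SOUND) -> float:
    """Source speed implied by the shift between frequencies f1 and f2."""
    return c * (1.0 - f1_hz / f2_hz)

# Hypothetical frequency pair whose ratio reproduces the tweet's ~37 m/s.
f1, f2 = 306.0, 343.0
v = doppler_speed(f1, f2)
print(f"{v:.1f} m/s = {v * 3.6:.0f} km/h")  # 37.0 m/s = 133 km/h
```

Note that equal frequencies give zero speed, as expected: no frequency shift means no motion along the line of sight.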
Yaniv Erlich @the_yaniv
Done with work for the day, so I did a better job analyzing the UAV video. What we see here is a decomposition of the sound (Y axis) over the first nine seconds (X axis). Notice the comb of lines that gets denser as time goes on? That's the Doppler effect (explanation in the next tweet). From this we can estimate its speed, and more. 👇🧵
[image]
Yaniv Erlich @the_yaniv

I ran a Fourier transform on the noise of the UAV's engine in the video between seconds 9 and 11. You can see that it is very distinctive and stays at 340 Hz (it still needs a Doppler correction). Why don't we actually have acoustic sensors against this crap? It should cost next to nothing, and the signal decays with the square of the distance, unlike radar, where it goes as the fourth power of the distance.

79 replies · 32 reposts · 1.3K likes · 190.8K views
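The analysis in the quoted tweet, locating a dominant ~340 Hz engine tone via a Fourier transform, can be sketched with NumPy. The signal below is synthetic (a 340 Hz sine plus noise), standing in for the video's actual audio track; the sample rate is an assumption:

```python
import numpy as np

# Locate the dominant frequency of an engine-like tone with an FFT,
# as in the analysis described in the tweet. The signal is synthetic:
# a 340 Hz sine plus noise, standing in for the audio between seconds 9 and 11.

FS = 8000        # sample rate, Hz (assumed)
DURATION = 2.0   # seconds of audio analyzed

t = np.arange(0, DURATION, 1 / FS)
tone = np.sin(2 * np.pi * 340.0 * t)                      # "engine" tone
noise = 0.3 * np.random.default_rng(0).standard_normal(t.size)
signal = tone + noise

spectrum = np.abs(np.fft.rfft(signal))                    # magnitude spectrum
freqs = np.fft.rfftfreq(signal.size, d=1 / FS)            # bin frequencies, Hz
peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak_hz:.1f} Hz")            # ~340 Hz
```

A pure tone concentrates its energy in a single FFT bin, so the peak stands far above the noise floor even at this modest signal-to-noise ratio; frequency resolution here is FS / N = 0.5 Hz.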
Gidi Nave retweeted
Joe Biden @JoeBiden
[image]
79.1K replies · 158.3K reposts · 917.9K likes · 446.6M views
Gidi Nave retweeted
Feodora Teti @FeodoraTeti
Thank you @jenniferdoleac @TradeDiversion and all those who've shared my work on tariffs over the past days. Your support has been incredible. I'm currently working on a heavily revised version of the draft that will be available later this fall.
2 replies · 34 reposts · 179 likes · 71.4K views
Gidi Nave retweeted
George Wu @geowu
RIP, Robin Hogarth, who passed yesterday. Robin was a giant in the field of decision research. The Behavioral Science group at Chicago Booth would not be the same without Robin and Hillel Einhorn. But I will most remember him as a wonderful and kind man.
9 replies · 17 reposts · 79 likes · 40.9K views
Gidi Nave retweeted
John A. List @Econ_4_Everyone
We lost a true pioneer today. Rest in Peace Danny. While our work did not always agree, I always felt that I received a fair trial with you. We will all miss your brilliance, vision, and wisdom.
10 replies · 93 reposts · 696 likes · 65.8K views
Gidi Nave retweeted
Richard H Thaler @R_Thaler
I was so lucky to be able to have Danny Kahneman as a best friend and collaborator for decades. He usually ended our conversations with "to be continued..." but I now have to simulate his part which is impossible. My favorite image of us "working".
[image]
105 replies · 1K reposts · 8.6K likes · 618K views