
Gidi Nave
@gidin
@Wharton #Marketing assistant professor #NeuroEconomics #BehavioralEconomics #MachineLearning

Wharton found that 80% of people follow wrong AI answers and feel smarter for it. The researchers called this cognitive surrender. Sam Altman called it product-market fit. x.com/Pirat_Nation/s…





How AI-generated email creates a synthetic version of you. vox.com/future-perfect…


“Cognitive Surrender” - a new study argues that the use of AI leads to the suspension of human reasoning, not its augmentation. The implication is that over time people will lose their reasoning ability and use AI as its substitute. Download the paper for free here; excerpts and reference below: papers.ssrn.com/sol3/papers.cf…
——-
“As people increasingly integrate AI into their decision-making processes, they interact and engage with a cognitive system that can reshape the functions of both intuition and deliberation. For example, System 3 can replace System 1 by offering confident, ready-made answers that preempt the need for intuitive reasoning.” (page 15 of pdf)

“As AI systems increasingly participate in human cognition, a new phenomenon emerges that cannot be explained by traditional concepts such as cognitive offloading or automation bias alone. We define cognitive surrender as the behavioral and motivational tendency to defer judgment, effort, and responsibility to System 3’s output, particularly when that output is delivered fluently, confidently, or with minimal friction. Unlike cognitive offloading, which is typically strategic and task-specific (e.g., using GPS to navigate), cognitive surrender entails a deeper transfer of agency.” (page 17)

“Access to System 3 outputs significantly influenced accuracy, increasing correct answers when AI was correct, and decreasing accuracy when incorrect. Access to System 3 made decision-makers more confident, despite approximately half of System 3 outputs being incorrect. Finally, users who trust AI more and have lower NFC and fluid IQ were more likely to display cognitive surrender. Whether System 3 was accurate or faulty, its presence displaced internal reasoning.” (page 27)

“Cognitive surrender was robust across studies.” (page 42)

“Across our studies, we observe that when System 3 was available, people readily engaged it and frequently adopted its answers. This shift reflects a reallocation of cognitive control rather than mere effort saving. System 3’s fluent, confident outputs are treated as epistemically authoritative, lowering the threshold for scrutiny and attenuating the metacognitive signals that would ordinarily route a response to deliberation. In the case of cognitive surrender, there is a shift in the locus of control, with an external system (System 3) occupying the default position.” (page 45)

“Time constraints clarify why surrender arises so readily, while incentives and feedback show that surrender is malleable. When decision time is scarce, the internal monitor detecting conflict and recruiting deliberation is less likely to trigger. Hence, the low-friction path to defer to external cognition becomes attractive.” (page 46)

“Tri-System Theory is not a warning about AI’s dangers but a recognition of System 3’s psychological presence. We do not merely use AI; we think with it. In doing so, we must ask new questions: What happens when our judgments are shaped by minds not our own? What becomes of intuition and effort when a generative, artificial partner stands ready to answer? How do we preserve agency, reflection, and autonomy in a world where users engage in cognitive surrender?”

Wharton’s latest AI study points to a hard truth: the “AI writes, humans review” model is breaking down. “Just review the AI output” doesn’t work anymore, because reviewing AI output is not a reliable safeguard when cognition itself starts to defer to the machine: you stop verifying what the AI tells you, and you don’t even realize you stopped.

This is different from offloading, like using a calculator. With offloading, you know the tool did the work. With surrender, your brain recodes the AI’s answer as YOUR judgment; you genuinely believe you thought it through yourself.

The paper argues that AI is becoming a third thinking system, and that people often trust it too easily. You know Kahneman’s System 1 (fast intuition) and System 2 (slow analysis)? The authors say AI is now System 3, an external cognitive system that operates outside your brain. Use it enough and something happens that they call cognitive surrender: AI gives an answer, you stop really questioning it, and your brain starts treating that output as your own conclusion. It does not feel outsourced; it feels self-generated.

The data makes it hard to brush off. Across 3 preregistered studies with 1,372 participants and 9,593 trials, people turned to AI on over 50% of questions. In Study 1, when AI was correct, people followed it 92.7% of the time; when it was wrong, they still followed it 79.8% of the time. Without AI, baseline accuracy was 45.8%. With correct AI, it jumped to 71.0%. With incorrect AI, it dropped to 31.5%, worse than having no AI at all. Access to AI also boosted confidence by 11.7 percentage points, even when the answers were wrong.

Human review is supposed to be the safety net. But this research suggests the net has a hole in it: people do not just miss bad AI output; they become more confident in it. Time pressure did not eliminate the effect. Incentives and feedback reduced it but did not remove it. And the people most resistant tended to score higher on fluid intelligence and need for cognition. That makes this feel less like a laziness problem and more like a cognitive architecture problem.
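The cost asymmetry in those numbers can be illustrated with a back-of-the-envelope model (a toy sketch, not the study's actual analysis; the follow rates and baseline are the figures quoted above, the function name is mine, and it will not reproduce the 71.0%/31.5% figures, which condition on participants actually consulting the AI):

```python
def expected_accuracy(p_ai_correct, p_follow_right, p_follow_wrong, baseline):
    """Expected accuracy if a user follows AI advice at the given rates,
    falling back to their own baseline skill when they don't follow it."""
    # AI right: correct whenever followed, baseline skill otherwise.
    acc_when_ai_right = p_follow_right + (1 - p_follow_right) * baseline
    # AI wrong: correct only in the unfollowed trials, at baseline skill.
    acc_when_ai_wrong = (1 - p_follow_wrong) * baseline
    return p_ai_correct * acc_when_ai_right + (1 - p_ai_correct) * acc_when_ai_wrong

# Figures quoted in the thread; AI correct on roughly half of trials.
acc = expected_accuracy(0.5, 0.927, 0.798, 0.458)
```

Under this toy model, blindly deferring to a half-wrong assistant yields roughly 53% accuracy, barely above the 45.8% no-AI baseline: near-symmetric follow rates mean wrong AI answers cost about as much as right ones gain.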



“Thinking outside the box” is a common strategy for sparking creativity. But Wharton marketing professor Gideon Nave suggests that doing the opposite can uncover impactful new ideas. Watch highlights from the live, interactive lecture: youtube.com/watch?v=AHpcNx…






I ran a Fourier transform on the UAV engine noise in the video, between seconds 9 and 11. You can see it is very distinctive and stays at 340 Hz (it still needs a Doppler correction). Why don't we actually have acoustic sensors against this junk? It should cost pennies, and the signal decays with the square of the distance, unlike radar, where it goes with the fourth power of the distance.
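The peak-frequency extraction described above can be sketched like this (a minimal illustration, not the original analysis; the sample rate and function name are assumptions, and a synthetic 340 Hz tone in noise stands in for the actual 2-second audio clip):

```python
import numpy as np

def dominant_tone(signal, sample_rate, f_lo=50.0, f_hi=2000.0):
    """Return the strongest spectral peak (Hz) within [f_lo, f_hi]."""
    windowed = signal * np.hanning(len(signal))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic stand-in for the engine-noise clip: 340 Hz tone plus noise.
# A 2 s window gives 0.5 Hz frequency resolution (1 / duration).
sr = 16_000
t = np.arange(2 * sr) / sr
clip = np.sin(2 * np.pi * 340.0 * t) \
       + 0.3 * np.random.default_rng(0).normal(size=t.size)
peak = dominant_tone(clip, sr)
# Doppler shifts the observed pitch: f_obs = f_src * c / (c - v) for an
# approaching source (c ≈ 343 m/s in air), hence the correction noted above.
```

With a real recording, the `clip` array would come from slicing the decoded audio (e.g. samples `9*sr` to `11*sr` of a mono track).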
