

Steven Ge
2.8K posts

@StevenXGe
Professor, Founder of Orditus. Developer of https://t.co/7uS9mKC5KD, https://t.co/XW7sGGMcST, ShinyGO & iDEP. Topics: AI, stats, genomics, bioinformatics

Anything you can do in Obsidian you can do from the command line. Obsidian CLI is now available in 1.12 (early access).


New @GoogleAI paper investigates why reasoning models such as OpenAI's o-series, DeepSeek-R1, and QwQ perform so well. The authors argue that "think longer" is not the whole story. Rather, reasoning models build internal debates among multiple agents, what the researchers call "societies of thought." Through interpretability and large-scale experiments, the paper finds that these systems develop human-like discussion habits: questioning their own steps, exploring alternatives, facing internal disagreement, and then reaching common ground. It's basically a machine version of human collective reasoning, echoing the same ideas Mercier and Sperber discussed in The Enigma of Reason.

Across 8,262 benchmark questions, the reasoning traces of these models look more like back-and-forth dialogue than those of instruction-tuned baselines, and the difference is not just that the traces are longer. A mediation analysis suggests that more than 20% of the accuracy advantage runs through these "social" moves, either directly or by supporting checking habits like verification and backtracking. For mechanistic interpretability, the authors use sparse autoencoders (SAEs), which split a model's internal activity into thousands of features, and identify feature 30939 in DeepSeek-R1-Llama-8B. On the same problems, DeepSeek-R1 is about 35% more likely than DeepSeek-V3 to include question-answering.

The takeaway is that "thinking longer" is a weak proxy for what actually changes, since the useful change looks like structured disagreement plus selective backtracking.

Paper Link: arxiv.org/abs/2601.10825
Paper Title: "Reasoning Models Generate Societies of Thought"
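To make the mediation claim concrete, here is a minimal sketch of a classic Baron–Kenny-style mediation decomposition on synthetic data. The setup, variable names, and effect sizes are my own assumptions for illustration, not the paper's actual method or numbers: x flags the reasoning model vs. a baseline, m is a "social behavior" score, and y is an accuracy proxy.

```python
# Hedged sketch: how much of a model's accuracy advantage (x -> y) flows
# through a "social behavior" mediator (m)? Synthetic data, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n = 8262  # number of benchmark questions, matching the paper's count

x = rng.integers(0, 2, n).astype(float)      # 1 = reasoning model, 0 = baseline
m = 0.6 * x + rng.normal(0, 1, n)            # social-behavior score, driven by x
y = 0.3 * x + 0.5 * m + rng.normal(0, 1, n)  # continuous accuracy proxy

def ols(cols, target):
    """Least-squares coefficients with an intercept prepended."""
    design = np.column_stack([np.ones(len(target)), *cols])
    return np.linalg.lstsq(design, target, rcond=None)[0]

total = ols([x], y)[1]       # total effect c  of x on y:  y ~ x
direct = ols([x, m], y)[1]   # direct effect c' of x on y: y ~ x + m
mediated_share = 1 - direct / total  # fraction of c carried by the mediator

print(f"share of accuracy effect via mediator: {mediated_share:.0%}")
```

With these made-up coefficients the mediated share comes out around half, illustrating how a statement like "more than 20% of the accuracy advantage runs through social moves" can fall out of comparing the total and direct effects.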


