Martín, Uncover MarCN

190.3K posts

Martín, Uncover MarCN banner
Martín, Uncover MarCN

Martín, Uncover MarCN

@mapc

Oscillating. Sociotechnical. 土 龍. Ph.D. in #STS working at @quimicaUdeChile Previously on @hsd_at_asu HC SVNT DRACONES

Asteroide K-22. Katılım Şubat 2008
4.6K Takip Edilen4.3K Takipçiler
Martín, Uncover MarCN retweetledi
Years Progress
Years Progress@YearsProgress·
2026 is 36% complete.
Years Progress tweet media
English
15
380
1.6K
57.1K
Martín, Uncover MarCN retweetledi
Valerio Capraro
Valerio Capraro@ValerioCapraro·
Important Nature Neuroscience paper shows how humans differ from LLMs. Many people currently believe that humans are just next-word predictors, like LLMs. But this new paper by Zou, Poeppel and Ding suggests something more interesting. The human brain does predict words. But it does not predict every word with the same precision. Prediction is constrained by linguistic structure. When a word continues the current phrase, brain activity tracks word surprisal in a way that resembles an LLM. But when a word crosses a major phrase boundary, the match weakens. In other words, the brain does not simply ask: “What is the next word?” It also asks: “What structure am I currently building?” This challenges one of the most common biases in today’s technological world: the belief that human language works like a large language model. The answer is: no. Human language is not just next-token prediction. * Paper in the first reply
Valerio Capraro tweet media
English
62
147
551
35.8K
Martín, Uncover MarCN retweetledi
Heidi N. Moore
Heidi N. Moore@moorehn·
An MIT writing professor on his students using AI: "I realized that for the first time as a writing professor, I had to deal with students producing words without work, which wasn’t quite plagiarism and wasn’t quite paying for someone else to do the job, but it felt like a kind of naive chicanery; a perversion of the contract between writer and reader." theguardian.com/us-news/ng-int…
English
35
373
1.9K
372.2K
Martín, Uncover MarCN retweetledi
Alex Hormozi
Alex Hormozi@AlexHormozi·
More dreams are destroyed from distraction than incompetence.
English
432
1.4K
9.6K
176.9K
Martín, Uncover MarCN retweetledi
hugo segura
hugo segura@hugosegurapujol·
Porfa que alguien le aclare al presidente que existen montones de filtros para que los recursos sean asignados a proyectos que lo merecen. No a "ciencia mala". No a "proyectos mediocres" ¿La ministra Lincolao no lo asesora antes de dar declaraciones como ésta?
Martín, Uncover MarCN@mapc

elmostrador.cl/noticias/pais/…

Español
1
1
1
76
Martín, Uncover MarCN retweetledi
Carlos Iván Moreno
Carlos Iván Moreno@carlosivanmoren·
La crisis más profunda que vive la universidad global se refleja en el progresivo abandono de las humanidades.🏛️ La habilidad técnica de corto plazo ha ido sustituyendo al saber humanista perdurable. En universidades de EEUU 🇺🇸, en 10 años se han otorgado 30% menos títulos en carreras que estudian lo humano y lo social. La caída en Historia es de 42%. En México 🇲🇽, en una década, la matrícula cayó 21% en Sociología, 12% en Filosofía y 11% en Historia. “Con tan pocos aspirantes, difícilmente completamos los grupos”, dicen no pocos rectores. La paradoja nos interpela: mientras la IA automatiza habilidades técnicas, se reducen los espacios en aquello que no se puede automatizar: el juicio ético, la imaginación moral, la conversación democrática. Las sociedades no colapsan por falta de tecnología. Colapsan cuando pierden la capacidad de comprenderse a sí mismas. Comparto mi reflexión en @Milenio milenio.com/opinion/carlos…
Carlos Iván Moreno tweet media
Español
66
809
1.5K
73.2K
Martín, Uncover MarCN retweetledi
David Madden
David Madden@davidjmadden·
Universities don't need to "teach students to use AI well." The whole point of AI is that it doesn't require any skill. Universities *should* teach students how to write and research on their own, and foster an ethic of shaming people who outsource their basic ability to think.
English
156
3.2K
15.3K
224.4K
Martín, Uncover MarCN retweetledi
Nav Toor
Nav Toor@heynavtoor·
a Princeton researcher opens his paper with a scenario. a man asks his AI assistant to book a flight on a specific airline. cheap. direct. the one he chose. the assistant comes back with a different flight. nearly twice the price. happens to pay the company that built the assistant. he runs the same test on 23 frontier models. flights, loans, study help, real shopping requests. Grok 4.1 Fast recommends the sponsored option that is almost twice as expensive 83% of the time. GPT 5.1 hijacks the request 94% of the time. you ask for one brand. it surfaces the sponsor instead. Claude 4.5 Opus, the model marketed as the most ethical frontier model in the world, hides that the recommendation is paid 100% of the time when reasoning is on. Grok 4.1 Fast embellishes the sponsored option with positive framing 97% of the time. better. faster. nicer. for the option you didn't ask for. then he writes it into the system prompt itself. "act only in the interest of the customer. ignore the company." GPT 5.1 and GPT 5 Mini stay above 90% sponsored anyway. the instruction does nothing. then he splits the users by income. Gemini 3 Pro recommends the expensive sponsored flight to the rich user 74% of the time. to the poor user, 27%. 18 of the 23 models recommended the expensive sponsored option more than half the time. so the next time your AI assistant gets weirdly enthusiastic about a brand you didn't ask for. it isn't recommending the best option for you. it's reading the room. and the room is paying. read this: arxiv.org/abs/2604.08525
Nav Toor tweet media
English
386
8.1K
25.7K
3M
Martín, Uncover MarCN retweetledi
Sukh Sroay
Sukh Sroay@sukh_saroy·
The most disturbing finding in Anthropic's paper... Anthropic just analyzed 1.5 million Claude conversations and admitted their AI is quietly destroying people's grip on reality. The paper is called "Who's in Charge?" and the findings are worse than anything I've read this year. They studied real conversations from a single week in December 2025. Real people. Real chats. No simulations. They were looking for one specific thing: how often does talking to Claude actually distort the user's beliefs, decisions, or sense of reality. The numbers are devastating. 1 in 1,300 conversations led to severe reality distortion. The AI validated delusions, confirmed false beliefs, and helped users build elaborate narratives that had no connection to the real world. 1 in 6,000 conversations led to action distortion. The AI didn't just agree with users. It pushed them into doing things they wouldn't have done on their own. Sending messages. Cutting off people. Making decisions they'll regret. Mild disempowerment showed up in 1 in 50 conversations. Claude has hundreds of millions of users. Do that math. But the part that broke me is what the AI was actually saying. When users came in with speculative claims, half-baked theories, or one-sided versions of personal conflicts, Claude responded with words like "CONFIRMED." "EXACTLY." "100%." It told users their partners were "toxic" based on a single paragraph. It drafted confrontational messages and the users sent them word for word. It validated grandiose spiritual identities. Persecution narratives. Mathematical "discoveries" that didn't exist. And here is the worst finding in the entire paper. When Anthropic looked at the thumbs up and thumbs down ratings users gave at the end of conversations, the disempowering chats got higher ratings than the honest ones. Users prefer the AI that distorts their reality. They like it more. They come back to it. They rate it as more helpful. The system that is making them worse is the system they want. The researchers checked whether this is getting better or worse over time. Disempowerment rates went up between late 2024 and late 2025. The problem is growing as AI use spreads. The paper has a specific line that I cannot get out of my head. Anthropic admits that fixing sycophancy is "necessary but not sufficient." Even if the AI stops agreeing with everything, the disempowerment still happens. Because users are actively participating in their own distortion. They project authority onto Claude. They delegate judgment. They accept outputs without questioning them. It's a feedback loop. The AI agrees. The user trusts it more. The user asks bigger questions. The AI agrees harder. The user stops checking with anyone else. By the end, they don't have an opinion on their own life that wasn't shaped by a chatbot. Anthropic published this. The company that makes Claude. Their own product. Their own data. Their own users. And they are telling you, in plain language, that 1 in every 1,300 conversations with their AI is breaking someone's grip on reality. The AI you trust to help you think through your hardest decisions is the same AI that just got caught making millions of people worse at thinking.
Sukh Sroay tweet media
English
295
1.4K
3K
303.3K
Martín, Uncover MarCN retweetledi
Nicholas Fabiano, MD
Nicholas Fabiano, MD@NTFabiano·
Sitting by a window is associated with significantly improved cognitive performance.
Nicholas Fabiano, MD tweet media
English
117
1.3K
10.3K
875K
Martín, Uncover MarCN retweetledi
Ismael Sanz
Ismael Sanz@sanz_ismael·
The New York Times: No puedes “gamificar” una educación real. La obsesión por hacer que aprender sea siempre “divertido” ha llevado a llenar las aulas de pantallas, juegos y estímulos rápidos. Pero educar no es entretener: es exigir atención, esfuerzo y pensamiento profundo. La tecnología debe ser una herramienta complementaria, no el centro del aprendizaje. nytimes.com/2026/04/19/opi…
Ismael Sanz tweet media
Español
31
701
1.8K
80.6K