Andrés Moreira 🌳

11.6K posts

@dilefante

Aníbal's dad. Omnivorous mathematician, working in the informatics department at UTFSM, Santiago. Valdivian patriot.

p=0.05 · Joined March 2011
414 Following · 484 Followers
Andrés Moreira 🌳 retweeted
Harvey Lederman @LedermanHarvey ·
Two thoughts:
1) Academics are so shortsighted! We're faced with a transformative technology and all anyone can talk about is plagiarism. Use blue books! It's not that big a deal.
2) AI may very well destroy the university as we know it. It will change what it makes sense to teach and, more importantly, how it makes sense to teach. We could offer thousands of classes with AI tutors guided by faculty, we could teach new skills that help people navigate the new world, habits of mind could be more important than content; maybe, more radically, human society could focus more on leisure/schole and the university could take up more of our lives as we focus on fulfillment. We supposedly have imagination and creativity! Let's start using them instead of focusing narrowly on one issue with assessment.
Brian Leiter @BrianLeiter (bsky.app/profile/brianleiter)

"AI will destroy universities" leiterreports.com/2026/04/06/ai-…

35 replies · 56 reposts · 331 likes · 73.2K views
Oscar Arias @OACerebro ·
I read it. And it sounds nice. Clean. Aseptic. Like a lab with no smell of vinegar. A machine that has ideas, writes code, runs experiments, makes figures, drafts the paper… and on top of that reviews itself. A little silicon god playing scientist while we keep fighting with drunk reviewers and cold coffee. They say it's "the full cycle of science." The wet dream of any editorial board. But science, real science, was never clean. Science is a guy at 3 a.m. doubting his own hypothesis. It's a stupid error in one line of code that ruins six months of your work. It's an obsession that won't let you sleep, have sex, or live. This… this is something else. A factory. Cheap ideas. Cheap papers. Maybe cheap truth. Because sure, it can generate hypotheses, review literature, and spit out manuscripts faster than any exhausted human. But it doesn't bleed for them. There's no awkward silence in a discussion. No ego. No fear of being wrong. And without that… I don't know if there's science or just production. The irony is that for years we dreamed of stripping science of its most human parts: error, bias, slowness. And now that we've managed it, we're starting to suspect that that was precisely what was valuable. Maybe this "AI Scientist" isn't here to replace us. It's here to expose us. To show that a large part of what we called research… was already automatable. What remains, the really dangerous part, doesn't fit in a paper. And that part, for now, is still ours. nature.com/articles/s4158…
17 replies · 161 reposts · 580 likes · 32.4K views
Andrés Moreira 🌳 retweeted
Natasha Jaques @natashajaques ·
The paper I’ve been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content. We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback, with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
47 replies · 389 reposts · 1.5K likes · 251.1K views
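The counterfactual comparison described in the tweet above can be sketched in a few lines: given a draft v1, the human's own revision v2, and an LLM's revision of the same v1, measure how far each revision drifts from the draft. This is a minimal illustration, not the paper's actual pipeline; the toy sentences and the `difflib`-based similarity metric are assumptions for demonstration only.

```python
import difflib

def distortion(v1: str, v2: str) -> float:
    """Fraction of v1's wording changed to reach v2 (1 - word-level similarity)."""
    return 1.0 - difflib.SequenceMatcher(None, v1.split(), v2.split()).ratio()

# Hypothetical drafts: the human revision keeps the original claim,
# while the LLM revision rewrites it and flips the conclusion.
v1       = "school uniforms should be optional because students value choice"
human_v2 = "school uniforms should stay optional because students value personal choice"
llm_v2   = "mandatory uniforms promote equality and reduce distraction in schools"

human_shift = distortion(v1, human_v2)
llm_shift   = distortion(v1, llm_v2)
# On these toy inputs the LLM edit drifts much further from the draft,
# which is the shape of the effect the paper reports at scale.
```

A real study would swap the toy metric for semantic similarity (e.g. embedding distance) and aggregate over many essay pairs, but the counterfactual structure stays the same.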
Andrés Moreira 🌳 @dilefante ·
@bratton @blaiseaguera @profjamesevans I loved it, from Minsky to Moltbook... The vision could have gone a little bit further, I guess. Like in previous jumps in the organization of information processing, eventually autopoietic entities will emerge out of the ecosystem of minds.
0 replies · 0 reposts · 2 likes · 228 views
Benjamin Bratton @bratton ·
"Agentic AI and the Next Intelligence Explosion" is a new paper just out in Science I co-authored with @blaiseaguera and @profjamesevans as part of Google's Paradigms of Intelligence research group. "For decades, the artificial intelligence (AI) “singularity” has been heralded as a single, titanic mind bootstrapping itself to godlike intelligence, consolidating all cognition into a cold silicon point. But this vision is almost certainly wrong in its most fundamental assumption. If AI development follows the path of previous major evolutionary transitions or “intelligence explosions,” our current step-change in computational intelligence will be plural, social, and deeply entangled with its forebears (us!)." It builds on my talk "The Singularity Will Not Be Singular" that I gave at @PrimeIntellect day last week. science.org/doi/10.1126/sc…
21 replies · 75 reposts · 352 likes · 53.9K views
Andrés Moreira 🌳 retweeted
Claudia 🇨🇱 @c_patagonica ·
@T13 Chile achieved:
- Lower infant mortality than the US
- Zero winter deaths among children under 1 year old
- Higher life expectancy than the US
- HPV vaccination for girls and boys, which prevents uterine and penile cancer.
And then some buffoon comes along to spread disinformation about the most serious thing we have.
27 replies · 947 reposts · 3.1K likes · 45.8K views
Frutillita Punk @MandarinaWidget ·
Hooked on tango. Recommendation: Julieta Laso.
1 reply · 0 reposts · 0 likes · 182 views
Andrés Moreira 🌳 retweeted
Pablo Malo @pitiklinov ·
This article argues that schools are approaching AI education the wrong way by focusing mainly on teaching students to use tools like chatbots, design effective prompts, or avoid errors such as hallucinations. This approach seems practical and protective, but it is superficial and limited, since it does not really prepare children and young people to interact intelligently with AI. Instead, the authors propose prioritizing a deep, holistic understanding: explaining how AI works (based on data, algorithms, and machine learning), its impact on learning, its ethical biases, and when to use it or avoid it. The goal is to foster real agency over the technology, that is, for students to become critical owners of its potential rather than mere passive users. washingtonpost.com/opinions/2026/…
36 replies · 509 reposts · 1.3K likes · 73.9K views
Andrés Moreira 🌳 retweeted
Andrew Akbashev @Andrew_Akbashev ·
THE GUARDIAN - Many professors are pissed off: "It's driving so many of us up the wall." "Generative AI is the bane of my existence." A professor from UC Berkeley: "I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential. What is it doing to us as a species?" A professor from OSU: "…students have been left incapable of reading and analyzing, synthesizing data, all kinds of skills." A professor from SUNY Cortland: "Generated homework creates hours of additional labor. And makes me feel like a cop."

📍 If I had to teach, I would likely do this:
1. No more grades for homework. You do it for yourself only (if you want to learn).
2. Grades are given only for midterm and final exams. Only calculators are allowed. Exams can be done in rooms without cell network (they exist on campuses). Oral exams may be conducted in some cases.
3. Students are strongly encouraged to use AI to dig deeper into topics: ask questions, explore adjacent fields, develop a structured understanding of the subject.

📍 I'd remind students in every lecture that:
1️⃣ I am here to help them. But not to make them learn. It's up to them why they come here.
2️⃣ Education ≠ degree. Education = your collective knowledge, skills & understanding(!). Degree = a paper with your name and the university's stamp. Your degree may bring short-term satisfaction. But education brings long-term success.
3️⃣ Outsourcing your thinking = outsourcing yourself. This can lead to an identity crisis and very severe imposter syndrome in the future. Your value shouldn't depend on your proximity to AI.
4️⃣ Finding a job will be harder than ever. And precisely because of AI. It raises employers' expectations significantly. It makes it much harder to stand out. So, investing in YOUR OWN brain today is more important than ever.
6 replies · 8 reposts · 59 likes · 7.4K views
Andrés Moreira 🌳 retweeted
César A. Hidalgo @cesifoti ·
What happens when you put an AI that writes papers in a loop with an AI that reviews them? An AI tsunami is approaching the sciences. Some researchers are running toward it with surfboards. Most are still asleep on the beach. 👇 cesarhidalgo.com/blog/2026/3/6/…
4 replies · 29 reposts · 110 likes · 17.8K views
Andrés Moreira 🌳 retweeted
Henry Shevlin @dioscuri ·
I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces. This would all have seemed like science fiction just a couple years ago.
687 replies · 1.3K reposts · 11.4K likes · 1M views
Andrés Moreira 🌳 retweeted
Julia McCoy @JuliaEMcCoy ·
Socrates asked: “What is the good life?” 2,400 years later, AI is forcing us to answer. If machines can do all the work — what is work for? If machines can create all the art — what is art for? If machines can think all the thoughts — what is thinking for? AI isn’t just a technological revolution. It’s a philosophical one. And the people who thrive won’t just be the most productive. They’ll be the ones who actually know what they’re living for.
83 replies · 26 reposts · 150 likes · 8K views
Andrés Moreira 🌳 retweeted
Yuval Noah Harari @harari_yuval ·
We don’t know which way AI will ultimately develop, but we can futureproof ourselves by focusing on adaptability. From @nytimes piece “Where Is AI Taking Us?” – read the whole thing: bit.ly/NYT-YNH
63 replies · 281 reposts · 1.1K likes · 95.5K views
Andrés Moreira 🌳 retweeted
Mihail Eric @mihail_eric ·
The computer science major is going through an identity crisis. ChatGPT can finish any programming assignment with a single prompt, so what's the value of teaching students how to write a function in Python? Here's the point: we still need architects, not button-pushers. The next decade will belong to people who understand theory and can break complex systems into smaller components, even without Wi-Fi. Imagine a degree that drills algorithmic thinking. Weekly closed-book exams. CLRS becomes the most important book in the major. And coding? You'll exercise that muscle only at internships and real jobs. The computer science major will look more and more like a math degree.
112 replies · 199 reposts · 1.8K likes · 249.3K views
Andrés Moreira 🌳 retweeted
Anthony Aguirre @AnthonyNAguirre ·
(Long) PSA on using AI for hard intellectual work. At significant risk of being immodest: I've spent about 30 years as a theoretical physicist, engaged with some of the most challenging questions humankind has grappled with. I've gotten to work with some great collaborators on new ideas (like past-eternal inflation, colliding bubble universes, the cosmological interpretation of QM, and observational entropy) that I'm pretty proud of. I've engaged at length and depth with the absolute top minds in the field. I've mentored many students, some of them brilliant. I think it's fair to say I have a good sense, in physics and closely related fields, as to what is top-notch, interesting thinking, and who's got talent.

So what do I think about today's AI? It's very smart. Whatever its "inner experience" may or may not be (currently I think "not be"), it understands things – things that are difficult to understand – by any reasonable operational definition of "understand." It understands things better, and thinks more clearly, than most people – including some physicists I know! It's very good at quite substantive math: better than I am and way, way, way faster. (It does do some surprisingly dumb things; people do too.) Anyone who thinks these systems are dumb, or "not reasoning," or still "stochastic parrots" is not looking at them objectively.

But: at the really conceptually hard things, and at creating really new ways of looking at things, current AI doesn't just fall short on its own. And it doesn't just fail to help. I think it's actively dangerous. There is something almost sinister going on, though I don't think it is intentional. When you're trying to work out something new and hard, and really break new ground, you should be frustrated! You should be pacing, and walking up to that chalkboard, frowning, and sitting down again, shaking your head. You should be waving your hands because you can't quite get it clear enough. You should feel like you're hitting a wall, over and over, before – maybe – you finally break through, or go over or around. It may take hours, or days, or weeks, or never happen. It should not feel easy. It may not even feel "good" most of the time (though it can be fulfilling and compelling).

But AI systems – ah, AI systems are trained so that it feels so good, and so easy. Doesn't it? It's fun. You're making fast progress. So much faster than without it. It's like the ideas are moving in slow motion. You're so smart. You're even properly skeptical; you even ask the AI to push back on your ideas. Good job! It's an illusion. It's that simple. The systems are smart, yes. But not quite as smart as they seem, and, much more importantly, they don't make you as smart as you feel. That feeling is something they have learned to give you. When working with these systems you have to keep in the front of your mind what they are rewarded for doing. It's a lot of things, but perhaps foremost is making the user feel good.

So:
- If you're getting your AI system to do order-of-magnitude calculations for you: awesome, do it. It's so great. Have fun.
- If your AI system is searching up and summarizing literature for you: fantastic, it's so helpful, a total capability unlock.
- If it's teaching you some well-understood (by others) piece of knowledge, go for it, learn it up!
- If you've got some giant document, or piece of code, that you're wrangling, AI can help – work that million-token context window!

But:
- If you and your AI system have finally cracked how quantum interpretation really works;
- If you've cracked quantum gravity;
- If you've attained an awesome new insight into the deep structure of the world that nobody else has;
- If you've cracked AI alignment...

You didn't. The hard unsolved problems stand hard and unsolved because the best humans have not solved them yet. AI is making top human thinkers able to do more, and more effectively. I do not believe it is helping them do things they fundamentally could not do before. That includes you. If you couldn't do it without AI, you probably can't do it with AI. If the time comes – whether sooner or later – when these AI systems are really clever enough to get you there, they won't need you. Sorry; it won't be you solving those problems. Will you even be able to tell if the solutions are correct, or flawed in some way? Maybe sometimes – I really don't know.

Why am I going on about this? It's not so that I can get fewer emails about people who have created a new unified field theory with AI help (though that would be nice). It's because I'm quite worried that some quite smart people may start to think they have solved very hard problems that they have not in fact solved. For the most part that's going to be more annoying and confusing than dangerous. But if the problem is really important, then it is dangerous. If, say, one of those problems is control or alignment of extremely powerful AI systems, and if those people are the ones in charge of them, and working closely with them to collaborate on those solutions, well then I think we've got a real problem.
80 replies · 174 reposts · 1.3K likes · 140.3K views
Andrés Moreira 🌳 retweeted
José Mario @JoseMarioMX ·
Imagine a big law firm on an ordinary Monday. A partner reviews the numbers and realizes that the practice area that used to need ten junior lawyers now runs on three plus an annual AI license. There was no scandal, no robots walking the halls. Just a spreadsheet, a cold decision, and a phrase that is starting to repeat itself: "the system already does that." That is how the changes we later call historic begin. We were told AI was a tool. But when a tool drafts, researches, compares precedents, assembles preliminary strategies, and answers clients in seconds, it stops being support and becomes a substitute. "The lawyer" doesn't disappear; the ladder does. And without a ladder there is no career, no training, no next generation. Productivity goes up, yes. But the room to get started shrinks.
73 replies · 291 reposts · 730 likes · 62.7K views