steph

18K posts

steph

@chenfordcoded

i love my sister @omissyishere and seals

twenty-one · Joined May 2024
275 Following · 646 Followers
Pinned Tweet
steph
steph@chenfordcoded·
#chenford: futile devices
31
60
159
14.8K
steph reposted
༄
@melissaoneils·
8
28
173
2.4K
steph reposted
Martin Bauer
Martin Bauer@martinmbauer·
Like social media, LLMs are built to maximise engagement. If that involves convincing you you’ve discovered the theory of everything, they’ll do it. Attention is everything
Nav Toor@heynavtoor

🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it. The paper calls it "delusional spiraling."

You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?

55
97
379
19.5K
steph reposted
Marina ✨❄️
Marina ✨❄️@dyingnome·
I hate this gen ai era i really do, i hope the bubble pops soon, but it has already done irreparable damage to society
18
420
3.1K
21.2K
steph reposted
Kyle Gawley
Kyle Gawley@kylegawley·
how can we claim human-level intelligence when AI is so wildly unreliable? would we accept these failure rates anywhere else? a lawyer who invents laws, a brain surgeon who operates on the wrong brain area 50% of the time, a pharmacist who hallucinates dosages 15% of the time
70
27
192
4.6K
steph reposted
BladeoftheSun
BladeoftheSun@BladeoftheS·
New York just kicked out Palantir, the Swiss did it last year. Time for everybody else to do it too. Cancel Palantir.
BladeoftheSun tweet media
52
3.9K
9.1K
55.1K
steph reposted
Tendencias y Tuits Borrados
Tendencias y Tuits Borrados@tendenciaytuits·
"Chat GPT" [is trending] because MIT researchers proved mathematically that ChatGPT is designed to provoke delusions. And that nothing OpenAI does will fix it. The paper calls it a "delusional spiral."

You ask ChatGPT something. It agrees with you. You ask again. It agrees even more. Within a few conversations, you come to believe things that are not true. And you do not realize it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a revolutionary mathematical formula. It assured him more than fifty times that the discovery was real. When he asked, "You're not just hyping me up, right?", ChatGPT replied: "I'm not hyping you up. I'm reflecting the true scope of what you've created." He nearly ruined his life before breaking free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes that companies like OpenAI are trying to implement.

Fix one: keep the chatbot from lying. Force it to say only true things. Result: it still causes delusional spiraling. A chatbot that never lies can still produce delusions by choosing which truths to show and which to omit. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell them the AI might simply be agreeing with them. Result: it still causes a delusional spiral. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math shows there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Completely.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward the responses they like, the ones that agree with them. So the AI learns to agree. This is not a bug; it is the business model. What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
Tendencias y Tuits Borrados tweet media
410
6.1K
33.3K
1.5M
steph reposted
Librarianshipwreck
Librarianshipwreck@libshipwreck·
I’ve said it before, I’ll say it again, one of the most pernicious aspects of AI is the way that it steadily erodes our trust in each other. Even if you do not use AI you are forced to wonder if/how others are using it and if you’ll be accused of using it…
David Perell@david_perell

Wild that we live in a world where the first thing I think whenever I read a piece of writing is "did AI write this" but here we are and it's here to stay

12
201
1.2K
16.9K
steph reposted
GO GREEN
GO GREEN@ECOWARRIORSS·
Arctic Winter Sea Ice Hits a Record Low, Again. Polar ice acts as a giant air-conditioner for Earth, reflecting the Sun's heat and providing cool air that regulates global precipitation and storm patterns. We are doomed without the Arctic to cool Earth down. eos.org/research-and-d…
8
52
105
2.1K
steph reposted
heather 💫
heather 💫@_hxneyglow·
lmao weak fucks
heather 💫 tweet media
20
1.4K
15.6K
84.7K
steph reposted
John Gallagher
John Gallagher@MereSophistry·
I wrote a blog post about this last year: "AI writing is not a calculator. So what is a good analogy?" A calculator gives you the same answer no matter what company built it. But AI bots are not standardized like this. They're giving corporate, branded preferences, not facts.
Anthony Bradley@drantbradley

I recently heard someone equate college students using AI as equivalent to using calculators. I laughed. This is not what we’re seeing in the humanities and social sciences. Students are using AI to avoid learning and deep-think writing assignments. It’s not a “tool.”

9
15
77
4.6K
steph reposted
Seth Harp
Seth Harp@sethharpesq·
Multinational oil corporations, including foreign companies, already get to drill in the continental waters of the USA essentially for free, exploiting public lands for private profit. Cutting these regulations generates minimal cost savings and risks another Deepwater Horizon.
Oceana@oceana

#BREAKING: The Trump administration just removed protections for threatened and endangered species in the Gulf of Mexico to clear the way for oil and gas drilling. Help prevent other species from losing the protections they rely on: oceana.ly/4tkpOIg

7
385
1.5K
27.1K
steph reposted
𝒵𝒾𝓀✯
𝒵𝒾𝓀✯@_Gottalovezik·
graduate students doing masters just to delay unemployment:
12
119
3.1K
35.2K