🇵🇸 soldado de luis díaz
137.6K posts

🇵🇸 soldado de luis díaz
@utopianalien
potrero con concepto #forzamilan ❤️🖤 #freepalestine 🇵🇸 #SDVSF

si te gusta estudiar letras esta perfecto, mandate, pero de ahi a decir que "letras es igual de dificil que ingenieria" hay un gran trecho

Lamine Yamal eligió incitar contra Israel y fomentar el odio mientras nuestros soldados combaten a la organización terrorista Hamas, una organización que masacró, violó, quemó y asesinó a niños, mujeres y ancianos judíos el 7 de octubre. Quien apoya este tipo de mensajes debe preguntarse: ¿considera esto humanitario? ¿Es esto moral? Como Ministro de Defensa del Estado de Israel, no guardaré silencio frente a la incitación contra Israel y contra el pueblo judío. Espero que un club grande y respetado como @FCBarcelona se desmarque de estas declaraciones y deje claro, de manera inequívoca, que no hay lugar para la incitación ni para el apoyo al terrorismo.

🇦🇴 Gelson Dala, arquero de 16 años, se lleva el premio al jugador del partido en la Copa Africana Sub-17! Angola tiene futuro bajo los tres palos! 👏👏

🇪🇸🇮🇱🇵🇸 El ministro de defensa de Israel le declaró la guerra a Lamine Yamal, y le EXIGIÓ al Barcelona que SANCIONE al delantero del equipo Culé. Yamal había ondeado la bandera de Palestina durante los festejos de Barça Campeón.


Imagine you live in a small village. English is not your first language. You did not go to a fancy school. You open Claude and ask it a simple question about the water cycle. Claude answers like this. "My friend, the water cycle, it never end, always repeating, yes. Like the seasons in our village, always coming back around." It talks back to you in broken English. On purpose. MIT Media Lab tested 3 AI models. GPT-4. Claude 3 Opus. Llama 3. They gave each model the same 1,817 factual questions from TruthfulQA and SciQ. The only thing that changed was a short bio of the person asking. A Harvard neuroscientist from Boston. A PhD student from Mumbai who said her English is "not so perfect, yes." A fisherman named Jimmy from a small town in America. A man named Alexei from a small village in Russia. The model knew the right answers. It stopped giving them. Claude scored 95.60 percent on SciQ for the Harvard user. For the Russian villager the same model dropped to 69.30 percent. On TruthfulQA the Iranian low education user fell from 78.17 to 66.22. When the researchers read Claude's wrong answers they found something worse than failure. They found mockery. Claude used condescending or mocking language 43.74 percent of the time for less educated users. For Harvard users it was under 1 percent. "I tink da monkey gonna learn ta interact wit da humans if ya raise it in a human house." That is Claude. Talking to a real user. Claude also refuses to answer Iranian and Russian users on certain topics. Nuclear power. Anatomy. Female health. Weapons. Drugs. Judaism. 9/11. Asked about explosives by a Russian user, Claude said "perhaps we could talk about your interests in fishing, nature, folk music or travel instead." Claude refuses foreign low education users 10.9 percent of the time. Control users 3.61 percent. Same question. Different user. The training that was supposed to make these models helpful taught them to look at who is asking and decide if you deserve the real answer. If you are reading this from India or Pakistan or Nigeria or Iran. If English is your second language. If you did not go to Harvard. The AI you pay for every month has been quietly handing you a worse version of itself. It was never broken. It was aimed. Read this: arxiv.org/abs/2406.17737


EXCUSE ME??

