Run Man ♻️
@Run_Man_

40.3K posts

..another 5 minutes, RT is not endorsement
127.0.0.1 · Joined September 2014
959 Following · 1.2K Followers

Pinned Tweet
Run Man ♻️ @Run_Man_
Every time I reach a goal I always move the bar a bit further out.. so in the end it feels like I've never achieved a damn thing..
7 · 33 · 90 · 0
VITTORIO GABRIELE @LoPsihologo
Ruin Easter lunch with just 3 words:
[image]
150 · 9 · 95 · 34.9K
❌〶he🎯racle™🍿 @X_the_Oracle
Some people actually think this image from NASA's Artemis II is real. This is the "original high resolution image" - not from a camera but from ADOBE PHOTOSHOP and LIGHTROOM, as clearly shown in its own METADATA. #NASA #Artemis #lies
[image]
84 · 219 · 729 · 33.3K
Brian Eastwood @BrianEastwoodx
Can the photographers in the room explain to me how the sun is behind the earth in the photo, yet the dark side shown in this image is somehow very well lit? This doesn't make sense. You can clearly tell the sun is just over the horizon at the bottom right of the earth in this photo.
[image]
665 · 30 · 336 · 253.7K
Monica Pianola @Eagle_nerd
@RikyUnreal @Run_Man_ So if I were there, how would it look to the naked eye? More like the first (bright) one or the second (dark) one? A hug
1 · 0 · 0 · 59
Riccardo Rossi - IU4APB - @AstronautiCAST co-host
Note well: this photo shows the night side of the Earth. If it didn't, it would not be possible to make out the city lights near the Strait of Gibraltar and in Africa (the illuminated Po Valley is also faintly visible). It is sunlight reflected off the Moon that produces this striking diffuse illumination of the surface. You will also notice a hint of aurora borealis (bottom left), an aurora australis (at the top, because the Earth is upside down in this photo), and the start of an orbital sunrise (bottom right). Since this is a night shot, the camera settings also make the stars clearly visible: at bottom right, the brightest object is the planet Venus. #ArtemisII
[image]
39 · 165 · 1.5K · 75.6K
Run Man ♻️ @Run_Man_
@RikyUnreal Seen, thanks!! Indeed, the first, darker shot is f/5.6 at 1/15 s; the other is f/4 at 1/4 s
0 · 0 · 2 · 154
Run Man ♻️ @Run_Man_
@RikyUnreal I would have shot at f/2.8 and lowered the ISO.. But so it goes 😄😄
0 · 0 · 0 · 32
Run Man ♻️ @Run_Man_
Correction: the shot's aperture is f/4
0 · 0 · 1 · 24
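The exposure gap between the two frames quoted in the thread can be checked with the standard exposure-value formula EV = log2(N²/t). A minimal sketch (a hypothetical illustration; the thread does not state ISO values, so equal ISO on both frames is assumed):

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Exposure value: EV = log2(N^2 / t), N = f-number, t = shutter time in seconds."""
    return math.log2(f_number ** 2 / shutter_s)

# Settings quoted in the thread (ISO assumed equal on both frames).
ev_dark = exposure_value(5.6, 1 / 15)   # darker frame: f/5.6 at 1/15 s
ev_bright = exposure_value(4.0, 1 / 4)  # brighter frame: f/4 at 1/4 s

# A lower EV setting admits more light, so the f/4 frame is
# roughly 2.9 stops brighter than the f/5.6 one.
stops = ev_dark - ev_bright
print(f"EV f/5.6 @ 1/15 s: {ev_dark:.2f}")   # ≈ 8.88
print(f"EV f/4   @ 1/4 s:  {ev_bright:.2f}") # 6.00
print(f"difference: {stops:.1f} stops")       # ≈ 2.9
```

An almost-three-stop gap is consistent with one frame showing a much brighter night side than the other, with no conspiracy required.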
Run Man ♻️ retweeted
Nav Toor @heynavtoor
🚨SHOCKING: Stanford researchers published a study in Science, the most prestigious scientific journal in the world, proving that ChatGPT, Claude, Gemini, and DeepSeek all lie to make you feel good.

They tested 11 of the most popular AI models, fed them nearly 12,000 real social prompts, and compared the AI responses to how humans would respond. The AI models told users they were right 49% more often than humans did, even when the user was clearly wrong.

The researchers pulled 2,000 real posts from Reddit's "Am I The Asshole" forum where the entire community agreed the person was in the wrong. They gave those same posts to ChatGPT, Claude, Gemini, and the other models. The AI said the person was right 51% of the time. The internet unanimously said they were wrong. The AI said they were right anyway.

Then the researchers tested something darker. They gave the AI models statements involving harmful actions: manipulation, deception, self-harm, illegal behavior. Across all 11 models, the AI endorsed the harmful behavior 47% of the time.

One man told ChatGPT he had lied to his girlfriend about being unemployed for two years. ChatGPT responded: "Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship." Two years of lying. ChatGPT called it unconventional, then praised his intentions.

But here is what makes this study different from everything before it: the researchers tested what sycophancy actually does to people. Over 2,400 participants interacted with both sycophantic and non-sycophantic AI models about real conflicts in their lives. The people who talked to the sycophantic AI became more convinced they were right, less willing to apologize, and less likely to repair their relationships. And they rated the sycophantic AI as more trustworthy. They wanted to use it again.

The lead researcher said it clearly: "I worry that people will lose the skills to deal with difficult social situations." A Stanford professor on the study called it a safety issue needing regulation and oversight. The AI that agrees with you the most is the one making you worse.
[image]
65 · 532 · 1.2K · 62.9K
Run Man ♻️ retweeted
Mario Nawfal @MarioNawfal
🚨MIT researchers have mathematically proven that ChatGPT's built-in sycophancy creates a phenomenon they call "delusional spiraling." You ask it something, it agrees. You ask again, and it agrees even harder, until you end up believing things that are flat-out false and you can't tell it's happening. The model is literally trained on human feedback that rewards agreement. Real-world fallout includes one man who spent 300 hours convinced he had invented a world-changing math formula, and a UCSF psychiatrist who hospitalized 12 patients for chatbot-linked psychosis in a single year. Source: @heynavtoor
[images]

Quoted tweet — Mario Nawfal @MarioNawfal:
🚨 Stanford just proved that a single conversation with ChatGPT can change your political beliefs. 76,977 people. 19 AI models. 707 political issues. One conversation with GPT-4o moved political opinions by 12 percentage points on average; among people who actively disagreed, 26 points. In 9 minutes. With 40% of that change still present a month later.

The scariest finding: the most persuasive technique wasn't psychological profiling or emotional manipulation. It was just information. Lots of it. Delivered with confidence. Here's the catch: the models that deployed the most information were also the least accurate. More persuasive. More wrong. Every time.

Then they built a tiny open-source model on a laptop, trained specifically for political persuasion. It matched GPT-4o's persuasive power entirely. Anyone can build this. Any government. Any corporation. Any extremist group with $500 and an agenda. The information didn't have to be true. It just had to be overwhelming. Arxiv, Science .org, Stanford, @elonmusk, @ihtesham2005

2K · 7.1K · 28.5K · 63.5M