S.
@freikorps01

memes and my opinions

Joined February 2026
13 Following · 13 Followers
57 posts
S. @freikorps01
All of humanity’s problems stem from a man’s inability to sit quietly in a room alone
Replies 0 · Reposts 0 · Likes 1 · Views 45
S. retweeted
Mustafa @mustafaaleem
That’s so rude, Shehzad Roy?? 😡 Please do it again .. 😜 I like it !!
Replies 189 · Reposts 1.7K · Likes 8.7K · Views 565.5K
S. retweeted
Emir Han @RealEmirHan
Ryan Gosling using Gen Z slang for one minute 😭
Replies 280 · Reposts 3.7K · Likes 33.2K · Views 2.1M
S. retweeted
Vinod Kapri @vinodkapri
This is next level trolling. Iran just turned Trump into a meme factory 😊
Replies 168 · Reposts 5K · Likes 19.4K · Views 741K
S. @freikorps01
Vanilla Sky opens with Radiohead and ends with Sigur Rós, and that’s all you need to know really
Replies 0 · Reposts 0 · Likes 1 · Views 25
S. retweeted
sui ☄️ @birdabo
this story is absolutely insane 🤯
> tech guy with zero biology background.
> his dog got terminal cancer.
> vets said 1 - 6 months left.
> bro said nah not on my watch.
> asked ChatGPT for a treatment plan.
> sequenced tumor DNA for $3k.
> used AlphaFold AI to model mutated proteins.
> designed world’s first personalized mRNA vaccine for a dog.
> partnered with universities to synthesize it.
> ethics approval took 3 months.
> vaccine design took 2 months.
> first injection December 2025.
> tumors shrank 75% within weeks.
> dog happy.
> universities confirmed it worked.
> now designing version 2 for remaining tumor.
AI + a guy determined to save his dog just outperformed the pharma industry 💀 the cure for cancer will be open source.
vittorio @IterIntellectus

this is actually insane
> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”
one man with a chatbot, and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I don’t think people realize how good things are going to get

Replies 340 · Reposts 3.9K · Likes 36.7K · Views 2.3M
S. retweeted
🇨🇳XuZhenqing徐祯卿 @XueJia24682
✨🇨🇳This is the customized parking feature of XPeng Motors from China. You mark where you want to park, and it will automatically park there precisely.
Replies 376 · Reposts 1.5K · Likes 29.5K · Views 6M
S. retweeted
Autism Capital 🧩 @AutismCapital
“The US economy is a ponzi scheme. It’s dependent on the GCC investing in AI and tech stocks. If this financing stops the US economy could face collapse and this means that the young man could not afford their OnlyFans and this could lead to a revolution in the streets.” LOL! 😂
Replies 292 · Reposts 814 · Likes 7.5K · Views 630.8K
S. retweeted
Wholesome Side of 𝕏 @itsme_urstruly
Using your adult money to buy all the toys you wanted as a kid hits differently 😂🔥
Replies 118 · Reposts 650 · Likes 4.5K · Views 159.5K
S. retweeted
Nav Toor @heynavtoor
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would.

That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
Replies 1.5K · Reposts 16.5K · Likes 48.7K · Views 9.9M
S. @freikorps01
Professor saab, stick to geopolitics only. Theology isn’t your thing.
Replies 0 · Reposts 0 · Likes 2 · Views 30
S. retweeted
Retard Finder @IfindRetards
The Matrix in 2026 😂
Replies 368 · Reposts 4.6K · Likes 23.1K · Views 749K
S. @freikorps01
If the Middle East still had dictators we wouldn’t have mass immigration problems in Europe
Replies 0 · Reposts 0 · Likes 1 · Views 27
Alishba 🍉 @alishbahaha_
Sending missiles to each other is like sending reels to each other
Replies 3 · Reposts 12 · Likes 83 · Views 2.6K
S. retweeted
Aakash Gupta @aakashgupta
Everyone’s missing the real story here. Meta’s Ray-Ban glasses need human data annotators to train the AI.

When you say “Hey Meta” and ask the glasses to analyze something, that video gets sent to Meta’s servers, then routed to Sama, a subcontractor in Nairobi, Kenya. Workers there manually label objects in your footage. They see everything you recorded, intentionally or not.

7 million pairs sold in 2025 alone. Every single pair generates training data that flows through human eyes in Kenya. Workers told Swedish journalists they see people undressing, using bathrooms, having sex, and accidentally filming bank card details. One worker said “we see everything, from living rooms to naked bodies.”

Meta’s automatic face anonymization is supposed to protect people in the footage. Workers say it fails in certain lighting. Faces that should be blurred are sometimes fully visible. The person you recorded without knowing? A stranger in Nairobi can identify them.

Buried in Meta’s terms of service is one sentence doing enormous legal work: the company reserves the right to conduct “manual (human) review” of your AI interactions. That’s the legal cover for routing intimate footage from Western homes to a $2/hour labor force operating under NDAs, office surveillance cameras, and a strict no-questions policy. Workers say if you raise concerns about what you’re seeing, you’re fired.

This is the same company, Sama, that TIME exposed in 2023 for paying Kenyan workers $2/hour to label graphic content for OpenAI while being billed at $12.50/hour per worker. Workers described the experience as torture. Sama ended that contract, then pivoted to labeling Meta’s glasses footage. Same workforce. Same rates.

Meta markets these glasses as “designed with your privacy in mind.” The privacy design is a tiny LED light on the frame that most people don’t notice. The data pipeline behind it routes your bedroom footage to a contractor with a documented history of worker exploitation, failed anonymization, and union-busting lawsuits.

And the next generation of these glasses? Meta is planning to add facial recognition. The same system that can’t reliably blur faces in training data wants to start identifying them on purpose. The LED light on the frame is doing about as much for your privacy as the terms of service nobody reads.
Shibetoshi Nakamoto @BillyM2k

why the fuck meta employees watching videos their users are taking

Replies 438 · Reposts 14.9K · Likes 47.9K · Views 4.9M
S. retweeted
Sunil Rao @memer_mitron
[media-only post]
Replies 84 · Reposts 1.3K · Likes 17.6K · Views 271K
S. @freikorps01
Once critique becomes chorus, diplomacy loses its quiet room. Frustration is no longer diplomatic. It’s cultural. And once it’s cultural, it’s combustible. Sitting ministers ought to be more careful.
Replies 0 · Reposts 0 · Likes 0 · Views 39