๐š๐š‘๐šŽ ๐šŠ๐šŒ๐šŒ๐š˜๐šž๐š—๐š๐šŠ๐š—๐š

4K posts

๐š๐š‘๐šŽ ๐šŠ๐šŒ๐šŒ๐š˜๐šž๐š—๐š๐šŠ๐š—๐š banner
๐š๐š‘๐šŽ ๐šŠ๐šŒ๐šŒ๐š˜๐šž๐š—๐š๐šŠ๐š—๐š

๐š๐š‘๐šŽ ๐šŠ๐šŒ๐šŒ๐š˜๐šž๐š—๐š๐šŠ๐š—๐š

@defaultRuntime

Founder @ringbook_io. A portal between accountants and everyone. #Startup

Joined February 2021
2.1K Following · 310 Followers
๐š๐š‘๐šŽ ๐šŠ๐šŒ๐šŒ๐š˜๐šž๐š—๐š๐šŠ๐š—๐š
omg
Sukh Sroay@sukh_saroy

🚨BREAKING: If you've used ChatGPT for writing or brainstorming in the last 6 months, your creative ability may already be permanently damaged. A controlled experiment just proved the effect doesn't reverse when you stop using it.

3,302 creative ideas. 61 people. 30 days of tracking.

Researchers split students into two groups. Half used ChatGPT for creative tasks. Half worked alone. For five days, the ChatGPT group outperformed on every metric. Higher scores. More ideas. Better output. AI was making them better.

Then day 7. ChatGPT removed. Every creativity gain vanished overnight. Crashed to baseline. Zero lasting improvement.

But that's not the bad part. ChatGPT users' ideas became increasingly identical to each other over time. Same content. Same structure. Same phrasing. The researchers called it homogenization. Everyone using ChatGPT started producing the same ideas wearing different clothes.

When ChatGPT was removed, the creativity boost disappeared, but the homogenization stayed. 30 days later, same result. Their creative range had been permanently compressed. Five days of use. Permanent damage 30 days later.

A separate trial confirmed it. 120 students. 45-day surprise test. ChatGPT users scored 57.5%. Traditional learners scored 68.5%.

AI reduces cognitive effort. Less effort means weaker encoding. Weaker encoding means less creative raw material. You're not renting a productivity boost. You're financing it with your originality. The interest rate is permanent.

0 replies · 0 retweets · 0 likes · 1 view
Wonder of Science@wonderofscience·
Ferrofluid audio-visualizers dance to the music. 📽: Pierre Paslier
21 replies · 43 retweets · 379 likes · 69K views
Russian Market@runews·
#Q: Why would Jeffrey Epstein say that you like them young? #Bill Clinton: "That's not true, none of that." Watch him tremble, lose his voice, and his hands shake. 👀
641 replies · 6K retweets · 20.5K likes · 764.1K views
๐š๐š‘๐šŽ ๐šŠ๐šŒ๐šŒ๐š˜๐šž๐š—๐š๐šŠ๐š—๐š ๅทฒ่ฝฌๆŽจ
Guri Singh@heygurisingh·
🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. Amazon. Anthropic. Google. Meta. Microsoft. OpenAI. All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies: not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States.

The results are worse than you think. Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For companies like Google, Meta, Microsoft, and Amazon (companies that also run search engines, social media platforms, e-commerce sites, and cloud services), your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that researchers (people whose job is reading these documents) found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.
329 replies · 3.9K retweets · 8.6K likes · 1.7M views
Cointelegraph@Cointelegraph·
🚨 HUGE: For the first time in history, the construction of data centers has surpassed the construction of general offices.
69 replies · 174 retweets · 690 likes · 54.7K views
Brian Armstrong@brian_armstrong·
Beware, it's 2026 and the CEO is back in the code base thanks to AI.
565 replies · 227 retweets · 5.4K likes · 1.1M views
๐•ฏ๐–†๐–“๐–Œ๐–Š๐–—
Used to think HK life expectancy was an anomaly. Until I went to HK recently. You walk fucking everywhere. No one drives or uses cars; it's too inconvenient and often slower. And it's hilly af. Exercise all day, every day, plus the food is made from scratch, not processed junk.
Epic Maps 🗺️@theepicmap

Countries with the Highest Life Expectancy
1. 🇭🇰 Hong Kong – 85.5 years
2. 🇯🇵 Japan – 84.8
3. 🇨🇭 Switzerland – 84.0
4. 🇸🇬 Singapore – 83.9
5. 🇮🇹 Italy – 83.6
6. 🇪🇸 Spain – 83.5
7. 🇦🇺 Australia – 83.2
8. 🇮🇸 Iceland – 83.1
9. 🇰🇷 South Korea – 83.1
10. 🇮🇱 Israel – 82.7
11. 🇸🇪 Sweden – 82.6
12. 🇫🇷 France – 82.5
13. 🇳🇴 Norway – 82.4
14. 🇲🇹 Malta – 82.3
15. 🇨🇦 Canada – 82.2
16. 🇳🇱 Netherlands – 82.1
17. 🇮🇪 Ireland – 82.1
18. 🇳🇿 New Zealand – 82.0
19. 🇬🇧 United Kingdom – 81.3
20. 🇩🇪 Germany – 81.2
21. 🇫🇮 Finland – 81.1
22. 🇧🇪 Belgium – 81.0
23. 🇦🇹 Austria – 81.0
24. 🇨🇳 China – 78.2
25. 🇺🇸 United States – 77.5
Source: UN / World Bank

9 replies · 1 retweet · 78 likes · 11.3K views
Danger
Week one of using Claude Code: Holy shit, I'll never pay for software again. Week three of using Claude Code: Actually, writing software fucking sucks. Those subscriptions are in fact totally worth it.
45 replies · 14 retweets · 966 likes · 44.8K views