Sloth Bytes

24 posts


@Sloth_Bytes

Weekly newsletter to make you a better programmer

Joined February 2026
1 Following · 35 Followers
Sloth Bytes@Sloth_Bytes·
the claude chrome extension got hacked

any website could puppeteer your AI: steal tokens, read conversations, send emails as you

the more capable your AI agent gets, the more valuable it is as an attack target
Sloth Bytes@Sloth_Bytes·
@Fried_rice so Anthropic leaked their own source code via npm

buried in 512k lines: a regex that flags when you get frustrated at claude

they are STUDYING us
Sloth Bytes@Sloth_Bytes·
@simplifyinAI so the safety guardrails weren't deleting the copyrighted text

they were just putting a "do not open" sign on the door

someone opened the door
Simplifying AI@simplifyinAI·
🚨 BREAKING: OpenAI and Google are about to have a massive legal problem.

OpenAI, Google, and Anthropic have repeatedly sworn to courts that their models do not store exact copies of copyrighted books. They claim their "safety training" prevents regurgitation.

Researchers just dropped a paper called "Alignment Whack-a-Mole" that proves otherwise.

They didn't use complex jailbreaks or malicious prompts. They just took GPT-4o, Gemini, and DeepSeek, and fine-tuned them on a normal, benign task: expanding plot summaries into full text.

The safety guardrails instantly collapsed. Without ever seeing the actual book text in the prompt, the models started spitting out exact, verbatim copies of copyrighted books. Up to 90% of entire novels, word-for-word. Continuous passages exceeding 460 words at a time.

But here is the part that changes everything. They fine-tuned a model exclusively on Haruki Murakami novels. It didn't just learn Murakami. It unlocked the verbatim text of over 30 completely unrelated authors across different genres.

The AI wasn't learning the text during fine-tuning. The text was already permanently trapped inside its weights from pre-training. The fine-tuning just turned off the filter.

It gets worse. They tested models from three completely different tech giants. All three had memorized the exact same books, in the exact same spots. A 90% overlap. It's a fundamental, industry-wide vulnerability.

For years, AI companies have argued in court that their models are just "learning patterns," not storing raw data. This paper provides the smoking gun.
Sloth Bytes@Sloth_Bytes·
fork found in kitchen
Nav Toor@heynavtoor

🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it.

The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.
What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?

Sloth Bytes@Sloth_Bytes·
The uncomfortable truth about the AI hype cycle is that not every project makes financial sense to keep alive. Sora is dead. Are we watching the AI bubble finally spring a leak? Or are we getting another slop machine soon? Pouring one out either way 🫗
Sora@soraofficialapp

We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team

Sloth Bytes@Sloth_Bytes·
because hard deleting your account means:

- orphaning hundreds of replies
- breaking entire discussion chains
- creating holes in the community history

turns out "delete" is way more complicated than you think

slothbytes.beehiiv.com/p/why-your-del…
Sloth Bytes@Sloth_Bytes·
fun fact: when you delete your Reddit account, your posts don't get deleted

they just get attributed to u/[deleted]

Reddit chose broken attribution over broken threads
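translating that pattern into a rough sketch (the schema and names here are invented for illustration, not Reddit's actual code): the user row gets anonymized instead of removed, so every post keeps a resolvable author reference and threads stay intact

```typescript
interface User { id: number; name: string; deleted: boolean }
interface Post { id: number; authorId: number; body: string }

// soft delete: scrub the identity but keep the row, so foreign keys
// from posts still resolve and no thread gets a hole punched in it
function deleteAccount(users: Map<number, User>, userId: number): void {
  const user = users.get(userId);
  if (!user) return;
  user.name = "[deleted]";
  user.deleted = true;
}

// readers see "[deleted]" for scrubbed or missing authors,
// but the post body itself survives untouched
function displayAuthor(users: Map<number, User>, post: Post): string {
  const author = users.get(post.authorId);
  return author && !author.deleted ? author.name : "[deleted]";
}
```

the trade-off in one line: the account's identity is gone, but its contributions (and the conversations built on them) are not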
Sloth Bytes@Sloth_Bytes·
"deleting" something from the main database is like throwing away one copy of a document that's already been photocopied 50 times

wrote about why delete is the most deceptively complicated word in software: slothbytes.beehiiv.com/p/why-your-del…
Sloth Bytes@Sloth_Bytes·
99% of tech companies are lying to you

you click "delete account"
they show you a success message

but your data? still there

sitting in backups, caches, analytics pipelines, third-party integrations, read replicas
Sloth Bytes@Sloth_Bytes·
are you yearning for the old internet? maybe 2010s indie games?

there is a digital space I've found recently that really lets you feel nostalgic and explore the world of indie web revival

check it out, click through, explore ribo.zone/links
Sloth Bytes@Sloth_Bytes·
React Query came out and said "wait... you're treating server state and client state the same?"

your API data has different needs than "is this modal open?"

more here slothbytes.beehiiv.com/p/managing-dat…
Sloth Bytes@Sloth_Bytes·
in 2015 fetching data from an API required:

- 3 new files
- 50+ lines of boilerplate
- understanding what a "thunk" is
- action types, action creators, reducers, middleware, dispatches

all just to display some JSON on a page

then 2019 happened and everything changed
Sloth Bytes@Sloth_Bytes·
the real problem isn't whether AI can write code (it can)

it's whether that code is maintainable, understandable, and won't create a debugging nightmare six months later

more about this here slothbytes.beehiiv.com/p/curl-killed-…
Sloth Bytes@Sloth_Bytes·
speaking of vibe coding...

the guy who built Ruby on Rails calls AI coding tools a "flickering light bulb"

sometimes brilliant, mostly unreliable

he uses AI daily and says yeah, it can spit out working code but the quality is usually worse than what a junior dev would write
Sloth Bytes@Sloth_Bytes·
millions of nodes
millions of edges
traffic updating every 30 seconds
2 billion users sending real-time location data

and you want an answer in milliseconds, not minutes

wrote about how they actually pull this off (hint: it's sick) slothbytes.beehiiv.com/p/how-does-goo…
Sloth Bytes@Sloth_Bytes·
so when Google Maps looks at a city, it doesn't see roads or buildings

it sees a graph

every location = a node
every road = an edge
every edge has a weight (travel time)

"find me the fastest route" becomes "find the path with the lowest total weight"
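that reframing is exactly why classic shortest-path algorithms apply. a minimal Dijkstra-style sketch of the "lowest total weight" search, on an invented toy road graph (the real system layers live traffic, precomputation, and much more on top of this):

```typescript
// node -> neighbor -> travel time in minutes (the edge weight)
type Graph = Record<string, Record<string, number>>;

function fastestRoute(graph: Graph, start: string, goal: string): number {
  const dist: Record<string, number> = { [start]: 0 }; // best known travel times
  const visited = new Set<string>();

  while (true) {
    // pick the unvisited location with the smallest known travel time
    let current: string | null = null;
    for (const node of Object.keys(dist)) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (current === null) return Infinity;      // goal unreachable
    if (current === goal) return dist[current]; // lowest total weight found
    visited.add(current);

    // relax every road leaving the current location
    for (const [next, weight] of Object.entries(graph[current] ?? {})) {
      const candidate = dist[current] + weight;
      if (dist[next] === undefined || candidate < dist[next]) {
        dist[next] = candidate;
      }
    }
  }
}

// invented toy city: the longer-looking back road wins on travel time
const city: Graph = {
  home:     { mainSt: 5, backRoad: 12 },
  mainSt:   { office: 10 },
  backRoad: { office: 2 },
  office:   {},
};

console.log(fastestRoute(city, "home", "office")); // 14, via backRoad
```

production routing engines swap the linear scan for a priority queue and precompute heavily (that's how millions of nodes become milliseconds), but the graph framing is identical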