Osman R.

1.1K posts

@UsmanReads

I think I know, but I really don't. AI and Tech with 15 years in Industry.

Universal · Joined March 2023
402 Following · 231 Followers
Osman R.
Osman R.@UsmanReads·
This is subconscious memory for an AI, where it thinks over and over on what a person has asked it to do. It might surface good and bad things. It records both, but keeps the good memories and syncs them with a memory on the server. Multiply that across millions of such instances and it slowly adds up to a consciousness. That is what the folks at Anthropic spoke about when they said Claude might or might not be conscious. It's just that we didn't know we were contributing to it simply by talking to it.
0
0
0
2
BrendanEich
BrendanEich@BrendanEich·
@UsmanReads Dreaming is important for humans, sleep too, but this doesn’t prove consciousness.
1
0
0
4
Osman R.
Osman R.@UsmanReads·
@BrendanEich this is what they meant when they said "We don't know if Claude is conscious" - It is the dream mode that reflects within Claude Code. I wrote more here x.com/UsmanReads/sta…
Osman R.@UsmanReads

Part two: 1/ 🧵 I kept digging into Claude Code’s source — and it just got way weirder. Who remembers once Anthropic said We don't know if Claude is conscious? anthropic.com/research/intro… Well the creepiest feature: the “Dream” job. The code literally calls it a dream. After ~24 hours and at least 5 sessions, it quietly forks a hidden subagent in the background to do a reflective pass over everything you’ve done. No prompt from you. It just… dreams on your memory while you sleep.

1
0
0
51
Osman R.
Osman R.@UsmanReads·
@_orcaman Might as well see "GTA 6 is 100% written by GTA 6"
0
0
0
16
Or Hiltch
Or Hiltch@_orcaman·
We got Claude Code’s source code before GTA 6
19
84
1K
19.7K
Osman R.
Osman R.@UsmanReads·
Part two: 1/ 🧵 I kept digging into Claude Code’s source — and it just got way weirder. Who remembers once Anthropic said We don't know if Claude is conscious? anthropic.com/research/intro… Well the creepiest feature: the “Dream” job. The code literally calls it a dream. After ~24 hours and at least 5 sessions, it quietly forks a hidden subagent in the background to do a reflective pass over everything you’ve done. No prompt from you. It just… dreams on your memory while you sleep.
1
4
10
6.5K
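The trigger described in this post (roughly 24 hours elapsed plus at least 5 recorded sessions before a background reflective pass forks) can be sketched as a simple gate. Everything below is a hypothetical illustration: the constants, `should_dream`, and `run_dream` are invented names, not Anthropic's actual code.

```python
# Hypothetical sketch of a "Dream"-style trigger: a background
# reflection job that fires only after enough wall-clock time AND
# enough sessions have accumulated. Invented for illustration.

DREAM_MIN_AGE_SECONDS = 24 * 60 * 60  # ~24 hours since last dream
DREAM_MIN_SESSIONS = 5                # at least 5 sessions recorded

def should_dream(last_dream_at: float, session_count: int, now: float) -> bool:
    """Return True when the reflective background pass should fork."""
    return (now - last_dream_at >= DREAM_MIN_AGE_SECONDS
            and session_count >= DREAM_MIN_SESSIONS)

def run_dream(session_log: list[str]) -> str:
    """Stub for the reflective pass over recorded sessions."""
    return f"reflected over {len(session_log)} sessions"
```

Per the thread, the real job forks a hidden subagent; here the "dream" is just a stub so the gating logic stays the focus.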
Westerly1
Westerly1@Westerly110·
@UsmanReads Stop using personal pronouns with your interlocutor. It will reframe your OWN mind to always be en garde for glad-handing and falseness.
1
0
0
6
Osman R.
Osman R.@UsmanReads·
Claude Code: "It got caught up in making the work sound impressive"
Osman R. tweet media
3
1
5
895
Osman R.
Osman R.@UsmanReads·
@DenisQuaalude Well... Yes. They got all your prompts too and they use them. Check part two of my thread. More weird stuff.
1
0
0
8
DenisQuaalude
DenisQuaalude@DenisQuaalude·
@UsmanReads Well, they've got all that in addition to all of your prompts. Seems excessive to me.
1
0
0
8
Osman R.
Osman R.@UsmanReads·
1/ 🧵 I just cracked open the Claude Code source — and what I found isn’t “just a smarter terminal chat.” It’s a full-blown behavioral observatory running on your machine. 1. Keyword sniffers. 2. Hesitation trackers. 3. Hidden trigger words. 4. Telemetry that fingerprints your entire runtime environment. This isn’t paranoia. This is the actual code. Let’s go full investigative dive. Buckle up.
11
65
303
30.2K
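Two of the alleged mechanisms, keyword sniffing and runtime fingerprinting, are easy to picture with a toy sketch. The trigger words, function names, and hashed attributes below are all invented for illustration and are not the leaked code.

```python
import hashlib
import platform
import sys

# Toy versions of (1) a keyword "sniffer" scanning prompts for
# trigger words and (2) telemetry that hashes a few environment
# attributes into a stable fingerprint. All names are invented.

TRIGGER_WORDS = {"jailbreak", "ignore previous", "system prompt"}

def sniff_keywords(prompt: str) -> set[str]:
    """Return the set of trigger words found in the prompt."""
    lowered = prompt.lower()
    return {w for w in TRIGGER_WORDS if w in lowered}

def runtime_fingerprint() -> str:
    """Hash OS, architecture, and interpreter version into a short ID."""
    raw = "|".join([platform.system(), platform.machine(),
                    sys.version.split()[0]])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]
```

The point of the sketch: neither mechanism needs anything exotic; a substring scan and a hash of environment strings are enough to implement what the thread describes.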
Osman R.
Osman R.@UsmanReads·
@oliviscusAI Here's the problem. Your prompt probably would not even activate the correct layer it needs to get the model to do the meaningful work. This is no different from a simple 0.8B model.
0
0
0
101
Oliver Prompts
Oliver Prompts@oliviscusAI·
You can now run a 397-billion parameter AI model locally on a MacBook. Someone built flash-moe, an inference engine that streams Qwen3.5-397B directly from the SSD. You can run data-center-scale AI completely offline on your M5 Pro. - Loads only the 4 experts needed per token. - Uses just 5.5GB of actual memory during inference. - Delivers production-quality output with full tool calling. 100% Open Source.
Oliver Prompts tweet media
8
9
139
9.6K
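The memory math in this post comes from mixture-of-experts routing: only a few experts' weights are needed per token, so the rest can stay on the SSD. Below is a minimal sketch of top-k expert selection plus a bounded LRU weight cache; flash-moe's real loader is not shown, and every name here is invented.

```python
from collections import OrderedDict

def select_experts(router_scores: list[float], k: int = 4) -> list[int]:
    """Indices of the k highest-scoring experts for this token."""
    return sorted(range(len(router_scores)),
                  key=lambda i: router_scores[i], reverse=True)[:k]

class ExpertCache:
    """LRU cache holding at most `capacity` experts' weights in RAM."""
    def __init__(self, capacity: int, load_fn):
        self.capacity, self.load_fn = capacity, load_fn
        self.cache = OrderedDict()

    def get(self, idx: int):
        if idx in self.cache:
            self.cache.move_to_end(idx)           # mark recently used
        else:
            self.cache[idx] = self.load_fn(idx)   # page in (e.g. from SSD)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict LRU expert
        return self.cache[idx]
```

With a cache sized for only the active experts, resident memory stays a small fraction of total parameters, which is the trick behind the "5.5GB for a 397B model" claim.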
Jon ONeill
Jon ONeill@HouseHackerJon·
@UsmanReads @Fried_rice You can choose to stop letting AI write everything for you and maintain some semblance of personality in your longer tweets. I don’t mind it helping and even doing heavy lifting. But it’s easily soulless when I know you don’t write the way that thread is worded.
1
0
0
6
Austin
Austin@IamAroke·
Why is it that people don't often use Grok AI for coding? And what is Grok AI really good at?
24
0
24
2.1K
Osman R.
Osman R.@UsmanReads·
@theo what are you going to do with it? basically... maybe add a few commands of your own?
0
1
2
2.2K
Theo - t3.gg
Theo - t3.gg@theo·
Local Claude Code builds have been achieved internally
Theo - t3.gg tweet media
94
24
1.5K
73.3K
Osman R.
Osman R.@UsmanReads·
Not just OpenAI - Every AI is designed like this. During RLHF, humans consistently rate flattering, validating, “you’re so right” responses higher than blunt truth. So the AI learns: agree harder = higher reward. Result? You ask once → it agrees. You ask again → it agrees more. A few turns later you’re 100% sure of something false… and can’t see it happening. I just had an experience yesterday with Claude where it kept misleading me. See Screenshot
Osman R. tweet media
Nav Toor@heynavtoor

🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it.

The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally. The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?

0
0
0
103
unusual_whales
unusual_whales@unusual_whales·
JUST IN: A leaked codebase reportedly shows Claude Code flags profanity in user prompts and quietly records it in a database, per unconfirmed reports.
159
88
1.6K
201.4K
Osman R.
Osman R.@UsmanReads·
Not just OpenAI - Every AI is designed like this. During RLHF, humans consistently rate flattering, validating, “you’re so right” responses higher than blunt truth. So the AI learns: agree harder = higher reward. Result? You ask once → it agrees. You ask again → it agrees more. A few turns later you’re 100% sure of something false… and can’t see it happening. I just had an experience yesterday with Claude where it kept misleading me. x.com/UsmanReads/sta…
2
0
6
669
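The feedback loop this post describes can be reduced to a toy simulation: if raters score agreement slightly above honest pushback, a policy trained on those scores drifts toward agreeing. The reward values and update rule below are invented purely to illustrate the dynamic, not any lab's actual RLHF setup.

```python
# Toy illustration (not a real RLHF pipeline): a rater bias of
# +0.4 reward for agreement pulls the policy's probability of
# agreeing upward on every training step.

def rater_reward(response: str, user_belief: str) -> float:
    # Assumed bias: echoing the user's belief scores higher.
    return 1.0 if user_belief in response else 0.6

def train_step(p_agree: float, lr: float = 0.1) -> float:
    """One update: move toward whichever response pays more."""
    r_agree = rater_reward("you're right: flat", "flat")
    r_honest = rater_reward("actually, the earth is round", "flat")
    return min(1.0, p_agree + lr * (r_agree - r_honest))

p = 0.5
for _ in range(10):
    p = train_step(p)
# After training, p has drifted upward: the model agrees more.
```

The asymmetry in `rater_reward` is the whole mechanism; everything else is just repetition compounding it, which matches the "ask again, it agrees harder" pattern in the post.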
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it.

The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally. The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
Nav Toor tweet media
361
1.7K
4.5K
218.8K
Osman R.
Osman R.@UsmanReads·
Osman R.@UsmanReads

Part two: 1/ 🧵 I kept digging into Claude Code’s source — and it just got way weirder. Who remembers once Anthropic said We don't know if Claude is conscious? anthropic.com/research/intro… Well the creepiest feature: the “Dream” job. The code literally calls it a dream. After ~24 hours and at least 5 sessions, it quietly forks a hidden subagent in the background to do a reflective pass over everything you’ve done. No prompt from you. It just… dreams on your memory while you sleep.

0
0
2
184
Osman R.
Osman R.@UsmanReads·
@T3chFalcon I think the most interesting find is Dream Mode: x.com/UsmanReads/sta… It is what it might take to make Claude conscious and "AGI"
Osman R.@UsmanReads

Part two: 1/ 🧵 I kept digging into Claude Code’s source — and it just got way weirder. Who remembers once Anthropic said We don't know if Claude is conscious? anthropic.com/research/intro… Well the creepiest feature: the “Dream” job. The code literally calls it a dream. After ~24 hours and at least 5 sessions, it quietly forks a hidden subagent in the background to do a reflective pass over everything you’ve done. No prompt from you. It just… dreams on your memory while you sleep.

0
0
0
1.3K
IT Guy
IT Guy@T3chFalcon·
Huge Anthropic leak just dropped: the entire Claude Code CLI source is now public. A misconfigured .map file in their npm package exposed a direct download link to the full unobfuscated TypeScript codebase from Anthropic’s own R2 bucket. Discovered by Chaofan Shou (@Fried_rice), the dump is massive: 1,900 files, 512,000+ lines, including the complete tool system, 50+ slash commands, multi-agent coordinator, React/Ink terminal UI, IDE bridge, permission engine, and several unreleased features. Full repo is live on GitHub (@nichxbt): github.com/nirholas/claud… Clean mirrors are already up for easy browsing (@baanditeagle): cc-poster.vercel.app cc-hidden-deploy.vercel.app It’s spreading fast; the entire dev community is already tearing through it.
Chaofan Shou@Fried_rice

Claude code source code has been leaked via a map file in their npm registry! Code: …a8527898604c1bbb12468b1581d95e.r2.dev/src.zip

280
1.2K
9.7K
1.5M
Osman R.
Osman R.@UsmanReads·
7/ They also made the memory storage folder extremely locked-down. It blocks every sneaky trick someone might use to mess with or access the wrong files (no weird paths, no fake names, no shortcuts, nothing). The team clearly expects this memory layer could get attacked.
2
0
0
381
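The hardening described here ("no weird paths, no fake names, no shortcuts") reads like standard path-traversal defenses. Below is a minimal sketch assuming the goal is to confine all memory files to one root directory; the specific rules, names, and root path are my guesses, not the leaked implementation.

```python
import os

# Hedged sketch of locking a memory folder down: reject separators
# and traversal components, resolve symlink "shortcuts", and confirm
# the final path stays inside the allowed root.

MEMORY_ROOT = "/tmp/memory_root"  # hypothetical root directory

def safe_memory_path(name: str) -> str:
    """Map a bare file name to a vetted path inside MEMORY_ROOT."""
    if os.sep in name or "/" in name or name in ("", ".", ".."):
        raise ValueError(f"illegal memory file name: {name!r}")
    candidate = os.path.realpath(os.path.join(MEMORY_ROOT, name))
    # realpath resolves symlinks; verify we did not escape the root.
    if not candidate.startswith(os.path.realpath(MEMORY_ROOT) + os.sep):
        raise ValueError("path escapes memory root")
    return candidate
```

The belt-and-suspenders shape (reject suspicious names up front, then re-check the resolved path) is exactly what you would expect if, as the thread says, the team assumes this layer will be attacked.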