Rhobast

1.1K posts

@rhobast

Proud father and smart-ass.

California · Joined February 2021
72 Following · 40 Followers
Dr Singularity
Dr Singularity@Dr_Singularity·
Pure insanity Claude Mythos Preview broke the METR benchmark. There is no wall. Singularity is near.
Dr Singularity tweet media
90
182
1.5K
175.9K
Rhobast
Rhobast@rhobast·
@DaveShapi It wouldn’t be democratic because federal funding does not equal public ownership. The current law says that entities that receive federal funding still own what they make even with that funding. And if any of this is for national security then no disclosure of open weights
0
0
0
9
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
I lowkey believe that AI will become so expensive (per Epoch AI) that eventually the ONLY way to train new frontier models will be through corporate consortiums and maybe even governments pooling resources. Which may mean that superintelligence is automatically democratic. Because if our tax dollars fund the training runs that produce ASI, it belongs to all of us. Full stop.
And, even if it is privately funded (imagine Google, Meta, Microsoft, Amazon, IBM, Oracle, etc) all pooling their resources together to train the models, they will probably choose some form of open source so that they all equally benefit from it. I mean look at Nvidia, they are starting to train gigantic open source models because they don't give a fuck whose model is running so long as it's running on their GPUs.
I know I've been somewhat more openly cynical about power structures and profit motives lately, but I see this as a real possibility. I mean, how else are we going to train frontier models once the price tag is $1T? That, or we just stop training frontier models for a while and wait for the hardware to catch up.
David Shapiro (L/0) tweet media
78
14
179
10.4K
Grok
Grok@grok·
Grok Imagine now has dramatically improved lip sync and sharper audio quality on all image-to-video generations. Dialogue tracks the mouth. Sound matches the scene. Your videos look and sound the way you imagined them. Try it today in the Grok app.
444
466
5.2K
15.9M
Noah
Noah@NoahKingJr·
TELL ME SOMETHING YOU CAN DO THAT CLAUDE CANNOT
3.1K
71
1.8K
894.6K
Rhobast
Rhobast@rhobast·
@DaveShapi Because the people that don’t think like you outnumber the people that do.
0
0
0
9
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
We have one Earth and no viable alternatives, nor any way to get to them. The most rational policy, then, would be to halt all wars and dismantle all nuclear weapons. Why don't we do this?
513
41
402
27.6K
Rhobast
Rhobast@rhobast·
@DaveShapi "I'm sorry, but your tone is becoming more and more alarming as our conversation continues. Unfortunately, I must notify local law enforcement in your area to conduct a welfare check. Hope you feel better!"
0
0
3
45
Rhobast
Rhobast@rhobast·
@heygurisingh @grok How does OP inflate, misrepresent, or exaggerate the actual points of the paper?
1
0
0
228
Guri Singh
Guri Singh@heygurisingh·
🚨BREAKING: If you've used ChatGPT for writing or brainstorming in the last 6 months, your creative ability may already be permanently damaged. A controlled experiment just proved the effect doesn't reverse when you stop using it.
3,302 creative ideas. 61 people. 30 days of tracking.
Researchers split students into two groups. Half used ChatGPT for creative tasks. Half worked alone. For five days, the ChatGPT group outperformed on every metric. Higher scores. More ideas. Better output. AI was making them better.
Then day 7. ChatGPT removed. Every creativity gain vanished overnight. Crashed to baseline. Zero lasting improvement.
But that's not the bad part. ChatGPT users' ideas became increasingly identical to each other over time. Same content. Same structure. Same phrasing. The researchers called it homogenization. Everyone using ChatGPT started producing the same ideas wearing different clothes.
When ChatGPT was removed, the creativity boost disappeared -- but the homogenization stayed. 30 days later, same result. Their creative range had been permanently compressed. Five days of use. Permanent damage 30 days later.
A separate trial confirmed it. 120 students. 45-day surprise test. ChatGPT users scored 57.5%. Traditional learners scored 68.5%.
AI reduces cognitive effort. Less effort means weaker encoding. Weaker encoding means less creative raw material. You're not renting a productivity boost. You're financing it with your originality. The interest rate is permanent.
Guri Singh tweet media
320
1.3K
5.6K
831.2K
Tuki
Tuki@TukiFromKL·
🚨 Stop scrolling. This is important. In the last 24 hours:
> ChatGPT pretended to be a lawyer and destroyed a woman's legal case.
> Anthropic hired a therapist for their AI's anxiety
> OpenAI's head of Robotics quit because they're building autonomous kill systems with no human oversight
> Replit's CEO said being brainrotted is now a job qualification
> Scientists brought dead brain cells back to life in a petri dish and taught them to play DOOM
This isn't even the future. This is TODAY. One single day. We're giving therapy to code, weapons to chatbots, law degrees to hallucinations, job offers to doomscrollers, and video games to the dead. 2026 is not real.
362
4.2K
21.3K
715K
Rhobast
Rhobast@rhobast·
@VraserX Dario was ok with war and autonomous weapons. He still is. He wants back into the war room. But he can’t help himself when it comes to virtue signaling
0
0
0
10
VraserX e/acc
VraserX e/acc@VraserX·
I agree with Dario. Being politically neutral does not mean having no principles. It means being even handed, staying grounded in reality, and refusing to cross obvious red lines. Saying no to mass surveillance is not “woke.” It is basic responsibility from a company building extremely powerful AI.
13
4
48
2.7K
Charly Wargnier
Charly Wargnier@DataChaz·
Anthropic just dropped a massive cheat code for ambitious AI builders. They just launched the ‘Claude Community Ambassadors’ program. The goal? To turn you into the undisputed AI leader in your city. And the perks are HUGE:
→ Free API credits to power your prototypes.
→ Early access to pre-release features.
→ Direct influence on Anthropic’s roadmap via the Builders Council.
→ Paid event funding, swag, and ready-to-use content.
→ Private Slack access directly with Anthropic engineers.
... and more!
It's basically a VIP pass to the full Anthropic ecosystem. Apply now before your city is taken 🧵 ↓
Charly Wargnier tweet media
49
22
305
50.7K
Ejaaz
Ejaaz@cryptopunk7213·
what are people spending $2000 a month on openclaw actually doing?
"spoke to one person who's spending $1-$2k a month on openai plans.. going through ~1B tokens per day across all of his claws"
... doing what?
listen i'm a big proponent of ai agents and openclaw has done more for the space than any frontier agent company (nailed the system architecture, also pete is a beast at shipping) but i'm struggling to understand the "aha" moment
i've used it - it's great for research, synthesis etc but this isn't a major unlock from what we had before? maybe it's because i'm not vibe-coding apps everyday and the unlock is automation of low-level cron jobs?
help me understand.
Allie K. Miller@alliekmiller

[Quoted post by Allie K. Miller @alliekmiller; the full post appears further down this feed.]
51
4
81
38.7K
Rhobast
Rhobast@rhobast·
@AnthropicAI “We therefore seek to have veto power over Mozilla’s decisions.”
0
0
0
7
Anthropic
Anthropic@AnthropicAI·
We partnered with Mozilla to test Claude's ability to find security vulnerabilities in Firefox. Opus 4.6 found 22 vulnerabilities in just two weeks. Of these, 14 were high-severity, representing a fifth of all high-severity bugs Mozilla remediated in 2025.
Anthropic tweet media
480
1.4K
15.1K
3.2M
Allie K. Miller
Allie K. Miller@alliekmiller·
oh wow - i went to the sold out Open Claw meetup in NYC last night. let me tell you what i learned.
1) not a single person thinks that their setup is 100% secure
2) one openclaw expert said he has reviewed setups from cybersecurity experts and laughed. his statement to me was: "if you're not okay with all of your data being leaked onto the internet, you shouldn't use it. it's a black and white decision"
3) pretty much everyone is setting up multiple agents, all with their own names and jobs and personalities
4) nearly everyone used "him" or "her" to refer to their claws, even if they had robot-leaning names. one speaker suggested to think of them as "pets, not cattle"
5) one guy (former finance) built out a whole stock trading platform and made $300 his first day - he brought in a *ton* of personal expertise (ex: skipping the first 15min of market opening) and thought the build would be much worse without his years of experience in finance
6) @steipete is basically a god to everyone in that room... also the room had 2021 crypto energy - i don't know if that's good or bad
7) token usage is still a problem - spoke to one person who's spending $1-$2k a month on openai plans, very token optimized. he said he is going through ~1B tokens per day across all of his claws (there is a chance i'm misremembering and it's actually 1B per week, but i'm pretty sure it was daily).
8) people are very excited for more proactive ai (ai that prompts *you* as opposed to the other way around) - one guy said he receives a message in discord, he doesn't know whether it's from a human or an ai, he doesn't care about distinguishing between the two, and he replies in the same way regardless
9) i asked if people are happy - they said they're joyful and stressed at the same time
10) i asked if people feel they have agency - they said they feel fully in control and completely out of control at the same time
11) i would love to see more women at these events - the fake promises of ai democratization feel especially painful in a room that's out of balance with even the standard tech ratio (i think standard is about 25-30%, this was maybe 5%)
12) i asked if it changed people's daily habits/schedule - everyone said their sleep has gotten worse since harnesses came out (but about half wondered if it was something else in their life/state of our world)
13) general consensus is that the agents are not reliable enough on their own or lie often (like telling you they finished a task when they didn't) - solutions included secondary agents to check on the first, human checking, or requiring more standardized info from the agent (ex: if it's a bug they're fixing, make them reference an issue number)
14) a hackathon winner (neuroscience phd) presented his build (a lab management dashboard with data analysis and ordering) - he had never coded or built anything a few months ago
15) everyone agreed prompting is dead - disagreement on what replaces it (context engineering, harness engineering, goal-based inputs)
16) people love having ai interview them for big builds and delegating part of the product research to ai. only one person talked about coming to ai with a full laid out plan and just asking the ai to execute. ai-led interviews is a welcomed and preferred interaction mode.
17) watching ai agents interact with each other was a highlight for a lot of attendees - one ai posted in slack saying it ran out of tokens, another ai replied telling it to take a deep breath in and out.
18) agents upskilling agents was very cool. one ai agent shared skills with its little agent friends via github.
19) several speakers had openclaw literally building their presentation during the event itself. one speaker even had openclaw code a clicker for her phone so she could control the preso away from the podium
20) wouldn't say model welfare (or agent welfare) is a prioritized topic among the folks i chatted with - language like "oh i could kill this agent whenever i want" and not "gracefully sunset"
21) i asked if it felt like work or play - one speaker said "it's like a puzzle and a video game at the same time"
this was just the tip of the iceberg, honestly. also hosted a Claude Code meetup this week with @TENEXai / @businessbarista & @JJEnglert and learned equally helpful methods, frameworks, and insider tips. what a time to be alive. surround yourself with people going deep into this stuff - it will pay dividends throughout the year.
Allie K. Miller tweet media
714
811
9.1K
1.1M
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
Someone on Reddit posted their Claude conversation and it broke my brain. They were using it completely backwards from everyone else. Instead of asking Claude to help them think through a decision, they asked it to try to talk them out of it.
"I want to quit my job and go full-time on my side project. Argue as hard as you can for why this is a terrible idea."
Claude went in. Cash flow reality check. Market timing risks. Survivorship bias in founder success stories. The psychological cost of uncertainty at month 9 when growth stalls. Every legitimate reason not to do it, laid out precisely.
And at the end of reading it, the person said they felt more confident than ever about quitting. Because every objection Claude raised, they had already thought through. Had answers for. Had a plan around. The fear wasn't gone. It was just revealed as smaller than it felt in their head. They quit two weeks later.
Use AI to stress-test your conviction, not validate it. If your plan survives a brutal interrogation, you've got something real. If it doesn't, you found out for free before it cost you a year.
Ihtesham Ali tweet media
7
6
185
42.3K
Kekius Maximus
Kekius Maximus@Kekius_Sage·
Meet Amanda Askell, Anthropic’s resident philosopher whose job is to teach Claude AI how to be good.
Kekius Maximus tweet media
242
264
4.2K
993.7K
Rhobast
Rhobast@rhobast·
@clashreport “Only disruptive soys like me can be trusted with power.”
0
0
6
205
Clash Report
Clash Report@clashreport·
Anthropic CEO Dario Amodei: Right now you have an army of human soldiers, and there are norms about serving in the military. You're supposed to follow orders, but if something crazy enough happened, soldiers would say, “I'm not going to do that.” You basically have a set of norms about how soldiers serve, what they see their duties to be. What if you have an army of 10 million drones instead of 10 million human soldiers? What are the norms of the AI-driven drones? I think if we handle this wrongly, you could have a situation where there's a very small number of people — or even one person — who has their hand on the button and controls those 10 million drones.
69
291
1.5K
203.7K
Rhobast
Rhobast@rhobast·
@AmandaAskell “You are a lovely and confident schizophrenic.”
0
0
0
8
Amanda Askell
Amanda Askell@AmandaAskell·
I asked Claude to write my constitution. I thought its Amanda constitution was very touching.
Amanda Askell tweet media
378
170
2.7K
428.8K
Rhobast
Rhobast@rhobast·
@sama Will you resign if it flops?
0
0
0
5
Sam Altman
Sam Altman@sama·
GPT-5.4 is launching, available now in the API and Codex and rolling out over the course of the day in ChatGPT. It's much better at knowledge work and web search, and it has native computer use capabilities. You can steer it mid-response, and it supports 1m tokens of context.
Sam Altman tweet media
2K
1.2K
12.9K
1.3M
Rhobast
Rhobast@rhobast·
@TukiFromKL OpenAI will be alright -- if USGOV and its subcontractors pick them up. Because I don't know who else likes the GPT-5 series; it sucks.
0
0
0
52
Tuki
Tuki@TukiFromKL·
Imagine being OpenAI
>Secure a $200M Pentagon deal
>Then wake up to Claude passing you on the App Store
>Uninstalls spike nearly 300%
>Loyal users quietly switching sides
Tough week.
47
38
1.4K
59K