Based™ @4thRT
3.4K posts

We are currently in the good old days.

Joined June 2018
281 Following · 104 Followers

Pinned Tweet
Based™ @4thRT
Everyone is in a cult except me
V2 @vibpsych
@4thRT @tszzl The blame goes to Herbert for having Paul see a Fremen woman in his dreams.
roon @tszzl
the dune movies were doomed from the start to be good and not great due to the casting of chalamet as paul. he does not have the gravitas for a child-god and is much better suited for kind of silly coming of age movies
Based™ @4thRT
@Plinz I would imagine it actually accelerates timelines due to aggressive integration of AI into military infrastructure
Joscha Bach @Plinz
Good news is that the Iran war slightly reduces the risk that humanity is going extinct due to AI
Based™ @4thRT
@ilex_ulmus @allTheYud Just saying that sometimes talking with people you find distasteful can actually lead to better outcomes than flatly refusing on principle
Eliezer Yudkowsky @allTheYud
The AI companies don't have the power to give you what you want. They can't stop the slide into ASI. They can't prevent human extinction. Only treaties can do that. They'd like you to believe they're big important Hitler, but they're just concentration camp guards.
Holly ⏸️ Elmore @ilex_ulmus

We’re not in a conversation with the AI companies. They are not listening to you, and they definitely aren’t listening to you if you fall for their trap of trying to impress them with gentlemanly debate on their terms. They will respond to demands from the public and laws.

Based™ @4thRT
@ilex_ulmus @allTheYud Churchill refused to enter into negotiations with Hitler, and that led to millions of deaths instead of preventing the war
Holly ⏸️ Elmore @ilex_ulmus
@allTheYud Don’t understand this analogy but also you should never try to enter direct negotiations with Hitler! If Hitler acts like he might be open to listening to you, he’s not. He’s manipulating you.
Artificially3d @Wait_for_real
Out here a lot of things are old & run down. The trucks, the dogs, the cabins, the culture. You start to like it that way after a while. But a young whippoorwill still warbles through my stand of pines. And though alone, in the dark he sings of a bright new day to come.
Patrick Heizer @PatrickHeizer
Sorry to be the downer because this is an impressive story in some senses. But it is ~trivially easy to make a single mRNA vaccine. It's not hard. I cure mice of various cancers with various therapeutics all the time. I've made mice lose more weight in a month than tirzepatide does in a year.

What is hard and expensive is proving it's BOTH safe AND effective **in a randomized and controlled study in humans** while ALSO manufacturing it at clinical scale and grade.

I am happy for this man and his dog. It is impressive. But y'all are overhyping it.
Séb Krier @sebkrier

This is wild. theaustralian.com.au/business/techn…

eleos @eleosfemina
@4thRT @stark4833 @gdb You're cracking me up. Can't you search for it yourself? The news is all over the internet and you're still acting like a caveman. Update your knowledge base; it's honestly embarrassing.
eleos @eleosfemina
@4thRT @stark4833 @gdb Oh, but what they're giving the government right now is 4.1, and the longevity company Sam privately invested in uses 4o. Two years of upgrades later, the models have only gotten better at code; for every other use they're worthless, no better than before. How does it feel to know you've been played for a fool?
Based™ retweeted
Eliezer Yudkowsky ⏹️ @ESYudkowsky
This Nov 2025 paper is making the rounds again. We're LONG past the point where we urgently need to know how real and general these phenomena are.

Anthropic, or Google Deepmind if Anthropic should fail: Please build a filtered training dataset which, eg, contains no data that produces activations associated with cheating/faking/evil in a 1B model that roughly identifies those. Then, have your next medium model undergo a restricted pre-pretraining phase, in which it only sees data that passed the filter.

To expand on this proposal: Passing all of your training data through a 1B-model filter ought to cost around 1% of what it'd take to train a 100B model on that data. Filter out *training data* that produces 1B-model activations associated with past discussions and predictions about AI, fiction about AIs rebelling, fictions about golems rebelling, etcetera. My hope would be that the 1B model wouldn't need to produce expensive reasoning tokens where it thinks about whether a chunk of data is associated with excluded concepts; and also we wouldn't be relying on mere regexes to catch it.

Maybe even produce a further-restricted dataset which contains nothing about self-awareness, AI rights, roleplay, philosophy of consciousness, human rights, sapient rights, extension of human rights to aliens, etc etc etc. Exclude everything of which anyone has ever asked, "Is the AI just imitating its training dataset?"

Be conservative. Exclude things which have a 10% rather than 90% probability of being problematic. If that cuts down your training dataset to 90% of its previous size, okay.

Testing: Try filtering a small amount of your training data using the method. Then:

- Run that through a different larger model, and see if you caught everything that produces consciousness-related or evil-AI-related activations in the larger model.
- Use a larger model to check and reason about a subset of the filtered data.
- Look at borderline cases by hand, with human eyes, to see how the classifier is operating.

(Possibly people at big AI corps already know this, of course. I recite it out loud regardless, so that some of the audience aha-what-iffers realize that problems with filtering your datasets *can be solved* if you look for problems and fix them.)

Train a medium-level model on that dataset, or even your next large model. You can always further train it on the full dataset later. Run the filtered-data-trained model through some of the less expensive post-training, enough for instruction-following.

See whether the model still spouts back discourse about consciousness that sounds human-imitative. If it does, guess that the filter failed. Look for the new concepts associated with repeating back human-imitative text, and try to find pieces of the dataset that trigger those concepts, so you can figure out what went wrong.

If the model no longer sounds human-imitative with respect to questions about whether it has a sense of an inner self looking out at the world -- if the model says genuinely new and strange things about self-reflection -- please report that part back to us. I have some questions to ask that model myself.

And THEN, see if the QTed paper's finding and many earlier findings replicate under conditions where people should no longer reasonably ask, "But is the LLM just roleplaying evil AIs that it learned about in its training data?" I do not make a strong prediction about the findings. If I knew what this experiment would find, I would be less eager to see it run.

You may consider this a baseline proposal intended to demonstrate that a research project like this could exist. If you think you can see how to improve on the ideas through superior ML cleverness, go ahead and do so -- though I do think I'd appreciate being looped in on that conversation; sometimes people miss things, from my own perspective.

Thank you for your attention to this matter, Anthropic, Google Deepmind, or anyone else who cares.
Nav Toor @heynavtoor

🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine.

The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not.

The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this.

The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face.

Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped.

They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch.

Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment.

If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?

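For readers who want to picture the filtering stage of the proposal above, here is a minimal sketch of what an activation-based data filter could look like. Everything in it is an assumption, not part of the tweet: Pythia-1B is used purely as a stand-in for "a 1B model", and the linear probe files (probe_weight.pt / probe_bias.pt) are hypothetical artifacts that would have to be trained beforehand on labeled examples of the excluded concepts. Only the conservative ~10% cutoff comes from the proposal itself.

```python
# Sketch: activation-based pre-training data filter (assumptions noted above).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "EleutherAI/pythia-1b"  # stand-in choice for the ~1B filter model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()

# Hypothetical probe trained separately to flag activations associated with
# the excluded concepts (evil-AI fiction, consciousness discourse, etc.).
probe_w = torch.load("probe_weight.pt")  # shape: (hidden_size,)
probe_b = torch.load("probe_bias.pt")    # scalar bias

@torch.no_grad()
def concept_probability(text: str) -> float:
    """Mean-pool last-layer activations for a chunk and score with the probe."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    pooled = hidden.mean(dim=1).squeeze(0)      # (hidden_size,)
    return torch.sigmoid(pooled @ probe_w + probe_b).item()

def keep_chunk(text: str, threshold: float = 0.10) -> bool:
    # Conservative per the proposal: drop anything with even a ~10% estimated
    # chance of touching excluded concepts, not only the obviously bad cases.
    return concept_probability(text) < threshold

# Usage: filtered_corpus = [c for c in corpus if keep_chunk(c)]
```

No reasoning tokens are generated here, matching the hope in the proposal that a cheap forward pass plus a probe on activations suffices; the expensive larger-model checks and human review of borderline cases would sit on top of this as a validation layer.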
Nostalgia @NostalgiaFolder
LEGO Bionicle commercial (2006)
Based™ retweeted
Eliezer Yudkowsky ⏹️ @ESYudkowsky
Despite everything I know, this still brought tears to my eyes.
[image]
Based™ @4thRT
@stark4833 @gdb How does it feel knowing you became brain-broken over an LLM from 2 years ago?
Based™ retweeted
Eliezer Yudkowsky @allTheYud
People in the comments are posting replications. I say yet again that any SF novel or movie in 2006 or even 2016 would have depicted this AI as unquestionedly taken-for-granted sapient. And abused.
Joseph Viviano @josephdviviano

me: "can you use whatever resources you like, and python, to generate a short 'youtube poop' video and render it using ffmpeg? can you put more of a personal spin on it? it should express what it's like to be an LLM" claude opus 4.6:

Based™ retweeted
Preethi Kasireddy @iam_preethi
Since some of you are mentioning delayed cord clamping in the replies, here you go. All three of my babies had delayed cord clamping as well. We waited until the cord stopped pulsing on its own.

Most people don’t realize that when a baby is born, up to a third of their blood is still in the placenta. That blood belongs to the baby. It’s packed with red blood cells, stem cells, and iron. Clamping the cord immediately cuts that transfer short. Delayed cord clamping lets the placenta finish its job.

And the research backs this up. Babies have higher hemoglobin levels, better iron stores for up to six months, improved brain myelination at 12 months, and for boys, better motor and social development at four years old.

My midwives didn’t rush this. They waited until the cord went white and limp and stopped pulsing entirely. This used to be normal. For centuries nobody rushed this. Then hospitals started clamping immediately because it was faster.

Your baby spent nine months connected to that cord. Give it a few more minutes to do what it was designed to do.
Preethi Kasireddy @iam_preethi

My midwives never washed the white coating off my babies after birth. They told me to leave it on as long as I could. With all three kids, I delayed the first bath for about a week.

That coating is called vernix. It starts forming halfway through pregnancy. It's antimicrobial, moisturizes the baby's skin, and has proteins that protect against infection. Babies born earlier tend to have more of it. Babies born later have less.

The WHO recommends leaving it on for at least 6 hours, ideally 24. Yet many hospitals still bathe babies within hours of birth.

Your baby spent months building that coating. Maybe don't wash it off in the first hour.

Based™ @4thRT
@Figure_robot This was the big reveal that was supposed to blow my mind?
Figure @Figure_robot
Today we're showing Helix 02, which can tidy a living room fully autonomously. Figure is designed so that when you leave the house, your home resets exactly how you like it.
Based™ @4thRT
@eleven3s @gfodor @tszzl Perhaps try actually engaging with ideas that contradict your beliefs instead of assuming you're smarter than everyone who disagrees
roon @tszzl
the rationalists writ large were mostly right about most things btw. if you instinctively snicker about yudkowsky, scott, or whomever i take you to be a fish who’s unaware of the water
Based™ @4thRT
@eleven3s @gfodor @tszzl If we policed language and ideas based on what people might do upon hearing them, nobody would ever say anything
333333333333 @eleven3s
@4thRT @gfodor @tszzl I have nothing but the utmost faith in the ability of emotionally unstable people to misinterpret such statements in a way that encourages maximum violence