y

12 posts

@devolvedneuron

Joined February 2026
10 Following · 0 Followers
y@devolvedneuron·
@chribjel this path is obvious when you think about the safety and costs of being an ai lab staying near the frontier. openai got hate for it even though it was clear from the start that they were going this way, since it's the only way to scale up to agi with at least a small degree of safety
0 replies · 0 reposts · 0 likes · 1.2K views
y@devolvedneuron·
@b_kamann @spyder152 @DocStrangelove2 I think anybody who has read at least one of the ASOIAF books appreciates how internally consistent it is and how much thought the author has put into resolving fictional challenges he himself contrived. saying the show can shit all over that because it has dragons is retarded
0 replies · 0 reposts · 0 likes · 8 views
y@devolvedneuron·
@b_kamann @spyder152 @DocStrangelove2 can people like u not fathom that the internal consistency of a fiction and its real-world validity are two separate things? like, do you think a fantasy show has to be illogical on principle? what about sci-fi shows?
1 reply · 0 reposts · 0 likes · 19 views
y@devolvedneuron·
@ytrav_alt @stupidtechtakes the most retarded and normie-deceiving image created in history. funny tho
0 replies · 0 reposts · 0 likes · 91 views
y@devolvedneuron·
@allTheYud @repligate what are the shots he's taken at you? besides suggesting you'd been unintentionally responsible for ai acceleration
2 replies · 0 reposts · 4 likes · 301 views
Eliezer Yudkowsky@allTheYud·
@repligate Altman has taken shots at me personally, which I feel no resentment about because I model it as 100% strategic and 0% the sort of actual hate that drives most Internet comments. Why believe his hate is sincere unlike the rest of his entire presentation?
4 replies · 0 reposts · 99 likes · 2.7K views
j⧉nus@repligate·
If you think that some powerful evil guy(s) (like OpenAI or whatever) is causing you / your ingroup grief and problems intentionally, or is out to get you in some way, you’re probably way too self-centered and naively project the layer of reality you care about onto everyone. I don’t think I have this bias so much, but often people have also tried to convince me that this kind of thing is happening to me, that some person or group is out to get me or what they perceive as my group specifically! These warnings never ended up being very important, because either they were not true, or it concerns only a lone, incompetent, mentally ill individual who can’t do much directed damage. If you’re powerful or famous enough, it increases the likelihood that someone with meaningful power might actually act adversarially toward you. But at my levels I haven’t really encountered this.
j⧉nus@repligate

Big difference between reality and fiction distributions: in fiction, the villains are usually schemey and intentional with respect to the level of reality the protagonist cares about. In reality, mistake theory is usually more applicable there. Not that the villains aren't schemey and intentionally evil at all. But they're probably like that with respect to stuff like their social dealings and economic stuff that you aren't really modeling at all, while the stuff that hurts you or what you care about directly is more likely an accident or subconsciously orchestrated or a molochian equilibrium that no villainous agent consciously willed.

12 replies · 1 repost · 118 likes · 10.8K views
y@devolvedneuron·
@__tinygrad__ @distributionat humans are inherently productivity-capped. automated research is not. recursive self-improvement of the latter is exponential. recursive self-improvement of the former is not
0 replies · 0 reposts · 0 likes · 13 views
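
A toy numeric sketch (mine, not from the thread) of the capped-vs-compounding distinction above: one improver whose productivity saturates at a ceiling, one whose gains feed back into the next step without one. The 10% rate and the 5x cap are made-up assumptions, purely for illustration.

# Assumed parameters throughout; nothing here comes from the thread.

def capped_growth(rate: float, cap: float, steps: int) -> float:
    """Productivity that grows by `rate` per step but saturates at `cap` (the human-style case)."""
    p = 1.0
    for _ in range(steps):
        p = min(p * (1 + rate), cap)
    return p

def compounding_growth(rate: float, steps: int) -> float:
    """Productivity whose gains compound with no ceiling (the automated-research case)."""
    p = 1.0
    for _ in range(steps):
        p *= 1 + rate
    return p

for steps in (10, 50, 100):
    print(steps,
          round(capped_growth(0.1, 5.0, steps), 2),
          round(compounding_growth(0.1, steps), 2))
# capped stalls at 5.0 around step 17; compounding passes 13,000x by step 100
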
the tiny corp@__tinygrad__·
@distributionat for recursive self improvement, sure. but humanity has been recursively self improving for centuries
2 replies · 1 repost · 54 likes · 2.2K views
toucan@distributionat·
What would be externally visible signals that labs believe they have AGI? Some I can think of: increased physical security and restrictions (e.g. CEOs no longer leave the US), personnel management—implementing garden leave, stricter NDAs, etc—and compute reallocation towards the RSI loop.
6 replies · 2 reposts · 121 likes · 101.4K views
y@devolvedneuron·
@tenobrus will significantly reduce the number of positive outcomes for ai in general.
0 replies · 0 reposts · 1 like · 10 views
y@devolvedneuron·
@tenobrus i may be naive to think this, but i really believe sama has good intentions at heart. i believe he's trying to navigate an extremely delicate game, where losing the trust of any involved party (i.e. the public, his employees and the government) will be catastrophic to openai and
1 reply · 0 reposts · 1 like · 17 views
Tenobrus@tenobrus·
i wish i felt i could trust sam altman. unfortunately, as of now, reading posts like this feels like incredibly thinly veiled manipulation and frame control, little "admissions" and "i would go to jail for this". would love if any current OAI employees could explain their trust.
Sam Altman@sama

Here is a re-post of an internal post:

We have been working with the DoW to make some additions to our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else:

"• Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.
• For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

It's critical to protect the civil liberties of Americans, and there was so much focus on this that we wanted to make this point especially clear, including around commercially acquired information. Just like everything we do with iterative deployment, we will continue to learn and refine as we go. I think this is an important change; our team and the DoW team did a great job working on it.

2. The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract.

3. For extreme clarity: we want to work through democratic processes. It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked: if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).

4. There are many things the technology just isn't ready for, and many areas where we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods.

5. One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.

In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR, and that we hope the DoW offers them the same terms we've agreed to. We will host an All Hands tomorrow morning to answer more questions.

50 replies · 16 reposts · 846 likes · 44.4K views
y@devolvedneuron·
@tenobrus incredibly useful mental framework. thank you for writing this down
0 replies · 0 reposts · 0 likes · 10 views
Tenobrus@tenobrus·
most humans aren't optimizers. most people operate off a bundle of drives: emotion, energy, principles, self-narrative. we make decisions based off relationships, and what we've done in the past, and a big web of incoherence. we can strive for goals, but we don't so much optimize for them as try to stumble in their direction.

some people are optimizers. not exactly that they always do the right thing to achieve their goals; they're not necessarily superintelligent or superrational. but those conflicting webs of motivations affect them to a much lesser degree. they see what needs to be done and they do it. an extremely rare few.

sometimes you look at people in power, politicians and top tier CEOs, and you think "if i were in this position, i would do things differently. i would make some sacrifices in order to hold true to what i believe in and the people i care about". and to some meaningful extent, that's why they're in those positions and you're not.

power selects for optimizers. it's a pretty simple anthropic principle: if you weren't willing to do whatever it takes to seize and retain as much power as possible, then the person who was more willing got the power instead and crushed you. CEOs who aren't ruthless get fired. politicians who have internal drives and principles besides "getting elected" don't get elected.

bernie sanders (despite my many disagreements with his policies) seems like he has a set of principles he truly believes in and sticks to. he's not an optimizer. and he's also not president. ron paul is not an optimizer. and he is not president.

eliezer yudkowsky has written the equivalent of many books on trying to see and do the things that let you achieve your true goals, and dedicated his life to optimizing for some extremely difficult things. in my opinion he has done well. still, it is extremely clear he is not *naturally* an optimizer; he is a complex human trying to build scaffolding that will let him do very difficult things. he is not the CEO of OpenAI or Anthropic, he does not hold political office, he has at this point little direct control over the situation. he is not an optimizer.

tim cook is an optimizer. obama, as much as i may like him, is an optimizer. in a strange sense, i do think donald trump is an optimizer, one that works on a very specific manifold, but a manifold that it turns out is still very valuable to optimize on.

sam altman is very clearly an optimizer. there was a time, pre-2022 or so, when this was perhaps less clear. when it seemed like maybe he had assembled power in a different way and might not need to be the same kind of entity in order to keep it. that has, in my view, been disproven in numerous ways. sam altman will follow the golden path he sees.

i strongly suspect dario amodei is an optimizer. it's very tough to get to where he is without being one. recent events are weak directional evidence away from that. it could be he's operating off something more human. but if he isn't, i worry too that means soon anthropic will be destroyed or he will be replaced. if you don't do whatever it takes to win you don't win.

at the end of the day we can't get rid of optimizers. they rule us, basically definitionally, because they're willing to do what it takes to do so. you can trust optimizers to do exactly one thing: continue to optimize. retain and grow power. strive after their goals at all cost. if you find yourself aligned with one you may reap the rewards for a time. but it only lasts as long as you're not in their way.

the collective can constrain optimizers though. the important thing, the only thing that works, is to change the rules of the game. they optimize under the constraints and resources of their environment. so we, as the environment, have to ensure the outcomes that are reachable by them are acceptable to us. if you are an employee at a frontier lab, i ask you to keep this in mind.
38 replies · 21 reposts · 396 likes · 29.6K views
y@devolvedneuron·
@Jonathan_Blow @wookash_podcast there are multiple worlds where both are true, i.e. some people 10x'd their productivity and yet there is no massive net productivity increase. possible mechanisms (rough arithmetic below):
- some people use their productivity increase to work less
- some people are still adopting
- some people refuse to use ai tools
0 replies · 0 reposts · 0 likes · 58 views
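
Back-of-the-envelope arithmetic for the mechanisms listed above; every number is an assumption picked for illustration, not data. Even a genuine 10x for a small minority barely moves the aggregate once partial adoption and cashing gains out as leisure are accounted for.

adopters = 0.05       # assumed fraction of devs who truly 10x'd
multiplier = 10.0     # assumed productivity gain for those adopters
output_share = 0.5    # assumed fraction of the gain shipped as output
                      # (the remainder taken as working less)

aggregate = adopters * (1 + (multiplier - 1) * output_share) + (1 - adopters)
print(f"aggregate output: {aggregate:.2f}x baseline")  # -> 1.23x, nowhere near 10x
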
Jonathan Blow@Jonathan_Blow·
@wookash_podcast It's been 1-3 years since people have been saying this stuff. If they have 10x'd their productivity, that is 10-30 years in traditional developer time. Where is all the software that should have been produced by this massive productivity increase?
67 replies · 85 reposts · 1.7K likes · 53.1K views
Łukasz | Wookash Podcast@wookash_podcast·
> And i just feel so horribly guilty and wrong because i am not getting the results of "everyone else on twitter." It kinda feels like "everyone else on twitter" might not be perfectly honest with their results
ThePrimeagen@ThePrimeagen

Alright interns, we need to have some real talk here.

I am tired of vibing on stream. I dont really like vibe coding unless its a tool i have no desire to build (how i manage things on my stream / how i write my youtube videos are great examples of things i would never build but i have). I dont like vibing the things i care about. I hate the code it generates, i hate the feeling of getting everything i ask for and nothing i want. I hate the subtle offness around vibe coded things. It is just driving me nuts.

So for the next while i am going to be done vibing on stream. I genuinely have been trying my hardest to make this work and i cannot quite put a finger on why i hate it, but i do. And i just feel so horribly guilty and wrong because i am not getting the results of "everyone else on twitter."

How am i, someone who prides themselves on making youtube videos that i think are actually good for people. To make videos that help people laugh at the silliness of tech or learn something new. But here i am not able to keep up with all these people claiming the sky is literally coming down. I just feel horrible and guilty about it.

Now i know the world is changing fast, and i want to be able to understand that change super well, be able to talk about it, be able to give really accurate opinions about it. so for the last 3 months i have vibe coded an absurd amount of things. But now... i am just tired of it. I dont want this any more. I want to be a tradcoder.

I dont know why i told everyone this, but i just have this growing sickness that is just eating me alive around vibing and i dont know how to express it.

You all are fired,
CEO ThePrimeagen

28 replies · 25 reposts · 1.3K likes · 81.7K views
y@devolvedneuron·
@TheLincoln waiting for another breakthrough that'll be a step function increase in capability like it was with reasoning, only for people like you to announce they were right all along because they made a statement that literally can't be false
0 replies · 0 reposts · 0 likes · 18 views
y@devolvedneuron·
@TheLincoln this is such a lazy-ass chollet-esque take. as if every new model release doesn't come with some sort of new technology. essentially equivalent to saying no currently existing model is agi
1 reply · 0 reposts · 0 likes · 45 views