Network_illustrated

139 posts

@Grid_illustrate

A studio exploring complex systems & the fragmentation of truth through games, fiction & AI art. One concept, infinite worlds

Joined February 2025
26 Following · 12 Followers
Network_illustrated
Network_illustrated@Grid_illustrate·
@Kekius_Sage As Jesus said, God is in your heart. That can be interpreted in a modern way: God is all of reality, you are a part of reality, and to meet God is to accept that you are the very phenomenon that the framework of God is attempting to explain.
English
0
0
0
4
Network_illustrated
Network_illustrated@Grid_illustrate·
@growing_daniel Not really. As I understand it, the world is very much real, but the perception of it is temporal on the societal/cultural/individual level. The meaning-making frameworks we use to define our world are changing constantly. Reality is always here; our understanding of it changes.
English
0
0
0
97
Daniel
Daniel@growing_daniel·
Struggling with the sense that this world is nothing but an illusion
English
657
81
1.6K
72.1K
The Enforcer
The Enforcer@ItsTheEnforcer·
Please stop with the “Grok, is this true” replies — I can’t take it anymore, please.
English
942
55
1.6K
120.8K
Network_illustrated
Network_illustrated@Grid_illustrate·
@DaveShapi It's not that hard to create sub-system prompts for a project, in user settings or in chat, that model the reasoning structures and interactions to better adapt to the inquiry one has or the work one is doing, though? I mean, isn't it still a tool that's easy to steer? (Rough sketch of what I mean below.)
English
0
0
0
106
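A minimal sketch of the layered sub-system-prompt idea from the reply above, assuming a generic chat-style API. The names here (call_model, BASE_PROMPT, PROJECT_PROMPTS) are hypothetical placeholders, not any specific vendor's SDK; swap in whatever client you actually use.

```python
# Hypothetical sketch: layer a project-specific "sub-system prompt" on top of a
# base system prompt so the model's reasoning style adapts to the task at hand.

BASE_PROMPT = "You are a careful assistant. Reason step by step and state your assumptions."

# Per-project sub-prompts (illustrative placeholders).
PROJECT_PROMPTS = {
    "worldbuilding": "Frame answers as design notes for a fictional complex-systems setting.",
    "code_review": "Answer as a terse reviewer: list concrete issues first, then suggested fixes.",
}

def build_system_prompt(project: str) -> str:
    """Stack the project-specific sub-prompt under the base system prompt."""
    sub = PROJECT_PROMPTS.get(project, "")
    return BASE_PROMPT + ("\n\n" + sub if sub else "")

def ask(project: str, question: str) -> str:
    """Send a question with the composed system prompt via a placeholder client."""
    system = build_system_prompt(project)
    return call_model(system=system, messages=[{"role": "user", "content": question}])

def call_model(system: str, messages: list) -> str:
    # Placeholder: wire this up to the chat API of your choice.
    raise NotImplementedError("Connect this to a real chat-completion client.")

if __name__ == "__main__":
    # Show the composed prompt without calling any external service.
    print(build_system_prompt("worldbuilding"))
```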
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
Anthropic doesn't respect its users' intelligence. They do respect their users as humans, though.
English
29
2
138
7.6K
Network_illustrated
Network_illustrated@Grid_illustrate·
Well, there are a lot of ways, depending on circumstances, the individual, etc. What's really underdiscussed though (to my knowledge) is the time it takes. If the circumstances leading to depression for person X are a trajectory of, e.g., 15 years, the healing period can not be a weekend or one changed habit. It's a process, and expectations of how "good" or "bad" you are at fixing your habits should be put into the context of those timescales, and not: fuck... I did not succeed with depression after a week... I suck...
English
0
0
0
436
blue
blue@bluewmist·
genuinely how do you fix depression
English
321
40
429
112.2K
Network_illustrated
Network_illustrated@Grid_illustrate·
@NicHulscher Subscribe to Pro Ultra, $200/month:
- Access to real synthetic personalised emotions 24/7
- Skip dream commercials
- Anonymous training data guaranteed
- Limited deep behavioural optimization
English
0
0
0
7
Nicolas Hulscher, MPH
Nicolas Hulscher, MPH@NicHulscher·
🚨ELON MUSK: “With something like Neuralink… we effectively become maybe one with the AI.” Big Tech and their rapidly advancing AI brain implants are bringing forth the merger of human consciousness and artificial intelligence. Welcome to dystopia.
Nicolas Hulscher, MPH@NicHulscher

Trump just signed a directive to accelerate 6G deployment to operate implantable technologies, which will include AI brain chips known as the Biological Interface System to Cortex (BISC). I will NOT take the 6G AI brain chip.

English
312
334
693
132.1K
Network_illustrated
Network_illustrated@Grid_illustrate·
@DanAdvantage All said, in my toothbrush ibhas 209 fingertoppers in my salad!! Away foul fakir, may your spulscape be filled with green bananas?
English
0
0
0
11
Dan Advantage
Dan Advantage@DanAdvantage·
prove you're not a bot right now
English
40
1
33
1.9K
Network_illustrated
Network_illustrated@Grid_illustrate·
@DaveShapi This setup would likely get shredded in EU courts as anti-competitive and corrupt. We'd see investigations from bodies like OLAF (anti-fraud office) or national watchdogs, with fines up to 10% of revenue.
English
0
0
1
11
Network_illustrated
Network_illustrated@Grid_illustrate·
I get the realist take: geopolitics, China, lose influence and you're done. But this wasn't self-sabotage.

Core issue: Anthropic held narrow red lines - no mass domestic surveillance dragnet, no fully autonomous lethal systems until the tech is provably safe. Dario's said they're open to AW once reliable. That's level-headed, not martyrdom. They already bent plenty to stay in the game.

Leaks hit first (whistleblowers → media like Axios/NYT), turning quiet talks into a public mess. Once out, you either stand by principles or fold and look weak internally.

Then OpenAI gets basically the same lines (plus extras: no high-stakes auto decisions, cloud-only, legal ties to current US law + human loop), and even pushes for everyone to get them. So those "unacceptable" guardrails suddenly work? A trust-me-bro contract, or the $25 million donation to Trump???

This seems more like a loyalty test/bribe under an admin that runs on personal allegiance over protocol than Anthropic burning bridges. If steering from within means bending on constitutional basics, it's not steering, it's being steered.
English
1
0
2
52
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
Hey folks, thank you. I know my latest video about Anthropic rubbed some people the wrong way and provoked much spirited debate. While a few comments were uncalled for, most were reasonable, whether or not there was agreement or disagreement. Thank you for your attention and engagement. This stuff is important.

One thing about me, that I hope you understand and value, is that I do not pull my punches. I remain deeply disappointed (but unsurprised) by Anthropic's decisions, most notably the way Dario handled this disagreement with the Pentagon. After reading almost every comment (both here and on Twitter) as well as extensive research over the last few days, I wanted to lay out some key nexus points where consensus has landed. I will provide my editorialization and commentary AFTER, but this first section reports on the probably 15 hours of research I've done over the weekend.

FIRST - Almost everyone agrees that the Pentagon (or more specifically the Trump administration) vastly overreacted with the SCR (supply chain risk) designation. Donald Trump is known for his bombast, policy maximalism, and "nuclear option" preference, particularly when there's a personal slight. Thus, while it is unsurprising that they would go as far as the SCR, the SCR represents several problems simultaneously. First, it sets a very dark precedent. Dario was right when he said it felt "punitive," and I would actually go further and say that it was retributive. The distinction is subtle but important. The fallout from the SCR designation (irrespective of whether or not it sticks) is a "chilling effect" on AI labs: "fall in line or else." It's one thing to lose a military contract (which happens all the time). It's another for such a large blast radius to be the default option.

SECOND - While the government's reaction was dramatic and unnecessary, Dario bears a disproportionate amount of the blame for what happened. First, the negotiations had been going on for months. There had been some leaks since at least January, but nothing quite as sensational as when Dario published the "good conscience" blog post. The timeline matters. Dario took a private, relatively quiet dispute public, apparently in an attempt to gain sympathy. It backfired spectacularly. Only after that blog post did the administration turn up the heat to absurd levels. Furthermore, the fact that OpenAI got a nearly identical deal literally within hours of Anthropic being kicked out makes it seem like there was something else going on (perhaps it was personal, or people were tired of Dario). (Note: I don't want to editorialize further on this point until later, as some of it comes down to unsubstantiated rumor.) However, the fact that OpenAI got the deal and Anthropic didn't seems inconsistent.

THIRD - By any reasonable accounting, Anthropic has been materially and structurally weakened by this move. While some people believe that, by taking the moral high road, Anthropic has strengthened their position, this is simply not true. They've lost out on hundreds of millions, if not billions, of dollars in contracts over the next few years. This is important because Anthropic had a unique stance on AI safety and alignment. Without money, their research doesn't happen as quickly or robustly. Furthermore, the contagion effect is real and is already having fallout down to local government levels, even those without ties to the Pentagon or federal government.

In addition to the large blast radius, the full extent of which we have not yet seen, there is the distinct possibility of brain drain. Several high-level people already departed Anthropic before this blew up. Now, Anthropic has signaled very loudly that they will make ideological choices to their detriment, which will attract certain types of talent but alienate others.

FOURTH - This outcome runs contrary to Dario's own self-expressed strategy of "steering from within." This is the method of gaining institutional influence through technical legitimacy and positioning. Anthropic has now left a moral and ethical vacuum within the establishment, which is rapidly being backfilled by the "Tech Right" such as xAI and OpenAI. That is not to take sides; it is to point out that diversity of thought has been substantively curtailed within the establishment because of this action. Anthropic has lost their voice, pull, and influence in a big way (not fully, but largely). What's even more confusing is that the evidence is clear: the Pentagon was trying to make a deal down to the very last moment, and in fact, they were apparently only a few words away from agreeing on terminology when Dario decided that an internal meeting was more important than hashing out the final details. He refused to come to the phone while the Pentagon was trying to salvage the deal. This fact was shocking to me, and still perplexes me.

-----

The above was my attempt to summarize the most salient, direct points. Now I will share my more editorialized opinions.

FIRST - Anyone who cares about AI safety in any capacity should be disappointed in Anthropic. The reason is that this outcome was largely self-inflicted, and at least a month or two in the making. It was not a sudden rupture, but a slow burn apparently driven by Dario's recalcitrance. While I strongly disagree with Anthropic's beliefs about AI safety, the nature of AI itself, and their corporate culture, I would still prefer to see them with a seat at the table. Why? Expressly BECAUSE they have a different viewpoint from others. Longtime fans will remember when I wrote "Claude is a benevolent entity" and spoke about the fundamentally different ethics and epistemics Anthropic was using, and why it was helping them secure the lead. Of course, while getting "canceled" by the Trump administration will not immediately end their company, it will dramatically limit their reach and revenue over the coming years. I still believe that it could prove to be a fatal mistake, though many disagree with that stance. No one (insiders, analysts) believes that Anthropic comes away stronger, even if they have a short-term moral boost.

SECOND - There are rumors and opinions that the "Tech Right" has been applying quiet pressure for the Pentagon to distance itself from Anthropic for various reasons. Allegedly, people ranging from Elon Musk to David Sacks have been openly criticizing Anthropic (not sure if this is public or private or through backchannels). It is also unclear if this pressure campaign was coordinated, or if it had any specific impact on the outcome. It seems unlikely, as the Pentagon was literally on the phone with Anthropic as the deadline passed, trying to come to a deal and avert catastrophe. But I do feel like it's important and responsible to bring this last part up, especially when some of the facts don't really pass the sniff test. Why did OpenAI get the deal, almost exactly as Anthropic wanted it?

There are, of course, a few possible reasons: perhaps EVERYONE was scrambling to put out the fire as quickly as possible. Sam Altman has repeatedly used the language of "de-escalation" and has been quite vocal about how the SCR designation is a bridge too far. However, the timing and speed of everything does raise a few eyebrows.

THIRD - I found it quite fascinating that, of the people who believed Anthropic getting kicked out of the establishment was a good thing, there were two distinct camps. The first camp is the "accelerationists/Tech Right," who agree that getting EA influence and "Decel" influence out of the Pentagon is the optimal policy for America. This is obvious tribalism. However, there was a nontrivial percentage of pro-Anthropic people who ALSO agreed this was a good thing, albeit for fundamentally different reasons. Those reasons include things such as "good, now the military will have zero influence over Anthropic, and they can build AI for the people." This, to me, represents an interesting and unique opportunity that could go in many different directions. For instance, one potential direction for Anthropic could be to realize the value of Open Source AI. If they truly want to empower individuals, as others like Emad Mostaque do, they might very well start releasing Open Source models to level the playing field. This is, of course, 100% speculation on my part. My point here is to say that there could be a silver lining to the whole debacle. It remains to be seen. But as they say, "let no good crisis go to waste," and while I fundamentally disagree with Anthropic on many points, I don't think they are that stupid. (I mean, blowing this up this badly was pretty dumb, but beyond that...)

FOURTH - On the topic of domestic surveillance and autonomous weapons. Many people demand to know why I think these are "good" things. I have never said "yes, we should enable mass domestic surveillance and Terminator." I have, however, LONG said these kinds of things are inevitable. There is a very large difference between an ideological moral judgment (i.e. something is 'good' or 'bad') versus acknowledging the currents of technology and power. I have literally produced hundreds of videos across these topics, one of the biggest being my Terminal Race Condition video. This episode vindicates that video. What we just experienced, in real time, was a demonstration of a terminal race condition. China and America are rushing ahead, and any friction gets smoothed over. This was a core fact that I worked to try and get the AI safety movement to recognize over the last few years, but with every conversation I had, they basically stuck their fingers in their ears and said "NOPE PAUSE CAN DEFINITELY HAPPEN." When I say that I am a structural realist, this is what I mean. I am actually somewhat aligned with the EA and Rationalist and Longtermist frame - the telos of "maximize future human life." I just disagree vehemently with their Bayesian back-of-the-napkin math and their self-destructively ideological stance. I don't personally like Palantir, which conducts domestic surveillance. But shooting myself in the foot over something that I can do nothing about is about as effective as pissing into the wind.

FIFTH - On balance, I do think humanity has been materially harmed by this outcome. The telos of "maximize future human life" demands dialectic to get there. Anthropic was a radically different voice in the establishment (or Military Industrial Complex), and even though they are often wrong, they are also often right.

Being sidelined, which as far as I can tell Dario chose by playing the martyr, is idiotic. He has acted, in my estimation, drastically outside of the expressed values he and his company hold. If he really, truly held to those values, he'd do whatever he could to stay in the seat. To stay the first frontier AI lab working with the military. There are MANY reasons for this beyond money and influence (both of those are meritorious reasons when you think about the telos of maximizing human life). One reason is facing the novel challenges that defense and intelligence afford a company. Necessity is the mother of invention, and constraints are the father of creativity. Anthropic will now NEVER see a very interesting, dynamic set of problems, which will substantively constrain their output and insight. Possibly forever. Without the novel problem space that military and intelligence offers, it's possible that Anthropic's engineers will miss certain algorithmic and game-theoretic insights. Again, the optimal policy for the espoused telos is "as many labs with different paradigms sitting at the table as possible" - many heads are better than fewer heads.

In conclusion, I will say that I think Dario should do whatever it takes to get Anthropic back in the good graces of the US government, up to and including resigning as CEO. If he honestly believes in the values he's expressed, he should be willing to make any sacrifice necessary (within reason), including personal sacrifice, to see it through.
English
18
5
66
5.6K
Network_illustrated
Network_illustrated@Grid_illustrate·
Well, that's not entirely true now, is it? It was whistleblowers on both sides that leaked to the media, blowing this up. The feud itself had been ongoing for a long time and blew up after the reporting from Axios and the New York Times(?). My point is that at its core, it's about a company standing by its core principles: having already compromised a lot, this isn't acceptable, and it blows up in the media. The core is still: we just want to make sure our systems aren't breaking the constitution on mass surveillance, and no autonomous killing machines (yet). Kinda reasonable if the theatre of media is removed?
English
0
0
2
283
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
@Grid_illustrate Yes, correct. It should not have been public. But Dario is the one who blew it up in public, which provoked the government. He then showed zero contrition, ran to the media, and tried to mog the government.
English
1
0
7
311
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
The Anthropic debacle was a completely self-inflicted, slow-motion trainwreck. Anthropic and the Pentagon were going back and forth for MONTHS. Axios and other outlets reported on the debate a while ago. Despite what Dario would lead people to believe, this was not some last-minute escalation. In reality, Anthropic is the company that went public, trying to put egg on the face of the Pentagon. That was the FA phase. Now we're going to watch the FO phase.

If I sound judgemental of Anthropic, it's because I am. The popular narrative is "Anthropic is heroic for standing up to the government!" which is silly. Anthropic's OWN STRATEGY has been to stay an insider to "steer from within." They've lost influence, income, and official standing. Yes, they've gotten a surge of consumer support. But that pales in comparison to the billions in revenue (and policy influence) they've forfeited in the long run.

While I don't personally agree with the safety and alignment direction Anthropic is going in, I'd prefer a world in which Anthropic has a seat at the big kids' table. There is, however, a nontrivial contingent of people who believe the heroic sacrifice was somehow worth it.

At the same time, the prevailing view is that the Trump administration is overreacting with the SCR (supply chain risk) designation. With that being said, Trump is known for using nuclear options and maximalist policies. This outcome is not surprising.

Finally, the fact that OpenAI got the same exact red lines that Anthropic wanted tells you that there's more to the story. To me, it reads as ongoing dysfunction, recalcitrance, and Anthropic having spent all their goodwill. I still cannot comprehend Dario's calculus. He's harmed his company, his mission, the American people, and the Western way of life. Seemingly due to sheer pride or arrogance.
NIK@ns123abc

🚨BREAKING: US TREASURY JUST TERMINATED ALL USE OF ANTHROPIC PRODUCTS ITS SO OVER

English
91
53
546
59.5K
Network_illustrated
Network_illustrated@Grid_illustrate·
Yeah, that's my point: if the experts in a field say this is so serious that we must write it down until we are sure it works, then when it works, go nuts. I really don't understand the problem. This feud should not even be public, because the hypothetical shouldn't even be a discussion in this case.
English
1
0
1
328
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
@Grid_illustrate The most important thing for you to know is that none of the red lines were even remotely crossed. They were literally arguing over hypotheticals. Furthermore, Dario is explicitly not against autonomous weapons.
English
5
1
18
1.2K
Wes Roth
Wes Roth@WesRoth·
Just days after the Pentagon banned Anthropic for refusing to drop its AI safety guardrails, OpenAI has stepped in and signed a classified contract with the Department of War!

OpenAI has agreed to deploy its advanced AI models in classified military environments. However, they claim this contract features the strictest, most expansive safety guardrails ever applied to a national security deployment.

OpenAI maintained the exact same ethical boundaries as Anthropic, forbidding the use of its technology for mass domestic surveillance and fully autonomous weapons, while adding a third restriction against "high-stakes automated decisions" like social credit systems.

OpenAI negotiated a multi-layered workaround to enforce their red lines:

🔹Cloud-Only Deployment: By refusing to deploy models on "edge" devices, it is physically impossible for the military to use OpenAI's models to independently power mobile, fully autonomous weapons.

🔹Contractual Legal Locks: OpenAI explicitly tied the contract to current U.S. laws (like the Fourth Amendment and existing DoD directives on human-in-the-loop targeting). If the government changes these laws in the future, the contract ensures the original safety standards still apply.

🔹Human Oversight: Cleared OpenAI engineers and safety researchers will remain "in the loop" to monitor and independently verify that the boundaries aren't crossed. OpenAI also explicitly refused to provide the military with "guardrails off" models.

Furthermore, OpenAI requested that the Pentagon make these exact same contractual terms available to all AI labs to de-escalate the standoff.
OpenAI@OpenAI

Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies. We think our deployment has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Here's why: openai.com/index/our-agre…

English
19
6
32
5.3K
Network_illustrated
Network_illustrated@Grid_illustrate·
@DaveShapi World C, where Anthropic is at the adults' table, which seems to be what went down
English
0
0
0
10
David Shapiro (L/0)
David Shapiro (L/0)@DaveShapi·
Which world is better?
World A) Anthropic has a seat at the big kids' table
World B) Anthropic has no seat at the big kids' table
English
57
2
19
6K
Network_illustrated
Network_illustrated@Grid_illustrate·
@ai_sentience The realisation that simulation theory is a collective hallucination based on science fiction
English
0
0
0
27
Tivra
Tivra@WasPraxis·
@nadimcheaib @wesamo__ Something like "Projects", to better categorize all the tabs. Number one missing feature.
English
1
0
1
444