
FlyingOctopus0
@FlyingOctopus0
Interested in machine learning.




bbno$ has commissioned a ‘Family Guy’ fan animation for his song ‘bag TF up.’ Animation by Alex Sarzosa.


The failures of both Meta and xAI to maintain parity with the frontier labs, along with the fact that the Chinese open-weights models continue to lag by months, mean that recursive AI self-improvement, if it happens, will likely come from a model built by Google, OpenAI, and/or Anthropic.


I find it fascinating that huge majorities of almost every group agree that people have a soul or spirit in addition to their physical bodies. Even 69% of agnostics agree with that. The huge outlier is atheists: just one-third think they have a soul.


In my interview with Dario Amodei, I suggested to him that the mere perception of A.I. consciousness, irrespective of the reality, may incline people to give over power to machines. I think this incredibly defeatist @Noahpinion essay is a case study: noahpinion.blog/p/you-are-no-l…


This is being read as a philosophical farewell. It's a resignation letter from the head of Anthropic's Safeguards Research Team, and the most important passage is buried in paragraph three: "I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most." That's the person responsible for keeping Claude safe telling you the pressures to ship are winning.

Mrinank Sharma built the Constitutional Classifiers system, developed defenses against AI-assisted bioterrorism, and authored one of the first AI safety cases ever written. Two years of work at the exact intersection of "make the model safe" and "ship the model fast." And he just walked away.

Now zoom out. Dylan Scandinaro, another Anthropic AI safety researcher, left last week to become OpenAI's Head of Preparedness. Harsh Mehta and Behnam Neyshabur, both senior technical staff, also departed in the past two weeks. Four notable exits in a single month from the company that sells itself as the responsible AI lab. Meanwhile, Anthropic is in talks to raise at a $350B valuation and just launched Opus 4.6 last Thursday. The commercial engine is accelerating; the safety talent is dispersing.

This is the core tension of every AI company right now: the people building the guardrails and the people building the revenue targets occupy the same org chart, but they optimize for different variables. When the pressure to scale wins enough internal battles, the safety people don't fight forever. They leave and write beautifully worded letters about integrity.

Sharma's next move tells you everything. He's pursuing a poetry degree. When your head of safeguards research decides the most authentic use of his time is writing poems instead of writing safety cases, that's a signal about what he believes the safety cases were actually accomplishing.
