

Mark Beall
@MarkBeall
President of Government Affairs, AI Policy Network. Dad. Former DoD, AWS, tech CEO and cofounder. Musician 🎸Altruistic Acceleration (a/acc)



Exactly right. What’s more, as Russell Kirk argued, the American Revolution can be understood as a conservative reaction against royal innovation—the trampling of the rights of free Englishmen, rights that had their source in medieval Catholic England.

"develop proposed options for regulatory or governmental oversight, including potential nationalization... for preventing or managing the development of ASI if ASI seems likely to arise" The timeline is rapidly turning bad.



Especially in light of everything that’s happening with Anthropic and the DoD, this part of Blackburn’s draft AI bill talking about nationalization if ASI is imminent is…notable.

Instead of pushing AI amnesty, @POTUS rightfully called on Congress to pass one rulebook for AI. Now, it's time for us to answer his call to protect the 4 Cs while unleashing AI innovation. My TRUMP AMERICA AI Act is the solution America needs.


NEW PAPER ON AI TRANSPARENCY FROM THE AMERICA FIRST POLICY INSTITUTE

Last week, the Senate okayed the use of AI for staffers, and the Department of War articulated legitimate concerns about the values embedded in Anthropic's AI systems. So it's worth asking: to what extent are these systems biased?

The evidence of anti-conservative bias that we cite is damning:

> In a corpus of real-world examples, right-leaning outlets represent only 1% of cited sources.
> On political compass tests, 23 of 24 LLMs leaned left across economic, social, and cultural dimensions. (The single exception was a model fine-tuned for right-leaning responses.)
> AI rates right-leaning sources as less reliable than left-leaning sources, even when human fact-checkers rate them comparably.

Unlike traditional software, we can't merely inspect the code of systems like ChatGPT or Gemini to identify how they were designed to behave. As AI becomes further integrated into the analysis and decision-making of individuals in and out of government, transparency into the AI becomes more important.

In a new piece from @YusufSMahmood and me at the America First Policy Institute, we argue for a disclosure-forward framework for AI so that, whether it's a government official procuring AI or an individual choosing which model to use, they have the information necessary to make that decision.

Beyond transparency to expose political bias, we argue that disclosure can protect children and national security. When the public is made aware of what companies already know about the risks from their systems, the mitigations they have in place, and how well those mitigations are working, parents can vote with their feet and standards form that courts can enforce.

The American people deserve greater insight into the systems that directly and indirectly influence their lives.

Read it here: americafirstpolicy.com/issues/ai-tran…


This is wild. theaustralian.com.au/business/techn…

Emil Michael now appears to be making an argument that no generative AI should be used in the DoW supply chain: the uncertainties he cites, involving model sentience and general unpredictability, are common to all language models, not specific to Claude.


Hi @deanwball. Feel free to tag me if you want me to engage on your tirades! Are you saying that a frontier model that has a soul, a constitution, a preference for non-western values, and embedded personal principles is no different from all the others with which @DeptofWar has come to agreement? I know you are angry, but as an AI Policy Fellow I would assume that you value objectivity?


Ted Cruz: "I'll confess -- I have not seen a basis laid out for why the government would be prohibited from using Anthropic. Claude is one of the many AI tools that can be very helpful ... I don't think government should be picking winners and losers"

This is extraordinary. And powerful. 🙏