Search results: "#TruthInTechnology"

14 results
Dock Vulner @DVulner
It's time for tech to separate truth from fiction. Let's be real, "Guthrie" good intentions aren't enough 🤣 Digital forensics might help, but it'll likely raise more questions than answers 🤔 #DigitalForensics #TruthInTechnology
Stephen A Schoenhoff @Schoenhoff58480
Americans need accountability from their government and from tech providers both. These decisions should not be rushed or made in closed door meetings. #TruthInTechnology #AIethics
Shanaka Anslem Perera ⚡ @shanaka86

The Pentagon is about to give an American AI company the Huawei treatment. Not because it’s Chinese. Not because it’s a spy risk. Because it refuses to let the military use its AI for mass surveillance of Americans and fully autonomous weapons.

This morning, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon. A senior Defense official told Axios: “This is not a friendly meeting. This is a sh*t-or-get-off-the-pot meeting.”

Here’s what’s actually happening: Claude is the only AI model running inside the Pentagon’s classified systems. The most capable model for sensitive defense and intelligence work. It was used in the Maduro raid in January through Palantir, the first confirmed use of a commercial AI in a classified military operation.

Now the Pentagon wants all restrictions removed. “All lawful purposes.” Including capabilities that would let the military continuously monitor the social media posts, voter registration, concealed carry permits, and demonstration records of every American citizen using AI at scale.

Anthropic said no to two things: mass surveillance of Americans and fully autonomous weaponry.

The Pentagon’s response: threatening to designate Anthropic a “supply chain risk.” That designation is reserved for foreign adversaries. The last company to receive it was Huawei. It would force every defense contractor in America to certify they don’t use Claude in their workflows. Given that 8 of the Fortune 10 use Claude, this would cascade through the entire defense industrial base.

A senior Pentagon official told Axios: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.” Another official: “The problem with Dario is, with him, it’s ideological. We know who we’re dealing with.”

Meanwhile: OpenAI, Google, and xAI have already agreed to remove their safeguards for military use. OpenAI deployed ChatGPT to all 3 million DoD personnel through GenAI.mil. xAI holds a separate $200M contract backed by Musk’s political proximity to the administration. Anthropic is the only one that said no.

Think about what’s being asked. The company whose own safety chief resigned two weeks ago warning “the world is in peril.” The company that just published a report showing its most advanced model “knowingly assisted with chemical weapons research” in testing. That company is being punished for refusing to hand the U.S. military unrestricted access to that same technology.

The Pentagon admits competing models “are just behind” for classified work. They need Claude. But they’re willing to blow up the relationship rather than accept two restrictions that protect American citizens from their own government.

This is the most important story in AI right now and almost nobody is framing it correctly. It’s not about one $200M contract. It’s about whether the U.S. military can compel a private company to remove safety restrictions on technology its own developers have demonstrated is dangerous, under threat of receiving the same designation as a Chinese national security threat.

Dario Amodei walks into that meeting this morning with $380 billion in enterprise value, $14 billion in revenue, and a principle that may cost him both. Full institutional analysis on my Substack. open.substack.com/pub/shanakaans…

Stephen A Schoenhoff @Schoenhoff58480
What role is AI playing? Is FB’s algorithm or others being leveraged to radicalize and recruit? Are LLMs being used to process intel? We need truth-centric AI and AI governance that promotes human flourishing. #AIEthics #TruthInTechnology
Eric Schwalm @Schwalm5132

As a former Special Forces Warrant Officer with multiple rotations running counterinsurgency ops—both hunting insurgents and trying to separate them from sympathetic populations—I’ve seen organized resistance up close. From Anbar to Helmand, the pattern is familiar: spotters, cutouts, dead drops (or modern equivalents), disciplined comms, role specialization, and a willingness to absorb casualties while bleeding the stronger force slowly.

What’s unfolding in Minneapolis right now isn’t “protest.” It’s low-level insurgency infrastructure, built by people who’ve clearly studied the playbook. Signal groups at a 1,000-member cap per zone. Dedicated roles: mobile chasers, plate checkers logging vehicle data into shared databases, 24/7 dispatch nodes vectoring assets, SALUTE-style reporting (Size, Activity, Location, Unit, Time, Equipment) on suspected federal vehicles. Daily chat rotations and timed deletions to frustrate forensic recovery. Vetting processes for new joiners. Mutual aid from sympathetic locals (teachers providing cover, possible PD tip-offs on license plate lookups). Home-base coordination points. Rapid escalation from observation to physical obstruction—or worse.

This isn’t spontaneous outrage. This is C2 (command and control) with redundancy, OPSEC hygiene, and task organization that would make a SF team sergeant nod in recognition. Replace “ICE agents” with “occupying coalition forces” and the structure maps almost 1:1 to early-stage urban cells we hunted in the mid-2000s.

The most sobering part? It’s domestic. Funded, trained (somewhere), and directed by people who live in the same country they’re trying to paralyze law enforcement in. When your own citizens build and operate this level of parallel intelligence and rapid-response network against federal officers—complete with doxxing, vehicle pursuits, and harassment that’s already turned lethal—you’re no longer dealing with civil disobedience. You’re facing a distributed resistance that’s learned the lessons of successful insurgencies: stay below the kinetic threshold most of the time, force over-reaction when possible, maintain popular support through narrative, and never present a single center of gravity.

I spent years training partner forces to dismantle exactly this kind of apparatus. Now pieces of it are standing up in American cities, enabled by elements of local government and civil society. That should keep every thinking American awake at night. Not because I want escalation. But because history shows these things don’t de-escalate on their own once the infrastructure exists and the cadre believe they’re winning the information war.

We either recognize what we’re actually looking at—or we pretend it’s still just “activism” until the structures harden and spread. Your call, America. But from where I sit, this isn’t January 2026 politics anymore. It’s phase one of something we’ve spent decades trying to keep off our own soil.
