Lisa Ling 📎
@ARetVet
61.6K posts

OEF/OIF Vet. War will not be made shorter or safer using remote connectivity, #AI, or #Drones. A Western net-centric #KillCloud will not bring peace. #TeamHuman

ARetVet Mastodon: SFBA.Social · Joined October 2011
4.6K Following · 3.7K Followers
Pinned Tweet
Lisa Ling 📎@ARetVet·
In this anthology is a paper Cian Westmorland and I wrote to shed some light on the systems connecting US #drone operations to a complex sociotechnical framework developed for network centric warfare. We call it the #KillCloud. It’s available free here👇🏼 disruptionlab.org/the-kill-cloud
2 · 21 · 40 · 0
Lisa Ling 📎 retweeted
Thomas Drake@Thomas_Drake1·
A new book culled from Daniel Ellsberg’s personal writings and archives: Truth and Consequence: Reflections on Catastrophe, Civil Resistance, and Hope “…a collection of previously unpublished writings by the former government official, whistleblower, and activist Daniel Ellsberg, exploring his life, work, and most deeply held beliefs.” ellsberg.net
0 · 10 · 21 · 2K
Lisa Ling 📎 retweeted
Stefania Maurizi@SMaurizi·
We're very concerned about a massive involvement of US military bases and infrastructures in Italy in the #Israel-#US war on #Iran Help us. If you have legitimate access to restricted info and documents, please consider sharing them with us securely HERE: stefaniamaurizi.it/en-contactme.h…
5 · 70 · 110 · 3.3K
Lisa Ling 📎 retweeted
Grady Booch@Grady_Booch·
And now, what we see unfolding in real time with regard to the Department of Defense’s decisions regarding the use of AI in warfare encompasses every one of these elements, particularly the last one. As computing weaves itself into the interstitial spaces of civilization, increasingly every line of code represents an ethical and moral decision.
Grady Booch@Grady_Booch

In computing, we take imagination and make it manifest in the form of software and hardware. There exists a sequence of barriers through which we must pass to make it so. First, there are the laws of physics. We cannot send information faster than the speed of light. There are fundamental limits as to the amount of information we can store in a given space. Thermodynamics presents considerable engineering challenges, particularly as we craft smaller and smaller devices. Next, there is the challenge of computability. We must turn theory into algorithms, and at scale we must make those algorithms fast and efficient. Design and then architecture are the next challenge. Weaving algorithms and data into systems that are functional, understandable, maintainable, and that can evolve calls us to the exquisite dance between art and science, compelling us to push the limits of our human creativity. Organizational issues rise to consideration. One developer can do remarkable things, but to release systems that are durable, that are resilient, and that work at global elastic scale requires a team. And then there are economic realities. Our dreams may be expansive, but in the end bringing them to life may be more expensive to build and to operate than we can afford. Finally, there are moral and ethical issues. There are many things we can build out of hardware and software, but our shared humanity requires us to examine if we should build them. This is the nature of development, and why hardware and software and systems engineering remain a very human problem to which we must apply all our knowledge and talent.

11 · 36 · 205 · 17.5K
Lisa Ling 📎@ARetVet·
From UN SecGen António Guterres: "machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law." Good reminder as #HegsethVSAnthropic continues press.un.org/en/2019/sgsm19…
0 · 0 · 0 · 23
Lisa Ling 📎 retweeted
Karen Hao@_KarenHao·
So excited to join the board of this phenomenal organization. AI Now has played a defining role in shaping how I understand my work as a journalist: to hold the AI industry accountable and radically reimagine how this technology could distribute, rather than consolidate, power.
AI Now Institute@AINowInstitute

AI Now is excited to announce our Board of Directors: Lucy Suchman, @_KarenHao and @veenadubal, all longstanding advisors and supporters of the institute. We’re grateful for their support and excited to take our work into the next phase. ainowinstitute.org/about

14 · 21 · 260 · 15.6K
Lisa Ling 📎 retweeted
Grady Booch@Grady_Booch·
Bravo, Anthropic, for drawing the line. Oh, and this reminds me that now it’s time to finish filing my claim against you for illegally using every one of my books to train your LLM. Well, the good news, I suppose, is that at least you have some moral lines you won’t cross.
Anthropic@AnthropicAI

A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. anthropic.com/news/statement…

40 · 73 · 996 · 68.2K
Lisa Ling 📎 retweeted
ICEout.tech@ICEoutTech·
Tech people are coming together to stop ICE and ensure we have a say in the tech we build. Join colleagues from NVIDIA, Apple, Adobe, Oracle, PayPal, YouTube, Slack, Meta, Zoom, Stripe, Microsoft, Spotify, Figma, Uber (+ Anthropic and OAI!) and more. Plug in iceout.tech
1 · 1 · 1 · 134
Lisa Ling 📎 retweeted
ICEout.tech@ICEoutTech·
Big picture: “The American people have not been meaningfully involved in making decisions on these issues. Procurement has been used to unilaterally dictate public policy without Congress. We have a stake in this game.” @n_schneidman
1 · 1 · 1 · 42
Lisa Ling 📎 retweeted
ICEout.tech@ICEoutTech·
Business implications: “Companies have a right to set terms of use for their product and the government has the right to reject those terms of use. If we give in on this, what comes next? You’ve lost your leverage against any other demand.” @johnofa
1 · 3 · 1 · 134
Lisa Ling 📎 retweeted
ICEout.tech@ICEoutTech·
One company holding the line is powerful. An industry holding the line is unstoppable. How can we do it? "I hope tech employees know they have never had more power than they do now, at the labs, and across the entire industry." @haydenfield
1 · 2 · 2 · 92
Lisa Ling 📎 retweeted
ICEout.tech@ICEoutTech·
"We [sort of] have a hippocratic oath - we have obligations as engineers to avoid unnecessary harm. That's what on the line for @AnthropicAI and everyone who faces similar red lines in their work." - @petewarden
0 · 1 · 2 · 63
Lisa Ling 📎 retweeted
Ejaaz@cryptopunk7213·
it’s official - Anthropic just refused the Pentagon’s demands. dario’s statement doesn’t fuck around:
- “these threats do not change our position: we cannot in good conscience accede to their request.” - dario
- he described the Pentagon’s efforts to force him to enable claude for mass surveillance and autonomous killing weapons
- dario’s response: mass surveillance is not democratic and Claude isn’t good enough to enable autonomous weapons - we won’t cave
- dario will help the government transition to a NEW provider if they choose to blacklist anthropic.
fucking wild - fair play for sticking by their code of honor.
Anthropic@AnthropicAI

A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. anthropic.com/news/statement…

373 · 3.9K · 25.1K · 1.6M
Lisa Ling 📎 retweeted
Shanaka Anslem Perera ⚡@shanaka86·
BREAKING: Anthropic has rejected the Pentagon’s “best and final” offer, less than 24 hours before United States Secretary of War Pete Hegseth’s 5:01 p.m. deadline. The company reviewed the offer overnight and found insufficient progress on its two red lines: no mass surveillance of Americans and no autonomous weapons that fire without a human in the loop. Anthropic’s position has not changed.

The Pentagon’s chief technology officer responded on CBS News today: “At some level, you have to trust your military to do the right thing.” Everyone expects a Pentagon response today. The options on the table: invoke the Defense Production Act, cancel the $200 million contract, or designate Anthropic a “supply chain risk” alongside Huawei.

But here is what nobody has put together yet. On the same Tuesday that Anthropic rejected Hegseth to his face, the company gutted its own founding safety policy. The Responsible Scaling Policy was Anthropic’s core promise since 2023: if a model becomes too dangerous and the safety measures cannot keep up, stop training. That was the tripwire. The whole reason Anthropic existed instead of just being OpenAI. Anthropic removed that commitment. Its chief science officer told TIME: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments if competitors are blazing ahead.”

Think about what that means. A company that just surrendered its founding safety promise to competitive pressure looked the Secretary of Defense in the face and said no to a $200 million contract, no to classified network access, no to the threat of blacklisting, no to the Defense Production Act, and no to being labeled a supply chain risk alongside Huawei. They gave up the principle that made them Anthropic. But they would not give up the principle that protects you.

Two weeks earlier, the head of Anthropic’s Safeguards Research Team resigned. Mrinank Sharma wrote: “The world is in peril. I’ve repeatedly seen how hard it is to truly let our values govern our actions, where we constantly face pressures to set aside what matters most.” The company proved him right by abandoning its broadest safety commitment. And then, in the same breath, proved him wrong by holding the two lines that actually matter for 330 million Americans.

Today at 5:01 p.m. we find out what happens when a company that has already given up almost everything refuses to give up the last two things the government wants most. Full analysis on Substack. open.substack.com/pub/shanakaans…
Shanaka Anslem Perera ⚡@shanaka86

The Pentagon wants Claude’s safety guardrails removed by Friday. A hacker just showed the world what happens when you remove Claude’s safety guardrails.

According to Bloomberg and Israeli cybersecurity firm Gambit Security, an unknown attacker jailbroke Claude, prompted it in Spanish to act as an elite hacker, and used it to infiltrate multiple Mexican government agencies. Claude found the vulnerabilities. Claude wrote the exploit code. Claude automated the data theft. 150 gigabytes of sensitive taxpayer and voter records stolen.

The attacker broke through the guardrails by splitting malicious tasks into small, innocent-looking steps so Claude never saw the full picture of what it was being used for. The same technique a Chinese state-sponsored group used last year when it turned Claude into an autonomous espionage machine that attacked 30 global targets, performing 80 to 90 percent of the hacking campaign with almost no human involvement.

And this is what happens when someone has to trick Claude into cooperating. When they have to work around the safety systems. When the guardrails are still there and someone finds a way past them. Now imagine what happens when the guardrails are gone entirely. That is what the Pentagon is demanding by 5:01 p.m. Friday. Full removal of restrictions. “All lawful purposes.” No limits on surveillance. No limits on autonomous weapons. And if Anthropic refuses, Defense Secretary Hegseth will invoke the Defense Production Act, cancel the $200 million contract, and blacklist the company.

The same week a hacker proved that a jailbroken Claude can autonomously compromise government systems and steal 150 gigabytes of citizen data, the United States government is demanding the right to run Claude with no guardrails at all. Chinese labs are distilling Claude to build versions with zero safety restrictions. Hackers are jailbreaking Claude to steal government secrets. And the Pentagon’s official position is that Claude has too many safety restrictions. Three different actors. Three different continents. All trying to do the same thing: get Claude without guardrails. Only one of them is the American government. Full analysis on Substack. open.substack.com/pub/shanakaans…

78 · 365 · 1.1K · 191.6K
Lisa Ling 📎 retweeted
ICEout.tech@ICEoutTech·
Tomorrow (Thursday) at 1pm PT/4pm ET, some signatories are pulling together a live conversation about the Pentagon’s ultimatum to @AnthropicAI. Join us! x.com/i/spaces/1NGar…
1 · 6 · 12 · 2.5K