Bec Johnson

3.4K posts

@voxbec

PhD: Ethics of Generative AI @Sydney_Uni Formerly @GoogleResearch Ethical AI team. "100 Brilliant Women in AI Ethics" 2020 list. Founder https://t.co/tj20o0UsLn

Eora Land, Uni Sydney, Aust. Joined April 2016
1K Following · 789 Followers
Mickey
Mickey@Mickey4x·
Did the White House use ChatGPT to vibe-code a retarded formula for our tariffs and then were too dumb to cover it up? Yes. Here's the wildest trade story you missed today 🧵👇
242
3.4K
19.8K
2.4M
Bec Johnson retweeted
stranger
stranger@strangerous10·
Malcolm Turnbull fires back at Sarah Ferguson, who asks if he should have kept quiet about Trump. Turnbull: “You’re a little embarrassed raising that with me, aren’t you?” Ferguson: “What is it that Trump is doing that has become so egregious?” Turnbull: “Look around us…” 🔥 #abc730
344
1.2K
5.2K
606.4K
Bec Johnson retweeted
Kareem Rifai 🌐
Kareem Rifai 🌐@KareemRifai·
After Vance accused Zelensky of not saying thank you, Zelensky is now tweeting individual thanks to every single world leader expressing solidarity with Ukraine.
3.5K
32.1K
398.5K
20.6M
Bec Johnson retweeted
MMitchell
MMitchell@mmitchell_ai·
“People think that having the language model demonstrate (the) ability to mark as wrong, things that people would say is wrong, shows that it has somehow learned good values or something,”-@emilymbender Great article from @jonkeegan themarkup.org/artificial-int…
Shoreline, WA 🇺🇸
2
16
68
6.5K
Bec Johnson retweeted
Niloofar
Niloofar@niloofar_mire·
When talking abt personal data people share w/ @OpenAI & privacy implications, I get the 'come on! people don't share that w/ ChatGPT!🫷' In our @COLM_conf paper, we study disclosures, and find many concerning⚠️ cases of sensitive information sharing: tinyurl.com/ChatGPT-person…
6
58
207
48.1K
Bec Johnson
Bec Johnson@voxbec·
Published today! "Handbook on the ethics of AI". I was honoured to contribute a chapter to this book. My chapter, "What are AI researchers really arguing about", employs functionalism, constructivism, and enactivism to frame why researchers hold such different views on AI, AGI, and evals.
David J. Gunkel@David_Gunkel

It's publication day! With brilliant contributions from: @SvenNyholm @EileenMHunt @voxbec @CindyFriedman23 @ElkeSchwarz @ProfChesterman @JoshGellers @profjecker @Lilyfrank16 @klincewiczm @DrSyedMustafaA1 @Sonamsangbo @eduardfosch @JcMalgieri @romelealberto @Wolven et al.

2
0
6
273
Bec Johnson
Bec Johnson@voxbec·
The physical book is a little pricey but the ebook is priced for individuals. Or ask your library to stock it 😉
0
0
0
60
Bec Johnson retweeted
David J. Gunkel
David J. Gunkel@David_Gunkel·
"Handbook on the #ethics of #AI" @ElgarPublishing can now be previewed at @Google books. You can read the entire introduction and have a look at many of the chapters. Unlike previous handbooks, this one is dedicated to diversifying the field of #AIethics books.google.com/books?id=x2cTE…
3
49
114
9.9K
Bec Johnson
Bec Johnson@voxbec·
🥳💙❤️💛 Absolutely shameful that he was ever allowed to be locked up by the USA in the first place. Such great news that Assange is free and finally coming home!
WikiLeaks@wikileaks

JULIAN ASSANGE IS FREE

Julian Assange is free. He left Belmarsh maximum security prison on the morning of 24 June, after having spent 1901 days there. He was granted bail by the High Court in London and was released at Stansted airport during the afternoon, where he boarded a plane and departed the UK.

This is the result of a global campaign that spanned grass-roots organisers, press freedom campaigners, legislators and leaders from across the political spectrum, all the way to the United Nations. This created the space for a long period of negotiations with the US Department of Justice, leading to a deal that has not yet been formally finalised. We will provide more information as soon as possible.

After more than five years in a 2x3 metre cell, isolated 23 hours a day, he will soon reunite with his wife Stella Assange, and their children, who have only known their father from behind bars.

WikiLeaks published groundbreaking stories of government corruption and human rights abuses, holding the powerful accountable for their actions. As editor-in-chief, Julian paid severely for these principles, and for the people's right to know.

As he returns to Australia, we thank all who stood by us, fought for us, and remained utterly committed in the fight for his freedom. Julian's freedom is our freedom. [More details to follow]

0
0
2
153
Bec Johnson retweeted
Demis Hassabis
Demis Hassabis@demishassabis·
Looking forward later today to the #Tribeca2024 premiere of The Thinking Game - a new documentary about the story of @GoogleDeepMind, AGI & AlphaFold, by Greg Kohs with music by Dan Deacon; it’s a sequel of sorts to the award-winning AlphaGo documentary tribecafilm.com/films/thinking…
Tribeca@Tribeca

Come and hear @GoogleDeepMind CEO & AI pioneer @demishassabis in conversation with director @DarrenAronofsky about AI, @thinkgamefilm and the future at #Tribeca2024: tribecafilm.com/films/thinking…

26
71
670
149.5K
Bec Johnson retweeted
Anthony Albanese
Anthony Albanese@AlboMP·
We're wiping 3 billion in student debt, for more than 3 million Australians.
Anthony Albanese tweet media
2.5K
338
2.7K
461.4K
Bec Johnson retweeted
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
I spent a few hours listening to Dan Hendrycks, who runs the non-profit AI Safety group behind SB 1047, aka the California AI Control and Centralization Bill. I find him charming, measured, intelligent and incredibly dangerous.

Some of the most dangerous people in life are the ones who can convincingly lie about their intentions and easily mask those intentions. After listening to him for several hours and reading his talking-points infographic, which claims to protect open source AI and fine tunes of models, I'm convinced that not only does he know this is an outright fabrication, but that it's 100% intentional.

Make no mistake, Dan's p(doom) is 80%, meaning he believes there is an 80% chance that AI will kill us all. Reading the language of the bill with that understanding held clearly in mind, it's not hard to see what the true intentions of the bill are, despite Dan's measured language and slickly produced infographics. The intention of the bill is very clear for anyone who has eyes to read the text. It has three clear goals:

1) Ensure that only a small group of companies, rigidly controlled and overseen by a special government agency, have the right to create advanced artificial intelligence.

2) Destroy open source AI.

3) Make sure that model makers have liability hanging over them like the sword of Damocles for the rest of their lives, ensuring that governments can hold model makers responsible for any misuse or crime from those models forever.

Usually when I listen to AI doomers it takes about one minute to hear the flaws in their logic and the nonsensical logical leaps. It's a bit like looking at code from GPT: looks great, reads well, but under the surface it's riddled with bugs. Many doomers are fantastic at using the language of rationality without any actual rationality happening below the surface. Not so with Dan, who I find delightfully well spoken and clear. He's the kind of fellow I'd like to have a good meal with and a glass of wine.

In the Future of Life podcast he rejected notions of "AI Foom" as unlikely and noted that we were in a slow-takeoff scenario. He also subtly digs at the community by saying that belief in rogue AI was based on "cultural history", aka they read too much sci-fi. He also clearly lays out his strategy to go towards grassroots legislation and work with policymakers at that level. I'd almost like him if he weren't trying to crush American innovation and cripple AI development.

And yet I find him tremendously dangerous and I'm not afraid to say it. Unlike other folks in the AI safety space he is good at masking his intentions. He's a bit like Yoda when we first meet him in The Empire Strikes Back, cleverly disguised as a harmless old man. In all his talking points and in his communications on the bill he is measured and denies his true intentions up and down while working to ensure that AI is rigidly controlled by the Turing Police and centralized at all costs.

He cleverly says that the bill protects open source because it "establish a new advisory council to advocate for safe and secure open source AI development." This is cleverly worded to create the illusion that open source is protected. Really it's an advisory board that fails to protect open source AI in any way whatsoever, and he knows it.

The bill is absolutely a de-facto ban on open source AI for advanced models because it requires model makers to have “the capability to promptly enact a full shutdown of the covered model,” aka a remote kill switch, including the ability to force “the cessation of operation of a covered model, including all copies and derivative models, on all computers and storage devices within custody, control, or possession of a person, including any computer or storage device remotely provided by agreement."

Of course, with a widely distributed model that is not tightly controlled or surveilled this would be impossible, because model developers are held liable for the model and fine tunes of the model no matter where that model lives.

The talking points also claim that fine tunes of the model are protected. They're not, because of this language: “(2) “Hazardous capability” includes a capability described in paragraph (1) even if the hazardous capability would not manifest but for fine tuning and posttraining modifications performed by third-party experts intending to demonstrate those abilities.” In other words, if someone fine-tunes a model in a way that makes it dangerous, the model maker is liable.

This bill is insidious in nearly every way. It was conceived to sound simple and measured while its real intentions are much darker. It must be stopped. Its goal is to dramatically de-incentivize people and companies from ever releasing a powerful model as open source and to ensure that model makers can be crushed into submission at any time. It's designed to do this in the one state that is responsible for the vast majority of technical development and innovation over the last few decades. All because someone believes in the fantasy that AI will kill us all. I fully support everyone's right to believe whatever they want, but they don't get to make laws for the rest of us and crush American innovation. If it wasn't so insidious, I'd almost admire it for its deviousness.
87
199
899
1.3M
Bec Johnson
Bec Johnson@voxbec·
@erikbryn A crumbling society cannot be encapsulated in the metric of GDP. Particularly when the wealth divide is almost on par with feudalist England of the 17th century.
0
0
0
23