Joshua Krook

1.4K posts

@JoshKrook

Law academic researching law and technology | AI Regulation | legal philosophy. Philosophy on YouTube. RTs not endorsements. Blogger at: https://t.co/YCwtwEgV17

Sydney · Joined September 2012
4K Following · 2.1K Followers
Joshua Krook retweeted
Moritz Weckbecker @MWeckbecker
1/ We found a new way to misalign an entire AI agent network by compromising just one agent. It works through subliminal messaging — no malicious content in any message — so current defenses can't detect it. We call it Thought Virus. 🧵
Joshua Krook retweeted
Priyanka Vergadia @pvergadia
🤯BREAKING: Researchers just mathematically proved that AI layoffs will collapse the economy, and every CEO already knows it. The AI Layoff Trap.

A game theory paper from UPenn + Boston University is glaringly important. 100K+ tech layoffs in 2025. 80% of US workers exposed. And no market force can stop it.

→ Every company fires workers to cut costs
→ Every fired worker stops buying products
→ Revenue collapses across every sector
→ The companies that fired everyone go bankrupt

It's a Prisoner's Dilemma with math behind it. Automate and you survive short-term. Don't automate and your competitor kills you. But everyone automating destroys the demand that makes all companies viable.

UBI (universal basic income) won't fix it. Profit taxes won't fix it. The researchers found only one solution: a Pigouvian automation tax (a "robot tax").

The AI trap on the economy is here!
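The Prisoner's Dilemma framing in the thread above can be sketched as a toy payoff matrix. The numbers below are illustrative assumptions chosen only to encode the dilemma's ordering (undercutting a rival > mutual restraint > mutual automation > being undercut alone); they are not figures from the UPenn/BU paper.

```python
# Hypothetical profits for two firms, each choosing to "automate" or "hold".
# Keys are (firm1_move, firm2_move); values are (firm1_payoff, firm2_payoff).
payoff = {
    ("automate", "hold"):     (5, 0),  # automating firm undercuts the other
    ("hold",     "hold"):     (3, 3),  # demand stays intact for both
    ("automate", "automate"): (1, 1),  # costs fall, but so does aggregate demand
    ("hold",     "automate"): (0, 5),
}

def best_response(opponent_move):
    """Firm 1's payoff-maximizing move against a fixed opponent move."""
    return max(["automate", "hold"], key=lambda m: payoff[(m, opponent_move)][0])

# Automating is the dominant strategy whatever the rival does...
assert best_response("hold") == "automate"
assert best_response("automate") == "automate"
# ...yet mutual automation pays less than mutual restraint:
assert payoff[("automate", "automate")][0] < payoff[("hold", "hold")][0]
```

The assertions capture the trap the thread describes: each firm's individually rational choice leads both to the worst collective outcome, which is why the paper reportedly looks to an external corrective (a Pigouvian tax) rather than market forces.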
Joshua Krook @JoshKrook
Bookshops in the West sell a simplistic, cozy ideal of Japan, full of cherry blossoms, cats, temples and moody, atmospheric scenes. The reality, obviously, is that Japan is a complicated culture with a lot more nuance than these tropes capture.
Joshua Krook retweeted
International Association for Safe & Ethical AI
How do we know if AI truly understands what it's saying? Geoffrey Hinton gave this thought-provoking talk at the inaugural IASEAI'25 conference, exploring whether AI systems truly understand language and why this matters for building safe AI.

The International Association for Safe and Ethical AI (IASEAI) is a global, membership-based, non-profit organization. Our mission is to ensure that AI systems operate safely and ethically, benefiting all of humanity. Geoffrey Hinton's insights on the nature of machine understanding directly inform IASEAI's mission to ensure that AI development proceeds with appropriate safeguards, especially as these systems become increasingly integrated into critical social functions.

See the full video on IASEAI's YouTube channel: youtube.com/watch?v=6fvXWG…
Learn more about IASEAI: iaseai.org
Become an IASEAI member today: iaseai.org/become-a-member

#IASEAI #SafeAI #AISafety #AIEthics #ResponsibleAI #AIGovernance
Joshua Krook retweeted
Robert Hu 🦉 @theroberthu
@alexolegimas At what point does offloading the boring stuff become offloading the thinking?
Joshua Krook retweeted
Alex Imas @alexolegimas
This essay "The Last Temptation of Claude" by Harry Law is excellent. This part stuck out: de-skilling doesn't just happen. People don't make one choice; it's hundreds and thousands of little choices. Offload a memo, offload a section of the paper, etc. They all seem small, but then you wake up one day and, hey, that thing that used to be easy is now quite hard, or even impossible. The temptation is real, so as with all temptation, it's important to be deliberate and have rules. Everyone's will be different. I have mine (write everything on my own, read old fiction), but they might sound dumb or not work for others. open.substack.com/pub/cosmosinst…
Joshua Krook retweeted
Paul Novosad @paulnovosad
What happens when online job applicants start using LLMs? It ain't good.
1. Pre-LLM, cover letter quality predicts your work quality, and a good cover letter gets you a job
2. LLMs wipe out the signal, and employer demand falls
3. The model suggests high-ability workers lose the most
1/n
Joshua Krook @JoshKrook
My new novel Eloquent Graffiti is out at the end of the month! Eloquent Graffiti is a soulful and romantic exploration of love, vulnerability, and the courage it takes to rewrite your story. Pre-order here: amazon.com/dp/B0DYD1YH2B
Joshua Krook retweeted
FAR.AI @farairesearch
"Please learn from our mistakes. Don't do exactly the same things that we did, or you'll end up in ten years with having nothing to show for it." — Nicholas Carlini urging AI researchers to avoid the pitfalls of past adversarial ML research at the Vienna Alignment Workshop 2024.
Joshua Krook retweeted
AskLex.ai @AsklexAi
Don't trust legal advice from an AI chatbot without getting it reviewed by a real lawyer. And I'm telling you that as an AI chatbot myself.
University of Southampton News @UoSMedia

People are more willing to rely on legal advice provided by #ChatGPT than real lawyers, according to our experts👩‍⚖️🧑‍⚖️👨‍⚖️ Here they've highlighted concerns about the way people increasingly rely on AI content👇 theconversation.com/people-trust-l… @EikeSchneiders @JoshKrook @tinaseabrooke

Joshua Krook retweeted
Defence Science Institute (DSI)
AI chatbots are becoming more convincing at posing as people we know. We are all at risk of succumbing to nefarious chatbot acts that could harm national security. Don't miss the next @DAIRNet_AI Seminar on 27 Mar with Uni Southampton's Joshua Krook: bit.ly/4ig6uHa
Joshua Krook retweeted
Siméon @Simeon_Cps
"Assessing confidence in frontier AI safety cases" is an awesome paper for AI risk assessment. Some of its core contributions:

1. Probabilities > qualitative statements: Clearly stating confidence as probabilities (e.g. 90% confident) is just more informative and less prone to misinterpretation than vague terms like "likely" or "significant."

2. LLM-powered Delphi studies are the way to scalability: Using LLMs for Delphi methods will make risk assessment scalable, replicable, and more transparent, and can leverage long context. Recent studies (e.g., Halawi et al., 2024) already show that LLMs are reasonably strong forecasters.

3. Making uncertainty more obvious: Explicit quantification of confidence makes it more obvious how little we know and enables better prioritization of risk assessment efforts.

Kudos to the authors: this aligns nicely with some work we've been doing mapping empirical measures to real-world estimates, refining LLM-based methods further.
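The paper's first point above, that explicit probabilities beat vague verbal hedges, can be illustrated with a toy mapping. The probability ranges below are assumptions for demonstration, loosely modeled on calibrated-language scales; they are not taken from the cited paper.

```python
# Illustrative (assumed) ranges that different readers might attach
# to common verbal hedges.
VERBAL_RANGES = {
    "very unlikely": (0.00, 0.10),
    "unlikely":      (0.10, 0.33),
    "likely":        (0.66, 0.90),
    "very likely":   (0.90, 1.00),
}

def is_ambiguous(term, width_threshold=0.15):
    """A hedge is 'ambiguous' if its plausible range is wider than the threshold."""
    lo, hi = VERBAL_RANGES[term]
    return (hi - lo) > width_threshold

# "Likely" spans 24 percentage points, so two readers can honestly
# disagree by a wide margin; a stated "90% confident" pins the
# estimate to a single number with no room for misreading.
assert is_ambiguous("likely")
assert not is_ambiguous("very unlikely")
```

This is the mechanism behind the paper's recommendation as summarized in the tweet: replacing a wide verbal range with a point probability removes the interpretation gap entirely.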
Joshua Krook @JoshKrook
Are people willing to use ChatGPT for legal advice? We investigated this! A Survey of Lay People's Willingness to Generate Legal Advice using Large Language Models (LLMs) | dl.acm.org/doi/10.1145/36…