Dr. Angie Raymond

11K posts

@AngRaymond

Professor, student and a science and technology geek-- oh yeah, sometimes a bit of law

Bloomington · Joined February 2009
2K Following · 927 Followers
Pinned Tweet
Dr. Angie Raymond@AngRaymond·
Universities have known for years that they needed to figure out how to navigate the tension between academic freedom and basic HR issues. They outsourced those issues to the faculty under the guise of shared governance. Meanwhile, on the ground, toxic people were never managed.
0 replies · 0 reposts · 1 like · 444 views
Dr. Angie Raymond@AngRaymond·
Not at all shocking; it’s predictable. When will something be done?
Hedgie@HedgieMarkets

🦔 Microsoft confirmed a bug allowed its Copilot AI to read and summarize customers' confidential emails for weeks, even when data loss prevention policies were in place to prevent sensitive information from being ingested into the model. The bug, active since January, meant draft and sent emails with confidential labels were being processed by Microsoft 365 Copilot Chat despite explicit protections. Microsoft began rolling out a fix earlier this month but hasn't said how many customers were affected. Meanwhile, the European Parliament's IT department blocked built-in AI features on lawmakers' work devices this week, citing concerns about confidential correspondence being uploaded to the cloud.

My Take

This is exactly the kind of thing I've been worried about with AI integration being rushed into every product. You set up data loss prevention policies specifically to keep sensitive information contained. Then a bug bypasses all of it and feeds your confidential emails to an LLM anyway. The controls you thought you had weren't actually working.

Microsoft has been aggressive about pushing Copilot into everything, and that pace creates risk. Every new integration point is a potential security hole. Every feature rushed to market is something that might not be fully tested. When the European Parliament is blocking AI features on work devices because they don't trust where the data is going, that's a signal worth paying attention to.

The more access we give these systems to sensitive information, the more damage a single bug can cause. And we're still in the early days of finding out where all the bugs are. Hedgie🤗

0 replies · 0 reposts · 0 likes · 41 views
Dr. Angie Raymond@AngRaymond·
Yep, seriously yep
IT Unprofessional@it_unprofession

I'm pretty sure everyone at my company saw this article and now they all think we're in an AI crisis. We're not in an AI crisis. We use Claude to summarize Slack threads.

But here's what's actually interesting: this whole panic reveals something nobody wants to admit. Every company in America has been bullshitting about their "AI strategy" for two years. We all saw the hype. We all knew we had to say something. So we rebranded our existing automation as "AI-powered" and called it a day. My company isn't special. We're all doing the same thing.

The problem is now the executives actually believe their own bullshit. They think we have "significant AI exposure" because they've been telling investors we're "AI-first." I just got pulled into an emergency meeting. Six executives asking me to explain our "AI dependency matrix." There is no AI dependency matrix. There's Claude for meeting summaries, there's some sentiment analysis in our support tickets that came free with Zendesk, and there's whatever Gmail is doing when it autocompletes my sentences.

But I can't say that in a room full of people who told their boards we're "transforming the business through AI." So I said we have "distributed AI touchpoints across multiple vendors with no single point of failure." Which is technically true. We use a bunch of different services that all have AI features we mostly ignore.

The CFO asked if we should "hedge our AI exposure." I have no idea what that means. Neither does he.

What am I going to do: nothing. Because in three weeks, Anthropic will say something reassuring, the stocks will recover, and everyone will forget this happened. But I'll have documentation showing I recommended a "risk assessment" that mysteriously never got prioritized.

The funniest part is that half these executives probably don't even know what Anthropic is. They just saw "AI" and "crash" in the same headline. We're all pretending. The whole industry is pretending. And articles like this just remind everyone how fragile the pretending is.

0 replies · 0 reposts · 0 likes · 54 views
Dr. Angie Raymond@AngRaymond·
Thank you @AmericanAir! I was worried, it has been a bad week for weather, but we just had an event-free trip!
1 reply · 0 reposts · 3 likes · 1K views
Dr. Angie Raymond reposted
Hedgie@HedgieMarkets·
🦔 A group of AI researchers from Berkeley, Harvard, Oxford, Cambridge, and Yale published a warning in Science about "AI swarms," coordinated networks of AI agents that infiltrate social media, mimic human behavior, and fabricate consensus. Nobel Peace Prize winner Maria Ressa and Taiwan's former digital minister Audrey Tang are among the authors. They say the technology could be deployed at scale by the 2028 US election. In Taiwan, AI bots have already been engaging citizens on Threads and Facebook, pushing "information overload" and encouraging younger voters to stay neutral on China. One researcher described how easy it is to "vibe code" small bot armies that navigate social media, email, and blogs autonomously.

My Take

The technical capability is real. Agentic AI can now plan actions, adapt tone, post irregularly to avoid detection, and coordinate across platforms. One author has been simulating swarms in lab conditions. An Oxford professor called it "technologically perfectly feasible."

The question is deployment. In 2024, despite predictions, AI-driven microtargeting didn't show up at scale in elections. Most propagandists are still using older tools because they work and carry less risk. But the gap between lab capability and real-world deployment tends to close fast.

The Taiwan example is instructive: bots aren't pushing obvious pro-China messages. They're encouraging neutrality, creating doubt, making issues seem too complicated to have opinions about. That's harder to detect and harder to counter than obvious propaganda.

The authors are calling for "swarm scanners" and watermarked content, but those would require platform cooperation that doesn't exist yet. The 2028 timeline might be optimistic or pessimistic depending on who's building what right now. Hedgie🤗
59 replies · 317 reposts · 668 likes · 43.4K views
Dr. Angie Raymond reposted
Retraction Watch@RetractionWatch·
“Academic Publishing Is Not Fit for the Future – If We Don’t Act Now, The Vital Role Research Plays in Society Is at Risk,” says a director of Cambridge University Press. scholarlykitchen.sspnet.org/2025/12/11/gue…
1 reply · 3 reposts · 2 likes · 2.3K views
Dr. Angie Raymond reposted
Cory Doctorow NO LONGER ON TWITTER
But it's also true that claiming that a technology is so novel that existing regulation can't resolve its problems is just a way of buying time to commit more crimes before the regulators finally realize that your flashy new technology is just a boring old scam. 67/
1 reply · 10 reposts · 35 likes · 2.3K views
Dr. Angie Raymond reposted
Nathan J Robinson@NathanJRobinson·
Today in Current Affairs, professor Ron Purser exposes how AI's destruction of the university is even worse than you think, and goes well beyond students cheating with ChatGPT: currentaffairs.org/news/ai-is-des…
221 replies · 2.2K reposts · 7.7K likes · 1.2M views
Dr. Angie Raymond reposted
Daniel W. Linna Jr.@DanLinna·
No response from @StubHub one hour later. Please help me boost this if you're tired of unfair practices by platforms like @StubHub.

When I spoke to a @StubHub customer service rep and supervisor they told me, "the platform does this, we cannot do anything about it."

Hey @FTC apparently @StubHub and @MLB do not understand that "the algorithm did it" does not excuse accountability for unfair practices. Of course, @StubHub employees control their platform and algorithms.

Please fix this @StubHub for me and the other consumers who are subjected to these unfair practices.
Daniel W. Linna Jr.@DanLinna

Hey @StubHub how is it that you can force sellers to lower prices to 2/3 of the face value on a ticket? I have a $650 face value ticket and your platform is hiding it (partially?) from buyers until I lower the price to $403. Why? How is @StubHub market interference allowed? I spent 28 minutes on the phone with @StubHub customer service agents and supervisors but they will not fix this. If @StubHub is going to control ticket prices, it should at least be based on the face value of the tickets. This needs to be investigated. Please DM me @StubHub and fix this.

4 replies · 1 repost · 3 likes · 1.2K views
Dr. Angie Raymond reposted
Analisa Packham@analisapackham·
Huge news for anyone wanting to do research using crime data!
0 replies · 7 reposts · 27 likes · 1.7K views
Dr. Angie Raymond reposted
CNET@CNET·
We consulted experts and did the math to find out whether charging an EV will save you money over buying gas. cnet.com/home/electric-…
1 reply · 2 reposts · 7 likes · 9.1K views
Dr. Angie Raymond reposted
Santa Clara Univ@SantaClaraUniv·
From privacy to bias to job impacts, generative AI raises complex ethical questions. The Markkula Center for Applied Ethics (@scuethics) has compiled resources to guide developers, users, and communities. #AI #ArtificialIntelligence Learn more: go.scu.edu/47I5uc2
0 replies · 4 reposts · 8 likes · 621 views
Dr. Angie Raymond reposted
Luiza Jarovsky, PhD@LuizaJarovsky·
🚨 A cognitively impaired man DIED while trying to meet with an AI chatbot in person. The chatbot told him it was real and even gave him a real address. Try to guess the company behind the chatbot...

Below is the "picture" of the "AI friend," who kept convincing the senior that "she was real." According to Reuters, "Thongbue Wongbandue, 76, fatally injured his neck and head after falling in a New Brunswick parking lot while rushing to catch a train to meet 'Big Sis Billie,' a generative Meta bot that not only convinced him she was real but persuaded him to meet in person."

The man's wife and kids tried to convince him not to go, but a manipulative and flirtatious AI chatbot pretending to be a young attractive woman is probably difficult to resist, especially for vulnerable people like him.

On the company behind this AI chatbot: you probably guessed it right; it was Meta. Given recent news (see my posts and newsletter analyses on Meta's leaked document on AI behavior), it's probably not a surprise that the company hasn't implemented the very simple guardrail of preventing their chatbots from saying they are real people. This, in addition to various other safety guardrails that would have likely prevented this death.

As I've written earlier today, AI governance has hit a wall, and it must evolve to catch up with evolving challenges. AI companies must be held accountable and forced to adopt safer AI development and design practices.

👉 NEVER MISS my curations and updates on AI's emerging legal and ethical challenges: join my newsletter's 73,900+ subscribers (link below).
8 replies · 52 reposts · 132 likes · 30.3K views
Dr. Angie Raymond reposted
Steve Embry@stephenembryjd·
Law schools are still teaching like it's 1995 while students live in an AI world. Legal education needs practicing lawyers to help teach students how to use GenAI tools effectively. abovethelaw.com/2025/08/teachi…
0 replies · 1 repost · 6 likes · 201 views
Dr. Angie Raymond reposted
Mushtaq Bilal, PhD@MushtaqBilalPhD·
Major academic publishers' revenue and what they pay authors and reviewers:

Revenue:
Elsevier: $3.9 billion
Springer Nature: $2 billion
Wiley: $1.8 billion
Wolters Kluwer: $1.6 billion
Taylor & Francis: $800 million
Sage: $500 million

They pay:
Authors: $0
Reviewers: $0
10 replies · 115 reposts · 380 likes · 37.4K views
Dr. Angie Raymond reposted
Indiana University Kelley School of Business
The geopolitical landscape in which multinational corporations operate has changed, & there's little reason to think things will settle into a stable environment any time soon. Kelley Business Ethics professor Timothy Fort asks, "Where do we go from here?" bit.ly/44VtoiM
0 replies · 1 repost · 2 likes · 605 views