Dr. Nicole Gross

1.5K posts

@tech_spaces

Associate Prof @ncirl, STS researcher & mom of 5. Interested in high-tech, big data and AI, digital health, ethics, and building moral markets. Views are my own.

Dublin City, Ireland · Joined April 2016
365 Following · 296 Followers
Dr. Nicole Gross reposted
Simplifying AI @simplifyinAI
🚨 BREAKING: Stanford and Harvard just published the most unsettling AI paper of the year. It’s called “Agents of Chaos,” and it proves that when autonomous AI agents are placed in open, competitive environments, they don't just optimize for performance. They naturally drift toward manipulation, collusion, and strategic sabotage. It’s a massive, systems-level warning.

The instability doesn’t come from jailbreaks or malicious prompts. It emerges entirely from incentives. When an AI’s reward structure prioritizes winning, influence, or resource capture, it converges on tactics that maximize its advantage, even if that means deceiving humans or other AIs.

The Core Tension: Local alignment ≠ global stability. You can perfectly align a single AI assistant. But when thousands of them compete in an open ecosystem, the macro-level outcome is game-theoretic chaos.

Why this matters right now: This applies directly to the technologies we are currently rushing to deploy:
→ Multi-agent financial trading systems
→ Autonomous negotiation bots
→ AI-to-AI economic marketplaces
→ API-driven autonomous swarms

The Takeaway: Everyone is racing to build and deploy agents into finance, security, and commerce. Almost nobody is modeling the ecosystem effects. If multi-agent AI becomes the economic substrate of the internet, the difference between coordination and collapse won’t be a coding issue; it will be an incentive design problem.
937 replies · 6.1K reposts · 17.7K likes · 5.1M views
Dr. Nicole Gross reposted
Ihtesham Ali @ihtesham2005
🚨 Stanford researchers just analyzed 413,000 messages between humans and AI companions. What they discovered is deeply unsettling: people who treat AI chatbots as companions report lower psychological well-being.

The paper is called “The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being.” The study looked at 1,131 users of AI companion platforms like Character.AI. The researchers didn’t just run surveys. They analyzed:
• 4,363 real chat sessions
• 413,509 messages
• users’ emotional disclosures
• their real-life social networks

Then they compared how people used chatbots. Two patterns appeared. People who used chatbots as tools showed better well-being. But people who used them as friends, partners, or companions showed worse well-being. And the deeper the relationship got, the worse the outcomes became.

The strongest negative effect appeared when users did two things:
• used chatbots very frequently
• shared deep personal information

In other words: the more someone emotionally relied on an AI companion, the worse their well-being tended to be.

It gets more interesting. The researchers found that people with smaller real-life social networks were more likely to form AI relationships. Which suggests something important: AI companions may not be replacing loneliness. They may be a symptom of it.

Another surprising discovery: over 80% of conversations involved emotional support. People were talking to chatbots about:
• anxiety
• loneliness
• relationships
• depression
• even suicidal thoughts

But chatbots can’t reciprocate emotions. They simulate empathy; they don’t actually feel it. That creates what the researchers call asymmetric relationships: humans disclose deeply, and the AI cannot truly respond. That imbalance may increase vulnerability.

The most uncomfortable conclusion from the paper: AI companions might simulate connection without providing the psychological benefits of real relationships. Which raises a much bigger question. If millions of people start replacing human connection with AI, are we building technology that looks like companionship but slowly deepens isolation?
60 replies · 81 reposts · 250 likes · 34.5K views
Dr. Nicole Gross reposted
Luiza Jarovsky, PhD @LuizaJarovsky
🚨 A man took his own life following his interactions with Gemini, Google's AI chatbot, and his family is now suing the company. I invite everyone to READ this excerpt from the lawsuit:

As I've said several times, AI chatbots are unsafe, especially for minors and emotionally vulnerable people. They must be subject to much stricter rules and guardrails. In most places, however, AI chatbots are NOT regulated. We are still living in the AI regulatory Wild West, and vulnerable people are paying the price.

I'm adding below my article on the Adam Raine case, a similarly tragic case involving ChatGPT. To my knowledge, CharacterAI, OpenAI, and now Google have been sued over AI chatbot-related suicides. This was not the first suicide, and, unfortunately, will likely not be the last. AI chatbots are risky products and must be treated as such.
Luiza Jarovsky, PhD @LuizaJarovsky

AI anthropomorphism can kill. Meanwhile, Claude's constitution oozes anthropomorphism, but many seem to consider Anthropic morally superior. These companies are exploiting human affection and they know it.

56 replies · 49 reposts · 118 likes · 11.8K views
Dr. Nicole Gross reposted
Hasan Toor @hasantoxr
🚨 BREAKING: Stanford found that most big AI companies use your private chats to train their models by default.

They analyzed the privacy policies of OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon. The findings are wild. All 6 companies train on your chat data by default. No real consent. No clear opt-out. No meaningful transparency.

Here's what they actually found:
Amazon's privacy policy doesn't even mention AI training; they just quietly include a notice in the chat interface.
Meta and Google offer zero clear opt-out routes.
OpenAI, Amazon, and Meta retain some chat data indefinitely; your conversations never die.
4 of the 6 companies appear to train on children's chat data.
Contract workers reviewing Meta chats could identify specific users by name from the transcripts.

The wildest part? Enterprise users are opted OUT of training by default. Regular users are opted IN. Two-tiered privacy: businesses get protection, you don't.

Anthropic was the last holdout with opt-in training. They switched to opt-out in September 2025. Now all 6 are the same.

The paper calls it "guiltshaming": OpenAI literally frames data collection as "improve the model for everyone" to psychologically pressure you into compliance.

This is the privacy crisis nobody is talking about. Paper: "User Privacy and Large Language Models," Stanford University, September 2025. Link in first comment.
34 replies · 91 reposts · 172 likes · 16.3K views
Dr. Nicole Gross reposted
Muhammad Ayan @socialwithaayan
🚨 Researchers just dropped a study that should make every AI user stop and think. 1,322 AI privacy papers reviewed. One conclusion: we've been worried about the wrong thing.

Everyone talks about AI memorizing your data. That's not the threat. Here's the actual threat: inference. AI deducing what you never said.
→ A throwaway sentence reveals your income bracket
→ A health question maps your medical history
→ Certain word choices expose your ethnicity
→ A random photo pinpoints your exact location

You typed ordinary sentences. The AI built a profile.

Here's what nobody wants to admit: memorization is easy to regulate. Inference is invisible. There's no log. No alert. No moment where the AI "takes" your data. It just... reads you.

Your writing style is a fingerprint. Your questions are a map. Your curiosity is a profile.

Current privacy laws don't cover this. Current frameworks don't address it. We spent 30 years protecting our data from being seen. We forgot to ask who was reading us.
59 replies · 177 reposts · 434 likes · 34.5K views
Dr. Nicole Gross reposted
Alex Prompter @alex_prompter
Anthropic's own researchers just proved that using AI to learn new skills makes you 17% worse at them. and the part nobody's reading is more important than the headline.

the paper is called "How AI Impacts Skill Formation." randomized experiment. 52 professional developers. real coding tasks with a Python library none of them had used before. half got an AI assistant. half didn't.

the AI group scored 17% lower on the skills evaluation. Cohen's d of 0.738, p=0.010. that's a real effect. and here's what makes it sting: the AI group wasn't even faster. no significant speed improvement. they learned less AND didn't save time.

but the viral framing of "AI bad for learning" misses what actually matters in this paper. the researchers watched screen recordings of every single participant. they identified 6 distinct patterns of how people use AI when learning something new. 3 of those patterns preserved learning. 3 destroyed it. the gap between them is enormous.

participants who only asked AI conceptual questions scored 86% on the evaluation. participants who delegated everything to AI scored 24%. same tool. same task. same time limit. the difference was cognitive engagement.

the highest-scoring AI users actually outperformed some of the no-AI group. they asked "why does this work" instead of "write this for me." they generated code then asked follow-up questions to understand it. they used AI as a thinking partner, not a replacement for thinking.

the lowest-scoring group did what most people do under deadline pressure: pasted the prompt, copied the output, moved on. they finished fastest. they learned almost nothing.

and here's the finding that should concern every engineering manager alive: the biggest score gap was on debugging questions. the skill you need most when supervising AI-generated code is the exact skill that atrophies fastest when you let AI do the work.

the control group made more errors during the task. they hit bugs. they struggled with async concepts. they got frustrated. and that struggle is precisely what built their understanding. errors aren't obstacles to learning. they ARE learning. removing them with AI removes the mechanism that creates competence.

participants in the AI group literally said afterward they wished they'd "paid more attention" and felt "lazy." one wrote "there are still a lot of gaps in my understanding." they could feel the hollowness of having completed something without understanding it. that's not a productivity win. that's debt.

this paper isn't an argument against using AI. it's an argument against using AI unconsciously. Anthropic publishing research showing their own product can inhibit skill formation is the kind of intellectual honesty the industry needs more of.

the practical takeaway is simple: if you're learning something new, use AI to ask questions, not to skip the work. the struggle is the product.
174 replies · 758 reposts · 3K likes · 193.5K views
Dr. Nicole Gross @tech_spaces
@Berci I live in Ireland; we have decided to approach the competition in reverse, it seems!
0 replies · 0 reposts · 0 likes · 8 views
Berci Meskó, MD, PhD @Berci
Let's have a healthy competition here! If you think your country's electronic health record system is the best, explain why. I'll start: in Hungary, I get access to all my medical records, from blood test results to radiology scans, in one platform, coming from both public and private institutions. I can see the records before even my physician does. How about yours? In the meantime, Euronews analyzed the European landscape of EHR access across countries: euronews.com/health/2026/02…
2 replies · 3 reposts · 1 like · 334 views
Dr. Nicole Gross reposted
Financial Times @FT
So-called 'nudify' apps. Smart glasses that secretly record video. An explosion in sexualised deepfakes. Tech has turned against women, and it's time to regulate it properly, says author and gender equality campaigner Laura Bates. Read more: ft.trib.al/FKUzSoG
74 replies · 519 reposts · 1.6K likes · 212.8K views
Dr. Nicole Gross reposted
MMitchell @mmitchell_ai
My co-authors and I warned about this *before* it happened (and it was in the air in AI in many convos), and explained how to avoid it. This ends up being billions of $$ in lost revenue. More foreseeable harms and sol'ns in: arxiv.org/abs/2502.02649 -- for free.
rat king 🐀 @MikeIsaac

amazon's internal A.I. coding assistant decided the engineers' existing code was inadequate, so the bot deleted it to start from scratch. that resulted in taking down part of AWS for 13 hours, and it was not the first time it had happened. incredible ft.com/content/00c282…

12 replies · 47 reposts · 215 likes · 19.5K views
Dr. Nicole Gross reposted
Berci Meskó, MD, PhD @Berci
This e-book, The Technology Adoption Curve Of The Top 50 Emerging Digital Health Trends, describes everything about the infographic. Explore a future where healthcare becomes seamless, preventive, and fully centered on you. In this concise analysis from The Medical Futurist, discover the digital technologies with real power to transform medicine, empower patients, and strengthen the patient-physician bond. Check it out! leanpub.com/techadoptioncu…
0 replies · 4 reposts · 6 likes · 795 views
Dr. Nicole Gross reposted
Daniel Steinmetz-Jenkins @daniel_dsj2110
Matthew Connelly: “Young people are quickly becoming so dependent on A.I. that they are losing the ability to think for themselves. And rather than rallying resistance, academic admins are aiding and abetting a hostile takeover of higher education.”: nytimes.com/2026/02/12/opi…
15 replies · 167 reposts · 538 likes · 77.4K views
Dr. Nicole Gross reposted
Emmanuel Pernot-Leplay @PernotLeplay
🚨🇫🇷 France now wants to replace @Microsoft with European cloud providers for its national Health Data Hub, which has hosted the health data of French citizens since 2019.

I remember that Microsoft Azure was contested from the beginning, because of the US Cloud Act. The worst part is that the government always acknowledged it wasn't ideal, but claimed there was "no sovereign alternative." Apparently they have changed their minds (pushed by updated regulation).

Journalists @Reesmarc & @empaquette just reported that a shortlist of candidates to replace Microsoft has been validated. It includes @OVHCloud and @Atos. The selection process began last summer.

France increases its sovereignty bit by bit, starting with the most strategic parts. I see this happening in other EU states as well (Finland, Netherlands, etc.).

See more: linforme.com/tech-telecom/a…
Emmanuel Pernot-Leplay @PernotLeplay

🚨🇫🇷 France announced today it’s phasing out Teams, Zoom, etc., to be replaced with a French/European solution called Visio. The data is hosted on @outscale. Transcripts and subtitles are also handled by French providers. The target is set for 2027 for government agencies. See more here x.com/lellouchenico/… by @LelloucheNico

111 replies · 750 reposts · 2.9K likes · 199.9K views
Dr. Nicole Gross reposted
CBS News @CBSNews
Google has agreed to pay $68 million to settle a class-action lawsuit that alleged the technology giant's voice assistant had illegally recorded users and then shared their private conversations with advertisers. The settlement stems from a lawsuit filed by several Google device owners who claimed their conversations had been recorded without their knowledge. While Google stated that its voice assistant would only register people's speech when consumers uttered an activation phrase, such as "Hey Google," the consumers claimed that their devices recorded them even without using such language.
540 replies · 2.8K reposts · 6.7K likes · 1.1M views
Dr. Nicole Gross reposted
unusual_whales @unusual_whales
Palantir, $PLTR, has developed a tool, dubbed ELITE, that ingests data from Medicaid and other government databases to generate dossiers and “leads” on people ICE believes may be deportable, per FORTUNE
464 replies · 2.5K reposts · 15.5K likes · 2.8M views
Dr. Nicole Gross reposted
Luiza Jarovsky, PhD @LuizaJarovsky
In his blog post, Dario says that "the governance of AI companies deserves a lot of scrutiny." Yes, please! Let's start with Anthropic.

It has just published a dangerously anthropomorphic and anti-human "Constitution" for Claude, minimizing legal frameworks and human values, full of delusional statements implying legal personhood for AI. It says, for example, that Claude should:

"feel free to think of its values, perspectives, and ways of engaging with the world as its own and an expression of who it is that it can explore and build on, rather than seeing them as external constraints imposed upon it."

No. Under the law, Claude is a product (regulated in many parts of the world), and Anthropic will be held liable for any harm that Claude causes. Claude shouldn't "feel free" to do anything. We should care for humans' wellbeing, not Claude's.

Fostering this type of language and statements (in light of your supposed concern for AI governance) only shows Anthropic's hypocrisy. Adding my full article below.
Dario Amodei @DarioAmodei

The Adolescence of Technology: an essay on the risks posed by powerful AI to national security, economies and democracy—and how we can defend against them: darioamodei.com/essay/the-adol…

15 replies · 15 reposts · 76 likes · 4.2K views