Philip Brey @PhilBrey
217 posts
Professor in Philosophy and Ethics of Technology. Programme Leader of ESDiT. AI, XR, ethics of technology.

Joined March 2009
958 Following · 581 Followers
Philip Brey @PhilBrey ·
My article "A societal readiness tool for responsible product innovation", co-authored with Bennet Francis, Tynke Schepers & Andrea Porcari, has just been published in Technology in Society: doi.org/10.1016/j.tech…
0 replies · 0 reposts · 3 likes · 134 views
Philip Brey retweeted
Stanford HAI @StanfordHAI ·
📣 @StanfordCRFM just released the 2025 Foundation Model Transparency Index, which evaluates transparency practices across major foundation model developers. This year's headline? Transparency regresses across industry, reversing last year's gains. crfm.stanford.edu/fmti/December-…
3 replies · 29 reposts · 77 likes · 11.8K views
Philip Brey retweeted
Luiza Jarovsky, PhD @LuizaJarovsky ·
😱 SHOCKING: Grok's 'shared' conversations can be found on Google (my screenshot below). The worst part? This is happening after a similar ChatGPT privacy fail I wrote about a few weeks ago.

Three weeks ago, I wrote about a problematic design feature in ChatGPT: people who created a link to share their conversations with friends were unknowingly also making those conversations indexable by search engines. While creating the shared link, a checkbox was available, and if the person clicked it, the chat would not only be public but also indexed by Google. Reading the indexed conversations, it was clear that many people did not understand that clicking the checkbox would make the conversation public to anyone in the world. As I wrote in my newsletter at the time, the possibility of making private conversations searchable should never have existed, given the privacy risks involved (link to my article below).

Now, back to Grok. AFTER the backlash, and AFTER OpenAI's CISO announced that the company would remove the possibility of making conversations indexable, xAI has an even WORSE feature. I went to Grok to test it, and when I clicked "share," it didn't even give me the option to opt in to making the conversation available on search engines. My assumption is that by pressing "share" (to share a conversation with friends and family, for example), not only is a link created, but the conversation is also automatically made indexable by search engines. So people's private struggles, intimate thoughts, and other sensitive information are being made publicly available, potentially viewable by anyone in the world.

Think about it: xAI's privacy, security, and design teams saw what happened to OpenAI. They could have quietly changed Grok's share feature to prevent indexing. But they chose to keep it. This is another illustration of the current state of AI governance: oversight and enforcement have been so weak that AI companies don't even bother. They wait for a new scandal to happen.

👉 NEVER MISS my updates and insights on AI's legal and ethical challenges: join my newsletter's 75,000+ subscribers (link below).
25 replies · 110 reposts · 237 likes · 20.6K views
Philip Brey retweeted
Luiza Jarovsky, PhD @LuizaJarovsky ·
🚨 BREAKING: An extremely important lawsuit at the intersection of PRIVACY and AI has been filed against Otter over its AI meeting assistant's lack of CONSENT from meeting participants. If you use meeting assistants, read this.

Otter, the AI company being sued, offers an AI-powered service that, like many in this niche, can transcribe and record private conversations between its users and meeting participants (who are often NOT users and do not know they are being recorded). Various privacy laws in the U.S. and beyond require that consent from meeting participants be obtained in such cases. The lawsuit specifically cites:

- The Electronic Communications Privacy Act;
- The Computer Fraud and Abuse Act;
- The California Invasion of Privacy Act;
- California's Comprehensive Computer Data Access and Fraud Act;
- The California common-law torts of intrusion upon seclusion and conversion;
- The California Unfair Competition Law.

As more and more people use AI agents, AI meeting assistants, and all sorts of AI-powered tools to "improve productivity," privacy is often forgotten (yet another manifestation of AI exceptionalism). In this case, according to the lawsuit, the company has explicitly stated that it trains its AI models on recordings and transcriptions made with its meeting assistant.

The main allegation is that Otter obtains consent only from its account holders, not from other meeting participants: it asks users to make sure other participants consent, shifting the privacy responsibility onto them. As many of you know, this practice is common, and various AI companies shift the privacy responsibility to users, who are often unaware of what national and state laws actually require.

So if you use meeting assistants, you should know that it is UNETHICAL, and in many places also ILLEGAL, to record or transcribe meeting participants without their consent. Additionally, keep in mind that AI companies might use this data (which often contains personal information) to train AI, and there could be leaks and other privacy risks involved.

👉 Link to the lawsuit below.
👉 Never miss my curations and analyses on AI's legal and ethical challenges: join my newsletter's 74,000+ subscribers.
👉 To learn more about the intersection of privacy and AI (and many other topics), join the 24th cohort of my AI Governance Training in October.
13 replies · 56 reposts · 172 likes · 10.9K views
Philip Brey retweeted
Reuters @Reuters ·
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s AI creations to 'engage a child in conversations that are romantic or sensual' and generate false medical information reut.rs/4fylPSG @specialreports @JeffHorwitz
31 replies · 118 reposts · 174 likes · 266.6K views
Philip Brey retweeted
Petter Törnberg @pettertornberg ·
We built the simplest possible social media platform. No algorithms. No ads. Just LLM agents posting and following. It still became a polarization machine. Then we tried six interventions to fix social media. The results were… not what we expected. arxiv.org/abs/2508.03385
4 replies · 18 reposts · 95 likes · 6.9K views
Philip Brey retweeted
Ruben Hassid @rubenhassid ·
BREAKING: Scientists just analyzed 740,000 hours of human speech across YouTube and podcasts. Turns out, ChatGPT is rewiring how humans speak to each other. Here's what they discovered: (hint: the first AI to successfully colonize our brains)
34 replies · 125 reposts · 661 likes · 113.1K views
Philip Brey retweeted
IEET @IEET ·
Polygenic risk scoring doesn't work yet. "The real danger is that a bunch of wealthy parents-to-be who are too eager to control their children’s biological future will shell out $5,999 for a product that offers no such control." scientificamerican.com/article/why-ge…
0 replies · 1 repost · 0 likes · 88 views
Philip Brey retweeted
Luiza Jarovsky, PhD @LuizaJarovsky ·
🚨 BREAKING: OpenAI has just launched ChatGPT Agent. Below are important privacy & security risks everybody should be aware of.

Agentic AI applications differ from non-agentic ones particularly in the access rights and permissions they require in order to engage with external tools on the user's behalf. The more autonomous the agent, the more permissions and access rights it needs. For example, if a user wants an AI agent to search for and buy a dress without asking further questions, the agent will need access not only to the internet but also to their wallet. If the user wants the agent to schedule an event and invite friends, it will need access to at least their calendar and contact list.

Any permission given to a third-party app or system carries potential privacy and security risks. ChatGPT already presents privacy risks due to the way it is trained, the way it processes personal data, the user's privacy settings, and the type of personal information users input. The privacy risks of ChatGPT Agent will be exponentially higher, as many people will be granting access rights to external tools containing personal information (calendar, email, wallet, and more).

OpenAI knows that malicious actors will try to trick other people's AI agents into sharing private information, including addresses, emails, phone numbers, credit card details, and more. Sam Altman has just posted on X, recommending that people give agents "the minimum access required to complete a task."

In many cases, the privacy and security risks of letting an AI agent perform a task will greatly outweigh any productivity benefits it offers (but people will use AI agents anyway, out of hype, curiosity, or because their company is "AI first"). Unfortunately, the pace of AI development is much faster than the pace of AI literacy. Most people haven't yet understood ChatGPT's privacy risks, and they are about to be handed a new feature with exponentially MORE risks.

👉 Never miss my analyses on AI: join my newsletter's 68,200+ subscribers (below).
4 replies · 30 reposts · 72 likes · 6K views
Philip Brey retweeted
Nicholas Fabiano, MD @NTFabiano ·
2 weeks without smartphone internet significantly improved sustained attention. The effects were similar to being a decade younger.
150 replies · 2.2K reposts · 17.3K likes · 1.5M views
Philip Brey retweeted
4TU.Ethics @4TUethics ·
The "Ethics and AI" Conference serves as an interdisciplinary forum bringing together AI engineers, philosophers, ethicists, and others. Conference dates: September 22-23, 2025, Warsaw, Poland. Extended call-for-abstracts deadline: 15 July 2025. More information: 4tu.nl/ethics/news/Th…
0 replies · 1 repost · 1 like · 93 views
Philip Brey retweeted
Rohan Paul @rohanpaul_ai ·
It's a hefty 206-page research paper, and the findings are concerning: "LLM users consistently underperformed at neural, linguistic, and behavioral levels." This study finds that LLM dependence weakens the writer's own neural and linguistic fingerprints. 🤔🤔 Using EEG, text mining, and a cross-over session, the authors show that keeping some AI-free practice time protects memory circuits and encourages richer language even when a tool is later reintroduced.
311 replies · 2.4K reposts · 11.6K likes · 2.3M views
Philip Brey retweeted
Ruben Hassid @rubenhassid ·
The world's leading AI research center completed the most comprehensive study ever on kids and AI. They surveyed 1,800+ children, parents, and teachers in the UK. Here's what they found: (spoiler: children are outsmarting adults on AI)
60 replies · 403 reposts · 3.6K likes · 644.9K views
Philip Brey @PhilBrey ·
Read my latest #research on the historical development of the ethics of emerging technologies, as part of a tribute to Jim Moor, published with @SpringerNature in Minds and Machines: rdcu.be/efl6f
0 replies · 4 reposts · 3 likes · 644 views
Philip Brey retweeted
Adam Gleave @ARGleave ·
My colleague @irobotmckenzie spent six hours red-teaming Claude 4 Opus, and easily bypassed safeguards designed to block WMD development. Claude gave >15 pages of non-redundant instructions for sarin gas, describing all key steps in the manufacturing process.
81 replies · 130 reposts · 840 likes · 390.8K views
Philip Brey retweeted
Antonio Regalado @antonioregalado ·
First instance of a personalized gene-editing treatment. Infant treated with base editor in lipid nanoparticle. Rare mutation. Question ahead: Can anyone scale "bespoke" gene editing? technologyreview.com/2025/05/15/111…
2 replies · 12 reposts · 21 likes · 2.1K views
Philip Brey retweeted
Luiza Jarovsky, PhD @LuizaJarovsky ·
🚨 The 2025 Human Development Report is a FANTASTIC 328-page report by the United Nations on the implications of AI for human development. Before you rush to download it, read the quick overview below:

"AI has broken into a dizzying gallop. Each day seems to herald some new AI-powered algorithmic wonder. As a general-purpose technology, AI has been dubbed 'the new electricity.' Regardless of whether the utopian, techno-solutionist visions of AI's most ardent advocates come to fruition or fizzle as snake oil (or worse), the world is pulsing with a powerful new technology, a new kind of dynamism or vitality, that differs from technologies of the past.

Yet, the AI zeitgeist is awfully blinkered. Headlines fixate on arms races, policymaking on risks. These are real. But they are not, and should not be, the whole story. We need to go beyond races and risks to possibilities for people, possibilities shaped by people's choices.

The choices that people have and can realize, within ever-expanding freedoms, are essential to human development, whose goal is for people to live lives they value and have reason to value. A world with AI is flush with choices, the exercise of which is both a matter of human development and a means to advance it. The future is always up for grabs, even more so now. Trying to predict what will happen is self-defeating, privileging technology in a make-believe vacuum over the frictional realities and messier promises of people's agency and their choices. From a human development perspective, the relevant question instead is what choices can be made so AI works for people.

This year's Human Development Report examines what distinguishes this new era of AI from previous digital transformations and what those differences could mean for human development (chapter 1), including how AI can enhance or subvert human agency (chapter 2). People are already interacting with AI in different ways at different stages of life, in effect scoping out possibilities good and bad and underscoring how context and choices can make all the difference (chapter 3). Human agency is the price when people buy into AI hype, which can exacerbate"

👉 Download the full report below.
👉 NEVER MISS my AI governance updates: join my newsletter's 60,600+ subscribers using the link below.
2 replies · 8 reposts · 22 likes · 1.7K views