Andrew McStay

1.9K posts


@digi_ad

Reflective optimist, Prof, Bangor Uni, IEEE, W3C, governance, emotional AI, author of Automating Empathy (OUP 2023, open access).

UK · Joined April 2008
1.9K Following · 1.9K Followers
Andrew McStay@digi_ad·
A total pleasure to be a guest on @BBCRadio4's 'Artificial Human', discussing whether AI can read emotions. The angle on the topic was excellent, as Chris, a listener, had called in asking whether AI can help him understand emotion expressions. bbc.co.uk/programmes/m00…
Andrew McStay@digi_ad·
New paper :) Here I argue that presence is a crucial factor in human-synthetic interaction, esp. in relation to ghostbots. It explains 'presence' and shows why it matters, esp. for those in governance. It gets weird, but there are v.simple recommendations. doi.org/10.1007/s10676…
Andrew McStay@digi_ad·
@SimonadeHeer @mario_gug @APonceETUI Hi, perhaps, but I see wiggle room for a defence here. If one separates detection of states from expression (e.g. telesales), as per recital 18, I think there's a big problem in AIA. Problem is reliance on Ekman/physiognomy critiques. (Nb. I'd like to be wrong.)
Simona@SimonadeHeer·
@mario_gug @digi_ad @APonceETUI "Sounding grumpy" is an inference in itself. "Infer" was meant to be broader rather than restrictive. See recital 44: "AI systems intended to be used to *detect* the emotional state of individuals in situations related to the workplace and education should be prohibited."
Andrew McStay@digi_ad·
@ChrisTMarsden Hi Chris, I don't think it's open access and my uni doesn't subscribe. Got an open access link, or perhaps pop me an email with it?
Andrew McStay@digi_ad·
(2) Citizens (n=2000+) had mixed views, seeing scope for safety, but also risks of bias (40% for, 27% don't know, and 33% uncomfortable). Qualitative work also mixed. Our view distills to "Don't": even for laudable goals (safety), risks are too high and the technology too shonky.
Andrew McStay@digi_ad·
At the EAI Lab we find little evidence for effectiveness of emotional AI tech in a policing or security context, risk of bio-deterministic framings of criminality, deep skepticism from law enforcement (interviews), and high risk in trying to gauge emotion and/or human intent (1).
Paul Lewis@paullewismoney

Grimace, you’re on TV bit.ly/3xq5eyY Network Rail uses AI to assess the happiness of passengers (or customers as it would call them) as they pass through ticket barriers

Andrew McStay@digi_ad·
@ChrisTMarsden Thanks, missed this. I've seen a few trials by London Underground for anger, but not this one.
Andrew McStay retweeted
James Wright@jms_wright·
UNESCO’s Ethics of AI team is recruiting! We're looking for 6 experts to contribute to cutting-edge projects on the ethical governance of AI across the following workstreams (please note the deadlines for applications are 16-17 April):
Andrew McStay retweeted
Responsible Ai UK@responsibleaiuk·
📢Exciting news! A new white paper titled “AI FRINGE PERSPECTIVES” is now published, written by a consortium drawn from the attendees of @AISummitFringe with @responsibleaiuk ensuring a diversity of views garnered from the academic community. Download now rai.ac.uk/publications
Andrew McStay retweeted
Mario Guglielmetti@mario_gug·
As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area. (October 2022) ico.org.uk/about-the-ico/…
Andrew McStay@digi_ad·
Many problems, but detecting "aggressive" behaviour is v.notable. Esp' a problem for folk with dark skin (black people with neutral expressions more likely to be labeled angry). ICO got ahead of this issue (we inputted) so time to put that policy to work. wired.com/story/london-u…