APPG AI

2.4K posts

@APPG_AI

@APPG_AI is exploring the impact and implications of #AI, chaired by @Metcalfe_SBET and @whiterhino1949, Secretariat @BigInnovCentre

London, England · Joined April 2017
752 Following · 3.7K Followers
Pinned Tweet
APPG AI @APPG_AI
Access APPG AI Resources & Events on the Pavilion web and mobile app. Please use the SAME username and password across all your devices!
Pavilion on App Store: apple.co/4dCawaW
Pavilion on Google Play: bit.ly/44Da6N3
Pavilion on PC: bit.ly/4buFKPh
APPG AI retweeted
Alex Housley @ahousley
This was one of the most fascinating discussions I've heard on deep fakes, a highly relevant topic considering the potential impact of this rapidly advancing technology on the upcoming UK and US elections. All evidence givers and the Chairs @Metcalfe_SBET and Lord Clement-Jones shared interesting insights and views. Thanks to @BirgitteBIC @BigInnovCentre for organising another great @APPG_AI - still going strong after 8 years!

My key takeaways:

Carl Miller @carljackmiller - Centre for the Analysis of Social Media (CASM)
- The biggest threat is not "deep fakes", it's coordinated interference campaigns.
- Exploiting cognitive biases and our view of the world.
- Friendship and 1:1 engagement: "the weaponisation of friendships".

Aled Lloyd Owen @aledlloydowenai - Global Policy Director, @Onfido
- A 3,000% increase in deep fake attacks in financial services last year - e.g. images and voices delivered to a mass audience for purposes of fraud - and a 40% increase so far this year.
- Regulation in the financial space doesn't exist in the same way in the democratic space.
- There's an arms race between deep fake detectors and creators. Detectors face training-data challenges due to biometrics: they need access to real or synthetic deep fakes.
- Watermarking is part of the US response - metadata to show content is from a genuine source. But people producing fake content can disseminate it without watermarks today.
- Proportionality in the use cases vs the underpinning technology: similar software powers Snapchat filters and videos created for novelty/fun.
- Delivered a great live demo of how deep fakes can be generated in real time using freely available technologies, including face swaps on physical passports to trick people on video calls.

Prof. Gina Neff @ginasue - Minderoo Centre for Technology and Democracy at the University of Cambridge
- Creating human-in-the-loop tools for fact-checking; Tech Mission Fund of £33m for responsible AI.
- Recent ChatGPT audit outcome: not fit for purpose for election information.
- "Cheap fakes" are also a threat: lower quality but easier to produce at scale.
- Emotions and fear make content more viral.
- Deepfake harassment targets women in the public sphere; impacts include journalists self-censoring because of deep fake effects.
- The EU AI Act has relevant legislation; for the UK, the Online Safety Act is potentially a good thing.
- Access to data is a major challenge for the regulatory environment.

Markus Anderljung @Manderljung - Head of Policy at the Centre for the Governance of AI (@GovAI_)
- In the current state of deep fakes, they can easily be identified and debunked. A recent example targeted Keir Starmer and was never reported as the truth. Don't expect deep fakes to swing the next UK election (by Jan 2025).
- But the UK could be a testing ground for other areas. Globally there are more challenges, e.g. less awareness of deep fakes and less trust in government.
- Example threats: 1) robocallers telling people to go to another place to vote; 2) AI systems that automate the creation of influencers on social media and then use them to interfere.
- Sora from OpenAI could be used to harm, so it's good that it's not openly available yet. OpenAI should make the effort to watermark content as generated - provenance tags in the metadata.

Sophie Murphy Byrne - @LogicallyAI - countering disinformation and monitoring trends; also an AI company.
Methods of mass persuasion:
1. Lowering barriers to entry for disinformation. A Russian operation in the previous US election estimated to cost c. $12m would now cost under $1,000.
2. Flooding the zone - saturating the info space to foster a sense of mistrust. In tests, 93% of prompts that could be misused - e.g. migrants crossing a border - were accepted by GenAI products. This number has gone up!
3. Message tailoring to the audience - gender, political views etc. Most demographic traits, e.g. gender, political persuasion and sexual preference, can be inferred from public social media content to a high degree of accuracy (80-90%+).
- See the white paper on their website: logically.ai/resources/comb…
- What can politicians do? A foreign interference task force; ensure OFCOM knows what good looks like.
- Elections Act 2022: entities registered with the commission must carry digital imprints to say who published - but bad actors won't be registered.

To which Lord C-J commented: "what you describe is the Cambridge Analytica saga on stilts!" 😂

Q&A highlights:
- Calls to do something about open-source foundation models.
- Suggestion of, and disagreement on, whether blockchain technology can be used to track content authenticity.
- Interventions are unlikely to happen this year or next.
- The Head of Ethics from @turinginst suggested more of a people/consumer focus vs a product/technology focus.
- The UK digital literacy strategy was last updated in 2014 and is in need of an update!

Thanks everyone - please correct any unintentional misinformation!
APPG AI retweeted
Alex Housley @ahousley
I’m at Parliament for an @APPG_AI evidence meeting on navigating disinformation and deep fakes, safeguarding democratic processes and responsible AI innovation.
APPG AI @APPG_AI
Our report on Generative AI & Intellectual Property is now LIVE! What are the implications of Generative AI for IP in the Creative Sectors? View the report here! bit.ly/3SDLcYo or alternatively download the Pavilion app! #GenerativeAI #IntellectualProperty
APPG AI @APPG_AI
The Government has today released its response to the AI Regulation White Paper consultation, stating that over £100m will be spent on advancing AI research, innovation, and regulation: bit.ly/49mgTfm What are your thoughts on the Government's response? #AIRegulation
APPG AI @APPG_AI
The APPG AI Report on AI's use in Healthcare & Telehealth is now LIVE! Discover what our expert speakers think about how AI can revolutionise healthcare and telehealth. View the report here! bit.ly/424LH1R or alternatively download the Pavilion app!
APPG AI @APPG_AI
Generative AI & Intellectual Property! Are new IP laws required to address challenges? Who should hold IPR for AI-generated content? Vote prior to the APPG AI evidence session on 22nd Jan. VOTE HERE: bit.ly/3vyLlEl or on the Pavilion App - available to download NOW!
APPG AI @APPG_AI
The APPG AI Parliamentary Brief on AI and the UN Sustainable Development Goals is now live! Explore expert perspectives and policy suggestions for unlocking the full potential of AI while safeguarding progress here: bit.ly/3NvEEZN #AI #Sustainability #UNSDGs
APPG AI @APPG_AI
Feedback on the AI Safety Summit! How satisfied are you with the outcomes of the Summit? What is the greatest AI Safety challenge? What is the potential of the AI Safety Institute? Vote in our survey now and let us know your thoughts! VOTE HERE: bit.ly/46ejA0z
APPG AI @APPG_AI
The APPG AI report "Democratising AI: Generative AI as a Catalyst for Change in Education" is now live! Read insightful analysis and recommendations on this fascinating and salient topic here: bit.ly/3QzxggJ #GenerativeAI #AI #Education