Elizabeth A. Seger

92 posts

@ea_seger

Researcher at GovAI running projects on Democratising AI and Epistemic Security.

Oxford, UK · Joined October 2020
106 Following · 606 Followers
Elizabeth A. Seger @ea_seger
Proud to be co-chairing #IASEAI25 ahead of the #AIActionSummit. Join us to bring together AI safety and ethics to build a future for AI that is beneficial to humans. Apply to attend by *Dec 4*. iaseai.org/conference
International Association for Safe & Ethical AI@IASEAIorg

Join experts on AI Safety & Ethics - including keynote speakers @Yoshua_Bengio, @katecrawford, & @JosephEStiglitz - in Paris at #IASEAI25, official side event to the #AIActionSummit! Application deadline extended to *Dec 4* Submit here: iaseai.org/conference

Elizabeth A. Seger retweeted
Demos @Demos
Fascinating new research from @turinginst, who rightly flag the threat posed by generative AI to our election. We recently issued an Open Letter to political leaders, calling on them to sign a cross-party agreement on responsible AI use. Find out more: demos.co.uk/research/open-…
The Alan Turing Institute@turinginst

📒 CETaS' (Centre for Emerging Technology & Security) new report explores AI-enabled election security threats: ✳️Types of threat ✳️Timeline of when threats could occur ✳️Mitigating risks 📖 With the UK general election coming up, find out more: cetas.turing.ac.uk/publications/a…

Elizabeth A. Seger retweeted
Demos @Demos
How can electoral integrity stand up to the threats of AI-generated deepfakes? @ea_seger explores with @TimesRadio the urgent need to protect our democracy during this election campaign. 📻Listen to the full interview from 50:39 👉thetimes.co.uk/radio/show/202…
Elizabeth A. Seger @ea_seger
(🧵2/3) The four commitments 1⃣Not using gen-AI to produce misleading content 2⃣Ensuring that where they do use this technology, it is clearly labelled 3⃣Not amplifying misleading content like deepfakes 4⃣Ensuring their campaign staff and supporters have guidelines to follow
Elizabeth A. Seger @ea_seger
@sriramk the claim at the end of point 1 that we have "resolved the black box issue" feels like a stretch. Can you clarify what is meant? Thanks!
Elizabeth A. Seger retweeted
Risto Uuk @RistoUuk
The EU AI Act hasn’t ever existed, it doesn’t exist right now, and it won’t exist for a while, so surely whatever the issues in the EU AI innovation area, they aren’t due to the AI Act?

The Act actually aims to support companies, especially EU SMEs. Its proposed tiered approach for general-purpose AI intends to minimize the burden on SMEs by seeking to facilitate information sharing and safety guarantees from larger entities. It tries to come up with various thresholds to explicitly exclude smaller players from most obligations. This overall approach seems highly innovation-friendly, tailored to aid smaller players. Why are some explicitly opposed to it? Do they not want to support SMEs and increase AI uptake through ensuring a higher level of quality and trust?

In addition, look at Article 55 in the Act, which aims to provide support for SMEs and startups. Specific measures include granting priority access for SMEs and EU-based startups to regulatory sandboxes if eligibility criteria are met. Additionally, tailored awareness-raising and digital skills development activities will be organised to address the needs of smaller organisations. Moreover, dedicated communication channels will be set up to offer guidance and respond to queries from SMEs and startups. Participation of SMEs and other stakeholders in the standards development process will also be encouraged.

To reduce the financial burden of compliance, conformity assessment fees will be lowered for SMEs and startups based on factors like development stage, size, and market demand. The Commission will regularly review certification and compliance costs for SMEs and startups (with input from transparent consultations) and work with Member States to reduce these costs where possible.
Elizabeth A. Seger @ea_seger
(10/10) Also, if model access is restricted because of significant risks of open model release, the model can later be opened if and when defensive capabilities catch up to adequately address those risks. Decisions to open-source are, however, irreversible.
Elizabeth A. Seger @ea_seger
(9/10) ❌Always open v. always closed: Models for which access is initially restricted can be opened up at a later date. This gives time for studying model impacts (e.g. in a staged release process).