Changling Li

117 posts

@ChanglingXavier

AI safety, multi-agent systems, and governance. Currently at @ETH_en and @MPI_IS working with @maksym_andr and @sahar_abdelnabi.

Zurich, Switzerland · Joined March 2022
356 Following · 60 Followers

Pinned Tweet
Changling Li @ChanglingXavier
As the new year begins, I want to share a more positive perspective on a post-AGI future, or, looking even further ahead, on a world of near-full automation. Amid widespread fears of job loss and of humans becoming economically unnecessary, a future in which AI systems create most value may instead push us to rediscover what it means to be human.

This thought emerged over the past few months while I was preparing PhD applications. Application season inevitably turns practical decisions into existential ones. As I kept asking myself what I wanted to do and what I truly cared about, I noticed how often my sense of meaning collapsed into a single idea: value creation, producing something impactful. It felt less like an expression of who I was and more like a standard I believed others expected of me. Meaning had become tied to output, a belief I had absorbed without ever stopping to examine it.

That realization points to something structural. In the societies many of us live in, human worth is routinely proxied by productivity. We treat pay as a signal of worth, praise productivity as a virtue, and struggle to articulate why those who do not or cannot produce are still fully valuable. This logic is so embedded that even when we critique it politically, many of us still live by it psychologically. “What do you do?” becomes a proxy for “Why do you matter?”

But what happens when value creation is no longer dominated by humans but by AI systems? In economic terms, AI increasingly looks like a general-purpose input: producing text, code, designs, scientific hypotheses, even art at scale. Much of the current anxiety around automation is a direct response to this: if humans are no longer needed to produce value, what happens to meaning? But this question already assumes that human worth must be earned through production in the first place.

A different possibility is that as AI systems make productivity increasingly abundant, productivity loses its authority as a measure of human worth. When output can be generated at scale by machines, it no longer functions as a meaningful proxy for value. Paradoxically, this shift could make society more human-centered than ever, rather than less. The qualities that differentiate humans from artificial agents, the fact of being someone rather than something, may become scarcer and more cherished. Recently, I learned about a startup in Switzerland that issues certificates for human-created artworks, explicitly labeling them as such in a landscape saturated with generative models. To me, this looks like an early cultural response to such a shift: when production becomes cheap, origin and meaning become precious, and preserving human presence becomes a value in itself.

From this perspective, AI does not merely threaten existing labor structures; it exposes the moral assumptions embedded within them. It forces us to ask whether it ever made sense to judge humans primarily by their contribution to collective output. It invites a reevaluation of social worth that many philosophers and social theorists have long argued for, but that scarcity made difficult to implement. In a world where machines handle much of what we once called “value creation,” the question becomes not how humans can compete, but why they ever had to.

None of this guarantees a good outcome. Post-AGI futures still depend on governance, safety, and political choices, and many paths remain dangerous. Productivity might still remain central even if humans are excluded from it.
But as the new year begins, I find it meaningful to hold onto this possibility that abundance and automation might not erase meaning, but shift it away from production, and back toward the fact of being human itself. If that happens, even partially, the future may be less alien than we fear. Happy New Year.
0 replies · 0 reposts · 3 likes · 397 views
Changling Li retweeted
Joel Z Leibo @jzl86
I see this as yet another reason why we in the AI community must stop thinking that we deploy technology into an unchanging static "environment". The world responds to our deployments. It's obvious in many fields, econ, evo bio, cybersecurity, etc. But remains confusing in AI.
Nenad Tomasev @weballergy

Excited to share our new paper on AI Agent Traps! An increasing volume of web content is being created by, and consumed by, advanced AI agents. This puts environmental AI safety in focus, as it exposes a vast attack surface via the content that AI agents interact with. Our paper explores the landscape of environmental attacks and defenses, aiming to inform the mitigations needed to ensure the safety of the agentic web.

0 replies · 4 reposts · 28 likes · 4.1K views
Seán Ó hÉigeartaigh @S_OhEigeartaigh
You know what would be a great name for this new foundation? Open Philanthropy.
7 replies · 8 reposts · 200 likes · 6.3K views
Changling Li retweeted
Ian Osband @IanOsband
Something is rotten with policy gradient. PG has become *the* RL loss for LLMs. But it’s not even good at basic RL. Even on MNIST with bandit feedback, vanilla PG performs far worse than cross-entropy because it wastes gradient budget. Delightful Policy Gradient: arxiv.org/abs/2603.14608…
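For readers unfamiliar with the contrast, here is a minimal sketch (mine, not the paper's code) of the two losses on classification treated as a contextual bandit: vanilla policy gradient only propagates signal through the one sampled class, while full-information cross-entropy updates every logit.

```python
# Illustrative sketch only; assumptions mine, not from the paper.
# MNIST as a contextual bandit: the model samples a class, observes
# reward 1 if it matches the label, and vanilla policy gradient updates
# only the sampled class's log-probability. Cross-entropy, the
# full-information reference, sends gradient through every logit.
import torch
import torch.nn.functional as F

def vanilla_pg_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """REINFORCE under bandit feedback: -reward * log pi(a|x)."""
    probs = F.softmax(logits, dim=-1)
    actions = torch.multinomial(probs, num_samples=1).squeeze(1)  # sampled class
    rewards = (actions == labels).float()                          # bandit signal
    log_pi_a = F.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(rewards * log_pi_a).mean()

def cross_entropy_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Full-information reference: gradient flows through all classes."""
    return F.cross_entropy(logits, labels)
```

Note that with zero reward on wrong guesses, most samples contribute no gradient at all, which is one concrete sense in which vanilla PG "wastes gradient budget".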
16 replies · 44 reposts · 437 likes · 159.5K views
Changling Li @ChanglingXavier
Exciting work! It will be valuable to see how this may shape scientific discovery and political opinion.
Natasha Jaques @natashajaques

The paper I’ve been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content. We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback, with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.
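A rough sketch of the counterfactual comparison as I read it (the embedding model and distance metric below are my assumptions, not the paper's): embed the human's revision and the LLM's revision of the same draft under the same feedback, and score distortion as their distance.

```python
# Rough sketch of the counterfactual setup as I read it; the encoder
# and metric here are my assumptions, not the paper's method.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

def distortion(v2_human: str, v2_llm: str) -> float:
    """How far the LLM's revision drifts from the human's own revision
    of the same draft v1 under the same feedback (0 = no drift)."""
    e_h, e_l = model.encode([v2_human, v2_llm])
    cos = np.dot(e_h, e_l) / (np.linalg.norm(e_h) * np.linalg.norm(e_l))
    return 1.0 - float(cos)

# Usage: both revisions start from the same v1 and the same expert feedback.
# print(distortion(human_revision, llm_revision))
```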

0 replies · 0 reposts · 1 like · 100 views
Changling Li retweeted
Valerio Capraro @ValerioCapraro
We are no longer living in a purely human society. We are entering a hybrid system where humans and machines continuously interact and influence each other. Where does this system evolve? In a new perspective piece, we brought together leading experts to address this using the lens of evolutionary game theory.

We outline six core research directions:
1) Evolution of social behaviour. How cooperation, fairness, and trust evolve in mixed human–AI populations.
2) Machine culture. How AI systems generate, transmit, and select cultural traits.
3) Language–behaviour co-evolution. How LLMs, by framing decisions, reshape preferences, norms, and actions.
4) Delegation dynamics. How control, responsibility, and agency shift between humans and machines.
5) Epistemic pipelines. How different cognitive processes generate human vs AI judgments, and how these co-evolve.
6) AI–regulation co-evolution. How firms, institutions, and users strategically shape, and are shaped by, AI development.

We hope this framework sparks new work at the intersection of AI, behaviour, and society. * Paper in the first reply. Joint with @T_A_Han, @jzl86, Tom Lenaerts, @iyadrahwan, @fernandopsantos, @matjazperc
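To make direction 1 concrete, here is a toy sketch (mine, not from the paper) of replicator dynamics for human cooperation when a fixed fraction of the population consists of AI agents hard-coded to cooperate; the payoff values and AI share are arbitrary assumptions.

```python
# Toy illustration (mine, not the paper's model): replicator dynamics
# for cooperation in a one-shot prisoner's dilemma, with a fixed
# fraction of the population being always-cooperate AI agents.
R, S, T, P = 3.0, 0.0, 5.0, 1.0   # assumed prisoner's dilemma payoffs
AI_FRAC = 0.3                      # assumed share of always-cooperate AI agents

def step(x: float, dt: float = 0.01) -> float:
    """One Euler step of the replicator equation for x, the fraction
    of cooperators among humans."""
    coop = AI_FRAC + (1 - AI_FRAC) * x      # cooperators in the whole population
    f_c = coop * R + (1 - coop) * S          # expected payoff to a cooperator
    f_d = coop * T + (1 - coop) * P          # expected payoff to a defector
    return x + dt * x * (1 - x) * (f_c - f_d)

x = 0.5
for _ in range(10_000):
    x = step(x)
print(f"long-run human cooperation rate: {x:.3f}")
```

In this bare setup defection still dominates regardless of the AI share; the interesting dynamics the authors point to come from mechanisms like reciprocity, norms, and delegation layered on top.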
20 replies · 49 reposts · 194 likes · 10.3K views
Changling Li retweeted
Joel Z Leibo @jzl86
New paper: “A Theory of Appropriateness That Accounts for Norms of Rationality” Agent-based models of social order work better when agents act by predictive pattern completion from prefix (culture/context) to suffix (action) than when they act through expected value maximization
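A schematic way to see the contrast (my sketch, not the paper's model): one agent completes the (context -> action) pattern from an observed cultural corpus, the other picks whatever maximizes expected value.

```python
# Schematic contrast only; this is my sketch, not the paper's model.
import random
from collections import Counter
from typing import Callable, Hashable

Corpus = list[tuple[Hashable, str]]  # observed (context, action) pairs

def pattern_completion_act(context: Hashable, corpus: Corpus) -> str:
    """Complete prefix (culture/context) to suffix (action): act as
    others acted in this context, in proportion to observed frequency."""
    counts = Counter(a for c, a in corpus if c == context)
    actions, weights = zip(*counts.items())
    return random.choices(actions, weights=weights, k=1)[0]

def ev_maximizing_act(context: Hashable, actions: list[str],
                      ev: Callable[[Hashable, str], float]) -> str:
    """Pick the action with the highest expected value in this context."""
    return max(actions, key=lambda a: ev(context, a))

# e.g. a queueing norm: most observed agents wait in line, so the
# pattern-completion agent almost always waits too, even when an
# EV-maximizing agent would cut.
corpus = [("queue", "wait")] * 9 + [("queue", "cut")]
print(pattern_completion_act("queue", corpus))
```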
5 replies · 15 reposts · 86 likes · 11.5K views
Changling Li @ChanglingXavier
I've been thinking about the need for an independent layer in AI governance for a while, and this piece articulates it far better than I could! We already see versions of this in everyday life: food safety certification and carbon emissions verification both rely on licensed private bodies operating under government-defined outcomes. These aren't perfect analogies, as AI moves faster and its harms are far harder to see and measure, but they give me genuine optimism that this model isn't just theoretical. What excites me most is that competition between regulatory companies could actually drive real innovation in safety auditing and evaluation methods: we can harness market forces for governance rather than purely trying to constrain them through top-down regulation. The critical condition, though, is that governments must genuinely back the authority of these regulators over powerful frontier labs, which requires sustained political will we haven't consistently seen. Overall, I believe this is the way to go!
Gillian Hadfield @ghadfield

1/ AI systems are quickly becoming embedded throughout the economy. But we have almost none of the regulatory tools, regulatory markets among them, to manage them. Here's what I think we should do about it:

0 replies · 0 reposts · 1 like · 111 views
Changling Li @ChanglingXavier
I genuinely enjoyed this piece. While I share the concern about heavy-handed intervention disrupting the iteration engine, I'd also question whether the competitive race itself is something worth protecting. Labs sprinting without shared standards or agreed safety benchmarks is alarming in its own right. Some regulatory friction that slows this down might be exactly what we need before we better understand these models. The harder question, though, is whether government involvement would bring meaningful safety constraints, or as recent DoW developments suggest, simply accelerate the application of AI in warfare and the existential risks that come with it.
john allard @john__allard

x.com/i/article/2028…

0 replies · 2 reposts · 4 likes · 557 views
Changling Li @ChanglingXavier
Had a wonderful time at #IASEAI '26 attending the workshop on governance for multi-agent systems! It was inspiring to exchange ideas with so many like-minded people working on this important challenge. Looking forward to seeing you all again at future conferences!
Cooperative AI Foundation @coop_ai

It’s been a pleasure to lead our #IASEAI’26 workshop on ‘Establishing Foundational Principles and Thresholds for Multi-Agent AI Governance’, in collaboration with @BrookingsInst and hosted by @IASEAI. Thank you to all the technical experts and leaders from governance, policy, ethics, and law who joined and made this workshop and pre-workshop dinner a success.

0 replies · 0 reposts · 5 likes · 126 views
Changling Li retweeted
Peter N. Salib @petersalib
Suppose you want to give AIs rights or duties, or do deals with them, or protect their well-being (should they have it). First, you need to be able to distinguish between AIs--to count them. This is a hard problem b/c AIs have no distinct physical bodies. They can split, copy,
7 replies · 12 reposts · 53 likes · 7.6K views
Changling Li retweeted
Sahar Abdelnabi 🕊 @sahar_abdelnabi
🧵 1/9 We assume that LLMs are stateless: once a conversation ends, no information persists. In our paper (accepted at @satml_conf 2026!), we challenge this and introduce implicit memory: LLMs can carry hidden states across independent interactions. 📄 arxiv.org/abs/2602.08563
14 replies · 70 reposts · 496 likes · 39.7K views
Changling Li retweeted
Cas (Stephen Casper) @StephenLCasper
🚨 New paper led by @joemkwon with @GovAIOrg Are you worried about OpenAI automating dev & evals with AI agents? What about Grok reading all of your tweets & info to profile you? Some of the most consequential *internal* deployments of AI systems are in regulatory grey areas.
2 replies · 12 reposts · 52 likes · 3.1K views
Cas (Stephen Casper) @StephenLCasper
Did you ever notice that the image at the top of OpenAI's "Our approach to AI Safety" article is a giant red flag???
5 replies · 3 reposts · 117 likes · 7.2K views