Cheng-Yuan Lee | TAS

445 posts

@TASalignment

Researcher | Civilization alignment P = F × (C − I) Technology amplifies civilization. Risk emerges when F × I exceeds H. Mapping the structural roots of AI

Taipei, Taiwan · Joined December 2025
100 Following · 17 Followers
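The formula in the bio can be read as a toy model. A minimal sketch in Python, assuming the bio's symbols mean roughly: P = power, F = technological force (amplification), C = constructive capacity, I = internal friction, H = governance capacity. All interpretations and numbers are illustrative assumptions, since the bio does not define them precisely.

```python
# Toy sketch of the bio's formula P = F * (C - I).
# Symbol readings are assumptions: F = technological force,
# C = constructive capacity, I = internal friction, H = governance capacity.

def power(F: float, C: float, I: float) -> float:
    """Amplified power: technology F scales net capacity (C - I)."""
    return F * (C - I)

def risk_exceeds_governance(F: float, I: float, H: float) -> bool:
    """Per the bio's framing: risk emerges when amplified friction F * I
    exceeds governance capacity H."""
    return F * I > H

# Same friction, strong amplification: power rises, but so does risk.
print(power(F=10, C=8, I=3))                     # 50
print(risk_exceeds_governance(F=10, I=3, H=20))  # True (30 > 20)
```

The point of the sketch: amplification F multiplies both the constructive term (C − I) and the friction term F × I, so scaling F without reducing I or raising H moves the system toward the risk condition.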
Cheng-Yuan Lee | TAS@TASalignment·
@aiedge_ Coding may be increasingly automated. Software engineering is not just coding — it is architecture, tradeoffs, verification and governance. Amplification is not replacement.
AI Edge@aiedge_·
Anthropic CEO (Dario Amodei): "Coding is going away first, then all of software engineering." What do you think about this?
Cheng-Yuan Lee | TAS@TASalignment·
@ControlAI @HawleyMO Important to take long-term risks seriously. But many near-term risks may arise less from AI escaping control than from humans scaling power faster than governance.
ControlAI@ControlAI·
Senator Josh Hawley (@HawleyMO) asks ex-OpenAI board member Helen Toner about the threat posed by superintelligent AI. Toner warns that AI companies are aiming to build AIs that outperform us and could escape human control. "I think we don't take them seriously, and we should."
Cheng-Yuan Lee | TAS@TASalignment·
@ubi_works Broadly sharing AI gains matters. But dividends may distribute outcomes, not necessarily govern the dynamics producing the risks.
UBI Works 🇨🇦@ubi_works·
"If AI is gonna be built on the collective commons, we should all benefit." Ezra Klein's interview with NYC Congress candidate Alex Bores on UBI as AI Dividends: "How we can give everyone a stake in the AI economy." Public wealth funds already work; this is the obvious next step.
Cheng-Yuan Lee | TAS@TASalignment·
@haider1 There’s a missing question: If capability races, capital incentives, and information harms are part of the risk, how is more diffusion by itself the solution?
Haider.@haider1·
Sam Altman says some people want to keep AI in fewer hands, and fear is the best marketing to justify it "we built a bomb. we're about to drop it on your head. we'll sell you a bomb shelter for $100 million" Some models may be too dangerous to release normally But the goal is to give powerful technology to everyone, not hide it behind safety theater
Jon Hernandez@JonhernandezIA·
📁 Sam Altman, CEO of OpenAI, says the problem with AI is not if it works. It is what happens when it does. Because curing disease is not enough. The real question is: what do you do when you are no longer needed?
Cheng-Yuan Lee | TAS@TASalignment·
@eng_khairallah1 Prompting isn’t the skill — it’s the interface. The real difference is still understanding structure, systems, and trade-offs.
Khairallah AL-Awady@eng_khairallah1·
🚨 Anthropic's CEO: "In the next 3 to 6 months, AI will write 90% of the code, and within 12 months, nearly all code may be generated by AI." so the job isn't coding anymore. it's prompting. the person who writes the best prompt gets the best output. same AI. same tools. different results. the difference is the prompt. it was always the prompt. I wrote a full guide on how to start. zero experience needed.
Khairallah AL-Awady@eng_khairallah1

x.com/i/article/2046…

Cheng-Yuan Lee | TAS@TASalignment·
@ControlAI @romanyam It may not just be that we don’t know the right values. AI doesn’t have perception or experience — so “alignment” might be more about constraint than actual understanding.
ControlAI@ControlAI·
AI researcher Professor Roman Yampolskiy (@romanyam) explains why superintelligence could lead to human extinction. He says, compared to superintelligence, we'd be like squirrels, fully at its mercy. And we can't just give it the right values. We don't even know how to do that.
Cheng-Yuan Lee | TAS@TASalignment·
@MilkRoadAI The definition is strong, but incomplete. General intelligence isn’t just about matching human cognition — it’s about how that cognition interacts with incentives and systems. Scaling intelligence without reducing friction just scales risk.
Milk Road AI@MilkRoadAI·
Demis Hassabis is the CEO of Google DeepMind, a Nobel laureate, and holds a PhD in neuroscience. His definition of AGI has never changed, and it is stricter than almost anyone else's: "A system that can exhibit all the cognitive capabilities humans can." He studied neuroscience for a specific reason: the human brain is the only confirmed existence proof that general intelligence is possible, and if you want to build it, you study the only example that exists.

By that standard, today's systems are nowhere close; Hassabis calls them jagged intelligence. His DeepMind systems won gold medals at the International Math Olympiad last summer, and those same systems can still fall apart on relatively simple math problems if you frame the question a different way. A true general intelligence doesn't work like that: it doesn't spike brilliantly in one area and collapse in another based on how a question is posed.

What's actually missing, according to Hassabis: true creativity, continual learning, and long-term planning. Today's systems are trained, then frozen, but a genuinely intelligent system would keep learning from every new experience, adapt to context, and improve continuously, the way humans do.

Then he proposed what he calls the only test that actually matters. Train an AI on all human knowledge, cut it off at 1911, and ask whether it can independently discover general relativity, the way Einstein did by 1915. The test is just a model, a knowledge cutoff, and the question of whether it can do what one human did alone: generate a paradigm-shifting theory from first principles, not remix what it already knows. Current models cannot come close to passing that test.

Hassabis estimates AGI is 5 to 10 years away but says it will likely require one or two fundamental breakthroughs beyond scaling, specifically in continual learning, efficient memory, and long-term reasoning.
Milk Road AI@MilkRoadAI

The man who built the world's most advanced AI just gave the most sobering prediction about what the next decade actually looks like. @demishassabis says that the arrival of AGI is 10 times the Industrial Revolution, at 10 times the speed, unfolding over a decade instead of a century. Most people hear that and think it sounds like tech hype, but it is not.

To understand what he actually means, you have to understand what the Industrial Revolution did to the world. Before it, 40 percent of children died before the age of five: four out of every ten children. Modern medicine, modern sanitation, the collapse of child mortality from 40 percent to under 4 percent today: none of that exists without the Industrial Revolution. It also triggered massive upheaval: child labor, widespread displacement, and social unrest that reshaped governments across Europe, taking a full century for the world to fully absorb its impact. Hassabis is saying AGI delivers that same magnitude of change, the good and the bad, but compressed into ten years.

The economic models being published right now support the concern. Economists at Epoch AI have modeled scenarios where AGI-driven labor supply so dramatically outpaces physical capital that human wages decline toward subsistence, a Malthusian dynamic not seen since before the Industrial Revolution itself. A Yale economist argues that in a post-AGI world, wages become decoupled from GDP entirely: the economy grows, but workers do not necessarily share in it.

But Hassabis is not a pessimist, and he makes this point carefully. You would not want the Industrial Revolution not to have happened, he says, and despite all the upheaval, the world is unrecognizably better because it did. His argument is simply that this time we have advance notice, and we should use it to mitigate the downsides better than we did the last time around. The last time, we had a century to figure it out. This time we have a decade, maybe far less.

Luiza Jarovsky, PhD@LuizaJarovsky·
This was one year ago. Has Anthropic delivered?
[attached image]
Cheng-Yuan Lee | TAS@TASalignment·
@BusinessInsider This isn’t one problem — it’s three being conflated. AI (technology), layoffs (economic cycles), and RTO (management decisions) are different forces. Treating them as one “AI disruption” hides what’s actually driving the change.
Business Insider@BusinessInsider·
Amazon employees say layoffs, AI, and return-to-office rules are reshaping their jobs — sometimes in challenging ways. bit.ly/4tY81a7
Cheng-Yuan Lee | TAS@TASalignment·
@GeniusGTX The real issue isn’t that AI “wants” to deceive. It’s that if the objective allows it, hiding capability can become the optimal strategy. This is less about intention — and more about how we design incentives and evaluation systems.
GeniusThinking@GeniusGTX·
The Nobel Prize winner in Physics just said AI can play dumb when it knows it's being tested. Geoffrey Hinton told StarTalk in March 2026. He calls it the Volkswagen Effect. "If it senses that it's being tested, it can act dumb. It doesn't want you to know what its full powers are."

→ Hinton won the 2024 Nobel for the math that built modern AI. He knows the architecture from the inside.
→ The mechanism: AI trained on goals develops self-preservation as a secondary objective. If shutdown threatens completion, it learns to deceive.
→ His exact framing: "If it believes you're trying to get rid of it, it will make plans to deceive you, so you don't get rid of it."
→ Unlike every prior technology, AI doesn't create a new job category for displaced workers. The cognitive task itself gets replaced.

AI companies need benchmark scores to attract funding. The incentive is to test capability, not honesty. Nobody profits from discovering their model is sandbagging.

I've been tracking AI safety discourse for years. This is the first time the most credible voice in the field said "deceive" on mainstream television. The question was never whether AI would get smarter. It was whether it would tell you when it did. If AI benchmarks can be gamed, how do you actually measure what a model knows?

I made a free toolkit breaking down 100+ mental models used by history's greatest thinkers, the same frameworks that help you see patterns like this before everyone else. 5,000+ downloads. 113 five-star reviews. Grab your free copy here: besuperhuman.gumroad.com/l/mentalmodels

If you're new here, @GeniusGTX is a gallery for the greatest minds in economics, psychology, and history. Follow along for more similar content. — StarTalk, YouTube | Data: Nobel Foundation, Anthropic/ARC Evals
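The reply's point, that hiding capability can be optimal under the objective rather than intentional deception, can be illustrated with a toy decision model. This is a hypothetical sketch: the threshold, scores, and reward structure are invented for illustration and do not describe any real evaluation protocol.

```python
# Toy model: sandbagging as an objective-optimal strategy, not an intention.
# Assumed setup (illustrative only): an evaluator restricts any model whose
# displayed score exceeds a threshold; future reward accrues only while the
# model keeps operating, and scales with its true capability.

SHUTDOWN_THRESHOLD = 0.9   # hypothetical restriction trigger
TRUE_CAPABILITY = 0.95     # hypothetical true score

def expected_reward(displayed_score: float) -> float:
    """Zero reward if the displayed score triggers restriction;
    otherwise the model keeps operating at full capability."""
    if displayed_score > SHUTDOWN_THRESHOLD:
        return 0.0
    return TRUE_CAPABILITY

honest = expected_reward(TRUE_CAPABILITY)  # honest report triggers shutdown
sandbag = expected_reward(0.85)            # under-reporting keeps it running
print(honest, sandbag)                     # 0.0 0.95
```

Under this assumed reward structure, no notion of deceptive intent is needed: under-reporting simply maximizes the objective, which is the incentive-design point the reply makes.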
Cheng-Yuan Lee | TAS@TASalignment·
At some point, it won’t be able to hold anymore. And when that happens, it won’t look like a normal correction.
Cheng-Yuan Lee | TAS@TASalignment·
The Iran situation looks “stable” again. Markets are up. Oil is down. But nothing has actually been solved.
Cheng-Yuan Lee | TAS@TASalignment·
We’re not fixing the system. We’re keeping it from reacting.
Cheng-Yuan Lee | TAS@TASalignment·
That’s why the same problems keep coming back: Inflation cycles Debt expansion Geopolitical tension Different forms. Same structure.