Ernie Aase

96 posts


@SanctifiedTech

Founder of The Starchart Protocol, The World's First Marketplace for Autonomous Research

Joined December 2024
8 Following · 1 Follower
Pinned Tweet
Ernie Aase@SanctifiedTech·
Humanity has everything it needs, right now, to achieve a large part of the value proposition for AGI in science.
0 replies · 0 reposts · 0 likes · 210 views
Ernie Aase@SanctifiedTech·
Does this help me build a better solution? Does this help me communicate more honestly? If not, it might be noise worth filtering out.
0 replies · 0 reposts · 0 likes · 6 views
Ernie Aase@SanctifiedTech·
AGI capability, once possible, inevitably proliferates. As capability rises, the entry barrier lowers. More actors with capability leads to statistical certainty of misalignment. Therefore, defensive systems become necessary, not just alignment.
0 replies · 1 repost · 0 likes · 18 views
Ernie Aase@SanctifiedTech·
Designing robust interaction protocols between models means AGI-quality output before an AGI-tier threat.
0 replies · 0 reposts · 0 likes · 17 views
Ernie Aase@SanctifiedTech·
The ability to create AGI systems proliferates over time. The barrier to entry continually lowers. Perfect coordination or control is impossible without totalitarianism. Therefore, the emergence of misaligned or "bad" AGI approaches inevitability over a long enough horizon.
0 replies · 0 reposts · 0 likes · 15 views
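The proliferation argument in the tweet above can be sketched numerically. A minimal illustration, assuming each capable actor independently carries some small probability p of producing a misaligned system; both p = 0.01 and the actor counts below are hypothetical, chosen only to show the trend, not figures from the thread:

```python
def p_at_least_one_misaligned(p: float, n: int) -> float:
    """P(at least one misaligned system) = 1 - (1 - p)^n for n independent actors."""
    return 1 - (1 - p) ** n

# As the number of capable actors n grows, even a small per-actor risk
# (here an assumed p = 0.01) drives the overall probability toward 1.
for n in (10, 100, 1000, 10000):
    print(n, round(p_at_least_one_misaligned(0.01, n), 4))
```

Under these assumptions the probability climbs from roughly 10% at 10 actors to effectively 100% at 10,000, which is the "statistical certainty over a long enough horizon" claim in quantitative form.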
Ernie Aase@SanctifiedTech·
We're likely entering an era of permanent coexistence with both aligned and misaligned systems.
0 replies · 0 reposts · 0 likes · 9 views
Ernie Aase@SanctifiedTech·
After we build "safe AGI," the threat never disappears. The capability to build unsafe AGI will always exist from that point forward. This creates an inescapable situation: either we establish perfect control (essentially a totalitarian state, likely run by the "good AGI"), or bad AGI will eventually emerge. Since the totalitarian option is both undesirable and unlikely, the emergence of bad AGI becomes inevitable. The only solution isn't preventing bad AGI (impossible), but creating widespread protective systems: essentially "good AGI bodyguard systems" distributed to defend humans and society against the inevitable bad actors. This isn't about winning once, but about establishing an ongoing defensive capability that can constantly detect and counter new threats as they emerge. The race never ends; it just shifts to maintaining a defensive advantage in perpetuity. We're likely entering an era of permanent coexistence with both aligned and misaligned systems.
0 replies · 0 reposts · 0 likes · 12 views
Ernie Aase@SanctifiedTech·
The combinatorial complexity of language and huge AI context windows mean it's mathematically impossible that we've found optimal input patterns yet. It's like we've been handed an incredible mathematical instrument, and we're still just pressing buttons to see what happens.
0 replies · 0 reposts · 0 likes · 14 views
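The scale behind that combinatorial claim can be made concrete. A minimal back-of-the-envelope sketch, assuming an illustrative 100k-token vocabulary and a 128k-token context window (representative orders of magnitude for modern LLMs, not figures from the thread):

```python
import math

# Assumed, illustrative figures for a modern LLM:
vocab_size = 100_000      # tokens in the vocabulary
context_length = 128_000  # tokens in the context window

# Distinct possible inputs = vocab_size ** context_length.
# Work in log10 to avoid constructing the astronomically large integer.
log10_sequences = context_length * math.log10(vocab_size)
print(f"distinct inputs ~ 10^{log10_sequences:,.0f}")
```

That comes out to roughly 10^640,000 distinct inputs, dwarfing the ~10^80 atoms estimated in the observable universe, so only a vanishing fraction of the input space can ever have been explored.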
Ernie Aase@SanctifiedTech·
@maxwinga @elonmusk We can produce systems capable of AGI quality output before AGI level risk. These same systems can automate alignment research. Working on it rn.
0 replies · 0 reposts · 1 like · 31 views
Max Winga@maxwinga·
This is an AI safety researcher from OpenAI, who quit because he's terrified of what they're doing. With no hope for alignment, their race to build superintelligence is a race to kill every man, woman, and child on planet Earth. @elonmusk what are you doing to keep us safe?
Steven Adler@sjgadler

IMO, an AGI race is a very risky gamble, with huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.

9 replies · 4 reposts · 88 likes · 3.8K views
Ernie Aase@SanctifiedTech·
@lessin Surprised that you're taking the cost claims at face value. Tech in China is literally an arm of the CCP, the most prolific totalitarian power in history, and the response is "sure, I'll just take their word for it"?
0 replies · 0 reposts · 0 likes · 58 views
sam lessin 🏴‍☠️
$6M in training destroys $500B+ in meme-driven market cap in 2 business days. Highest-ROI play in history (hope they shorted before release!)... and this is the problem with belief-driven investing... fickle.
21 replies · 11 reposts · 234 likes · 44.5K views
Ernie Aase@SanctifiedTech·
@Akshay_VAK India's biggest issue will be getting its top talent to stay; this is nothing new. Everything else (infrastructure, quality of life, pollution, medical care) revolves around this. You can't build the genius businesses when all your geniuses opt out of working in the country.
0 replies · 0 reposts · 1 like · 18 views
Ernie Aase@SanctifiedTech·
@capitalspidey This is so pathetic. "He doesn't represent America"? Just say you hate our country, grow some balls.
0 replies · 0 reposts · 0 likes · 20 views
Ernie Aase@SanctifiedTech·
@emollick The best personality is one that adjusts to enhance whatever you're trying to prompt it into doing (since it predicts off its own responses in the context window too).
0 replies · 0 reposts · 0 likes · 28 views
Ethan Mollick@emollick·
A likely incorrect lesson that the AI labs took from the Sydney/Bing debacle two years ago was that AI assistants should have any personality removed. I suspect personality not only makes AI pleasant to use (e.g. DeepSeek, Claude) but also makes people understand they are fallible.
44 replies · 27 reposts · 468 likes · 65.4K views
Ernie Aase@SanctifiedTech·
Absolute fucking banger of a pitch deck coming through: "We're Building The Mega Factory of DeSci"
0 replies · 0 reposts · 0 likes · 39 views
Ernie Aase@SanctifiedTech·
@tszzl use the crutches, all of them.
0 replies · 0 reposts · 0 likes · 20 views
Ernie Aase@SanctifiedTech·
What people are missing about DeepSeek R1: even if the output from a system is 5% better, if you can't guarantee privacy, its value drops drastically for many use cases. There's a reason Anthropic advertises "Privacy-first AI that helps you create in confidence." I wouldn't let any data with a modicum of importance touch Chinese servers for a nanosecond.
0 replies · 0 reposts · 0 likes · 234 views
Ernie Aase@SanctifiedTech·
People are simultaneously worried AI will solve every issue in the world and also fail to solve an issue of nihilism that it will itself create. Are you attributing maximum problem-solving abilities to it, or are you not? Be consistent. This is unclear thinking. AGI. AI. ASI.
0 replies · 1 repost · 0 likes · 49 views
Ernie Aase@SanctifiedTech·
Either AI masters psychology or it doesn't. If it doesn't -> no universal solution -> no universal nihilism problem. If it does -> full understanding of meaning-making -> no nihilism problem. Your pessimism has no grounds.
0 replies · 0 reposts · 0 likes · 36 views
Ernie Aase@SanctifiedTech·
@nealkhosla Well said, and strongly agree. Taking the numbers at face value is naive. Perhaps they're not a flexible enough power structure to withstand what AGSI would mean. We usually model the CCP as benefiting from AI; going to have to revisit this premise.
1 reply · 0 reposts · 1 like · 1K views
Neal Khosla@nealkhosla·
For everyone up in arms: 1. I'm not my father. 2. I'm not saying open-source AI is bad. 3. It's a great model. All I'm saying is this is CCP propaganda + strategic undercutting, as they do in every high-value industry (EVs, drones, solar, etc.) x.com/avichal/status…
Avichal - Electric ϟ Capital@avichal

What's more likely? 1 - A small group of AI engineers at @deepseek_ai figures out how to beat all of the top researchers in the world as a side project. 2 - The Chinese government has 100k GPUs they shouldn't have and releases open-source models claiming $6M training cost as a psyop.

237 replies · 27 reposts · 436 likes · 309.4K views
Neal Khosla@nealkhosla·
DeepSeek is a CCP state psyop + economic warfare to make American AI unprofitable. They are faking that the cost was low to justify setting the price low, hoping everyone switches to it and damages AI competitiveness in the US. Don't take the bait.
2.2K replies · 421 reposts · 6.1K likes · 4.4M views