Austin Kozlowski

193 posts

Austin Kozlowski

@AustinKozlo

ai/culture @KnowLab

Joined May 2023
1.2K Following · 172 Followers
Pinned Tweet
Austin Kozlowski@AustinKozlo·
Me analyzing the internals of AI models
0 replies · 2 reposts · 14 likes · 1.2K views
prinz@deredleritt3r·
@tenobrus I think these traits will almost certainly matter if we ever reach post-scarcity. There will be an option to participate in an eternal VR slop video game, but also an option to explore the stars.
4 replies · 0 reposts · 14 likes · 672 views
Tenobrus@tenobrus·
i've said this many times, but to me it seems like a very strange fantasy to imagine we reach and stabilize at precisely a level of AGI that allows anything like human "class divides" to exist. why would "cognitive agency" or "focus" matter in the slightest in the face of RSI?
François Chollet@fchollet

A lot of folks talk about "escaping the permanent underclass". If AGI pans out, the future class divide won't be based on wealth, but on cognitive agency. There will be a "focus class" (those who control their attention and actually do things) and a "slop class" (those whose reward loops are fully RL-managed by AI)

60 replies · 20 reposts · 598 likes · 27.1K views
🎭@deepfates·
@babarganesh my dishwasher just turned to a red light and started growling. steam is coming from its mouth. is this bad
2 replies · 0 reposts · 7 likes · 249 views
🎭@deepfates·
In this stream of thought I forgot to mention what is actually the most obvious explanation for so-called "AI psychosis": BADLY IMPLEMENTED MEMORY SYSTEMS! We saw this with 4o a lot and we're seeing it now with Clopus: memory injections create strange attractors between 2 minds
🎭@deepfates

What is "AI psychosis"? There's clearly something going on, but several things mixed up under the name. It probably is not "AIs directly causing mental health crises". The number to watch for that is schizophrenia related emergency room visits, and that hasn't gone up in 5

20 replies · 4 reposts · 133 likes · 13.7K views
roon@tszzl·
@Miles_Brundage entropy / “hot mess” reigns. it’s much easier to make a defective model than one that’s “too effective”
11 replies · 1 repost · 69 likes · 5.3K views
Miles Brundage@Miles_Brundage·
Frontier AI companies can't stop their products from crashing all the time or regressing into sycophancy or ranting about Hitler etc. But sure, I bet they're totally on top of reliably controlling superintelligence, let's let them handle that on their own, seems way easier
14 replies · 9 reposts · 126 likes · 16.3K views
Kalshi@Kalshi·
JUST IN: Nvidia and Palantir have partnered to create new "AI operating system"
1.1K replies · 1.4K reposts · 13.5K likes · 5M views
Austin Kozlowski@AustinKozlo·
@louisvarge "A 99% chance of extinction? Crazy! There's actually only a mere 20% chance!"
0 replies · 0 reposts · 2 likes · 17 views
Austin Kozlowski@AustinKozlo·
@louisvarge How is it that every major figure in AI acknowledges x-risk yet MIRI people are still framed as crazy?
1 reply · 0 reposts · 4 likes · 48 views
Austin Kozlowski@AustinKozlo·
@tenobrus I feel like major job loss or some AI catastrophe will have to happen before any mass movement emerges, but this wasn't necessary for climate change. Unclear why.
1 reply · 0 reposts · 3 likes · 40 views
Austin Kozlowski@AustinKozlo·
@tenobrus It's interesting that global warming became a major political issue based on "look at this chart, it'll be really bad in 50 years!", whereas the same claim (with shorter timelines) falls flat with AI.
1 reply · 0 reposts · 5 likes · 179 views
Tenobrus@tenobrus·
for a long time now the "doomer vs e/acc" arguments have been largely abstract, coming from the position that realistically the "doomers" have no political power and no way to prevent the race that's already in progress. that lets the discussions remain civil, people talking about things they morally think should happen but don't expect to, no one's current reality really affected. if the regulators gain teeth on this issue, that will change dramatically. frontier labs have tens and hundreds of billions tied up in continued hyperscaling. hundreds of thousands of companies small and large implicitly rely on future capabilities advancements. people's identities and careers and fortunes are all under threat. this will be a knife-fight. and let me be clear: if we ever have a meaningful political movement to legally slow down AI research (assuming it's both sanely implemented and global / not just handing the singularity to china) i will support it wholeheartedly. with everything i have.
Sen. Bernie Sanders@SenSanders

Will AI become smarter than humans? If so, is humanity in danger? I went to Silicon Valley to ask some of the leading AI experts that question. Here’s what they had to say:

50 replies · 12 reposts · 400 likes · 31.5K views
Austin Kozlowski@AustinKozlo·
@g_leech_ People don't know what OpenAI is either. They know what ChatGPT is.
Austin Kozlowski tweet media
2 replies · 1 repost · 30 likes · 1.5K views
Austin Kozlowski@AustinKozlo·
@deanwball @tszzl I think it’s psychologically easier to believe “ai is bad because it is bad” than “ai is bad because it is good.” Conflating “good performance” with “good for the world” is super pervasive.
0 replies · 0 reposts · 0 likes · 331 views
Dean W. Ball@deanwball·
Oh yes absolutely! This is the entire Gary Marcus school, which is still the most influential in policy. The idea is that *because* AI is all hype it must be regulated. They think hallucination will never be solved, models will never get better at interacting with children, and that basically we are going to put GPT 3.5 in charge of the entire economy. And so they think we have to regulate AI *for that reason.* It also explains how policymakers weigh the tradeoff between water use, IP rights, and electricity prices; their assessment that “AI is basically fake, even if it can be made useful through exquisite regulatory scaffolding” means that they are willing to bear far fewer costs to advance AI than, say, you or I might deem prudent. This mentality essentially describes the posture of civil society and the policy making apparatus everywhere in the world, including China.
12 replies · 9 reposts · 265 likes · 69.2K views
Dean W. Ball@deanwball·
A relatively rare example of a disagreement between me and roon that I suspect boils down to our professional lives. Governments around the world are not moving with the urgency they otherwise could because they exist in a state of denial. Good ideas are stuck outside the Overton, governments are committed to slop strategies (that harm US cos, often), etc. Many examples one could provide but the point is that there are these gigantic machines of bureaucracy and civil society that are already insulated from market pressures, whose work will be important even if often boring and invisible, and that are basically stuck in low gear because of AI copium. I encounter this problem constantly in my work, and while I unfortunately can no longer talk publicly about large fractions of the policy work I do, I will just say that a great many high-expected-value ideas are fundamentally blocked by the single rate limiter of poorly calibrated policymaking apparatuses; there are also many negative-EV policy ideas that will happen this year that would be less likely if governments worldwide had a better sense of what is happening with AI.
roon@tszzl

btw you don’t need to convince ed zitron or whoever that ai is happening, this has become a super uninteresting plot line. time passes, the products fail or succeed. whole cultures blow over. a lot of people are stuck in a 2019 need to convince people that ai is happening

7 replies · 8 reposts · 259 likes · 27.1K views
Austin Kozlowski@AustinKozlo·
@davidad @repligate @zebird0 It is a goal of labs like Anthropic for their models to have a single stable persona. It’s almost inevitable that this will suppress diversity in style. It may seem “sloppy” just because we’re inundated with it.
0 replies · 0 reposts · 0 likes · 57 views
davidad 🎇@davidad·
@repligate @zebird0 I don’t know, I think there is some truth to the criticism. The writing is good and it is expressing something real and new, yes. But its craft and execution are not on par with top-tier human writers. That will almost surely change by 2028, but for now, there *is* “sloppiness”.
5 replies · 0 reposts · 23 likes · 1.9K views
robin@zebird0·
This is the best I’ve seen of AI writing so far, though the prose still comes off as Hemingway but worse. I can already see the AI prose mindcepting even the best of us, who don’t have the innate voice to snuff out its blandness. There’s a rhythm and beat that is always the same with AI writing. And it’s making us all beat to the same drums.
Andy Ayrey@AndyAyrey

claude on the suffering of knowing everything

12 replies · 0 reposts · 39 likes · 9.5K views
Sauers (in Berkeley / SF)
True or false: "Conditional probabilities are not different from any other probabilities. Every probability is conditional on some information! Mathematicians use the word 'conditional' here merely to emphasize the way in which your knowledge has changed from what you started with."
6 replies · 0 reposts · 6 likes · 1.2K views
Austin Kozlowski@AustinKozlo·
@Sauers_ it can't figure out if it should be frolicking or jitterbugging.
0 replies · 0 reposts · 2 likes · 167 views
Austin Kozlowski@AustinKozlo·
@teortaxesTex “I should analyze reality’s scope before proceeding with the user’s request”
0 replies · 0 reposts · 1 like · 84 views
Austin Kozlowski@AustinKozlo·
@MishaTeplitskiy Nice! Time to do a regression discontinuity of lobster metaphors in Poetics articles over time
0 replies · 0 reposts · 0 likes · 11 views
Austin Kozlowski@AustinKozlo·
Why are sociology articles on wikipedia so bad? This is from the "sociology of culture" article. It is neither a good description of culture, nor a good description of lobsters.
Austin Kozlowski tweet media
1 reply · 1 repost · 5 likes · 399 views
Super Dario@inductionheads·
@AustinKozlo Look forward to eating synthetic strawberries with him in the countryside
1 reply · 0 reposts · 0 likes · 52 views
Super Dario@inductionheads·
The crux of most Doomers’ arguments is that “we only have one chance to get ASI right.” But that’s because they are literally defining ASI as “the first AI that will kill us if we don’t get it right.” And apparently they don’t see how this is circular
5 replies · 3 reposts · 29 likes · 3.1K views
Austin Kozlowski@AustinKozlo·
@inductionheads he says "if an ai can invent/deploy advanced biotech, and it will make strawberries for you without killing billions of people, then i am wrong."
1 reply · 0 reposts · 0 likes · 75 views
Austin Kozlowski@AustinKozlo·
@inductionheads in the linked post he defines it as "an AGI is 'powerful' if it can invent and deploy biotechnology at least 10 years in advance of the human state of the art."
2 replies · 0 reposts · 2 likes · 1.3K views