Clayton Miller

4.7K posts


@claymill

Emergentist. The beauty of Fourier is not that everything can be reduced to sine waves, but that sines can form a universe of infinitely complex waveforms.

Chicago · Joined June 2009
301 Following · 2.1K Followers
Pinned Tweet
Clayton Miller@claymill·
When I claim the label of “emergentist,” this is the kind of cosmic reductionism to which I stand constitutionally opposed. “We’re all just meatbags.” “We’re all just stardust.” “We’re all just carbon atoms.” We are infinitely complex arrangements of systems built upon systems, from the quantum properties of those carbon atoms up through the proteins that make the “meat” we are so glibly reduced to, through the complexities and adaptations of mammalian bodies, up to the fearsome order of the human brain and the intricate sprawl of human society and culture. To reduce us to anything less is to deny the cosmically implausible, fearfully and wonderfully emergent singularity that is humanity.
Vivian Midha Shen@vivianmshen

i think about this story often

now we're all just meatbags

Clayton Miller@claymill·
@drunkenalpaca @Slatzism @Iithosphere But he was! In the mid-late 2000s, it was a bold counterpoint to the blind nationalism that gripped a certain segment of Americans and probably nobody else. This had become completely irrelevant by 2012 and may be a risible relic in 2026, but it was viscerally real circa 2007.
📎@Iithosphere·
new banksy artwork, a man blinded by his flag
Clayton Miller retweeted
Andy Matuschak@andy_matuschak·
My favorite interfaces as a kid had a pervading feeling of magic and mystery. A palette of totally mystifying controls, secret places waiting to be discovered.
Clayton Miller@claymill·
@dystopiangf This fence is structurally oppressive and we need to remove 👏 it 👏 now 👏
ℜ𝔞𝔢@dystopiangf·
I’m always so struck by the utter incuriosity leftist intellectuals have about why anything exists. They can spend hundreds of pages describing some “oppressive” societal structure in incredibly high resolution without ever once wondering how it came to be or why
Clayton Miller@claymill·
iPhone: use the existing Web ecosystem to bootstrap an app ecosystem

GPTphone: use the existing Android ecosystem to bootstrap a model-context and tool-endpoint ecosystem
signüll@signulll

it’s interesting that openai is building toward a phone (instead of just earbuds) cuz this is probably the first moment where that isn’t obviously insane. lots of others like fb, microsoft, amazon, & others failed because they were trying to enter the smartphone market without a new computing paradigm. i.e. they were building another phone inside apple & google’s world. that meant they needed all the same things like an app store, messaging, developers, services, hardware, support, accessories, habit migration, & consumer trust. which is basically an impossible task. but ai sorta changes the premise. for the first time in a long time, you can argue the phone itself deserves to be reconsidered from first principles. so what factors matter most? there are likely three.

first, the app store. this is where microsoft & amazon got murdered. they couldn’t create a real developer ecosystem. microsoft couldn’t even get google to build a proper youtube app. amazon had to build its own app store because it didn’t use the play store. once you don’t have the apps, you’re effectively good as dead. but ai potentially weakens this constraint. the future is likely one where agents generate lightweight interfaces on demand, call services directly, complete workflows, & collapse a lot of app specific functionality into primitives. you don’t need a weather app, a travel app, a banking app, a calendar app, a notes app, & a food app in the same way if the agent can understand intent, access context, & execute. great news!

okay next. messaging. this is the hardest one. the phone is a social object. it is the remote control for your relationships. messaging is the most valuable layer of this. that means imessage, blue bubbles, facetime, & group chats. no matter how agentic the device becomes, openai still has to deal with the fact that ppl do not casually move their social graph. you can replace apps with agents more easily than you can replace group chats. the user can choose a smarter device, but their friends & family get a vote too. apple won here because there was no one doing this when they launched the iphone at scale.

last one is ecosystem. an iphone is not just an iphone. it is the center of all personal hardware which consists of airpods, apple watch, mac, ipad, repairs, & retail stores. does openai build stores? does it build airpods competitor too? now you’re no longer just building a phone. apple has the better starting position. openai has to invent the ai native phone from scratch with all of the network effects. apple has to reinvent the iphone before someone else makes it feel like a blackberry. openai has the best shot at this than fb, microsoft, amazon ever did. esp with the talent they have. this will be very fun to watch & participate in.

Mike Solana@micsolana·
@joshdcaplan @ollieforsyth drudge is pre-"new media," and not really sure how to categorize him honestly. massively important and influential, but really in his own lane.
Ollie Forsyth@ollieforsyth·
The New Media Landscape

Attention. Distribution. Power.

Creators can now build media channels and reach millions of fans faster — and at a fraction of the cost of previous eras. We are entering a new era: the dawn of new media.

Creators are going direct. Journalists are going independent. Content is becoming timely or timeless. Brands are hiring heads of new media to stay relevant.

We mapped 270+ emerging new media channels shaping how — and from whom — people consume news, trends, startups, and technology.

Every creator and new media channel deserves attention — and a community to call home. This is our mission.

Don’t see your channel listed? Add it or explore all channels at new-media.co!

This is the era of NEW MEDIA!
Clayton Miller@claymill·
The elevators at my office have a similar problem where you scan a badge and there's some latency before it tells you which elevator to take, compounded by the fact that it's not a very intuitive mapping going from the screen to elevators that may be behind you. What I would do is add a perspective transform to the elevator bank map to address the latter problem, but also show it *immediately* with a waiting spinner so it's clear the system is in the process of assigning you an elevator.
John Carmack@ID_AA_Carmack

I was on a cruise ship last week (Star of the Seas), and they had pods of 10 elevators in a circle, where you picked your destination floor on a pad, and it directed you to the correct elevator, which was often behind you. It seemed to work efficiently, but multiple times I saw people tap their floor and just look away, conditioned for normal elevator operation, and miss the arrival of the elevator they were supposed to get on. Addressing my normal pet peeve of interaction feedback latency would have helped — with all the fades and slides, it takes over a second for the first hint of the elevator to show up, and two seconds for it to fully stabilize. That may not seem like much in some circumstances, but it is plenty of time for people to look away. The elevator letter should appear instantaneously, maybe with some festive animation around it to hold attention that was on the button press. Even better would be to add a localized audio cue from the elevator the instant you pressed the button, which would let you immediately know where it is without having to scan for the lighted letter. (the Starlink internet on the ship was excellent, allowing me to get some work in at sea)

Clayton Miller@claymill·
@morallawwithin “And that, Jean-Luc, is the real test. Not whether enough of you choose blue to save the blues. But whether you are brave enough to admit there was never a green button to begin with. The universe is red and blue, Captain. Choose… or be chosen for.”
florence 🦐🪻@morallawwithin·
There are three buttons. If at least half of everyone in the world presses blue, then everybody lives. Otherwise, everyone who voted Blue dies. If you press red, you survive for certain. Finally, you may press green to opt out, refusing to participate in these twisted games.
Clayton Miller@claymill·
As Button Discourse runs into its third day, the clearest lesson is not one of game theory, altruism, or anything so much as the role of language in shaping people’s concept of the world. You don’t need to reimagine the thought experiment as a cliff or a giant blender to change people’s intuitive response; you need only rephrase the premise:

Everyone in the world has to take a private vote by pressing a red or blue button. If everyone presses the red button, nobody dies. Anyone who presses the blue button dies unless at least 50% of people also press it. Which button would you press?
Aryeh Kontorovich@aryehazan

just stop it with the buttons really cut it out it's retarded and annoying no need for idiotic allegories just ask whatever it is that you want to ask do you value the lives of strangers more than of near kin, do you care about stupid people you're not related to, etc

Clayton Miller@claymill·
The whole thought experiment was only ever really about language at the foundational level. I’m pretty sure my rewrite would test very differently from the original: “Everyone in the world has to take a private vote by pressing a red or blue button. If everyone presses the red button, nobody dies. Anyone who presses the blue button dies unless at least 50% of people also press it. Which button would you press?”
Nemesis 2026@Nemtastic1·
@cremieuxrecueil "Nothing happens" isn't a good characterization of the red button though. Nothing happens *to you*. But there's a very small chance that you choosing the red button kills 50% of humanity.
Crémieux@cremieuxrecueil·
If this was how the buttons looked, what portion of humanity would press blue? It'd probably be a large enough number due to mistakes, the young, altruists, etc., such that it remains wise to press blue.
Clayton Miller@claymill·
It didn't muddy the waters enough having what should have been document icons as app icons for Docs/Sheets/Slides, so now only one app has a document icon for the app and the other two have app icons. Progress!
Clayton Miller@claymill·
@realEstateTrent It's the place to go for electronics that were discontinued too soon, like Apple's Airport Express (still the best way to turn an old stereo into an AirPlay 2 receiver).
StripMallGuy@realEstateTrent·
I haven’t thought about eBay in over 10 years. They’re still doing $10B+ in revenue. Serious question: Who is actually using it?
Jon Stokes@jon_stokes·
No. Absolutely not with a single word of this. There were many people well before Yud who said that AI would be important & transformative, & who have brought us to this moment -- one could obviously name Kurzweil, but SFBA is full of them... wow I could go on about that but post has a WORSE problem tho, and it's something I see again and again and again and I'm super sick of it. I thought the QC post was a useful corrective for it but here is this person literally doing the bad thing in a QT of that post.

The problem: Yud's ENTIRE contribution to this discussion is that advanced AI will definitely, and without a doubt bring about the complete and total extinction of humanity. That's it, that's the pitch, and as the OT says that has been the pitch since his brother tragically died. The doom came first and the doom is the point and has remained the point to this very day right now in 2026.

You do NOT get to bracket the imminent, definite doom out, as this pitiful, excuse-making post does, because you are embarrassed about it or disagree or really really wish he were making some argument OTHER than the argument that is the LITERAL TITLE TO HIS LATEST BOOK. You do NOT get to pretend he is merely warning about "risks" in general, and alerting people to the fact that this could go sideways in some mild but eminently survivable way like cyberattacks or jobs or whatever. And omg you do NOT get to credit him with getting people excited and hyped about AI so they want to take it seriously and work in it!! What even is that!? And this isn't even the first time I have seen this on my TL this week! Are you high?!

But back to the prior point: None of this "risks and harms" stuff is what he is saying, nor (crucially!) is it his Big Contribution to the AI discourse. Just because this was the entry point of you and your little squad into the concept of, "this AI thing will be a big deal, but maybe it isn't all upside...", you don't get to retcon Yud's message -- a bizarre, insane, toxic message that still resonates and is still doing massive amounts of psychic and economic damage -- into some reasonable warning about how AI might have some big downsides that we should all think about.

And the "alignment" research you're so high on that you credit him with -- it's a bunch of anthropomorphizing fakery and is basically mass LLM psychosis that has been enthusiastically bent by its commercial practitioners to the task of pumping up startup valuations. We'd all be better off if nobody was paid to do this entirely fake "research" -- if we could get DOGE to step in with a spreadsheet and fire /that/ group of phony academics.

At any rate, I don't know who you are anon, but you are clearly a child. Like a literal child who has read nothing and knows nothing except BDSM fan fiction and the weird, mass-hallucinatory output of some internet forum. It's not a crime to have been born intellectually in such a ghetto, but it is a total affront to act as if the world we live in now was born there because you never really managed to get out. You have embarrassed yourself, here. Delete your account.
Tenobrus@tenobrus

this hopefully won't sound like an attack on QC but maybe will be taken as one: how you handle the impending singularity is in fact entirely up to you. eliezer wrote the sequences and HPMOR to get young and smart people very interested in these problems and making sure we get the good ending. and he has turned out to be largely quite obviously right in most of the important ways. transformative artificial intelligence *is* impending, in our lifetimes. it almost certainly *is* the most important political, practical, and moral issue of our times, totally outweighing everything else. it very very likely *does* carry tremendous risks. these things have all been proved more correct with time.

to the extent that it now looks like we're in a better timeline than we could have been, to the extent that we have better alignment tools and the models seem safer, this is not purely due to luck. we did not "just get alignment by default". we got *some* of that, we got way more than eliezer predicted! but much more importantly we got a huge population of the smartest people in the world who are directly working on the most transformative technologies in the world *being very careful and doing a lot of work* to actually make alignment happen. and this is very clearly downstream of eliezer's efforts and writings. not wholly!! but clearly to a meaningful extent his cultural influence pushed towards this.

not everyone was exposed to his memetic sphere and felt immense pressure and panic and shame over the fate of the world. many of the people exposed to these concepts, who correctly determined they were largely accurate, instead now just work at anthropic, or openai. rather than having their minds broken, they decided to do something about it, and are currently doing something about it, and it's currently (to some degree) working. it is not a hell realm for them: they found a problem desperately worth working on and are working on it! they walk out in the light of day and run and laugh and dance along with the rest of us.

being crippled with indecision and panic over the weight of the world and feeling that it must rest directly on your shoulders *is not something eliezer yudkowsky told you to do*. it is not unique to lesswrong posters or effective altruists or singularitarians. many people are neurotic! many people twist themselves into horrible painful knots at all kinds of aspects of their lives, important or unimportant. most of the time it actually has very little to do with the specific ideas or subcultures they're in. it's the kind of thing they would do to themselves wherever they are, until they learn enough about themselves to stop.

now there's real truth to what QC says. singularitarianism and effective altruism *are* quite potentially totalizing ideologies, and they can have serious negative impacts on certain types of people. i don't mean this as an attack on him: i went through something very similar myself. i read lesswrong very young, starting around 13. i was pulled in by the force of HPMOR in exactly the way it now seems eliezer intended, holistically into his worldview and frame. i planned out my trajectory as a high schooler, applied to colleges with good CS programs for the purpose of getting a PhD in AI, either helping at MIRI directly or wherever else seemed useful at the time. i got into ML PhD programs, and didn't attend. i correctly determined at the time that i was depressed as fuck and that if i tried to go another 5 years stuck in a little box churning through training runs i would lose my mind. i might not survive. i decided i had to just pursue happiness instead. it broke much of my self image, the stuff i'd been working towards since my identity even started forming. i stayed depressed for a long long time. but i reached a different frame.

that's just not how morality actually works man. you should care about the child drowning in the pool next to you, you should think about global utility, you should give something to against malaria foundation. and you should care about ai safety. but you're a human!! you're a person! you *deserve* to be happy. you don't have to donate every penny you make to EA orgs! they're *not asking for that*!! the pledge is called "giving what you can" not "giving what you can't".

i didn't have it in me to give my youth and mind towards saving the world. i became a normal software engineer, i tried to build a happy life. it's okay! i gave what i could and it turned out i didn't have much more. maybe sometimes my words here help a little, maybe not. maybe one day i'll find it in me to do something harder. but the choice to place the burdens of the world on my shoulders *was mine*, not imposed by anyone else, and it was perfectly possible for me to just... stop. eliezer is mostly right about most things he says. that doesn't stop you from taking a deep breath, and hearing the birds outside, and loving those around you, and being happy. you don't need to believe false things to *be yourself and live a good life*. most people through history have lived with tremendous danger all around them, and found the joy anyway.

Midwest Antiquarian@Eric_Erins·
The “Mall” at the Park Tower Condo building in Edgewater was hosting a rummage sale. Always wondered what went on in there and it appears it really was an indoor shopping mall at one point. Most of the storefronts today seem to be offices for Lettuce Entertain You
Kitten 🐈@kitten_beloved·
Actually somebody does need to die, someone will die, it is a law of nature, and it makes us sad to watch you throw your life away in a fruitless attempt to prevent it
taoki@justalexoki

Clayton Miller@claymill·
I am probably about six hours too late for this to become the banger it was meant to be, but I’m still proud of it.