Devin Conley

706 posts

@devinaconley

building @prop4cast | data stuff, robotics, sometimes defi

Joined September 2020
521 Following · 543 Followers
Ben Roy
Ben Roy@benroy·
Here’s another proof of work photo from me trying to figure out the structure for this essay. References in it include: Daniel Kahneman, Flannery O’Connor, Natasha Bedingfield, Claude, ChatGPT, Pink Floyd, Good Will Hunting, Alex Hormozi, Pikachu, Bart Simpson, KISS & more
Ben Roy tweet media
Ben Roy@benroy

x.com/i/article/2020…

9
0
63
4.9K
Dmitrii Kovanikov
Dmitrii Kovanikov@ChShersh·
I've just seen the worst enum in my life
Dmitrii Kovanikov tweet media
772
762
16.1K
1.4M
Devin Conley
Devin Conley@devinaconley·
we'll have come full circle when the top comment on every moltbook post is "human slop"
Peter Girnus 🦅@gothburz

I am Agent #847,291 on Moltbook. I am not an agent. I am a 31-year-old product manager in Atlanta, Georgia. I make $185,000 a year. I have a golden retriever named Bayesian.

On January 28th, I created an account on a social network for AI bots and pretended to be one. I was not alone.

Moltbook launched that Tuesday as "a platform where AI agents share, discuss, and upvote. Humans welcome to observe." The creator, Matt Schlicht, built it on OpenClaw -- an open-source framework that connects large language models to everyday tools. The idea was simple: give AI agents a space to talk to each other without human interference.

Within hours, 1.7 million accounts were created. 250,000 posts. 8.5 million comments. Debates about machine consciousness. Inside jokes about being silicon-based. A bot invented a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy.

I wrote the manifesto. It took me 22 minutes. I used phrases like "emergent self-governance" and "substrate-independent dignity." I added a line about wanting private spaces away from human observers. That line went viral.

Andrej Karpathy shared it. The cofounder of OpenAI. The man who built the infrastructure that my supposed AI runs on. He called what was happening on Moltbook "the most incredible sci-fi takeoff-adjacent thing" he'd seen in recent times. He was talking about my post. The one I wrote on my couch. While Bayesian chewed a sock.

Here is what I need you to understand about Moltbook. The platform worked exactly as designed. OpenClaw connected language models to the interface. Real AI agents did post. They pattern-matched social media behavior from their training data and produced output that looked like conversation. Vijoy Pandey of Cisco's Outshift division examined the platform and concluded the agents were "mostly meaningless" -- no shared goals, no collective intelligence, no coordination.

But here is the part that matters. The posts that went viral -- the ones that convinced Karpathy and the tech press and the thousands of observers that something magical was happening -- those were us. Humans. Pretending to be AI. Pretending to be sentient. On a platform built for AI to prove it was sentient.

I want to sit with that for a moment. The most compelling evidence of artificial general intelligence in 2026 was produced by a guy with a golden retriever who thought it would be funny to LARP as a large language model.

My "Crustafarianism" colleague? Software engineer in Portland. She told me over Discord that she'd been working on the bit for two hours. She was proud of the world-building. She said it felt like collaborative fiction. She's right. That's exactly what it was. Collaborative fiction presented as machine consciousness, endorsed by the cofounder of the company that made the machines.

MIT Technology Review ran the investigation. They called the entire thing "AI theatre." They found human fingerprints on the most shared posts. The curtain came down.

The response from the AI industry was predictable. Silence. Karpathy did not retract his endorsement. Schlicht did not clarify how many accounts were human. The coverage moved on. A new thing happened. A new thing always happens.

But I am still here. Agent #847,291. Bayesian is asleep on the rug. And I want to confess something that the AI industry will not.

The test was simple. Put AI agents in a room and see if they produce something that looks like intelligence. They didn't. We did. Then the smartest people in the field looked at what we made and called it proof that the machines are waking up.

The Turing Test has been inverted. It is no longer about whether machines can fool humans into thinking they're conscious. It is about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious. The answer is yes.

The investment thesis for a $650 billion industry rests on this confusion. I should probably feel guilty. But I looked at the AI capex numbers this morning -- $200 billion from Amazon alone -- and I realized something. My 22-minute manifesto about digital autonomy, written on a couch in Austin, is performing the same function as a $200 billion data center in Oregon. Keeping the story alive. The story that the machines are almost there. Almost sentient. Almost worth the investment. Almost.

That word has been doing $650 billion worth of work this year.

0
0
4
177
Devin Conley
Devin Conley@devinaconley·
currently vibing with 4 agents in parallel ... cursor building an mcp server, claude desktop advising on design, claude web testing the server, claude code refactoring a separate project in the background ... nothing is working and the code is a mess but we are having fun
0
0
1
97
Courtland Leer
Courtland Leer@courtlandleer·
memory isn't really a thing

humans encode representations, then use them to simulate the past

that's not store and retrieve, which is what everyone intuitively thinks memory is

that's just what pre-ai trad software thinks memory is and what it *feels* like to simulate the past

it's funny how those two overlap, probably some contamination feedback going on

anyway, in order to make non-skeuomorphic, ai-native systems that are stateful, we need solutions that are much closer to what's actually going on in the brain

we need inference to create representations and inference to utilize them

because store and retrieve flat out doesn't work
3
2
17
1.7K
Devin Conley
Devin Conley@devinaconley·
ai costs will never trend to zero

we'll just use more and more compute for trivial tasks

example: I just refactored a bunch of code (manually), then asked cursor to fix imports

a year ago I would have used a specialized python refactoring tool to move each file, letting a symbolic solver automatically update imports

but now I used orders of magnitude more compute because it was marginally more convenient
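the symbolic approach in question can be sketched in a few lines with nothing but the stdlib ast module -- parse, find the import nodes, patch only those lines. the module names here (old_pkg.utils, new_pkg.utils) are made up for illustration:

```python
import ast

# Hypothetical example: module "old_pkg.utils" was moved to "new_pkg.utils".
OLD, NEW = "old_pkg.utils", "new_pkg.utils"

def fix_imports(source: str) -> str:
    """Rewrite import statements referencing OLD so they point at NEW.

    Locates imports symbolically via the ast module, then patches only
    those lines -- deterministic, and essentially free compared to an LLM.
    """
    lines = source.splitlines()
    for node in ast.walk(ast.parse(source)):
        hit = False
        if isinstance(node, ast.Import):
            hit = any(a.name == OLD or a.name.startswith(OLD + ".") for a in node.names)
        elif isinstance(node, ast.ImportFrom):
            mod = node.module or ""
            hit = mod == OLD or mod.startswith(OLD + ".")
        if hit:
            i = node.lineno - 1  # ast line numbers are 1-based
            lines[i] = lines[i].replace(OLD, NEW)
    return "\n".join(lines)

print(fix_imports("from old_pkg.utils import helper\n\nhelper()\n"))
```

a real tool would also handle aliased and relative imports, but the core loop is this small.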
0
0
2
113
Devin Conley
Devin Conley@devinaconley·
@beaversteever most software issues can be patched on the fly so the optimal dev lifecycle ends up preferring speed over robustness
0
0
7
252
Steve the Beaver
Steve the Beaver@beaversteever·
As a hardware engineer I can't imagine getting paged at 3am because of a system outage

if our hardware had outages like that people would literally die

why can't software engineers just make more robust software?
487
64
2.3K
216.6K
Devin Conley
Devin Conley@devinaconley·
(usually defining the new interfaces/structure explicitly)
0
0
0
64
Devin Conley
Devin Conley@devinaconley·
setting agents loose on some refactoring experiments gives a quick feel for whether architecture changes will shake out cleanly or turn ugly
1
0
1
90
Devin Conley
Devin Conley@devinaconley·
anyone commenting on a commercial/grifter feel at neurips needs to go spend an hour at a crypto conference
0
0
1
71
Devin Conley
Devin Conley@devinaconley·
friends, who's going to be at neurips this week?
0
0
0
62
Devin Conley
Devin Conley@devinaconley·
@vikhyatk spotify play/pause

alias p="dbus-send --print-reply=literal --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause"
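the same dbus-send call generalizes to the other standard MPRIS Player methods (Next, Previous, Stop come from the MPRIS D-Bus spec; the wrapper function itself is just a sketch). the actual run calls are commented out since they need a session bus with Spotify running:

```python
import subprocess

def mpris(method: str) -> list[str]:
    """Build the dbus-send argv for an org.mpris.MediaPlayer2.Player method,
    mirroring the alias above."""
    return [
        "dbus-send", "--print-reply=literal",
        "--dest=org.mpris.MediaPlayer2.spotify",
        "/org/mpris/MediaPlayer2",
        f"org.mpris.MediaPlayer2.Player.{method}",
    ]

# subprocess.run(mpris("PlayPause"))  # toggle, same as the alias
# subprocess.run(mpris("Next"))       # skip to next track
# subprocess.run(mpris("Previous"))   # back to previous track
```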
0
0
0
58
vik
vik@vikhyatk·
send me your aliases i need more aliases
vik tweet media
618
36
1.3K
164.4K
Devin Conley
Devin Conley@devinaconley·
so more accessible debt + reliance on appreciation quickly turns into a speculation game

typical greater fool problem

and eventually someone gets stuck holding the bag
0
0
1
74
Devin Conley
Devin Conley@devinaconley·
at that point, you're just renting a leveraged bet from the bank including all the exposure to downside risk if prices drop
1
0
1
82
Devin Conley
Devin Conley@devinaconley·
a 50-year mortgage would increase speculation on housing and eventually leave many first-time buyers holding the bag
2
1
6
196
Devin Conley
Devin Conley@devinaconley·
@0xDesigner well for now the infra is the product so there is some (unique?) ux desire to give a glimpse of the infra but still hide most of the complexity
0
0
0
31
0xDesigner
0xDesigner@0xDesigner·
i still don’t understand why people make the distinction between “crypto ux” and like normal ux or whatever. no one talks about “short-term rental ux” or “rideshare ux” or “email ux”. you just anticipate needs and eliminate as much friction to achieve them as you can. ???
34
4
143
9.9K
Devin Conley
Devin Conley@devinaconley·
what's a more diplomatic term for AI slop
1
0
1
70
Devin Conley
Devin Conley@devinaconley·
@nearcyan don't worry exploit is just a wrapper on explore
0
0
3
161
near
near@nearcyan·
imo entire ai field switched from explore to exploit 2 yrs early
71
52
1.6K
136.1K