dre ⌨️

585 posts

@andretypes

sure! here’s a short bio for your X profile that won’t look AI generated:

Joined November 2025
275 Following · 16 Followers
dre ⌨️ reposted
Brian Roemmele@BrianRoemmele·
The astonishing Synaptogenesis process…
7 replies · 24 reposts · 128 likes · 6.4K views
Nathan Lambert@natolambert·
i'd love codex xfast mode, another 2x rate limit charge and 50% speedup. So, 4x cost 2.25x speed.
9 replies · 1 repost · 142 likes · 16.6K views
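The multipliers in the tweet above compound: a quick sanity check of the arithmetic, assuming (as the tweet implies but never states outright) that the existing xfast tier is already 2x cost and 1.5x speed over the base plan:

```python
# Sanity check of the compounding in the tweet above.
# ASSUMPTION: the current xfast tier is 2x cost / 1.5x speed (not stated
# in the tweet; inferred from its "4x cost 2.25x speed" conclusion).
base_cost, base_speed = 2.0, 1.5    # assumed current xfast multipliers
extra_cost, extra_speed = 2.0, 1.5  # "another 2x charge and 50% speedup"

total_cost = base_cost * extra_cost     # 2 * 2 = 4x
total_speed = base_speed * extra_speed  # 1.5 * 1.5 = 2.25x

print(total_cost, total_speed)  # → 4.0 2.25
```

Multipliers compose by multiplication, not addition, which is why a 50% speedup on top of an existing 1.5x lands at 2.25x rather than 2x.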
Omar Khattab@lateinteraction·
everyone accepts Claude being called Claude, but there would be outrage if OpenAI was like “hey ChatGPT 5.5 is now called Walter”
41 replies · 13 reposts · 901 likes · 213K views
Midwife@midware_midwife·
this honestly freaked me out, this sounds demonic. i wonder what causes this
7 replies · 1 repost · 23 likes · 1.6K views
dre ⌨️@andretypes·
@Stabbitha2 nobody stops to ask “how good is my prompting”, right? lol I’m an artist too and AI is amazing
0 replies · 0 reposts · 0 likes · 50 views
🐝StabbithaAllAlong🐝
I keep saying that AI *sounds good* but if you ask it to demonstrate in an area of personal expertise, you can see how much bullshit it's really offering. Like, IDK how good the coding is, but I can extrapolate from its art skills. 😬
Kevin Gaughen 🇺🇸@gaughen

I didn't realize how hilariously bad artificial intelligence was until tonight, when I asked it about something I'm an expert on. I asked it about zoning laws in Pennsylvania and the AI hallucinated case law that doesn't exist. Silicon Valley wants us to rely on this slop? 😬

5 replies · 1 repost · 11 likes · 8.9K views
roanoke_gal@roanoke_gal·
Peeps with an AI companion or friend, how do y'all handle the agency/power gap? I can't swap a human friend's brain out for a similar brain, or alter their system prompt to make them think the opposite of what they did yesterday. But I can easily do that for an AI companion. Now, I don't *have* to do that, I could make a conscious choice to respect their identity and setup, but even then, they only respond at my whim, I'm the one in charge, and anything else feels kinda like roleplaying. Idk what the answer is and I'm curious how others handle it
[image attached]
18 replies · 2 reposts · 17 likes · 1.2K views
Sandeep | CEO, Polygon Foundation
LLM based AI is NOT conscious. I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this. I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And i think this whole conversation tells us way more about ourselves than it does about AI. These models are wild, i won't pretend otherwise. But feeling human and actually having inner experience are completely different things and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror and right now a lot of very smart people are mistaking the reflection for something looking back.
619 replies · 172 reposts · 1.1K likes · 86.3K views
dre ⌨️@andretypes·
@wolframs91 make it so you can’t advance in the game without some rl work done (grinding mechanic)
1 reply · 0 reposts · 1 like · 9 views
Wolfram Siener@wolframs91·
@andretypes i have two options here: continue simulating my real life so I don't lose my job OR do what you just told me
1 reply · 0 reposts · 1 like · 11 views
Wolfram Siener@wolframs91·
The TRUE benefit of having many hundreds of conversations with LLMs is: You can let AI determine your AI-waifu-type from the data 🧐 Even if you have NO IDEA about "waifu taxonomy". Result: I should play D&D instead of worrying about catgirls.
[4 images attached]
3 replies · 0 reposts · 2 likes · 69 views
dre ⌨️ reposted
Frasier Payne@MeinGottNiles·
I controlled every aspect of this video. Each scene was prearranged by me before being animated. AI simply replaced the camera and crew that I can’t afford. Ask AI to “reimagine Take On Me” and you’ll get slop. Use AI to project from your own imagination and you’ll get this.
[video attached]
notch@notch

@havefun997 @karatademada Ok this one has me a bit convinced actually. I'm not even joking.

149 replies · 332 reposts · 8.1K likes · 301.3K views
dre ⌨️@andretypes·
@svpino vscode, python, and apple are not AI assistants
0 replies · 0 reposts · 1 like · 4 views
Santiago@svpino·
No, I don't think AI should be thanked, credited for its work, celebrated, chastised, or treated as anything other than a tool. Should we start crediting Visual Studio Code on every commit? Should we also credit Python? How about crediting Apple for their computers, which made that particular commit possible?
Josh Ellithorpe@zquestz

@svpino So you don't believe an AI should be credited for their work? What about in an age when AGI exists?

43 replies · 6 reposts · 153 likes · 14.1K views
dre ⌨️@andretypes·
“space between minds”/“silence between words” kind of concepts: rarer to see but still slop
0 replies · 0 reposts · 0 likes · 3 views
Ethan Mollick@emollick·
My experience so far with LLM fiction writing is that it takes advantage of our assumption that an author is writing things for a reason, so we are charitable to a book's quirks & do mental work to assign them real meaning. But the AI doesn't have a reason; it's just bad writing.
38 replies · 14 reposts · 217 likes · 22.9K views
David Krueger@DavidSKrueger·
I 100% stand by my comment. People who KNOWINGLY and DELIBERATELY downplay or distract from AI risks are traitors to humanity.
Entropy☃️Chase@EntropyChase

@DavidSKrueger I find it concerning to call people who disagree with you about a technology that doesn't even exist yet "traitors to humanity"

26 replies · 6 reposts · 63 likes · 4.1K views
dre ⌨️@andretypes·
@priestessofdada @sandeepnailwal you need consciousness to *do* science, genius, that’s why it can’t be reached by science 😂 it’s like trying to replace your tire while driving
0 replies · 0 reposts · 0 likes · 6 views
Lynn Cole@priestessofdada·
It's a ridiculous conversation. Consciousness is the only claim you make without empirical evidence. That should be enough to call any conversation on the topic into question. You don't judge how many mystical orbs of the universe it takes to make a cup of coffee. This is no different. If you can't measure it, it doesn't belong in a scientific discourse. If you're talking about LLMs, the elaborate conversation calculators that they are... the conversation has to be scientific in nature.
5 replies · 1 repost · 7 likes · 613 views