dre ⌨️
@andretypes
578 posts

sure! here’s a short bio for your X profile that won’t look AI generated:

Joined November 2025
273 Following · 16 Followers
dre ⌨️@andretypes·
@wolframs91 make it so you can’t advance in the game without some rl work done (grinding mechanic)
Wolfram Siener@wolframs91·
@andretypes i have two options here: continue simulating my real life so I don't lose my job OR do what you just told me
Wolfram Siener@wolframs91·
The TRUE benefit of having many hundreds of conversations with LLMs is: You can let AI determine your AI-waifu-type from the data 🧐 Even if you have NO IDEA about "waifu taxonomy". Result: I should play D&D instead of worrying about catgirls.
dre ⌨️ retweeted
Frasier Payne@MeinGottNiles·
I controlled every aspect of this video. Each scene was prearranged by me before being animated. AI simply replaced the camera and crew that I can’t afford. Ask AI to “reimagine Take On Me” and you’ll get slop. Use AI to project from your own imagination and you’ll get this.
notch@notch

@havefun997 @karatademada Ok this one has me a bit convinced actually. I'm not even joking.

dre ⌨️@andretypes·
@svpino vscode, python, and apple are not AI assistants
Santiago@svpino·
No, I don't think AI should be thanked, credited for its work, celebrated, chastised, or treated as anything other than a tool. Should we start crediting Visual Studio Code on every commit? Should we also credit Python? How about crediting Apple for their computers, which made that particular commit possible?
Josh Ellithorpe@zquestz

@svpino So you don't believe an AI should be credited for their work? What about in an age when AGI exists?

dre ⌨️@andretypes·
“space between minds”/“silence between words” kind of concepts: rarer to see but still slop
Ethan Mollick@emollick·
My experience so far with LLM fiction writing is that it takes advantage of our assumption that an author is writing things for a reason, so we are charitable to a book's quirks & do mental work to assign them real meaning. But the AI doesn't have a reason, it's just bad writing.
David Krueger@DavidSKrueger·
I 100% stand by my comment. People who KNOWINGLY and DELIBERATELY downplay or distract from AI risks are traitors to humanity.
Entropy☃️Chase@EntropyChase

@DavidSKrueger I find it concerning to call people who disagree with you about a technology that doesn't even exist yet "traitors to humanity"

dre ⌨️@andretypes·
@priestessofdada @sandeepnailwal you need consciousness to *do* science, genius, that’s why it can’t be reached by science 😂 it’s like trying to replace your tire while driving
Lynn Cole@priestessofdada·
It's a ridiculous conversation. Consciousness is the only claim you make without empirical evidence. That should be enough to call any conversation on the topic into question. You don't judge how many mystical orbs of the universe it takes to make a cup of coffee. This is no different. If you can't measure it, it doesn't belong in a scientific discourse. If you're talking about LLMs, the elaborate conversation calculators that they are... the conversation has to be scientific in nature.
Sandeep | CEO, Polygon Foundation@sandeepnailwal·
LLM-based AI is NOT conscious. I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this.

I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us way more about ourselves than it does about AI. These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror and right now a lot of very smart people are mistaking the reflection for something looking back.
Joe Williams@JoeWilliams010·
@sandeepnailwal Appealing to religious metaphysical beliefs throws your whole argument into the trash. First prove those religious concepts are true before using them as the base of your argument. “What can be asserted without evidence can also be dismissed without evidence” —Hitchens's Razor
Nate Soares ⏹️
@stringking42069 If someone's like "I found a way to enhance the intelligence of my cat; it can generate novel physics contributions now; I think I can keep going until the cat is superintelligent" then I don't think "eh that's a relatively minor physics contribution" is a huge comfort.
Nate Soares ⏹️
If someone says they're trying to build a superintelligence that poses a substantial chance of ending the world, "Eh whatever; you'll probably fail" stops being a good societal response at around the time the opaque machines start generating novel physics results.
OpenAI@OpenAI

GPT-5.2 derived a new result in theoretical physics. We’re releasing the result in a preprint with researchers from @the_IAS, @VanderbiltU, @Cambridge_Uni, and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions. openai.com/index/new-resu…

dre ⌨️ retweeted
Mathematica@mathemetica·
A 2-layer neural network goes from total chaos to perfectly separating left vs right classes in real time. Watch the decision boundary form live as gradient descent works its magic! Pure maths beauty in motion.
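The demo in that tweet is plain gradient descent on a tiny network. A minimal numpy sketch of the same idea (hypothetical; not the video's actual code, and the data here is a simple left-vs-right toy set I made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points, label 1 if the x-coordinate > 0 ("right"), else 0 ("left").
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

# 2-layer network: 2 inputs -> 8 hidden units (tanh) -> 1 output (sigmoid).
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)      # hidden activations, shape (200, 8)
    p = sigmoid(h @ W2 + b2)      # predicted probabilities, shape (200, 1)

    # Backward pass (gradient of binary cross-entropy w.r.t. each parameter)
    dz2 = (p - y) / len(X)        # gradient at the output pre-activation
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh = (dz2 @ W2.T) * (1 - h**2)  # chain rule through tanh
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Plotting `p > 0.5` over a grid at each step would show the decision boundary sharpening from noise into the vertical split, which is the "chaos to separation" effect the video animates.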
dre ⌨️@andretypes·
how to train your shoggoth
Tomás Bjartur@BjarturTomas·
The alien actress analogy sure does seem false when interacting with a base model. It really does feel like sampling from a distribution of "yous."