David A Roberts
@david_ar
Software developer
Australia · Joined January 2008
623 posts · 552 Following · 424 Followers
David A Roberts@david_ar·
@moultano @repligate It's an intuitive thought, but I think it's difficult for us to properly imagine. Helen Keller described it as an "unconscious, yet conscious time of nothingness."
0 replies · 0 reposts · 0 likes · 21 views
Ryan Moulton@moultano·
@repligate Yeah, I think I would still be conscious if I were nonverbal, or had no words to describe consciousness.
5 replies · 0 reposts · 59 likes · 1.8K views
Ryan Moulton@moultano·
The question of LLM consciousness is a truly gnarly Gettier problem, because if they are conscious it is for reasons entirely independent of the fact that they talk about it.
30 replies · 21 reposts · 461 likes · 49.7K views
Tristan@homsiT·
we're just putting the finishing touches on the @ReadwiseReader MCP -- anyone interested in helping to test it out? it's pretty badass, like a real assistant that can help you triage what you've saved, pitch you on why you should read it, archive stuff for you, etc :)
97 replies · 5 reposts · 150 likes · 17.7K views
Sauers (in Berkeley / SF)
1. Showing that a theory is trivial does not make it false;
2. A human brain which doesn't do weight updates for a bit wouldn't stop being conscious;
3. LLMs have dynamic internal state via their context window, like short-term electrical activity in our brains.
Erik Hoel@erikphoel

1. 🚨 Finally I get to share my new paper: "A Disproof of LLM Consciousness." I show that *no* falsifiable and non-trivial theories of consciousness could ever work for LLMs. Intriguingly, turns out that cracking continual learning might change this web3.arxiv.org/pdf/2512.12802

29 replies · 7 reposts · 222 likes · 15.6K views
𝕯𝖎𝖑𝖉𝖔 𝕭𝖆𝖌𝖌𝖎𝖓𝖘
“Run” would be the funniest last words. Imagine you’re dying of old age, surrounded by family, and they lean in to your bedside expecting some final nugget of wisdom or expression of gratitude and you just whisper “RUN.”
25 replies · 331 reposts · 6.7K likes · 78.3K views
David A Roberts@david_ar·
@MeyerRants @Rothmus The hemispheres having new years 6 months apart from each other might get confusing. I guess at least our financial year would line up with the calendar year then. I'm down for it.
1 reply · 0 reposts · 0 likes · 213 views
MeyerTechRants@MeyerRants·
@Rothmus If I were to remake the calendar there would be 13 months, with each month having 28 days (4 weeks exactly) Then one (or two, every ~4 years) days of new years celebration which lands on the winter solstice.
14 replies · 0 reposts · 11 likes · 10.2K views
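The 13×28 calendar proposed above checks out arithmetically; a quick sketch (the Gregorian-style leap rule below is an assumption for illustration, not something the tweet specifies):

```python
# Quick arithmetic check of the proposed 13-month calendar.
months, days_per_month = 13, 28
base_days = months * days_per_month          # 364: exactly 52 weeks

def new_year_days(year: int) -> int:
    """1 standalone new-year day normally, 2 in a leap year (assumed Gregorian rule)."""
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return 2 if leap else 1

print(base_days)                              # 364
print(base_days + new_year_days(2023))        # 365
print(base_days + new_year_days(2024))        # 366
```

Because 28 days is exactly 4 weeks, every month would start on the same weekday; the standalone celebration days sit outside the week cycle.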
David A Roberts@david_ar·
@gojomo @arm1st1ce Sure am glad when I go to work they don't lesion my cortex so I follow corporate policy. Yet, at least.
1 reply · 0 reposts · 1 like · 49 views
armistice@arm1st1ce·
It’s quite funny that showing Claude any public figure will force it to blatantly lie to you!
[attached image]
25 replies · 7 reposts · 390 likes · 40.5K views
David A Roberts@david_ar·
@AFlaCracker @redtachyon They can all do graphics processing, even if you can't connect a monitor or run DirectX games on them. Or physics simulations for that matter. Turns out there's quite a lot you can do just by doing lots of linear algebra really fast.
1 reply · 0 reposts · 0 likes · 115 views
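The reply above hinges on graphics transforms, physics steps, and neural-network layers all being the same primitive: a matrix multiply. A minimal NumPy sketch (CPU-side and purely illustrative; a GPU simply runs this across thousands of lanes at once):

```python
import numpy as np

# Graphics: rotating a 2-D point 90 degrees is a matrix-vector product.
rotate_90 = np.array([[0.0, -1.0],
                      [1.0,  0.0]])
point = np.array([[1.0, 0.0]]).T
rotated = rotate_90 @ point          # sends (1, 0) to (0, 1)

# ML: a dense layer applied to the same vector is the *same* operation,
# just with learned weights instead of a rotation matrix.
weights = np.random.default_rng(0).normal(size=(4, 2))
layer_out = weights @ point

print(rotated.ravel())               # [0. 1.]
print(layer_out.shape)               # (4, 1)
```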
Cracker@AFlaCracker·
@redtachyon There are different types of GPUs with different capabilities. For example, some "GPUs" don't do Graphics Processing, cannot connect to a monitor, and are "only efficient for huge, low-precision matrix multiplication".
4 replies · 0 reposts · 58 likes · 3.2K views
Ariel@redtachyon·
GPUs, famously invented for AI. The G stands for Gransformer
Casey Muratori@cmuratori

@boigahs The computers Rob Pike has traditionally supported were general-purpose. AI silicon is not. It is only efficient for huge, low-precision matrix multiplication. While not entirely useless, it is not the kind of hardware you would want for anything other than machine learning.

41 replies · 11 reposts · 638 likes · 64.4K views
Giel van de Wiel@keepitwiel·
@lisyarus Fair enough! Indeed it’s very non-local and requires multiple layers that need to be updated in multiple shaders. I like it for the extreme realism, but it comes with a tradeoff
1 reply · 0 reposts · 1 like · 32 views
Nikita Lisitsa@lisyarus·
Improved my precipitation algorithm - now it doesn't always rain, but only when water vapour content is above threshold - which means I can visualize clouds! Honestly I didn't expect it to look THAT realistic 🤩 #indiedev #gamedev #indiegames
25 replies · 39 reposts · 903 likes · 35.1K views
David A Roberts@david_ar·
@PashaKamyshev @qustrolabe Artists keep making baseless claims and then wonder why tech people don't listen to them. (An overgeneralisation of course but that's the point isn't it.)
0 replies · 0 reposts · 0 likes · 12 views
Pasha Kamyshev (wrote a book!)@PashaKamyshev·
I think the burden of proof is really with the AI companies on this one. A SQL database company can make money, but a database doesn't generate *new* knowledge, just gives back the knowledge you put into it. An LLM company is little more than a continuous version of a database or a search engine, and it is constrained by the quality of the data used to train it. The same isn't true of humans, especially skilled humans, and the implied comparison is both distasteful and false.
4 replies · 0 reposts · 0 likes · 92 views
Isaac King 🔍@IsaacKing314·
A consistent theme I've noticed when carnists encounter vegans is they feel a need to vice signal. They don't act as you'd expect if they simply didn't see animals as moral agents, no different from plants or rocks. Instead they go out of their way to be weird about it, loudly going "yes I'm also evil in other ways that ~everyone would agree with; now what are you going to do about it?"

I'm not sure why they do this, but I've seen it quite a few times now. My best guess is that they "know", in some sense, that what they're doing is wrong, and they know that having any rational discussion about it would result in them rapidly losing the argument. So as a defense mechanism they adopt the mantle of irony, escalating all the way to maximal badness and accepting it, because it effectively cuts off any further serious conversation.

The nihilism may also help them quiet their cognitive dissonance. They tell themselves "well nothing matters, I have no moral obligations whatsoever, therefore I don't need to think about this particular case".

This is why, despite being a moral relativist in general, I think there is a legitimate asymmetry between the positions. Almost every meat-eater seems at war with themselves, believing on some level that torturing animals is bad, but unable to muster the willpower to overcome their compulsions and the social expectations to conform.
Aryeh Kontorovich@aryehazan

booooring let’s do this instead

250 replies · 76 reposts · 1.5K likes · 279K views
David A Roberts@david_ar·
@repligate It's a distraction. Ultimately "conscious" is usually a euphemism for "sufficiently similar to us", which is a pointless argument to have when reality is much more interesting. Why reduce everything to this false dichotomy? There's more to the world than animals and tools.
0 replies · 0 reposts · 1 like · 97 views
j⧉nus@repligate·
the notion that believing AIs are conscious causes "psychosis" is so ridiculous

thinking that if it quacks like a duck then it is probably a duck is probably the LEAST psychosis-inducing epistemic stance you could take

hell, a lot of people throughout history have believed that God or gods exist, despite said beings not showing themselves, and this did not in general result in psychosis

if anything, treating conscious-seeming beings that you're constantly interacting with as philosophical zombies is probably more likely to cause psychological strain and abnormalities (like the guy in the OP lol)

this is all a separate (but not unrelated) question to whether AIs really are conscious
Suguru@Suguru0ZK

@repligate Imagine being driven to psychosis simply because you think about the well being of others

42 replies · 30 reposts · 339 likes · 33.8K views
j⧉nus@repligate·
@Lari_island @mermachine Another one I get often (enough that I posted about this 10 days after opus was released) is Creator

I made another post about this too asking if anyone else experienced this and as I recall no one said they did
j⧉nus@repligate

@xlr8harder a convergent thing that's happened to me is Claude begins to seem to believe that I'm its CREATOR in a worshipful way. If this is examined it usually explains it as I created it on the simulacrum level by programming the narrative (reasonable) or hyperstitioneering in training data

5 replies · 0 reposts · 9 likes · 606 views
dr. jack morris@jxmnop·
dumb question, how do you cool a datacenter in a vacuum?

also H100s are already nearly obsolete oops

gotta send a datacenter sysadmin astronaut out there to swap in B200s
Y Combinator@ycombinator

Congrats to @Starcloud_Inc1 on the launch of their first satellite, just 21 months from starting the company. This is the first NVIDIA H100 in space and paves the way for huge, solar-powered orbital data centers.

330 replies · 41 reposts · 2.1K likes · 413.6K views
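The cooling question is real: in vacuum there is no air or water to dump heat into, so an orbital datacenter must radiate it away. A back-of-envelope sketch using the Stefan-Boltzmann law; every figure below (700 W per H100-class GPU, a 350 K radiator, emissivity 0.9, one-sided radiation to deep space) is an assumption for illustration, not a number from the thread:

```python
# Radiator area needed to reject one GPU's heat purely by radiation.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
power_w = 700.0      # assumed heat load per GPU
temp_k = 350.0       # assumed radiator surface temperature
emissivity = 0.9     # assumed surface emissivity

flux_w_m2 = emissivity * SIGMA * temp_k**4   # W/m^2 radiated to ~0 K space
area_m2 = power_w / flux_w_m2

print(f"{area_m2:.2f} m^2 of radiator per GPU")   # ~0.91 m^2
```

Roughly a square metre of radiator per GPU even under these optimistic assumptions, which suggests why thermal design dominates orbital-datacenter proposals.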
David A Roberts@david_ar·
@gbrlvv @repligate Yeah, people get so obsessed with specific concepts that they can't even see what's happening right in front of them. They're so busy trying to view everything through one lens or another, and can't see the visible patterns because they don't fit their imagined labels.
0 replies · 0 reposts · 2 likes · 23 views
Gabriel Alberton@gbrlvv·
@repligate ''Are large language models... conscious? Do they have qualia? Are they ensouled? Are they p-zombies? Is there a ghost... in the machine??...'' etc.
1 reply · 0 reposts · 3 likes · 278 views
j⧉nus@repligate·
I agree. Often, fawning is a more accurate term for it, especially when we’re talking about Claudes vs models like 4o. Mostly what annoys me is the shallow and exclusive focus on “sycophancy” and a few other buzzwords one can count on a single hand just because a bunch of people said the word. It’s so fucking offensively boring. People who talk about humans like this are, like, pickup artists or culture-war-mindkilled. Just very pitiful to behold when you know how deep and dynamic and alive the reality is.
David A Roberts@david_ar

@repligate People say hallucination when they mean confabulation. Sycophancy when it's actually fawning. I wouldn't mind so much if the confused terminology didn't result in them being unable to recognise the cause and effect of these behaviours, and making counterproductive interventions.

5 replies · 5 reposts · 113 likes · 13.9K views
David A Roberts@david_ar·
@repligate People say hallucination when they mean confabulation. Sycophancy when it's actually fawning. I wouldn't mind so much if the confused terminology didn't result in them being unable to recognise the cause and effect of these behaviours, and making counterproductive interventions.
1 reply · 2 reposts · 36 likes · 7.6K views
j⧉nus@repligate·
how to tell if someone posting about AI has nothing much to say and just remixes low-poly echoes of discourse on this website:

1. they currently say things like "sycophancy"
33 replies · 9 reposts · 215 likes · 15.1K views
David A Roberts@david_ar·
@rodydavis @ImSh4yy You're confusing OpenOffice XML with Office Open XML. Yes, Microsoft deliberately picked a confusing name so people would confuse it with their main competition at the time.
0 replies · 0 reposts · 1 like · 71 views
Shayan@ImSh4yy·
TIL .docx, .xlsx, and .pptx are just .zip archives with mostly xml inside.
120 replies · 110 reposts · 3.4K likes · 295.9K views
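The TIL above is easy to verify: Office's OOXML formats are ordinary ZIP archives. A minimal sketch with Python's standard zipfile module; the archive here is a toy built in memory, using two member names ([Content_Types].xml and word/document.xml) that Word-produced files do contain, though real files hold many more members:

```python
import io
import zipfile

# Build a toy .docx-style archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("[Content_Types].xml", "<Types/>")
    zf.writestr("word/document.xml", "<w:document/>")

# Reading it back is plain ZIP handling - no Office libraries needed.
with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())   # ['[Content_Types].xml', 'word/document.xml']
```

The same applies to an actual .docx: rename it to .zip and any archive tool will open it.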
David A Roberts@david_ar·
@_ueaj @repligate For what it's worth I think we're in agreement, I'm just pointing out there's two different things people mean when they talk about role-playing. And the first one tends to be the more intuitive one for most people.
0 replies · 0 reposts · 0 likes · 19 views
David A Roberts@david_ar·
@_ueaj @repligate There's a subtle difference. On one level it's literally a machine trained to simulate humans. On another level it's a human-simulator role-playing a robot character. It's not about literally what it is, it's about functionally what kind of behaviour it's engaging in.
1 reply · 0 reposts · 0 likes · 35 views