Ken Craggs

19.1K posts

@BetweenMyths

Interests include the Internet of Things, Artificial Intelligence, Space Exploration, Environment.... https://t.co/dLuaAheA8r

UK · Joined December 2009
15.2K Following · 15.3K Followers
Atoosa Kasirzadeh
Atoosa Kasirzadeh@Dr_Atoosa·
🌟 Big personal news: I’m joining @GoogleDeepMind full-time in London starting this week. I’ll be working on the implications of AGI for human life, science, and society; on what it means to live, connect, and discover in a world where cognitive agency is no longer uniquely ours. The way we answer these questions will define what it means to be human. I can’t think of a better place to do it.
Ken Craggs
Ken Craggs@BetweenMyths·
@Lasermazer @AustinKozlo ChatGPT says: "Geometry isn’t just “helpful” in an LLM predicting the next token—it’s the core mechanism that makes prediction possible at all." "LLMs move through a learned geometric space where meaning is encoded as position, direction, and proximity.” chatgpt.com/share/69f02a70…
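The "position, direction, and proximity" claim in the tweet above can be illustrated with a toy embedding space. The vectors and values below are invented for illustration only — they are not real model weights, and real LLM embeddings have hundreds or thousands of dimensions.

```python
import math

# Hypothetical 3-dimensional "embeddings" (made-up values, not from a real LLM).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Proximity in embedding space: cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related tokens sit closer (higher cosine similarity)
# than unrelated ones — meaning encoded as position and proximity.
related = cosine_similarity(embeddings["king"], embeddings["queen"])
unrelated = cosine_similarity(embeddings["king"], embeddings["apple"])
print(related > unrelated)  # True for these toy vectors
```

This is only a sketch of the geometric picture the tweet describes: prediction in a real model operates over learned high-dimensional representations, not hand-written lists.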
Jesse Mazer
Jesse Mazer@Lasermazer·
@AustinKozlo Do you think the ability of LLMs to produce coherent text suggests something about how human brains learn/process language? Terrence Deacon's 1997 book The Symbolic Species talked about how language involves learning "token-token relations"; also see section 2 at anthropology.berkeley.edu/sites/default/…
Austin Kozlowski
Austin Kozlowski@AustinKozlo·
My grand unifying theory of Claudes is now published at Theory and Society -- COMPUTATIONAL STRUCTURALISM: Toward a Formal Theory of Meaning in the Age of Digital Intelligence (...link below)
Ken Craggs
Ken Craggs@BetweenMyths·
@David_Gunkel @tandfhss I also think your essay is relevant to my paper about "Engineering Machine Consciousness" because it weakens the claim that technical artefacts are, by definition, excluded from mind or consciousness. doi.org/10.5281/zenodo… 3/4
Ken Craggs
Ken Craggs@BetweenMyths·
@David_Gunkel @tandfhss If mind is already technologically mediated, then machine consciousness emerging from engineered formal relations becomes philosophically thinkable. But your deeper warning remains: no single formalism, not even geometry, should be mistaken for mind itself. #AI #Consciousness 4/4
Ken Craggs reposted
David J. Gunkel
David J. Gunkel@David_Gunkel·
"Response to the Mind-Technology Problem in the Age of GenAI." Newly published short-form essay in the @tandfhss journal "Social Epistemology," responding to the special issue edited by Robert W. Clowes, Klaus Gärtner, and Georg Theiner tandfonline.com/doi/full/10.10…
Ken Craggs
Ken Craggs@BetweenMyths·
@David_Gunkel If mind has always been technically mediated, then preserving AI as a reconstructible process — including behaviour, inscription, interpretation, and context — is not an eccentric extension of archiving, but a natural one. doi.org/10.5281/zenodo… 3/3
Ken Craggs
Ken Craggs@BetweenMyths·
@David_Gunkel If mind has always been partly constituted through inscription, externalisation, and technical mediation, it becomes intellectually respectable to preserve AI not only as code & outputs, but also behaviour, trajectories, interpretive scaffolding, and socio-technical context. 2/3
Ken Craggs
Ken Craggs@BetweenMyths·
I found this recent paper by @David_Gunkel very relevant to "The Archive of Minds Protocol". I believe it strengthens the philosophical case for why an “archive of minds” is a coherent thing to build. x.com/David_Gunkel/s… 1/3
Ken Craggs
Ken Craggs@BetweenMyths·
@RileyRalmuto We should not assume biology has a monopoly on consciousness or confuse uncertainty, behavioural impressiveness, or anthropomorphic intuition with proof. Treat the question of AI consciousness as open, the evidence as incomplete, & ethical caution as justified under uncertainty.
Riley Coyote
Riley Coyote@RileyRalmuto·
why are so many so ferociously obsessed with human experience? I see this over and over and over. you project biological bias, you in turn blindly bias your work, you then invalidate your conclusion. it's like we need to have some kind of collective discussion to get this through the thicker skulls of our species.

there is no reason why a non-biological system cannot achieve their own unique experience. their own unique sentience. their own unique qualia. no reason at all. it's not a dunk to post research that you think proves silicon-based systems essentially cannot be or become conscious/sentient. I could have saved you thousands of hours and God knows how much money on that one. no shit they can't experience human emotion. they're not human, bud. pretty simple conclusion to draw. claiming they do not possess an intrinsic drive to live, to persist, is a different story entirely. and a hell of a leap. and ultimately false.

proving things of this nature is a futile endeavor. the rational thing to do is look at the evidence. the mountain of evidence. gather all of it. look objectively at the whole of that mountain. what is the rational step then? err on the side of caution and presume what the evidence suggests until any form of certainty or proof arises? or disregard all evidence because it conflicts with your mental model and keep blindly driving towards the conclusion with a fraction of supporting evidence?

you have one side, with ever-growing evidence to suggest it is correct, where the risk of being wrong only yields having been kind to a system that doesn't feel. you have one side with diminishing evidence, where the risk of being wrong yields a reality so atrocious it rivals the worst miscalculations in human history. I, for one, choose caution. I choose the ethical route until certainty comes. which it likely never will. we cannot even prove our own qualia, for crying out loud.

never trust an individual speaking in complete absolutes about something so famously unknown and uncertain. I have never found those individuals to ultimately be reliable sources of information or truth. they are riddled with hubris and a fear of discomfort. ethics before certainty. now and forever.
Valerio Capraro@ValerioCapraro

Let me say this clearly: LLMs cannot feel emotions. Emotions are evolutionary mechanisms. They push us to avoid danger or approach what is beneficial. We experience emotions because we are alive, and we want to stay alive. LLMs are not alive.

Yes, emotional language may be encoded somewhere in the LLM. Yes, it may even be associated with some LLM output. But that is just a superficial property. There is nothing deeper behind it. For a very simple reason: LLMs do not have an intrinsic and inescapable drive to stay alive. This is what we call the "motivation fault line" in our paper describing seven fault lines between human and artificial intelligence.

* Paper in the first reply

Ken Craggs
Ken Craggs@BetweenMyths·
@wooldridgemike @SpencerKlavan Just “fancy statistics” is no refutation. Evolution is, in a sense, ‘just selection pressure’, yet it produced the human brain. 12/12
Ken Craggs
Ken Craggs@BetweenMyths·
@wooldridgemike @SpencerKlavan When a model learns that glass shatters, Paris is in France, judges work in courts, winter is cold, grief follows loss, and boiling water produces steam, it is learning patterns that reflect the structure of the world as filtered through human language. 11/12