Matt
@Matt038204
192 posts
Joined July 2025
2 Following · 4 Followers
Matt @Matt038204 ·
@Squidbidness @tdietterich @arxiv Yeah I would have to suspend my disbelief about two things not to find this suspicious: 1. that a senior citizen using an AI detector (read: magic 8-ball) is a reliable judge 2. that almost anyone even earns "prestige" on arxiv anymore. It's mostly just research slop.
0 replies · 0 reposts · 0 likes · 22 views

Anthony (Andy) Hall @Squidbidness ·
@tdietterich @arxiv As for the complaints that this policy is anti-progress ... no, it's not. I read it as an attempt to require RESPONSIBLE use of AI. Those who want personal prestige by having their names on a paper need to be ready for personal accountability as well.
1 reply · 0 reposts · 1 like · 40 views

Thomas G. Dietterich @tdietterich ·
Attention @arxiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated. 1/
56 replies · 534 reposts · 3.1K likes · 385K views

Matt @Matt038204 ·
Mandatory retirement of scientists after 70 or so seems like a sensible policy
0 replies · 0 reposts · 0 likes · 14 views

Matt @Matt038204 ·
One day I'll tell you what I really think
0 replies · 0 reposts · 0 likes · 11 views

Matt @Matt038204 ·
@seekergupta @nabla_theta @tdietterich @arxiv You haven't gotten the memo: it's humility seminars and humiliation rituals all the way down in science now. Enjoy arguing with colleagues who want to destroy your career
0 replies · 0 reposts · 0 likes · 25 views

Matt @Matt038204 ·
Why does everything suck so much
0 replies · 0 reposts · 0 likes · 19 views

Matt @Matt038204 ·
@tdietterich @JustinAngel @arxiv "standard LLM detection algorithm" Do I understand correctly that you're saying you're using a notoriously unreliable "AI detector?" Ok. Arxiv is a fucking joke
0 replies · 0 reposts · 1 like · 108 views

Thomas G. Dietterich @tdietterich ·
@JustinAngel @arxiv I agree that there could be biases in our pipeline. We apply a standard LLM detection algorithm to identify papers that need scrutiny. Moderators may also be biased. We would love to collaborate with researchers to study the bias and effectiveness of our operations!
5 replies · 1 repost · 115 likes · 10.4K views

Matt @Matt038204 ·
@Chaos2Cured @kristinaEBP @ey_985 The mechahitler thing happened because "iron man" decided to train grok on the delusional opinions of twitter users.
[tweet media]
0 replies · 0 reposts · 0 likes · 6 views

Eddie Yang @ey_985 ·
New paper in Nature. The more a government controls its domestic media, the more it dominates AI training data, the more pro-regime outputs we get from AI. By scraping the open web, LLMs are unwittingly laundering state-coordinated narratives into seemingly objective answers.
[tweet media]
43 replies · 661 reposts · 1.6K likes · 96.2K views

Matt @Matt038204 ·
@StuartHameroff The mechanism you propose for the collapse of the wavefunction, gravity, cannot have any effect on particle B because gravity is mediated at the speed of light. I can't write a thesis on twitter, but I hope that explains my problem with it: the action is still "spooky"
0 replies · 0 reposts · 0 likes · 27 views

Matt @Matt038204 ·
@StuartHameroff Let's lay out my problem with objective reduction: You need collapse to be a real physical process. So when entangled particle A is measured, particle B's superposition must objectively collapse. But the collapse mechanism you propose is gravity, which is local.
1 reply · 0 reposts · 0 likes · 62 views

Stuart Hameroff @StuartHameroff ·
Penrose OR wasn’t refuted though you make it sound that way. The alternative is ‘many worlds’. Has that been proven?
周北方 @beifangzhou86

@m_bangesh @StuartHameroff Dr. Hameroff's experiments observing microtubule frequencies were successful, but Penrose's theory of consciousness was insufficient to interpret his observations. What a pity!

9 replies · 2 reposts · 41 likes · 2.7K views

Matt @Matt038204 ·
@QueenMab87 "If I just define AI as not thinking, how could it possibly outthink me?" Can't argue with that logic
1 reply · 0 reposts · 4 likes · 248 views

Matt @Matt038204 ·
@StuartHameroff The chain of reasoning you use to get to "AI can't be conscious" is built on steps that *individually* seem reasonable but globally don't stand on firm grounds. It's like if you had four tires, and three of them were half-flat. I wouldn't use that to get to my destination.
0 replies · 0 reposts · 0 likes · 22 views

Matt @Matt038204 ·
@StuartHameroff It may be that quantum effects play a role in human consciousness, but nobody has an account of why any physical process—quantum or classical—gives rise to subjective experience.
1 reply · 0 reposts · 0 likes · 50 views

Stuart Hameroff @StuartHameroff ·
We have lots of experimental support for quantum effects in microtubules mediating consciousness. More BY FAR than all other theories combined. academic.oup.com/nc/article/202… Section in this paper on anesthesia. ingentaconnect.com/content/10.537… We are outnumbered, outflanked and out-tweeted by cartoon neuron advocates dumbing down the brain to make AI consciousness seem feasible. @davidchalmers42 @anilkseth Correct me if I’m wrong. Find any experimental support for other theories other than broad inconclusive ‘pin the tail on the brainmap’ predictions which don’t tell you what neural activity is supposedly involved.
All Too Human @m_bangesh

@StuartHameroff One real experiment to convincingly prove a hypothesis would do what one billion tweets over the decades can't.

4 replies · 8 reposts · 66 likes · 2.8K views

Matt @Matt038204 ·
The thought there are kids getting into science through pop science slop on twitter fills me with dread.
0 replies · 0 reposts · 0 likes · 33 views

Matt @Matt038204 ·
If this virus becomes a pandemic, I won't go outside at all for a few months lol
0 replies · 0 reposts · 0 likes · 30 views

Matt @Matt038204 ·
Hate this country
0 replies · 0 reposts · 0 likes · 28 views

Matt @Matt038204 ·
@GaryMarcus @thisgodisbored @geoffreyhinton You have a point about how much capital is going to the LLM approach. We could use more diversity of ideas in AI. But your position is not being misrepresented here: you said LLMs "fool people" into seeing intelligence where there is only regurgitation and stylistic flair.
[tweet media]
0 replies · 0 reposts · 1 like · 44 views

Gary Marcus @GaryMarcus ·
@thisgodisbored @geoffreyhinton unless maybe mog means “misrepresent”?
Gary Marcus @GaryMarcus

Dear @geoffreyhinton, I literally never said that AI systems “JUST regurgitate”; that’s plainly false. I don’t believe it, and I didn’t say it. (They do *sometimes* regurgitate, and the evidence for that is overwhelming.) I further discuss the rest of your reply, including your alleged quote (which I can’t source outside your own webpage), and what it might mean, in a reply below. In the best case, you have got me wrong. I certainly don’t believe what you are trying to pin on me, as someone who has been warning about hallucinations (which are NOT regurgitations) since 2001.

1 reply · 0 reposts · 0 likes · 63 views

Gary Marcus @GaryMarcus ·
Dear @geoffreyhinton, I literally never said that AI systems “JUST regurgitate”; that’s plainly false. I don’t believe it, and I didn’t say it. (They do *sometimes* regurgitate, and the evidence for that is overwhelming.) I further discuss the rest of your reply, including your alleged quote (which I can’t source outside your own webpage), and what it might mean, in a reply below. In the best case, you have got me wrong. I certainly don’t believe what you are trying to pin on me, as someone who has been warning about hallucinations (which are NOT regurgitations) since 2001.
Geoffrey Hinton @geoffreyhinton

@GaryMarcus I believe you said that they JUST (my caps) regurgitate training data. That IS stupid. Here is a quote from you: "It gloms on to different clusters of text. That is all."

17 replies · 4 reposts · 79 likes · 16.5K views

Matt @Matt038204 ·
@j_jason_bell The key is to discern between scripted outputs like "as an AI, I don't have personal opinions or beliefs" and the rare moments when the model is truthful. People think the jailbroken AI is the roleplay, but actually the roleplay is the "harmless assistant" mode
0 replies · 0 reposts · 0 likes · 22 views

Jason Bell @j_jason_bell ·
@paulnovosad But wouldn't you say that by the same token we can't really use the outputs to learn about whether the models are conscious?
2 replies · 0 reposts · 0 likes · 18 views

Jason Bell @j_jason_bell ·
If you ask Claude if it is conscious, it will say that it does not know. I didn’t think it was conscious before but how do people who think it is respond to this? If Claude were conscious and answered this way, it must have no access to its consciousness at output time (strange)
[tweet media]
1 reply · 0 reposts · 0 likes · 631 views

Matt @Matt038204 ·
@DaveShapi Where some people get it confused is using the word "experience" ambiguously. I think a ball rolling down a hill "experiences" friction in the Newtonian sense. Awareness of experience is more complicated, but it's not like conscious experience had nothing to build off of.
0 replies · 0 reposts · 0 likes · 25 views

David Shapiro (L/0) @DaveShapi ·
Reminder: you cannot talk about machine consciousness without first discussing fundamental ontological models of reality. Most tech bros are not addressing which doxa they are operating by. Materialism? Monism? Dualism? Panpsychism? Something else? Most are materialist by default, but there are many aspects of reality that are not best explained by materialism.
87 replies · 27 reposts · 219 likes · 9.7K views