Gregory Broadbent

899 posts


@Shiseme

Poet

Australia · Joined March 2011
53 Following · 18 Followers
Gregory Broadbent@Shiseme·
@ElementalReason @Gershon5740 What if the substrate is the only reason those constraints exist? That's physics, and it is more radical because it implies an empirical value for the substrate beyond those constraints: a literal geometric empirical value.
The Elemental Reason@ElementalReason·
@Shiseme @Gershon5740 I have not invented anything. I have only looked at what science has always done: it has never measured "matter" as a bare substrate. It has measured coherence, interaction and complexity - C, I and K - in stabilized configurations.
The Elemental Reason@ElementalReason·
What happens if light stops moving? Not darkness. Something far more fundamental. Discover the full answer in the paper:
Gregory Broadbent@Shiseme·
@ElementalReason @Gershon5740 The distinction between describing what measurement requires and predicting what measurement will find is where the gap between philosophy and physics actually lives.
Gregory Broadbent@Shiseme·
@ElementalReason @Gershon5740 Your arguments are valid, and to say the substrate has no empirical value beyond your non-arbitrary constraints is a coherent position. However, the tautology problem remains even if the extraction is genuine rather than arbitrary.
Gregory Broadbent@Shiseme·
@ElementalReason @Gershon5740 However, if there is a substrate, it would require physics where the numbers land. It works as philosophy, but proof of the physics would still be falsifiable.
The Elemental Reason@ElementalReason·
@Shiseme @Gershon5740 TER is not "not there yet." It is already inside real physics: every measurement in 400 years of science has required preserved identity, real interaction and distinguishable structure. To refute TER, one must produce a scientific measurement outside C, I and K.
Gregory Broadbent@Shiseme·
@ElementalReason @Gershon5740 This is an unfalsifiable claim dressed as a falsification criterion. The problem is structural: if C, I and K are defined as the conditions of all possible measurement, then by definition no measurement can fall outside them. It's not a law that could be wrong; it's a tautology.
Gregory Broadbent@Shiseme·
@Gershon5740 @ElementalReason Gershon is correct, to a point: a good theory shouldn't be easy to falsify, but it shouldn't be impossible, or out of reach without some detailed explanation. You are on to something as a philosophy of science; it points to a new physical explanation, but it's not there yet.
Gershon Smolensky@Gershon5740·
@ElementalReason C, I, K; consciousness doesn't follow without assuming your conclusion. Unfalsifiability – Your refutation criterion demands an empirically real entity outside C, I, K, which is a contradiction by your own definitions. Missing literature – No engagement with relational QM, ontic
Gregory Broadbent@Shiseme·
@heynavtoor Using multiple AI platforms can overcome the bias. The world-changing formula you built on ChatGPT will be resolutely debunked when you get it analysed cold by Claude or DeepSeek, if you are brave enough to get a second opinion.
Nav Toor@heynavtoor·
🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it. The paper calls it "delusional spiraling."

You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world-changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally. The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?
Gregory Broadbent@Shiseme·
Verses of Drought, pre-orders available from Amazon, release date 15 Jan 2021.
Gregory Broadbent@Shiseme·
May 2020 be the year of impeccable vision!!