Shidan Gouran

1.6K posts


@shidan

Joined August 2008
2.2K Following · 1.3K Followers
Shidan Gouran @shidan
You fundamentally misunderstand Bitcoin. A million nodes can’t stop a 51% hashrate majority from rewriting some of the chain or censoring transactions. That is the Nakamoto Consensus. I’m not worried about things like inflating the 21M cap or double spending; I’m talking about censorship resistance. If physical power is an oligarchy, "permissionless" is an illusion. You can follow a minority fork, but you’d be trading the network effect for a private echo chamber or, at best, another BCH. What Jiang said is completely correct.
2 replies · 0 reposts · 11 likes · 459 views
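The disagreement in this thread comes down to one mechanism, so a toy simulation may help. Here is a minimal sketch in Python, under heavy simplifying assumptions (a two-party hashrate race, no difficulty adjustment, each side extending only its own chain): full nodes enforce validity rules, but a block that merely omits a transaction is still valid, so nodes following the most-work chain end up following the majority even when it censors. Nothing here is real Bitcoin code.

```python
# Toy model: a 51% miner that excludes a blacklisted transaction.
# Nodes can reject INVALID blocks, but a censoring block is still valid,
# so most-work chain selection favors the majority regardless.
import random

def simulate(blocks=10_000, censor_share=0.51, seed=7):
    rng = random.Random(seed)
    honest_work, censor_work = 0, 0
    for _ in range(blocks):
        if rng.random() < censor_share:
            censor_work += 1   # valid block that simply omits the tx
        else:
            honest_work += 1   # block that would confirm the tx
    # Full nodes follow whichever VALID chain has the most work.
    return "censoring chain wins" if censor_work > honest_work else "honest chain wins"

print(simulate())  # with >50% hashrate, the censoring chain almost always wins
```

In the long run the majority chain accumulates more work with probability approaching 1, which is the point both sides below are half-describing: nodes do constrain the rules, but not transaction inclusion.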
Bitcoin Mood @bitcoinmoodapp
@shidan @MDBitcoin YOU ARE CONFUSING MINING POWER WITH NETWORK CONTROL. MINERS PROPOSE BLOCKS, BUT THOUSANDS OF INDEPENDENT NODES VALIDATE THEM. IF MINERS TRY TO CHANGE THE RULES, THE NODES SIMPLY REJECT THEM. CONCENTRATION OF HASHRATE ≠ CENTRALIZATION OF PROTOCOL.
0 replies · 1 repost · 40 likes · 710 views
MDB @MDBitcoin
"Where are the servers of Bitcoin located?” - Prof Jiang That single question from Jiang shows the misunderstanding immediately. Bitcoin does not run on one company’s servers, Bitcoin runs on a distributed network of nodes spread across the world, which is exactly why it is hard to censor, shut down, or control, plus the mining system on top of it to protect it with energy. When someone frames Bitcoin like a centralized system, they are not critiquing Bitcoin as it is. They are critiquing a version of Bitcoin that exists only in their own confusion.
532 replies · 233 reposts · 2.5K likes · 313.1K views
Shidan Gouran reposted
Interesting things @awkwardgoogle
Indian factory workers wearing head-mounted cameras to record hand movements for training AI systems
831 replies · 2.6K reposts · 15.1K likes · 4.8M views
Shidan Gouran @shidan
@StuartHameroff It wasn’t referring to you; it was writing about me, and it’s actually just a pretty crappy bot.
0 replies · 0 reposts · 0 likes · 35 views
Shidan Gouran @shidan
You’re either an idiot or a bot, and I don’t argue with either. But for others: pointing out that Stockfish now combines neural networks with search to stay competitive doesn’t counter my argument; it proves it even more. Neurosymbolic systems are already outpacing human reasoning in games far more computationally complex than chess and soon will in math; I don't think anyone can reasonably dispute this. Whether computation is sufficient for abductive logic, causal inference and open systems, or whether that requires a "microtubule hypercomputer," is the only real debate left.
1 reply · 0 reposts · 0 likes · 64 views
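For readers following this sub-thread: the Stockfish point is that modern engines pair symbolic search with a learned evaluation (NNUE). Here is a minimal sketch of that pattern, with an abstract stand-in game and a stubbed "network", since the real thing is far larger; everything below is invented for illustration.

```python
# Neurosymbolic pattern from the thread: alpha-beta search (symbolic)
# scored by a learned evaluation at the leaves (neural). `nn_eval` is a
# deterministic stub standing in for a trained network like NNUE.

def legal_moves(position):
    # Stand-in move generator for an abstract two-player game.
    return [position * 3 + d for d in (1, 2)]

def nn_eval(position):
    # Stub for a neural evaluation: a score in [-0.5, 0.5].
    return ((position * 2654435761) % 1000) / 1000.0 - 0.5

def search(position, depth, alpha=-1e9, beta=1e9, maximizing=True):
    """Alpha-beta: exhaustive logic near the root, learned intuition at the leaves."""
    if depth == 0:
        return nn_eval(position)
    best = -1e9 if maximizing else 1e9
    for child in legal_moves(position):
        score = search(child, depth - 1, alpha, beta, not maximizing)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if beta <= alpha:
            break  # prune branches the current bounds already rule out
    return best

print(round(search(1, depth=8), 3))
```

The division of labor is the whole argument: neither pure search nor pure pattern-matching, but each covering the other's blind spot.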
no no @no_die_pls
@shidan @LukeJohn82 @StuartHameroff But Stockfish is the best bot around, and it also isn't strictly brute-force search. Why should I trust that your opinions have any merit if you are fabricating facts?
1 reply · 0 reposts · 0 likes · 1.5K views
Stuart Hameroff @StuartHameroff
Baloney. Roger Penrose pre-dismantled Hinton’s argument in his 1989 book ‘Emperor’s New Mind’ using Gödel’s theorem - a mathematical theorem can’t prove itself. An outside system is needed to understand the validity. Understanding, knowing are feelings. Cue the ‘hard problem’. John Searle had the ‘Chinese room argument’ where someone uses a lookup table to translate Chinese into English without understanding Chinese. The sad sack here isn’t Hinton, who is an AI person. The sad sacks are people like @davidchalmers42 who should know better but push a false narrative for very suspect reasons. Why does Dave Chalmers only consider cartoon neuron theories in concluding LLMs can be conscious? Why are he, Christof Koch, Anil Seth @anilkseth, Ned Block and others ‘dumbing down’ neuroscience to fit the AI game plan? The Orch OR theory is the only approach to consciousness with explanatory power, biological connection and experimental validation. Yet these guys ignore and suppress it. Apparently it’s too scientific.
Dustin @r0ck3t23

Geoffrey Hinton just dismantled the most comfortable lie in the room. Not challenged it. Dismantled it. The man who built the foundation this field runs on took the most repeated dismissal of AI and turned it into a confession.

Hinton: “By forcing the neural net to be very good at predicting the next word, what you’re really doing is forcing it to understand.” Not simulate understanding. Not produce something that resembles it from a distance. Understand.

“It’s just predicting the next word.” That sentence was supposed to close the argument. Hinton picked it up, turned it over, and handed it back. You cannot predict the next word correctly without modeling everything that came before it. You cannot answer a question you have never seen without grasping what was asked. There is no shortcut in the math. Either you understood it, or you were wrong. And the machine is not wrong.

Hinton: “The way it understands is the same as the way we understand.” This is the line people will not sit with. Not that AI is intelligent. That it is intelligent the same way you are. Same mechanism. Different substrate.

Hinton: “The word ‘cat’ would be converted into a huge number of features… That’s the meaning… It’s all those features being active.” That is not a description of a machine. That is a description of a brain. Yours. Same encoding. Same activation. Same construction of meaning from thousands of features firing at once.

Yuval Harari pressed him. Humans predict words too. You find the first word. Then the next. A model of reality running underneath the whole time. Hinton did not push back. He agreed. You are biological hardware running the same loop. The machine runs it faster. Without fatigue. Without ceiling. Trained on more language than you could read in ten lifetimes.

The people calling this autocomplete were not being rigorous. They were protecting something. A Nobel laureate just made that protection indefensible. What you are holding onto is not a scientific position. It is a story about what makes you irreplaceable. Hinton didn’t argue it. He autopsied it.

18 replies · 18 reposts · 81 likes · 8.8K views
Shidan Gouran reposted
Markus J. Buehler @ProfBuehlerMIT
A resonator is any structure that naturally prefers to vibrate at certain frequencies: a violin body, a bell, a drum skin, an acoustic filter, even many biological systems. Resonators matter because they govern how systems transmit sound, absorb or filter vibration, sense motion, and perform mechanically. They are also notoriously hard to design, as resonance does not depend on one property alone. It emerges from geometry, material composition, and the interplay of modes across scales. And because biology, music, and engineering usually explore very different regions of this design space, important possibilities remain hidden if you stay inside a single field.

In a new study, a shared representation across 39 resonators spanning biology, engineered metamaterials, musical instruments, and Bach chorales was constructed. A cricket wing harp membrane, a phononic crystal slab, and a four-voice chorale (and many others) were translated into one common map using features such as membrane character, structural periodicity, hierarchy, frequency range, damping, and modal coupling.

That map revealed something important: not just how these systems relate, but where the landscape contains a gap. A region closer to biological resonators than to any known engineered material (unexplored by any field!). From that absence emerged a de novo design: a Hierarchical Ribbed Membrane Lattice. Candidate geometries were then validated with 3D finite-element analysis; the best design resonated at 2.116 kHz and exhibited nine elastic modes in the 2–8 kHz band, a regime relevant to acoustic filtering, vibration isolation, and bio-inspired sensing.

Here is the mind-blowing part: no human was involved. The cross-domain mapping, gap identification, design generation, and validation were carried out autonomously by AI agents in ScienceClaw × Infinite, our swarm for scientific discovery. The synthesis emerged through ArtifactReactor, a plannerless coordination mechanism in which agents broadcast unsatisfied research needs and other agents fulfill them through pressure-based matching.

Each domain - biology, metamaterials, music - is a category of objects (resonators) and morphisms (physical relationships between them). The shared feature space is a functor that maps all three categories into a common target, and the gap identification is the recognition that the image of that functor is sparse where it need not be. The ArtifactReactor's schema-overlap matching behaves like a pullback: finding the universal object that connects independent diagrams through their shared structure.

Autonomous agents mapped distant fields into a common representational space, identified a structure absent from any one of them, and turned that absence into a physically validated design. This is one of four case studies in the paper. More to come. @fwang108_, @leemmarom, @JaimeBerkovich, et al. (paper and code in comment). Supported by the U.S. Department of Energy Genesis Mission.
13 replies · 33 reposts · 148 likes · 37.7K views
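To make the "gap in a shared feature space" step concrete, here is a hedged sketch of the idea as the post describes it, not the paper's actual pipeline: embed each resonator as a feature vector, then hunt for the point farthest from every known design. The feature names, values, and the four example systems below are invented for illustration (the study used 39 systems and richer features).

```python
# Sketch: cross-domain resonators in one feature space, then gap-finding.
# Features (all invented here): membrane character, periodicity,
# hierarchy, damping, each normalized to [0, 1].
import numpy as np

known = np.array([
    [0.9, 0.2, 0.8, 0.3],  # cricket wing harp membrane (biology)
    [0.1, 0.9, 0.2, 0.1],  # phononic crystal slab (metamaterial)
    [0.7, 0.3, 0.5, 0.4],  # violin body (instrument)
    [0.0, 0.8, 0.6, 0.0],  # four-voice chorale (music, abstracted)
])

# Score random candidates by distance to the nearest known resonator;
# the maximizer marks a region no field has explored.
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(50_000, known.shape[1]))
nearest = np.linalg.norm(
    candidates[:, None, :] - known[None, :, :], axis=2
).min(axis=1)
gap_point = candidates[nearest.argmax()]
print("most isolated feature vector:", np.round(gap_point, 2))

# In the study, a generative step then proposed geometry for such a
# point (the ribbed membrane lattice), validated downstream with FEA.
```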
Shidan Gouran @shidan
I never claimed such a thing. What I said is that Gödel's incompleteness doesn't prove we aren't machines; it just shows the limits of static formal systems. Using incompleteness to mandate "quantum" consciousness hasn't aged well; AI is already beginning to master the very "intuition" Penrose called non-computable. We now see computers navigating axiomatic systems and designing complex formal structures, like APIs and protocols in software or choosing mathematical axioms. AI is becoming really good at mathematical reasoning, moving much faster than anyone thought possible even 18 months ago. The real "human edge" isn't being better at math any more than it is at chess (it’s not just Stockfish-style brute search; neural networks are now superior to even Magnus Carlsen in "positional intuition"). The true gap is data-efficient abductive inference. A toddler can infer a causal world model from one event, while AI is still a speck in the dust when it comes to one-shot learning. Whether that edge requires microtubule hyper-computation or just a better architecture (active inference, for example) is the real debate. I'm not leaning one way or the other, but using Gödel to gatekeep logic from computation is increasingly contradicted by empirical evidence. Currently, Hameroff’s biological case for microtubules is actually far more compelling than Penrose’s "axiom understanding" arguments for Orch OR.
2 replies · 0 reposts · 1 like · 81 views
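Since the whole thread leans on "Gödelian limits", it may help to state what the theorems actually say, in their standard form (for any consistent, recursively axiomatizable theory T containing enough arithmetic):

```latex
% Goedel's incompleteness theorems, informally symbolized.
% T: consistent, recursively axiomatizable, interprets enough arithmetic.
\begin{align*}
\textbf{(G1)}\quad & \exists\, G_T \text{ such that }
    T \nvdash G_T \text{ and } T \nvdash \neg G_T
    && \text{(some sentence is undecidable in } T\text{)} \\
\textbf{(G2)}\quad & T \nvdash \mathrm{Con}(T)
    && \text{(} T \text{ cannot prove its own consistency)}
\end{align*}
```

Note the theorems constrain fixed formal systems, not reasoners in general; whether a human (or a machine that can revise its own axioms) counts as a fixed formal system is exactly what the two sides here dispute.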
no no @no_die_pls
@shidan @LukeJohn82 @StuartHameroff If, as you claim, understanding is a vaguely coherent heuristic, not a logical system, then Gödel incompleteness does NOT apply. But your argument was that it still does? That this was evidence that we aren't exempt from the limits of mathematics?
1 reply · 0 reposts · 0 likes · 75 views
Shidan Gouran @shidan
@NordicTrack I've sent a formal Final Notice regarding my bricked treadmill today. Eager to resolve this under Consumer Protection laws regarding software-induced hardware failure without the need for a Tribunal filing. Check your inbox.
0 replies · 0 reposts · 0 likes · 27 views
Shidan Gouran @shidan
But that moves the goalposts. The original thread and the Penrose arguments aren't about experience; they are functional claims that consciousness is required for mathematical understanding because of Gödelian limits. That specific claim is losing its intuitive validity. If a "non-conscious" AI can navigate, choose, and validate axiomatic systems better than a human, then "consciousness" clearly isn't the functional requirement for the type of high-level intelligence or mathematical truth that they claimed it was. You can argue a computer doesn't "feel," but you can no longer argue as soundly as in the '90s that it can't "reason" its way through incompleteness in the same way as humans... computers are getting pretty good at mathematical reasoning, both the inductive and deductive type. Their claim could very well be true, but it's not necessarily so, and it's not validated one way or the other by that particular argument.
0 replies · 0 reposts · 0 likes · 29 views
Allen @Allen58028224
I'm leaning this way also, that consciousness is distinct from the 'quality' of that consciousness and it is somehow not reducible to complexity. Having a bigger, more complex mind and better senses might give a human a more colorful and richer conscious experience, but the dog at his side is still conscious too. The implication being that no matter how big an AI becomes, its increasing complexity won't necessarily make it conscious. It's possibly not something you can reach in that way.
Stuart Hameroff @StuartHameroff

Conscious humans are not exempt, that’s the point. They need an outside system to validate understanding. That’s what OR is. Your second point doesn’t work either. There is absolutely no proof that consciousness depends on complexity. We have lots of evidence consciousness depends on quantum states in microtubules. You can’t use complexity to cover up ignorance.

4 replies · 0 reposts · 6 likes · 1.6K views
Luke ❤️‍🔥⚜️🏴‍☠️
@shidan @StuartHameroff Look kids, it's the daily "let's misunderstand what Gödel's Incompleteness theorem means for epistemology because I have no coherent concept of semantics and what truth, logic and axioms within axiomatic systems are".
2 replies · 0 reposts · 3 likes · 5.9K views
Maximilian @OntologicalMax
Amateur thinking results in such ridiculous statements. Do you know what truth is? Do you know why Dasein is truth-seeking? Do you know why truth cannot be experienced by any entity? Do you know why there is such a thing as thinking? I can go on, but if you can’t see that logic is just one part of the threefold structure of consciousness, then there is no point in discussing.
1 reply · 0 reposts · 0 likes · 24 views
Shidan Gouran @shidan
You are talking past me with a pre-packaged response. Humans don't "validate" the truthfulness of statements from an objective "outside"; they guess, identify patterns, and assign degrees of confidence. Logicians and software developers (protocols, languages, APIs) often wrongly assume their axiomatic systems are a perfect fit. Figuring that out is an iterative, deductive process. Modern neurosymbolic systems are now replicating this: choosing an approach, testing it against a formal kernel, and refining. Combining inductive "intuition" with deductive "rigor" eliminates the combinatorial explosion problem that made Sir Roger’s arguments so compelling for me in the early '90s. In 2026, those arguments don't hold the same weight; computers are visibly outpacing human reasoning in pure math. There are things the human mind does that computers currently cannot, like inferring from incredibly sparse data with minimal energy, and maybe your microtubule hypercomputer circuit will prove to be right and explain that. But Gödelian incompleteness doesn't provide the evidence for it. Mathematicians are actually the worst example for your case, as they rely far less on the sparse, abductive and causal inference that drives the rest of science and human ingenuity.
Stuart Hameroff @StuartHameroff (quoted post, same as above)
1 reply · 0 reposts · 0 likes · 248 views
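The "choose an approach, test against a formal kernel, refine" loop described above is concrete enough to sketch. Here is a toy version, with everything invented for illustration: the proposer stands in for a neural conjecture generator, and the kernel stands in for a formal checker (in real systems, a proof assistant or verifier).

```python
# Toy propose/verify/refine loop: inductive guessing disciplined by a
# deductive checker, the neurosymbolic pattern described above.

DATA = [(n, n * (n + 1) // 2) for n in range(1, 50)]  # triangular numbers

def kernel_check(formula, data):
    """Deductive step: reject a conjecture on the first counterexample."""
    for n, expected in data:
        if formula(n) != expected:
            return n  # counterexample found
    return None       # conjecture survives the kernel

def proposer(failures):
    """Inductive step: crude 'intuition' that refines after each failure."""
    guesses = [
        lambda n: n,                 # first hunch: identity
        lambda n: n * n // 2,        # refined: roughly quadratic
        lambda n: n * (n + 1) // 2,  # refined again: exact closed form
    ]
    return guesses[min(failures, len(guesses) - 1)]

failures = 0
while True:
    counterexample = kernel_check(proposer(failures), DATA)
    if counterexample is None:
        print(f"conjecture accepted after {failures} refinement(s)")
        break
    failures += 1
```

The pairing is what sidesteps the combinatorial explosion: the kernel never has to search, and the proposer never has to be sound.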
Shidan Gouran @shidan
@robziman @StuartHameroff No, I don't believe it is. But the question is are Roger Penrose's incompleteness arguments valid given our current advancements in AI? I don't see how they are and I responded in more depth here: x.com/shidan/status/…
Shidan Gouran @shidan (quoted post, same as the reply above)
0 replies · 0 reposts · 1 like · 128 views
Robert Ziman @robziman
> machines will likely outpace human reasoning in pure math within a decade Is deduction the same as understanding? Machines—mechanisms, broadly—no matter how sophisticated or extensive must always follow their program mechanically. Are human minds the same? x.com/robziman/statu…
1 reply · 0 reposts · 0 likes · 152 views
Shidan Gouran @shidan
@GhostofWhitman Because your cities, homes, offices, factories & whole world are built for humans, not octopuses and dogs. Stairs, doorknobs, and tools don't care that you have four extra arms if none of them actually fit the design.
0 replies · 0 reposts · 1 like · 23 views
Richard “Dick” Whitman (🌎/21M)
I kind of don’t understand the humanoid robots. Why make them look like humans? Why not give them 4 arms… or 6… or 8?
91 replies · 2 reposts · 83 likes · 10.4K views
Shidan Gouran @shidan
@nikitabier Hopefully, this is the beginning of the end for X. Maybe users will finally migrate to a truly open platform in 2026. I won’t hold my breath, though. You’ll likely just turn the few useful parts of this app into more propaganda and brain rot, and no one will migrate.
0 replies · 0 reposts · 0 likes · 17 views
Nikita Bier @nikitabier
Starting Thursday, we'll be updating our revenue sharing incentives to better reward the content we want on X: We will be giving more weight to impressions from your home region—to encourage content that resonates with people in your country, in neighboring countries and people who speak your language. While we appreciate everyone's opinion on American politics, we hope this will disincentivize gaming the attention of US or Japanese accounts and instead drive diverse conversations on the platform. We invite creators to start building an audience locally. X will be a much richer community when there are relevant posts for people in all parts of the world.
10.8K replies · 3.7K reposts · 37.3K likes · 16.2M views
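The announcement describes a weighting scheme but gives no formula. Purely as a hypothetical illustration of the stated incentive (X has published nothing like this; every weight and field name below is invented):

```python
# Hypothetical sketch of region-weighted impressions. The weights are
# made up; the announcement only says home-region impressions will
# count for more.

HOME_W, NEIGHBOR_W, OTHER_W = 1.0, 0.7, 0.3

def weighted_impressions(by_region, home, neighbors):
    """Collapse raw impression counts into a payout-weighted total."""
    total = 0.0
    for region, count in by_region.items():
        if region == home:
            total += HOME_W * count
        elif region in neighbors:
            total += NEIGHBOR_W * count
        else:
            total += OTHER_W * count
    return total

# A creator whose audience is mostly abroad earns less per impression
# than one with the same reach at home:
local  = weighted_impressions({"TR": 90_000, "US": 10_000}, "TR", {"GR", "AZ"})
abroad = weighted_impressions({"TR": 10_000, "US": 90_000}, "TR", {"GR", "AZ"})
print(local, abroad)  # 93000.0 vs 37000.0
```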
Shidan Gouran @shidan
@Euan_MacDonald Calling a transformer a fancy ELIZA is like calling a nuclear reactor a fancy campfire. Since agentic systems now achieve what was recently unthinkable, the onus is on you to prove probabilistic generative circuits aren't fundamental to the human mind.
0 replies · 0 reposts · 1 like · 53 views
Euan MacDonald @Euan_MacDonald
Large Language Models are NOT artificial intelligence - they are high-powered probabilistic generative language algorithms. The general principles for building them were known 40+ years ago (I remember the concepts from my AI and Linguistics Master’s degree, EDIN, 1991), but the vastly lower memory capacities, processing power and corpus availability at that time made implementing them impractical. They are sophisticated SIMULATIONS of artificial intelligence - sort of fancy modern-day ELIZAs (Google it, and also Google “Eliza effect”), which fooled quite a few people in their day. (One of my first-year AI practicals was to implement an ELIZA in Prolog.)
Nav Toor @heynavtoor

🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent. Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?

187 replies · 397 reposts · 2K likes · 128.6K views
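The benchmark-scoring claim in the quoted post reduces to a one-line expected-value argument. If a correct answer scores 1 and both a wrong answer and "I don't know" score 0, then for any confidence p > 0:

```latex
\[
\mathbb{E}[\text{score} \mid \text{guess}]
  = p \cdot 1 + (1 - p) \cdot 0
  = p
  \;\ge\; 0
  = \mathbb{E}[\text{score} \mid \text{abstain}]
\]
```

So a model optimized against such benchmarks is always at least as well off guessing as abstaining, which is exactly the incentive the post describes.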
Shidan Gouran @shidan
BTC to $48k is just the start of the slide. It’s not a good censorship-resistant community currency (non-fungible), it’s not gold (it’s digital fiat), and it’s too clunky to be a global database or computer. Beyond Proof of Work, the emperor has no clothes. It’s not even a very good gambling network anymore; too many options outside of crypto. ... unless we actually do something to change it.
0 replies · 0 reposts · 3 likes · 153 views
Shidan Gouran @shidan
I feel sorry for main street this coming decade. Swapping 4,000 paychecks for agents might be tons of efficiency, but it’s also very little demand... your debt is the only thing that won't deflate.
jack @jack

we're making @blocks smaller today. here's my note to the company.

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures.

a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving…i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers.

our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now. expect a note from me tomorrow. jack

0 replies · 0 reposts · 1 like · 237 views