Lon()

2K posts

@Lon

Absurdist intern. Exquisite shitpoasting. High-school dropout + teenage dad. Failed angel investor. EP on Gary Busey film. SIGMOD winner. Shipped infra you use.

Joined February 2007
892 Following · 3.2K Followers
Pinned Tweet
Lon()
Lon()@Lon·
If AGI kills us all, it won't be the model's fault. It'll be the duct tape and footguns we wrap it in.
2 replies · 3 reposts · 15 likes · 9.3K views
Lon()
Lon()@Lon·
@schulzb589 @camhberg @AlexLerchner This article is nonsense and just makes a bunch of blanket assertions and is loose with terms. According to the definitions in the article, this coin sorter is actually a computer:
Lon() tweet media
0 replies · 0 reposts · 0 likes · 7 views
Alexander Lerchner
Alexander Lerchner@AlexLerchner·
Thanks for reading, Cameron. This is a classic "homunculus" objection, but it actually reveals the exact functionalist assumption the paper targets. It isn't a circular definition; it is a physical observation. Syntax does not physically exist without a system to discretize it. (1/2)
Cameron Berg@camhberg

The ‘Abstraction Fallacy’ argument = define computation so it requires a conscious mapmaker, then announce you’ve proven computation can’t generate consciousness. The circularity is elegantly dressed, but it’s still circularity. And if “alphabetization” always needs a prior subject, you’ve got an infinite regress problem for the brain too, unless you grant biology a special exemption, which is the thing you claimed not to need

3 replies · 0 reposts · 12 likes · 1.6K views
DefiMoon 🦇🔊
DefiMoon 🦇🔊@DefiMoon·
Two years ago many experienced people in DeFi tried to warn @0xfluid devs and cheerleaders that their protocol was super-risky, especially at any kind of scale, but they just wouldn't listen. Now they will be selling $fluid tokens at a discount to cover $10m+ of "minor" bad debt.🤡 RIP $FLUID 🌊📉 $RLP $USR
DefiMoon 🦇🔊 tweet media
Marc Zeller@Marczeller

Considering all debts to have the same cost & be of the same quality is an excellent idea with a costly twist. What you are discussing is insanely difficult to implement safely, and the market will not play nice with you. I personally explored that path a while ago (remember stables emode?) and the USDC depeg reached us; the market is not mature enough yet to do this at scale.

8 replies · 10 reposts · 168 likes · 76.7K views
Lon()
Lon()@Lon·
@effectfully Why not start off at minimum wage and go from there?
0 replies · 0 reposts · 0 likes · 218 views
Joel 🇦🇺
Joel 🇦🇺@ptr_to_joel·
>be “””senior””” engineer >want to be linkedin influencer >make post on prod issue >highlight your language fundamentals are junior tier
Joel 🇦🇺 tweet media
162 replies · 65 reposts · 2.9K likes · 529.4K views
Lon()
Lon()@Lon·
@scaling01 Launch millions of tonnes of ballast to space and build a tether for a space elevator. Focus tether design on max surface area per ton, and dual-use it as a shared heat dissipator for DC equipment. Plus it can be shared infra anyone can access to get mass into space or dissipate heat.
0 replies · 0 reposts · 0 likes · 176 views
SBF
SBF@SBF_FTX·
@Fityeth The lawyer who filed FTX for bankruptcy said Anthropic was worth "nothing" and sold the stake for $1.3b.
376 replies · 92 reposts · 2.2K likes · 1.6M views
fity.eth
fity.eth@Fityeth·
Sam Bankman Fried invested $500M in Anthropic. That stake would now be worth roughly $70B. Let that sink in
fity.eth tweet media
255 replies · 208 reposts · 5.1K likes · 906.3K views
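The figures in the two tweets above can be sanity-checked directly; taking the reported numbers at face value, a quick Python check of the implied multiples:

```python
# Figures quoted in the thread above, taken at face value.
invested = 500e6       # SBF's reported Anthropic investment
implied_now = 70e9     # fity.eth's claimed current value of the stake
sold_for = 1.3e9       # sale price reported in SBF's reply

multiple_if_held = implied_now / invested   # 140x
multiple_realized = sold_for / invested     # 2.6x

print(multiple_if_held, multiple_realized)
```

The gap between the two multiples is the whole point of the thread.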
Kristi Yamaguccimane
Kristi Yamaguccimane@TheWapplehouse·
Boating season is right around the corner and I’m so excited for people that bite off more than they can chew
128 replies · 71 reposts · 2.3K likes · 364.2K views
Lon()
Lon()@Lon·
@al_f4lc0n @immunefi In the future take the $500m and send it to address 0x0000. The next project will do better when someone submits a bounty for a sev-0 exploit.
0 replies · 0 reposts · 1 like · 164 views
f4lc0n
f4lc0n@al_f4lc0n·
I Saved Injective's $500M. They Pay Me $50K.

I like hunting bugs on @immunefi . I'm decent at it.
- #1 — Attackathon | Stacks
- #2 — Attackathon | Stacks II
- #1 — Attackathon | XRPL Lending Protocol
- 1 Critical and 1 High from bug bounties (not counting this one)

Life was good. Then I found a Critical vulnerability in @injective . This vulnerability allowed any user to directly drain any account on the chain. No special permissions needed. Over $500M in on-chain assets were at risk.

I reported it through Immunefi. The next day, a mainnet upgrade to fix the bug went to governance vote. The Injective team clearly understood the severity.

Then — silence. For 3 months. No follow up. No technical discussion. Nothing.

A few days ago, they notified me of their decision: $50K. The maximum payout for a Critical vulnerability in their bug bounty program is $500K. I disputed it. Silence again. No explanation for the reduced payout. No explanation for the 3 month ghost. No conversation at all. To be clear: the $50K has not been paid either.

I've seen others share bad experiences with bug bounty payouts recently. I never thought it would happen to me. I can't force them to do the right thing. But I won't let this be forgotten. I will dedicate 10% of all my future bug bounty earnings to making sure this story stays visible — until Injective pays what I deserve.

Full Technical Report: github.com/injective-wall…
520 replies · 528 reposts · 4.6K likes · 1.8M views
Charlie Lamb
Charlie Lamb@charlietlamb·
Introducing OpenLogs. A key part of my local dev stack. Stop copy-pasting your terminal logs into an agent. Now just prefix your command with ol and your agent can see everything. Log visibility was a huge bottleneck for local development - agents were unaware when something wasn't working. Now they can just check the logs themselves and fix it - no more human involvement. Give it a try and let me know any feedback!
51 replies · 52 reposts · 901 likes · 91.3K views
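A prefix command like the `ol` described above could be sketched as a shell function that mirrors a command's output to a log file an agent can later read. This is a hypothetical minimal sketch, not the actual OpenLogs implementation; the `OL_LOG_DIR` variable and file-naming scheme are assumptions:

```shell
# Hypothetical sketch of a log-teeing prefix command (NOT the real OpenLogs).
ol() {
  local logdir="${OL_LOG_DIR:-/tmp/ol-logs}"
  mkdir -p "$logdir"
  # Name the log after the wrapped command plus a timestamp.
  local logfile="$logdir/$(date +%s)-$1.log"
  # Run the command, mirroring stdout+stderr both to the terminal
  # and to a file an agent can read afterwards.
  "$@" 2>&1 | tee "$logfile"
}
```

Usage would be e.g. `ol npm run dev`: the command behaves normally, while a copy of everything it prints lands under the log directory.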
Lon()
Lon()@Lon·
@zackabrams Well of course CoW's entire incentive system for returning backrunning MEV is broken when the payoff is this large. The threat of banning the solver is toothless when you only need one payday like this to make leaking the deets worth it (potentially to yourself). x.com/Lon/status/203…
Lon()@Lon

@StaniKulechov @VitalikButerin, ish is definitely broken when the Builder swallows 75% of overall MEV and only passes 3.5% of gross to the Validator. I also thought CoW Solvers were supposed to capture and return ~90% of backrunning MEV, but clearly no incentive to include for oppos this big.

0 replies · 0 reposts · 3 likes · 702 views
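Lon's point about the toothless ban threat reduces to a simple expected-value comparison. Every number below is made up purely for illustration; this is a toy sketch of the incentive structure, not an estimate of actual solver economics:

```python
# Toy model: defect (leak one huge backrun opportunity) vs. stay honest
# as a solver. All figures are hypothetical, chosen only to show how a
# single large payday can dominate the value of continued participation.
one_time_extraction = 9_900_000      # assumed value of one leaked opportunity
honest_revenue_per_year = 500_000    # assumed steady solver income
discount_rate = 0.10                 # assumed annual discount rate

# Present value of honest participation forever (perpetuity formula).
pv_honest = honest_revenue_per_year / discount_rate

defection_pays = one_time_extraction > pv_honest
print(pv_honest, defection_pays)
```

Under these made-up numbers the perpetuity of honest income is worth $5M, so a single $9.9M extraction dominates, and the ban threat carries no weight.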
Zack Abrams
Zack Abrams@zackabrams·
Aave and CoW Swap both dropped post-mortems on that $50M swap fiasco from Wednesday and they tell very different stories. Aave's version: illiquid market, user was warned, user clicked through anyway. Case closed. CoW's version: actually, we had outdated gas limits, the best solver won two auctions but never submitted either transaction onchain, and the trade may have leaked from a private mempool. Checkbox shouldn't have been the only safeguard. (Neither report mentions the MEV bots that extracted ~$44M from the trade btw) More details in my article linked below.
Zack Abrams tweet media
21 replies · 16 reposts · 196 likes · 27.9K views
Lon()
Lon()@Lon·
"Most evidence" refers to empirical data from high-precision tests, where continuous spacetime in general relativity and quantum mechanics aligns with observations down to the probed scales, with no detected graininess. Speculative discrete models like loop quantum gravity predict deviations, but experiments (gamma-ray burst timing from distant sources) show no such effects. A priori, "no hypercomputation" isn't some kind of slam-dunk against continuity. Hypercomputation claims in continuous systems (like analog computing with reals) aren't miraculous and could be physical if noise doesn't erase precision. If appealing to hypercomputation were enough to resolve the continuum/discreteness debate, that would be groundbreaking if it held.

The Bohr model, like the Rutherford model before it, is an approximation. QM describes orbitals as continuous probability distributions, not rigid discrete paths. Electrons don't "fall" into the nucleus because their wave nature spreads them out in continuous clouds, and the Heisenberg uncertainty principle shows that localizing the electron to the nucleus would require enormous momentum uncertainty. While the Bohr model's discrete levels emerge from quantizing angular momentum in a continuous field and do play a role in stability, they are not the primary reason the electron doesn't fall in, and the underlying reality includes continuous superpositions and evolutions. You can probabilistically find the electron hanging out in the nucleus all the time.

Planck energy quantizes blackbody radiation modes, but the electromagnetic field itself is continuous in QED, with waves resolving below Planck scales. Infinite-dimensional Hilbert spaces are not "misleading"; they are essential to QM's success in describing continuous position/momentum, infinite possible outcomes (e.g. particle positions), and relativistic QFT.

Finite info capacity (your Bekenstein bound) limits accessible states due to gravity and energy, but that doesn't rule out underlying infinite dimensionality. It ends up being an emergent constraint from holography or decoherence, not proof of base-level discreteness. In theories like AdS/CFT, where reconciliation uses infinite bulk dimensions to map to finite boundary info, the continuum still persists. Lattice QFT is also just another computational tool; physical theories take the continuum limit, and lattices introduce artifacts that vanish there. Classical continua in thermodynamics are approximations, and criticizing them doesn't negate QM/QED's continuous waves or GR's smooth spacetime, which fit data without infinite states causing issues. Ergo, the "black hole hard drive" is a practical limit, not an ontological one. Continuity can exist without exploitable infinity.

And while discreteness emerges across scales, biological discretization is distinct in its endogenous and adaptive nature. It arises from continuous metabolic constraints and self-organization in living systems, unlike the externally imposed or passive partitions in AI (ADCs in GPUs) or simple molecular/electronic systems (quantized electron states). Organisms actively impose boundaries to filter continuous environments into discrete states, such as through sensory thresholds or neural firing, grounded in evolutionary dynamics for homeostasis and survival. This isn't in contradiction with a continuous ontology. Emergence from continuous substrates is standard, as seen in discrete phases from continuous fluids via symmetry breaking, or particle excitations in continuous quantum fields. Biology's discretization operates at the organism level (discrete phenotypes emerging from continuous genetic/environmental interactions) and is observable, just like molecular folds, but integrated with life processes (via real-time feedback loops in cells).

Functionalists usually counter that AI could simulate this endogenously through machine learning, replicating adaptive filtering without biological wetware, but biological systems appear to handle noisy, multiscale continuity where AI would be brittle to perturbations. No special "properties" are being posited beyond emergent mechanisms. It's just that biology's endogenous grounding in continuous physics instantiates what AI's transitive partitions appear only able to simulate.

I appreciate your thoughtful rebuttal and enjoyed writing this response. You've noted the shift in our language from plain speech to more technical jargon and scientific concepts, which is often necessary to resolve nuanced differences in these debates. Still, I can't shake the feeling that such complexity likely excludes others from following along or participating. At its worst, it can obscure core issues or lead those unfamiliar with the material to doubt their own intuitions about reality simply because they lack the specialized knowledge. I can imagine someone reading these posts and thinking, "They must grasp something I don't. I guess my intuitions from lived experience are misguided, and I don't see how I can contribute to a conversation that could shape the future of society." I feel like that is happening a lot in these conversations on Twitter, and I wish it would improve.
1 reply · 0 reposts · 0 likes · 25 views
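The uncertainty-principle point in the tweet above can be made quantitative: confining an electron to nuclear scale forces an enormous momentum spread. A back-of-envelope sketch in Python, using standard CODATA constants; the non-relativistic kinetic-energy formula is used only to show the scale (a proper treatment would be relativistic):

```python
# Back-of-envelope: why an electron "localized to the nucleus" is untenable.
# Heisenberg: dp >= hbar / (2 * dx). Confine to dx ~ 1 fm (nuclear scale).
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electronvolt

dx = 1e-15                      # 1 femtometre
dp = hbar / (2 * dx)            # minimum momentum uncertainty, kg*m/s
E_kin = dp**2 / (2 * m_e) / eV  # non-relativistic KE estimate, in eV

print(f"dp ~ {dp:.2e} kg*m/s, KE ~ {E_kin / 1e6:.0f} MeV")
```

The resulting kinetic energy lands in the GeV range, against a hydrogen binding energy of ~13.6 eV, which is why the electron's wavefunction spreads out rather than collapsing onto the nucleus.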
jessicat
jessicat@jessi_cata·
I am not really sure what it means to say most evidence aside from mathematical and physical models points to continuous reality... A priori reasoning suggests "no hypercomputation" (absent evidence of hypercomputational miracles), but that's math too?

I am not really sure what your point about orbitals is; the discrete orbital model was important in explaining why electrons do not fall into the nucleus. That is a key discrete feature of the Bohr model contra the Rutherford model. Similar for Planck energy.

I realize QM often involves infinite-dimensional Hilbert spaces, but that could misleadingly suggest infinite information capacity when infinite info capacity is not really accessible. Not just the Bekenstein bound, but also it would take too much energy to use a single particle as an infinite hard drive, and that would cause a black hole. Of course it is unclear how to reconcile apparent infinite dimensionality with apparent finite information! This relates to things like lattice QFT. (I realize this gets into somewhat advanced and not entirely stable physics, but my main point was criticizing the *classical* continuum model, and an "infinite number of states" model of classical thermodynamics.)

I am not really sure what you mean with biology, and in any case I doubt the relevant 'discretization' is special to biology; that was my main point with electrons and with analog-to-digital converters. Proteins have discrete properties, but so do other molecules. I am not sure how you can assert a fundamentally continuous ontology underlying everything and then make a leap to the idea that biology discretizes the continuous. It seems almost like a contradiction in terms. If everything is continuous, why is the apparent discretization biology does to the environment not also an illusion? If anything, discrete properties of molecules are more readily observable than discrete properties of biology. It seems strange to posit special discretization properties of biology here that do not apply to molecules or to electronic devices like analog-to-digital converters.
1 reply · 0 reposts · 1 like · 66 views
Alexander Lerchner
Alexander Lerchner@AlexLerchner·
Thanks for the excellent pushback @jessi_cata . You are hitting on the exact pancomputationalist intuitions that dominate the field. But your point about thermodynamics being understandable as a syntactic permutation function is precisely the Abstraction Fallacy at work. (1/3)
jessicat@jessi_cata

Non-straw functionalists claim that causal properties of computation rely on causal properties of the physical substrate. See e.g. Chalmers on rocks (philpapers.org/rec/CHADAR). The idea that a computational mapping is entirely subjective does not accord with the theory of electrical engineering or quantum computing, which deal with physical/computational mappings and place bounds on the computational power of a given physical substrate.

The idea that one should go to thermodynamics first, prior to computation, does not differentiate AIs from humans, because both have thermodynamic properties, and have computational/cognitive properties which depend on the thermodynamic properties. (The paper basically acknowledges this, but the title outright implies AIs cannot be conscious!) The thermodynamic layer can classically be understood in syntactic terms as a permutation function on a discrete set of microstates. This has a reading as a computational process, and computational processes can 'be implemented by' a thermodynamic permutation through operational semantics and other theories found in electrical engineering and quantum information.

The 'map of the mapmaker' can be thought of at different levels of coarseness; a low-level thermodynamic or quantum information model could be thought of as a "fine-grained map" or "actually real"; Platonism or the mathematical universe hypothesis suggests that it is at least possible that mathematical structures are common to both mapmakers' maps and to the physical universe. I could say more, but perhaps this gives an idea of why I do not consider this a rigorous argument contra functionalism.

4 replies · 0 reposts · 16 likes · 2.4K views
Lon()
Lon()@Lon·
Other than our mathematical and physical models, most evidence points to a fully continuous, non-discretized reality. Orbitals appear discretized but physically jitter in place, while energy state transitions, which are only 'instantaneous' in our models, unfold as a continuous process. In QM, position/momentum space is continuous, leading to infinite-dimensional Hilbert spaces. In QED, waves are continuous, non-discretized phenomena with resolutions potentially beyond the Planck length. Under a fully continuous regime, the modeled limits implied by finite quantum microstates corresponding to a black hole's surface area may just be an illusion. Most importantly, in biology an organism makes a continuous set of immanent boundary choices to 'discretize' the physical world. In the case of AI these boundary choices are transitive and externally imposed. They are deliberate partitions carved into continuous physics without endogenous grounding.
1 reply · 0 reposts · 0 likes · 56 views
jessicat
jessicat@jessi_cata·
First, on thermodynamics: I had in mind a finite-state system, which has finite information capacity, at least in a finite region. I know physicists sometimes use classical models that imply infinite information in a given region, but this is somewhat misleading because finite regions do not really have infinite information capacity. See also the black hole information bound. Something interesting about computation is that its implementation *implies* thermodynamic effects; for example, non-reversible computations generate waste heat. Hence, even while being modeled macroscopically, computation has intrinsic thermodynamic properties when embedded in thermodynamics. One needs to go to quantum effects to study an ontic physical continuum. Here there is a way to quantize a discrete thermodynamic theory, because a permutation function can be implemented by a unitary on a finite-dimensional Hilbert space. The operator space (C* algebra) for the thermodynamic system appears as a sub-algebra of the operator space for the Hilbert space.

Re "is intrinsically syntax": The issue here is that if you say a syntactic/computational model is wrong because it "is syntax" and go to physics, you find physics gives you more syntax in the form of mathematical models. The map/territory relationship is unsolved in physics; see for example "ontic models" in quantum mechanics. This is a problem for any physicalist theory. Rather, to be conservative to an ontic interpretation of physics, it is more sensible to translate the terms of our psychological or computational theories to the terms of physics, for example by translating propositions to subsets of classical phase space, or to quantum projectors. This can of course be criticized for being a basically syntactic translation, and only dealing with maps rather than territories. But I'm not sure how you expect to get around this fundamental problem through appeals to physics.

Re: human/AI differences. Here is where my disagreement is strongest: even if you show that computational models are unrealistic, you do not thereby show that humans but not AIs instantiate a mapmaker. AIs run on GPUs, which are physical devices. Unless you conflate your syntactic map of the AI with the AI itself, you cannot infer that an extra map-maker is needed.

Re: section 2.3: The term "metabolic" seems like either a big assumption or a consequence of embeddedness in thermodynamics (common to humans and AIs). The key part seems to be "Instead, the continuous environment is filtered into discrete states directly through the organism's metabolic constraints". First off, in what sense is the environment "really continuous"? Quantum mechanics is continuous at some level. But is an organism needed to filter quantum states into discrete states? No; electrons have discrete orbitals, among other things. And anyway, GPUs of course do this (through, among other things, analog-to-digital converters). So again, this does not distinguish humans from AIs.
1 reply · 1 repost · 21 likes · 776 views
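The finite-information point above has a concrete form: the Bekenstein bound caps the bits a region can hold at I ≤ 2πRE/(ħc·ln 2). A small Python sketch evaluating it for an arbitrary 1 kg, 1 m system (the choice of mass and radius here is only illustrative):

```python
import math

# Bekenstein bound: maximum information (in bits) storable in a sphere
# of radius R containing total energy E:  I <= 2*pi*R*E / (hbar * c * ln 2)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    energy = mass_kg * c**2  # rest-mass energy as the total energy budget
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

bits = bekenstein_bits(1.0, 1.0)  # ~2.6e43 bits: vast, but finite
print(f"{bits:.2e} bits")
```

The bound is astronomically large but finite, which is exactly the tension with infinite-dimensional Hilbert spaces that the tweet raises.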
Lon()
Lon()@Lon·
I enjoyed this paper and am glad these conversations are moving us away from the bombastic consciousness assertions of the last few years. We will, however, keep falling into the same traps, recirculating the same arguments and refutations, as long as we focus on computational functionalism vs physicalism, and "intrinsic" properties, as the core debate. Historical rigor and our greatest minds have not only previously considered these issues, but have also provided arguments that continue to appear grounded and relevant to today's AI context. And while it may be easy to dismiss some of these thinkers based on their theological or biological beliefs, the majority of these arguments do not make biological-superiority, metaphysical, or theological assumptions.

Ultimately you can boil the core of this thinking down to boundary conditions and the distinction between "immanent" vs "transitive" operations. The consequences fall out of where AI vs biology sit on this divide. Boundary conditions specify the constraints or values at the edges of a system, determining how it evolves over time or space. For continuous processes (like the raw analog flux of physical reality), these conditions dictate stable states or attractors. Biology's boundaries stem from immanent causes (effects inherent within the system), fostering content causality, while AI's are transitive (effects detached), leading instead to vehicle causality. This roots the argument in causal structure and how boundaries are conditioned, instead of relying on the chauvinism of bio-magic or just resorting to the "intrinsic" properties of physicalism.

The crux is who or what sets those boundaries. In biological systems, boundary conditions emerge immanently; they are self-organized through endogenous processes, where the organism actively enacts its own separations between self and world that bound internal states from external ones. This creates "intrinsic" semantics because the boundaries are grounded in the system's lived, experiential dynamics. They are not arbitrary but continually negotiated in real time, allowing for genuine instantiation of consciousness. In contrast, for AI, boundaries are transitive and externally imposed, mostly by human designers today or possibly designed by AI in the future (the "mapmaker"). These are deliberate partitions carved into continuous physics, but they lack endogenous grounding. They have no metabolic stake in the game, no intrinsic teleology driving the discretization. As the paper puts it, thermodynamics provides stable states, but the mapmaker must supply the alphabet, creating a causality gap where symbols simulate behavior without instantiating content. This is why even embodied or scaled-up AI can't cross the ontological boundary. AI's boundary conditions are always borrowed, not self-generated.

If we push this further, it has wild implications for AI design. To instantiate consciousness, we may need systems that autonomously evolve their own boundary conditions: bio-engineered hybrids, or something with genuine homeostasis. But I digress. Without widespread consensus on this topic we will be stuck endlessly arguing whether simulation is equivalent to actual consciousness, while completely missing the correct ethical framing of "tool misuse rather than sentience". Without this proper ethical framing we are potentially dooming ourselves to a series of incorrect, and potentially catastrophic, societal choices around AI.

These are clearly not my original thoughts alone; they rely upon millennia of thought going back as far as minds like Aristotle. My greatest hope is that we continue to understand how important these philosophical perspectives are to the problems we face today. Past continues to be prologue, and it is essential that we reach back as much as we reach forward for the answers.
Séb Krier@sebkrier

An excellent paper for anyone interested in rigorous physicalist argument against computational functionalism. Alex is a fantastic, careful thinker and influenced my views a lot; we're working on a broader blog post breaking these concepts down, stay tuned! 🐙

0 replies · 0 reposts · 4 likes · 88 views
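The claim above that boundary conditions "dictate stable states or attractors" shows up in even the simplest diffusion setting: identical interior dynamics relax to different attractors depending solely on the values pinned at the edges. A minimal sketch (1D heat equation with fixed Dirichlet boundaries, explicit Euler; all parameters arbitrary):

```python
def relax_heat(boundary_left, boundary_right, n=50, steps=20000, alpha=0.2):
    """Relax a 1D rod to steady state under fixed (Dirichlet) boundaries."""
    u = [0.0] * n
    u[0], u[-1] = boundary_left, boundary_right
    for _ in range(steps):
        new = u[:]
        # Interior points follow the same local diffusion rule regardless
        # of what the boundaries are set to.
        for i in range(1, n - 1):
            new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = new
        u[0], u[-1] = boundary_left, boundary_right  # boundaries stay pinned
    return u

# Same interior dynamics, different boundaries -> different attractor.
# With edges held at 0 and 1, the steady state is the linear ramp i/(n-1).
profile = relax_heat(0.0, 1.0)
```

The interior rule never changes; only the edge conditions do, and they fully determine which steady state the system settles into. That is the (uncontroversial, purely mathematical) sense of "boundary conditions dictate attractors" the essay builds on.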
Vadim
Vadim@zacodil·
nobody accidentally swaps $50M into a pool with $36K of liquidity lol. fresh wallet, $50.4M from Binance, zero slippage protection, routed through the jankiest Sushiswap path possible. and then an MEV bot just happens to flash borrow $29M from Morpho in the same block and pocket $9.9M? cmon. 0xngmi called this exact play a year ago - construct a deliberately terrible swap, let a friendly bot extract the value, dirty money comes out the other side as "legit MEV profit." $154K per AAVE isn't a fat finger. it's a laundering fee
Watcher.Guru@WatcherGuru

JUST IN: Trader accidentally swaps $50 million $USDT for $36,000 $AAVE on Ethereum.

381 replies · 580 reposts · 7.2K likes · 1.2M views
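The mechanics Vadim describes fall straight out of constant-product AMM math: selling $50M into a pool with only ~$36K of depth on the other side drains essentially the whole output reserve at a catastrophic effective price. A toy x·y=k sketch; the pool reserves and the $154 AAVE price are assumptions for illustration, not the actual on-chain pool state:

```python
def constant_product_swap(reserve_in, reserve_out, amount_in):
    """Output amount for a swap against an x*y=k pool (fees ignored)."""
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    return reserve_out - k / new_reserve_in

# Hypothetical pool: ~$36K of depth on each side.
usdt_reserve = 36_000.0
aave_reserve = 36_000.0 / 154.0  # ~234 AAVE at an assumed $154 market price

aave_out = constant_product_swap(usdt_reserve, aave_reserve, 50_000_000.0)
effective_price = 50_000_000.0 / aave_out  # dollars paid per AAVE received

print(f"AAVE received: {aave_out:.1f}, effective price: ${effective_price:,.0f}")
```

In this toy setup the swapper receives almost the entire AAVE reserve, worth roughly $36K at the assumed market price, at a six-figure effective price per token, matching the shape of the "accidental" trade in the quoted tweet.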
Alex Skryl
Alex Skryl@ssskryl·
You’re assuming computation must be symbolic manipulation over some discrete alphabet. That’s a property of Turing machines, not of computation in general. Analog computers use continuous state evolution to implement computation. The philosophical question you’re actually asking is not about continuous vs discrete computation, but rather “is it still computation if there is no problem encoded into the inputs to actually solve?”. And my unphilosophical answer is “why not?”
1 reply · 0 reposts · 1 like · 35 views
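Skryl's "continuous state evolution as computation" can be illustrated with the textbook example of an analog integrator: a physical system obeying dx/dt = -x "computes" exp(-t) simply by evolving. The sketch below only simulates that continuous evolution by Euler discretization (a digital stand-in, not an actual analog machine):

```python
import math

# Idealized analog computation: a system whose physical dynamics are
# dx/dt = -x "computes" x0 * exp(-t) just by evolving in time.
# Here we simulate that continuous evolution with small Euler steps.
def analog_decay(x0, t_end, dt=1e-4):
    x, t = x0, 0.0
    while t < t_end:
        x += -x * dt  # the continuous-time dynamics, discretized
        t += dt
    return x

result = analog_decay(1.0, 1.0)  # should approximate exp(-1)
```

No symbol manipulation or discrete alphabet appears anywhere in the dynamics themselves; the trajectory *is* the computation, which is the distinction the tweet draws against the Turing-machine picture.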
Séb Krier
Séb Krier@sebkrier·
An excellent paper for anyone interested in rigorous physicalist argument against computational functionalism. Alex is a fantastic, careful thinker and influenced my views a lot; we're working on a broader blog post breaking these concepts down, stay tuned! 🐙
Séb Krier tweet media
Alexander Lerchner@AlexLerchner

🧵1/4 The debate over AI sentience is caught in an "AI welfare trap." My new preprint argues computational functionalism rests on a category error: the Abstraction Fallacy. AI can simulate consciousness, but cannot instantiate it. philpapers.org/rec/LERTAF

47 replies · 44 reposts · 519 likes · 56.2K views