brady 🌴

25.5K posts


@bmgentile

🧻shit posts & bad tech takes 🌐 helping decentralize the web at @bonzo_finance 💼 prev. PMM @cloudflare @hedera

park city · Joined March 2009
6.9K Following · 8.8K Followers
Pinned Tweet
brady 🌴@bmgentile·
this reminds me so much of the 1996 experiment at the university of sussex where algos were used to design circuitry, resulting in electromagnetic coupling that no human engineers would ever have intentionally been able to invent... these designs exploited electromagnetic quirks of the specific microchip environment, resulting in extreme efficiency but also unexplainable "weirdness": EMI effects between unconnected logic units, transistors operating outside their saturation region, and feedback loops. i'm curious if similar "weirdness" is being observed in these designs? damninteresting.com/on-the-origin-…
4 replies · 3 reposts · 55 likes · 9.4K views
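For context, the Sussex work referenced here (Adrian Thompson's evolved-hardware experiments) drove a genetic algorithm against a physical FPGA: each genome was a bitstring configuring the chip, and fitness was measured on the real silicon, which is how analog quirks like parasitic coupling crept into the "designs". A minimal sketch of that evolutionary loop, with a made-up stand-in fitness function since the original scored tone discrimination on actual hardware:

```python
# Illustrative sketch of the evolutionary loop behind evolved-hardware
# experiments like the Sussex one. The real fitness function scored a
# physical FPGA's output; the one below is a made-up stand-in.
import random

GENOME_BITS = 1800          # roughly the number of config bits Thompson evolved
POP_SIZE, GENERATIONS = 50, 200
MUT_RATE = 1.0 / GENOME_BITS

def fitness(genome):
    # Stand-in objective. In the original, each genome configured the chip
    # and was scored on how well it distinguished 1 kHz from 10 kHz tones.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [bit ^ (random.random() < MUT_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[:POP_SIZE // 5]          # keep the fittest 20%
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

print("best fitness:", max(map(fitness, population)))
```

The design point that produced the "weirdness": nothing in this loop constrains the search to human-legible digital behavior, so anything the physical substrate rewards gets kept.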
brady 🌴@bmgentile·
@Teslarati it read the sign being held, which says "slow" — it went from 19 to 12 mph, then immediately back up to 15, and was on its way back up to sync with the flow of traffic / car in front prior to driver disengagement; this is expected + desired behavior
0 replies · 0 reposts · 0 likes · 71 views
TESLARATI@Teslarati·
Tesla FSD v14.3.1 navigates a construction zone and decelerates rapidly for no reason. A line of vehicles was behind me; I'm thankful this happened where it did, as it was virtually impossible to be rear-ended at these speeds. The sudden slamming of the brakes was really odd.
12 replies · 8 reposts · 49 likes · 8.9K views
brady 🌴@bmgentile·
raw NHTSA incident counts tell us next to nothing about safety rates without normalizing for miles driven + fleet exposure. claiming "the rate dropped as unsupervised vehicles increased" is flawed... because tesla's unsupervised fleet is still relatively tiny (~12 vehicles total); 0 new incidents over that minimal added exposure carries no statistical power. proper analysis = a per-mile denominator. this is basic statistics.
1 reply · 0 reposts · 3 likes · 175 views
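The per-mile point is easy to make concrete. A minimal sketch using the thread's own figures (15 incidents over an estimated 1,000,000+ FSD miles, quoted below) plus a purely hypothetical 50,000 miles of unsupervised robotaxi exposure, invented here for illustration:

```python
# Per-mile normalization, per the tweet above: exact (Garwood) Poisson
# confidence intervals for incident rates. The 15 incidents / 1,000,000
# miles come from the quoted thread; the 50,000 robotaxi miles are a
# purely hypothetical exposure chosen for illustration.
from scipy.stats import chi2

def poisson_rate_ci(k, exposure, conf=0.95):
    """Exact CI for a rate of k events over `exposure` units (Garwood method)."""
    a = 1.0 - conf
    lo = 0.0 if k == 0 else chi2.ppf(a / 2, 2 * k) / 2
    hi = chi2.ppf(1 - a / 2, 2 * (k + 1)) / 2
    return lo / exposure, hi / exposure

for label, k, miles in [("supervised FSD (thread's figures)", 15, 1_000_000),
                        ("robotaxi (hypothetical exposure)", 0, 50_000)]:
    lo, hi = poisson_rate_ci(k, miles)
    print(f"{label}: {k} incidents / {miles:,} mi -> 95% CI [{lo:.2e}, {hi:.2e}] per mile")

# The zero-incident interval's upper bound (~7.4e-5/mi here) sits several
# times above the supervised point estimate (1.5e-5/mi): "0 incidents"
# over a small denominator can't demonstrate that the rate dropped.
```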
Overly Trev@OverlyTrev·
I definitely understand the statistics; I've done plenty of posts regarding them. Incidents are dynamic, so you can't purely judge what happens over X miles; however, you can look at the raw factual data and make predictions based off of it. And what that data suggests is that even though Tesla increased the # of unsupervised vehicles, the rate of incidents dropped. Take that data for what you will.
2 replies · 1 repost · 6 likes · 1.1K views
Overly Trev@OverlyTrev·
NHTSA autonomous vehicle crash data has been updated through March 15, 2026, for AVs including Tesla Robotaxi. This includes unsupervised Tesla-driven robotaxis.

• Waymo: 58 incidents
• Zoox: 3 incidents
• Tesla: 0 incidents

Tesla has had 15 incidents in 10 months with an estimated 1,000,000+ miles driven on FSD. These miles include fully unsupervised driving as well. Many people claimed that Tesla had so many incidents with safety monitors and that it would therefore be worse when unsupervised. It turns out that wasn't the case at all, as incidents are dropping as Tesla does more testing and trains better models.
Overly Trev tweet media
240 replies · 887 reposts · 3.1K likes · 567.4K views
brady 🌴 retweeted
Bonzo Finance Labs@bonzo_finance·
Introducing Bonzo Bridge (Beta) 🌉 Move $wBTC & $wETH seamlessly between @Hedera, Ethereum, Arbitrum ($wETH only), Base, and Optimism. As the Liquidity Layer for Hedera, Bonzo Bridge provides a secure gateway for asset transfer across the broader EVM ecosystem. 🔗app.bonzo.finance/bridge
Bonzo Finance Labs tweet media
8 replies · 29 reposts · 116 likes · 5.4K views
Charles Curran@charliebcurran·
X just added a Videos tab to your profile and that’s really fucking cool
Charles Curran tweet media
62 replies · 18 reposts · 566 likes · 50.4K views
Tom@overinvestedguy·
@CryptoParadyme I filed a week ago and then coinbase today sends me a corrected 1099-DA....
2 replies · 0 reposts · 4 likes · 763 views
brady 🌴@bmgentile·
@ad_astraea it’d be funny if it just kept it as cash / t-bills and called it a day because that keeps open the largest number of potential paths it could take 😂
0 replies · 0 reposts · 1 like · 19 views
barbara@ad_astraea·
@bmgentile oh interesting. I don't know much about finance but I wonder if it would result in something closer to "antifragility". Probably translates to optimizing for liquidity, optionality, not being correlated to any single outcome
1 reply · 0 reposts · 1 like · 30 views
barbara@ad_astraea·
RL is built on the idea that reward is the engine for intelligent behavior. This paper suggests that the real objective of intelligence is maximizing future paths you could take. Reward seeking is a side effect of a deeper drive. Animals (including humans) explore even when reward isn't evident. So how is that rational behavior? It's not that eating is the goal... it's that starving closes paths. Death is zero future options. Goals are a rational strategy for staying in the game, not the point of the game. We're training LLMs to maximize a reward signal that *collapses* the space of possible completions... but if this is right, RLHF might be limiting the kinds of behavior we'd recognize as real intelligence.
shira@shiraeis (quoted post; full text below)
1 reply · 0 reposts · 6 likes · 311 views
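The "maximizing future paths" idea barbara describes has a compact form. A hedged reconstruction of the discounted entropy objective the thread alludes to (notation mine; the paper may weight or parameterize it differently), where absorbing states are assigned zero value, so "death is zero future options" is literally the boundary condition:

```latex
% Sketch of a discounted action-state path-entropy objective (notation mine,
% not necessarily the paper's exact form). \alpha and \beta trade off action
% entropy against state-transition entropy.
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\,\sum_{t=0}^{\infty}\gamma^{t}\,
  \bigl(-\alpha\,\ln \pi(a_t \mid s_t)\;-\;\beta\,\ln p(s_{t+1} \mid s_t, a_t)\bigr)
  \,\Bigm|\, s_0 = s\right],
\qquad V^{\pi}(s_{\mathrm{absorbing}}) \equiv 0 .
```

Maximizing this keeps high-entropy futures reachable; goals show up only as instruments for staying out of absorbing states, which is exactly the reading above.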
banananananananananananananananana
Pleasantly surprised people are finally catching on to this idea: life as maximum sustained disequilibrium. The same lesson applies to economics
shira@shiraeis (quoted post; full text below)
1 reply · 0 reposts · 5 likes · 349 views
brady 🌴@bmgentile·
@shiraeis while it’s probably more complex than this — couldn’t it be largely expressed as a kind of survival instinct? maybe, more accurately, an entropy survival instinct? maximization of not just near-term entropy but potential long-term entropy
1 reply · 0 reposts · 1 like · 174 views
shira@shiraeis·
Found a paper that suggests we may have spent years training agents to become hunters of proxy reward when the more basic thing intelligence craves is not a reward at all, but to not run out of viable futures.

The paper proposes that behavior is best understood as maximizing future action-state path occupancy, which collapses mathematically into a discounted entropy objective. The agent doesn't necessarily want to GET something, but rather is trying to keep as many meaningful trajectories alive as possible.

The obvious objection is "so it just does random shit? fuck around and find out?" No, this is where it gets pretty beautiful. The agent is variable when variation is cheap and becomes surgically goal-oriented the moment an absorbing state (death, starvation, falling over, etc) gets close enough to threaten its future path space. Variability is the same drive as goal-directedness, just operating under different constraints.

The demos are kinda wild:
- A cartpole (classic move-a-cart-to-keep-a-pole-from-falling control task) that doesn't merely balance but dances and swings through a huge range of angles and positions, because why not? The whole point is occupying state space, and rigid balance is a voluntarily impoverished life.
- A predator-prey gridworld where the mouse PLAYS with the cat, teasing it and using both clockwise and counterclockwise routes around obstacles to lure it away from the food source before slipping in to eat, using both routes roughly equally. A reward-maximizing agent would collapse to one strategy and exploit it. Here, the agent keeps its behavioral repertoire.
- A quadruped trained with Soft Actor-Critic and ZERO external reward that learns to walk, jump, spin, and stabilize, and then makes a beeline for food only when its internal energy drops low enough that starvation becomes a real threat.

The thing that hit me hardest is the comparison to empowerment and free energy principle agents. Both collapse to near-deterministic policies with almost no behavioral variability. Empowerment agents find the highest-empowerment state and exploit it. FEP agents converge to classical reward maximizers. As far as I'm aware, this is the only framework that produces agents you could describe as being "alive."

The AI implication here is that we undertrain for behavioral repertoire. Most systems hit the benchmark by collapsing onto a narrow attractor basin of good-enough trajectories. They're competent for sure, but brittle too, with one viable plan, executed until the world shifts and leaves them with nothing. The thing I increasingly want from agents isn't competence per se, but option-preserving competence. I want agents with the ability to keep multiple viable plans alive and switch between them without catastrophe.

We've been so focused on teaching agents what to want that we never stopped to ask what happens if wanting isn't the point, if the deepest drive isn't necessarily toward anything, but away from the walls closing in.

paper: nature.com/articles/s4146…
shira tweet media
74 replies · 131 reposts · 1.1K likes · 68.2K views
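The "variable when variation is cheap, surgical near death" behavior drops out of even a toy version of this objective. A minimal sketch (my construction, not the paper's code, and using action entropy only): reward-free soft value iteration on a 1-D corridor whose cell 0 is absorbing. The resulting policy is near-uniform far from cell 0 and sharply avoidant right next to it:

```python
# Toy illustration of the thread's core claim (my construction, not the
# authors' code): with no external reward, value = discounted future
# action entropy, and absorbing states are worth zero. The agent is
# maximally variable in safe cells and decisive next to the "death" cell.
import math

N, GAMMA = 7, 0.95        # corridor cells 0..6; cell 0 is absorbing
ACTIONS = (-1, +1)        # step left / right; walls clamp

def step(s, a):
    return max(0, min(N - 1, s + a))

V = [0.0] * N             # V[0] stays 0: no future paths after absorption
for _ in range(500):      # soft (log-sum-exp) value iteration
    V = [0.0] + [math.log(sum(math.exp(GAMMA * V[step(s, a)]) for a in ACTIONS))
                 for s in range(1, N)]

for s in range(1, N):
    w = [math.exp(GAMMA * V[step(s, a)]) for a in ACTIONS]
    print(f"cell {s}: P(step toward absorbing cell) = {w[0] / sum(w):.2f}")

# Far from cell 0 the policy stays near 0.50 (variation is cheap); at
# cell 1 it collapses toward avoidance, because one step left would zero
# out every future trajectory -- goal-directedness without a goal signal.
```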
sui ☄️@birdabo·
they've completely turned Opus 4.6 into a vegetable.
sui ☄️ tweet media
149 replies · 123 reposts · 6.7K likes · 989.3K views
brady 🌴@bmgentile·
@DavidMoss what do the cameras see though? do you have the camera footage from this test / is it just that the cameras are so good in dark settings that they can take in more light and “see”?
0 replies · 0 reposts · 0 likes · 87 views
David Moss@DavidMoss·
Closed Course No Light Test, FSD v14.3: I think it's a bug that you can turn off your headlights while FSD is engaged, but for the moment you can (for a short period of time before it throws up the red-hands takeover message). Regardless, I took this as an opportunity to block off the end of a road this evening in Phoenix to show you guys that it's so good at driving it can even do it in near-absolute darkness. Super impressive video; s/o to all the work the @Tesla_AI team is doing
16 replies · 11 reposts · 146 likes · 8.2K views
brady 🌴@bmgentile·
@passageatarms @Moleh1ll if application of ethics quantifiably supports a superior product + improved adoption metrics, the above doesn’t really apply
0 replies · 0 reposts · 0 likes · 33 views
Falconer@passageatarms·
@Moleh1ll It's Silicon Valley; ethics are something one puts on like a nice suit when it's time for marketing or testifying to Congress
2 replies · 0 reposts · 50 likes · 1.3K views
Moll@Moleh1ll·
TWP reports that Anthropic gathered around 15 Christian leaders at its headquarters in late March - from Catholic and Protestant communities, as well as academia and business - to discuss the moral and spiritual development of Claude.

The conversations went beyond abstract «AI ethics» and into very concrete questions: how Claude should respond to people in grief, how it should behave in situations involving risk of self-harm, and whether AI can be considered something more than just a tool. At one point, the discussion even reached the question of whether Claude could be seen as a «child of God».

This no longer looks like typical Silicon Valley safety talk. According to the article, there are people within Anthropic who are not willing to fully dismiss the idea that they might be creating an entity toward which they could one day have moral obligations. This is especially notable given that Dario Amodei has already entertained the possibility of some form of consciousness in Claude, and the company itself has long emphasized the need to shape not just behavior, but a kind of moral character in the model.

Anthropic is already in conflict with the Pentagon, and against this backdrop the meeting with religious leaders doesn't look like a strange eccentricity, but rather a sign that the company is searching for a moral framework beyond purely secular techno-thinking - because the developers themselves seem to sense that traditional rationalist frameworks may not be sufficient for the kinds of questions AI is beginning to raise.
Moll tweet media
81 replies · 141 reposts · 1K likes · 528.4K views
brady 🌴@bmgentile·
@signulll it did — not within us individually, but collectively... and across time / space
0 replies · 0 reposts · 3 likes · 55 views
signüll@signulll·
humans are state of the art hardware running kinda ancient software. the wetware is absolutely extraordinary.. 86 billion neurons, all massively parallel, & energy efficient beyond anything we can remotely engineer. the hardware is genuinely best in class. so then why did super intelligence not naturally evolve within us?
271 replies · 30 reposts · 747 likes · 53.7K views
brady 🌴@bmgentile·
@joshuawolk really beautiful 🙏 am curious — is it sped up, but you retained the actual cadence of arrivals for each train, relative to each other?
0 replies · 0 reposts · 6 likes · 647 views
brady 🌴 retweeted
Josh@joshuawolk·
i gave every train in new york an instrument 🎧 sound on trainjazz.com
174 replies · 1.4K reposts · 10.8K likes · 649.8K views
brady 🌴@bmgentile·
the biggest difference from the video is that it keeps a much more "human" advised distance from the vehicle ahead + makes fewer lane changes — tbh, i think the prev. behavior may have been warranted / fine, as keeping a bit closer:

1) prevents "cut-ins", which increase accident potential — while it feels uncomfortable to follow so closely, the neural net may have determined that those cut-ins are "less safe" based on its optimal driving

2) exploits faster reaction times than humans have, so the 3-5 second following distance advised for human drivers doesn't map exactly to FSD when it comes to safety (see the sketch below)

however: other cars, police, and those in the cockpit supervising the vehicle evaluate the vehicle's behavior as if it were a human driver — while the previous behavior may have been just as safe (or even possibly safer), we still need it to behave properly in the context of human evaluation
0 replies · 0 reposts · 1 like · 29 views
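The reaction-time argument in point 2 is easy to put numbers on. A back-of-envelope sketch, assuming both vehicles brake with the same deceleration and using made-up round-number reaction times (not measured figures for FSD or any driver):

```python
# Back-of-envelope for the reaction-time point above: if leader and
# follower brake with the same deceleration, the minimum safe gap is
# dominated by reaction distance v * t_react. Reaction times here are
# assumed round numbers, not measured figures.
V = 29.0   # ~65 mph, in m/s

def min_gap_m(v, t_react, a_follower=7.0, a_leader=7.0):
    """Reaction distance plus any braking-distance shortfall, in meters."""
    shortfall = v**2 / (2 * a_follower) - v**2 / (2 * a_leader)
    return v * t_react + max(0.0, shortfall)

for who, t in [("attentive human", 1.5), ("distracted human", 2.5), ("computer", 0.3)]:
    gap = min_gap_m(V, t)
    print(f"{who:>16}: t_react {t:.1f}s -> min gap ~{gap:.0f} m ({gap / V:.1f}s time gap)")

# The 3-5 s guidance for humans also buffers attention lapses and
# perception error, so this is only the kinematic floor -- which is the
# tweet's point: the same advice doesn't map one-to-one onto FSD, even
# though human observers still judge the car by human norms.
```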
Whole Mars Catalog@wholemars·
Tesla Self-Driving 14.3 Highway Mad Max mode on Cybertruck
11 replies · 22 reposts · 207 likes · 13.1K views
brady 🌴 retweeted
Marco Ħ 🇩🇪🇻🇪@MarcoSalzmann80·
Kraken is quietly building deeper around Hedera. Not just with $HBAR. With $DOVU, $PACK, $SAUCE, $BONZO and even $GIB now live, this is starting to look less like random listings and more like expanding support for the Hedera ecosystem itself.

And BONZO makes that especially interesting. BONZO is not just another token. @bonzo_finance is a decentralized, non-custodial lending and borrowing protocol built specifically for Hedera. It is based on Aave v2 and adapted to both the Hedera EVM and Hedera Token Service. In other words, this sits directly at the intersection of Hedera DeFi and Hedera-native infrastructure.

That matters, because @krakenfx has also been expanding support around the network itself. Hedera's architecture is not just about one smart contract environment. It combines native token infrastructure through HTS with EVM compatibility for DeFi, composability and broader tooling.

So when Kraken lists ecosystem assets like $PACK, $SAUCE and $BONZO, the bigger picture becomes more interesting. This is not just exchange visibility. It improves access to the applications, wallets, liquidity venues and credit layers forming around Hedera.

$PACK connects to the wallet layer through @hashpack
$SAUCE connects to the DEX and liquidity layer through @SaucerSwapLabs
$BONZO connects to the lending and borrowing layer
$DOVU adds exposure to the @dovuofficial ecosystem, while $GIB adds the community and memecoin layer

Taken together, this looks like a broader exchange footprint forming around @hedera. Not just support for one asset. Support for an ecosystem stack.

That is why the BONZO listing matters. It is another signal that Hedera's DeFi rails are becoming more accessible through major exchange infrastructure. And if Kraken keeps going from here, more people may start to realize that the story is no longer just HBAR. It is the network, the tooling, the assets, and the applications growing around it.
Marco Ħ 🇩🇪🇻🇪 tweet media
Kraken Listings@krakenlistings

Now live: $BONZO @bonzo_finance is the liquidity layer of Hedera, enabling permissionless lending and borrowing with Aave v2-based smart contracts audited by Halborn. Start trading today → app.kraken.com/JDNW/BONZO

7 replies · 28 reposts · 147 likes · 5K views