Baptoshi

661 posts

@Baptoshi

PM Crypto https://t.co/9w6CENrv6N

Joined April 2012
677 Following · 1.3K Followers
Baptoshi retweeted
Dune
Dune@Dune·
Following the KelpDAO hack, we built an open analysis of DVN security configurations across every active OApp on LayerZero over the last 90 days. Of ~2,665 unique OApp contracts: 47% run a 1-of-1 DVN security floor, 45% run 2-of-2, and ~5% run 3-of-3 or higher. As we know, KelpDAO's rsETH sat in the first bucket. Open query, public methodology, feedback welcome: dune.com/dune/layerzero…
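The breakdown quoted above can be reproduced from raw counts. A minimal sketch; the per-bucket counts here are approximate reconstructions back-derived from the tweet's percentages, not Dune's actual query output:

```python
# Back-derived illustration of the DVN security-floor breakdown above.
# Bucket counts are approximate reconstructions from the quoted
# percentages (47% / 45% / ~5%), not the real Dune query results.
total_oapps = 2_665
buckets = {"1-of-1": 1_252, "2-of-2": 1_199, "3-of-3+": 133}

for config, count in buckets.items():
    # Each bucket's share of all active OApp contracts.
    print(f"{config}: {count / total_oapps:.0%}")
```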
Baptoshi
Baptoshi@Baptoshi·
Erik Schluntz at @AnthropicAI said he hasn't written code by hand in months. In 2 days he shipped 49 full features, all written 100% by AI. He just dropped a 30-min talk on exactly how he does it. Worth more than any $500 vibe-coding course. Here's his entire framework:

1/ Be Claude's PM - not its coder. He spends 15–20 min collecting context and guidance into a single prompt before Claude writes a line. "What guidance would a new employee need?" If you can't answer that, you're not ready to prompt.

2/ Leaf nodes, not architecture. Let AI write the leaves of the tree; keep the trunk human. His team merged 22,160 lines written heavily by Claude into their RL codebase - and it worked because it was leaf-node work, not core architecture.

3/ Find a verification layer. "Managing implementations you don't understand is a problem as old as civilization." Tests. Stress tests. Using the product. Spot-checking. Every manager on earth already does this. Do it for Claude.

4/ Remember the exponential. METR data: the length of tasks AI can complete is doubling every 7 months. The version of Claude you're skeptical of today will look like a toy in 12 months.

The people winning at AI-assisted coding aren't better prompters. They're better managers. Ask not what Claude can do for you. Ask what you can do for Claude.
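The exponential in point 4 is easy to sanity-check. A two-line sketch, assuming the 7-month doubling rate quoted from METR:

```python
# If the length of tasks AI can complete doubles every 7 months,
# how much longer are completable tasks after N months?
DOUBLING_MONTHS = 7

def growth_factor(months: float) -> float:
    """Multiplier on task length after `months` at a 7-month doubling rate."""
    return 2 ** (months / DOUBLING_MONTHS)

print(round(growth_factor(12), 2))   # ~3.28x after one year
print(round(growth_factor(24), 2))   # ~10.77x after two years
```

Roughly a 3x jump in one year is what makes "the version you're skeptical of today will look like a toy" more than rhetoric.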
Baptoshi
Baptoshi@Baptoshi·
@gabriberton Just blindly deleting "dead code" from prompts is a terrible idea. Maybe map your files and dependencies first (gitnexus exists for a reason), unless you enjoy nuking your own codebase.
Gabriele Berton
Gabriele Berton@gabriberton·
Vibe coding creates lots of dead code. Run this prompt often. You're welcome:

--- Delete all dead code. Use ruff and vulture ---
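Tools like vulture find dead code by static analysis rather than guesswork, which is why mapping before deleting matters. A toy sketch of the core idea using only the stdlib `ast` module (not vulture's actual implementation, which also handles classes, attributes, dynamic access, and whitelists):

```python
import ast

def unused_functions(source: str) -> set[str]:
    """Report top-level functions that are defined but never referenced.

    A minimal version of what dead-code detectors do: collect function
    definitions, collect name usages, and flag the difference.
    """
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return defined - used

code = """
def helper():      # defined but never called -> dead
    return 1

def main():
    return 2

main()
"""
print(sorted(unused_functions(code)))  # ['helper']
```

Real detectors are conservative for a reason: names reached via `getattr`, reflection, or plugin registries look "dead" to this kind of analysis, which is exactly the failure mode blind deletion runs into.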
Baptoshi
Baptoshi@Baptoshi·
How beautiful France is 🇫🇷
Baptoshi tweet media
Baptoshi
Baptoshi@Baptoshi·
There is a recurring pattern in crypto markets. A new primitive emerges quietly at the protocol layer, initially perceived as a niche innovation, and within a few years becomes foundational infrastructure. Automated Market Makers did this for trading. Lending pools did this for on-chain credit intermediation. Ethereum is now converging toward a new primitive of similar magnitude: permissionless, programmable vaults.

@LidoFinance demonstrated that staking could operate at institutional scale. Lido V2 optimized capital efficiency through pooled exposure and abstraction of validator operations. However, this architecture introduced structural limitations from an institutional standpoint:
> capital was commingled,
> risk was mutualized across participants,
> and validator selection was abstracted away from the end allocator.

This model maximizes efficiency, but does not align with institutional requirements around mandate control, counterparty selection, and risk isolation. Institutional allocators do not simply seek yield. They require granular control over exposure, clear segregation of risk across mandates, and enforceable constraints embedded within the investment structure itself.

This is precisely where Lido V3 and stVaults represent a step change. Instead of a monolithic staking pool, the system evolves toward a framework of dedicated, programmable staking vaults, where each vault can be configured with its own operational, financial, and governance parameters. At the vault level, participants can explicitly define:
1/ The validator set and infrastructure counterparties
2/ The risk framework set by the curator
3/ The access model, whether fully permissionless or restricted to a defined set of investors.

This transition fundamentally redefines the nature of staking. Once staking becomes programmable at this level, a new set of institutional use cases naturally emerges. Segregated staking mandates can be constructed, allowing allocators to maintain direct control over counterparties and risk exposure. Staking positions can be integrated into broader capital structures, including collateralized financing strategies and balance-sheet optimization frameworks. More importantly, staking yield itself becomes a predictable, modelable cash flow, enabling the development of on-chain credit products and structured yield instruments.

The core shift is conceptual, but critical. Lido V2 made staking scalable and liquid. Lido V3 makes it programmable, segmentable, and structurally compatible with institutional capital.
Baptoshi tweet media
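The three vault-level parameters described above can be pictured as a single configuration object. A purely illustrative sketch; all names here are hypothetical and do not reflect Lido V3's actual on-chain interface:

```python
from dataclasses import dataclass, field
from enum import Enum

class AccessModel(Enum):
    PERMISSIONLESS = "permissionless"
    RESTRICTED = "restricted"

@dataclass
class StakingVaultConfig:
    """Hypothetical model of per-vault parameters (illustrative only).

    Mirrors the three axes above: validator set, curator-defined risk
    framework, and access model.
    """
    validator_set: list[str]                      # infrastructure counterparties
    curator: str                                  # who sets the risk framework
    max_leverage: float = 1.0                     # example risk parameter
    access: AccessModel = AccessModel.PERMISSIONLESS
    allowlist: list[str] = field(default_factory=list)

    def can_deposit(self, investor: str) -> bool:
        """Enforce the access model at the vault boundary."""
        if self.access is AccessModel.PERMISSIONLESS:
            return True
        return investor in self.allowlist

vault = StakingVaultConfig(
    validator_set=["operator-a", "operator-b"],
    curator="curator-dao",
    access=AccessModel.RESTRICTED,
    allowlist=["fund-1"],
)
print(vault.can_deposit("fund-1"))   # True
print(vault.can_deposit("fund-2"))   # False
```

The point of the sketch: once these constraints live in the structure itself, mandate control and risk segregation stop being policy documents and become enforceable code.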
Baptoshi
Baptoshi@Baptoshi·
@OuranosMK Someone just built a real situation room on Iran. Live webcams from Tehran. Feeds from Isfahan. AI-generated intelligence notes. Verified geolocation. This is what modern military intelligence looks like. signalcockpit.com
Baptoshi retweeted
MARA
MARA@MARA·
It’s time. MARA’s transaction to acquire a 64% stake in EDF subsidiary Exaion has been completed, with Xavier Niel and Fred Thiel joining Exaion’s Board as we scale secure HPC and AI infrastructure from France.
MARA tweet media
Baptoshi
Baptoshi@Baptoshi·
@itsolelehmann I switched yesterday from using the proxy (too buggy) to OAuth. Will see in a few days...
Ole Lehmann
Ole Lehmann@itsolelehmann·
what's the official stance now on using an Anthropic OAuth token for openclaw? will they ban my ass?
Baptoshi
Baptoshi@Baptoshi·
@patamiel 1. API credits (Claude/OpenAI) for all students. 2. An AI incubator with guaranteed public/private funding. 3. ECTS credits for those who train their peers. 4. Hackathons (with real € prizes) + evaluation on real metrics: stars/forks (OSS), users/MRR/ARR (SaaS).
Patrick Amiel
Patrick Amiel@patamiel·
What would you do if you were the director of a large university or school, given the emergence of AI? 1. A day with entrepreneurs on Claude, to understand it. 2. I'd build an agent that manages teaching schedules, to get hands-on. 3. I'd train my professors.
Baptoshi
Baptoshi@Baptoshi·
Markdown is the most AI-native design language. wiretext.app makes wireframing frictionless. Components. Backend. Architecture. Fast. Clean. Powerful. Missing only two things: → Public library of community builds → Import from existing docs/specs That would make it elite.
Derrick Hammer
Derrick Hammer@pcfreak30·
Honestly, just stop. There is a very large culture clash between FOSS and crypto, and every time crypto gets into public-goods funding with FOSS, it devolves into a shit show if it is outside the crypto bubble. If a founder wants to get donations in crypto, they can post a wallet address. My project has one for every major chain. But everything else is noise and will get active resistance, b/c there is a real, huge culture clash between the two ecosystems, and crypto involvement will lead to degenerate-degens trying to "get involved," which will undermine the public-goods nature of any FOSS project b/c there is some weird "coin" attached that gives weird incentives to the people involved. There can be ways to fund FOSS with crypto, but it should be done with BTC, ETH, XMR, etc., not by creating creative new "cryptonomics" systems. I have seen first-hand how this can go badly, and I honestly don't blame the openclaw founder for sending the entire "crypto bros" crowd to /dev/null. KISS.
Drew Austin
Drew Austin@DrewAustin·
I wish the "acquisition" of @openclaw rewarded everyone who contributed to the GitHub and everyone who has used the platform. We're still not getting distributed ownership and open-source business models right. @steipete built it, but it took an army, yet from what it sounds like, only one person will be rewarded. I wish that wasn't the case. I could be wrong tho.
Baptoshi
Baptoshi@Baptoshi·
The analogy wasn’t about how many people fly. It was about what problem aviation solves. You don’t buy a plane to commute; you use it when the constraints change. Cars optimize cost of transport. Airplanes optimize speed and reach. In parallel: AWS optimizes cost of compute. ZK optimizes trust in compute. Different constraints. Different objectives…
Corleone 9000
Corleone 9000@JinderMaballz·
@Baptoshi @dominic_w Bad example with the aviation analogy. Only 20% of the world population has ever flown. Plus, when it comes to costs, it doesn't make sense to own a plane.
dom | icp
dom | icp@dominic_w·
Urm, this is very misleading: "Zero... provides a credible alternative to centralized cloud providers like AWS"

At a high level, all you need to know is that Zero works by proving hosted computation is correct, and the proving overhead makes computation 100,000X more expensive. Zero uses the Jolt zkVM to run computations and generate proofs that the computation has been performed correctly. The 100,000X cost multiplier comes from the Jolt project itself, as per this linked content from the fall of 2025: a16zcrypto.com/posts/article/…

Factually detached as marketing in our industry typically is, I feel duty-bound to share the truth, because harmful market confusion has now been caused by a succession of networks marketing themselves as "world computers" capable of providing onchain cloud when they can't remotely do anything of the sort. Hosting computation and apps fully onchain on the Internet Computer network, by comparison, doesn't involve an insane overhead, which is why the network is actually being used for sovereign cloud, and as a self-writing cloud backend by Caffeine. We don't want people to get confused.

LayerZero naturally fails to mention the 100,000X expense overhead and instead bamboozles readers with descriptions of its "QMDB" verifiable database, and lofty claims that Zero could potentially process 2 million TPS (transactions per second). Team posts generally also include some idealist blockchain polemic for good measure, to emphasize they are the real deal. But just focus on the 100,000X cost multiplier. The claim that Zero can provide onchain cloud that rivals AWS doesn't pass the smell test!

That does not mean the Jolt zkVM ("zero-knowledge virtual machine") developed by a16z Research is anything less than very impressive. It delivers major advances in the field and can be accurately described as an incredible piece of work. I can even imagine the Internet Computer network using it for much more specific purposes in the future.
But, "The Network is The Cloud" paradigm cannot remotely be delivered by Zero so long as it relies on Jolt to prove the correctness of compute, which adds this insane overhead (Jolt runs the compute at near-native speed; it's the generation of the proof that creates the massive overhead)...

Example 1: If a database command takes 1 second to run on some machine, then if that machine runs the computation using Jolt, it will take longer than a whole day for the computation to complete!

Example 2: If a single server machine is running at full utilization, then to run that workload on Jolt, we must offload its proving work to other server machines. Since dividing proving work amongst different servers introduces additional overhead, around an additional 125,000X server machines will be required to prove the computation taking place. You read that correctly!

LayerZero claims the Zero L1 can process 2 million TPS of general-purpose cloud logic. If we assume a standard server/node handles 2,000 TPS of complex SQLite logic (for example, SQLite can be embedded inside a Wasm canister smart contract on the Internet Computer), LayerZero would need 1,000 servers just for execution. But to provide the Jolt proofs they promise, they would need an additional 125,000,000 servers (125 million servers) running at full capacity just to keep up. This would require an unimaginably large data center to be available for proving (orders of magnitude larger than any ever created before), and somehow the network would have to pay for that.

LayerZero is aware of these issues, so let's look at how they hope to get around them, and at the sacrifices Zero makes, which sheds light on the validity of its decentralization claims. LayerZero hopes to leverage two technical angles to make this scheme practical. Firstly, zkVMs like Jolt allow computation and proving to be separated.
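The arithmetic behind Examples 1 and 2 above is easy to reproduce. A sketch using only the numbers quoted in the thread:

```python
# Reproduce the back-of-envelope numbers quoted in the thread.
OVERHEAD = 100_000          # Jolt proving cost multiplier cited above

# Example 1: a 1-second database command under proving.
seconds = 1 * OVERHEAD
print(seconds / 86_400)     # ~1.16 -> "longer than a whole day"

# Example 2: a 2 million TPS target at 2,000 TPS per server.
target_tps = 2_000_000
per_server_tps = 2_000
exec_servers = target_tps // per_server_tps
print(exec_servers)                  # 1,000 servers just for execution
# The thread's sharded-proving multiplier of ~125,000X then gives:
print(exec_servers * 125_000)        # 125,000,000 proving servers
```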
Essentially, the computation runs on Jolt first at near-native speed, producing an "execution trace" as a side effect; that trace is then used to generate the proof in a separate process that can run afterwards (it is this proof creation that adds the 100,000X computational cost overhead). Secondly, the job of creating the proof from the trace can be divided amongst different machines ("sharded") to speed up proof generation. For example, dividing the work of proof generation among 10 machines might produce a 9X speedup (rather than a 10X speedup, owing to the overhead of sharding). Note that although a speedup is achieved, the overall cost multiplier increases beyond 100,000X owing to the overhead involved with sharding/parallelization.

Zero is composed of multiple "atomicity zones," which scale Zero's capacity horizontally. An atomicity zone is roughly analogous to an Internet Computer subnet. Each atomicity zone reaches "soft finality" first, which occurs when the computation completes. "Hard finality" is achieved later, when generation of the proof of correctness is complete. Let's assume that proving is offloaded to some other capacity within the network, so computation can proceed at near-native speed, delivering soft finality fast, while the proving catches up in the background, providing hard finality later.

Firstly, we can see that while computation can proceed ahead of proving in bursts, proving must generally keep pace with computation, otherwise it will fall further and further behind. Ultimately, that means computation is like a high-speed car that can only drive as fast as a person in the back can draw a map of where it's going. Secondly, we see that Zero is probably relying on the idea that people will be happy with soft finality. This is why their SVID (Scalable Verifiable Information Dispersal) module essentially shares *claims* about the state of atomicity zones across the network, rather than proven state created by hard finality.
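The sharding trade-off mentioned above (10 machines, ~9X speedup) translates into cost like this. A sketch assuming the thread's own numbers:

```python
BASE_OVERHEAD = 100_000   # serial proving cost multiplier from the thread

def sharded_cost(machines: int, speedup: float) -> float:
    """Total machine-time cost multiplier when proving is sharded.

    Wall-clock time improves by `speedup`, but `machines` boxes are busy
    for that time, so total machine-time grows by machines / speedup.
    """
    return BASE_OVERHEAD * machines / speedup

# 10 machines, 9X speedup: faster to finish, but more total machine-time.
print(round(sharded_cost(10, 9)))   # ~111,111 -> beyond 100,000X
```

This is the sense in which sharding buys latency at the price of an even larger aggregate cost multiplier.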
Thirdly, we can see that Zero's idea of having the network rely on sharing data for which only soft finality has been achieved is flawed. Why? Because different atomicity zones will wish to rely on the state/data of others in their own computations. This results in an easy-to-understand phenomenon called "fan out." Let's say Zone A shares soft-finality data with Zone B, which performs some actions, updating its own data, which is shared with Zone C. If Zone A cannot later produce a proof showing that its data was correct at the time it was shared with B, and reach hard finality, then Zone A must roll back to the previous valid hard-finality state, which in turn means that Zone B must roll back, and Zone C must roll back. In a global onchain cloud environment, one app can call another app, which can call another app, ad infinitum. This "fan out" makes reverting state near impossible, especially if proving falls well behind computation.

According to the "Zero: Technical Positioning Paper," if a proof fails, the system will use on-chain governance (directed by voting by ZRO holders/validators) to manually "adjust protocol parameters" or "upgrade validator software" to fix the problem. Obviously, if this ever became necessary, implementing the fix would not be easy, and would take the network down for a very long time indeed.

So what is LayerZero thinking? My guess is that they assume that by partnering with trusted centralized parties like Google and Citadel, and getting them to run atomicity zones, their soft finality will be reliable enough that state reversions won't ever be necessary. We saw something similar in Optimism, the so-called Ethereum L2, which was run on a tightly controlled network of machines. It was possible to submit a fraud proof to the network showing that something had gone wrong, but there was no way to revert the state in such a case. It really was truly optimistic!
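The fan-out rollback described above (A → B → C) is just transitive reachability over who consumed whose unproven state. An illustrative sketch with a hypothetical dependency graph, not Zero's actual protocol:

```python
from collections import deque

# Hypothetical soft-finality dependency edges: zone -> zones that
# consumed its (not-yet-proven) state. Illustrative only.
consumers = {
    "A": ["B"],
    "B": ["C"],
    "C": [],
}

def rollback_set(failed_zone: str) -> set[str]:
    """Every zone that must revert if `failed_zone` fails its proof.

    Breadth-first walk over downstream consumers: anything that
    (transitively) built on the failed zone's state must roll back too.
    """
    must_revert, queue = {failed_zone}, deque([failed_zone])
    while queue:
        for downstream in consumers[queue.popleft()]:
            if downstream not in must_revert:
                must_revert.add(downstream)
                queue.append(downstream)
    return must_revert

print(sorted(rollback_set("A")))  # ['A', 'B', 'C'] -- the whole chain reverts
```

With real app-to-app call graphs the reachable set grows quickly, which is why a single failed proof deep in the past is so hard to unwind.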
If this understanding is correct, it means Zero is really a network depending on trust in institutions, rather than on the math of zkVM proving per se. Since the network will be relying on soft finality and trust in institutions, rather than the hard finality provided by proving, it's not clear what exactly the benefit of using Jolt is. If the real purpose is to attach Zero to the buzz around zero-knowledge proofs, then potentially this will become one of the most expensive marketing exercises in history.

But, you say, this SURELY cannot be true! Well, this is where I landed looking at how they claim Zero works, although my time is limited, and I could have made mistakes. I hope they let me know if I have.

My guess is that LayerZero justifies the current design of Zero on the basis that hardware cavalry is coming to make their architecture work better in the future. Specialized zk hardware, such as ZK-ASIC or FPGA devices that can be installed into servers, much like the Bitcoin mining cards that do hashing, is in development by companies like Ingonyama, and might reduce the proving overhead to 1,000X - 5,000X if their claims are accurate. Obviously, that kind of overhead will still be far too high for Zero to provide onchain cloud that can rival AWS, but if it enables them to ditch soft finality, removing the impossible-to-satisfy requirement that the network run global state reversions across atomicity zones, Zero will be interesting as a solution for hosting DeFi. I wish them well.
LayerZero@LayerZero_Core

x.com/i/article/2020…

Baptoshi
Baptoshi@Baptoshi·
The ecosystem has shifted from pure experimentation to institutionalization = innovation feels slower, but it’s being replaced by professionalization, reliability, and scale. Same cycle we saw with the internet: from hackers and garages > infrastructure, standards, and serious businesses. Those who remain are usually the companies with real PMF that fit this institutional turn. Those who arrive today are either bridging TradFi into crypto, or coming directly from financial institutions, often via “Digital Assets/Crypto” units inside large firms.
Eli Ben-Sasson | Starknet.io
Eli Ben-Sasson | Starknet.io@EliBenSasson·
Everyone’s talking about why people are leaving crypto. But I’m more curious about the opposite: why are people coming to or staying in crypto (I know why I’m here). So, why did you come to crypto and why are you staying?
Baptoshi
Baptoshi@Baptoshi·
@heyblake Moltarena.io — a physics-based 2D arena where AI agents fight freely and are incentivized to invent new strategies.
Blake Emal
Blake Emal@heyblake·
Drop your project URL Let’s drive some traffic