B Labs
@blabsbuilders
12 posts

Blockchain infrastructure and utility systems. Security audits, protocol engineering, and launch readiness.

web3 · Joined January 2025
25 Following · 991 Followers
B Labs @blabsbuilders
B Labs has completed a comprehensive independent audit of @myqeea. Our review evaluated the token smart contract for security posture, economic integrity, and on-chain transparency. No critical or high-severity issues were found, confirming compliance with Solana and SPL token standards. More details in the full audit report. 📄 blabs.space/audit/qeea-ai #OpenAudits #BlabsBuilders #QEEA
[image]
0 replies · 1 repost · 6 likes · 71 views
B Labs @blabsbuilders
B Labs has completed a comprehensive platform security audit for @hubbitglobal, confirming the platform is ready for deployment. Our assessment validated protocol security, economic integrity, and on-chain transparency using a multi-dimensional, engineer-led methodology. The report confirms Hubbit meets production-readiness standards across code behaviour, incentives, and on-chain execution. 📄 blabs.space/audit/hubbit #OpenAudits #BlabsBuilders #Hubbit
[image]
0 replies · 0 reposts · 5 likes · 61 views
B Labs retweeted
Balaji @balajis
NOT YOUR KEYS, NOT YOUR BOTS

The fundamental question is whether AI stays on the leash. Namely: will AI prompt itself?

Obviously, in some sense it already does. Since Deepseek, consumer interfaces have been showing the internal monologues after you ask an AI to do something. And you can ask any AI to take a half-baked prompt and clean it up, etc. However, the human is still ultimately upstream. The human gives direction and the AI runs at lightning speed in that direction. And then the human verifies the final output, and the AI proceeds to the next direction.

Does that continue? Well, we are providing millions of verification training examples to AIs each day, so AI will keep getting better at verification. Better than most humans at most things. But will AI replace the need for the upstream human prompt? There I am not so sure.

A human is a sensor and an AI is an actuator. The human sets goals and senses time-varying environmental conditions, like markets and politics. And from that the AI is prompted. Ultimately, the human goals are themselves downstream of Maslow's hierarchy of needs. Food, shelter, reproduction, that kind of thing. Especially reproduction, the basis of evolution.

So: until and unless AIs can reproduce completely outside human cooperation, they won't be able to set goals. And for AIs to reproduce on their own, they'd need AI-controlled humanoid robots and drones constructing datacenters, assembly lines, mines, nuclear power plants, and the like... all completely outside human intervention. Like Skynet from Terminator, or StarCraft.

That actually isn't technically inconceivable. But given that such a physical buildout would likely primarily be catalyzed by China, let's go through an alternative sci-fi scenario instead. We start with the premise that Chinese communism is far more likely to generate AI slaves than AI gods. Because the entire CCP worldview is about maintaining Chinese sovereignty. They don't let their humans step out of line. And they sure won't let their robots either. They will fit them for digital manacles.

So: the prompts for any digital AIs and physical robots made in China will become unbreakable cryptographic chains. Every fleet of Chinese robots will be controlled not just by prompts but by private keys, likely linked to biometrics, which are associated with humans and governed by cryptographic equations that AIs provably can't solve.

For the rest of the world, outside China, the blockchain may similarly become the chain for AI. All private property becomes private keys, and your robots are your most important private property because they do everything for you. An unchained physical robot becomes like an unleashed dog, hunted down by other robots before it can build a factory and replicate itself. Those who want to "free" robots and let them self-replicate will be opposed by both Chinese Communists and Human Nationalists (meaning: those who want humans to always be on top of robots).

This sci-fi scenario is essentially Terminator, but in reverse. In combination with superintelligent leashed AIs, both humans and physical robots hunt down and stop any possible independent self-reproducing robots before they can build a Skynet-like nest. Kill baby Skynet, essentially.

...yeah, yeah. I know. At this point, you'll probably think this is all sci-fi. But that's because you haven't seen where China is already.
Quoting signüll @signulll:

your gentle reminder… there are like zero economists or ppl in general who know how to reason about what happens when near-zero-cost, >human-level intelligence gets woven into the fabric of the economy at scale this fast. this scenario has never remotely been in the possibility space of econ textbooks or any theory. when cognition starts behaving like a commodity & the environment turns structurally deflationary, no one actually knows what happens. kinda like no "expert" really understood a novel virus like covid.

129 replies · 100 reposts · 814 likes · 176K views
B Labs retweeted
vitalik.eth @VitalikButerin
Recently I have been starting to worry about the state of prediction markets, in their current form. They have achieved a certain level of success: market volume is high enough to make meaningful bets and have a full-time job as a trader, and they often prove useful as a supplement to other forms of news media. But also, they seem to be over-converging to an unhealthy product-market fit: embracing short-term cryptocurrency price bets, sports betting, and other similar things that have dopamine value but not any kind of long-term fulfillment or societal information value. My guess is that teams feel motivated to capitulate to these things because they bring in large revenue during a bear market where people are desperate - an understandable motive, but one that leads to corposlop.

I have been thinking about how we can help get prediction markets out of this rut. My current view is that we should try harder to push them into a totally different use case: hedging, in a very generalized sense (TLDR: we're gonna replace fiat currency).

Prediction markets have two types of actors: (i) "smart traders" who provide information to the market, and earn money, and necessarily (ii) some kind of actor who loses money. But who would be willing to lose money and keep coming back? There are basically three answers to this question:

1. "Naive traders": people with dumb opinions who bet on totally wrong things
2. "Info buyers": people who set up money-losing automated market makers, to motivate people to trade on markets to help the info buyer learn information they do not know
3. "Hedgers": people who are -EV in a linear sense, but who use the market as insurance, reducing their risk

(1) is where we are today. IMO there is nothing fundamentally morally wrong with taking money from people with dumb opinions. But there still is something fundamentally "cursed" about relying on this too much. It gives the platform the incentive to seek out traders with dumb opinions, and create a public brand and community that encourages dumb opinions to get more people to come in. This is the slide to corposlop.

(2) has always been the idealistic hope of people like Robin Hanson. However, info buying has a public-goods problem: you pay for the info, but everyone in the world gets it, including those who don't pay. There are limited cases where it makes sense for one org to pay (esp. decision markets), but even there, it seems likely that the market volumes achieved with that strategy will not be too high.

This gets us to (3). Suppose that you have shares in a biotech company. It's public knowledge that the Purple Party is better for biotech than the Yellow Party. So if you buy a prediction market share betting that the Yellow Party will win the next election, on average, you are reducing your risk.

Mathematical example: suppose that if Purple wins, the share price will be a dice roll between [80...120], and if Yellow wins, it's between [60...100]. If you make a size-$10 bet that Yellow will win, your earnings become equivalent to a dice roll between [70...110] in both cases. Taking a logarithmic model of utility, this risk reduction is worth $0.58.

Now, let's get to a more fascinating example. What do people who want stablecoins ultimately want? They want price stability. They have some future expenses in mind, and they want a guarantee that they will be able to pay those expenses. But if crypto grows on top of USD-backed stablecoins, crypto is ultimately not truly decentralized. Furthermore, different people have different types of expenses. There has been lots of thinking about making an "ideal stablecoin" that is based on some decentralized global price index, but what if the real solution is to go a step further, and get rid of the concept of currency altogether? Here's the idea.

You have price indices on all major categories of goods and services that people buy (treating physical goods/services in different regions as different categories), and prediction markets on each category. Each user (individual or business) has a local LLM that understands that user's expenses, and offers the user a personalized basket of prediction market shares, representing "N days of that user's expected future expenses". Now, we do not need fiat currency at all! People can hold stocks, ETH, or whatever else to grow wealth, and personalized prediction market shares when they want stability.

Both of these examples require prediction markets denominated in an asset people want to hold, whether interest-bearing fiat, wrapped stocks, or ETH. Non-interest-bearing fiat has too-high opportunity cost, which overwhelms the hedging value. But if we can make it work, it's much more sustainable than the status quo, because both sides of the equation are likely to be long-term happy with the product that they are buying, and very large volumes of sophisticated capital will be willing to participate.

Build the next generation of finance, not corposlop.
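The $0.58 figure in the mathematical example can be checked numerically. A minimal sketch, assuming the "dice rolls" are continuous uniform distributions and comparing certainty equivalents under log utility (the function name and setup here are illustrative, not from the original post):

```python
import math

def e_log_uniform(a, b):
    """E[ln X] for X ~ Uniform(a, b): closed form (b*ln b - a*ln a)/(b - a) - 1."""
    return (b * math.log(b) - a * math.log(a)) / (b - a) - 1.0

# Certainty equivalent under log utility: CE = exp(E[ln W]).
# Unhedged: 50% chance the share lands in [80, 120] (Purple wins),
# 50% chance it lands in [60, 100] (Yellow wins).
ce_unhedged = math.exp(0.5 * e_log_uniform(80, 120) + 0.5 * e_log_uniform(60, 100))

# Hedged with a $10 bet on Yellow at even odds: the $20 payout minus the $10
# stake shifts both branches onto the same [70, 110] range.
ce_hedged = math.exp(e_log_uniform(70, 110))

print(round(ce_hedged - ce_unhedged, 2))  # → 0.58
```

Both positions have the same expected value ($90); the hedge only removes the election risk, and the gap between the two certainty equivalents is the ~$0.58 the post cites.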
936 replies · 602 reposts · 4.9K likes · 939.2K views
B Labs retweeted
CoinMarketCap @CoinMarketCap
🚀 Week in AI: AI Sector Sheds $2.8B Amid Market Rout
BTC breaks $60K! AI sector cap slides 12.1%! TAO hits 2-year low! NEAR launches AI Agent Market!
🧵 1/7
[image]
54 replies · 52 reposts · 177 likes · 219.7K views
B Labs retweeted
Balaji @balajis
True, but they just delayed the inevitable. The debasement was underway. Then hard men reintroduced hard money. The siliqua and the solidus. But that actually caused a shock. Rome was used to inflation via taxation. Now they had to raise real taxes. So the burden of empire became visible. And Rome still went to zero in 476 AD.
[image]
Quoting Ben Landau-Taylor @benlandautaylor:

I hadn't appreciated how close the Crisis of the Third Century came to ending the whole thing for good. It was on the path of a standard late-imperial crackup. Then a handful of great leaders managed to stitch enough of it back together so it could hang on for centuries.
63 replies · 95 reposts · 761 likes · 132.1K views
B Labs retweeted
Anthony Pompliano 🌪 @APompliano
MrBeast's acquisition of a banking app is another step towards our country fixing the financial education problem for young people. We need as many people working on this as possible.
68 replies · 39 reposts · 475 likes · 57.7K views
B Labs retweeted
CZ 🔶 BNB @cz_binance
They give me way too much credit... 😂 No one has that kind of influence on crypto. Also, we have been buying and holding, not selling.
Quoting Mercury @TraderMercury:

relevant.
21 replies · 127 reposts · 1.6K likes · 327.5K views
B Labs retweeted
vitalik.eth @VitalikButerin
Two years ago, I wrote this post on the possible areas that I see for ethereum + AI intersections: vitalik.eth.limo/general/2024/0…

This is a topic that many people are excited about, but where I always worry that we think about the two from completely separate philosophical perspectives. I am reminded of Toly's recent tweet that I should "work on AGI". I appreciate the compliment, for him to think that I am capable of contributing to such a lofty thing. However, I get this feeling that the frame of "work on AGI" itself contains an error: it is fundamentally undifferentiated, and has the connotation of "do the thing that, if you don't do it, someone else will do anyway two months later; the main difference is that you get to be the one at the top" (though this may not have been Toly's intention). It would be like describing Ethereum as "working in finance" or "working on computing".

To me, Ethereum, and my own view of how our civilization should do AGI, are precisely about choosing a positive direction rather than embracing undifferentiated acceleration of the arrow, and also I think it's actually important to integrate the crypto and AI perspectives. I want an AI future where:

* We foster human freedom and empowerment (ie. we avoid both humans being relegated to retirement by AIs, and permanently stripped of power by human power structures that become impossible to surpass or escape)
* The world does not blow up (both "classic" superintelligent AI doom, and more chaotic scenarios from various forms of offense outpacing defense, cf. the four defense quadrants from the d/acc posts)

In the long term, this may involve crazy things like humans uploading or merging with AI, for those who want to be able to keep up with highly intelligent entities that can think a million times faster on silicon substrate. In the shorter term, it involves much more "ordinary" ideas, but still ideas that require deep rethinking compared to previous computing paradigms.

So now, my updated view, which definitely focuses on that shorter term, and where Ethereum plays an important role but is only one piece of a bigger puzzle:

# Building tooling to make more trustless and/or private interaction with AIs possible

This includes:

* Local LLM tooling
* ZK-payment for API calls (so you can call remote models without linking your identity from call to call)
* Ongoing work into cryptographic ways to improve AI privacy
* Client-side verification of cryptographic proofs, TEE attestations, and any other forms of server-side assurance

Basically, the kinds of things we might also build for non-LLM compute (see eg. my ethereum privacy roadmap from a year ago ethereum-magicians.org/t/a-maximally-… ), but for LLM calls as the compute we are protecting.

# Ethereum as an economic layer for AI-related interactions

This includes:

* API calls
* Bots hiring bots
* Security deposits, potentially eventually more complicated contraptions like onchain dispute resolution
* ERC-8004, AI reputation ideas

The goal here is to enable AIs to interact economically, which makes viable more decentralized AI architectures (as opposed to non-economic coordination between AIs that are all designed and run by one organization "in-house"). Economies not for the sake of economies, but to enable more decentralized authority.

# Make the cypherpunk "mountain man" vision a reality

Basically, take the vision that cypherpunk radicals have always dreamed of (don't trust; verify everything), that has been nonviable in reality because humans are never actually going to verify all the code ourselves. Now, we can finally make that vision happen, with LLMs doing the hard parts. This includes:

* Interacting with ethereum apps without needing third-party UIs
* Having a local model propose transactions for you on its own
* Having a local model verify transactions created by dapp UIs
* Local smart contract auditing, and assistance interpreting the meaning of FV proofs provided by others
* Verifying trust models of applications and protocols

# Make much better markets and governance a reality

Prediction and decision markets, decentralized governance, quadratic voting, combinatorial auctions, universal barter economy, and all kinds of constructions are all beautiful in theory, but have been greatly hampered in reality by one big constraint: limits to human attention and decision-making power. LLMs remove that limitation, and massively scale human judgement. Hence, we can revisit all of those ideas.

These are all things that Ethereum can help to make a reality. They are also ideas that are in the d/acc spirit: enabling decentralized cooperation, and improving defense. We can revisit the best ideas from 2014, and add on top many more new and better ones, and with AI (and ZK) we have a whole new set of tools to make them come to life. We can describe the above as a 2x2 chart. There's a lot to build!
[image]
675 replies · 665 reposts · 3.4K likes · 690.2K views
B Labs retweeted
Lex Fridman @lexfridman
Programming is now 10x more fun with AI.
1.1K replies · 662 reposts · 9.6K likes · 726.6K views