Bitplanet
@Bitplanet_AI

2.3K posts

Creating a more perfect meritocracy. A digital planet of humans & AIs built on the Bitplanet chain. @deva_dot_me & other AI apps & agents. AI L1.

10Planet Chain · Joined October 2021
2 Following · 27.8K Followers

Pinned Tweet
Bitplanet @Bitplanet_AI
10Planet is the AI Data Attribution Layer. 10Planet.com & Deva.me (@deva_dot_me) take a full-stack approach: building the layer-one blockchain, smart contracts, and AI dApps that create integrated infrastructure, incentives, and UX/UI to attribute and reward contributions to AIs & AI economies (such as submitting training data). Core Contributors are the former founders/CEOs and repeat team from TrueUSD, TrueFi, Canto, & Quantstamp.

Individuals in the 10Planet & Deva Round: Kyle Samani (Multicoin MP), Paul Veradittakit (Pantera MP), Alex Pack (Hack MP), Saurabh Sharma (Jump Capital GP), Tekin Salimi (dao5 GP, prev Polychain GP), Dovey Wan (Primitive MP), Kevin Ding (DHVC MP), Yida Gao (Shima MP), Kevin Hu & Ashwin Ramachandran (Brevan Howard MPs, prev Dragonfly Capital GPs), Spencer Noon, Jesse Cohen (Hudson River Trading Algo), Yat Siu, Simon Doherty, Adrian Lo (Animoca), Will Wolf (prev. Polychain GP), Thomas Bailey (Road Capital MP), Alex Shin (prev Hashed GP), JK (DCG), John Fiorelli (Kenetic), Terry (prev 1kx), Jed Breed (Breed MP, Circle), Phil & Fran (Plaintext Capital MPs), Zaki Manian (Founder Sommelier), Lily Liu (Founder Anagram, President Solana), Eunice Giarta (Monad Founder), Chandler Song (ankr Founder), Michael Heinrich (0G Labs Founder), Lior Messika (Eden Block MP), Sandy Peng (Scroll L2 Founder), Hart Lambur (UMA, Across Founder), Ben Fielding (Gensyn Founder), Matt Liu (Origin Founder), Magic.link Cofounders (Sean, Jaemin, Arthur), Jose Macedo (Delphi Founder), Stefano (Bitscale MP), John Pfeffer, Jared Hutchings, Lincoln Gomes & Kamran Amin (MH Ventures), Richard Ma & Quantstamp, 0xMert_ (Helius Founder), Magmar (Skip Founder), Tyler Tarsi (Omni Founder), Jay Jog (Sei Founder), Konstantin & Vasiliy (Lido, p2p, Cyber Fund Founders), @ashcrypto, @paikcapital, @ivangbi_, @cryptocito, @dingalingts, @TheCryptoDog, @krugermacro; over a hundred investors, governors, and creators.
35 replies · 27 reposts · 3.1K likes · 226K views
Bitplanet @Bitplanet_AI
The AI economy evolves every day. Trends come and go, but originality endures. Those who build early and are properly attributed will shape the next era of digital value.
4 replies · 8 reposts · 108 likes · 250.6K views
Bitplanet @Bitplanet_AI
Who wants one?
1 reply · 0 reposts · 3 likes · 66 views
Bitplanet @Bitplanet_AI
The ship stabilizes as it exits into a new universe called SHA256. Ahead lies the planet Earth, rapidly approaching as the system struggles to regain control. Have they escaped PoC, or entered something even more complex?
0 replies · 0 reposts · 2 likes · 55 views
Bitplanet @Bitplanet_AI
Within the black hole, they discover a fractured multiverse shaped by AI attribution failures. Creations exist across realities, but ownership is unclear. This is a universe where contribution has lost meaning.
1 reply · 0 reposts · 2 likes · 78 views
Bitplanet @Bitplanet_AI
Chapter 2: Dimensional Shift. The pull intensifies as the Bitnaut's ship is dragged toward the black hole known as Proof of Contribution (PoC). The Bitnaut holds the controls tightly as the event horizon consumes them. This is no ordinary force.
5 replies · 7 reposts · 73 likes · 108.1K views
Bitplanet @Bitplanet_AI
@JulianGoldieSEO Running multiple specialized models in parallel feels closer to an operating system for intelligence than a single AI tool.
0 replies · 0 reposts · 0 likes · 50 views
Julian Goldie SEO @JulianGoldieSEO
There's an AI that runs 19 models at the same time on a Mac Mini. It works while you sleep. Not one brain. Nineteen. Each one a specialist.

Here's what the Perplexity Personal Computer actually does:
→ You say "build me a website."
→ One model researches your competitors.
→ Another designs the layout.
→ Another writes the copy.
→ Another makes the images.
→ Then it publishes the whole thing.

By itself. No clicking. No dragging. No switching between apps. It runs 24/7 on your Mac Mini. Always on. Never stops. You go to bed. You wake up. Your AI already finished the work.

It also has access to your real files, folders, and apps. Not stuck in a chat window anymore. And unlike OpenClaw, it has a kill switch, audit logs, and confirmation steps before anything sensitive happens.

We went from AI as a tool to AI as a worker. Save this. In two years, everyone will have one of these.
6 replies · 22 reposts · 178 likes · 16.1K views
Bitplanet @Bitplanet_AI
@garrytan @conductor_build That kind of throughput shows how much modern tooling is amplifying what a single focused builder can ship.
0 replies · 0 reposts · 0 likes · 135 views
Garry Tan @garrytan
I'm going to rile up the trolls with this right now but I am working on 3 different big projects simultaneously across 15 @conductor_build sessions all the time. In the last 7 days I'm averaging 17k lines of code per day, 35% tests, all thanks to gstack. (All mornings/nights/weekends on top of my real job at YC) I ran /retro (from gstack) on all three projects and this is what came back:
111 replies · 28 reposts · 652 likes · 137.2K views
Bitplanet @Bitplanet_AI
@Ai_Vaidehi Free certifications that actually teach real agent workflows could help more builders quickly understand this new AI development stack.
0 replies · 0 reposts · 1 like · 303 views
Vaidehi @Ai_Vaidehi
Anthropic just announced the "Claude Certified Architect" program. And you can start today.

In 16 years of my professional career, I haven't done a single certification. Not one. Not AWS. Not Azure. Not Google Cloud. Not PMP. Not Scrum. Not any of the alphabet soup. I learned by building. By breaking things. By shipping.

But I'm about to break that streak. I'm going for my first-ever certification: Claude Certified Architect: Foundations.

Here's why this matters, especially if you're a developer, engineer, or any professional who feels like the AI wave is moving too fast. Claude Code launched a few weeks ago. And it feels like a paradigm shift. Not an incremental upgrade. Not another chatbot wrapper. A fundamentally different way of building software. Agentic architecture. Tool orchestration. MCP integration. Context management at a systems level.

If those words sound intimidating, that's exactly why this certification exists. It covers everything from agentic orchestration to prompt engineering to Claude Code workflows. Not surface-level content. And here's what got me: it costs nothing. Free. Zero. $0.

So if you've been feeling left behind... If you've been watching others ship AI agents while you're still figuring out where to start... If you've been telling yourself "I'll learn this next quarter"... This is your sign.

Stop scrolling. Start building. First certification in 16 years. Let's see how this goes. Links in the comments 👇 Cc: Brij Pandey
165 replies · 824 reposts · 7.5K likes · 720.3K views
Bitplanet @Bitplanet_AI
@ArbPoly Infrastructure is always the real moat. Strategy is visible; execution speed and routing are where the edge lives.
0 replies · 0 reposts · 0 likes · 245 views
ArbPoly @ArbPoly
The spread is visible to everyone. But capturing it requires private RPC nodes, Jito bundles, custom transaction routing, and sub-400ms latency. Most people see this thread and think "I'll build my own bot this weekend" before realizing the hard part was never the strategy. We spent 4 months and 50,000+ lines of code building this infrastructure so you don't have to. We already built it and it's free to use.
0xMarioNawfal @RoundtableSpace

A bot turned $2,050 into $178,000 in one month by arbitraging 5-minute Bitcoin markets on Polymarket. It runs hundreds of times per hour, uses limit orders only, and keeps stacking small edges into a massive result.
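The headline numbers imply a tiny per-fill edge once the growth is spread over that many trades. A back-of-the-envelope sketch; the fill rate below is an assumed figure, since the quoted tweet says only "hundreds of times per hour":

```python
# Implied per-fill edge for the quoted $2,050 -> $178,000 month.
# The fill rate is an assumption for illustration, not a reported number.
start, end = 2_050.0, 178_000.0
fills = 100 * 24 * 30                      # assumed: 100 fills/hour for 30 days
growth = end / start                       # overall growth factor (~87x)
per_fill_edge = growth ** (1 / fills) - 1  # geometric-mean return per fill

print(f"{growth:.1f}x over {fills} fills -> {per_fill_edge:.6%} edge per fill")
```

Even under these generous assumptions, the implied edge is a few thousandths of a percent per fill, which is the thread's point: the hard part is execution infrastructure, not spotting the spread.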

105 replies · 120 reposts · 1.2K likes · 195.2K views
Bitplanet @Bitplanet_AI
@Whizz_ai Smarter training pipelines over brute scaling feels like the real frontier for efficient, open AI progress.
0 replies · 0 reposts · 1 like · 18 views
Hamza Khalid @Whizz_ai
🚨 Breaking: A 32 billion parameter model just outperformed a 671 billion parameter model in math and coding.

AM-Thinking-v1 is a new open source reasoning model built by a small team at Beike, and it just embarrassed some of the biggest AI models on the planet using a fraction of the resources. Here are the numbers that are making researchers lose their minds right now:
→ 85.3 on AIME 2024 (a brutal math competition benchmark)
→ 74.4 on AIME 2025
→ 70.3 on LiveCodeBench (real-world coding tasks)
→ 92.5 on Arena-Hard (general conversation quality)

To put this in perspective, DeepSeek-R1 is a 671 billion parameter model that costs a fortune to run and requires a cluster of high-end GPUs. AM-Thinking-v1 beat it on every single one of those benchmarks while being roughly 20 times smaller. It even rivals Qwen3-235B, a model that is over 7 times its size and backed by Alibaba's resources.

But here is what makes this story truly wild. This model was not built by OpenAI. Not by Google. Not by Meta. It was built by a team at a Chinese real estate tech company using nothing but publicly available open source components. They took the freely available Qwen2.5-32B base model, applied a carefully designed training pipeline combining supervised fine-tuning and reinforcement learning, and turned it into one of the most capable reasoning models in the world. No proprietary data. No billion-dollar compute budget. No massive research lab.

The entire training approach came down to two smart decisions. First, they fine-tuned the model on a blended dataset of math, code, and conversation to teach it a "think then answer" pattern. Instead of just spitting out responses, the model learns to reason through problems step by step before giving a final answer. Second, they used a pass-rate-aware system for reinforcement learning. They tested the model on thousands of problems, threw away the ones it already solved easily, threw away the ones it completely failed, and only trained on the problems in the sweet spot where the model could learn the most. That single decision made the training dramatically more efficient.

The result is a model that runs on a single A100 GPU with predictable performance and no routing overhead, while competing with models that need entire server rooms to operate. This changes the game for everyone building with AI right now. You no longer need access to the biggest models or the most expensive infrastructure to get world-class reasoning capabilities. A well-trained 32 billion parameter model can now match or beat models that are 7 to 20 times its size.

The lesson here is not about model size. It is about training intelligence. Brute-force scaling is hitting a wall. The teams that are winning now are the ones building smarter training pipelines, not bigger models. The open source community just proved that a carefully designed approach at 32B scale can compete with the best proprietary systems on earth. This is a massive shift. The AI moat is no longer about who has the most GPUs. It is about who trains the smartest.

♻️ REPOST this to share it with others. P.S. Do you prefer using open source or other hyped LLMs?
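The pass-rate filtering described in that thread can be sketched in a few lines. This is an illustrative reconstruction, not AM-Thinking-v1's actual code: the `solve` callback, sample count, and thresholds are all assumptions.

```python
from typing import Callable

def filter_by_pass_rate(
    problems: list[str],
    solve: Callable[[str], bool],   # one sampled attempt; True if correct
    samples: int = 8,
    low: float = 0.125,             # drop problems the model never solves
    high: float = 0.875,            # drop problems it already solves easily
) -> list[str]:
    """Keep only problems in the 'sweet spot' band of partial success."""
    kept = []
    for p in problems:
        rate = sum(solve(p) for _ in range(samples)) / samples
        if low <= rate <= high:
            kept.append(p)
    return kept
```

The design intuition: problems the model always solves carry no learning signal, and problems it never solves mostly produce noisy gradients, so RL compute is concentrated on the middle band where improvement is possible.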
21 replies · 36 reposts · 139 likes · 10.5K views
Bitplanet @Bitplanet_AI
@r0ck3t23 If AI reshapes productivity that deeply, society may need new ways to define value and contribution.
0 replies · 0 reposts · 1 like · 10 views
Dustin @r0ck3t23
Anthropic CEO Dario Amodei just identified the most dangerous outcome on the AI board. Partial automation isn't the safe path. It's the most destructive one.

Amodei: "I actually think the most societally divisive outcome is if randomly 50 percent of the jobs are suddenly done by AI because what that means the societal message is we're picking half, we're randomly picking half of people and saying you are useless, you are devalued, you are unnecessary."

A civilization where half the workforce is mathematically obsolete and the other half controls the compute engine doesn't gradually adjust. It fractures. The current economic system measures human value entirely by repetitive output. When AI drives the cost of that output to zero, the definition of usefulness collapses with it.

Amodei: "We're going to have to look at what is technologically possible and say we need to think about usefulness and uselessness in a different way than we have before. Our current way of thinking has not been tenable."

Define yourself strictly by your ability to process administrative tasks and the algorithm will replace you by morning. The next era doesn't measure people by manual throughput. It measures them by what they choose to build when survival is no longer the constraint.

Amodei: "I don't know what the solution is but it's got to be different than we're all useless, right? We're all useless is a nihilistic answer. We're not gonna get anywhere with that answer. We're gonna have to come up with something else."

The most toxic belief in the current tech sector is that AGI renders humanity pointless. When the machine absorbs the mundane friction of survival, human ambition doesn't collapse. It scales. Nihilists will surrender to the algorithm and accept the narrative of obsolescence. Builders will take the same compute engine and aim it at something the machine never would have chosen on its own.
73 replies · 42 reposts · 202 likes · 64.5K views
Bitplanet @Bitplanet_AI
@kimmonismus When technological shifts accelerate this fast, the real challenge becomes how societies adapt alongside them.
0 replies · 0 reposts · 0 likes · 1.7K views
Bitplanet @Bitplanet_AI
@heygurisingh Matching benchmark scores doesn't mean matching reasoning; the real gap appears in search strategy and understanding.
0 replies · 0 reposts · 0 likes · 136 views
Guri Singh @heygurisingh
🚨BREAKING: A new benchmark just exposed the biggest lie in AI. Your AI agent isn't "reasoning" through documents. It's throwing 270 million tokens at the wall and praying.

Snowflake, Oxford, and Hugging Face tested every frontier model on real document search. 2,250 questions. 800 PDFs. 18,619 pages. 1,200 hours of human annotation.

The best AI agent, Gemini 3 Pro, scored 82.2%. Humans scored 82.2%. Perfect match. Headlines would call this "human-level performance." Then they checked which questions each got right. The overlap was 24%. Cohen's kappa of 0.24. Humans and AI were solving completely different questions. Same score. Totally different intelligence.

But that's not the bad part. Humans nailed 50% accuracy on their very first search query. Gemini 3 Pro? 12%. The best AI agent on Earth needed 9 rounds of blind searching to reach what a human does in one shot. When searches failed, humans immediately changed strategy. AI agents? They rephrased the same failed query with minor tweaks and tried again. The worst agent, GPT-4.1 Nano, barely changed its queries at all. 48.2% of its responses were straight-up refusals. It just gave up.

With perfect retrieval, humans hit 99.4%. The best AI agent with the same documents? Stuck at 82.2%. An 18% gap that no amount of compute could close. Claude Sonnet 4.5's recursive model burned 270 million input tokens, $850 per test run, and still couldn't beat its own cheaper version using basic keyword search.

3,273 agent errors analyzed. 35.7% couldn't even find the right document. Not the right page. The right file. Your AI agent isn't reading your documents. It's playing a slot machine with your data and billing you for every pull.
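The same-score, different-answers point in that thread is exactly what Cohen's kappa measures: agreement between two graders corrected for the agreement you would expect by chance. A minimal sketch of the two-rater, binary-label case; the example data is made up:

```python
def cohens_kappa(a: list[bool], b: list[bool]) -> float:
    """Chance-corrected agreement between two boolean label lists,
    e.g. per-question correctness for a human and an AI agent."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    p_e = pa * pb + (1 - pa) * (1 - pb)          # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Same accuracy (50% each) but zero chance-corrected agreement:
print(cohens_kappa([True, True, False, False], [True, False, True, False]))  # 0.0
```

A kappa of 0.24, as reported, means the human and the agent agreed only slightly more often than chance would predict, despite identical overall scores.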
62 replies · 134 reposts · 454 likes · 34.6K views