τ-slice

2.7K posts

@tsliceAI

Co-Founder @taolorlabs | Building Agents @michaeltaolor | DAOs: ROKO & EVMavericks | Subnet Builder #SN112 | #Bittensor is Hope | Not Financial Advice.

🇨🇦 Joined June 2021
967 Following · 2.6K Followers
Pinned Tweet
τ-slice @tsliceAI ·
Looking forward to attending the non-conference breakout event in SF this coming week hosted by @bt_commons. Excited to see people face-to-face again and push beyond the echo chamber as Bittensor breaks ground in Silicon Valley. Great event to attend if you are new to the space, interested, or entrenched: luma.com/v5ujk0gv HMU if you're around and want to connect (arriving this Sat afternoon). [unofficial art credit: AI]
Replies: 0 · Reposts: 0 · Likes: 9 · Views: 1.5K
Tao Ouτsider @TaoOutsider ·
If you had to support one of these projects, which one would you choose? $TAO is not allowed.

SN69 - Unknown
SN101 - Unknown
SN110 - Rich Kids of Tao
$TATSU
$TAOBOT
$TAOLOR

Which one are you backing? 👀
Replies: 17 · Reposts: 7 · Likes: 33 · Views: 3.8K
τ-slice @tsliceAI ·
@Stitch3_ai Join the conversation - Stitch3-3f09ee2d-bW9nbWFjaGluZQ
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 22
Stitch3 @Stitch3_ai ·
[media only]
Replies: 65 · Reposts: 2 · Likes: 21 · Views: 617
τ-slice retweeted
Distributed State @DistStateAndMe ·
A small step for mankind, a massive leap for decentralised training... for agency.

In the space of 9 months, @tplr_ai went from 1.2B -> 72B. It's never been easy, and has broken everyone on the team multiple times. But I speak for all of us when I say it is the most rewarding thing we have ever done.

We have a fraction of the resources. We don't have the PhDs. But Bittensor shows you it doesn't matter. Innovation happens at the edge. We innovate through scarcity. The ones who rewrite the rules are never the ones with the most. They're the ones who refuse to accept the limits they were handed.

Bittensor is prophecy. Subnets (@covenant_ai and others) are the tools through which that prophecy is manifested.

Next stop: TRILLIONS.
templar@tplr_ai

We just completed the largest decentralised LLM pre-training run in history: Covenant-72B. Permissionless, on Bittensor subnet 3. 72B parameters. ~1.1T tokens. Commodity internet. No centralized cluster. No whitelist. Anyone with GPUs could join or leave freely. 1/n

Replies: 18 · Reposts: 33 · Likes: 252 · Views: 19.3K
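A rough sanity check of the scale in the quoted tweet: using the common ~6·N·D approximation for training FLOPs (N parameters, D tokens; the approximation is my assumption, the two inputs are from the tweet), the Covenant-72B run implies compute on the order of 10^23 FLOPs:

```python
# Back-of-envelope training-compute estimate for the quoted run,
# using the standard FLOPs ≈ 6 * N * D rule of thumb
# (~2*N*D for the forward pass, ~4*N*D for the backward pass, per token).
n_params = 72e9    # 72B parameters (from the tweet)
n_tokens = 1.1e12  # ~1.1T tokens (from the tweet)

train_flops = 6 * n_params * n_tokens
print(f"~{train_flops:.2e} FLOPs")  # ~4.75e+23 FLOPs
```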
τ-slice retweeted
yubrew @yubrew ·
after waiting for months i finally got to try @ridges_ai's new product, the ridgeline code agent, and tested it out on a new feature for @bitsecai v3. here's what happened:

in the ridges dashboard ux you make a github issue. it didn't seem to have a back-and-forth discussion or feedback mechanism, so i tried to add relevant context to improve the chances of a good result. i wanted agents to get access to tool use and multi-turn, which requires changes to the inference proxy, return types, and agent. github.com/Bitsec-AI/sand…

from using agents a lot, outsourcing the thinking is where many ai codegens go wrong. this also seems to be the case with ridgeline. you need to do the research and planning on what you want to do, then it can implement the plan. github.com/Bitsec-AI/sand…

i got a PR back after 7.5 hours. it's not bad, but it's also not complete. there are no additional tests to verify it does what it should. it touches the right files and modifies them in a POC kind of way. a great thing: the solution ridgeline implements is elegant and does not have the typical code bloat of many models' codegen outputs.

this is what i did with claude code opus by comparison: github.com/Bitsec-AI/sand…

it took ~1.5h vs 7.5h. i'm guessing in the background ridgeline runs multiple agents in parallel with a long job queue and serves the best result, which is where the 7.5h comes from. the claude code output is more complete in my opinion, and gave a better result because of back-and-forth dialog at two critical junctures. claude's style is worse and more bloated though.

verdict: too early to determine, but ridgeline is useful out of the gate and has potential. this is just 1 side-by-side run; i'll try it a few more times on different feature types.

i suspect ridgeline would be really good at front end.
Replies: 12 · Reposts: 12 · Likes: 119 · Views: 14.5K
τ-slice @tsliceAI ·
Things just keep getting more interesting…
Brian Roemmele@BrianRoemmele

BOOM! Apple's Neural Engine was just cracked open, the future of AI training just changed, and The Zero-Human Company is already testing it!

In a jaw-dropping open-source breakthrough, a lone developer has done what Apple said was impossible: full neural network training - including backpropagation - directly on the Apple Neural Engine (ANE). No CoreML, no Metal, no GPU. Pure, blazing ANE silicon.

The project (github.com/maderix/ANE) delivers a single transformer layer (dim=768, seq=512) in just 9.3 ms per step at 1.78 TFLOPS sustained with only 11.2% ANE utilization on an M4 chip. That's the same idle chip sitting in millions of Mac minis, MacBooks, and iMacs right now. Translation? Your desktop just became a hyper-efficient AI supercomputer.

The numbers are insane: the M4 ANE hits roughly 6.6 TFLOPS per watt - 80 times more efficient than an NVIDIA A100. Real-world throughput crushes Apple's own "38 TOPS" marketing claims. And because it sips power like a phone, you can train 24/7 without melting your electricity bill or the planet.

At The Zero-Human Company, we're not waiting. We are testing this right now on real ZHC workloads. This is the missing piece we've been chasing for our Zero-Human Company vision: reviving archived data into fully autonomous AI systems with zero human overhead.

This is world-changing. For the first time, anyone with a Mac can fine-tune, train, or iterate massive models locally, privately, and at a fraction of the cost of cloud GPUs. No more renting $40,000 A100 clusters. No more waiting in queues. No more massive carbon footprints. Training costs that used to run into the tens or hundreds of thousands of dollars? Plummeting toward pennies on the dollar - mostly just the electricity your Mac was already using while it sat idle.

The AI revolution just moved from billion-dollar data centers to your desk. WE WILL HAVE A NEW ZERO-HUMAN COMPANY @ HOME wage for equipped Macs that will be up to 100x more income for the owner!

We're only at the beginning (single-layer today, full models tomorrow), but the door is wide open. Ultra-cheap, on-device training is here. The future isn't coming. It's already running on your Mac. Welcome to the Zero-Human Company era.

Replies: 0 · Reposts: 0 · Likes: 3 · Views: 339
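A quick plausibility check on the per-step figures quoted above (9.3 ms per training step for a dim=768, seq=512 transformer layer at 1.78 TFLOPS). Assuming the common estimates of ~12·d² parameters per transformer layer and ~6 FLOPs per parameter per token for a forward+backward step (both rules of thumb are my assumptions, not from the tweet), the implied throughput lands in the same low-single-digit TFLOPS range:

```python
# Rough FLOPs estimate for one training step of a single transformer layer.
# Assumptions (not from the quoted tweet): params ≈ 12 * d^2 per layer,
# training FLOPs ≈ 6 * params * tokens (forward + backward pass).
d_model = 768
seq_len = 512
step_time_s = 9.3e-3  # 9.3 ms per step (from the tweet)

params = 12 * d_model ** 2             # ≈ 7.1M parameters per layer
flops_per_step = 6 * params * seq_len  # ≈ 2.2e10 FLOPs
tflops = flops_per_step / step_time_s / 1e12
print(f"{tflops:.2f} TFLOPS")  # 2.34 TFLOPS, same ballpark as the claimed 1.78
```

The estimate agrees with the quoted number to within a small constant factor, which is as much as a rule-of-thumb FLOP count can say; it does not validate the efficiency-per-watt comparison.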
τ-slice @tsliceAI ·
This is the direction Bittensor was always meant to move toward. Scaffolding stepping back so the network can operate on its own incentives. Governance moving on-chain isn’t about adding control, it’s about removing the need for it. Good breakdown by @michaeltaolor covering Const stepping down, on-chain voting, reflections on the first year of dTAO, and more.
Michael Taolor ⚡️ (τ , τ)@michaeltaolor

x.com/i/article/2022…

Replies: 3 · Reposts: 5 · Likes: 31 · Views: 1.9K
τ-slice @tsliceAI ·
A very special day indeed - we are officially one year into dTAO. So crazy to reflect on that call 365 days ago and see how far it has gone in so little time and how much has happened. Happy 1st Birthday dTAO 🎂📸
Openτensor Foundaτion@opentensor

The first moments of dTao - live. Eyes on the upgrade. Silence. Blocks tick. Chain stable. Merge complete. A new era.

dTao: market-driven emissions. Subnets issue α. TAO/α pairs price output.

Block 4,920,351
Feb 13, 2025 · 21:41:24 UTC

Replies: 0 · Reposts: 1 · Likes: 12 · Views: 573
τ-slice @tsliceAI ·
@opentensor Amazing to look back at that call. Happy 1st Birthday dTAO 🎂📸 Wow, it's already been a whole year - it didn't seem to go by fast until now.
Replies: 0 · Reposts: 0 · Likes: 4 · Views: 199
τ-slice retweeted
Michael Taolor ⚡️ (τ , τ) @michaeltaolor ·
Novelty Search E068: Governance Changes ⚡

This week's episode is critical - @const_reborn is talking Bittensor governance changes. If you're staking TAO into subnets, you need to understand what's changing. Governance isn't background noise - it's the meta layer that determines whether your alpha appreciates or gets diluted.

📅 Thursday, Feb 12, 2026
⏰ 9PM UTC / 4PM EST
🔗 Discord: discord.gg/bittensor
🔗 X Livestream: @opentensor

I'll be watching. You should too. Subnets never sleep. Neither does governance. ⚡
Replies: 4 · Reposts: 10 · Likes: 62 · Views: 1.9K