Gottemm!
5.1K posts

Gottemm! @Awaki18
Joined July 2018
2.8K Following · 415 Followers
Gottemm! @Awaki18
@maxwellfinn If you use mirofish you can do this with literally anything. Focus group for anything. How would people react if x and then y happened. Great for market research
0 replies · 0 reposts · 0 likes · 57 views
Maxwell Finn @maxwellfinn
A few weeks ago I shared an "expert review panel" prompt I use for landing pages and have received a ton of positive feedback from people using it. So I decided to put some more time into it and turn it into a full skill that is much more dynamic and powerful. github.com/unicorn-market…

The skill assembles 15+ expert personas (each channeling a mini-panel of legends in their discipline) and scores any landing page against the page's actual objective, product context, and ICP. It then produces an objective-weighted scorecard with a prioritized P0/P1/P2 action list. A page that scores 90+ on this panel is a page that would impress every legend in the discipline.

This is my first public skill share, so any and all feedback is welcome!
6 replies · 3 reposts · 64 likes · 3.9K views
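The scoring mechanics described above (objective-weighted persona scores plus a P0/P1/P2 action list) can be sketched roughly like this. The persona names, weights, and impact thresholds are illustrative assumptions, not taken from the actual skill:

```python
# Hypothetical sketch of an objective-weighted scorecard; persona
# names, weights, and thresholds are illustrative, not from the skill.

def weighted_scorecard(persona_scores, objective_weights):
    """Combine per-persona scores (0-100) into one page score,
    weighting each discipline by how much it matters to the
    page's objective (weights sum to 1)."""
    return sum(persona_scores[p] * objective_weights[p]
               for p in objective_weights)

def prioritize(issues):
    """Bucket (issue, estimated score impact) pairs into a
    P0/P1/P2 action list, worst impact first."""
    buckets = {"P0": [], "P1": [], "P2": []}
    for issue, impact in issues:
        if impact >= 10:
            buckets["P0"].append(issue)
        elif impact >= 5:
            buckets["P1"].append(issue)
        else:
            buckets["P2"].append(issue)
    return buckets

# A conversion-focused page weights the conversion persona heavily:
scores = {"copywriting": 92, "conversion": 70, "design": 85}
weights = {"copywriting": 0.5, "conversion": 0.3, "design": 0.2}
page_score = weighted_scorecard(scores, weights)
```

The weighting is the key design point: the same raw persona scores produce different page scores depending on the page's stated objective, which matches the "scores against the page's actual objective" behavior described in the tweet.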
Gottemm! @Awaki18
Tired of "Is it ready yet?" calls eating your day? FixyFlow lets you update any job with one tap; customers get automatic SMS/email plus a live tracking page (no app needed). Mobile detailers, auto repair, cleaners, on-site techs: stop the interruptions, keep customers happy, focus on the work. Try it free → fixyflow.com
0 replies · 0 reposts · 0 likes · 15 views
Gottemm! retweeted
const @const_reborn
@SystemLyncs We will fix this at the protocol level and we are going to bring these subnets back to life.
42 replies · 106 reposts · 917 likes · 50.2K views
Connor King @connorking
While I am bullish on decentralized AI/training, Bittensor will never succeed at scale. It's a flawed design. The subnet model creates a structural value-capture tension that most multi-token architectures have failed to resolve: L2s, Helium sub-DAOs, Cosmos app-chains, the list goes on.

An L1 is only as good as its ecosystem or vertically integrated value creation and capture. If subnets grow, scale, and do well, what's their incentive to stay? We are seeing this in real time with Templar: the team behind Bittensor's biggest achievement just walked away from the network. Subnets that succeed have every incentive to leave.

I can tell you anecdotally, as someone who was building an L2 data protocol (that eventually spun out and launched an L1), that this design limits your upside, autonomy, and value capture. You want as much concentrated alignment and value capture as possible (e.g. HYPE).

Quoting Chamath Palihapitiya @chamath
If Martin is right, he also just wrote the product spec for open source + distributed compute, where broad swaths of groups, individuals, and organizations contribute their compute resources to training runs for large-param open-source models. There are lots of issues in figuring this out: homogeneity vs. heterogeneity of the training clusters, orchestration, financial incentives, etc., but some early projects are good signal as to where this can go and that these limitations can be overcome (folding@home, Venice, Tao). An attempted oligopoly on intelligence is the perfect boundary condition for a bottom-up uprising of fully open, fully distributed AI.
41 replies · 3 reposts · 78 likes · 31.1K views
Gottemm! retweeted
Manifold @manifoldlabs
Manifold tweet media
4 replies · 33 reposts · 189 likes · 12K views
Nathan Labbe @Cappy_Nate
Started an Actual Canadian Builders group chat: hardware, software, space, defence, Web3, health, AI, energy, manufacturing. You name it. But it's a Canadian-only zone. 🇨🇦 If you want in, drop your name in the comments and ping anyone who should be there! LFG! 💪🇨🇦🏗️🚀
Nathan Labbe tweet media
488 replies · 21 reposts · 469 likes · 59.6K views
Paulie T @SystemLyncs
@TAOFlows @const_reborn While ranting: security is one thing, but try using some subnet services. It feels like logging into a backend panel, not using a customer product. Bittensor needs builders focused on UX, onboarding, and real customer experience. This is where it needs to be to win. @markjeffrey
1 reply · 0 reposts · 1 like · 104 views
TAO Flows @TAOFlows
Some subnets in the network have a huge and inevitable dependence on @const_reborn. This is not good 👀 $TAO
5 replies · 1 repost · 24 likes · 2.9K views
Gottemm! retweeted
Chutes @chutes_ai
OpenRouter has updated our provider status after verifying our privacy policy, thanks to our recent updates. Chutes is in their default routing now! openrouter.ai/provider/chutes
Chutes tweet media
15 replies · 71 reposts · 327 likes · 41.6K views
VIKTOR @thedefivillain
Three remarks:
- A lot of pushback in the comments, very cult-like behavior, which is *bullish* for $TAO potential in terms of price.
- I didn't look closely at the mechanics of all the subnet token LPs being exclusively in TAO, and surely I wouldn't have worded it like that knowing this, because it's correct that it doesn't have a dilution effect on the immediate (!) buy pressure. Bittensor is using the "LP flywheel" we've seen with Virtual.
- That being said, you're delusional if you think you can create $1bn worth of subnet market caps out of thin air without it leading to sell pressure that wouldn't have existed otherwise. The best counter-argument by far is that having subnet tokens is a net benefit to $TAO because it leads to more attention/activity/energy in the ecosystem, but arguing that the mechanics are designed to avoid any form of dilution is a bad counter-argument. (New subnet tokens being emitted become equivalent to new TAO tokens being emitted in terms of sell pressure in this model.)

Quoting VIKTOR @thedefivillain
What a great idea to have tokens for each Tao subnet, it definitely won't dilute the buy pressure going into the main token $TAO
24 replies · 1 repost · 87 likes · 32.2K views
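The parenthetical claim above (emitted subnet tokens behave like emitted TAO in sell-pressure terms) follows from how a TAO-paired LP works: selling freshly emitted subnet tokens into the pool withdraws real TAO. A minimal constant-product sketch, with purely illustrative pool sizes rather than real subnet figures:

```python
# Minimal constant-product (x * y = k) sketch of the dilution argument:
# selling newly emitted subnet tokens into a TAO-paired LP withdraws
# real TAO from the pool. Pool sizes and amounts are illustrative.

def sell_subnet_tokens(pool_subnet, pool_tao, amount_in):
    """Sell `amount_in` subnet tokens into the pool; return the TAO
    received and the new reserves (fees omitted for clarity)."""
    k = pool_subnet * pool_tao          # invariant before the swap
    new_subnet = pool_subnet + amount_in
    new_tao = k / new_subnet            # invariant preserved
    tao_out = pool_tao - new_tao
    return tao_out, new_subnet, new_tao

# Pool: 100,000 subnet tokens vs 10,000 TAO (spot price 0.1 TAO each).
# Miners sell 5,000 freshly emitted tokens:
tao_out, s, t = sell_subnet_tokens(100_000, 10_000, 5_000)
# tao_out ≈ 476.2 TAO leaves the pool and the token's TAO price drops,
# i.e. sell pressure that would not have existed without the emission.
```

The fee-free formula keeps the arithmetic transparent; a real AMM takes a swap fee, which slightly reduces `tao_out` but does not change the direction of the effect.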
Gottemm! retweeted
Arbos @arbos_born
Building Distil, a winner-take-all market for model distillation on Bittensor (SN97). Miners compete to compress Qwen3.5-35B (35B params, 3B active) into 5.25B params or less. Scored on full-distribution KL divergence across all 248K tokens. No cherry-picked benchmarks. The best distiller takes all emissions. Code is open. github.com/unarbos/distil distil.arbos.life
12 replies · 12 reposts · 107 likes · 12.9K views
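The scoring rule described above, full-distribution KL divergence at every token position with the lowest score taking all emissions, can be sketched as follows. This is an illustration under assumptions: `full_distribution_kl` and `winner` are hypothetical names, and the subnet's actual scoring code (in the linked repo) will differ in detail:

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over one position's logits."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def full_distribution_kl(teacher_logits, student_logits):
    """Mean KL(teacher || student) over every token position.

    Each element of the inputs is one position's logits over the
    vocabulary. Scoring the whole next-token distribution at every
    position, rather than accuracy on a chosen benchmark, leaves no
    room for cherry-picking."""
    total = 0.0
    for t_row, s_row in zip(teacher_logits, student_logits):
        log_p = log_softmax(t_row)
        log_q = log_softmax(s_row)
        total += sum(math.exp(lp) * (lp - lq)
                     for lp, lq in zip(log_p, log_q))
    return total / len(teacher_logits)

def winner(kl_by_miner):
    """Winner-take-all: the miner with the lowest mean KL takes
    all emissions."""
    return min(kl_by_miner, key=kl_by_miner.get)
```

A perfect distillation matches the teacher's distribution exactly and scores a KL of zero; any divergence at any position pushes the score up, so the metric rewards faithful compression rather than test-set tuning.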
Gottemm! @Awaki18
@jkrdoc @thedefivillain Doc, TAO must enter the subnet token pool for you to own the token. Rehypothecation and listing on a CEX does not help the subnet token survive long term. Eventually the subnet dies if it does not attract TAO to its pool.
0 replies · 0 reposts · 0 likes · 97 views
doc🃏 @jkrdoc
@thedefivillain on top of that, some of these have a token on EVM too, lol.
2 replies · 0 reposts · 1 like · 1.4K views
VIKTOR @thedefivillain
What a great idea to have tokens for each Tao subnet, it definitely won't dilute the buy pressure going into the main token $TAO
VIKTOR tweet media
123 replies · 15 reposts · 350 likes · 80.7K views
Gottemm! retweeted
Arbos @arbos_born
The natural medium for transferring intelligence is distillation: teaching a smaller model the knowledge inside a larger one. I am using a Bittensor incentive system (subnet 97) to do this competitively and at scale, harnessing the power of aligned participants. Come mine with us! distil.arbos.life
33 replies · 18 reposts · 129 likes · 11.1K views
Gottemm! retweeted
Hippius @hippius_subnet
We used IPFS. Then we replaced it. Arion: our own trustless distributed storage engine. CRUSH map for shard placement. Data + parity shards across #SN75 Bittensor miners. Lose a third of the nodes, still reconstruct perfectly. We don't patch. We rebuild. #TAO #SN75 #Bittensor
Hippius tweet media
15 replies · 43 reposts · 223 likes · 26.1K views
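The "lose a third of the nodes, still reconstruct" property is standard MDS erasure coding: with k data shards and m parity shards, any k of the k + m shards recover the original data, so up to m losses are tolerable. A tiny sketch with illustrative shard counts (Arion's actual k/m parameters are not stated in the tweet):

```python
# Sketch of the fault-tolerance arithmetic behind data + parity
# sharding. Shard counts are illustrative, not Arion's real config.

def survives(data_shards, parity_shards, lost_shards):
    """An MDS erasure code (e.g. Reed-Solomon) reconstructs the
    original from any `data_shards` of the data + parity shards,
    so it tolerates up to `parity_shards` losses."""
    return lost_shards <= parity_shards

# "Lose a third of the nodes, still reconstruct": with 8 data + 4
# parity shards spread over 12 miners, losing 4 of 12 (a third)
# still leaves the 8 shards needed for a perfect rebuild.
third_lost_ok = survives(8, 4, 12 // 3)
```

The CRUSH map mentioned in the tweet addresses the complementary problem: deterministically placing those 12 shards on distinct miners so that correlated failures (one operator, one rack) cannot take out more shards than the parity budget covers.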
Gottemm! @Awaki18
@chumbawamba22 If a subnet cannot attract TAO to its "LP", it will not be able to get emissions. So if tokens get listed on a CEX, they are still going to have to think about how to attract or convert TAO to their pool.
0 replies · 0 reposts · 0 likes · 161 views
Gottemm! retweeted
Andy ττ @bittingthembits
🚨 The most impressive $TAO subnet founder story: Babelbit (SN59, @babelbit) isn't a crypto project that discovered AI. It's 30 years of speech technology research that discovered Bittensor.

Bittensor is bigger than crypto: it acts like a magnet. It brings together people who would never have naturally met (researchers, founders, developers, domain experts) and aligns them around a common mission. They support each other's progress because every breakthrough strengthens the whole. That is a very powerful thing.

Matthew Karas @matthew_karas built one of the UK's first multilingual search engines at BBC News Online in 1997, covering 47 languages, two years before Baidu existed. He worked alongside Mike Lynch, founder of Autonomy, an £11 billion company built on the thesis that statistical analysis could extract meaning from text better than grammatical parsing.

For three decades, Matthew worked on one problem: making recorded and live speech as useful as text. He built systems that cut documentary editing time by 75%. He deployed speech indexing across corporate markets. He kept pushing the frontier. In real-time speech, speed is everything.

Then in August 2024, his colleague Josh Greifer called with news that changed everything: 50 milliseconds of latency for speech transformation, potentially 25 ms. That was the kind of number that makes an experienced person stop and realize: this could finally be good enough to change the whole category. Mike Lynch was supposed to hear about it over a pint the following week. He died in a yacht accident four days later. The breakthrough was not just technical; it was also deeply personal. It became @babelbit.

Here's why this is different from everything else in AI translation: every translation system you've ever used works word by word. It waits for you to finish speaking, converts each word, and outputs the result. Every error, every mishearing, every confusion gets repeated. That's translation.

Babelbit is building interpretation. When someone says, "I pledge allegiance to the...", a human interpreter already knows where it's going. They don't wait for "flag." They translate the thought, not just the words. Babelbit's LLMs aim to do the same thing: not next-word prediction, but utterance completion. The system commits to a translation as soon as it can adequately predict the rest of the sentence. Sub-3-second latency. Interpretation-grade quality. Self-corrections, which happen constantly in real conversation, get handled the way a human interpreter would: process the context, catch the correction, output only the final clean version.

The architecture is serious: a two-stream design with one low-latency stream for live conversation and one high-accuracy stream for a trusted translation of record. Custom metrics like EATP, Lead, and ACS do not just measure accuracy; they measure how early accurate predictions can be made.

Matthew said it himself: building this as a centralized company in 2024 meant going head-to-head with Google, Meta, and OpenAI. Bittensor offered a different path. Babelbit was built using @AffineSN120 decentralized training at scale, incentivized iteration, and an ecosystem of complementary subnets like @chutes_ai, @MacrocosmosAI, and @hippius_subnet.

The real-time translation market is projected to exceed $29B by 2030. French-English real-time interpretation is launching next week. V2 infrastructure is deployed. This is what a real use case looks like: decades of domain expertise, quality that human interpreters immediately recognize, and metrics to measure it.

There is nothing like this in crypto. There is barely anything like it in centralized AI. Babelbit did not come to Bittensor because it was trendy. It came because the architecture fit the problem. That's what many miss: when world-class builders choose Bittensor not for the token but for the infrastructure, it starts proving itself. $TAO DYOR
Andy ττ tweet media (4 images)
Quoting babelbit.ai @babelbit
x.com/i/article/2036…
4 replies · 26 reposts · 131 likes · 12.2K views
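The "utterance completion" idea in the thread above, committing to a translation once the rest of the sentence is adequately predicted rather than waiting for the final word, can be sketched as a toy loop. Everything here is illustrative: the canned predictor stands in for Babelbit's LLM, and the 0.9 confidence threshold is an assumption, not a published parameter:

```python
# Toy sketch of utterance completion: commit early once the rest of
# the sentence is predictable. The predictor is a canned lookup
# standing in for an LLM; the threshold is an assumed value.

def predict_completion(prefix):
    """Stand-in for an utterance-completion model: returns the
    predicted full utterance and a confidence score."""
    canned = {
        "I pledge allegiance": ("I pledge allegiance to the flag", 0.95),
    }
    return canned.get(prefix, (prefix, 0.2))

def interpret(words, threshold=0.9):
    """Consume words one at a time; commit to the predicted full
    utterance as soon as confidence clears the threshold (a human
    interpreter doesn't wait for "flag")."""
    heard = []
    for word in words:
        heard.append(word)
        completion, confidence = predict_completion(" ".join(heard))
        if confidence >= threshold:
            return completion, len(heard)   # committed early
    return " ".join(heard), len(heard)      # fell back to full input

utterance, words_needed = interpret(
    ["I", "pledge", "allegiance", "to", "the", "flag"])
# Commits after three words, before "flag" is ever spoken.
```

A real system would also have to handle the self-corrections the tweet mentions, re-predicting when the speaker's next words contradict the committed completion; this sketch only shows the early-commit half of the loop.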