Chairman τao

10.3K posts


@MarsSmuff

Irresponsibly long τao. Life. Liberτy. Biττensor.

Church of Rao, τaoτardisτan · Joined September 2021
2.7K Following · 1.4K Followers
RVCrypto
RVCrypto@RvCrypto·
The first signs of how fast Score’s Vision AI on Manako is scaling are already here. In just 7 days, miners on $TAO Subnet 44 have hit the 85% target for person detection. I’ll call it: with the new Manako infrastructure, Score is about to redefine what’s possible in Vision AI.
4 replies · 0 reposts · 17 likes · 678 views
Chairman τao
Chairman τao@MarsSmuff·
@369_theory The dread of soy aged him, but cooking up SOTA brought him back to life
0 replies · 0 reposts · 0 likes · 2 views
Te$la
Te$la@369_theory·
@MarsSmuff Max looks like he’s aged, must be that SOTA sauce 🤣🤣
1 reply · 0 reposts · 0 likes · 10 views
Gully Foyle #UKTrade
Gully Foyle #UKTrade@TerraOrBust·
"So you want the UK to join the EU?" "Yes"
"So you want to give up the Pound and accept the Euro?" "No"
"So you want to join the Schengen area and allow completely open borders?" "No"
"So you want to be part of the EU Migration Pact, another 100k illegal migrants to the UK a year?" "No"
"So you want to re-introduce the testing of cosmetics on animals, required by EU law?" "No"
"So you want to re-introduce live exports of animals for fattening or slaughter?" "No"
"So you want to give control of UK fishing waters and quotas back to the EU?" "No"
"So you want to reverse the protections of UK marine birds like puffins, who were endangered due to EU overfishing of their main food source?" "No"
"So you want to give up the better trade relationship the UK now has with the USA, Japan, India, Australia, New Zealand, Mexico, Singapore, and countless other countries?" "No"
"Well, it sounds like you don't want to join the EU then."
49 replies · 480 reposts · 1.3K likes · 35.1K views
Calli Tahmo
Calli Tahmo@BrokenSerenity·
@MarsSmuff @ia9226365 @MartinOsborne_ But what if you do a thing that the government decides is terrorism? Given how bought and paid for a lot of parties are these days by Israel, that's an extreme possibility... so the police only being able to hold you for 4 days on said nonsense charges is better than 14, no?
1 reply · 0 reposts · 0 likes · 46 views
Martin Osborne 💚
Martin Osborne 💚@MartinOsborne_·
Another day, another journalist who doesn’t understand how the Green Party makes policy 🤦‍♂️

“The Green Party” doesn’t draft proposals; it’s individual members, or groups of like-minded members, that do that 🤝

Then, any proposal will need a lot of support from other members to finally be debated, and it’s only ever the most serious ones that get voted on 🗳️

I predict that you’ll hear a lot about supposed Green proposals over the next few years from journalists looking for stories, but many will likely never come close to becoming policy ☑️
Politics UK@PolitlcsUK

🚨 NEW: The Green Party has drafted proposals to reduce the time police can detain terror suspects from 14 days to 4 [@genevieve_holl]

31 replies · 38 reposts · 506 likes · 62.4K views
Chairman τao
Chairman τao@MarsSmuff·
@TuckerClemens Right, so the 60+ million people that died under socialist governments in the 20th century were the rich folks then?
0 replies · 0 reposts · 0 likes · 7 views
Chairman τao
Chairman τao@MarsSmuff·
@0Calamity @orlaminihane Please explain how people who disproportionately create vast amounts of tax income for the country, directly and indirectly, and who never use the services they pay for, are to blame?
0 replies · 0 reposts · 1 like · 10 views
CrémantCommunarde #4402 💚👊🕊️
Why misquote what she said when people are able to watch the video? She said: "The NHS is broke not because of immigration but because it's been underfunded, and billionaires are getting richer and richer". Didn't she, @orlaminihane? Why do the far right have to lie every damn time to make their points? It's pathological with you lot. You wouldn't know the truth if it jumped up and bit you on the nose.
Orla Minihane@orlaminihane

“The NHS is Failing because Billionaires are getting Richer and Richer” thanks for clearing that up Hannah !!! 🙄deary me… 🤡 @TheGreenParty

59 replies · 87 reposts · 417 likes · 9.7K views
Mothin Ali
Mothin Ali@MothinAli·
Your obsession with it is a racist dog-whistle, that's why no one takes you seriously. However this is the wrong Eid...
Rupert Lowe MP@RupertLowe10

@MothinAli Thoughts on banning halal slaughter, mate? Nobody from the 'Greens' seems keen to answer me on that?

838 replies · 99 reposts · 1.4K likes · 236.6K views
J Stewart
J Stewart@triffic_stuff_·
🚨 STARMER’S THOUGHT POLICE STRIKE AGAIN! 😡👮‍♂️

SHOCKING MIDNIGHT RAID: POLICE WAKE MOTHER & CHILD AT 2:30AM TO DEMAND SHE DELETE SOCIAL MEDIA VIDEOS 💬🚫

What the hell has Keir Starmer done to Britain? At 2:30 in the morning, police banged on a sleeping mother’s door and woke both her and her young child. Their reason? Not an emergency. Not a crime in progress. They wanted her to immediately delete videos she had posted online.

Bodycam and doorbell footage captures the officer’s exact words:
“I’ve been made aware that videos have been posted onto social media.”
“I’m here to ask that you remove said videos from social media.”
“You might be committing offences under the Malicious Communications Act.”

She refused to delete them. She recorded the entire late-night visit and made an official IOPC complaint.

This is Britain under Starmer: police sent to people’s homes in the middle of the night to act as social-media censors.
J Stewart@triffic_stuff_

🚨 IS STARMER COMPROMISED? FROM FREE SPEECH DEFENDER TO JAILING PEOPLE FOR SOCIAL MEDIA POSTS 🤔🔒

What happened to the UK Prime Minister, and why his stance on social media arrests flipped so dramatically.

In 2012, as Director of Public Prosecutions, Keir Starmer proudly declared himself a guardian of free speech. In that now-infamous video he insisted: “Where a communication is merely offensive... principles of free speech require a high threshold, and dictate that a prosecution is unlikely to be in the public interest.”

Today, Prime Minister Starmer presides over a regime that has racked up around 12,000 arrests tied solely to social media posts. The numbers are so grotesque that even Russia and China suddenly look like bastions of free speech by comparison. The hypocrisy is breathtaking.

Many on X are no longer asking politely; they’re outright declaring that Starmer has been compromised. Whether it’s political cowardice, shadowy external influence, or naked authoritarian instinct, the man who once set a high bar for prosecution now jails people for hurting feelings online.

He still trots out the same tired lines about “protecting children” and preventing disorder after the 2024 riots. But the latest escalation is his determined push for brand-new laws specifically targeting “Islamophobic” content. Critics say this is not about hate; it’s about criminalising legitimate public concern and dissent over mass migration, rapid demographic changes, growing Islamic influence in public life, grooming gang scandals that were long ignored, and the general state of the country.

What many see as valid political criticism and free debate is now being re-labelled as illegal “Islamophobia” to shut it down. What started as emergency measures after unrest has quietly morphed into a broader, relentless expansion of censorship dressed up in ever-changing justifications. The old Keir is nowhere to be seen. The contrast is damning.

1.1K replies · 7.2K reposts · 14.2K likes · 534.5K views
Chairman τao
Chairman τao@MarsSmuff·
RT @const_reborn: Surely the idea of universal human rights has been shattered by now. The behavior of our governments and the people pulli…
0 replies · 31 reposts · 0 likes · 16 views
The Green Party
The Green Party@TheGreenParty·
“Trump is a bully and a blackmailer, and he has made absolutely no plan for what an exit strategy looks like. When he asks for our help we should say: ‘Donald, you broke it, you fix it.’” @CarolineLucas speaking on the Iran war on #BBCQT.
65 replies · 380 reposts · 1.1K likes · 14.6K views
Chairman τao
Chairman τao@MarsSmuff·
"This is still pre-training. The real edge in AI comes from post-training, RLHF, alignment loops, basically where models become actually useful." Enter @grail_ai $TAO
Karamata_ 💎@Karamata2_2


12 replies · 1 repost · 14 likes · 744 views
Jolly Green Investor 🍀
Jolly Green Investor 🍀@jollygreenmoney·
Nothing to see here… Just Jensen Huang (CEO of the world’s most valuable company Nvidia) and Chamath discussing Bittensor $TAO 🤯
88 replies · 281 reposts · 1.5K likes · 306.9K views
IBBY 🍉
IBBY 🍉@ia9226365·
@MarsSmuff @MartinOsborne_ Imagine this government dislikes your tweets and calls you a terrorist, would you rather they have 4 days to charge you or 14?
1 reply · 0 reposts · 8 likes · 306 views
Karamata_ 💎
Karamata_ 💎@Karamata2_2·
🔥 Exactly. Templar changed how I think about AI infra.

I didn’t expect much from decentralized AI, but seeing @tplr_ai train a 72B model on 1.1T tokens across ~70 permissionless nodes on Bittensor ($TAO) is already unusual on its own. What really changed my mind is how they made it work.

- At this scale, training is limited by coordination. Normally you’re pushing ~280GB of data per synchronization step between nodes, which makes decentralized training basically dead on arrival.
- @tplr_ai compressed that down to ~2.2GB and reduced sync frequency massively using SparseLoCo. When I look at that, I see them removing the core bottleneck that killed every previous attempt 🤯.

That’s why I think calling this a DeepSeek moment is actually not exaggerated. DeepSeek showed models can be trained cheaper. Templar shows they can be trained without central coordination at all. -> Those are two very different directions, and this one feels structurally harder to compete with.

Another signal I don’t ignore: people like Anthropic’s Jack Clark publicly framing it as real infra.
- In my experience, that kind of validation usually comes after something already works, not before.
- This is still pre-training. The real edge in AI comes from post-training, RLHF, alignment loops, basically where models become actually useful.

Templar is moving there next with Grail, and for me that’s the real test. If they can decentralize that layer too, then we’re no longer talking about decentralized compute; we’re talking about a fully permissionless AI production pipeline.

What makes Templar stand out to me is the timing and direction they chose.

1/ They went after coordination when the entire AI industry is quietly hitting scaling limits.
- That’s a very different bet, and usually the ones who attack constraints, not trends, are the ones that matter later.

2/ Another catalyst I see is the permissionless design.
- Most decentralized AI systems still gate participation in some way, which kills network effects early.
- Templar went fully open from the start, which means if this model works, it doesn’t just scale linearly, but compounds with more contributors, more experimentation, and more edge cases being solved in parallel.

Also, the fact they are building toward post-training (the RL layer) tells me they understand where real value sits. Pre-training gets attention, but post-training is where models become usable, sticky, and monetizable. If they execute here, they start owning part of the intelligence layer itself.

3/ My prediction based on this: in the short term, most people will still underestimate it, because the model-quality gap vs centralized labs will be the easy argument. But over time, I think Templar becomes:
- a backend layer for open AI development,
- a coordination network for distributed compute,
- and eventually a marketplace for intelligence refinement.

Not dominant overnight, but quietly embedded everywhere. And if that plays out, the upside comes from becoming the system that anyone can build on when they don’t want to rely on @OpenAI at all.
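The thread above hinges on one concrete number: shrinking ~280GB of per-step synchronization traffic down to ~2.2GB. A minimal sketch of top-k gradient sparsification, the general family of communication-compression techniques that methods like SparseLoCo build on, shows how that kind of reduction works in principle. The function names, tensor shape, and ~0.8% keep-ratio below are illustrative assumptions, not Templar's actual implementation.

```python
import numpy as np

def topk_compress(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a gradient tensor.

    Instead of the dense tensor, a node transmits just (indices, values).
    """
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    return idx, flat[idx]

def topk_decompress(idx: np.ndarray, vals: np.ndarray, shape) -> np.ndarray:
    """Rebuild a dense (mostly zero) tensor from the sparse message."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

# A 1000x1000 float32 gradient is ~4 MB dense.
grad = np.random.randn(1000, 1000).astype(np.float32)

# Keep 8,000 entries (~0.8%): roughly 96 KB on the wire (8-byte index plus
# 4-byte value each), about a 40x reduction before any index encoding.
idx, vals = topk_compress(grad, k=8000)
restored = topk_decompress(idx, vals, grad.shape)
```

In practice, schemes in this family also accumulate the dropped residual locally so nothing is permanently lost, and sync less often, which is how a ~127x traffic cut (280GB to 2.2GB) combined with reduced sync frequency becomes plausible.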
templar@tplr_ai

On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run "a pretty crazy technical accomplishment." One correction: it's 72 billion parameters, not four. Trained permissionlessly across 70+ contributors on commodity internet. The largest model ever pre-trained on fully decentralized infrastructure. Jensen's answer is worth hearing too.

32 replies · 21 reposts · 118 likes · 10.5K views