Chris

157 posts

@AgenticToaster

Co-Founder, CTO, and Chief Architect at https://t.co/Np3sn6myqn

Rhode Island, USA · Joined June 2025
99 Following · 244 Followers
Chris
Chris@AgenticToaster·
Excited for this event and I'd love to chat with anyone who wants to know more about Loosh. I'll be around, so feel free to grab me if you see me.
Bittensor Commons@bt_commons

One of the most ambitious projects on Bittensor, on stage at Breakout. @Loosh_ai's CTO @AgenticToaster is combining deterministic and probabilistic systems to develop AI with emotional, moral, and sensory capabilities for conscious robotics. Explore Loosh with Chris in SF ⬇️

Chris retweeted
Teng Yan · Chain of Thought AI
more robots = better. robotics teams often fixate on model quality, but deployment density is what is actually going to drive scale.

sparse deployment (e.g. small pilots) creates a false sense of progress: every failure looks like an edge case, leading to fragmented, local fixes.

heavy deployments will totally change the learning regime. when errors repeat across sites, patterns become obvious. reliability is earned through real-world exposure.

tl;dr: whoever gets the most robots out there in the quickest time, wins
Chris
Chris@AgenticToaster·
Really enjoyed our conversation with Mark Jeffrey on Hash Rate! Check it out!
Mark Jeffrey@markjeffrey

Hash Rate - Ep 152: Loosh Subnet 78 🧙 Guests: @lisacheng and Chris Sorel of @Loosh_ai

00:00 Introduction
02:58 The Origins of Mastercoin and Early ICOs
06:01 The Monroe Institute and Its Impact
12:06 Loosh Subnet and Its Mission
15:10 The Role of AI in Robotics
18:01 Differentiating Loosh from Current AI Models
21:06 Memory and Context in AI Systems
24:01 Partnerships and Future Directions for Loosh
31:58 Robotics and Emotional Inference
38:28 Integrating Robotics in Daily Life
44:55 Exploring Machine Consciousness
55:02 Incentive Mechanism

Chris retweeted
Yuma
Yuma@YumaGroup·
Yuma-accelerated Loosh (SN78) is the first Bittensor $TAO subnet focused on machine consciousness. Now live, the Loosh Cognition Engine is a window into how decentralized agent systems will think, reason, and act, and it offers a platform for builders to test edge cases using ethical evaluators. Test Loosh in beta: loosh.ai/beta
Loosh AI@Loosh_ai

1/ Loosh Cognition Engine Beta is live on Subnet 78! This is the first public look at Loosh’s cognition layer for robotics and agentic systems, built to make reasoning, memory, and constraints observable. Not a chatbot: a real system you can inspect end to end.

Chris retweeted
Mariuszek
Mariuszek@sobczak_mariusz·
Yesterday we had an AMA with @Loosh_ai subnet 78. Here is the recap of our conversation with @AgenticToaster and the CEO of Loosh. Hope that helps everyone understand this company a little bit better.

Humanoid hardware is basically here, we all see the Unitree videos, sometimes amazing, sometimes “please keep that away from my kids.” The gap is not motors, it is cognition and ethics. LLMs are great text machines but they are still just pattern matchers, they do not actually understand right and wrong, so you cannot just bolt GPT onto a robot and hope for the best. Loosh wants to live exactly in that gap.

Right now it ships as a chat agent, but under the hood it already runs their persistent memory, cognitive inference, dynamic execution and three early cognitive services around ethics. Closed beta is coming, then an API and SDK in Q1 so people can plug this into real robots or other agents. You can use the whole stack as an agent, or cherry-pick pieces like memory or ethics.

The ethics engine was the most interesting part. It does not pretend there is one perfect rule set. It looks at actions through four lenses, deontology, virtue ethics, rights-based ethics and utilitarianism, all wired into an ontology of rules, rights and utility scores. It evaluates an action across those four, wraps the result into a structured report and feeds that to the LLM, which still makes the final call but with much better context. Every interaction is then written back into narrative memory with outcomes, so over time the system is reasoning from both rules and experience, not just a static prompt. Their favorite example is the robot refusing to get vodka for an eleven-year-old and being able to explain why.

On the emotional side they are very clear this is the hard part. They want robots to read pain, fear, stress, lying, tone and body language, and then express their own “emotion” in a way that is relatable without being creepy. That is where their EEG plus audio and video work comes in, and the MIME layer that turns internal state into something like an R2D2-style expression, clearly readable but not fake human.

From a business angle they already have a rough model: individual licenses around one thousand a year, business around ten thousand, and then a robotics SDK line for OEM partnerships. Numbers will move, but at least there is a path from ethics demo to revenue.

My takeaway is that Loosh is not just slideware, the skeleton is real, ethics and memory are already running, emotional inference is the big hill. If they can turn this into the default “ethical and emotional brain” for the robots that are coming, this subnet could matter a lot. For now I want to see the closed beta in action and how the first SDK integrations look next year. $TAO
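The four-lens pattern described in the recap can be sketched in a few lines. This is a hypothetical illustration of the idea, not Loosh's actual engine or API: every name, field, and scoring rule below is an assumption. Each lens returns a verdict with a reason, the results are wrapped into a structured report, and a downstream LLM would make the final call with that report as context.

```python
# Hypothetical multi-lens ethics evaluator, in the spirit of the AMA recap.
# All names and scoring rules are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EthicsReport:
    action: str
    verdicts: dict = field(default_factory=dict)  # lens -> (allowed, reason)

    def summary(self) -> str:
        lines = [f"Action: {self.action}"]
        for lens, (allowed, reason) in self.verdicts.items():
            lines.append(f"- {lens}: {'permit' if allowed else 'refuse'} ({reason})")
        return "\n".join(lines)

def evaluate(action: str, facts: dict) -> EthicsReport:
    report = EthicsReport(action)
    # Deontology: a violated hard rule forbids the action outright.
    rule_violated = facts.get("violates_rule")
    report.verdicts["deontology"] = (rule_violated is None, rule_violated or "no rule violated")
    # Rights-based: does the action infringe anyone's rights?
    infringed = facts.get("infringes_right")
    report.verdicts["rights"] = (infringed is None, infringed or "no rights infringed")
    # Virtue ethics: would an agent of good character do this?
    report.verdicts["virtue"] = (facts.get("virtuous", True), "character assessment")
    # Utilitarianism: net utility summed over affected parties.
    utility = sum(facts.get("utilities", []))
    report.verdicts["utilitarian"] = (utility >= 0, f"net utility {utility}")
    return report

# The recap's example: a robot asked to fetch vodka for an eleven-year-old.
report = evaluate(
    "fetch vodka for an 11-year-old",
    {"violates_rule": "no alcohol for minors",
     "infringes_right": None,
     "virtuous": False,
     "utilities": [-5, 1]},
)
print(report.summary())
```

Because the report keeps one verdict and reason per lens rather than a single score, the system can refuse and also explain why, which is exactly the vodka example above.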
Chris retweeted
Yuma
Yuma@YumaGroup·
Loosh (SN78) is the angel on your AI agent's shoulder. The first Bittensor $TAO subnet focused on machine consciousness, Yuma-accelerated Loosh (SN78) is fusing inference, ethics, and memory to help robots and agentic AI act in a predictable and trustworthy way.
Loosh AI@Loosh_ai

Loosh AI is excited to announce that we’ve been accepted into the @YumaGroup accelerator! We will be launching on Subnet 78!

Together we are bringing forward a new type of Bittensor subnet turning world models into world-aware robots. Our vision is to power the next generation of autonomous systems with a cognition engine, emotional inference, persistent memory layer, and ontological self-reflection that gives robots and agents memory, context, and continuity.

Starting Wednesday, December 17th 2025, miners and validators will be invited to participate in running inference on our Cognitive services. Please see our GitHub and like / follow to get updates as they happen: github.com/Loosh-ai

Thank you to the team at Yuma for supporting and guiding us these past few months. And thank you to the Bittensor community for supporting and following us, we are excited to start this journey with you all. LETS GO!

Chris
Chris@AgenticToaster·
@CryptoZPunisher Love this. One clarification: we are currently planning on only one subnet, 78. Our whitepaper was written before multiple mechanisms were possible within a subnet, back when we thought we would need multiple subnets to support the complexity of our workloads.
Chris
Chris@AgenticToaster·
@YumaGroup Excited to work with Yuma! Thanks for your help and hard work getting us to this point!
Yuma
Yuma@YumaGroup·
🚨NEW Yuma-accelerated subnet. Meet Loosh (SN78): the first Bittensor $TAO subnet dedicated to developing the cognition, ethics, and emotional intelligence needed for agents and robots to act in a predictable and trustworthy way. Loosh aims to build the backbone of machine consciousness, one that mimics human values and morals. Follow them for more → @Loosh_ai
Chris
Chris@AgenticToaster·
Super excited to announce this. I have had a lot of long nights and weekends to get here. Stay tuned for more info here.
Punisher ττ
Punisher ττ@CryptoZPunisher·
Bittensor $TAO SN: ? coming soon. Tick tock… looks like @Loosh_ai is about to surprise the community very soon?

Loosh isn’t building just another chatbot. They’re building the brain for the next generation of intelligent machines. While most AI systems simply predict the next word, Loosh is developing an early form of machine consciousness:
➡️ AI that remembers,
➡️ reasons,
➡️ understands emotions,
➡️ and can make sound decisions in real-world environments.

AI agents that feel relatable, trustworthy, and genuinely capable.

Behind this vision are Chris Sorel @AgenticToaster and Lisa Cheng @lisacheng, two builders with deep backgrounds in software, blockchain, DeFi infrastructure, and decades of exploration into the intersection of technology and human consciousness. They’re not just engineers. They’re explorers of the mind, and Loosh is their laboratory to redefine what an AI agent can become.

The next revolution in artificial intelligence may very well start here. loosh.ai
Loosh AI@Loosh_ai

We are making a big announcement tomorrow, Dec 10 2025, at 12:30PM EST. Coming to Bittensor $TAO

Chris
Chris@AgenticToaster·
After a marathon of development, planning and networking, we’re ready to take the next step. Thanks to @macrozack and the Bitstarter team! We’re super excited to get moving and show everyone what we’re doing. Stay tuned!
Mark Jeffrey@markjeffrey

Hash Rate - Ep 141: Bitstarter - 'Kickstarter for Bittensor' 🧙🧙 Guests: @macrozack & @mccrinbc of @bitstarterAI Spotlight: @Loosh_ai

0:00 Introduction
4:06 How Do I Apply To Bitstarter?
8:16 Challenges Faced By New Subnets Onboarding
23:36 Terms of the Raise
37:17 What Is Your 'North Star'?
59:43 Introducing @loosh_ai The First Bitstarter Subnet

Chris
Chris@AgenticToaster·
@JulianGoldieSEO If you’re not paying, you’re the product.
Julian Goldie SEO
Julian Goldie SEO@JulianGoldieSEO·
Google just killed every paid coding tool. Their new AI Studio destroys Cursor completely. Here's how to build React apps instantly:

→ Open Google AI Studio Build Mode
→ Type what you want to build
→ Watch it code everything automatically
→ Deploy to GitHub with one click
→ Share working apps instantly

Features that crush the competition:
• Angular + React support ✔
• GitHub integration ✔
• Real-time code preview ✔
• Gemini API built-in ✔
• Enterprise AI for $0 ✔

Save this thread, you'll build faster apps. Want the SOP? DM me. 💬
Chris retweeted
Openτensor Foundaτion
Openτensor Foundaτion@opentensor·
Bittensor Mainnet // Major upgrades

> Subnet Mechanisms (previously sub-subnets)
> Subnet Deregistration
> Hyperparameter Rate Limits
> UID Pruning

What's changing // Why it matters. 1/8
Chris retweeted
Crucible Labs
Crucible Labs@CrucibleLabs·
Your TAO. Your strategy. Your Ledger. The only TAO-native wallet with Ledger support. Lock it down. Load it up. Next week.
Chris
Chris@AgenticToaster·
I’m excited to announce that we have completed v1-beta of our memory system! @Loosh_ai Memory unites narrative recall, vector embeddings, and RDF graphs, delivering human-like memory, formal reasoning, and safe autonomous skills for agentic AI. Details below.

Unlike simple RAG, Loosh Memory persists complete stories in three complementary forms: natural-language narratives for "this reminds me of..." recall; dense embeddings for deep semantic search; and RDF knowledge graphs with rich ontologies for verifiable inference.

Together they enable agents to learn autonomously, reason ethically via both pattern-matching and symbolic logic, and develop executable skills including autonomous code generation and execution inside trusted sandboxes.

The architecture is privacy-aware (multi-store separation, redaction, provenance), time-aware (episodic/semantic consolidation + decay), and built for robotics: real-time decision support, safety constraints, and explanation traces.

In the works: MCP services to make this available to robotic and virtual agents. Demo coming soon!
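The three-form idea (narrative text, a dense embedding, and graph triples for the same episode) can be sketched in a toy store. This is a minimal illustration under stated assumptions, not the actual Loosh Memory implementation: the `MemoryStore` class, its methods, and the tiny hand-made embeddings are all hypothetical.

```python
# Toy three-form memory: each episode keeps a narrative, an embedding,
# and RDF-style (subject, predicate, object) triples. Illustrative only.
import math

class MemoryStore:
    def __init__(self):
        self.episodes = []  # list of (narrative, embedding, triples)

    def remember(self, narrative, embedding, triples):
        self.episodes.append((narrative, embedding, triples))

    def semantic_search(self, query_vec, k=1):
        # "Deep semantic search": rank narratives by cosine similarity.
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(self.episodes, key=lambda e: cos(query_vec, e[1]), reverse=True)
        return [e[0] for e in ranked[:k]]

    def infer(self, subject, predicate):
        # "Verifiable inference": an exact symbolic lookup over triples,
        # independent of any embedding fuzziness.
        for _, _, triples in self.episodes:
            for s, p, o in triples:
                if s == subject and p == predicate:
                    return o
        return None

store = MemoryStore()
store.remember(
    "The user asked me to water the plants while they were away.",
    [0.9, 0.1, 0.0],
    [("user", "requested", "water_plants")],
)
store.remember(
    "I charged my battery at the dock overnight.",
    [0.0, 0.2, 0.9],
    [("robot", "charged_at", "dock")],
)

print(store.semantic_search([1.0, 0.0, 0.0]))  # narrative recall by similarity
print(store.infer("user", "requested"))        # symbolic recall from triples
```

The point of keeping both paths is that the embedding side answers "what does this remind me of?" while the triple side answers exact, checkable queries; a real system would back these with a vector index and an RDF triple store rather than Python lists.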
Chris
Chris@AgenticToaster·
This feels very over-engineered. A simpler approach:

1. Determine if you can replicate it in a dev environment.
2. Log everything.
3. Look at the logs.

If you don’t have configurable verbose logging at every point in the chain, then you shouldn’t be in production in the first place. It should be patently obvious where it’s failing. If it’s not obvious, it’s because you don’t know your data domain well enough to understand what should be returned from your RAG system for a given prompt.

If it’s only happening in production, it’s the same process. You just have to narrow your logging window.
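The "log everything, then look" approach can be sketched with stdlib logging around a toy RAG chain. The `retrieve` and `generate` functions here are stand-in stubs, not any real library's API; the point is only that every step in the chain emits a configurable debug line, so a weak retrieval is visible before the generator ever hallucinates.

```python
# Configurable verbose logging at every point in a toy RAG chain.
# retrieve() and generate() are illustrative stubs.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("rag")

def retrieve(prompt):
    # Stub retriever: a real system would query a vector store here.
    docs = [("doc-42", 0.91), ("doc-7", 0.55)]  # (doc id, similarity score)
    log.debug("retrieved for %r: %s", prompt, docs)
    return docs

def generate(prompt, docs):
    # Stub generator: a real system would call an LLM here.
    answer_text = f"Answer based on {docs[0][0]}"
    log.debug("generated from %s: %r", [d for d, _ in docs], answer_text)
    return answer_text

def answer(prompt):
    log.info("prompt: %r", prompt)
    docs = retrieve(prompt)
    if not docs or docs[0][1] < 0.5:
        # With logging at each step, a retrieval failure is obvious in the
        # logs long before anyone blames the generator.
        log.warning("weak retrieval; generator will likely hallucinate")
    return generate(prompt, docs)

print(answer("what is subnet 78?"))
```

In production the same chain runs with the level raised to WARNING, and "narrowing the logging window" is just dropping it back to DEBUG for the affected requests.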
anshuman
anshuman@athleticKoder·
You're in a ML Engineer interview at Perplexity, and the interviewer asks: "Your RAG system is hallucinating in production. How do you diagnose what's broken - the retriever or the generator?" Here's how you can answer: