Thomas Eastham

@easthambuilds

125 posts

AI enabled mindset and re-education, Keto / Carnivore health, Economics / Finance

Earth.Planet · Joined December 2023
434 Following · 91 Followers
Pinned Tweet
Thomas Eastham
Thomas Eastham@easthambuilds·
@jliemandt My family has applied to join alpha for our child and we are working on AI learning tools for adults. Those that are successful in the AI future will never stop learning and adapting. That is why we are building openmobius.ai 👇 x.com/DanielEastham3…
Daniel Eastham@DanielEastham3

The "static course" is dead. If you’re learning AI from a pre-recorded video, you’re already behind. We’re building openmobius.ai because the industry moves too fast for "fundamentals" to stay fixed. It is a self-evolving, agent-driven education loop where the project is the curriculum. A collaboration between father and son: @easthambuilds @DanielEastham3

English
0
0
4
85
Thomas Eastham
Thomas Eastham@easthambuilds·
@aakashgupta @alojohhardcore looks like Xiaomi is punching above its weight in the AI space. I wonder how they plan to integrate that into their other products.
English
1
0
0
500
Aakash Gupta
Aakash Gupta@aakashgupta·
The entire AI industry spent a week convinced DeepSeek had secretly launched V4. Reuters reported it. Developers debated it. OpenRouter usage charts broke. It was Xiaomi.

A smartphone and electric vehicle company just shipped a 1-trillion-parameter model that topped the world's largest API aggregation platform, and nobody guessed the origin because the model was too good to be associated with a hardware company.

The stealth launch as "Hunter Alpha" on March 11 was the most elegant product validation in recent AI history. No brand, no attribution, no expectations. Just raw performance. The model processed over 1 trillion tokens in 8 days. Developers organically chose it over every labeled frontier model on the platform. When Reuters tested the chatbot, it identified itself only as "a Chinese AI model primarily trained in Chinese" with a May 2025 knowledge cutoff, the exact same cutoff DeepSeek reports.

The person behind this is Luo Fuli. Born in 1995. Eight papers at ACL as a graduate student at Peking University. Alibaba DAMO Academy. Then DeepSeek, where she co-developed V2 and contributed to R1. Lei Jun reportedly offered tens of millions of yuan to recruit her. She joined Xiaomi in November 2025. Four months later, she's shipping a model that benchmarks alongside Claude Sonnet 4.6 and GPT-5.2 at one-fifth the API cost.

The detail that tells you everything about how this team operates: when Luo first experienced a complex agentic scaffold, she tried to convince the MiMo team to adopt it. They resisted. So she issued a mandate: anyone on the team with fewer than 100 conversations with the system by tomorrow can quit. They all stayed. The imagination converted into research velocity.

The architectural bets matter. Hybrid Attention for long-context efficiency. MTP inference for low latency. 1M context window. 42B activated parameters out of 1T total. These are infrastructure decisions optimized for agents that run autonomously for hours, not chatbots that answer one question at a time.

Pricing: $1/$3 per million tokens up to 256K context. $2/$6 for 256K to 1M. Claude Sonnet 4.6 costs roughly 5x that. Xiaomi's shares rose 5.8% on the announcement.

The real DeepSeek V4 still hasn't shipped. The model everyone mistook for it already has a trillion tokens of real-world usage data.
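The pricing gap in the thread is easy to make concrete. A minimal sketch of the arithmetic, assuming the quoted sub-256K rates ($1 input / $3 output per million tokens for MiMo-V2, and "roughly 5x" that for a Claude-class model); the workload size is a made-up illustration, not a real benchmark:

```python
def api_cost(in_tokens, out_tokens, in_rate, out_rate):
    """Cost in dollars; rates are $ per million tokens."""
    return (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

# Hypothetical long-running agent workload: 30M input tokens, 3M output tokens.
mimo = api_cost(30_000_000, 3_000_000, 1.0, 3.0)       # $39.00
claude = api_cost(30_000_000, 3_000_000, 5.0, 15.0)    # $195.00 (5x rates)

print(f"MiMo-V2: ${mimo:.2f}  Claude-class (5x): ${claude:.2f}")
```

At these assumed rates the ratio is exactly 5x regardless of workload mix, which is why the difference matters most for agents that burn tokens for hours rather than single chat turns.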
Fuli Luo@_LuoFuli

MiMo-V2-Pro & Omni & TTS is out. Our first full-stack model family built truly for the Agent era.

I call this a quiet ambush — not because we planned it, but because the shift from Chat to Agent paradigm happened so fast, even we barely believed it. Somewhere in between was a process that was thrilling, painful, and fascinating all at once.

The 1T base model started training months ago. The original goal was long-context reasoning efficiency. Hybrid Attention carries real innovation, without overreaching — and it turns out to be exactly the right foundation for the Agent era. 1M context window. MTP inference for ultra-low latency and cost. These architectural decisions weren't trendy. They were a structural advantage we built before we needed it.

What changed everything was experiencing a complex agentic scaffold — what I'd call orchestrated Context — for the first time. I was shocked on day one. I tried to convince the team to use it. That didn't work. So I gave a hard mandate: anyone on MiMo Team with fewer than 100 conversations tomorrow can quit. It worked. Once the team's imagination was ignited by what agentic systems could do, that imagination converted directly into research velocity.

People ask why we move so fast. I saw it firsthand building DeepSeek R1. My honest summary:
— Backbone and Infra research has long cycles. You need strategic conviction a year before it pays off.
— Posttrain agility is a different muscle: product intuition driving evaluation, iteration cycles compressed, paradigm shifts caught early.
— And the constant: curiosity, sharp technical instinct, decisive execution, full commitment — and something that's easy to underestimate: a genuine love for the world you're building for.

We will open-source — when the models are stable enough to deserve it.

From Beijing, very late, not quite awake.

English
39
160
1.4K
165.3K
Thomas Eastham
Thomas Eastham@easthambuilds·
I will keep buying mac token processing power. realistically 10+yr life.... yes, nvidia hardware can generate tokens faster but I don't want the noise / heat. If my machines are smart and a little slow I can work around that with scheduling. Apple unified memory rules 🚀
AJ Investment Research@alojoh

There is no obvious limit to token demand. The more compute someone has the more everything else (talent, data) commoditises. AI hardware is the endgame.

English
0
0
0
9
Thomas Eastham retweeted
Michael Saylor
Michael Saylor@saylor·
Saylor Academy is now Saylor University. The Florida Department of Education has granted @saylordotorg university status—marking a major milestone in our mission to provide free, world-class higher education for all.
English
606
1.4K
11.8K
403.8K
Thomas Eastham retweeted
Shadow
Shadow@4shadowed·
@bradmillscan It’s not made for directly coding, spin up a codex and point it at your configs like our docs say to
English
3
1
9
564
Thomas Eastham
Thomas Eastham@easthambuilds·
nemoclaw, nvidia's new version of openclaw just released??? curl error 404. server crashed?
English
0
0
1
54
Thomas Eastham retweeted
Chris Koerner
Chris Koerner@mhp_guy·
High school should teach:
- Prompt engineering (it's a skill now)
- Real Estate 101
- Types of biases
- How to have hard conversations
- Vibecoding basics
- Entrepreneurship
- Serving others
- Sales tips
- Handling rejection
- Compounding (not just money)
- Automating your work (Zapier, Make)
- Opportunity cost
- Bias for action
English
58
76
765
26.8K
Thomas Eastham retweeted
vittorio
vittorio@IterIntellectus·
this is actually insane

> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”

one man with a chatbot, and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I don't think people realize how good things are going to get
Séb Krier@sebkrier

This is wild. theaustralian.com.au/business/techn…

English
2.5K
19.9K
117.9K
17.3M
Thomas Eastham
Thomas Eastham@easthambuilds·
@SullyOmarr Serious answer: My son and I are building a project that solves the problem of trying to learn AI while the information is changing so fast it is hard to keep up! openmobius.ai 👇
Daniel Eastham@DanielEastham3

The "static course" is dead. If you’re learning AI from a pre-recorded video, you’re already behind. We’re building openmobius.ai because the industry moves too fast for "fundamentals" to stay fixed. It is a self-evolving, agent-driven education loop where the project is the curriculum. A collaboration between father and son: @easthambuilds @DanielEastham3

English
0
0
0
83
Sully
Sully@SullyOmarr·
serious question: how does everyone keep up with all the new releases? basically every day i open this app and there are 5 new agents/clis/tools that I “need to try”
English
214
4
265
39K
Thomas Eastham
Thomas Eastham@easthambuilds·
@bradmillscan I’ve had no trouble with the latest openclaw versions and gpt5.4. To me it is the best value right now for most people who are using openclaw to learn
English
0
0
2
266
Brad Mills 🔑⚡️
Brad Mills 🔑⚡️@bradmillscan·
My weekly Opus 4.6 has reset. After being forced to use ChatGPT for most of last week ... omg the difference between Opus and GPT is night & day. I can't stand working with a GPT powered claw. The personality is extremely triggering and it gets into stalled loops constantly.
English
39
1
97
9.4K
Thomas Eastham retweeted
Kiaran Ritchie
Kiaran Ritchie@kiaran_ritchie·
I don't see how Anthropic, OpenAI or any of the model providers have any hope of defending their moats. And consequently, I think they're going to get wiped out. Right now, in early 2026 they have a meaningful advantage in terms of model capability. But far cheaper and open source models are not far behind. How long can they maintain a meaningful advantage? For the vast majority of use cases, we don't actually need much higher intelligence. It doesn't take 140 IQ to automate Turbotax or powerpoint. Eventually we will be saturated in cheap, local models that are "good enough". Of course some scientific labs and frontier research will always want the latest and greatest. But that market is orders of magnitude smaller than these company valuations can justify. What am I missing?
English
556
54
1.3K
251.7K
Thomas Eastham retweeted
liemandt
liemandt@jliemandt·
This means the world. Credit to @tylercowen for the idea that stuck with me before I became a principal: the highest-return thing a teacher can do is raise a student’s expectations of themselves. We built Alpha School around that exact idea.
Rhett Jones@rhettjoneslore

@jliemandt is literally *the* reason I actually have ambition in life.

English
3
15
338
41.3K
Thomas Eastham
Thomas Eastham@easthambuilds·
@mfranz_on Big benefit of OpenAI is support for using gpt5.4 as default openclaw model on the $20/m subscription plan. Also I got tired of the rate limits. For basic coding, codex and Claude code seem about the same to me.
English
0
0
1
105
Marco Franzon
Marco Franzon@mfranz_on·
Everybody is using codex while I am still using claude code. Am I missing something?
English
56
2
49
12.9K