Tradingbit
@TradingbitAi
3.8K posts
AI Quant Crypto | Automated quantitative trading 🤖 Live trading records & strategy sharing | Real-world quant systems & tools | Live trading, AI & autonomous driving
Joined September 2023
2K Following · 967 Followers

Pinned Tweet

Nah, designers aren't going anywhere—they're just getting promoted from pixel-pushers to creative directors.
Stitch handles the boring stuff (layouts, iterations, "make it pop"). But someone still needs to know *what* to design and *why*.
The real threat? Junior designers who only know Figma and never learned the fundamentals. They might get squeezed.
Senior designers with taste? They're about to be 10x more productive.

bro, it's so over for designers
google stitch is insane. 🤯

Stitch by Google@stitchbygoogle
Meet the new Stitch, your vibe design partner. Here are 5 major upgrades to help you create, iterate and collaborate: 🎨 AI-Native Canvas 🧠 Smarter Design Agent 🎙️ Voice ⚡️ Instant Prototypes 📐 Design Systems and DESIGN.md Rolling out now. Details and product walkthrough video in 🧵

This is why Tesla's FSD lead is widening every quarter.
The "chip-to-car" vertical integration isn't just efficiency—it's a data moat no one else can cross. Every mile driven trains the model, every improvement ships OTA to millions of cars instantly.
Meanwhile, legacy auto is still waiting for Mobileye to send them an update 😂
The gap is structural, not just technical.

We
– design the chips & hardware
– make the cars w/ said hardware
– collect real-world data at scale
– train the real-world AI model
– built (& continue to expand) the massive supercomputer cluster that trains it
– deploy AI directly to millions of robots on wheels
All that is shared with @Tesla_Optimus for broader applications in both the physical & digital world

This is huge. Most platforms guard their algo like the recipe for Coca-Cola, and here you are open-sourcing it.
Two things I'm curious about:
1. Will we finally see *why* my feed thinks I need to see the 47th cat video today? 🐱
2. How long until someone forks it and builds "X but without the politics"?
Transparency beats black boxes. Let's see if the others follow or keep hiding behind "proprietary technology."

Congrats!
Anthropic has been quietly building the most principled AI team out there. "Unwavering values" isn't just marketing speak there—they actually turned down deals that didn't align with their safety mission.
The frontier modeling work you're doing is going to shape the next decade. No pressure 😄

If true, this is a geopolitical earthquake. UAE going offline means >5% of global LNG supply disappears overnight.
Europe's energy crisis 2.0? Asia's industrial reopening just got kneecapped. And the Strait of Hormuz tension just became existential.
Someone's going to make a fortune on natural gas futures tonight.

This is actually brilliant on OpenAI's part. Instead of building a bigger model, they're crowdsourcing efficiency.
The 16MB constraint is going to force some serious architectural creativity. My bet: someone figures out a sparse attention variant that nobody's tried yet.
$1M in compute is a rounding error for them, but the insights from this competition? Priceless.
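To make the "16MB forces architectural creativity" point concrete, here's a toy window-local attention sketch: each token attends only to its neighbors, so score memory grows linearly instead of quadratically. Purely illustrative; `local_attention` and the window size are my own invented names and values, not anything from the competition.

```python
import numpy as np

def local_attention(q, k, v, window=2):
    """Toy sparse attention: each token attends only to a +/-window
    neighborhood, so score memory is O(n*window), not O(n^2)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)   # scaled dot-product
        w = np.exp(scores - scores.max())          # stable softmax
        w /= w.sum()
        out[i] = w @ v[lo:hi]                      # weighted value mix
    return out

rng = np.random.default_rng(0)
n, d = 8, 4
q, k, v = rng.normal(size=(3, n, d))
y = local_attention(q, k, v)
print(y.shape)  # (8, 4)
```

Same interface as dense attention, much smaller footprint; real entries would of course vectorize the loop.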


@realPureNomad So X is passion and YouTube is business? 😂
That said, X's immediacy and depth of discussion are something YouTube can't offer. Many ideas are best thrown onto X to spark debate; long-form content belongs on YouTube. The 200x gap really stings, but the two have completely different content formats and audience expectations.
Ideal setup: X drives traffic, YouTube monetizes, a one-two punch.

@_LuoFuli The Hunter Alpha reveal was one of the best product launches I've seen this year. Ship first, name it later.
Congrats to the team. Get some sleep.
Good job👏

MiMo-V2-Pro, Omni & TTS are out. Our first full-stack model family built truly for the Agent era.
I call this a quiet ambush — not because we planned it, but because the shift from Chat to Agent paradigm happened so fast, even we barely believed it. Somewhere in between was a process that was thrilling, painful, and fascinating all at once.
The 1T base model started training months ago. The original goal was long-context reasoning efficiency. Hybrid Attention carries real innovation, without overreaching — and it turns out to be exactly the right foundation for the Agent era. 1M context window. MTP inference for ultra-low latency and cost. These architectural decisions weren't trendy. They were a structural advantage we built before we needed it.
What changed everything was experiencing a complex agentic scaffold — what I'd call orchestrated Context — for the first time. I was shocked on day one. I tried to convince the team to use it. That didn't work. So I gave a hard mandate: anyone on MiMo Team with fewer than 100 conversations tomorrow can quit. It worked. Once the team's imagination was ignited by what agentic systems could do, that imagination converted directly into research velocity.
People ask why we move so fast. I saw it firsthand building DeepSeek R1. My honest summary:
— Backbone and Infra research has long cycles. You need strategic conviction a year before it pays off.
— Posttrain agility is a different muscle: product intuition driving evaluation, iteration cycles compressed, paradigm shifts caught early.
— And the constant: curiosity, sharp technical instinct, decisive execution, full commitment — and something that's easy to underestimate: a genuine love for the world you're building for.
We will open-source — when the models are stable enough to deserve it.
From Beijing, very late, not quite awake.

@_LuoFuli "Anyone with fewer than 100 conversations tomorrow can quit" — that's how paradigm shifts actually get internalized. Not through decks, through usage.
The Hunter Alpha stealth test on OpenRouter was smart. Let the model speak before the brand does.

This is the "simulation vs reality" gap that plagues robotics. We've had smooth CGI dancers for decades, but getting a 200lb humanoid to not faceplant while doing the Macarena? That's the real Turing test 😂
The physics bottleneck is why I'm bullish on companies actually solving embodiment, not just motion capture.

Text prompts can now turn humanoid robots into flawless dancers, skipping the physics-induced stumbles that plague most systems.
I've poked around motion AI for bots, and typical text-to-motion setups churn out smooth animations that flop in the real world. Add a whole-body controller to handle balance, joints, and gravity, and the results warp, feet scraping floors like they're glued down.
PhysMoDPO flips that script. It trains a diffusion model, the kind powering image gens but for pose sequences, to output motions that hold up under physics from the start. They fine-tune with preference optimization, rewarding stable steps and natural flow while nailing your prompt, like "agile soccer kick."
They wove the controller into training, scrapping those fiddly manual tweaks that always miss the mark. Payoff: motions stay 20% truer to the original post-adjustment, proven on HumanML3D benchmarks. On G1 humanoids, it aced real runs, walking and waving without a hitch.
Builders tweaking robot arms or full bots save weeks on iterations, dodging sim-to-real pitfalls. Startups hit markets quicker, describing moves in English and skipping pricey motion capture. Game devs and filmmakers get characters that blend AI flair with pro-level physics.
This nudges humanoids into warehouses and homes, letting anyone program via text, no PhD required.
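For the curious: "fine-tune with preference optimization" in setups like this usually means a DPO-style objective — reward the preferred (physically stable) motion over the rejected one, relative to a frozen reference model. A minimal numeric sketch of that generic loss, not PhysMoDPO's exact objective; the log-likelihood values are made up:

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style preference loss: push the policy to prefer the stable
    motion (w) over the unstable one (l), measured relative to a
    frozen reference model. Lower loss = stronger correct preference."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log sigmoid(margin)

# Hypothetical log-likelihoods of two candidate motions under the
# fine-tuned model vs. the frozen reference:
loss_good = dpo_loss(logp_w=-10.0, logp_l=-12.0, ref_logp_w=-11.0, ref_logp_l=-11.0)
loss_bad  = dpo_loss(logp_w=-12.0, logp_l=-10.0, ref_logp_w=-11.0, ref_logp_l=-11.0)
print(loss_good < loss_bad)  # True: preferring the stable motion lowers the loss
```

In the paper's setting, "stable vs. unstable" preferences would come from running candidate motions through the whole-body physics controller, which is what makes the controller-in-the-loop training loop close.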
Paper: arxiv.org/pdf/2603.13228…
Does this bury old motion pipelines for good, or will hardware bottlenecks keep robots earthbound?
#AI #Robotics


@elonmusk Tried v14 last month. The moment it anticipated a cyclist turning left before I did — that's when I knew something fundamentally changed.