Jonathan Filbert

2.3K posts

@jonathanfilbert

frontend + ai @ tiktok | OSS @lynxJS_org | built teams at @gojektech @tokopedia @pintuid | creating content https://t.co/RZ1W6cIOsL | views are my own.

📩 contact at jofil.id · Joined March 2010
545 Following · 909 Followers
Pinned Tweet
Jonathan Filbert @jonathanfilbert
Another Community Win for @LynxJS_org! The official React Lynx Hooks library - ReactLynxUse - has a beautiful website powered by @rspack_dev's Rspress 2.0 🚀
✅ AI-friendly SSG-MD for each page
✅ English & 简体中文 support
✅ Open Source
Come try! 👇 lynx-community.github.io/reactlynx-use/
[2 images attached]
Bitter Gourd @BitterGourd1020

🎉 New Website Live for reactlynx-use! We are thrilled to announce that the official website for reactlynx-use is now live! 🔗 Check it out: lynx-community.github.io/reactlynx-use/ A huge shoutout to @jonathanfilbert for his incredible work on building this site using Rspress 2.0.

2 replies · 2 reposts · 16 likes · 6.2K views
Jonathan Filbert @jonathanfilbert
@Ayreiiii Serious question, as someone who's been out of the ROM hacking community for a while (my last ROM hack was Dark Rising in 2016): how should one keep up with the releases of fan projects? Is pokecommunity still the go-to place?
2 replies · 0 reposts · 0 likes · 455 views
Jonathan Filbert @jonathanfilbert
But there's no solution for this right now, in my opinion. It's either you "invest" a few hrs every day on X, or you get behind.
0 replies · 0 reposts · 0 likes · 14 views
Jonathan Filbert @jonathanfilbert
First, Mimo V2 and now MiniMax M2.7 in LESS THAN 24 hours 🤯🤯 This is why X is important. A lot can happen in 24h 🤙
Artificial Analysis @ArtificialAnlys

MiniMax has released MiniMax-M2.7, delivering GLM-5-level intelligence for less than one third of the cost.

MiniMax-M2.7 from @MiniMax_AI scores 50 on the Artificial Analysis Intelligence Index, an 8-point improvement over MiniMax-M2.5, which was released one month ago. This is driven by stronger performance on real-world agentic tasks and reduced hallucinations. MiniMax-M2.7 is now ahead of MiMo-V2-Pro (Reasoning, 49) and Kimi K2.5 (Reasoning, 47), and equivalent to GLM-5 (Reasoning, 50) while using 20% fewer output tokens and costing less than a third as much to run. MiniMax-M2.7 is a reasoning-only model and maintains the same per-token pricing as MiniMax-M2.5.

Key takeaways:
➤ Strong performance on real-world agentic tasks: MiniMax-M2.7 achieves a GDPval-AA Elo of 1494, a significant improvement from MiniMax-M2.5 (1203) and ahead of MiMo-V2-Pro (Reasoning, 1426), GLM-5 (Reasoning, 1406), and Kimi K2.5 (Reasoning, 1283). It remains behind frontier models such as GPT-5.4 (xhigh, 1667) and Claude Opus 4.6 (Adaptive Reasoning, max effort, 1606).
➤ Reduced hallucinations: MiniMax-M2.7 scores +1 on the AA-Omniscience Index, up from MiniMax-M2.5 (-40). This is competitive with GPT-5.2 (xhigh, -1) and GLM-5 (Reasoning, +2), and well ahead of Kimi K2.5 (Reasoning, -8). The improvement from M2.5 is purely driven by reduced hallucinations, meaning the model is more likely to abstain from answering when it doesn't know the answer, rather than guessing. M2.7 achieves a hallucination rate of 34%, lower than Claude Sonnet 4.6 (Adaptive Reasoning, max effort, 46%) and Gemini 3.1 Pro Preview (50%).
➤ Gains across most evaluations compared to MiniMax-M2.5: Outside of the GDPval-AA and AA-Omniscience improvements noted above, MiniMax-M2.7 improves in HLE (+9 p.p.), TerminalBench Hard (+5 p.p.), SciCode (+4 p.p.), IFBench (+4 p.p.), GPQA (+3 p.p.), and LCR (+3 p.p.). We saw a notable regression in τ²-Bench (-11 p.p.).
➤ Increased token use: MiniMax-M2.7 used ~87M output tokens to run the Artificial Analysis Intelligence Index, up 55% from MiniMax-M2.5 (~56M). It remains more token-efficient than other models such as GLM-5 (Reasoning, 110M) and Kimi K2.5 (Reasoning, ~89M).
➤ Leading cost efficiency: MiniMax-M2.7 cost $176 to run the Artificial Analysis Intelligence Index, maintaining the same $0.30/$1.20 per 1M input/output pricing as M2.5. This places it on the Pareto frontier of our Intelligence vs. Cost chart. For context, GLM-5 (Reasoning) cost $547 at equivalent intelligence, Kimi K2.5 (Reasoning) cost $371, and Gemini 3 Flash Preview (Reasoning) cost $278.

Key model details:
➤ Context window: 200K tokens (equivalent to MiniMax-M2.5).
➤ Pricing: $0.30/$1.20 per 1M input/output tokens (unchanged from MiniMax-M2.5).
➤ Availability: MiniMax first-party API only.
➤ Modality: Text input and output only (no multimodality).
➤ Licensing: MiniMax has not announced whether MiniMax-M2.7 will be open weights. MiniMax-M2.5 is available under the MIT license.
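The cost figures in the quoted post can be sanity-checked with quick arithmetic. This is a back-of-the-envelope sketch: the ~87M output tokens, $176 total, and $0.30/$1.20 per 1M token pricing come from the post, while the implied input-token count is an inference of mine, not a reported number.

```typescript
// Back-of-the-envelope check of the quoted MiniMax-M2.7 benchmark cost.
// Figures from the post: ~87M output tokens, $0.30/$1.20 per 1M
// input/output tokens, $176 total to run the Intelligence Index.

function runCost(inputTokensM: number, outputTokensM: number): number {
  const inputPricePerM = 0.3;  // USD per 1M input tokens
  const outputPricePerM = 1.2; // USD per 1M output tokens
  return inputTokensM * inputPricePerM + outputTokensM * outputPricePerM;
}

const outputOnly = 87 * 1.2;                    // ~$104.40 from output tokens alone
const impliedInputM = (176 - outputOnly) / 0.3; // ~239M input tokens implied

console.log(outputOnly.toFixed(2), impliedInputM.toFixed(0));
```

At these rates the run would be input-heavy (roughly 239M input vs. 87M output tokens), which is typical of agentic benchmarks, though Artificial Analysis does not report the input count directly.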

1 reply · 0 reposts · 0 likes · 73 views
Jonathan Filbert @jonathanfilbert
@fortuneandrich Can anyone recommend a Mandarin Chinese course? Looking for a tutor, preferably one who does private lessons 🤙
0 replies · 0 reposts · 0 likes · 221 views
aquamarine 🔮 @fortuneandrich
hi, I'm opening a German language class, just 8k per session. In this economy, jeez. Please help spread the word, it's been so quiet 😭😭😭😭😭😭😭
[3 images attached]
341 replies · 1.9K reposts · 7.7K likes · 240.8K views
Jonathan Filbert @jonathanfilbert
@OpenRouter @XiaomiMiMo @openclaw Release notes from one of the creators: x.com/_LuoFuli/statu…
Fuli Luo @_LuoFuli

MiMo-V2-Pro & Omni & TTS are out. Our first full-stack model family built truly for the Agent era.

I call this a quiet ambush — not because we planned it, but because the shift from Chat to Agent paradigm happened so fast, even we barely believed it. Somewhere in between was a process that was thrilling, painful, and fascinating all at once.

The 1T base model started training months ago. The original goal was long-context reasoning efficiency. Hybrid Attention carries real innovation, without overreaching — and it turns out to be exactly the right foundation for the Agent era. 1M context window. MTP inference for ultra-low latency and cost. These architectural decisions weren't trendy. They were a structural advantage we built before we needed it.

What changed everything was experiencing a complex agentic scaffold — what I'd call orchestrated Context — for the first time. I was shocked on day one. I tried to convince the team to use it. That didn't work. So I gave a hard mandate: anyone on MiMo Team with fewer than 100 conversations tomorrow can quit. It worked. Once the team's imagination was ignited by what agentic systems could do, that imagination converted directly into research velocity.

People ask why we move so fast. I saw it firsthand building DeepSeek R1. My honest summary:
— Backbone and Infra research has long cycles. You need strategic conviction a year before it pays off.
— Posttrain agility is a different muscle: product intuition driving evaluation, iteration cycles compressed, paradigm shifts caught early.
— And the constant: curiosity, sharp technical instinct, decisive execution, full commitment — and something that's easy to underestimate: a genuine love for the world you're building for.

We will open-source — when the models are stable enough to deserve it.

From Beijing, very late, not quite awake.

0 replies · 0 reposts · 2 likes · 508 views
OpenRouter @OpenRouter
Stealth Model Reveal: Hunter and Healer Alpha are @XiaomiMiMo MiMo-V2-Pro and MiMo-V2-Omni Both models are live now on OpenRouter, and free to use in @OpenClaw via the OpenRouter provider for the next week!
[image attached]
69 replies · 133 reposts · 1.4K likes · 104.5K views
Jonathan Filbert @jonathanfilbert
But Xiaomi is impressive though: a 1M context window for only up to $3/M tokens, slightly larger than Opus 4.6 but way cheaper.
[image attached]
0 replies · 0 reposts · 0 likes · 45 views
Jonathan Filbert @jonathanfilbert
BytePlus' Coding Plan promo for $5 was too good to be true; it was deleted JUST when I was about to subscribe 😂
[image attached]
0 replies · 0 reposts · 0 likes · 164 views
theo @tibudiyanto
if I actually get laid off tomorrow, what company should I even start
28 replies · 0 reposts · 57 likes · 13.6K views
Jonathan Filbert @jonathanfilbert
The @LynxJS_org Team was at the @fossasia conference! 🎉 It's one of the biggest conferences in the world, and Lynx's presence was one of the most well-received. Kudos to @popahqiu! We got tons of questions and ran out of t-shirts 😂 Met our Discord & X community devs in person 🤝 Grateful
[3 images attached]
1 reply · 2 reposts · 6 likes · 179 views
Jonathan Filbert @jonathanfilbert
@jacobandreou Regardless of what others think of him post-Uber, TK has always been, and will always be, one of the GOATs from the Valley.
0 replies · 0 reposts · 0 likes · 327 views
Jacob Andreou @jacobandreou
your favorite founders’ favorite founder
76 replies · 603 reposts · 6.2K likes · 758.4K views
theo @tibudiyanto
rip bandung bondowoso, you would love llms
1 reply · 0 reposts · 4 likes · 647 views
Jonathan Filbert @jonathanfilbert
P.S. We presented a teaser of this at the FOSSAsia Summit earlier this week, and the response was amazing! Hyped! 🚀🚀🚀🚀
Lynx @LynxJS_org

0 replies · 0 reposts · 2 likes · 97 views
Jonathan Filbert @jonathanfilbert
@fulnikura Noo not the GU x Roth and Uniqlo x Clare Waight Keller pieces 😭
0 replies · 0 reposts · 0 likes · 458 views
Neverland @jait_chen
Rstack could offer a unified toolchain experience similar to Vite Plus as well, and huge respect to the Vite Plus team for exploring that direction. We haven't pursued that path for now, mainly because we worry about a few tradeoffs:
1. it makes dependency/version management more coupled, especially around major upgrades in underlying tools
2. it can lead to a single large config file, and if that file imports several third-party packages, it may slow down command startup
3. for task management and build cache, projects like Turborepo and Nx already do a great job, so we'd rather build on top of that ecosystem than reinvent it
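Point 2 in the quoted post (config startup cost) can be illustrated with a small hypothetical sketch: if a tool's config statically imports heavy third-party packages, every CLI invocation pays their load cost, even `--help`; deferring the load with a dynamic `import()` means only the task that needs the plugin pays. The `Plugin` shape and plugin name below are invented for illustration, and `node:zlib` stands in for a heavy third-party dependency.

```typescript
// Hypothetical sketch of the config startup-cost tradeoff (point 2).
// A statically-imported plugin package is loaded on every command;
// a lazily-loaded one is only paid for when its task actually runs.

type Plugin = { name: string; run: (input: string) => number };

// Lazy factory: the dependency (node:zlib, standing in for a heavy
// third-party plugin) is loaded on demand, not at module load time.
async function gzipSizePlugin(): Promise<Plugin> {
  const zlib = await import("node:zlib");
  return {
    name: "gzip-size",
    run: (input) => zlib.gzipSync(input).length,
  };
}

async function main() {
  // A fast path (e.g. `--help`) would return before this line and
  // never trigger the import; the build path pays the cost once.
  const plugin = await gzipSizePlugin();
  console.log(plugin.name, plugin.run("hello hello hello"));
}
main();
```

The same idea applies at config level: exporting plugin factories instead of plugin instances keeps command startup cheap at the price of slightly more ceremony, which is one way to read the tradeoff the post describes.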
5 replies · 11 reposts · 69 likes · 9K views