Doughboy 💸
@far__away__

202 posts
bag chasing through local minima // views my own

San Francisco, CA · Joined August 2020
520 Following · 65 Followers
Doughboy 💸 reposted
Liquid AI@liquidai·
LFM2:3B in space, on Cluster Gate2: ✨ “This image is a highly detailed, close-up view of Earth as seen from space, likely captured by a satellite or space telescope. The Earth is depicted as a large, circular sphere with a predominantly blue hue, indicating the vast oceans that cover most of its surface. The blue is interspersed with swirling white clouds, which are particularly prominent over the landmasses, suggesting the presence of weather systems and atmospheric activity. The overall composition of the image highlights the beauty and complexity of our planet, showcasing the dynamic interplay between the oceans, atmosphere, and landmasses." Congratulations to @DPhiSpace for this incredible milestone! 🌎
DPhi Space@DPhiSpace

We ran an LLM onboard a satellite to describe Earth! The response comes from @liquidai LFM2 - marking the successful commissioning of our mini orbital server. Read the full story: lnkd.in/eej4MnAQ Run your own software in space: software.dphispace.com

1 reply · 13 reposts · 68 likes · 8.2K views
Doughboy 💸 reposted
Ramin@ramin_m_h·
Proud to partner with @MercedesBenz in a multi-year agreement to bring embedded, on-device intelligence to Mercedes-Benz vehicles, first in North America. This marks an important step toward making in-car AI more capable, more responsive, and more useful in everyday driving. At @liquidai, we believe the future of intelligence in the physical world depends on models that are fast, private, efficient, and able to run directly on the hardware already inside the system. In the vehicle, that means advancing speech, language understanding, and reasoning, enabling more natural and robust conversational experiences for drivers and passengers. The software-defined vehicle is one of the most consequential real-world deployments of AI, and Mercedes-Benz has approached it with exactly the rigor this challenge deserves. Proud of what our teams are building together, and excited for the road ahead as we work toward production deployment in the second half of 2026. liquid.ai/press/liquid-a…
Liquid AI@liquidai

We’re entering a multi-year partnership with @MercedesBenz to scale embedded, on-device intelligence for their third- and fourth-generation MBUX. Our goal: to make the driver/vehicle relationship even more natural and effortless. Read more about our partnership: liquid.ai/press/liquid-a…

2 replies · 7 reposts · 71 likes · 6.6K views
Doughboy 💸 reposted
Maxime Labonne@maximelabonne·
LFMs powering speech, language understanding, and reasoning directly inside the vehicle. Very exciting times for edge models!
Liquid AI@liquidai

We’re entering a multi-year partnership with @MercedesBenz to scale embedded, on-device intelligence for their third- and fourth-generation MBUX. Our goal: to make the driver/vehicle relationship even more natural and effortless. Read more about our partnership: liquid.ai/press/liquid-a…

1 reply · 7 reposts · 84 likes · 6.7K views
Doughboy 💸 reposted
Alexander Amini@xanamini·
In the US, the average person spends >5 YEARS (!) of their lifetime sitting behind the wheel of a vehicle. If you include time spent as a passenger, that estimate easily doubles. I'm very proud of this partnership with @MercedesBenz: together we will be making in-car AI more capable, more responsive, and more useful to everyone. A truly AI-native vehicle is the perfect embodiment of the worldwide impact of massively multimodal on-device AI.
Liquid AI@liquidai

We’re entering a multi-year partnership with @MercedesBenz to scale embedded, on-device intelligence for their third- and fourth-generation MBUX. Our goal: to make the driver/vehicle relationship even more natural and effortless. Read more about our partnership: liquid.ai/press/liquid-a…

9 replies · 8 reposts · 27 likes · 4.8K views
Doughboy 💸 reposted
Liquid AI@liquidai·
We’re entering a multi-year partnership with @MercedesBenz to scale embedded, on-device intelligence for their third- and fourth-generation MBUX. Our goal: to make the driver/vehicle relationship even more natural and effortless. Read more about our partnership: liquid.ai/press/liquid-a…
[image]
18 replies · 47 reposts · 224 likes · 41.3K views
Doughboy 💸 reposted
Liquid AI@liquidai·
Today, we release LFM2.5-VL-450M, a vision-language model built for real-time reasoning on edge devices. It processes a 512×512 image and returns structured outputs in ~240ms on-device.
[image]
25 replies · 132 reposts · 1.1K likes · 115.1K views
Doughboy 💸@far__away__·
@boyuan_chen @neural_avb Fine-tuning them over well-defined domains with a relatively small number of samples (order ~10k) leads to great results. At that parameter count the models need hand-holding, but after that they are really great 🚀
0 replies · 0 reposts · 0 likes · 25 views
Boyuan (Nemo) Chen@boyuan_chen·
@neural_avb 1.6B params in the browser on WebGPU is wild. At that size you can embed vision in literally anything - browser extensions, mobile apps, edge devices. Quality is the big question though. Anyone actually tested these on something beyond demos?
1 reply · 0 reposts · 1 like · 102 views
Doughboy 💸 reposted
Liquid AI@liquidai·
Today, we release LFM2.5, our most capable family of tiny on-device foundation models. It’s built to power reliable on-device agentic applications: higher quality, lower latency, and broader modality support in the ~1B parameter class. > LFM2.5 builds on our LFM2 device-optimized hybrid architecture > Pretraining scaled from 10T → 28T tokens > Expanded reinforcement learning post-training > Higher ceilings for instruction following 🧵
[image]
69 replies · 257 reposts · 1.6K likes · 209.1K views
Michelle Fang 🌁@michelleefang·
if you're vibe coding or building over the holidays, i want to gift one of you a 6 month subscription of claude pro to support <3 just drop a comment below. merry christmas!
7.5K replies · 179 reposts · 9K likes · 1.1M views
Doughboy 💸@far__away__·
@Henry_Ndubuaku Very cool, wondering if you've benchmarked this against llama.cpp. Are there perf gains vs GGML?
0 replies · 0 reposts · 0 likes · 42 views
Henry Ndubuaku@Henry_Ndubuaku·
1.6B INT8 VLM by @liquidai on Cactus (YC S25) never exceeds 231MB of peak memory usage at any context size.
1. Cactus is aggressively optimised to run on budget devices with minimal resources, enabling efficiency, putting negligible pressure on your phone, and passing your OS safety mechanisms.
2. Notice how 1.6B INT8 on CPU reaches 95 toks/sec on Apple M4 Pro, faster than your eyes could process. Our INT4 will almost 2x the speed when merged. Expect up to 180 toks/sec decode speed.
3. The prefill speed reaches 513 toks/sec. Our NPU kernels will 5-11x that once merged. Expect up to 2,500-5,500 toks/sec; the time to first token of your large-context prompt will be less than 1 sec.
4. LFM2-1.2B-INT8 in the Cactus compressed format takes only 722MB. This means that with INT4 it will shrink to ~350MB, almost half as much as GGUF, ONNX, ExecuTorch, LiteRT, etc.
5. Once done, we will start recommending 1B models to our users, because your grandma's phone will run them. Stay tuned! github.com/cactus-compute…
7 replies · 13 reposts · 155 likes · 37.4K views
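A quick back-of-envelope check on the quantization numbers above: a model's on-disk weight size scales with parameter count times bits per weight. This is a minimal sketch, not Cactus's actual compressed format (the function name and overhead-free formula are illustrative), but it shows why INT4 is expected to roughly halve the INT8 footprint.

```python
def quantized_size_mb(n_params: float, bits_per_weight: float) -> float:
    """Naive on-disk weight size: parameters * bits-per-weight / 8 bytes, in MB."""
    return n_params * bits_per_weight / 8 / 1e6

# Naive INT8 for a 1.2B-param model is ~1200 MB; Cactus's reported 722 MB
# for LFM2-1.2B-INT8 implies extra compression beyond plain INT8 weights.
int8_mb = quantized_size_mb(1.2e9, 8)
int4_mb = quantized_size_mb(1.2e9, 4)
print(int8_mb, int4_mb)  # 1200.0 600.0
```

Halving bits per weight halves the estimate regardless of format overhead, which matches the tweet's expectation of ~350MB INT4 from a 722MB INT8 artifact.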
Rohan Paul@rohanpaul_ai·
🧮 @Meta released MobileLLM-R1 for edge reasoning. A sub-1B family that claims 2x to 5x gains on math and code versus other open models while staying small enough for constrained devices.
The lineup spans 140M to 950M parameters and targets math, coding, and scientific reasoning rather than broad chat. On MATH500 it reports about 5x the accuracy of OLMo-1.24B and about 2x over SmolLM2-1.7B, and on GSM8K, AIME, and LiveCodeBench it matches or beats Qwen3-0.6B despite far fewer training tokens.
Context length is 4K for base models and 32K after post-training, which helps long problems but raises KV-cache memory needs when tokens pile up. Training uses about 4.2T tokens compared with Qwen3-0.6B at 36T, which is roughly 11.7% of the data to reach similar or better accuracy.
For on-device runs, the small footprint and 4K context help latency and memory, while 32K mode likely needs careful quantization and KV-cache tuning.
Rohan Paul tweet media
3 replies · 3 reposts · 17 likes · 3.6K views
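The data-efficiency figure in the post above checks out with simple arithmetic, using the token counts as stated in the tweet:

```python
# MobileLLM-R1 pretraining tokens vs Qwen3-0.6B, per the post above.
mobilellm_tokens = 4.2e12
qwen3_tokens = 36e12
ratio = mobilellm_tokens / qwen3_tokens
print(f"{ratio:.1%}")  # 11.7%
```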
atlas@creatine_cycle·
your career goal should be to get to the point where you have "a guy" for stuff. nothing is more alpha than recommending a guy.
tax? i know a guy
legal? i have a guy for that
409A valuation? a guy
agentic workflows? believe it or not there's a guy for that
52 replies · 56 reposts · 1.1K likes · 43.9K views
teo@teodorio·
Ok so working 12-hour days and 4-5 on weekends with high context switching across really technical fields is unsustainable. I feel like my body and mental health are breaking down. Quick tips to keep it up?
303 replies · 40 reposts · 1.7K likes · 143.9K views
Doughboy 💸@far__away__·
@neural_avb I've often thought about asking this as an interview question. Surprising how many don't know where their HF models live
0 replies · 0 reposts · 1 like · 182 views
AVB@neural_avb·
I have 90GB more disk space because I ran: rm ~/.cache/huggingface/* 😇
5 replies · 1 repost · 47 likes · 3.4K views
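The blanket `rm ~/.cache/huggingface/*` above reclaims the space but deletes every cached model. A gentler workflow is to measure first and delete selectively: `huggingface_hub` ships `scan_cache_dir()` and the `huggingface-cli delete-cache` command for exactly this. A stdlib-only sketch of the size check looks roughly like the following (the default cache path is an assumption about your setup):

```python
from pathlib import Path

def dir_size_gb(path: Path) -> float:
    """Total size of all regular files under `path`, in GB (0.0 if missing)."""
    if not path.exists():
        return 0.0
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file()) / 1e9

# Default Hugging Face cache; downloaded models live under hub/models--<org>--<name>.
hf_cache = Path.home() / ".cache" / "huggingface"
print(f"HF cache: {dir_size_gb(hf_cache):.2f} GB")
```

Note that `HF_HOME` or `HF_HUB_CACHE` can relocate the cache, so the path above may not match every machine.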
Doughboy 💸 reposted
Liquid AI@liquidai·
Introducing LFM2-VL — our new generation of efficient vision-language models for real-world deployment, from smartphones and laptops to wearables and embedded systems. 🧵
[image]
8 replies · 35 reposts · 211 likes · 644.5K views
Doughboy 💸@far__away__·
@OfficialLoganK RPD for the 2.5 models is quite limited in Tier 1/2 still. Any plans to unleash this soon? 🥵
0 replies · 0 reposts · 1 like · 871 views
Logan Kilpatrick@OfficialLoganK·
Gemini 2.5 Pro is back in the free tier of the API, have a great weekend : )
267 replies · 298 reposts · 5.8K likes · 1.6M views