Liquid AI

508 posts

@liquidai
Build efficient general-purpose AI at every scale.

Cambridge, MA · Joined March 2023
42 Following · 24K Followers

Pinned Tweet
Liquid AI @liquidai ·
Today, we release LFM2.5, our most capable family of tiny on-device foundation models. It’s built to power reliable on-device agentic applications: higher quality, lower latency, and broader modality support in the ~1B parameter class.
> LFM2.5 builds on our LFM2 device-optimized hybrid architecture
> Pretraining scaled from 10T → 28T tokens
> Expanded reinforcement learning post-training
> Higher ceilings for instruction following 🧵
Liquid AI retweeted
Ramin @ramin_m_h ·
AI operating at the edge: space! 👌🏻
Liquid AI@liquidai

AI is beginning to move beyond the clouds… Registration is open for Hack #05: AI in Space, in collaboration with @DPhiSpace. A hackathon exploring what becomes possible when AI operates closer to satellites, orbital systems, and space-based data. For developers, researchers, and builders interested in the future of AI in space.
Register → luma.com/n9cw58h0
Learn more → hackathons.liquid.ai
🚀 Join the conversation → discord.com/channels/13854…

Liquid AI retweeted
Derya Unutmaz, MD @DeryaTR_ ·
Fantastic discussion between my friends Alex and Ramin on why the collaboration between @liquidai and @InsilicoMed is so important in advancing BioAI! Alex also demos real-life examples of using AI models in drug discovery! No doubt that AI will make the world a better place!
Insilico Medicine@InSilicoMeds

NEW Video: The era of Pharmaceutical Superintelligence has arrived. Join Alex Zhavoronkov (Insilico Medicine) and Ramin Hasani (@liquidai) for an exclusive fireside chat and live demo of LFM2-2.6B-MMAI.
This 2.6B-parameter scientific foundation model is a game-changer for drug discovery. Trained via the Science #MMAIGym, it delivers state-of-the-art performance in molecular property prediction and ADMET endpoints, all while running locally on-premise.
Why it matters:
> Data Sovereignty: High-performance AI that keeps proprietary IP behind your firewall, no cloud transmission required.
> Efficiency: Outperforms models 10x its size, reducing compute costs and R&D timelines.
> End-to-End Capability: Supports every stage from initial hit discovery to optimization.
Watch it here: youtube.com/watch?v=WoLyym…
#LiquidAI #InsilicoMedicine #Biotech #OnPremiseAI #DrugDiscovery #DeepTech

Liquid AI retweeted
deterministic_dolphin @deterministic_d ·
Just got this set up on my 2024 M3 MacBook Air with 24 GB of RAM. MLX build. 12 GB total memory footprint with 1-4 second response time for simple prompts. This memory footprint is insane! I didn’t expect any local model to even work half-decently on this Mac. Well done @liquidai
Liquid AI@liquidai

> 385ms average tool selection.
> 67 tools across 13 MCP servers.
> 14.5GB memory footprint.
> Zero network calls.
LocalCowork is an AI agent that runs on a MacBook. Open source. 🧵

Liquid AI retweeted
Lior Alexander @LiorOnAI ·
A 24-billion-parameter model just ran on a laptop and picked the right tool in under half a second. The real story is that tool-calling agents finally became fast enough to feel like software.
Liquid built LFM2-24B-A2B using a hybrid architecture that mixes convolution blocks with grouped query attention in a 1:3 ratio. Only 2.3 billion parameters activate per token, even though the full model holds 24 billion. That sparse activation pattern is why it fits in 14.5 GB of memory and dispatches tools in 385 milliseconds on an M4 Max.
The architecture was designed through hardware-in-the-loop search, meaning they optimized the model structure by testing it directly on the chips it would run on. No cloud translation layer. No API roundtrip. The model, the tools, and your data stay on the machine.
This unlocks three things that were impractical before:
1. Regulated industries can run agents on employee laptops without data leaving the device.
2. Developers can prototype multi-tool workflows without managing API keys or rate limits.
3. Security teams get full audit trails without vendor subprocessors in the loop.
The model hit 80% accuracy on single-step tool selection across 67 tools spanning 13 MCP servers. If this performance holds at scale, two assumptions need updating. First, on-device agents are no longer a battery-life trade-off; they're a compliance feature. Second, the bottleneck in agentic workflows is shifting from model capability to tool ecosystem maturity.
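The arithmetic behind the 14.5 GB figure is worth a quick sanity check. This is a back-of-envelope sketch, not anything from Liquid's documentation: the roughly 4-bit weight quantization it assumes is our guess, not stated in the thread.

```python
# Rough memory estimate for a sparse mixture model like LFM2-24B-A2B.
# Assumption (ours): weights quantized to ~4 bits per parameter.

def weight_memory_gb(total_params_b: float, bits_per_param: float) -> float:
    """All weights must stay resident, even if few activate per token."""
    return total_params_b * 1e9 * bits_per_param / 8 / 1e9

weights_gb = weight_memory_gb(24, 4)  # full 24B weights at ~4 bits
print(f"weights alone: {weights_gb:.1f} GB")
# The reported 14.5 GB footprint ~ weights + KV cache + runtime overhead.

# Per-token compute scales with the 2.3B *active* params, not the 24B total:
print(f"~{24 / 2.3:.1f}x fewer FLOPs per token than a dense 24B model")
```

Under that assumption the 24B weights alone come to about 12 GB, leaving a few GB for KV cache and runtime overhead, which is at least consistent with the reported footprint.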
Liquid AI @liquidai ·
When latency, memory, and privacy are real constraints, they decide whether your agent is a product or just a demo. That’s where LFM2-24B-A2B shines.
Read the blog: liquid.ai/blog/no-cloud-…
LocalCowork is open source and available in our Cookbook: github.com/Liquid4All/coo…
Liquid AI @liquidai ·
An example workflow the agent executed locally: search receipts → OCR → parse vendor/date/items → check duplicates → export CSV → flag anomalies → generate reconciliation report. A real workflow. Running entirely on consumer hardware. youtu.be/WnxxW2jTDgE
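The chained steps in that workflow can be sketched as ordinary local code. A minimal sketch with made-up receipt records: the field names, sample data, and the anomaly threshold are our assumptions for illustration, not LocalCowork's actual implementation (which lives in the Cookbook repo).

```python
import csv
import io
from collections import Counter

# Hypothetical receipt records as OCR parsing might yield them.
receipts = [
    {"vendor": "Acme Office", "date": "2025-11-02", "total": 48.90},
    {"vendor": "Acme Office", "date": "2025-11-02", "total": 48.90},  # duplicate
    {"vendor": "CloudHost",   "date": "2025-11-05", "total": 912.00},
]

# Check duplicates: the same (vendor, date, total) key seen more than once.
keys = [(r["vendor"], r["date"], r["total"]) for r in receipts]
dupes = [k for k, n in Counter(keys).items() if n > 1]

# Flag anomalies: totals far above the batch median (10x rule is ours).
totals = sorted(r["total"] for r in receipts)
median = totals[len(totals) // 2]
anomalies = [r for r in receipts if r["total"] > 10 * median]

# Export CSV (to a string here; a real agent would write a file).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["vendor", "date", "total"])
writer.writeheader()
writer.writerows(receipts)

print(f"{len(dupes)} duplicate group(s), {len(anomalies)} anomaly(ies)")
```

Every stage here is synchronous and file-local, which is the point of the demo: nothing in the pipeline needs a network call.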
Liquid AI retweeted
Liquid AI @liquidai ·
We’re teaming up with @InSilicoMed to create lightweight scientific foundation models for pharmaceutical research. Together, we are building a series of liquid foundation models with state-of-the-art performance across multiple drug discovery subdomains. 💊
Our goal is to push the frontier of drug discovery beyond single-purpose, specialized models and towards foundational generalist models that are useful and capable of ingesting proprietary molecules, assays, and target data entirely within local private instances.
The first model in line is LFM2-2.6B-MMAI, a small model that achieves cloud-scale performance while operating entirely on private infrastructure:
> Molecular optimization: Up to 98.8% success on MuMO-Instruct multi-parameter optimization.
> Affinity prediction: Outperformed GPT-5.1, Claude Opus 4.5, and Grok-4.1 on Insilico’s 2.5M / 689-target benchmark.
> Chemical reasoning: Strong functional-group reasoning (FGBench) and solid 1-step retrosynthesis (ChemCensor).
By combining Liquid AI’s efficient LFM technology with Insilico’s MMAI Gym, a comprehensive training platform with over 1,000 pharmaceutical benchmarks, we observe that on-premise deployment can deliver competitive results across the full spectrum of drug discovery tasks, all in a single system.
These capabilities unlock immediately useful applications for pharmaceutical companies, particularly in high-frequency ADMET screening, medicinal chemistry-facing lead optimization, and retrosynthesis feasibility assessment that prevents wasted experimental effort.