AI at Meta

2.6K posts

@AIatMeta

Together with the AI community, we are pushing the boundaries of what’s possible through open science to create a more connected world.

Joined August 2018
296 Following · 763.5K Followers
Pinned Tweet
AI at Meta @AIatMeta:
🔉 Introducing SAM Audio, the first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts. We’re sharing SAM Audio with the community, along with a perception encoder model, benchmarks and research papers, to empower others to explore new forms of expression and build applications that were previously out of reach. 🔗 Learn more: go.meta.me/568e5d
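For readers who want a feel for how prompt-based separation like this is typically driven from code, here is a minimal sketch. SAM Audio's real interface isn't documented in this post, so the package name, class, and method below are assumptions for illustration only.

# Hypothetical sketch of text-prompted source separation in the style of SAM Audio.
# The sam_audio package, SAMAudio class, and separate() signature are assumed
# names for illustration; consult the official release for the real API.
import torchaudio
from sam_audio import SAMAudio  # assumed import

mixture, sr = torchaudio.load("street_recording.wav")      # any mixed recording

model = SAMAudio.from_pretrained("sam-audio-base")          # assumed checkpoint name
isolated = model.separate(mixture, sample_rate=sr,
                          text_prompt="a dog barking")      # text prompt picks the source

torchaudio.save("dog_barking_only.wav", isolated, sr)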
AI at Meta @AIatMeta:
CHMv2 is already supporting public sector efforts in the United States, Europe, and beyond. By making these advances open source, we aim to accelerate research and inform carbon offsetting, reforestation, and land management decisions globally. 🔗 Read the paper: go.meta.me/9a9e42 🔗 Download the model: go.meta.me/2edd52
AI at Meta @AIatMeta:
We’re announcing Canopy Height Maps v2 (CHMv2), an open source model for high-resolution global forest canopy mapping, developed in partnership with the @WorldResources. CHMv2 leverages our DINOv3 Sat-L vision model, specifically optimized for satellite imagery, to deliver substantial improvements in accuracy, detail, and global consistency. 🔗 Learn more: go.meta.me/70d2e9
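To make the pipeline above concrete, here is a minimal PyTorch sketch of dense canopy-height regression on top of a frozen ViT backbone. The DINOv3 Sat-L checkpoint id and the tiny regression head are assumptions for illustration, not the released CHMv2 architecture.

# Sketch: per-pixel canopy-height regression from ViT patch tokens.
# The backbone checkpoint id is an assumed name; the head is a toy stand-in
# for whatever decoder CHMv2 actually uses.
import torch
import torch.nn as nn
from transformers import AutoModel

class CanopyHeightHead(nn.Module):
    """Project patch tokens to one channel and upsample back to pixel resolution."""
    def __init__(self, embed_dim: int, patch_size: int = 16):
        super().__init__()
        self.proj = nn.Conv2d(embed_dim, 1, kernel_size=1)
        self.patch_size = patch_size

    def forward(self, patch_tokens, grid_hw):
        b, n, c = patch_tokens.shape
        h, w = grid_hw
        feat = patch_tokens.transpose(1, 2).reshape(b, c, h, w)
        return nn.functional.interpolate(
            self.proj(feat), scale_factor=self.patch_size,
            mode="bilinear", align_corners=False)   # (B, 1, H, W) height map in metres

backbone = AutoModel.from_pretrained("facebook/dinov3-vitl16-pretrain-sat493m")  # assumed id
head = CanopyHeightHead(embed_dim=backbone.config.hidden_size)

# A 224x224 satellite tile yields a 14x14 grid of patch tokens from the backbone.
dummy_tokens = torch.randn(1, 14 * 14, backbone.config.hidden_size)
canopy_height_m = head(dummy_tokens, grid_hw=(14, 14))   # shape (1, 1, 224, 224)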
AI at Meta @AIatMeta:
Custom silicon is critical to scaling next-gen AI. We’re detailing the evolution of the Meta Training and Inference Accelerator (MTIA), our homegrown silicon family designed to power the next era of AI experiences. Traditional chip cycles span years, but model architectures change in months. To close this gap, we’ve accelerated MTIA development to release four generations in just two years. See our roadmap and tech specs here: go.meta.me/16336d
AI at Meta @AIatMeta:
@bwood_m Incredible body of work from the team 🚀
Brandon Wood @bwood_m:
UMA-S 1.2 is here! ~50% faster, ~40% more accurate on Open Molecules test set, and expanded data coverage for catalysts (oxides and interfaces), molecules, and polymers! We hope this release addresses a number of items on the collective wish list (definitely not all 😜). 🧵1/6
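For anyone new to these interatomic potentials, they are typically used as an ASE calculator. The sketch below shows that workflow; the fairchem entry points and the "uma-s-1p2" checkpoint string are assumptions based on recent fairchem releases, so check the release notes for the exact names.

# Sketch: single-point energy/forces for a molecule with a UMA calculator via ASE.
# The fairchem import path, checkpoint name, and task_name are assumed; the ASE
# calls (molecule, get_potential_energy, get_forces) are standard.
from ase.build import molecule
from fairchem.core import FAIRChemCalculator, pretrained_mlip  # assumed entry points

predictor = pretrained_mlip.get_predict_unit("uma-s-1p2", device="cpu")  # assumed name
calc = FAIRChemCalculator(predictor, task_name="omol")  # "omol": molecules domain (assumed)

atoms = molecule("H2O")
atoms.calc = calc
print("Energy (eV):  ", atoms.get_potential_energy())
print("Forces (eV/Å):", atoms.get_forces())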
AI at Meta @AIatMeta:
Meta 🤝 AMD
Today we’re announcing a multi-year agreement with @AMD to integrate their latest Instinct GPUs into our global infrastructure. With approximately 6GW of planned data center capacity dedicated to this deployment, we’re scaling our compute capacity to accelerate the development of cutting-edge AI models and deliver personal superintelligence to billions around the world.
Learn more: go.meta.me/220f12
AI at Meta @AIatMeta:
ICYMI: @alexandr_wang spoke at the India AI Impact Summit where he shared Meta’s vision for personal superintelligence and how developers in India are already using AI to solve major societal challenges. See highlights 👇 and then watch his full speech here: youtube.com/live/WgW7cC-kH…
AI at Meta @AIatMeta:
Our team is heading to India this week for the AI Impact Summit & Expo 🇮🇳 Stop by the Meta booth (Exhibition Hall 3, Booth No. 3.7) to meet our team and experience:
📚 Demos of research, including Omnilingual Automatic Speech Recognition (ASR) and SeamlessExpressive
⚡ Lightning talks from experts on how AI is unlocking real-world benefits across language, accessibility and health
👓 Hands-on demos with our latest AI glasses including the Oakley Meta Vanguard
We look forward to seeing you there!
AI at Meta @AIatMeta:
@vllm_project Impressive deep dive! It’s great to see the vLLM team maximizing the GB200’s potential. These kinds of kernel-level optimizations are exactly why the PyTorch ecosystem continues to be the foundation for next-gen inference performance.
vLLM @vllm_project:
🚀🚀🚀 vLLM on NVIDIA GB200: 26.2K prefill TPGS, 10.1K decode TPGS for DeepSeek R1/V3. 📈 3-5x throughput vs H200 - with half the GPUs!
Key optimizations:
- NVFP4 GEMM for MoE experts
- FP8 GEMM for MLA
- Kernel fusion (RoPE+Quant+Q Write)
- Weight offloading v2 with async prefetch
Thanks to the @AIatMeta and @NVIDIAAIDev teams for the collaboration! 🙏
🔗 Blog: blog.vllm.ai/2026/02/03/dsr…
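For a rough sense of the serving setup being benchmarked, here is what running DeepSeek-R1 through vLLM's Python API looks like; the tensor-parallel degree is a placeholder, and the GB200-specific kernels described above are selected inside vLLM rather than through these arguments.

# Sketch: offline batch inference with vLLM. The model id is the public
# DeepSeek-R1 checkpoint; tensor_parallel_size should match your GPU count.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1",
    tensor_parallel_size=8,          # placeholder; R1 needs a multi-GPU node
    trust_remote_code=True,
)

params = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Explain mixture-of-experts routing in two sentences."], params)
print(outputs[0].outputs[0].text)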
Hu Xu on Sth. New @Hu_Hsu:
Following the philosophy of Meta CLIP (github.com/facebookresear…, arxiv.org/abs/2309.16671), which emphasizes learning from authentic supervision signals, researchers at @AIatMeta introduce Pixio (arxiv.org/abs/2512.15715, github.com/facebookresear…), a vision foundation model pre-trained solely with pixel-level supervision. Pixio is well suited for dense computer vision tasks, including depth estimation (e.g., Depth Anything, arxiv.org/abs/2406.09414), feed-forward 3D reconstruction (e.g., MapAnything, arxiv.org/abs/2509.13414), semantic segmentation, and robot learning.
Built on the simplicity of Masked Autoencoders (MAE, arxiv.org/abs/2111.06377), Pixio introduces four major enhancements: (1) a deeper decoder, (2) larger masking granularity, (3) more class tokens, and (4) web-scale Meta CLIP data with self-curation.
This is joint work by Lihe Yang (first author of Depth Anything v1 and v2), @ShangwenLi1, @yangli625, Xinjie Lei, @dongwang218, @AbdoMohamedML, @HengshuangZhao, and @Hu_Hsu.
Paper: arxiv.org/abs/2512.15715
GitHub: github.com/facebookresear…
Hugging Face Transformers: huggingface.co/docs/transform…
Hugging Face Collections: huggingface.co/collections/fa…
#AI #Robotics #Robot #3DModel
Niels Rogge @NielsRogge:

In collaboration with @AIatMeta, we added support for Pixio in the Transformers library! It proposes 4 changes to Masked AutoEncoders (MAE), including scaling it to 2B images. It outperforms/matches DINOv3 trained at similar scales Find the models here: huggingface.co/collections/fa…
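Since the checkpoints live on the Hugging Face Hub (collection link truncated above), feature extraction should follow the usual Transformers pattern sketched below; "facebook/pixio-base" is a placeholder id, not a confirmed checkpoint name.

# Sketch: dense patch features from a Transformers vision backbone.
# "facebook/pixio-base" is a placeholder checkpoint id; substitute an id from
# the Pixio collection on the Hub.
from PIL import Image
import requests
from transformers import AutoImageProcessor, AutoModel

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/pixio-base")  # placeholder
model = AutoModel.from_pretrained("facebook/pixio-base")               # placeholder

inputs = processor(images=image, return_tensors="pt")
features = model(**inputs).last_hidden_state  # per-patch embeddings for dense tasks
print(features.shape)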

AI at Meta @AIatMeta:
Our Segment Anything Models are helping advance flood monitoring and disaster response. See how @USRAedu and @USGS have fine-tuned SAM to automate a key bottleneck in real-time river mapping, enabling faster, scalable, and more cost-effective disaster preparedness: go.meta.me/9ec621
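For context on what fine-tuning SAM plugs into, the open source segment-anything package exposes the prompted-prediction workflow sketched below; the checkpoint path and prompt point are placeholders, and the USRA/USGS fine-tuning itself is not reproduced here.

# Sketch: point-prompted mask prediction with the segment-anything package.
# Checkpoint path, input image, and the prompt point are placeholders; a river
# mapping pipeline would fine-tune and prompt a setup like this.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # released SAM weights
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("river_scene.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click on the river channel (label 1 = foreground).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print("Best mask score:", scores.max())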
AI at Meta retweeted
Pierre Fernandez @pierrefdz:
We're thrilled to share the open source release of Meta Seal, a comprehensive, SOTA, and MIT-licensed suite of AI watermarking research, models, & training code. Learn more in the 🧵 below and explore the artifacts here: facebookresearch.github.io/meta-seal
AI at Meta @AIatMeta:
Great question! We do support fine-grained separation within instrument categories, for example, isolating acoustic vs. electric guitars or lead vs. background vocals. Keep in mind that while these sub-instrument layers are supported, they are generally more challenging to separate, so quality may vary compared to broader category separation (like separating guitar from vocals vs. acoustic guitar from electric guitar).
出易武 @inspiralarms:
@AIatMeta Could this isolate layers within an instrument? (Such as melody vs harmony vocals and lead vs rhythm guitars?)
AI at Meta @AIatMeta:
We’re open-sourcing Perception Encoder Audiovisual (PE-AV), the technical engine that helps drive SAM Audio’s state-of-the-art audio separation. Built on our Perception Encoder model from earlier this year, PE-AV integrates audio with visual perception, achieving state-of-the-art results across a wide range of audio and video benchmarks. Its native multimodal support can assist people in everyday tasks, including sound detection and richer audio-visual scene understanding. 🔗 Read the paper: go.meta.me/e541b6 🔗 Download the code: go.meta.me/7fbef0
TheKingElephant @thekingelephant:
@AIatMeta Super excited to try this. Especially with a kid in marching band, trying to better isolate and hear drum tracks could be pretty valuable
AI at Meta @AIatMeta:
@IgorIlyinsky The playground provides a quick preview of what SAM can do; you’ll have to host it yourself for API-level interoperability.