Pascale Fung

157 posts


@pascalefung

Cofounder and CRIO of AMI Labs. Chair Professor of ECE, HKUST. Fellow of AAAI, ACL, IEEE, ISCA.

Paris, France · Joined August 2010
48 Following · 6K Followers
Pascale Fung@pascalefung·
Be still my trekkie heart…
Chris 🌎@chriskclark

@amilabs AMI: The final frontier. These are the voyages of a new AI enterprise. Its 5-year mission: To explore & learn about strange new worlds, To seek out & support new life and new civilizations, To boldly go where no man or woman has gone before.

Pascale Fung@pascalefung·
I am hiring researchers and builders for our #Paris team to build advanced machine intelligence that is fundamentally human-centered. amilabs.xyz
Pascale Fung reposted
WIRED@WIRED·
Meta’s former chief AI scientist has long argued that human-level AI will come from mastering the physical world, not language. His new startup, AMI, plans to prove it. wired.com/story/yann-lec…
Pascale Fung@pascalefung·
I am happy to share that I have joined forces with @ylecun and fellow founders as Co-Founder and Chief Research & Innovation Officer at AMI - Advanced Machine Intelligence. I will lead research initiatives that push AI to be genuinely human-centered - AI that perceives, learns, reasons and acts like we do, and in our best interest. I am thankful for the trust placed in us and deeply aware of the responsibility we share to make the world a better place through our work every day. Join us!
AMI Labs@amilabs

Advanced Machine Intelligence (AMI) is building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe. We’ve raised a $1.03B (~€890M) round from global investors who believe in our vision of universally intelligent systems centered on world models. This round is co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, along with other investors and angels across the world. We are a growing team of researchers and builders, operating in Paris, New York, Montreal and Singapore from day one. Read more: amilabs.xyz AMI - Real world. Real intelligence.

Pascale Fung@pascalefung·
People tend to either overestimate or underestimate the power of LLMs. No, LLMs do not know the world first-hand; they are second-hand learners from linguistic, symbolic and visual descriptions of human knowledge of the world. No, LLMs are not just “parrots” trying to string words together - n-grams are. LLMs can make correlations from the words they are trained to predict. When you know these two fundamental characteristics of LLMs, you will neither be surprised by hallucinations nor overly impressed by, or reliant on, LLMs for everything. #LLM #llmhallucinations #GenerativeAI #LanguageModel
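The "parrot" baseline being contrasted here can be made concrete with a toy sketch. A minimal bigram model really does just string words together from co-occurrence counts, with no representation of meaning (corpus and helper names are my own illustration, not from the post):

```python
from collections import Counter, defaultdict

# Toy bigram ("parrot") model: predicts the next word purely from
# counts of adjacent word pairs seen in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word that followed `word` in the corpus."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" followed "the" twice, "mat" once -> "cat"
```

An LLM, by contrast, learns dense representations that capture correlations far beyond adjacent-pair counts - which is exactly why it is neither a mere parrot nor a first-hand knower of the world.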
Pascale Fung reposted
Delong Chen (陈德龙)@Delong0_0·
See our paper on arXiv (arxiv.org/abs/2601.10592) -- Action100M: A Large-scale Video Action Dataset by Delong Chen (@Delong0_0), Tejaswi Kasarla (@tkasarla_), Yejin Bang (@yejin_bang), Mustafa Shukor (@MustafaShukor1), Willy Chung (@willyhcchung), Jade Lei Yu, Allen Bolourchi (@AllenBolourchi), Théo Moutakanni (@TheoMoutakanni), and Pascale Fung (@pascalefung). The dataset can be accessed from this repo: github.com/facebookresear…
Pascale Fung reposted
Delong Chen (陈德龙)@Delong0_0·
We release Action100M, the hero behind VL-JEPA. It is a large dataset with O(100 million) dense action annotations on HowTo100M procedural videos. We hope it serves as a robust data foundation to advance physical world modeling research.
DailyPapers@HuggingPapers

Meta just released Action100M on Hugging Face A massive video dataset with 100M+ hierarchical action annotations. Every video includes tree-of-captions with action labels, brief and detailed summaries.

Pascale Fung@pascalefung·
Introducing VL-JEPA: Vision-Language Joint Embedding Predictive Architecture for streaming, live action recognition, retrieval, VQA, and classification tasks, with better performance and higher efficiency than large VLMs.
• VL-JEPA is the first non-generative model that can perform general-domain vision-language tasks in real time, built on a joint embedding predictive architecture.
• We demonstrate in controlled experiments that VL-JEPA, trained with latent-space embedding prediction, outperforms VLMs that rely on data-space token prediction.
• We show that VL-JEPA delivers significant efficiency gains over VLMs for online video streaming applications, thanks to its non-autoregressive design and native support for selective decoding.
• We highlight that our VL-JEPA model, with a unified model architecture, can effectively handle a wide range of classification, retrieval, and VQA tasks at the same time.
by @Delong0_0 @MustafaShukor1 @TheoMoutakanni @willyhcchung Jade Lei Yu Tejaswi Kasarla @AllenBolourchi @ylecun @pascalefung arxiv.org/abs/2512.10942
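The distinction between latent-space embedding prediction and data-space token prediction can be sketched in a few lines. This is a toy illustration of the two training objectives only - the dimensions, variable names, and loss choices are my own assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 1000, 64  # toy sizes, not the paper's

# --- Data-space token prediction (generative VLMs) ---
# The model emits logits over the whole vocabulary and pays a
# cross-entropy cost for the exact next token.
logits = rng.normal(size=vocab_size)
target_token = 42
m = logits.max()
log_probs = logits - (m + np.log(np.exp(logits - m).sum()))  # log-softmax
token_loss = -log_probs[target_token]

# --- Latent-space embedding prediction (JEPA-style, non-generative) ---
# The model predicts the target's *embedding* directly and pays a
# regression cost in latent space; no softmax over a vocabulary,
# no token-by-token autoregressive decoding.
predicted_emb = rng.normal(size=embed_dim)
target_emb = rng.normal(size=embed_dim)
latent_loss = np.mean((predicted_emb - target_emb) ** 2)

print(f"token loss: {token_loss:.2f}, latent loss: {latent_loss:.2f}")
```

Because the latent objective never materializes a vocabulary-sized distribution per step, a single embedding prediction can stand in for many decoded tokens - one intuition for the streaming-efficiency claim above.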
Pascale Fung@pascalefung·
My father passed away suddenly on Oct 21st, 2025. A painter, director, educator and a pioneer in Chinese animation, he was also a devoted father and husband to the very end. He is missed but will be remembered forever.
Pascale Fung reposted
Delong Chen (陈德龙)@Delong0_0·
Thanks @_akhaliq for sharing! More about our VLWM:
- Non-pixel-generative world model that reasons in abstract semantic space
- Learned from 20k hours of unlabeled egocentric / web procedural videos with 5.7M action steps
- System-2 planning with reasoning by cost-guided plan search
Congrats to the whole team! @TheoMoutakanni @willyhcchung @yejin_bang @ZiweiJi184538 @AllenBolourchi @pascalefung
AK@_akhaliq

Planning with Reasoning using Vision Language World Model

Pascale Fung@pascalefung·
New Vision Language World Model for planning and reasoning in the physical world
Delong Chen (陈德龙)@Delong0_0

Thanks @_akhaliq for sharing! More about our VLWM:
- Non-pixel-generative world model that reasons in abstract semantic space
- Learned from 20k hours of unlabeled egocentric / web procedural videos with 5.7M action steps
- System-2 planning with reasoning by cost-guided plan search
Congrats to the whole team! @TheoMoutakanni @willyhcchung @yejin_bang @ZiweiJi184538 @AllenBolourchi @pascalefung

Pascale Fung reposted
AI at Meta@AIatMeta·
Announcing the newest releases from Meta FAIR. We’re releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience.
1️⃣ Open Molecules 2025 (OMol25): A dataset for molecular discovery with simulations of large atomic systems.
2️⃣ Universal Model for Atoms: A machine learning interatomic potential for modeling atom interactions across a wide range of materials and molecules.
3️⃣ Adjoint Sampling: A scalable algorithm for training generative models based on scalar rewards.
4️⃣ FAIR and the Rothschild Foundation Hospital partnered on a large-scale study that reveals striking parallels between language development in humans and LLMs.
Read more ➡️ go.fb.me/q5l4cz
Pascale Fung@pascalefung·
Why do you need another benchmark for LLM hallucinations? Having studied their sources and mitigation methods, we realized that we could never solve the problem if we keep going in circles by conflating model "hallucination" with the "factuality" of the answers. So we designed a benchmark that separates the two when evaluating models. We also designed the benchmark so that it cannot be easily saturated by fine-tuning on itself. "HalluLens" is all you need as an LLM hallucination benchmark from now on. arxiv.org/abs/2504.17550
Pascale Fung@pascalefung·
I will be giving an A*STAR Distinguished Lecture tomorrow on Mental World Modeling under the title “NLP Is Dead, Long Live NLP.”
Date: 29 Apr (Tues)
Time: 10am-11am
Venue: INFUSE Theatre, INFUSE, 14th SOUTH Connexis Tower, Fusionopolis (1 Fusionopolis Way, Singapore 138632)
Pascale Fung@pascalefung·
The counterpart to the #AI #governance work is the scientists working on mitigating algorithmic shortcomings such as bias, hallucinations, privacy leaks, etc. They are doing the real work to prevent harm, but they are much less known than those who know little but talk loudly. #ResponsibleAI