Charlie S. Burlingham
@csburlingham

Vision Science & AI @ Meta Reality Labs
Redmond, WA · Joined April 2021
241 Following · 182 Followers
133 posts
Charlie S. Burlingham @csburlingham
I'm hiring my first Research Scientist (PhD) Intern. Topic areas:
• Vision science (brightness, blur)
• Environmental & retinal image statistics
• Perceptually-aligned vision models
• Sensor-to-display control algorithms
Please apply if interested: lnkd.in/ea8B-kJv
Charlie S. Burlingham @csburlingham
🎉 New paper out! We show that training improves motion categorization but doesn't reduce (or even worsens) misperceptions, explained by a model combining efficient coding, implicit categorization, and increased encoding precision: journals.plos.org/ploscompbiol/a…
Charlie S. Burlingham reposted
Eiko Fried @EikoFried
So in 2007, physicists wrote a paper that made the headlines: according to their calculations, human coin flips aren’t 50/50 - more like 51/49. Why is that, and did students in Amsterdam really flip 350,000 coins to find out? 🧵
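The thread's two numbers (a 51/49 same-side bias, and 350,000 flips to test it) can be sanity-checked with a quick back-of-envelope calculation. This is an illustrative sketch, not from the thread itself: it computes the standard error of a sample proportion at n = 350,000 flips and asks how many standard errors a one-point bias (0.51 vs. 0.50) would be.

```python
import math

def se(n: int, p: float = 0.5) -> float:
    """Standard error of a sample proportion after n Bernoulli trials."""
    return math.sqrt(p * (1 - p) / n)

n = 350_000
bias = 0.51 - 0.50  # the claimed 51/49 same-side bias

print(f"SE at n={n}: {se(n):.5f}")          # ~0.00085
print(f"z-score of a 1-point bias: {bias / se(n):.1f}")  # ~11.8
```

At that sample size a 1% bias sits roughly 12 standard errors from 50/50, which is why a sample on the order of hundreds of thousands of flips is needed to detect such a small effect decisively.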
Charlie S. Burlingham reposted
Michael J. Proulx @MichaelProulx
All-day AR would benefit from AI models that understand a person's context, & eye tracking could be key for task recognition. Yet past work, including our own (research.facebook.com/publications/c…), hasn't found much added value from gaze in addition to computer vision & egocentric video. 2/
Charlie S. Burlingham reposted
Leah Banellis @LeahBanellis
Got butterflies in your stomach? 😵‍💫 I am super excited to share the first major study of my postdoc @visceral_mind! We report a multidimensional mental health signature of stomach-brain coupling in the largest sample to date 🧵👇 biorxiv.org/content/10.110…
Charlie S. Burlingham @csburlingham
Once the input state space is well-aligned with human action & vision, and appropriate models that can represent long-term dependencies are used, we believe that multiple problems in contextual AI may be solved convergently by a single (gaze-based) visual foundation model. 6/
Charlie S. Burlingham @csburlingham
But object-part-based image segmentation is just starting to gain traction, and universal segmentation (segmenting and labelling all image pixels) is still a challenge. So this is one major bottleneck for aligning the model input state space with that of human active vision. 5/
Charlie S. Burlingham @csburlingham
New paper alert! @RealityLabs Eye gaze in everyday life contains multi-scale temporal dependencies across objects (1-7 fixations into the past, depending on the task), akin to natural language. This is key to foundation models for visual understanding in mixed reality: dl.acm.org/doi/10.1145/36…