Adam Abdalla
@yabadba

50 posts

ece @uoft and ml researcher @MIT | i like making cool stuff

Joined June 2024
167 Following · 57 Followers
eugene
eugene@e_chx4·
smth like that yea
[media]
3 replies · 0 reposts · 12 likes · 668 views
Hamza Ammar
Hamza Ammar@HamzakAmmar·
Moving to Toronto for the summer, i'll be interning at @Shopify but trying to hit as many events and meetups as possible while i'm there! If you're in the city and working on something cool, let's grab coffee. DMs open (:
[media]
18 replies · 0 reposts · 108 likes · 3.8K views
Fahmi
Fahmi@fahmi___omer·
PSA: fullmetal alchemist brotherhood is the best piece of fiction in history and the greatest anime oat
7 replies · 0 reposts · 8 likes · 341 views
subodh
subodh@SubodhThallada·
@_wilsonchenn gotta be uoft with midterm avgs like this
[media]
2 replies · 0 reposts · 1 like · 115 views
wilson
wilson@_wilsonchenn·
I’m curious to know which universities cortisolmaxx the most during this season
2 replies · 0 reposts · 8 likes · 970 views
Monte.HL💧
Monte.HL💧@monteGBC·
@srijitiyer @fuckgrowth hey is there a way to use this for static image content? Do you think adding a 5 second video of a static frame would replicate how the brain responds to that image using Cortex?
3 replies · 0 reposts · 2 likes · 305 views
Matthew Allen Fisher
Matthew Allen Fisher@mathyouf·
@srijitiyer @fuckgrowth Thanks for making this, you should do something during the loading time though because it takes a while to process and it's hard to know how long I'll be waiting.
2 replies · 0 reposts · 1 like · 850 views
Adam Abdalla
Adam Abdalla@yabadba·
@srijitiyer This is rly cool! Don’t gotta worry about renting a gpu out now haha
0 replies · 0 reposts · 1 like · 23 views
Srijit Iyer
Srijit Iyer@srijitiyer·
MaxToki is a new cell aging model from NVIDIA/Gladstone trained on 175M cells. It's CUDA-only so I ported it to run on Apple Silicon via MLX. I ran perturbations on 20 genes across the heart, brain, and skin. An interesting finding: RASGEF1B ages cardiac cells by +1.05y but has the opposite effect in the brain. cellage.vercel.app github.com/srijitiyer/max…
2 replies · 0 reposts · 2 likes · 440 views
DK
DK@donghaxkim·
you can now copy, paste, and delete in react-rewrite
6 replies · 2 reposts · 17 likes · 1.2K views
Adam Abdalla
Adam Abdalla@yabadba·
@Hesamation To test if it would actually be good at optimizing ads using brain response I built Cortex with a buddy of mine. You can upload your ad and see the exact neural response it gets + how to improve it to better suit your purposes. Check it out below, it's free :)
Adam Abdalla@yabadba

Folks @AIatMeta just released TRIBE v2, a model that predicts fMRI brain activity from video. We used it to build Cortex, so you can test ads on a "digital human" to see where attention drops before you spend a dime validating before launch. cortex.buzz

0 replies · 0 reposts · 1 like · 35 views
Adam Abdalla
Adam Abdalla@yabadba·
@EvanLuthra We built Cortex to see if Tribe could really be used to monetize people’s attention. Let us know what you think!
Adam Abdalla@yabadba

Folks @AIatMeta just released TRIBE v2, a model that predicts fMRI brain activity from video. We used it to build Cortex, so you can test ads on a "digital human" to see where attention drops before you spend a dime validating before launch. cortex.buzz

0 replies · 0 reposts · 0 likes · 13 views
Evan Luthra
Evan Luthra@EvanLuthra·
🚨WHAT META JUST DROPPED IS MORE DANGEROUS THAN ANYTHING OPENAI HAS EVER BUILT!!!!! while everyone was losing their mind over Claude Mythos.. Meta dropped something that nobody noticed.. they built an AI called TRIBE v2.. it's basically a digital copy of your brain.. you show it a video, a sound, a sentence.. and it already knows how your brain is going to react.. 70,000 different parts of your brain.. blood flow, oxygen, everything.. they trained it on 1,000 hours of brain scans from 700 real people lying inside MRI machines.. it doesn't read your thoughts.. it does something worse.. it knows what's going to make you feel something before you even feel it.. think about that for a second.. if an AI already knows which image, which sound, which word is going to hit your dopamine.. you don't need to read someone's mind.. you just build the perfect trap.. and meta didn't even keep it locked up.. they open-sourced it.. gave the code, the weights, everything to the entire world.. this is the same company that got caught making instagram destroy teenage girls.. the same company whose own research said their algorithm pushes rage because rage keeps you scrolling.. that company now has a working copy of how your brain responds to everything you see and hear.. they don't have to guess what keeps you glued to the screen anymore.. they can rehearse it on a copy of your brain before you ever see it.. the product was never the app.. the product was always you.. now they have the blueprint.
AI at Meta@AIatMeta

Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2

226 replies · 716 reposts · 4.2K likes · 1.5M views
Max Wobst
Max Wobst@maxwobst·
@iblamejulius finna be able to test ads without having to throw spend on them
2 replies · 0 reposts · 32 likes · 3.7K views
Adam Abdalla
Adam Abdalla@yabadba·
@haider1 Another interesting use case could be marketing. Imagine knowing exactly how your ad would land without consulting a single person. Well, it exists now with Cortex!! cortex.buzz x.com/yabadba/status…
Adam Abdalla@yabadba

Folks @AIatMeta just released TRIBE v2, a model that predicts fMRI brain activity from video. We used it to build Cortex, so you can test ads on a "digital human" to see where attention drops before you spend a dime validating before launch. cortex.buzz

0 replies · 0 reposts · 0 likes · 3 views
Haider.
Haider.@haider1·
INCREDIBLE meta built a model that predicts brain activity tribe v2 is a digital brain foundation model that shows how the brain responds to images, audio, and text. i think this gives researchers a much better way to study the brain, and could also help future work in AI and healthcare
[media]
AI at Meta@AIatMeta

Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2

11 replies · 12 reposts · 94 likes · 8.1K views
Vaibhav Sisinty
Vaibhav Sisinty@VaibhavSisinty·
Meta just dropped something that understands your brain.

TRIBE v2 (Trimodal Brain Encoder) is a foundation model. It’s trained to predict how the human brain responds to almost any sight or sound. It’s a simulation layer for human cognition.

Here’s what just happened:
→ A single model can predict brain activity from video, audio, text
→ Across 700+ people
→ At 70,000+ voxel resolution
→ With zero-shot generalization

This isn’t modeling language. It’s modeling how you experience reality.

Implications are wild:
→ Run neuroscience experiments without humans
→ Generate synthetic brain datasets at scale
→ Reverse-engineer perception, language, emotion
→ Build AI that aligns with how humans actually think (not just what they say)

This isn’t just “neuro-AI.” It’s the beginning of: Cognition-as-a-Service.
10 replies · 8 reposts · 34 likes · 4.5K views
Karan
Karan@karankendre·
We’re not ready for this

Meta just built an AI model of the human brain:
>TRIBE v2 predicts neural activity
>handles vision, sound, and language together
>scaled to ~70,000 brain regions
>works on new people without retraining
>can run experiments without real humans

we’re getting closer to decoding thoughts
AI at Meta@AIatMeta

Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2

17 replies · 13 reposts · 111 likes · 16.5K views
Naku News
Naku News@daysbeforeagi·
🧵 The "Digital Twin" of the Human Brain Imagine a model that doesn’t just "see" a movie, but predicts exactly which neurons in your visual cortex will fire when you watch it. That’s TRIBE v2 (Trimodal Brain Encoder). It’s a foundation model for the human internal experience. 🧠 @AIatMeta
[media]
2 replies · 2 reposts · 2 likes · 1.6K views