Chaofei
127 posts

Chaofei @FanChaofei
Stanford PhD Student | Advancing Brain-Computer Interfaces 🧠 | Decoding the Language of Thoughts 💭
Palo Alto · Joined June 2009
516 Following · 328 Followers

Chaofei @FanChaofei
After some reflection, I think I'll share my experience of moving from a safe place toward something more exciting but risky. It's less about whether BCI or AI is more promising (that's personal and depends a lot on timing) and more about the universal question: how do we balance our fear of risk with passion for what we love?
Replies 0 · Reposts 0 · Likes 1 · Views 83

Nick Edwards @Nick___Edwards
@FanChaofei Curious, what was the motivation? I moved in the opposite direction (neuro to AI for science). Lots of interesting overlap
Replies 1 · Reposts 0 · Likes 1 · Views 346

Chaofei @FanChaofei
4 yrs ago, I jumped from AI into Brain-Computer Interface (BCI) research—with almost zero neuroscience experience. Today, I'm at NPTL, one of the world's leading BCI labs, contributing to breakthroughs like restoring speech for people with ALS and developing a robust handwriting BCI. There's never been a more exciting time to enter the BCI field! I'm writing a blog about my journey and advice for newcomers. What questions do YOU need answered? Ask below 👇
Replies 13 · Reposts 11 · Likes 94 · Views 12.5K

Chaofei @FanChaofei
@neurosutras Tough to put a number on it. Neuroscience is kind of like the physics behind rockets—it gives us the core principles. Then we build cool BCIs on top. Plus, there's a flywheel: better BCIs help us learn more about the brain, which makes the next BCIs even better.
Replies 1 · Reposts 0 · Likes 0 · Views 81

Chaofei @FanChaofei
@NS_Card Congrats to y'all! The 250 hrs of personal use is so impressive!
Replies 0 · Reposts 0 · Likes 5 · Views 556

Nick Card @NS_Card
Our new study is out today in the New England Journal of Medicine! We demonstrate a speech neuroprosthesis that decodes the attempted speech of a man with ALS into text with 97.5% accuracy, enabling him to communicate with his family, friends, and colleagues in his own home. 1/9
Replies 34 · Reposts 314 · Likes 1.2K · Views 262.8K

Chaofei retweeted
Frank Willett @WillettNeuro
Congratulations to @CathyrenOleande for winning the Brain-to-Text Benchmark '24 (eval.ai/web/challenges…). Her entry drove the word error rate down from 9.72% (our baseline) to 5.81%, a substantial improvement! Excited to see what we can learn from this approach and others.
Replies 5 · Reposts 9 · Likes 51 · Views 4.6K
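For context on the metric behind these benchmark numbers: word error rate is just edit distance computed over words rather than characters, divided by the reference length. A minimal sketch of that computation (not the benchmark's actual scoring code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + sub,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i want to say hello", "i want say hello world"))  # 0.4
```

One deleted word plus one inserted word against a five-word reference gives 2/5 = 40% WER, which is why insertions can push WER above what raw accuracy suggests.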

Chaofei retweeted
Tyler Benster @tbenst
Our silent speech preprint is live! Using a cross-modal training technique enhanced by LLMs, we set a new state-of-the-art for silent speech (12.2% word error rate, open vocabulary) and brain-to-text (8.9% WER; Rank 1 on Brain-to-Text Benchmark '24) arxiv.org/abs/2403.05583
Replies 4 · Reposts 19 · Likes 98 · Views 19.8K

Chaofei retweeted
Chethan Pandarinath @chethan
Are you a postbac interested in neural engineering / BCIs? 🧠🤖🗣️ Come join our team!! Work directly with our amazing BCI participants. Great exposure for prospective grad/med school applicants! snel.ai/positions
Replies 1 · Reposts 50 · Likes 176 · Views 32.3K

Chaofei retweeted
Nick Card @NS_Card
If you're attending SfN 2023 and you're interested in speech decoding, come check out my poster on Wednesday morning! We demonstrate a very high accuracy and rapidly calibrating brain-to-text BCI for restoring communication. PSTR488.12 / JJ23 abstractsonline.com/pp8/#!/10892/p…
Replies 1 · Reposts 10 · Likes 66 · Views 12.5K

Chaofei @FanChaofei
💭 Imagine relying on a brain-computer interface (BCI) for your only means of communication, but it's inconsistent and needs frequent recalibration. For those who can't move or speak, this isn't just frustrating; it's a huge barrier.
🌟 Our #NeurIPS2023 paper introduces a promising solution: Continual Online Recalibration with Pseudo-labels (CORP). Over an entire year (403 days), CORP demonstrated remarkable stability in an online handwriting BCI task. With a 6.16% word error rate, it significantly outperforms existing recalibration methods.
🥇 To the best of our knowledge, this is the longest demonstration of intracortical BCI plug-and-play stability to date.
⚙️ Technical Insight:
1⃣ CORP leverages large language models to auto-correct BCI outputs. The outputs are accurate enough to be used as pseudo training labels, enabling unsupervised recalibration without user interruption.
2⃣ We also implemented a replay buffer and data augmentation strategies to tackle the challenges of continual learning.
🤝 We encourage the research community to build on our findings. Our data and code are open for further research and collaboration.
Paper: arxiv.org/abs/2311.03611
Code: github.com/cffan/CORP
Data: doi.org/10.5061/dryad.…
This work was the result of amazing collaboration with @WillettNeuro, @xoxo_meme_queen, Nick Hahn, Foram Kamdar, Donald Avansino, Leigh Hochberg, Krishna Shenoy, Jaimie Henderson, and our clinical-trial participant T5!
Replies 1 · Reposts 21 · Likes 102 · Views 17.3K
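The two technical points above (LM-generated pseudo-labels plus a replay buffer) can be sketched as a toy loop. This is an illustration of the idea only, not the released CORP code: `decode`, `lm_correct`, and `finetune` are hypothetical stand-ins for the neural decoder, the language-model corrector, and the gradient update.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (neural_features, pseudo_label) pairs for continual learning."""
    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)  # oldest pairs are evicted automatically

    def add(self, features, pseudo_label):
        self.buffer.append((features, pseudo_label))

    def sample(self, batch_size: int):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

def recalibration_step(features, decode, lm_correct, finetune, buffer, batch_size=4):
    """One unsupervised update: decode, auto-correct with an LM, train on the pseudo-label."""
    raw_text = decode(features)          # noisy decoder output
    pseudo_label = lm_correct(raw_text)  # LM-corrected text becomes the training target
    buffer.add(features, pseudo_label)
    finetune(buffer.sample(batch_size))  # replaying old pairs mitigates forgetting
    return pseudo_label

# Toy stand-ins so the loop runs end to end.
buf = ReplayBuffer(capacity=100)
updates = []
label = recalibration_step(
    features=[0.1, 0.2],
    decode=lambda f: "helo world",
    lm_correct=lambda t: t.replace("helo", "hello"),
    finetune=updates.append,
    buffer=buf,
)
print(label)  # hello world
```

The key property, per the tweet, is that no step asks the user for ground-truth labels: the LM correction is trusted as the target, so recalibration never interrupts use.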

Chaofei retweeted
Frank Willett @WillettNeuro
Our new study is out today in Nature! We demonstrate a brain-computer interface that turns speech-related neural activity into text, enabling a person with paralysis to communicate at 62 words per minute - 3.4 times faster than prior work. 1/3 nature.com/articles/s4158…
Replies 9 · Reposts 220 · Likes 794 · Views 120.5K

Chaofei @FanChaofei
@karpathy Every time I see people using a dimension that isn't a power of 2, I 🫣
Replies 0 · Reposts 0 · Likes 0 · Views 260

Andrej Karpathy @karpathy
The most dramatic optimization to nanoGPT so far (~25% speedup) is to simply increase vocab size from 50257 to 50304 (nearest multiple of 64). This calculates added useless dimensions but goes down a different kernel path with much higher occupancy. Careful with your Powers of 2.
Replies 77 · Reposts 363 · Likes 5.4K · Views 1.3M
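The rounding Karpathy describes is a one-liner. A quick sketch of padding a vocab size up to the next multiple of 64 (the padded rows are simply never used by the tokenizer):

```python
def pad_to_multiple(n: int, multiple: int = 64) -> int:
    """Round n up to the nearest multiple, for GPU-friendly tensor dimensions."""
    return ((n + multiple - 1) // multiple) * multiple

print(pad_to_multiple(50257))  # 50304, the value nanoGPT switched to
```

The extra 47 embedding rows cost a little memory and FLOPs, but the aligned dimension lets the GEMM dispatch to a better-tiled kernel, which is where the ~25% speedup comes from.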

Nick @nickcammarata
imo not worth worrying about exactly what Buddha said. He was an early hacker who discovered cool and important stuff (eg tanha/dukkha), but some details were lost to history. Luckily, we are running the same hardware, so any important bits we should be able to easily rediscover
Replies 14 · Reposts 16 · Likes 351

Chaofei @FanChaofei
@fchollet The teacher matters as well. A great teacher saves you a ton of time.
Replies 0 · Reposts 0 · Likes 0

François Chollet @fchollet
It doesn't take 10,000 hours of deliberate practice to master a skill. Obviously it depends on the difficulty & breadth of the skill, what you already know, and how smart you are. Typically it seems to range between 200 & 2000 hours, but there's a long tail of much greater times.
Replies 9 · Reposts 32 · Likes 281

Chaofei @FanChaofei
@_KarenHao Are you suggesting that no new technology should be developed until there is a guarantee of equal access?
Replies 0 · Reposts 0 · Likes 2

Karen Hao @_KarenHao
I see this argument all the time from tech people: Building gargantuan AI models may be computationally, environmentally, and financially costly. But if those models then go on to solve cancer, isn't that on balance better for the world? NOOOO. A thread.
Replies 140 · Reposts 569 · Likes 2K