Arindam Basu

195 posts

@BasuNeuro82

Professor at City Uni of Hong Kong, MIT TR35 Asia Pacific, IEEE CASS Distinguished Lecturer, GT 40 under 40

Hong Kong · Joined June 2020
147 Following · 253 Followers
Arindam Basu@BasuNeuro82·
We have already used this to run competitions at BioCAS 2024 and will be supporting other competitions in the future.
Arindam Basu@BasuNeuro82·
v1 has open-loop benchmarks; stay tuned for v2, where we will present closed-loop versions. Great job, Vincent Sun and Biyan Zhou! Please use this in your work and reach out to us if you need any support.
Arindam Basu@BasuNeuro82·
The AlexNet moment arrived in DL because the ImageNet benchmark existed in the first place. For neuromorphic computing systems (NCS), what is such a benchmark? It is not static images like ImageNet, for sure. The importance of benchmarks for NCS has been widely acknowledged.
Arindam Basu@BasuNeuro82·
It's difficult to do regression with SNNs. We show that for motor decoding problems, SNNs combined with traditional signal filtering do wonders! @CityUHongKong
Neuromorphic Computing and Engineering@IOPneuromorphic

@BasuNeuro82 and @CityUHongKong colleagues show that combining signal filtering with SNNs improves their decoding performance significantly for regression tasks, closing the gap with Long Short-Term Memory networks at little added computing cost iopscience.iop.org/article/10.108…

Arindam Basu@BasuNeuro82·
That means that after each layer calculates its matrix products in analog, the results must be converted to digital to generate activations, which must then be converted back to analog for the next layer.
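The ADC/DAC round trip described above can be sketched numerically. This is a toy model only; the bit width, signal range, and function names are illustrative assumptions, not details from the paper:

```python
import numpy as np

def quantize(x, bits=8, x_range=4.0):
    """Uniform mid-tread quantizer standing in for an ADC or DAC
    (8-bit width and +/-4.0 range are assumed for illustration)."""
    step = (2.0 * x_range) / (2 ** bits)
    return np.clip(np.round(x / step) * step, -x_range, x_range)

def analog_layer(x, W, bits=8):
    # The analog crossbar computes the vector-matrix product cheaply ...
    y_analog = W @ x
    # ... but the result must pass through an ADC before the digital
    # activation can be generated ...
    y_digital = quantize(y_analog, bits)
    a = np.maximum(y_digital, 0.0)  # ReLU, computed in the digital domain
    # ... and back through a DAC to drive the next analog layer's inputs.
    return quantize(a, bits)
```

Every layer boundary thus pays for one ADC and one DAC pass, which is why data-converter overhead, rather than the multiply itself, can come to dominate once the analog MVM is efficient.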
Arindam Basu@BasuNeuro82·
nature.com/articles/s4146… New paper out in @NatureComms . What does this work achieve? Current in-memory compute mostly accelerates the vector-matrix multiply of DNNs, which is indeed a major workload. But once that becomes really efficient, Amdahl's law catches up ..
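Amdahl's law makes the point concrete. A quick back-of-the-envelope calculation (the 90% workload fraction is an illustrative assumption):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the runtime
    is accelerated by a factor s (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / s)

# Even if the MVM (say, 90% of runtime) gets a 100x in-memory accelerator,
# the untouched 10% caps the end-to-end speedup below 10x.
print(round(amdahl_speedup(0.9, 100), 2))  # 9.17
```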
Arindam Basu@BasuNeuro82·
We show event collisions are manageable and introduce a dual-threshold scheme to manage the tradeoff between reconstruction accuracy and compression ratio. Great collaboration between @CityUHongKong and @NTUsg
Arindam Basu@BasuNeuro82·
However, if the threshold for change detection is large, spike reconstruction is poor, and if the threshold is small, background noise generates a lot of events. In our paper, my student Vivek explores the feasibility of such schemes in large-scale recordings from Neuropixels.
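The tradeoff can be illustrated with a minimal send-on-delta event generator. This is a sketch of the general idea only; the paper's actual dual-threshold scheme and parameters are not reproduced here:

```python
def change_detect(samples, threshold):
    """Emit a (time, value) event whenever the signal drifts more than
    `threshold` away from the last transmitted value."""
    events, last = [], samples[0]
    for t, x in enumerate(samples[1:], start=1):
        if abs(x - last) > threshold:
            events.append((t, x))  # address-event style readout
            last = x
    return events

sig = [0.0, 0.05, 0.1, 1.0, 1.05, 0.0]  # one "spike" riding on small noise
print(len(change_detect(sig, 0.5)))   # 2 events: coarse but compact
print(len(change_detect(sig, 0.04)))  # 5 events: faithful but noise-driven
```

A large threshold misses the small excursions (poor reconstruction); a small one fires on every noise wiggle (poor compression), which is the tension a dual-threshold design tries to resolve.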
Arindam Basu@BasuNeuro82·
Intra-cortical Brain-machine interfaces (iBMI) have great potential to change lives of patients with paralysis, locked-in syndrome etc. especially if they can be made WIRELESS. Power-dissipation & bandwidth constraints necessitate some form of compression on the implant. A 🧵
Neuromorphic Computing and Engineering@IOPneuromorphic

@BasuNeuro82 (@CityUHongKong) and colleagues from @NTUsg show a neuromorphic-compression-based neural sensing architecture with an address-event-representation-inspired readout protocol for a massively parallel, wireless implantable brain-machine interface (iBMI) iopscience.iop.org/article/10.108…

Arindam Basu@BasuNeuro82·
Interestingly, a recent article on In-memory compute mentions a company Sagence whose sub-threshold Flash circuits could be a game changer :) Hope this paper is useful to industry folks as well :) semiengineering.com/is-in-memory-c…
Arindam Basu@BasuNeuro82·
It was a great experience walking through 30+ years of history of designing NVM-based crossbars for neural networks. A lot of recent work on in-memory compute "re"uses many techniques/principles we developed 20-30 years back :) .. hope the paper gives you new research ideas!
Neuromorphic Computing and Engineering@IOPneuromorphic

This review article from Jennifer Hasler (@GeorgiaTech) and @BasuNeuro82 (@CityUHongKong) gives an historical perspective of the use of computing-in-memory with non-volatile memory, looking at some of the paths that have led towards neuromorphic computing: iopscience.iop.org/article/10.108…
