Alessandro Galloni
@argalloni

392 posts
Research Scientist in Neuromorphic Computing at Innatera. Previously: comp. neuro + ephys @ UCL, Francis Crick Institute, Rutgers

Princeton, NJ · Joined December 2014
848 Following · 440 Followers
Pinned Tweet
Alessandro Galloni@argalloni·
How do you implement spiking neurons and dendrite-dependent reinforcement learning using analog electronic circuits? We discuss this and more in our latest paper, just published! 👇 pnas.org/doi/10.1073/pn…
2 replies · 6 reposts · 21 likes · 4K views
Alessandro Galloni retweeted
Albert Gu@_albertgu·
The newest model in the Mamba series is finally here 🐍 Hybrid models have become increasingly popular, raising the importance of designing the next generation of linear models. We've introduced several SSM-centric ideas to significantly increase Mamba-2's modeling capabilities without compromising on speed. The resulting Mamba-3 model has noticeable performance gains over the most popular previous linear models (such as Mamba-2 and Gated DeltaNet) at all sizes. This is the first Mamba that was student led: all credit to @aakash_lahoti @kevinyli_ @_berlinchen @caitWW9, and of course @tri_dao!
[image]
36 replies · 313 reposts · 1.6K likes · 413.7K views
Alessandro Galloni retweeted
Aakash Gupta@aakashgupta·
Everyone’s missing the real story here. Meta’s Ray-Ban glasses need human data annotators to train the AI.

When you say “Hey Meta” and ask the glasses to analyze something, that video gets sent to Meta’s servers, then routed to Sama, a subcontractor in Nairobi, Kenya. Workers there manually label objects in your footage. They see everything you recorded, intentionally or not.

7 million pairs sold in 2025 alone. Every single pair generates training data that flows through human eyes in Kenya. Workers told Swedish journalists they see people undressing, using bathrooms, having sex, and accidentally filming bank card details. One worker said “we see everything, from living rooms to naked bodies.”

Meta’s automatic face anonymization is supposed to protect people in the footage. Workers say it fails in certain lighting. Faces that should be blurred are sometimes fully visible. The person you recorded without knowing? A stranger in Nairobi can identify them.

Buried in Meta’s terms of service is one sentence doing enormous legal work: the company reserves the right to conduct “manual (human) review” of your AI interactions. That’s the legal cover for routing intimate footage from Western homes to a $2/hour labor force operating under NDAs, office surveillance cameras, and a strict no-questions policy. Workers say if you raise concerns about what you’re seeing, you’re fired.

This is the same company, Sama, that TIME exposed in 2023 for paying Kenyan workers $2/hour to label graphic content for OpenAI while being billed at $12.50/hour per worker. Workers described the experience as torture. Sama ended that contract, then pivoted to labeling Meta’s glasses footage. Same workforce. Same rates.

Meta markets these glasses as “designed with your privacy in mind.” The privacy design is a tiny LED light on the frame that most people don’t notice.

The data pipeline behind it routes your bedroom footage to a contractor with a documented history of worker exploitation, failed anonymization, and union-busting lawsuits. And the next generation of these glasses? Meta is planning to add facial recognition. The same system that can’t reliably blur faces in training data wants to start identifying them on purpose. The LED light on the frame is doing about as much for your privacy as the terms of service nobody reads.
Shibetoshi Nakamoto@BillyM2k

why the fuck meta employees watching videos their users are taking

443 replies · 15.1K reposts · 48.2K likes · 4.8M views
Alessandro Galloni retweeted
Jeremy Howard@jeremyphoward·
For those that hope (or worry) that LLMs will do breakthrough scientific research, I've got good (or bad) news: LLMs are particularly, exceedingly, marvellously ill-suited to this task. (if you're a researcher, you'll have noticed this already) Here's why🧵
114 replies · 580 reposts · 4K likes · 1M views
Alessandro Galloni retweeted
Danyal Akarca@DanAkarca·
Two papers out, on a new paradigm of temporal computation! Our first work funded by @ARIA_research. We're super proud of this, and there's much more coming:

Neural networks, specifically their weights, have become the most useful functional abstraction from the brain. As powerful function approximators they seeded the current era of AI. But there are many more useful abstractions. The brain does so much more than learn weights. At its core, the brain exploits the structure of the physical world to perform computation. It aligns itself to reality - to time and space.

@achterbrain and I have long been working on how to embed neural networks in space and link this to hardware. But there's another dimension: time itself. We think leveraging both is a big part of the puzzle as to why human learning is so efficient. If this is true, how can we use space and time in neural networks? What would this even mean?

We pitched to @BramhavarSuraj at @ARIA_research one possible way. Several years ago, I stumbled upon theoretical work that made it concrete. Time delays - the physical fact that signals take time to travel - can store memory (and other things, including increasing the number of computable functions). Even in feedforward networks. The delay here isn't overhead, as would traditionally be thought, but a feature that can be exploited.

TL;DR - our main findings show that you can do computation in neural networks with time, without (much) need for weights. And it's remarkably efficient. We also show it's possible to co-design hardware (we open-source a chip design) with novel architectures that exploit time to maintain long contexts.

In our first paper, we train neural networks to learn delays and weights. The result: state-of-the-art performance on all the temporally complex neuromorphic benchmarks we tested. Crucially, once you're encoding time directly, weights become almost irrelevant. We compress them to 1.58 bits: just positive, negative, or absent (ternary) weights. That's it. Model sizes drop to double-digit kilobytes! This works because we're finally encoding information the way the task needs it. Time and space. Which just so happens to be... everything in the physical world. Robotics. Embodied systems. Physical intelligence.

In the second paper, we turn to memory. Intelligent systems don't just compute - they hold onto information over long context windows. And doing this efficiently in the hardware underlying computation is a hard, unsolved problem. To solve this, we built a dual-memory-pathway architecture: fast spiking dynamics plus a compact state-space memory module that evolves much more slowly. Inspired by how the mammalian cortex separates fast somatic spiking from slower dendritic integration. Each layer maintains a tiny amount of working memory - just ~5% of hidden width - that summarises recent activity and feeds back into the network. There are many directions to scale this further.

We co-designed the algorithms and hardware together from the ground up. The result: >4× throughput and >5× energy efficiency, beating Intel's Loihi 2 and other leading neuromorphic platforms like DenRAM and ReckOn. We're open-sourcing the chip design so people can build on this. Incredibly proud of the team: @pengfeisun17, @achterbrain, @neuralreckoning, Zhe Su and @giacomoi

One of our key takeaways: the current AI paradigm is a narrow slice of what's possible. Scaling homogeneous systems only gets you so far. Biological intelligence is deeply heterogeneous: different timescales, different substrates, different specialisations, all co-evolved together. We think the next frontier of scaling means embracing that heterogeneity. Algorithms and hardware aren't separate problems. They need to co-evolve together. Can't wait to share what we're cooking next.
[3 images]
5 replies · 28 reposts · 140 likes · 23.3K views
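Not the papers' implementation, but a toy numpy sketch of the core idea in the thread above: with per-tap time delays, a feedforward summing unit with purely ternary weights {-1, 0, +1} and no learned magnitudes can already compute a temporal function. Here a zero-delay tap (+1) plus a one-step-delayed tap (-1) turns the unit into a temporal edge detector (all names and values are illustrative assumptions):

```python
import numpy as np

def delay_net(x, delays, weights):
    """Feedforward unit summing delayed copies of one input signal.

    delays:  integer timestep delay per tap (the learned quantity)
    weights: ternary tap weights in {-1, 0, +1}
    """
    y = np.zeros_like(x, dtype=float)
    for d, w in zip(delays, weights):
        if w != 0:
            # tap contributes the input shifted d steps into the past
            y[d:] += w * x[: len(x) - d]
    return y

x = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=float)  # a square pulse

# Tap 0: delay 0, weight +1; tap 1: delay 1, weight -1.
# The unit computes x[t] - x[t-1]: +1 at the rising edge, -1 at the falling edge,
# i.e. memory of the previous timestep stored entirely in the delay.
y = delay_net(x, delays=[0, 1], weights=[1, -1])
```

Changing only the integer delays (not the weight magnitudes) changes which temporal feature the unit responds to, which is the sense in which computation moves from weights into time.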
Alessandro Galloni retweeted
François Chollet@fchollet·
The third edition of Deep Learning with Python is 50% off when purchased from the publisher's website, for a limited number of days. You can get the print book for $39.99 instead of $80. manning.com/books/deep-lea…
12 replies · 14 reposts · 202 likes · 22.9K views
Alessandro Galloni@argalloni·
Q: In current clamp, does the delivered current get filtered by Rs? A: No, the current is truly a square step with accurate amplitude and timing (as long as you don’t saturate the op-amp). The main source of error here is the current leaking out through a bad seal.
0 replies · 0 reposts · 0 likes · 32 views
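A numerical sketch of the current-clamp answer above, with assumed illustrative cell parameters (not values from the tweet): the commanded current arrives at the cell as a true square step, so the membrane charges with tau_m = Rm*Cm, while series resistance Rs only adds a constant ohmic offset I*Rs to the *recorded* voltage — the error that bridge balance corrects, not a filter on the current:

```python
import numpy as np

# Illustrative (assumed) whole-cell parameters
I = 100e-12              # commanded current step: 100 pA
Rm, Cm = 200e6, 30e-12   # membrane resistance 200 MOhm, capacitance 30 pF
Rs = 10e6                # series (pipette) resistance: 10 MOhm

tau_m = Rm * Cm          # membrane time constant governs charging
t = np.linspace(0, 5 * tau_m, 1000)

# Current delivery is not filtered by Rs, so Vm follows the plain
# RC charging curve set by the membrane itself:
Vm = I * Rm * (1 - np.exp(-t / tau_m))

# Rs adds only an instantaneous, constant I*Rs offset to the recorded
# voltage for the duration of the step (removed by bridge balance):
V_recorded = Vm + I * Rs
```

With these numbers the offset is I*Rs = 1 mV, identical at every time point, which is the signature of an ohmic error rather than a filtering (time-constant) error.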
Alessandro Galloni@argalloni·
Q: In V-clamp (without compensation), does the cell membrane charge up slowly according to the membrane time constant tau = Rm*Cm? A: No, the membrane charges up slowly according to tau = Rs*Cm, i.e. the speed of charging depends on the *series resistance (Rs)*, not the membrane/leak resistance Rm.
1 reply · 0 reposts · 0 likes · 55 views
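A companion sketch for the voltage-clamp answer above, again with assumed illustrative values: the membrane charges through the pipette, so the effective time constant is Cm times the parallel combination of Rs and Rm, which reduces to approximately Rs*Cm when Rs is much smaller than Rm — far faster than the membrane time constant Rm*Cm:

```python
import numpy as np

# Illustrative (assumed) parameters
Rm, Rs, Cm = 200e6, 10e6, 30e-12   # membrane R, series R, membrane C
V_cmd = 10e-3                      # 10 mV command step

# Charging is set by Cm * (Rs || Rm), which is ~Rs*Cm for Rs << Rm,
# NOT by the membrane time constant Rm*Cm:
tau_eff = Cm * (Rs * Rm) / (Rs + Rm)
tau_m = Rm * Cm                    # much slower; irrelevant to clamp speed

t = np.linspace(0, 5 * tau_eff, 1000)
# Steady-state Vm is also attenuated by the Rs/Rm voltage divider:
Vm = V_cmd * (Rm / (Rm + Rs)) * (1 - np.exp(-t / tau_eff))
```

With these numbers tau_eff is roughly 0.29 ms versus tau_m of 6 ms, which is why reducing Rs (or using series-resistance compensation) is what speeds up the clamp.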
Alessandro Galloni@argalloni·
Hey ephys friends! I've been teaching electrophysiology for a few years now, and this summer while teaching at CSHL I started building an interactive simulation of a full patch-clamp amplifier circuit to help explain electronics to my students. Sharing it here in case anyone finds it useful!
[image]
1 reply · 2 reposts · 2 likes · 147 views
Alessandro Galloni retweeted
Eric Zhang@ekzhang1·
We're thrilled to share Modal Notebooks: a new, powerful cloud-hosted GPU notebook. It has modern real-time collaborative editing and is backed by our AI infrastructure — swap GPUs in seconds. Modal Notebooks are generally available, and you can start using them now. 🧵
42 replies · 71 reposts · 906 likes · 260.8K views
Alessandro Galloni retweeted
Allen Institute@AllenInstitute·
How does the brain work? Scientists are closer to the answer with the largest wiring diagram and functional map of a mammalian brain to date. 🧵
44 replies · 502 reposts · 1.7K likes · 190.5K views