Sohan

6.7K posts

@HiSohan

I collect dots to connect them later. Views are my own. Building https://t.co/QxnsCuozpY Writing on https://t.co/rOzI5yIXfV

Bengaluru, India · Joined December 2011
598 Following · 1.4K Followers
Pinned Tweet
Sohan@HiSohan·
The philosophies of learning in Tagore and Socrates look very different. Tagorean curiosity is one of wonder and bringing a fresh perspective. Socratic inquiry is based on doubt: to reveal a deeper truth by questioning. I love both.
3 replies · 1 repost · 14 likes · 5.8K views

Sohan@HiSohan·
None of us know what consciousness is, and intelligence is conflated with consciousness. But you either believe in panpsychism (in which case consciousness is trivial) or concede that we don't understand consciousness. What we have established is that logic gates can and do emulate intelligence, because they are Turing complete.
0 replies · 0 reposts · 0 likes · 17 views

vixhaℓ@TheVixhal·
Could consciousness emerge from logic gates?

Modern AI systems ultimately run on just 3 basic logic gates:
- AND Gate
- OR Gate
- NOT Gate

Individually these gates are extremely simple. But when billions of them are combined together in complex systems, they can process language, generate code, recognize patterns, and simulate human-like reasoning.

If intelligence-like behavior can emerge from massive combinations of simple logic gates, could consciousness emerge too? And if human brains are also made from simpler units like neurons, is consciousness just an emergent property of complexity?
234 replies · 21 reposts · 250 likes · 23.1K views

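The composition the tweet describes can be made concrete in a few lines. A minimal sketch (function names are mine, not from any library): AND, OR, and NOT composed into a one-bit half adder, so that arithmetic emerges from parts that individually do almost nothing.

```python
# The three primitive gates from the tweet, as plain functions on bits (0/1).
def AND(a, b): return a & b
def OR(a, b): return a | b
def NOT(a): return 1 - a

def XOR(a, b):
    # XOR built only from the three primitives: (a OR b) AND NOT(a AND b).
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Return (sum, carry) -- binary addition emerging from gate composition."""
    return XOR(a, b), AND(a, b)

# Full truth table: 1+1 yields sum 0 with carry 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```

Chaining half adders gives full adders, full adders give ALUs, and so on up the stack: the same layering the tweet points at.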
Sohan@HiSohan·
You either die a coder, or live long enough to see yourself become a slop coder.
0 replies · 0 reposts · 1 like · 51 views

Sohan@HiSohan·
@ValerioCapraro Humans predict everything, from sensory inputs to societal structure.
0 replies · 0 reposts · 0 likes · 50 views

Valerio Capraro@ValerioCapraro·
Important Nature Neuroscience paper shows how humans differ from LLMs.

Many people currently believe that humans are just next-word predictors, like LLMs. But this new paper by Zou, Poeppel and Ding suggests something more interesting.

The human brain does predict words. But it does not predict every word with the same precision. Prediction is constrained by linguistic structure. When a word continues the current phrase, brain activity tracks word surprisal in a way that resembles an LLM. But when a word crosses a major phrase boundary, the match weakens.

In other words, the brain does not simply ask: “What is the next word?” It also asks: “What structure am I currently building?”

This challenges one of the most common biases in today’s technological world: the belief that human language works like a large language model. The answer is: no. Human language is not just next-token prediction.

* Paper in the first reply
62 replies · 147 reposts · 551 likes · 35.9K views

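The quantity at the center of that comparison, word surprisal, is straightforward to compute once you have a probability model. A toy sketch with invented bigram counts (not from the paper, which uses a real LLM as the predictor):

```python
# Surprisal of a word is -log2 of its probability given the context:
# predictable words carry little surprise, rare continuations carry more.
# The counts below are made up purely for illustration.
import math

bigram_counts = {
    ("the", "cat"): 8,
    ("the", "theorem"): 2,
}
context_counts = {"the": 10}

def surprisal(context, word):
    """Surprisal in bits of `word` following `context` under the toy model."""
    p = bigram_counts[(context, word)] / context_counts[context]
    return -math.log2(p)

print(surprisal("the", "cat"))      # common continuation -> low surprisal
print(surprisal("the", "theorem"))  # rarer continuation -> higher surprisal
```

The paper's finding, in these terms: brain activity tracks this quantity well within a phrase, but more weakly across a major phrase boundary.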
🌸 Bunga 🌸@Idyllic0812·
@NTFabiano Hyperconnected autistic brains = built-in quantum WiFi! No wonder we sync up like a secret neural club, we’re not wired wrong, we’re overclocked for awesome.
1 reply · 1 repost · 18 likes · 1.8K views

Nicholas Fabiano, MD@NTFabiano·
Children with autism have hyperconnected brains.
89 replies · 542 reposts · 6.3K likes · 831.5K views

Sohan@HiSohan·
@_svs_ Yes and no. I have done a lot of work through agents. Spent a combined $20K on tokens alone. Found patterns, wrote a paper that is being appreciated. But what I can't shake off is fundamentals. They need to be drilled in. Sometimes unblocking the agents takes a few manual changes.
0 replies · 0 reposts · 1 like · 151 views

svs 🇮🇳@_svs_·
Building AI applications is more like gardening than like construction.

When you 'build' something, you know where everything is, how it interacts with its environment, and the stresses and strains it takes. When you garden, you plant seeds and let them grow, keep an eye on their health, and intervene only where necessary.

Now that we've unleashed the golems, there's no point in knowing exactly how the system is built. Rather, we must train ourselves to spot the diseases: the antipatterns, the two components that should be the same but look different, and so on.

Tools for checking code health and tests for verifying behaviour are all the understanding we need. For the rest, let the agents cook.
9 replies · 20 reposts · 152 likes · 8.5K views

Sohan@HiSohan·
@just_avik Bengal was supposed to be what Bengaluru was.
0 replies · 0 reposts · 1 like · 669 views

Sohan@HiSohan·
@al0k_91 @bratrat @StuartHameroff @srikipedia No plans yet, but if you want to try we can find something to simulate. Mostly the API isn't stable yet and committing publicly means I can't make breaking changes as easily.
0 replies · 0 reposts · 2 likes · 36 views

Stuart Hameroff@StuartHameroff·
Right. After 40 years, cartoon neuron theories can't simulate the behavior of a simple worm. That's because each of those 300 or so neurons has about a billion tubulins in hundreds of microtubules, where cognition (and consciousness) originate. For example, see this paper journals.physiology.org/doi/full/10.11… Signaling among neurons causing axonal firings is mediated by megahertz and gigahertz oscillations in microtubules. The only theory of consciousness with any supportive evidence is Orch OR academic.oup.com/nc/article/202…
Prof. Brian Keating@DrBrianKeating

A worm with 309 neurons. Mapped completely since 1986. Forty years later, no simulation reproduces its behavior. Joscha Bach (@Plinz) on what that means for BigTech's plan to upload a human brain. youtu.be/CzjWGkXlK8k

34 replies · 44 reposts · 375 likes · 29.8K views

Sohan@HiSohan·
if you're a computational neuroscientist, a PhD student working with NEURON/Brian2/Jaxley, or just someone who thinks neural simulation should run on modern hardware without a 1990s-era workflow, we'd love to hear from you. would you use something like this? let us know 👇
1 reply · 0 reposts · 0 likes · 122 views

Sohan@HiSohan·
what's next:
- synaptic connectivity between cells in the grid
- scripted "experiments" you can define in Python and render as video
- more morphology types (we're pulling from NeuroMorpho.org)
- offline rendering pipeline for publication-quality output
1 reply · 0 reposts · 0 likes · 134 views

Sohan@HiSohan·
this isn't a toy demo anymore, i feel. every cell solves the actual cable equation with proper spatial discretization. the channel dynamics are validated against NEURON (the gold standard simulator in computational neuroscience).
1 reply · 0 reposts · 0 likes · 147 views

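To make "cable equation with proper spatial discretization" concrete: discretizing the cable into compartments and stepping implicitly (backward Euler) turns each timestep into a tridiagonal linear solve. A hedged sketch on a passive, unbranched cable; all parameter values and function names here are illustrative, not the project's actual code:

```python
# One backward-Euler step of a passive cable, C_m dV/dt = g_a d2V/dx2 - g_l (V - E_l).
# The implicit step is unconditionally stable; the tridiagonal system is
# solved in O(n) with the Thomas algorithm.

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cable_step(V, dt=0.025, dx=10.0, g_a=1.0, g_l=0.001, E_l=-65.0, C_m=1.0):
    """Advance membrane voltage V (mV, one entry per compartment) by dt (ms)."""
    n = len(V)
    k = g_a * dt / (C_m * dx * dx)          # axial coupling coefficient
    a = [-k] * n
    c = [-k] * n
    b = [1.0 + 2.0 * k + g_l * dt / C_m] * n
    # Sealed ends: the boundary compartments have only one axial neighbor.
    b[0] -= k; b[-1] -= k; a[0] = 0.0; c[-1] = 0.0
    d = [v + g_l * dt / C_m * E_l for v in V]
    return thomas(a, b, c, d)

V = [-65.0] * 20
V[0] = 0.0              # depolarize one end of the cable
V = cable_step(V)
print(V[:3])            # the depolarization spreads into neighboring compartments
```

Active channels (Hodgkin-Huxley currents) and branched morphologies add terms and off-tridiagonal structure, which is where Hines-style orderings come in, but the core per-step linear solve is the same shape.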
Sohan@HiSohan·
now we render real neuron morphologies as 3D meshes with voltage propagation mapped directly onto the geometry in real time. (still work tbd) you can see action potentials travel down dendrites. you can see different firing patterns across cell types — regular spiking, bursting, fast-spiking interneurons — all running simultaneously.
1 reply · 0 reposts · 1 like · 148 views

Sohan@HiSohan·
the setup is a grid of biophysically distinct neurons, each with real morphology from published databases, each with independently configurable ion channel parameters. all simulated together. all rendered together on a single GPU.
1 reply · 0 reposts · 0 likes · 132 views

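A grid of cells with independently configurable channel parameters might look something like this. This is purely an illustrative sketch: the class names, fields, and default values are mine, not the project's API.

```python
# Hypothetical data layout for a grid of biophysically distinct cells:
# each cell carries a morphology reference plus its own channel conductances.
from dataclasses import dataclass, field

@dataclass
class ChannelParams:
    g_Na: float = 120.0    # mS/cm^2, textbook HH-style defaults
    g_K: float = 36.0
    g_leak: float = 0.3

@dataclass
class Cell:
    morphology_id: str                      # e.g. a NeuroMorpho.org reconstruction ID
    channels: ChannelParams = field(default_factory=ChannelParams)

# A 3x3 grid; tune one cell toward fast-spiking by raising its K+ conductance.
grid = [[Cell(morphology_id=f"cell_{r}_{c}") for c in range(3)] for r in range(3)]
grid[1][1].channels.g_K = 72.0

print(grid[1][1].channels.g_K, grid[0][0].channels.g_K)
```

For GPU simulation, a structure-of-arrays layout (one flat array per parameter across all cells) is the usual next step, but the per-cell view above is the conceptual model.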
Sohan@HiSohan·
second: the visualization. our old pipeline was built on pygame — a 2D rendering engine. we were manually drawing everything. it worked, but it was never going to scale to what we actually needed. so we rewrote it on top of a proper 3D game engine.
1 reply · 0 reposts · 1 like · 173 views

Sohan@HiSohan·
update on the neurosim project. we've been rewriting everything from the ground up — solvers, visualization, the whole stack. here's where we're at:
1 reply · 4 reposts · 23 likes · 2.6K views

Sohan@HiSohan·
first: the solver rewrite. we now have custom implicit solvers (IMEX splitting + Hines cable equation) running entirely on GPU. what this means in practice — dozens of morphologically-detailed neurons simulated in parallel on a single consumer GPU. full Hodgkin-Huxley channel kinetics. real dendritic trees from published reconstructions.
1 reply · 0 reposts · 0 likes · 161 views
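The IMEX (implicit-explicit) split can be illustrated on a single compartment: step the nonlinear gating kinetics explicitly, then, with the conductances frozen, the membrane equation is linear in V and can be stepped with backward Euler. A hedged sketch with one HH-style K+ gate; constants are textbook-flavored values chosen for illustration, not the project's parameters:

```python
# IMEX splitting on a single compartment: explicit gating, implicit membrane.
import math

def alpha_n(V):
    # Removable singularity at V = -55 mV; use the limit value there.
    if abs(V + 55.0) < 1e-7:
        return 0.1
    return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))

def beta_n(V):
    return 0.125 * math.exp(-(V + 65.0) / 80.0)

def imex_step(V, n, dt=0.025, C_m=1.0, g_K=36.0, E_K=-77.0,
              g_l=0.3, E_l=-54.4, I_ext=10.0):
    # Explicit half: advance the gating variable at the current voltage.
    n = n + dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    # Implicit half: with n frozen, C_m dV/dt = -g (V - E_eff) + I_ext is
    # linear in V, so backward Euler reduces to a scalar solve.
    g = g_K * n**4 + g_l
    E_eff = (g_K * n**4 * E_K + g_l * E_l) / g
    V = (V + dt / C_m * (g * E_eff + I_ext)) / (1.0 + dt / C_m * g)
    return V, n

V, n = -65.0, 0.317
for _ in range(400):            # 10 ms of simulated time under current injection
    V, n = imex_step(V, n)
print(round(V, 2), round(n, 3))
```

The payoff of the split is that the stiff linear part never limits the timestep, while the gating update stays cheap and embarrassingly parallel, which is what makes batching many cells on one GPU practical.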