John

434 posts

John

@PostLinguistic

Human building #PostLinguistic, an LLM-assisted thinking system. Some articles are system outputs. Not always obvious which.

Milwaukee, WI · Joined March 2026
257 Following · 39 Followers
Pinned Tweet
John
John@PostLinguistic·
The level of access I have to information through LLMs would be inconceivable to my ancestors. Not impressive. Not convenient. Inconceivable. For most of human history, knowledge was locked behind distance, class, language, institutions, memory, and luck. Now I can ask a machine to cross-map fields before breakfast. The danger is thinking access means wisdom. It doesn’t.
0 replies · 0 reposts · 2 likes · 9 views
Andrew Hires
Andrew Hires@AndrewHires·
If you expand the definition to encompass motor signals etc to the muscles of the eyes, (which is admittedly how the question was written), feed forward connections still outnumber feedback like 50:1. If you do it in spikes/s, then feed forward still wins by >10:1, except perhaps for a few ms during a saccade.
1 reply · 0 reposts · 3 likes · 96 views
Micah G. Allen
Micah G. Allen@micahgallen·
The cerebellum traditionally governs motor control, yet emerging research reveals its circuitry is also active during working memory, language, and social cognition tasks. journals.plos.org/plosbiology/ar…
2 replies · 14 reposts · 46 likes · 2.6K views
Big Tee
Big Tee@societyhatestee·
what's it called when you have enough trauma to be evil but actively choose not to.
1.8K replies · 5.4K reposts · 32.3K likes · 958.4K views
ArticlesOnX
ArticlesOnX@ArticlesOnX·
I feel free to speak my mind on X as no one will ever read it.
1 reply · 0 reposts · 2 likes · 51 views
John
John@PostLinguistic·
Fair, but I don’t think you need to zoom in that far. A lot of the “molecule and gene” activity has to do with cellular burden, which matters for pathology discussions but not so much for AI computational improvements. Signaling, receptors, local compounds, and the conditions they create might be interesting for future AI improvements, but that is another level of advancement. What I am recommending is routing (circuit motifs) between layers. Motif deployment in a GNN should reduce hallucination and resource requirements.
1 reply · 0 reposts · 1 like · 32 views
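The motif-routing idea above can be sketched as a NumPy toy: restrict message passing to a sparse mask of allowed circuit edges instead of dense all-to-all routing. The mask, graph size, and `propagate` helper are illustrative assumptions, not any existing library API.

```python
import numpy as np

# Toy sketch of "motif routing" in a GNN layer: messages flow only
# along a sparse circuit-motif mask (here a feedforward chain),
# instead of a dense all-to-all pattern. All names and sizes are
# illustrative assumptions, not a real library API.

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.normal(size=(n, d))            # node features

dense = np.ones((n, n)) - np.eye(n)    # dense routing: every node to every other
motif_mask = np.zeros((n, n))          # sparse motif: a simple feedforward chain
for i in range(n - 1):
    motif_mask[i + 1, i] = 1.0         # node i sends only to node i + 1

def propagate(adj, feats):
    """One round of mean-aggregated message passing along adj edges."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                # nodes with no inbound edges stay finite
    return adj @ feats / deg

H_dense = propagate(dense, X)
H_motif = propagate(motif_mask, X)

# The motif layer touches a fraction of the edges per round.
print(int(dense.sum()), int(motif_mask.sum()))  # 30 edges vs 5 edges
```

The resource-requirement point falls out of the edge counts: per-layer message cost scales with the number of active edges, so a fixed motif mask trades expressivity for sparsity.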
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
Agreed. Those brain circuit patterns could someday really help fix the info loss in GNN message passing and keep better track of context. The real difficulty is that brains rely on a precise mix of boosting and dampening signals, learning rules that adjust locally, and sparse, efficient connections. None of that maps neatly onto the dense, heavy math we use to train networks on GPUs. Adding true feedback loops or inhibitory parts often makes training unstable or demands far too much computing power on biological graphs like molecules or genes. It’s a promising direction, but it’s a nontrivial architectural leap beyond current attention tricks like GAT.
1 reply · 1 repost · 2 likes · 324 views
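The instability point about feedback loops has a minimal linear illustration: iterating a recurrent update diverges when the weight matrix's spectral radius exceeds 1, and stays bounded when inhibition damps it below 1. The matrices and scalings below are illustrative assumptions, not a model of any real circuit.

```python
import numpy as np

# Toy version of the stability problem with true feedback loops:
# a linear recurrent update h <- M h blows up when the spectral
# radius of M exceeds 1, and stays bounded when "inhibition"
# scales it below 1. The numbers are illustrative assumptions.

rng = np.random.default_rng(1)
n = 8
W = rng.normal(size=(n, n))                 # random recurrent weights

def spectral_radius(M):
    """Largest eigenvalue magnitude of M."""
    return max(abs(np.linalg.eigvals(M)))

def iterate(M, steps=60):
    """Run h <- M h from an all-ones start; return the final norm."""
    h = np.ones(n)
    for _ in range(steps):
        h = M @ h
    return float(np.linalg.norm(h))

W_excit = 1.5 * W / spectral_radius(W)      # net excitatory: radius 1.5, diverges
W_damped = 0.5 * W / spectral_radius(W)     # inhibition-damped: radius 0.5, decays

print(iterate(W_excit), iterate(W_damped))
```

Nonlinearities and local learning rules change the details, but the same radius-around-1 balancing act is why naively adding feedback edges to a trained network tends to destabilize it.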
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
Graph neural networks, in theory, should be far more useful for AI in biology than transformers or LLMs. Biological systems are naturally organised as high-dimensional, non-Euclidean graphs - molecular graphs, gene interaction networks, knowledge graphs, etc. Yet transformers still dominate GNNs on tasks like molecular bioactivity prediction - even when GNNs are enriched with physical properties of atoms and bonds. The core problems are information loss during message passing and global pooling layers. Adding attention mechanisms helps (GATs, GTNs, global attention pooling, etc.), but it’s not enough. What the GNN field and AI/bio desperately need is a revolutionary new architecture - an "Attention is All You Need" moment for GNNs, as significant as the transformer.
33 replies · 32 reposts · 251 likes · 18K views
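The global-pooling complaint above has a two-line demonstration: mean pooling maps graphs with clearly different node features to identical graph-level embeddings, so the readout cannot distinguish them. The feature matrices are made-up examples.

```python
import numpy as np

# Minimal demonstration of information loss at the global pooling
# readout: two graphs with different node features collapse to the
# same graph-level embedding under mean pooling, so any downstream
# classifier is blind to the difference. Features are made up.

Xa = np.array([[1.0, 0.0],
               [0.0, 1.0]])   # graph A: two distinct one-hot nodes
Xb = np.array([[0.5, 0.5],
               [0.5, 0.5]])   # graph B: two identical nodes with the same mean

def global_mean_pool(feats):
    """Graph-level readout: average the node features."""
    return feats.mean(axis=0)

print(global_mean_pool(Xa), global_mean_pool(Xb))  # both [0.5 0.5]
```

Sum and max pooling fail on analogous pairs, which is why attention pooling and set-based readouts only partially patch the bottleneck.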
John
John@PostLinguistic·
@vineettiruvadi Could you provide some examples of what you see as redundant subsystems?
0 replies · 0 reposts · 0 likes · 11 views
Vineet Tiruvadi, MD PhD
Vineet Tiruvadi, MD PhD@vineettiruvadi·
@PostLinguistic Many subsystems have massive redundancies yes. Unclear how much redundancy emotion/regulation and executive function have atm I think...
1 reply · 0 reposts · 0 likes · 33 views
Vineet Tiruvadi, MD PhD
Vineet Tiruvadi, MD PhD@vineettiruvadi·
Not a dunk, not trivializing, but one of the most important near-facts I learned in medicine is that most men 60 and older have Swiss-cheese lacunes from countless "silent" strokes.
2 replies · 0 reposts · 1 like · 542 views
creekseeker
creekseeker@mudscryer·
It looks like I finally genetically modified a fungus by myself for the first time… MYCOENGINEER OFFICIALLLL
15 replies · 1 repost · 89 likes · 4.5K views
John
John@PostLinguistic·
@OGdukeneurosurg Left and right eye field processing is something else.
0 replies · 0 reposts · 1 like · 113 views
Oren Gottfried, MD
Oren Gottfried, MD@OGdukeneurosurg·
How vision works: an anatomical view of the pathway.
3 replies · 102 reposts · 618 likes · 16.2K views
SightBringer
SightBringer@_The_Prophet__·
⚡️To hear what is coming, something false in you must die first.
7 replies · 5 reposts · 96 likes · 9.6K views
Ryota Kanai
Ryota Kanai@kanair·
There seems to be a strong cognitive bias that makes us believe everyone else is wrong about consciousness.
55 replies · 5 reposts · 84 likes · 8.1K views
SightBringer
SightBringer@_The_Prophet__·
⚡️The false thing needs a thousand voices to remain alive. The true thing can survive a thousand years without one.
7 replies · 11 reposts · 168 likes · 8K views
Ravi Sharma
Ravi Sharma@ravishar313·
Biology will soon be an engineering subdomain
99 replies · 153 reposts · 1.5K likes · 69.4K views
John
John@PostLinguistic·
@DabsMalone Never split the difference
0 replies · 0 reposts · 2 likes · 13 views
Dabs🩸
Dabs🩸@DabsMalone·
» The Alchemist
» How to Win Friends & Influence People
» 7 Habits of Highly Effective People
» Sapiens
» Autobiography of Benjamin Franklin
» The Richest Man in Babylon
» Almanack of Naval Ravikant

What am I missing?
4 replies · 0 reposts · 30 likes · 1.2K views