DHD
@DHDev0
26 posts

RL

Joined December 2022
50 following · 6 followers

Pinned tweet
DHD @DHDev0·
Pioneer ideas with boundless curiosity. Ignite minds toward relentless discovery.
SciTech Era @SciTechera·
Memory Loss Breakthrough

A new study reverses memory loss by reactivating the gut–brain connection and achieving a full cognitive reset. Stanford researchers discovered that age-related decline may start in the gut, not the brain, and can potentially be reversed.

This groundbreaking study revealed that aging gut bacteria can silence the vagus nerve, effectively "switching off" the brain's memory center. Researchers found that specific microbes, particularly Parabacteroides goldsteinii, produce metabolites that trigger intestinal inflammation. This inflammation interferes with vagus-nerve signaling, reducing communication between the gut and brain and weakening activity in the hippocampus, the brain's memory center.

By restoring vagus-nerve activity and correcting the gut microbiome, scientists were able to make the brains of old mice function like those of 2-month-old mice. This "remote control" strategy suggests that memory loss may not be an inevitable brain disease, but a communication failure that can potentially be repaired through the digestive system.
DHD reposted
Jorge Bravo Abad @bravo_abad·
Solving PDEs on photonic quantum computers—without nested gradients

Physics-informed neural networks (PINNs) are powerful tools for solving differential equations, but they hit a wall when you need higher-order derivatives: nested gradient calculations explode in complexity and often compromise accuracy. For quantum implementations—where gradient computation is already challenging—this becomes a serious bottleneck.

Giorgio Panichi, Sebastiano Corli, and Enrico Prati take a different route. Working with continuous-variable quantum computing—where information lives in the quadratures of light rather than discrete qubits—they design a quantum neural network architecture that sidesteps nested differentiation entirely.

The trick: use multiple output modes from the same circuit, training one to approximate the solution and another to approximate its derivative, enforced through a "consistency loss" that keeps them aligned. This lets you compute second-order derivatives (and higher) with just one level of automatic differentiation.

Using Strawberry Fields and TensorFlow, they demonstrate the approach on two classic problems: the 1D Poisson equation (RMSE ~ 10⁻⁴) and the heat equation as a proof-of-concept PDE (RMSE ~ 10⁻²)—matching or slightly outperforming classical PINNs with equivalent parameter counts.

Crucially, they also characterize real photonic hardware—Xanadu's X8 processor—to model optical losses, and show that their variational algorithm naturally compensates for systematic noise through parameter adaptation.

The broader point: photonic quantum computers offer unique advantages for edge deployment—room-temperature operation, immunity to decoherence, portability to satellites or underwater vehicles. By extending QPINNs to handle PDEs without the nested gradient problem, this work opens a path toward quantum-assisted simulation of physical systems in environments where classical hardware struggles.

Future directions? Integrating quantum sensing data to tackle many-body Schrödinger equations that remain classically intractable.

Paper: journals.aps.org/prapplied/abst…
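The consistency-loss trick can be seen without any quantum hardware. In this minimal classical sketch (my own construction, not the paper's implementation), the circuit's two output modes are replaced by two polynomial ansätze, u and v. The PDE u'' = f then only ever touches first derivatives: one set of equations enforces v' = f and another enforces u' = v, exactly the "train one output as the solution, another as its derivative" idea. With a polynomial ansatz the whole system is linear, so a least-squares solve stands in for gradient training:

```python
import numpy as np

d = 11                         # polynomial degree of each surrogate output
n = d + 1
xs = np.linspace(0.0, 1.0, 80)
f = -(np.pi ** 2) * np.sin(np.pi * xs)   # Poisson problem u'' = f, u(0)=u(1)=0

def vand(x, deriv=0):
    """Monomial basis (or its first derivative) evaluated at points x."""
    V = np.zeros((len(x), n))
    for k in range(n):
        if deriv == 0:
            V[:, k] = x ** k
        elif k >= 1:
            V[:, k] = k * x ** (k - 1)
    return V

V0, V1 = vand(xs), vand(xs, deriv=1)
Z = np.zeros_like(V0)
bnd = np.hstack([vand(np.array([0.0, 1.0])), np.zeros((2, n))])

# Stacked least-squares system over coefficients (a, b) of outputs (u, v).
# No second derivative appears anywhere: only V1, the first-derivative basis.
A = np.vstack([
    np.hstack([Z, V1]),       # PDE residual:        v'(x_i) = f(x_i)
    np.hstack([V1, -V0]),     # consistency loss:    u'(x_i) - v(x_i) = 0
    bnd,                      # boundary conditions: u(0) = u(1) = 0
])
rhs = np.concatenate([f, np.zeros(len(xs)), [0.0, 0.0]])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
a = coef[:n]

u = vand(xs) @ a
err = float(np.max(np.abs(u - np.sin(np.pi * xs))))
print(f"max error vs exact solution sin(pi x): {err:.2e}")
```

The exact solution here is u(x) = sin(πx), and the recovered u matches it closely even though u'' was never formed. In the paper's setting, the same structure lets one pass of automatic differentiation through the quantum circuit handle a second-order PDE.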
DHD @DHDev0·
I suspect that a lot of people think the same way.
DHD reposted
Jonathan Gorard @getjonwithit·
Like @davidbessis and others, I think that Hinton is wrong. To explain why, let me tell you a brief story. About a decade ago, in 2017, I developed an automated theorem-proving framework that was ultimately integrated into Mathematica (see: youtube.com/watch?v=mMaid2…) (1/15)
vitrupo @vitrupo·

Geoffrey Hinton says mathematics is a closed system, so AIs can play it like a game. They can pose problems to themselves, test proofs, and learn from what works, without relying on human examples. “I think AI will get much better at mathematics than people, maybe in the next 10 years or so.”

DHD @DHDev0·
Emergent phenomenon ≅ knowledge ≅ intelligence ≅ observer
DHD @DHDev0·
@Teknium It kind of means that if you saturate, it's our sim, or our model's compression/scale, that's narrow: like a signal telling you it's time to self-evolve/improve/increase something. It seems obvious, but it was not straightforward in my mind. Thanks for the thought experiment.
DHD @DHDev0·
@Teknium It would imply a unique Nash equilibrium, when it's more of an oscillator with many exceptions. You can't overfit a fractal at scale.
Teknium (e/λ) @Teknium·
I think the entropy collapse thing with RL was a hoax
God of Prompt @godofprompt·
🚨 DeepMind discovered that neural networks can train for thousands of epochs without learning anything. Then suddenly, in a single epoch, they generalize perfectly.

This phenomenon is called "Grokking". It went from a weird training glitch to a core theory of how models actually learn. Here's what changed (and why this matters now):
DHD @DHDev0·
When you make two different LLMs compete on solving the same problem and have them take turns improving each other's solutions, it kind of mimics exploration. Interesting...
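A toy sketch of this turn-taking idea (every name here is a hypothetical stand-in: the two "solvers" would be LLM calls in practice, and `score` a task-specific evaluator). Two solvers with different biases alternately refine the other's latest candidate, which injects a mild exploration flavor that one solver iterating alone wouldn't have:

```python
import random

random.seed(0)

def score(x):
    # Lower is better; stands in for "how good is this solution".
    return (x - 3.0) ** 2

def solver_a(x):
    # Cautious solver: small local tweaks, keep the best (never regress).
    cands = [x + random.uniform(-0.5, 0.5) for _ in range(8)]
    return min(cands + [x], key=score)

def solver_b(x):
    # Bolder solver: proposes far-away candidates, keep the best.
    cands = [x + random.uniform(-2.0, 2.0) for _ in range(8)]
    return min(cands + [x], key=score)

candidate = 10.0
history = [score(candidate)]
for turn in range(20):
    improver = solver_a if turn % 2 == 0 else solver_b
    candidate = improver(candidate)   # each solver refines the other's output
    history.append(score(candidate))

print(f"start score={history[0]:.3f}  end score={history[-1]:.3f}")
```

Because each solver keeps the incumbent when no proposal beats it, the score is monotone non-increasing, while the alternation between step sizes plays the exploration/exploitation roles.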
DHD @DHDev0·
PS: for the grok reply, use right-click "open image in new tab" or "save image as..." after clicking on it. hf
DHD @DHDev0·
It's probably flawed/naive (these systems build wild complexity) or too niche for most folks (the gotta-eat-and-deadlines problem), but I just wanted to share. Perhaps it was just an attempt to quantify beauty in coding.
DHD @DHDev0·
I am building a graph that executes operations based on traversal paths. It got pretty complex, and I was stuck in that "atomize everything" mindset. (I prefer code that doesn't physically hurt my eyes.) But atomizing leads to exploding permutations of ops. [continue...]
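A minimal sketch of the structure being described (my own guess at the shape, not the author's actual design): nodes hold atomic operations, edges define legal transitions, and "executing" the graph means folding a value through the ops along a chosen traversal path. It also shows why atomizing bites: every distinct valid path over the same small ops is a different program, so the path space grows combinatorially.

```python
class OpGraph:
    def __init__(self):
        self.ops = {}      # node name -> callable operation
        self.edges = {}    # node name -> set of legal successor names

    def add_op(self, name, fn):
        self.ops[name] = fn
        self.edges.setdefault(name, set())

    def connect(self, src, dst):
        self.edges[src].add(dst)

    def run(self, path, value):
        """Execute the ops along `path`, checking each hop is a real edge."""
        for src, dst in zip(path, path[1:]):
            if dst not in self.edges[src]:
                raise ValueError(f"no edge {src} -> {dst}")
        for name in path:
            value = self.ops[name](value)
        return value

g = OpGraph()
g.add_op("double", lambda x: 2 * x)
g.add_op("inc", lambda x: x + 1)
g.add_op("square", lambda x: x * x)
g.connect("double", "inc")
g.connect("inc", "square")
g.connect("double", "square")

# Different traversal paths over the same atomic ops = different programs.
print(g.run(["double", "inc", "square"], 3))   # ((3*2)+1)^2 = 49
print(g.run(["double", "square"], 3))          # (3*2)^2 = 36
```

With only three atomic ops there are already several distinct executable paths; each new op multiplies the count, which is the "exploding permutations" the post mentions.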