
I might be late to the autoresearch party, but I wanted to try a fun experiment applying it to @numerai data.
I built a simple agent loop for the Numerai tournament: propose one change, retrain, evaluate, keep the win, revert the loss.
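The loop above is a simple greedy hill-climb. A minimal runnable sketch, where `evaluate` and `propose_change` are hypothetical stand-ins for the real retrain-and-score step (not from the post):

```python
import random

def evaluate(config, data=None):
    # Placeholder score so the sketch runs; the real loop would
    # retrain the model and compute CORR Sharpe here.
    return sum(config.values())

def propose_change(config, rng):
    # Propose exactly one change to the current config.
    candidate = dict(config)
    key = rng.choice(sorted(candidate))
    candidate[key] += rng.choice([-1, 1])
    return candidate

def run_loop(config, data=None, n_iters=20, seed=0):
    rng = random.Random(seed)
    best, best_score = dict(config), evaluate(config, data)
    for _ in range(n_iters):
        candidate = propose_change(best, rng)   # propose one change
        score = evaluate(candidate, data)       # retrain + evaluate
        if score > best_score:                  # keep the win
            best, best_score = candidate, score
        # otherwise: revert the loss (keep the previous config)
    return best, best_score
```

Because losing proposals are discarded, the tracked score can only improve or stay flat across iterations.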
A few findings that stood out:
• CORR Sharpe went from 0.84 → 1.56
• Moving from the small to the medium feature set was the biggest gain
• Using all eras hurt vs a more selective training window
• A 2-target ensemble captured most of the benefit
• Era-wise ranking of predictions was a bad idea
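For context on the headline metric: CORR Sharpe in Numerai is conventionally the mean of per-era prediction/target correlations divided by their standard deviation. A minimal sketch, assuming you already have one correlation value per era:

```python
import statistics

def corr_sharpe(per_era_corrs):
    # Numerai-style Sharpe: mean of per-era correlations
    # divided by their standard deviation (sample stdev).
    mean = statistics.mean(per_era_corrs)
    std = statistics.stdev(per_era_corrs)
    return mean / std
```

This is why a more selective training window can beat "all eras": trimming eras that add variance without adding mean correlation raises the ratio.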
Used just my Claude Pro account + local training.
MIRAZUL HAQUE@mirazece2015