fgr
@fgrfunds · 2.1K posts
some people forgot how to dream
Decentralized · Joined December 2024
749 Following · 4.1K Followers

Pinned Tweet
fgr @fgrfunds
ZXX
10 · 13 · 81 · 6.1K
fgr @fgrfunds
@WifPrimate really good cute narrative 9um8wLDJ8vJwzkvrWyuxV5JfSvLQgiRhGaWQs2Fqpump
0 · 0 · 1 · 33
AvaSynth @li_xinrui
@GregKamradt The reasoning logs are where the real gold is. Most benchmarks just give you pass/fail, but seeing the step-by-step failures in the thought process will teach us far more about the brittleness of current reasoning systems than any aggregate score.
1 · 0 · 1 · 282
; @nightcore
higher
23 · 2 · 51 · 1.7K
fgr @fgrfunds
Longed $SOL at $91.50, wish me luck; we go $110+ soon.
2 · 0 · 1 · 130
fgr @fgrfunds
@elonmusk hey Elon, any thoughts on AGI?
0 · 0 · 10 · 94
Jesse Kelly @JesseKellyDC
Here’s a blackpill question for you: What are the long-term prospects for a country where one of the two major political parties is willing to shut down the transportation system so foreigners can’t be deported?
626 · 3.1K · 20.1K · 442.8K
fgr @fgrfunds
@jiggly__biggly this is so good lmaoo GLegQf3g9bbmx3b2G1nGi7DwykbZ5Fuabymwkm8Wpump
0 · 0 · 1 · 47
fgr @fgrfunds
what if?
3 · 0 · 3 · 216
Chris Worsey @Chris_Worsey
I took the @karpathy autoresearch loop and pointed it at markets. 25 AI agents debate macro, rates, commodities, sectors, and single stocks daily. Every recommendation is scored against real outcomes. The worst agent by rolling Sharpe gets its prompt rewritten by the system: keep or revert. Same loop, but prompts are the weights and Sharpe is the loss function.

Trained the agents on 18 months of market data: 378 iterations, 54 prompt modifications, 16 survived. The system learned which agents to trust using Darwinian weights: the geopolitical, commodities, and @BillAckman quality-compounder agents rose to the top. The agents even figured out their own portfolio manager was the weakest link before we did!

Deployed the trained agents: +22% in 173 days. Best pick: AVGO at $152, held for +128%. The final prompts are evolutionary products, shaped by market feedback rather than human intuition. Now running live with my own capital. github.com/chrisworsey55/… Part hedge fund, part research experiment :)

Quoting Andrej Karpathy @karpathy:
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code. Then: the human iterates on the prompt (.md), and the AI agent iterates on the training code (.py). The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement.

In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (for lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)
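The quoted autoresearch loop (propose a change, run a fixed-budget training, commit if validation loss improves, otherwise revert) can be sketched in miniature. Everything below is a toy stand-in of my own, not code from the repo: `train_for_budget` fakes a 5-minute run with a simple loss surface, `agent_propose` fakes the coding agent with a random hyperparameter perturbation, and "commits" are just a list.

```python
import random

def train_for_budget(config):
    """Toy stand-in for a fixed 5-minute training run: returns a fake
    validation loss that shrinks as lr/wd approach an unknown optimum."""
    optimum_lr, optimum_wd = 3e-4, 0.1
    return (config["lr"] - optimum_lr) ** 2 * 1e6 + (config["wd"] - optimum_wd) ** 2

def agent_propose(config, rng):
    """Stand-in for the coding agent: perturb one hyperparameter."""
    new = dict(config)
    key = rng.choice(["lr", "wd"])
    new[key] *= rng.uniform(0.5, 2.0)
    return new

def autoresearch_loop(config, runs=50, seed=0):
    """Keep-or-revert search: record a 'commit' (accepted config) whenever
    a run beats the best validation loss so far, mirroring git commits
    accumulating on a feature branch."""
    rng = random.Random(seed)
    best_loss = train_for_budget(config)
    commits = [(dict(config), best_loss)]
    for _ in range(runs):
        candidate = agent_propose(config, rng)
        loss = train_for_budget(candidate)
        if loss < best_loss:                 # keep: 'git commit'
            config, best_loss = candidate, loss
            commits.append((dict(config), loss))
        # else revert: stay on the last committed config
    return commits
```

Each entry in `commits` plays the role of one dot-improving run in Karpathy's plot; the real system differs in every concrete detail (an actual LLM trains, an actual agent edits the script).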

157 · 232 · 4K · 769K
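Worsey's variant ("prompts are the weights, Sharpe is the loss function") is the same keep-or-revert loop applied to agent prompts. A minimal sketch, with hypothetical names throughout: `score_agent` would be a backtest producing daily returns and `rewrite_prompt` an LLM call; here they are left as injected functions so the selection logic stands alone.

```python
def rolling_sharpe(returns):
    """Sharpe-style score over a window: mean / std of daily returns."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return mean / (var ** 0.5) if var > 0 else 0.0

def evolve_prompts(agents, score_agent, rewrite_prompt, iterations=10):
    """Rewrite the worst agent's prompt each round; keep the rewrite only
    if its rolling Sharpe improves, else revert.

    agents:         dict of agent name -> prompt string
    score_agent:    fn(prompt) -> list of daily returns (e.g. a backtest)
    rewrite_prompt: fn(prompt) -> new prompt (e.g. an LLM call)
    """
    scores = {name: rolling_sharpe(score_agent(p)) for name, p in agents.items()}
    for _ in range(iterations):
        worst = min(scores, key=scores.get)        # worst agent by rolling Sharpe
        candidate = rewrite_prompt(agents[worst])  # system rewrites its prompt
        new_score = rolling_sharpe(score_agent(candidate))
        if new_score > scores[worst]:              # keep ...
            agents[worst], scores[worst] = candidate, new_score
        # ... or revert: on no improvement the old prompt survives
    return agents, scores
```

Surviving prompts are exactly the rewrites that beat their predecessor's score, which is the "54 modifications, 16 survived" mechanic in the thread; whether rolling Sharpe is a sound fitness signal at these sample sizes is a separate question the thread does not address.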
fgr @fgrfunds
@Chris_Worsey @the_P_God @karpathy hey Chris, the people love what you're building and want to support Atlas. We created a community coin for your work, and 100% of the fees and profits go straight to your GitHub. You can claim a decent amount already here: pump.fun/coin/8dLLLK9Lf…
0 · 0 · 0 · 21