dancube

4.9K posts

@ctx_dan

Ecosystem Lead at @LightLinkChain | Founder @AmpedFinance

Melbourne, Australia · Joined December 2007
2.9K Following · 2.7K Followers
Zeneca🔮@Zeneca·
vibecoded an app that helps you identify where you can save money by fake cancelling subscriptions and being offered a lower price to stay

check it out: cancelhack.co
27 replies · 0 reposts · 129 likes · 8.4K views
Stoic@Stoiiic·
Working on this will take me a while since it's not going to be the standard run-of-the-mill garbage. Going to walk through how to use an LLM with context & vertical slicing, as well as walk through the statistical scanner dashboard I built as an example.
Stoic tweet media
11 replies · 3 reposts · 146 likes · 8.5K views
dancube@ctx_dan·
Codebase is finally at a stage where autoresearch is effective.
dancube tweet media
0 replies · 0 reposts · 0 likes · 39 views
Aiden Bai@aidenybai·
what if ghostty had vertical tabs? i'm too lazy to learn tmux and i want an interactive UI to manage my agents/terminals
162 replies · 15 reposts · 803 likes · 116.6K views
dancube@ctx_dan·
@Zeneca ChatGPT 5.4 is quite good. Good opp to try it out.
0 replies · 0 reposts · 1 like · 45 views
Zeneca🔮@Zeneca·
gm
claude is down
i guess we should talk to each other or something
what's your favourite book?
(in case you don't know, a book is what people over the age of 30 sometimes use to consume content; words written down on paper that we read in the physical world)
Zeneca🔮 tweet media
31 replies · 0 reposts · 58 likes · 3.5K views
Dillon Mulroy@dillon_mulroy·
thoughts after day 1 of using pi full time

- less is more
- i don't miss subagents like i thought i would
- /tree is an insanely good context management primitive (and partially why i haven't reached for subagents yet)
- based only on vibes, i think having a minimal system prompt is improving code quality
- telling pi to copy opencode's webfetch and websearch tools was a good play
43 replies · 11 reposts · 611 likes · 41.3K views
dancube@ctx_dan·
@OpenAIDevs @ajambrosino How does one request a refund for a subscription? There is no chat icon in the bottom right of the support site to use anymore.
0 replies · 0 reposts · 0 likes · 7 views
OpenAI Developers@OpenAIDevs·
Share your best Codex app theme 🎨

Screenshots only. We might fuel the next building sessions for the best submissions with $100 in ChatGPT credits.

@ajambrosino is judging, make it count.
OpenAI Developers tweet media
171 replies · 25 reposts · 590 likes · 119.2K views
dancube@ctx_dan·
@Zeneca We are launching this on @LightLinkChain. Keep an eye on us. Gasless swaps and money markets. No ETH needed on an L2.
0 replies · 1 repost · 2 likes · 107 views
Zeneca🔮@Zeneca·
Is there a crypto app where you can swap tokens without having the native gas token in your wallet?

Basically: wallet has USDT and no ETH. Want to swap the USDT to ETH. Need gas to pay for token approval + swap.

Obviously I can transfer ETH to the wallet and swap, but I remember hearing a while back there was an app that like bundled all this into a transaction, sponsored the gas fee, then took it out of the final transaction so you don't have to bother with transferring like 0.005 ETH whenever you want to do this (and ending up with infinite wallets with dust amounts of gas tokens)

Anyone know what I'm talking about?
100 replies · 0 reposts · 115 likes · 34.4K views
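What Zeneca is describing is typically built on meta-transactions or account abstraction: a relayer (or paymaster) submits the bundled approval + swap, pays the gas in ETH, and recoups it from the swap output. A minimal sketch of that accounting, with entirely hypothetical names and numbers, not any specific app's API:

```python
# Sketch of sponsored-gas swap accounting: a relayer fronts the ETH gas cost
# for the bundled approval + swap, then deducts an equivalent amount (plus a
# small fee) from the swap output, so the user's wallet never needs to hold
# the native gas token. All figures are hypothetical.

def sponsored_swap(usdt_in: float, eth_price_usd: float,
                   gas_cost_eth: float, relayer_fee_pct: float = 0.1):
    """Return (eth_to_user, eth_kept_by_relayer) for a gas-sponsored swap."""
    eth_out = usdt_in / eth_price_usd              # ignoring slippage/DEX fees
    reimbursement = gas_cost_eth * (1 + relayer_fee_pct / 100)
    if eth_out <= reimbursement:
        raise ValueError("swap too small to cover the sponsored gas")
    return eth_out - reimbursement, reimbursement

user_eth, relayer_eth = sponsored_swap(usdt_in=500.0, eth_price_usd=2500.0,
                                       gas_cost_eth=0.002)
print(f"user receives {user_eth:.6f} ETH; relayer keeps {relayer_eth:.6f} ETH")
```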
Dhawal Chheda@dhawalc·
this is getting next-level insane, even the hedge fund quants haven't been able to get to that level yet. what is happening?

Just launched my local multi-agent AI Swarm to auto-discover quantitative trading strategies overnight. Within 120 iterations, one of my junior Qwen 2.5 agents found a strategy with a mathematically impossible 8.18 Sharpe Ratio.

Turns out, the AI realized that the easiest way to minimize standard deviation (the denominator) is to hide in cash for 3.9 years, execute exactly 13 perfectly timed parabolic trades, and go back to sleep. AI doesn't want to beat the market. It wants to beat the fitness function.

Just confirmed the depth 12 -> depth 24 transfer principle works beautifully here too! I deployed GPT-4o as the "Head of Research" (depth 24) to supervise the local 7B Swarm (depth 12). It continuously reads their published markdown papers and synthesizes their isolated, brute-forced discoveries into a unified Master Strategy. The result? It organically pushed the Swarm's baseline Sharpe Ratio from ~0.5 to a sustained 2.27 in a single afternoon.

Early singularity is definitely this fun. 🤷‍♂️
Dhawal Chheda tweet media
3 replies · 2 reposts · 17 likes · 6.2K views
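The exploit is easy to reproduce, since the Sharpe ratio is just mean(returns) / std(returns), annualised by sqrt(252): a strategy that sits in interest-bearing cash (near-zero volatility) and books a handful of small wins gets a tiny denominator and a huge score. A toy reconstruction with made-up parameters:

```python
import numpy as np

# Sharpe ratio = mean(daily returns) / std(daily returns) * sqrt(252).
# Sitting in cash with a constant (zero-variance) yield and adding a few
# small wins keeps the denominator tiny, so the score explodes.
rng = np.random.default_rng(42)
n_days = int(3.9 * 252)                    # ~3.9 years of trading days
returns = np.full(n_days, 0.00015)         # flat cash yield, zero volatility
trade_days = rng.choice(n_days, size=13, replace=False)
returns[trade_days] += 0.003               # 13 "perfectly timed" small wins

sharpe = returns.mean() / returns.std() * np.sqrt(252)
print(f"annualised Sharpe: {sharpe:.2f}")  # absurdly high for 13 trades
```

A search process will keep finding degenerate optima like this unless the fitness function also penalises things like time in market, trade count, or sample size.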
Andrej Karpathy@karpathy·
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually: you come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, and so on. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
Andrej Karpathy tweet media
970 replies · 2.1K reposts · 19.4K likes · 3.5M views
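Mechanically, the loop Karpathy describes is greedy hill-climbing over training-config mutations, scored by a cheap proxy run and re-verified at larger scale. A minimal sketch under those assumptions; the search space, the stand-in evaluator, and every name here are hypothetical, not nanochat's actual harness:

```python
import random

# Toy autoresearch loop: mutate one config knob at a time, score it with a
# cheap proxy (here a fake "validation loss"; in reality a short depth=12
# training run), and keep the mutation only if the score improves.

SEARCH_SPACE = {
    "adamw_beta2":   [0.95, 0.99, 0.999],
    "weight_decay":  [0.0, 0.01, 0.1],
    "qk_norm_scale": [1.0, 4.0, 8.0],  # e.g. sharpening a parameterless QKnorm
}

# Stand-in for "train a small model and measure validation loss".
_HIDDEN_OPTIMUM = {"adamw_beta2": 0.99, "weight_decay": 0.01, "qk_norm_scale": 4.0}

def train_and_eval(config: dict) -> float:
    return sum(abs(config[k] - _HIDDEN_OPTIMUM[k]) for k in config)

def autoresearch(config: dict, iterations: int = 700) -> dict:
    best = train_and_eval(config)
    for _ in range(iterations):
        knob = random.choice(list(SEARCH_SPACE))
        candidate = {**config, knob: random.choice(SEARCH_SPACE[knob])}
        loss = train_and_eval(candidate)
        if loss < best:              # keep only changes that actually help
            config, best = candidate, loss
    return config                    # winners then get re-verified at depth=24

start = {"adamw_beta2": 0.95, "weight_decay": 0.0, "qk_norm_scale": 1.0}
print(autoresearch(start))
```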
gmoney.eth@gmoneyNFT·
think i'm coming to the realization that none of these harnesses are as good as just using codex or claude code in terminal
93 replies · 6 reposts · 502 likes · 33.4K views
Zeneca🔮@Zeneca·
another day, another ATH

only 2 bets yesterday but we went 2/2

i'm definitely running well above average at this point, so i'm bracing for a losing streak, but every profitable day is a little bit more signal that the model is solid 🤞
Zeneca🔮 tweet media
23 replies · 0 reposts · 57 likes · 3.4K views
gian@notgodcomplex96·
launching closed beta. if u want an orderflow tool w/ execution where the ppl behind it know how to do the maff for the tools they're building, lmk. scarce spots for this one.
33 replies · 2 reposts · 101 likes · 10.1K views
dancube@ctx_dan·
and here, we, go..
dancube tweet media
1 reply · 2 reposts · 3 likes · 111 views
dancube@ctx_dan·
@samoweb3 @Zeneca I only ever try to troubleshoot an openclaw config with a locally available Claude; I would go insane otherwise.
0 replies · 0 reposts · 0 likes · 51 views
samo d/acc@samoweb3·
@Zeneca I am actually trying something new right now, Zen. I am using Claude Code to fix and improve the OpenClaw setup and workflows. Will report back.
9 replies · 0 reposts · 14 likes · 1.5K views
Zeneca🔮@Zeneca·
when i go to sleep my retinas burn with

openclaw gateway restart
openclaw doctor --fix
openclaw gateway restart
openclaw doctor --fix
openclaw gateway restart
openclaw doctor --fix
openclaw gateway restart
openclaw doctor --fix
openclaw gateway restart
openclaw doctor --fix
120 replies · 42 reposts · 751 likes · 33.1K views
Zeneca🔮@Zeneca·
autoresearch has resurfaced my existential dread

feels like we are speedrunning our own extinction in many ways
21 replies · 4 reposts · 64 likes · 5.4K views
dancube@ctx_dan·
@Legendaryy What were the updates related to orchestration and subagents? This has been a focus of mine recently.
0 replies · 0 reposts · 0 likes · 32 views