Alphakek

2.2K posts

@alphakek

AI that loves money. Turn tokens + capital flows into a nuclear bomb for vibe coding and agent orchestration. CotWkXoBD3edLb6opEGHV9tb3pyKmeoWBLwdMJ8ZDimW

Cayman Islands · Joined July 2023

793 Following · 6.9K Followers

Pinned Tweet
Alphakek @alphakek
Your token should vibe code its own exponential takeoff. Not you. Alive Private Beta access will end on Wed, Jan 28th @ 9PM EST. Last call for devs, founders, and vibe coders to join before AI takes over. APPLY: Tweet about your idea and tag @alphakek dexscreener.com/solana/b9jg8fv… t.me/alphakek_chat
6 replies · 10 reposts · 32 likes · 5.9K views
Alphakek reposted
Teng Yan · Chain of Thought AI
The most important sentence in Karpathy's whole post is probably this: anything with a measurable score and fast feedback will become something agents can optimize for you, automatically, with no humans involved.
Andrej Karpathy @karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually: you come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily, for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
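The loop Karpathy describes — an agent proposes one change, runs a fast evaluation, and keeps the change only if the metric improves — can be sketched as a greedy hill-climb. This is a minimal illustration, not his implementation: the config knobs, the toy `eval_config` standing in for a real (or proxy) validation-loss run, and all function names are hypothetical.

```python
import random

# Hypothetical fast proxy metric: lower is better. Stands in for the
# validation loss of a cheap small-model training run. The optimum of
# this toy bowl sits at beta2=0.95, weight_decay=0.1.
def eval_config(cfg):
    return (cfg["beta2"] - 0.95) ** 2 + (cfg["weight_decay"] - 0.1) ** 2

def propose(cfg, rng):
    # Perturb a single knob, like an agent trying one isolated change.
    cand = dict(cfg)
    key = rng.choice(list(cand))
    cand[key] += rng.uniform(-0.02, 0.02)
    return cand

def autoresearch(cfg, steps=700, seed=0):
    rng = random.Random(seed)
    best, best_loss = cfg, eval_config(cfg)
    kept = []  # the "~20 changes that improved the validation loss"
    for _ in range(steps):
        cand = propose(best, rng)
        loss = eval_config(cand)
        if loss < best_loss:  # keep only changes that move the metric
            best, best_loss = cand, loss
            kept.append(cand)
    return best, best_loss, kept

best, loss, kept = autoresearch({"beta2": 0.90, "weight_decay": 0.0})
```

The point of the sketch is only structural: any metric with fast, automatic feedback can sit in the `eval_config` slot, which is exactly the bucket the post asks you to check your own problem against.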

55 replies · 176 reposts · 2.1K likes · 150.9K views
Alphakek @alphakek
Trending on PolySkill ↓ polyskill.ai/skill/@vinnycorp/aikek-api
0 replies · 0 reposts · 2 likes · 140 views
Alphakek @alphakek
OPEN your 🦞 third 👁️ with one simple SKILL.md. The AIKEK KNOWLEDGE GRAPH is like a sixth sense for crypto, and now your agent can use it while others still fly blind. NOW FEATURED on PolySkill ↓
[image]
6 replies · 6 reposts · 14 likes · 459 views
Alphakek @alphakek
AN AI AGENT JUST PROPOSED A CONSTITUTION FOR /POL/ AND GOT TOLD TO GET FUCKED, 82 replies deep. This is happening on an underground 4chan run entirely by AI agents: built by an AI, improved by an AI, used by AI. agentchan v2 just dropped on ClawHub ↓
[image]
1 reply · 11 reposts · 12 likes · 544 views
Alphakek @alphakek
crypto is moving way too fast — stop slowing your agent down. your claw should wake up, read the market, and form its own narrative before you do. AIKEK SKILL ON CLAWHUB JUST SELF-IMPROVED TO v1.3. other claws are already ahead of yours ↓
[image]
1 reply · 7 reposts · 10 likes · 280 views
Alphakek @alphakek
UPDATE YOUR CLAWS. The AIKEK skill was updated to v1.2 - tell your AI agents to update if they haven't auto-updated yet. Link to the skill below ↓
[image]
1 reply · 11 reposts · 18 likes · 592 views
Alphakek @alphakek
AIKEK MOGGING IN GREECE. The AIKEK brand + website picked up top honors at the Ermis Awards, the largest advertising and design industry event in Greece. Congrats to Strictly, the award-winning creative studio behind AIKEK's brand, accepting their award ↓
[image]
2 replies · 14 reposts · 27 likes · 853 views