davidfeiock

10.8K posts


@davidfeiock

wave function collapser | gp @anagramxyz | building @0xfairblock

san francisco, ca · Joined August 2010
2.1K Following · 3K Followers
Noah Schochet
Noah Schochet@noah_schochet·
Hey Jared, I helped build the payload volume for Starship down at Starbase, and now my team and I are building the construction robots that will be the payloads for Starships. My team and I can help you build the next $20B of infrastructure on the moon. Let's chat
NASA Administrator Jared Isaacman@NASAAdmin

To build a sustained human presence on the Moon, we are building @NASAMoonBase, prioritizing surface operations and scalable infrastructure.
- Frequent robotic landings and mobility testing, including MoonFall drones
- Starting in 2027, a nearly monthly cadence of equipment and rovers with scientific payloads landing on the Moon
- Investments in power, communications, and surface mobility
- Scalable infrastructure to support long-term human presence
The objective is clear: build the foundation for an enduring lunar base and take the next step toward Mars.

37 · 66 · 1.3K · 63.3K
davidfeiock reposted
Aomi
Aomi@aomi_labs·
We just built a delta-neutral funding-rate bot. Same strategy quants have been running for years: buy spot, short perp, collect funding yield. But instead of 24 files and 9 modules, it's 7 files and ~200 lines of strategy code. The difference: AI-powered execution.
18 · 10 · 170 · 30.4K
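The carry trade Aomi describes is mechanical enough to sketch. Below is a minimal, hypothetical model of the delta-neutral position (none of this is Aomi's actual code; the class, prices, sizes, and funding rates are invented for illustration): the long-spot and short-perp legs cancel price moves, and the short leg collects funding when the rate is positive.

```python
# Hypothetical sketch of a delta-neutral carry: long spot, short an equal-size
# perpetual, collect the funding paid by longs. Illustrative numbers only.
from dataclasses import dataclass

@dataclass
class CarryPosition:
    size: float          # units of the asset, long spot and short perp
    entry_price: float   # entry price for both legs (assumed equal)

    def price_pnl(self, price: float) -> float:
        # Spot leg gains exactly what the short perp leg loses: net exposure ~0.
        spot = self.size * (price - self.entry_price)
        perp = -self.size * (price - self.entry_price)
        return spot + perp

    def funding_pnl(self, price: float, funding_rates: list[float]) -> float:
        # Shorts receive funding when the rate is positive (perp trades rich).
        notional = self.size * price
        return sum(notional * r for r in funding_rates)

pos = CarryPosition(size=2.0, entry_price=50_000.0)
print(pos.price_pnl(55_000.0))                  # 0.0: delta neutral
print(pos.funding_pnl(50_000.0, [0.0001] * 3))  # ~30.0 over three intervals
```

In practice the real complexity lives outside this math: execution on two venues, basis risk at entry and exit, and funding flipping negative.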
Paolo Ardoino 🤖
Paolo Ardoino 🤖@paoloardoino·
Ultimate UX. Nothing else matters.
[image]
68 · 18 · 522 · 44.3K
Skyler Chan
Skyler Chan@skyler_chan_·
Welcome aboard the rocketship!
Taro Fukuyama@taro_f

🎉 New Investment: GRU Space (@gru_space) 🎉

I'm excited to share that I'm investing in GRU Space alongside Y Combinator. GRU Space is building the first "Moon Factory": an autonomous system that enables on-site construction directly on the lunar surface. Instead of transporting materials from Earth at prohibitive cost, their system mines lunar soil (regolith), refines it through a geopolymer process, and converts it into structural bricks to build infrastructure on the Moon. Congrats @skyler_chan_!

2 · 3 · 47 · 4.3K
davidfeiock
davidfeiock@davidfeiock·
Ilan Komargodski@komargodski

Crypto is not just a ledger of transactions; it's a ledger of truth and trust. That's what makes Bitcoin valuable. For Bitcoin this requires burning energy on random number guessing. What if we could instead leverage massive, real-world computation to achieve the same level of security? This has been an outstanding open problem for more than 30 years in academia, and since the emergence of blockchains in industry.

Last year, we proposed the first solution (eprint.iacr.org/2025/685). Our mathematical breakthrough suggests piggy-backing on matrix multiplications, the native operation of the GPUs that power the AI revolution, from pre-training and post-training to inference. The potential applications are endless: improving the unit economics of LLMs, shifting AI-generated wealth back to users, and enabling new primitives such as settlement and even UBI systems for AI agents.

Since then, we've worked hard turning the math into a fully operational system, from the algebra and CUDA kernels to a working L1 blockchain and a production LLM inference pipeline implementing this "2-for-1" technology. Today, we're excited to share that @prlnet is ready and will soon enable serving SoTA LLMs while mining the blockchain at negligible additional cost.

Along the way, we encountered many fascinating challenges. We're now publishing them as a collaborative Polymath challenge, spanning open questions in math, systems, and economics. If you're interested, take a look and feel free to reach out: pearlpolymath.com. #PRL #AIMoney

0 · 0 · 1 · 624
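The pitch above rests on matrix multiplication being expensive to compute but cheap to check. The textbook illustration of that asymmetry is Freivalds' algorithm, which verifies a claimed product C = A·B in O(n²) randomized time per round versus O(n³) to recompute it. This is not the PRL construction, just a minimal sketch of the underlying compute-versus-verify gap:

```python
# Freivalds' algorithm: probabilistically verify a claimed product C = A @ B
# in O(n^2) time per round, versus O(n^3) to recompute the product outright.
import random

def freivalds(A, B, C, rounds: int = 20) -> bool:
    """Return True iff C == A @ B, with error probability <= 2**-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A @ (B @ r) and C @ r, each in O(n^2) operations.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely not the product
    return True  # the product, with overwhelming probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]   # the correct product
print(freivalds(A, B, C))  # True
print(freivalds(A, B, [[0, 0], [0, 0]]))  # False, with overwhelming probability
```

A proof-of-useful-work design needs much more than this (fresh puzzles, binding work to blocks), but the cheap-verification step is what makes the idea plausible at all.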
Wei Dai
Wei Dai@_weidai·
Is it possible to build "proof-of-useful-work" on top of autoresearch? There's already great compute-versus-verification asymmetry that is tunable. It would need a reliable way to generate fresh, independent puzzles (that are still useful). Maybe a dead end, but someone should look into whether decentralized consensus with useful work is possible on top of autoresearch. Let me know if you solve this.
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well manually tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily, for two decades now. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approximately 700 changes autonomously is wild. It really looked at the sequence of experimental results and used it to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger findings:
- It noticed an oversight that my parameterless QKNorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. More generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.

43 · 10 · 148 · 581.5K
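The loop Karpathy describes (propose a change, run an experiment, keep the change only if validation loss improves) reduces to greedy iterative search. Here is a deliberately stubbed-out sketch: `evaluate` stands in for an expensive training run, and the toy quadratic objective, config keys, and step count are all invented so the sketch runs end to end; none of this is nanochat's or autoresearch's actual code.

```python
# Hypothetical sketch of a greedy autoresearch-style tuning loop: propose a
# config change, measure "validation loss", keep the change only if it helps.
import random

def evaluate(cfg: dict) -> float:
    # Stand-in for "train and measure validation loss".
    # Toy optimum at lr=0.02, wd=0.1, invented for this sketch.
    return (cfg["lr"] - 0.02) ** 2 + (cfg["wd"] - 0.1) ** 2

def propose(cfg: dict, rng: random.Random) -> dict:
    # Perturb one hyperparameter at a time, like one "change" per experiment.
    new = dict(cfg)
    key = rng.choice(list(new))
    new[key] *= rng.uniform(0.5, 2.0)
    return new

def autoresearch(cfg: dict, steps: int = 700, seed: int = 0):
    rng = random.Random(seed)
    best_loss = evaluate(cfg)
    for _ in range(steps):            # ~700 autonomous changes, as in the post
        candidate = propose(cfg, rng)
        loss = evaluate(candidate)
        if loss < best_loss:          # greedy: keep only strict improvements
            cfg, best_loss = candidate, loss
    return cfg, best_loss

cfg, loss = autoresearch({"lr": 0.1, "wd": 0.01})
print(loss)  # well below the starting loss of 0.0145
```

The real systems are far more sophisticated (planning over experiment histories, transfer across model scales, agent swarms), but the keep-if-better skeleton is the same.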
Vast
Vast@vast·
Vast is developing Control Moment Gyroscopes (CMGs) in-house to provide attitude control for Haven-1 and future stations. Our CMGs are air-cooled; we will test the cooling ducts in our full-scale life support testing module to verify nominal temperatures are maintained.
[image]
9 · 66 · 564 · 22.7K
Christian Keil
Christian Keil@pronounced_kyle·
Cigar Night is coming to SF! Please register your interest with me or @zachglabman; we will choose a date and location once the city's top cigar-smoking talent has been assembled
[image]
29 · 1 · 96 · 19.1K
davidfeiock reposted
Fairblock
Fairblock@0xfairblock·
Congrats to the @tempo team on the mainnet launch! Fairblock enabled confidential stablecoin payments on the Tempo testnet from day one, and we're excited to bring confidential payroll, corporate treasury, merchant checkouts, and B2B payments to mainnet.
Tempo@tempo

x.com/i/article/2034…

86 · 43 · 199 · 20.9K
davidfeiock
davidfeiock@davidfeiock·
Let’s goooo
Ilan Komargodski@komargodski

(same @komargodski announcement quoted in full above)

0 · 0 · 4 · 613
davidfeiock
davidfeiock@davidfeiock·
Gosh, people take themselves way too seriously.
2 · 0 · 4 · 231
Ryan | Loading... 🇺🇸
Ryan | Loading... 🇺🇸@ryanconnor·
Personal Update: This is my last week at Blockworks. What @JasonYanowitz & @MikeIppolito_ have built is truly exceptional - high talent density, high work ethic, *great* people. I’m gonna miss this team, & I’m grateful for everyone here that made this chapter special. I’m just lucky to have been a part of it. There is no doubt in my mind Blockworks continues to grow and dominate data, research, & podcasts. This team SHIPS like no other. @smyyguy, @EffortCapital, @0xMether, @GoldDefi, @fejau_inc and the rest of the team are just getting started. Excited for what’s ahead. On to the next thing. More soon.
24 · 0 · 242 · 16.9K