Alan
3.1K posts

@0imalan
curr: paxmod. prev: prod and eng @ amazon, anduril, lyft, toyota research, DoD. I like robots and aerodynamics.



I understand the calculations lol

This took 24 minutes. Back when I did 12 startups in 12 months in 2014, it took me at least a month to make a new startup, and I didn't even finish it. And they were in fact about as basic as this, because they were MVPs too! If I ship every day and sleep 8 hours, I have 24-8=16 hours * 60 minutes = 960 minutes per day. And it takes 24 minutes per startup with AI to build an MVP, so theoretically you can build 40 ideas per day. If you work 6 days a week, that's about 1,000 per month and about 12,000 per year. So a decade later, you can now build 12,000 startups in 12 months
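The tweet's arithmetic can be checked in a few lines; all the inputs (8 hours of sleep, 24 minutes per MVP, a 6-day week) are the post's own assumptions, not measured numbers:

```python
# Back-of-the-envelope check of the tweet's startup math.
hours_awake = 24 - 8                # sleep 8 hours
minutes_per_day = hours_awake * 60  # 960 working minutes
minutes_per_mvp = 24                # claimed AI build time per MVP

mvps_per_day = minutes_per_day // minutes_per_mvp   # 40
mvps_per_week = mvps_per_day * 6                    # 6-day work week
mvps_per_year = mvps_per_week * 52                  # ~12,000, as claimed

print(mvps_per_day, mvps_per_week, mvps_per_year)
```

The yearly figure comes out to 12,480, which rounds to the tweet's "about 12,000 per year" (and roughly 1,000 per month).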

Can we please not do this? I want as little JavaScript on the websites I use as possible. The best designed and engineered website in the world is mobile Wikipedia. Everyone should emulate it


Probability of me getting a fair trial if this is how the judge dresses is 0.0%

Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2



@LottoLabs @max_paperclips I want to see numbers for tinygrad vs llama.cpp

GStack is your personal AI coding toolkit. I'm dropping multiple new features per day right now


The Terence Tao episode.

We begin with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion. People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops. But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long. During this time, what we know today as the better theory can often actually make worse predictions (Copernicus's model of circular orbits around the sun was actually less accurate than Ptolemy's geocentric model). And the reason it survives this epistemic hell is some mixture of judgment and heuristics that we don't even understand well enough to articulate, much less codify into an RL loop. Hope you enjoy!

0:00:00 – Kepler was a high temperature LLM
0:11:44 – How would we know if there's a new unifying concept within heaps of AI slop?
0:26:10 – The deductive overhang
0:30:31 – Selection bias in reported AI discoveries
0:46:43 – AI makes papers richer and broader, but not deeper
0:53:00 – If AI solves a problem, can humans get understanding out of it?
0:59:20 – We need a semi-formal language for the way that scientists actually talk to each other
1:09:48 – How Terry uses his time
1:17:05 – Human-AI hybrids will dominate math for a lot longer

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, or Spotify.



Today we're introducing the world's first AI CMO. Enter your website and it deploys a team of agents to help you get traffic and users. Try it now at okara.ai/cmo
