
Justin Mart
@j_mart
prev: Corp Dev + Ventures @coinbase, Eng & Data @coinbase, Nuclear Eng @ch2mhill


AI is deeply unpopular. According to Pew, sadly only 17% of Americans think AI will have a positive impact. In China, 83% believe AI will be positive. A token tax & political backlash is coming unless the narrative changes. 🇺🇸👀🧐



head of anthropic’s safeguards research just quit and said “the world is in peril” and that he’s moving to the UK to write poetry and “become invisible”. other safety researchers and senior staff left over the last 2 weeks as well... probably nothing.




My district is $18 trillion, nearly 1/3 of the US stock market in a 50-mile radius. We have 5 companies with market caps over a trillion dollars. If I can stand up for a billionaire tax, this is not a hard position for 434 other members or 100 Senators.

Those saying that we wouldn't have a future NVIDIA in the Bay if this tax goes into effect are glossing over Silicon Valley history. Jensen was at LSI Logic and his co-founders at Sun. He started NVIDIA in my district because of the semiconductor talent, Stanford, innovation networks, and venture funding. We have 37 times the VC money of Austin given the innovation ecosystem, and Florida isn't even on the map. Jensen wasn't thinking "I won't start this company because I may one day have to pay a 1 percent tax on my billions." He built here because the talent is here.

AI was created with our tax dollars. ImageNet, a visual database, was created by Fei-Fei Li at Stanford using NSF money. Hinton presented his famous paper at an ImageNet conference. The seminal innovation in tech is done by thousands, often with public funds. NSF, DARPA, Stanford, Berkeley, San Jose State, Santa Clara, and the UCs are the foundation of what has made Silicon Valley a powerhouse. It's why the UC system won 5 Nobel Prizes this year.

Yes, we need entrepreneurs to commercialize disruptive innovation. Stanford blazed a trail in licensing technology and partnering with the private sector. The university enabled companies like Google, which began as a research project called BackRub, looking at back links to rank pages. And entrepreneurs like Brin & Page reap huge rewards when they succeed. But the idea that they would not start companies to make billions, or take advantage of an innovation cluster, if there is a 1-2 percent tax on their staggering wealth defies common sense and economic theory @paulkrugman @DAcemogluMIT @baselinescene.

We cannot have a nation with extreme concentration of wealth in a few places while 70 percent of Americans believe the American dream is dead and healthcare, childcare, housing, and education are unaffordable. What will stifle American innovation, what will make us fall behind China, is further political dysfunction and social unrest, and a failure to cultivate the talent in every American and in every city and town. The industrial revolution saw soaring inequality in Britain for nearly 60 years. On the continent, it led to worker uprisings in France (1848) and contributed to revolution in Russia (1917). America's central challenge is to make sure the AI revolution works for all of us, not just tech billionaires. So yes, a billionaire tax is good for American innovation, which depends on a strong and thriving American democracy.




The @ilyasut episode

0:00:00 – Explaining model jaggedness
0:09:39 – Emotions and value functions
0:18:49 – What are we scaling?
0:25:13 – Why humans generalize better than models
0:35:45 – Straight-shotting superintelligence
0:46:47 – SSI's model will learn from deployment
0:55:07 – Alignment
1:18:13 – "We are squarely an age of research company"
1:29:23 – Self-play and multi-agent
1:32:42 – Research taste

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, or Spotify. Enjoy!



I am pleasantly surprised by Ilya. He has identified some key aspects of intelligence that are largely absent from the popular AI discourse. These are:

1. Intelligence is about the ability to learn, not about knowing many things. The right goal is a system that can learn from experience in deployment.
2. A value function is needed for human-like sample-efficient learning. It can provide dense feedback (TD learning) in the absence of reward.

Both of these are essential and doable. A key bottleneck is that we don't have algorithms that can learn reliably using similar amounts of compute as inference. Such algorithms are needed if we are to learn continually. I think we are close; we just don't have enough people working on finding these algorithms.

I am also glad that Ilya acknowledged that to make progress we need more ideas, not just more compute. I would predict that the key algorithmic improvements can be made with a relatively small amount of compute. A handful of GPUs with many CUDA cores (5090s or better) per person, or a couple of state-of-the-art multicore CPUs (9995WX or better) per person, are enough to find the right algorithm. Large-scale demonstrations would only be important to convince the rest of the world that you have found the right recipe for learning.

*Tensor Cores are not flexible enough for trying new ideas quickly.
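Point 2 above can be made concrete with a toy sketch. The following is a minimal TD(0) example on a random-walk chain; the environment, the function name, and all parameters are illustrative assumptions, not anything from the post. It shows the "dense feedback" idea: each state's value estimate is nudged toward the immediate reward plus the next state's estimated value, so the learner gets a signal on every transition, even when the reward itself is zero.

```python
# Minimal TD(0) sketch: a value function provides a dense learning signal
# at every step, not only when a terminal reward arrives.
# The random-walk chain and all names here are illustrative assumptions.
import random

def td0_chain(n_states=5, episodes=5000, alpha=0.05, gamma=1.0, seed=0):
    """Random-walk chain: start in the middle, step left/right at random,
    reward 1 only on reaching the right end. TD(0) learns V(s) ~= s/6."""
    rng = random.Random(seed)
    V = [0.0] * (n_states + 2)       # states 0..n+1; 0 and n+1 are terminal
    for _ in range(episodes):
        s = (n_states + 1) // 2      # start in the middle of the chain
        while 0 < s < n_states + 1:  # until a terminal state is reached
            s2 = s + (1 if rng.random() < 0.5 else -1)
            r = 1.0 if s2 == n_states + 1 else 0.0
            # TD error r + gamma*V[s2] - V[s]: feedback on every transition,
            # even though r is 0 everywhere except the final step
            V[s] += alpha * (r + gamma * V[s2] - V[s])
            s = s2
    return V[1:-1]                   # learned values for states 1..n
```

For this chain the true values are 1/6, 2/6, ..., 5/6, and the estimates approach them; the per-step bootstrapped update is what makes this sample-efficient compared with waiting for the episode-end reward alone.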
