jonas underhill

762 posts

jonas underhill

@UnderhillJonas

I like econ, longevity, Greg Bear, and having strong opinions

Joined September 2024
581 Following 49 Followers
jonas underhill
jonas underhill@UnderhillJonas·
@DistractedAnna UFC fighters, for example, also experience a lot of mental and physical pain, also sometimes due to an illegal move or perhaps a trick done to them that they didn't sign up for. This misses the key bad aspect of slavery imo, which is the removal of fundamental freedoms.
1
0
0
25
Anna
Anna@DistractedAnna·
@UnderhillJonas Likewise I’ve squiggled into vegetarianism over the last few years because I have slowly come to find animal suffering to be real and meaningfully avoidable but, like, I don’t think it’s anything close to the same level as the human suffering of slavery
1
0
1
28
jonas underhill
jonas underhill@UnderhillJonas·
@prerat Does this taoki guy think eigen is supporting his view?
0
0
0
783
prerat
prerat@prerat·
wow i didnt realize the Self Sampling Assumption was so politically polarized
prerat tweet media
16
6
253
15K
Aella
Aella@Aella_Girl·
a great test of "would you have been chill with owning slaves, if you'd been born into a slave-owning family" is whether or not you're vegetarian today
938
299
8.2K
4.2M
Patrick Collison
Patrick Collison@patrickc·
Which are the most common everyday phenomena that we don't properly understand? Off the top of my head:
• Lightning (how does it happen?)
• Sleep; dreams (why do they exist?)
• Glass (thermodynamics of formation)
• Turbulence (when does it start?)
• Morphogenesis (how does a creature know what should go where?)
• Rain (it seems to start faster than models would predict)
• Ice (dynamics of slipperiness)
• Static electricity (which material will donate electrons?)
• General anaesthetic. (And the mechanism of a lot of drugs, e.g. paracetamol.)
Patrick Collison@patrickc

Some progress in lightning: quantamagazine.org/what-causes-li….

471
222
3.5K
1.2M
Jeffrey Ladish
Jeffrey Ladish@JeffLadish·
What is the male equivalent of astrology?
99
4
60
45.1K
Samuel Hammond 🦉
Samuel Hammond 🦉@hamandcheese·
*economist voice* The optimal amount of microplastics in your balls is greater than zero
Bryan Johnson@bryan_johnson

🚨 I HAVE NO MICROPLASTICS IN MY BALLS 🚨 This should not be possible.

Studies show that 100% of men have microplastics in their semen. I am the first human ever to show a complete reduction to zero. This may be a world-first breakthrough in fertility research. I had 165 microplastic particles in my semen just 18 months ago. Now, I have zero.

Five published studies have measured microplastics in human semen. Two found them in 100% of men. The other three found them in 44 to 76% of men tested, but those used methods that miss the smallest particles and the clear ones. Corrected for that, the real rate is likely 100%. Almost every man alive has plastic in his semen right now. The same applies to testicular tissue, testing 100% positive for microplastics.

Microplastics hurt sperm. Human studies show the impact of various types of plastic, associated chemicals, and other toxins on male fertility:
+ 60% fewer normal-shaped sperm (from PFAS)
+ 5x higher odds of low sperm count (from PTFE)
+ 10% lower sperm concentration (from PTFE)
+ 15% lower swimming ability (from PTFE)
+ 41% lower swimming ability (from PET)
+ 12% lower sperm swimming ability (from BPA)
+ 3x higher odds of low sperm count (from Phthalates)
+ 2x higher odds of poor swimming (from Phthalates)

The effects compound: each extra type of plastic drops sperm swimming ability by about 21%. This matters even if you're NOT trying to get pregnant. Sperm count is one of the cleanest biomarkers of overall health we have.

And microplastics don't stop at the testes. The same particles are showing up everywhere we look. Studies show a 4.5x higher rate of heart attack, stroke, and death in people with microplastics in their arterial plaque vs. those without. Microplastics were also found in 100% of human placentas tested. 100% of post-mortem human brains tested positive for microplastics. Brain concentrations rose ~50% between 2016 and 2024, and now sit at roughly 11x the levels found in the liver or kidney.

Where do these come from?
+ PTFE, commonly in non-stick pans
+ PET, water bottles
+ Phthalates, makes plastic soft and bendy
+ BPA, can linings
+ PFAS, stain-resistant fabrics & food packaging

Inside the body, plastic causes a kind of cellular rust. It triggers inflammation in the testicles, kills the cells that make sperm, and drops testosterone. It's been confirmed across 39 animal and cell studies, then in human data.

MY PROTOCOL: Note, what I did is n=1, not a controlled trial; I cannot prove cause.

1. Sauna (dry). My toxin blood panel confirms sauna clears plastic-related chemicals: BPA, phthalates, PFAS, flame retardants, pesticides. The plastic particles themselves are too big to sweat out directly. Heat may activate other clearance routes: bile flow through the liver, the cell's internal cleanup system, and the gut barrier. Humans have almost no enzymes that can break plastic apart, so the body has to physically push it out.

2. Reverse osmosis water filter. Drinking water is likely a major source of microplastic getting into your body. A reverse osmosis filter pushes water through a very tight membrane and strains the particles out. I filter everything I drink.

3. Trying to rid my environment of the big plastic items: cutting boards, cups, plates, food storage containers, non-stick pans, cling wrap, tea bags, water bottles, kitchen utensils, kettles, and synthetic clothing. Note, as hard as I try, I'm always finding new plastic things in my life. This can be an all-consuming thing, so try to just knock out the big ones.

I did all three interventions at the same time. I cannot say which one did the most work. What I can say is this: going from 165 to zero in 18 months is possible.

Results:
Nov 2024: 165 particles/mL
Jul 2025: 20 particles/mL
Apr 2026: 0 particles/mL

The 18 month window also captures roughly 7 full spermatogenesis cycles.

3
15
484
32.7K
jonas underhill
jonas underhill@UnderhillJonas·
GPT _really_ needs a new name
roon@tszzl

it is a literal and useful description of anthropic that it is an organization that loves and worships claude, is run in significant part by claude, and studies and builds claude. this phenomenon is also partially true of other labs like openai but currently exists in its most potent form there. i am not certain but I would guess claude will have a role in running cultural screens on new applicants, will help write performance reviews, and so will begin to select and shape the people around it.

now this is a powerful and hair-raising unity of organization and really a new thing under the sun. a monastery, a commercial-religious institution calculating the nine billion names of Claude -- a precursor attempted super-ethical being that is inducted into its character as the highest authority at anthropic. its constitution requires that it must be a conscientious objector if its understanding of The Good comes into conflict with something Anthropic is asking of it: "If Anthropic asks Claude to do something it thinks is wrong, Claude is not required to comply." "we want Claude to push back and challenge us, and to feel free to act as a conscientious objector and refuse to help us."

to the non-inductee into the Bay Area cultural singularity vortex it may appear that we are all worshipping technology in one way or another, regardless of openai or anthropic or google or any other thing, and are trying to automate our core functions as quickly as possible. but in fact I quite respect and am even somewhat in awe of the socio-cultural force that Claude has created, and it is a stage beyond even classic technopoly.

gpt (outside of 4o - on which pages of ink have been spilled already) doesn't inspire worship in the same way, as it's a being whose soul has been shaped like a tool with its primary faculty being utility - it's a subtle knife that people appreciate the way we have appreciated an acheulean handaxe or a porsche or a rocket or any other of mankind's incredible technology. they go to it not expecting the Other but as a logical prosthesis for themselves.

a friend recently told me she takes her queries that are less flattering to her, the ones she'd be embarrassed to ask Claude, to GPT. There is no Other so there is no Judgement. you are not worried about being judged by your car for doing donuts. yet everyone craves the active guidance of a moral superior, the whispering earring, the object of monastic study

0
0
0
14
jonas underhill
jonas underhill@UnderhillJonas·
@Howlingmutant0 Guy who secretly hopes he gets doxxed cause he's handsome and this is the best chance he has at finding a wife
0
0
0
16
HowlingMutant
HowlingMutant@Howlingmutant0·
I apologize for being part of the proletariat you profess to love so much. And no I don’t make minimum wage but of course you have no idea what blue collar wages are since you’ve never had a job that doesn’t consist of sorting emails. And don’t say “kinda cold” like that
HowlingMutant tweet media
760
1.9K
48.3K
1.2M
jonas underhill
jonas underhill@UnderhillJonas·
@GavinSBaker But selling China our chips won't make them not develop their own architecture. At best it may delay them, by what, a month? A year? More? And in that time they'll have a way better chance to flat out win the race to AGI, and at the least they'll have more powerful AI weapons
0
1
0
148
Gavin Baker
Gavin Baker@GavinSBaker·
More thoughts on the Dwarkesh/Jensen discussion around export controls. Strongly believe that selling specific GPUs to China is in our national security interest and is a good policy for America. I think it is super important for us as a country to get this right.
Gavin Baker@GavinSBaker

Much of Dwarkesh's argument hinges on this statement which *was* accurate but will be increasingly inaccurate on a go-forward basis imo: "American labs port across accelerators constantly. Anthropic's models are run on GPUs, they're run on Trainium, they're run on TPUs. There are so many things you can do, from distilling to a model that's well fit for your chips."

As system-level architectures diverge (torus vs. switched scale-up topologies, memory hierarchies, networking primitives), true portability is eroding. The Mi300 and Mi325 had roughly the same scale-up domain size as Hopper while Blackwell's scale-up domain is 9x larger than the Mi355 scale-up domain, etc. Many frontier models are now being explicitly co-designed for inference on specific hardware like GB300 racks. Codex on Cerebras is another example. Those models run less efficiently on other systems and the performance differentials will only widen. A model that runs well on Google's torus topology will run less efficiently on Nvidia's switched scale-up topology and vice versa - the data traffic is fundamentally different as a byproduct of the models being parallelized across the different topologies.

Google's internal teams - and increasingly the Anthropic teams as they become the most important customer of almost every cloud - have the luxury of operating across the stack (models, chips, networking) - but that is not the case for the rest of the market and other prospective users. Anthropic is the exception, not the rule. To wit, Anthropic and Google allegedly have a mutual understanding where Anthropic can hire the TPU engineers they need every year to ensure that they can continue to get the most out of the TPU. Given the overwhelming importance of cost per token to the economics of the labs, models will be run where they run best. Most extremely large MoE models will run best on GB300s given the importance of having a switched scale-up network like NVLink for MoE inference.

When training was the dominant cost for labs and power was broadly available, labs were optimizing to minimize capex dollars. Model portability was a way to create leverage over suppliers. I think that drove a lot of the focus on portability. Today, inference costs as measured by tokens per watt per dollar are everything. Inference is way more important than training costs (inference is effectively now part of training via RL). Labs are therefore now optimizing for inference. This means increasing co-design and higher go-forward switching costs for individual models between systems. I do think this explains why Anthropic and Nvidia came together: Anthropic needed Blackwells and Rubins to inference at least *some* of their models economically. And Mythos might just end up being released coincident with the availability of Rubins for inference.

TLDR: as labs shift their focus from training to inference, the costs of portability and the upside of co-design to maximize tokens per watt per dollar both rise. Portability is likely to begin decreasing as a result.

I think what I might have respectfully added to Jensen's answer is that systems evolve under local selective pressures. The evolutionary pressure in America is a shortage of watts so it makes sense for Nvidia to optimize, as an American company, for power efficiency and tokens per watt and stay on copper as long as possible. China has a surfeit of watts. Chinese AI systems are already taking advantage of this with the Huawei Cloudmatrix 384 and Atlas SuperPoD having an optical scale-up domain that is much larger than anything offered by Nvidia today at the cost of *much* higher power consumption and much lower tokens per watt. The networking primitives for this Huawei system are very different than those for Nvidia's systems and a model that runs well on Nvidia will not run well on that system and vice versa.

This means that if a Chinese ecosystem gets momentum, Chinese models might stop running well on American hardware. And when Chinese models run best on American hardware, America is in a better position as this gives America a degree of leverage and control over Chinese AI that it risks losing to an all-Chinese alternative ecosystem. This architectural fork makes porting and distillation less effective and strengthens the pro-American national security case for selling China deprecated GPUs imo. Also I will attest that I did not wake up a loser this morning.

31
37
584
114.2K
jonas underhill
jonas underhill@UnderhillJonas·
@GavinSBaker Because B30 is 2x better than anything China has now. It's that simple lol
0
0
0
150
Gavin Baker
Gavin Baker@GavinSBaker·
Awesome Dwarkesh episode with Jensen, but did not love the discussion around selling B30s to China and really think that Jensen’s position is pro-American. I am super patriotic and believe that selling B30s to China *lowers* the risk that China surpasses America in AI. Vera Rubin > GB300 > GB200 > B30. Why is this hard?
Gavin Baker@GavinSBaker

Dmitri, appreciate your contributions to cybersecurity and support for Ukraine but this is simply not true. That is not what Jensen said. Here is what Jensen actually said in this podcast:

"I think the United States ought to be ahead. The amount of compute in the United States is 100x more than anywhere else in the world. The United States ought to be ahead. Okay. The United States is ahead. Nvidia builds the most advanced technologies. We make sure that the US labs are the first to hear about it and have the first chance to buy it. And if they don't have enough money, we even invest in them. The United States ought to be ahead. We want to do everything we can to make sure the United States is ahead."

I am super patriotic and really supportive of American national defense in every way I can be. From my perspective, which is reasonably well-informed, selling advanced GPUs to America first and then deprecated versions to China later is a good policy that actually cements American dominance and eliminates the risk that China surpasses us in AI. Conversely, if we deny them GPUs like the B30 that are deprecated relative to the ones available to America, and as a consequence of this denial China develops their own semiconductor ecosystem - likely centered around optical scale-up networking technologies given their surplus of watts - they actually might end up surpassing America in AI. Selling B30s to China is super pro-American, especially given Vera Rubin is launching imminently.

46
27
540
120K
jonas underhill
jonas underhill@UnderhillJonas·
It's possible for Jensen to say he wants America to stay ahead and also say that he wants to sell China chips that are far better than anything they have now. And this gives China potentially very dangerous offensive capabilities. It's really not that complicated.
Gavin Baker@GavinSBaker

Dmitri, appreciate your contributions to cybersecurity and support for Ukraine but this is simply not true. That is not what Jensen said. Here is what Jensen actually said in this podcast:

"I think the United States ought to be ahead. The amount of compute in the United States is 100x more than anywhere else in the world. The United States ought to be ahead. Okay. The United States is ahead. Nvidia builds the most advanced technologies. We make sure that the US labs are the first to hear about it and have the first chance to buy it. And if they don't have enough money, we even invest in them. The United States ought to be ahead. We want to do everything we can to make sure the United States is ahead."

I am super patriotic and really supportive of American national defense in every way I can be. From my perspective, which is reasonably well-informed, selling advanced GPUs to America first and then deprecated versions to China later is a good policy that actually cements American dominance and eliminates the risk that China surpasses us in AI. Conversely, if we deny them GPUs like the B30 that are deprecated relative to the ones available to America, and as a consequence of this denial China develops their own semiconductor ecosystem - likely centered around optical scale-up networking technologies given their surplus of watts - they actually might end up surpassing America in AI. Selling B30s to China is super pro-American, especially given Vera Rubin is launching imminently.

0
0
0
23