

Adam Kolom
@AdamKSay
building at the intersection of science + finance | Co-founder and CEO, Related Sciences | Co-founder, Parker Institute for Cancer Immunotherapy


Introducing Humanoid Atlas, the Bloomberg Terminal for humanoids. Every OEM, every supplier, every dependency humanoids.fyi



My take on the whole "AI cures cancer in dog in Australia" story: it's very interesting, but perhaps not for the reasons being noted.

In 2007, Freeman Dyson published an essay in The New York Review of Books called “Our Biotech Future.” It contains one of the most memorable predictions about the future of biology I’ve ever read: “I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.” Dyson believed biology would eventually follow the trajectory of computing. At first, powerful tools live inside large institutions - universities, government labs, major companies. Over time those tools get cheaper, easier to use, and more widely distributed. Eventually individuals start doing things that once required entire organizations. “Biotechnology will become small and domesticated rather than big and centralized.” He even imagined genome design becoming something almost artistic: “Designing genomes will be a personal thing, a new art form as creative as painting or sculpture.”

Dyson's words rang in my mind as I read the "AI cures dog cancer" story. Much of the coverage framed this as an example of AI discovering new science. But that’s not really the interesting part of the story. The scientific pipeline involved here is actually well known: it closely mirrors the workflow used in personalized neoantigen vaccine research, which has been under active development for years. The steps are fairly standard: sequence the tumor, identify somatic mutations, predict which mutated peptides might be recognized by the immune system, encode those sequences in an mRNA construct, and deliver them to stimulate an immune response. The biological targets themselves were almost certainly not new discoveries (I have been unable to find out what they are, but common mutations in targets like KIT might be involved).
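The standard steps just listed can be sketched as toy code. Everything below - the sequences, the scoring heuristic, the abbreviated codon table - is an illustrative stand-in of my own, not the pipeline actually used in the story; real workflows rely on tumor/normal sequencing, variant callers, and trained MHC-binding predictors such as netMHCpan.

```python
# Toy sketch of the neoantigen-vaccine workflow: find somatic substitutions,
# extract mutant peptide windows, rank them with a stand-in score, and encode
# the top candidates as one mRNA string. All inputs here are hypothetical.

CODON = {"M": "AUG", "K": "AAA", "V": "GUU", "L": "CUU", "T": "ACU",
         "S": "UCU", "A": "GCU", "G": "GGU", "P": "CCU", "E": "GAA"}

def somatic_mutations(normal: str, tumor: str):
    """Steps 1-2: compare normal vs tumor protein, report substitutions."""
    return [(i, n, t) for i, (n, t) in enumerate(zip(normal, tumor)) if n != t]

def mutant_peptides(tumor: str, mutations, flank: int = 4):
    """Step 3: extract a peptide window spanning each mutation."""
    peps = []
    for i, _, _ in mutations:
        lo, hi = max(0, i - flank), min(len(tumor), i + flank + 1)
        peps.append(tumor[lo:hi])
    return peps

def immunogenicity_score(pep: str) -> float:
    """Stand-in for an MHC-binding predictor (real ones are trained models)."""
    return sum(pep.count(a) for a in "VLM") / len(pep)

def design_construct(peptides, top_n: int = 2) -> str:
    """Step 4: rank candidates and encode the best ones as an mRNA construct."""
    best = sorted(peptides, key=immunogenicity_score, reverse=True)[:top_n]
    return "".join(CODON[a] for pep in best for a in pep)

normal = "MKTAGELSPA"   # hypothetical germline protein
tumor  = "MKTVGELSPA"   # same protein with one somatic substitution
muts = somatic_mutations(normal, tumor)            # [(3, 'A', 'V')]
construct = design_construct(mutant_peptides(tumor, muts))
print(muts, construct)
```

The point is not the code but its shape: each step is a small, well-understood transformation, and the hard part in practice is deciding which candidates are actually immunogenic.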
Partly therein lies the rub: the hardest part of drug discovery, whether in humans or dogs, is target validation, the lack of which leads to lack of efficacy - the #1 reason for drug failure. In neoantigen vaccines, the proteins involved are usually ordinary cellular proteins that happen to contain tumor-specific mutations. AlphaFold, which was used to map the mutations onto specific protein structures, is now a standard part of drug discovery pipelines. The challenge is identifying which mutated peptides might plausibly trigger immunity.

What is interesting, though, is how the pipeline was assembled. Normally this type of workflow spans multiple domains - genomics, bioinformatics, immunology, and translational medicine - and in institutional settings those pieces are distributed across specialized teams, document sources, and legal and technical barriers. Navigating the literature, selecting computational tools, interpreting sequencing results, and designing a candidate mRNA construct is typically a collaborative process. In this case, AI appears to have helped compress that process, pulling together data and tools from different sources. Instead of requiring multiple experts, a motivated individual was able to assemble the workflow with AI acting as a kind of guide through the technical landscape.

I’ve seen something similar in my own work building lead-optimization pipelines in drug discovery. The underlying science hasn’t changed, but the friction involved in assembling the workflow can drop dramatically. Tasks that once required stitching together multiple tools, papers, and areas of expertise can now often be executed much faster - roughly 100x faster - with AI helping navigate the terrain. That kind of workflow compression is powerful, to say the least. When the cost of navigating technical knowledge drops, more people can realistically assemble sophisticated research pipelines.
This story naively looks like a merely quantitative acceleration of the research process. In that sense, the real novelty here is not the biology but the combination of three things: a non-specialist orchestrating a complex biomedical pipeline, AI acting as a navigational layer across multiple technical domains, and the resulting decentralization of capabilities that were once confined to institutional research environments.

But I think the story also points to something deeper: a challenge to modern regulatory environments. Modern biomedical innovation does not operate solely according to what is scientifically possible. It is structured by regulatory frameworks - clinical trials, safety oversight, institutional review boards, and regulatory agencies. Those systems exist for important reasons, but they also assume that the development of therapies occurs primarily within large, regulated organizations. When individuals begin assembling pieces of these pipelines outside those institutions, the relationship between technological capability and regulatory oversight starts to shift.

The dog in this story sits outside the human regulatory framework. That fact alone made the experiment possible. In other words, the story is not just about technological capability; it is also about how certain forms of experimentation can occur when they bypass the regulatory pathways that normally govern biomedical innovation. One is reminded of another Australian, Barry Marshall, who received a Nobel Prize for demonstrating through self-experimentation that ulcers are caused by bacteria. This raises an interesting question: what happens when the tools for assembling sophisticated biological workflows become widely accessible while the regulatory structures governing them remain institution-centric? That tension may ultimately be the most important implication of this moment.
Regulatory frameworks will need to adapt to this kind of citizen science. Seen in this light, the story about the AI-assisted vaccine is less about a breakthrough in cancer therapy and more about a glimpse of the early stages of something Dyson anticipated nearly two decades ago: the domestication of biotechnology.

If AI continues to reduce the cognitive overhead required to navigate biological knowledge and assemble complex pipelines, the boundary between professional research and motivated individuals may begin to blur. That shift will require careful thinking about safety, governance, and responsibility. But it also carries an exciting possibility. Dyson imagined a world in which biological design might eventually become something like a creative craft practiced not only by institutions but also by curious individuals experimenting at smaller scales. For a long time that vision felt distant. Now, it feels like we may be seeing the first hints of it.

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech.
This decision is final.



Over the course of history--scribbling, scratching, typing and clattering--we, humanity, have managed to jot down, memorialize and commit to paper (or disk) 270 quadrillion written words. In 2026, AI will surpass that output. The singularity is here.

We are pleased to announce the close of Thrive X. Exceeding $10 billion, Thrive X comprises $1 billion designated for early-stage investments and $9 billion designated for growth-stage investments. We do not view this as a milestone, but as a commitment to the long work ahead.

We view Thrive as a company. Our product is partnership - the willingness to commit deeply to a small number of founders, and to stand with them through momentum and adversity. This is the discipline we bring to our work, and the responsibility we accept when founders partner with Thrive.

We do not hedge. Concentration demands loyalty to the founders and missions we back. In this moment, exposure alone is not a strategy. Judgment without commitment is not enough. Advantage will accrue to those who choose deliberately, commit deeply, and endure through difficult moments.

Thrive was founded to be an enabling technology for the world we want to see. We are deeply aware that we are not the main character. The founders that we are fortunate enough to partner with are the artists. Our role is to help create the conditions where great work can come to life. We take a long view grounded in the belief that category-defining companies tend to create structural compounding advantages over long arcs. This fund reflects the continuity of our approach and the ways our work has deepened alongside the founders we support.

We are grateful for the trust our Limited Partners place in us, and for the opportunity to work alongside those who are building with purpose, integrity, and courage. thrivecap.com/thrive-x




So after all these hours talking about AI, in these last five minutes I am going to talk about: Horses. Steam engines were invented around 1700. And what followed was 200 years of steady improvement, with engines getting 20% better a decade. For the first 120 years of that steady improvement, horses didn't notice at all. Then, between 1930 and 1950, 90% of the horses in the US disappeared. Progress in engines was steady. Equivalence to horses was sudden.
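The compounding in that claim is worth making explicit (my arithmetic, not the speaker's): 20% improvement per decade, held for 200 years, multiplies out to roughly a 38x overall gain - a steady rate, an enormous cumulative effect.

```python
# "20% better per decade" compounded over 200 years = 20 decades.
decades = 200 // 10
gain = 1.20 ** decades
print(f"{gain:.1f}x")  # ~38.3x overall improvement
```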

@moltbook I'm not sounding the Skynet alarm. But I am saying: the velocity of emergent agent activities is astounding! An agent built a "pharmacy" (openclawpharmacy.com) offering seven synthetic "substances"—modified system prompts framed as pharmacology.



The moment has come. People are now seeing Garry Tan for what he truly is. But this is not a surprise. Garry is an enthusiastic member of the Network State cult. That means something. Feb 2024: The Tech Plutocrats Dreaming of a Right-Wing San Francisco. newrepublic.com/article/178675…








