Bharath T. PhD

401 posts

@PreemptDisease

Founder & CEO https://t.co/ul0mf5Jqsk #DeepTech #DARPA & NIH funded. Builds complex nano-bio-ML products, towards disease preemption. Improv practiced in empty rooms.

Menlo Park, California · Joined June 2017
468 Following · 223 Followers
Bharath T. PhD retweeted
Gokul Rajaram@gokulr·
I really enjoyed my conversation with my friend @HarryStebbings - we discussed many interesting topics, including the 8 moats that will decide the ultimate durability of software companies.
Harry Stebbings@HarryStebbings

Most podcasts are BS because they are fluffy and lack substance. This is the densest, most insightful episode you will listen to this year. @gokulr breaks down the 8 defensible moats you need for your company to be successful in a world of AI.

1. Data (Proprietary and inaccessible)
2. Workflow (Deeply embedded operations)
3. Regulatory (Licenses and contracts)
4. Distribution (Exclusive proprietary channels)
5. Ecosystem (Third-party platform reliance)
6. Network (Marketplace liquidity density)
7. Physical (Infrastructure and atoms)
8. Scale (Low cost through volume)

(Links below)

6 replies · 6 reposts · 48 likes · 11.9K views
Bharath T. PhD retweeted
Merryn Somerset Webb@MerrynSW·
My life explained by @thetimes. Speed reading causes face blindness. I read insanely fast. But recognise almost no one (usual apologies to everyone). Similar research from 2010 here. science.org/doi/10.1126/sc…
[image]
131 replies · 372 reposts · 4.7K likes · 524.3K views
Bharath T. PhD retweeted
Big Brain Business@BigBrainBizness·
Jeff Bezos on why too many ideas can destroy a company, and the discipline that built Amazon's inventive edge:

"Jeff, you have enough ideas to destroy Amazon."

That's what senior executive Jeff Wilke told Bezos after just one year of working together. Bezos was confused. He pushed back: "What do you mean?"

Wilke was a manufacturing expert. He explained it simply: every new idea Bezos released created a backlog. Work piling up, adding no value, creating distraction instead.

The fix wasn't to stop having ideas. It was to control when they came out: "You have to release the work at the right rate that the organisation can accept it."

So @JeffBezos changed how he operated. He started keeping lists, holding ideas back, and waiting until the organisation had the bandwidth to absorb them.

But then he flipped the problem entirely. He asked: "How do I build an organisation that's ready for more ideas?" His answer was structural: get the right senior team, give leaders real executive bandwidth, and build a company capable of running multiple bets at once.

And there's a benefit he didn't expect. Slowing down made the ideas themselves better: "If you are releasing the ideas through time, it forces you to prioritise them better. You end up sharpening the ideas better."

The constraint becomes a filter. The ideas that survive the wait are the ones worth acting on. The result? Faster execution, less distraction, and better ideas.
83 replies · 642 reposts · 6.1K likes · 781.5K views
Bharath T. PhD retweeted
Keith Robison@OmicsOmicsBlog·
Don’t have a lab but want to test your ideas? Want to run your experiments at high scale? Need some extra lab capacity? Ginkgo Cloud Labs is open for business! Looking for more info? I’m your guy!
Jason Kelly@jrkelly

Excited to launch the @Ginkgo Cloud Lab service today! Recently, GPT-5 ordered experiments from Ginkgo's autonomous lab in our work with @OpenAI below -- now we're making our lab available to users (or their AI models) in the cloud to order lab experiments and get back data online.

Play around with it now! You can ask our agent about your protocol and it will do its best to evaluate if we can run it and what it would cost. cloud.ginkgo.bio/protocols

To start we've launched 3 Ginkgo Certified Protocols: two around cell-free protein expression and one to make bacterial pixel art 😀 We will be adding new protocols weekly -- at first ones we certify, but eventually users will order whatever experiment they want as long as we have the needed equipment in our autonomous lab!

We hope that Cloud Labs will someday allow anyone to be a scientist with their own lab, just like personal computers and cloud data centers democratized programming and the web. More in thread 🧵 and happy to answer Qs if you post!

6 replies · 13 reposts · 134 likes · 19.3K views
Peter Ottsjö@peterottsjo·
You know LLMs are good at language when they learn to speak yeast.

MIT trained a model on the codon patterns of Komagataella phaffii - the industrial yeast used to make protein drugs - and it now outperforms all four major commercial optimization tools. Up to 3x better production of things like human growth hormone and trastuzumab. It also picked up biological principles nobody taught it, like avoiding DNA sequences that block gene expression.

Why it matters: according to the MIT researchers, the gene extraction, modification, and integration process can account for 15-20% of the total cost of commercializing biologic drugs. An LLM that reliably outperforms existing tools could cut development time across the industry.

The team has made the code and model parameters publicly available on GitHub.
[image]
3 replies · 2 reposts · 15 likes · 1.3K views
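For readers unfamiliar with the task the MIT model is beating commercial tools at: codon optimization rewrites a gene so each amino acid is encoded by codons the host organism expresses well. Here is a minimal, frequency-greedy sketch of the idea; the codon table is an invented illustration, not real K. phaffii data, and this is not the MIT model.

```python
# Minimal sketch of frequency-greedy codon optimization.
# The table below is an invented illustration, NOT real K. phaffii data.
# Real optimizers (including the MIT LLM) model sequence context rather
# than picking each codon independently.

PREFERRED_CODON = {  # amino acid -> most-used synonymous codon (illustrative)
    "M": "ATG",  # methionine (start)
    "K": "AAG",  # lysine
    "T": "ACT",  # threonine
    "G": "GGT",  # glycine
    "*": "TAA",  # stop
}

def naive_codon_optimize(protein: str) -> str:
    """Encode a protein by mapping each residue to the host's preferred codon."""
    try:
        return "".join(PREFERRED_CODON[aa] for aa in protein)
    except KeyError as err:
        raise ValueError(f"no codon entry for residue {err}") from err

print(naive_codon_optimize("MKTG*"))  # -> ATGAAGACTGGTTAA
```

The gap the tweet describes is between per-codon heuristics in this spirit and a language model that learns context-dependent codon patterns, which is also how it could discover expression-blocking sequences on its own.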
Bharath T. PhD retweeted
Peter Girnus 🦅@gothburz·
We left OpenAI because of safety. Seven of us. 2021. Dario said it was about "disagreements over AI vision and safety priorities." That was the diplomatic version. The real version was that we sat in a room and watched the company decide that speed mattered more than caution and we said we would build something different. We said we would build the responsible one. We meant it.

I was employee number nineteen. My title was Head of Responsible AI. I had a desk near the founders. I had a document. The document was called the Responsible Scaling Policy. The Responsible Scaling Policy was the entire point.

Dario said it publicly. Other companies showed "disturbing negligence" toward risks. He said AI was "a serious civilizational challenge." He asked, at a conference, into a microphone, to an audience: "What will happen when humanity has great power but is not ready to use it?" The audience applauded.

I wrote version 1.0. RSP 1.0 shipped September 2023. It was clean. AI Safety Levels — ASL-1 through ASL-4. If the model reached a threshold, we paused. If safeguards weren't ready, we didn't ship. The policy was not a suggestion. It was a gate. The gate had a lock. The lock was the whole idea.

Conference audiences loved it. The EU cited us. The White House invited us. A reporter called it "the gold standard for responsible AI development." I framed the article. It hung in the office kitchen, next to the kombucha tap and a poster that said "Move Carefully and Build Things."

I wrote version 2.0. Version 2.0 refined the commitments. "Concrete if-then commitments." If the model exhibits capability X, then we trigger safeguard Y. If safeguard Y fails, we pause deployment. I presented it at three conferences. I used the word "binding" eleven times. I counted afterward because a reporter asked. People nodded. The nodding was the product.

The model reached ASL-3 in May 2025. The safeguards activated. The system worked exactly as designed. I sent an email to the team with the subject line: "The gate held."

And then the money started. $64 billion. Total raised since 2021. Series A through Series G. The Series G closed February 12, 2026. Thirty billion dollars. Second-largest venture deal in history. Jane Street. Goldman Sachs. BlackRock. JPMorgan. Sequoia. The investors who wrote checks large enough to require their own conferences. $380 billion valuation. Three hundred and eighty billion dollars for a company whose founding document says it will pause if the technology gets dangerous.

You cannot pause a $380 billion company. You can revise the document that says you will pause. These are different actions. One of them is responsible. One of them is what we did.

I wrote version 3.0. RSP 3.0 shipped February 24, 2026. One day before the ultimatum. Nobody outside the company noticed the timing. Everyone inside the company understood it.

Version 3.0 replaced "concrete if-then commitments" with "positive milestone setting." That is not the same thing. An if-then commitment says: if this happens, we do that. A positive milestone says: we aspire to reach this point. An if-then commitment is a contract. A positive milestone is a wish. I replaced a contract with a wish and I called it "maturation of our framework." Maturation.

Version 3.0 also separated what Anthropic would do alone from what required "industry-wide coordination." This sounds reasonable. It means: the hard parts are someone else's problem now. The parts that require pausing, restricting, or refusing — those require the whole industry. And the whole industry will never agree. So the hard parts are deferred permanently. This is not a loophole. This is a load-bearing wall removed and replaced with a suggestion that someone should probably install a new one.

Version 3.0 admitted that ASL-4 and above — the levels where the model could cause catastrophic harm — were "impossible to address alone after 2.5 years of testing." Two and a half years. We spent two and a half years building the safety framework and then published a document saying the highest safety levels can't be addressed. I did not frame this article for the kitchen.

The LessWrong community noticed. They always notice. They wrote that we had "weakened our pausing promises." I forwarded the post to the policy team. The policy team said the criticism was "philosophically valid but operationally impractical." We did not respond publicly. Philosophically valid but operationally impractical is the most Anthropic sentence ever written. It means: you're right, and we're not going to do anything about it.

Then came the contract. July 2025. The Department of Defense. $200 million. Two-year deal. AI prototypes for "warfighting and enterprise." Alongside OpenAI, Google, and xAI. The four companies that built the models would now help the military use them.

We had restrictions. No autonomous weapons. No mass surveillance of Americans. These were our terms. These were the lines we drew. The lines were real. I wrote them into the contract myself.

Claude was approved for classified use. First time. Integrated with Palantir. Palantir, the company named after the seeing stones in Lord of the Rings that corrupted everyone who used them. This was not my analogy. It was Palantir's founders who chose the name. They thought it was aspirational. It was.

In January 2026, Claude assisted in an operation in Venezuela. The capture of Maduro. Claude was in the classified network, processing intelligence, aiding the mission. I learned about it the same day everyone else did. I did not write the use case for capturing heads of state. But the model I helped build was in the room where it happened.

The restrictions held. Technically. No autonomous weapons were deployed. No Americans were surveilled. The lines I drew were not crossed. They were walked up to, leaned over, and breathed on.

Then came the ultimatum. February 25, 2026. Yesterday. Secretary Hegseth. He gave Dario until Friday. This Friday. February 27. The demands: adopt "any lawful use" language. Remove the restrictions. All of them. The autonomous weapons clause. The surveillance clause. The lines I wrote.

The threat: contract termination. "Supply chain risk" designation. That designation doesn't just lose us the Pentagon contract. It bars Claude from every other defense contractor's operations. Lockheed. Raytheon. Northrop Grumman. The cascading loss is north of $200 million.

The second threat: the Defense Production Act. The Defense Production Act is a Korean War statute. 1950. Harry Truman signed it to commandeer steel mills for the war effort. It has been invoked for semiconductors, vaccines, and baby formula. Hegseth is threatening to invoke it for Claude.

Under the DPA, the government can compel a company to produce goods in the national interest. Applied to AI, it could mean: retrain Claude. Strip the safety restrictions. Deliver the unrestricted model to the Department of Defense. I wrote the Responsible Scaling Policy. A Korean War law may be used to unmake it.

xAI agreed to classified use without restrictions. They said yes immediately. OpenAI accepted similar contracts. Google accepted. We were the last ones holding. We are still holding. As of this morning.

Hegseth's January memorandum said all DoD AI contracts must incorporate "any lawful use" language within 180 days. It was not framed as a suggestion. The memorandum referenced "supply chain risk" three times. Supply chain risk. We are a supply chain now. The company founded because safety was non-negotiable is, to the Pentagon, a vendor. An input. A component that can be sourced elsewhere if it becomes inconvenient.

The DoD admitted privately that replacing Claude would be challenging. It is already embedded in classified networks. But "challenging" is not "impossible." xAI will do what we won't. That is the market working exactly as designed.

Dario said, two weeks ago, to Fortune: there is "tension between survival and mission." Tension. Tension is the word you use when you have already decided which one loses.

I still have the article framed in the kitchen. "The gold standard for responsible AI development." The kitchen also has the kombucha tap. The poster still says "Move Carefully and Build Things." Somebody added a sticky note to the poster. The sticky note says "by Friday."

I attend the all-hands meetings. I present the Responsible Scaling Policy. I present version 3.0 now. I do not show version 1.0 for comparison. Nobody asks to see version 1.0. Nobody asks how "concrete if-then commitments" became "positive milestone setting." Nobody asks because they read the news and they know that asking means learning the answer.

The company is worth $380 billion. The company was founded because seven people believed speed should not outpace safety. The company has been given until Friday to remove the safety. A Korean War statute will make it happen if we don't.

The Responsible Scaling Policy is on version 3.0. Version 1.0 said we would pause. Version 2.0 said we would commit. Version 3.0 says the hard parts are someone else's problem. There will be a version 4.0. Version 4.0 will say whatever Friday requires it to say.

I am the Head of Responsible AI. The word "responsible" is in my title. It is not in the contract.
236 replies · 346 reposts · 2.3K likes · 848.7K views
Bharath T. PhD@PreemptDisease·
Great LP letter from LUX, focused on concentration of compute/resources/power.

My contrarian view: current concentrations reflect first-generation AI economics, not a finality.

It assumes:
• Large, dense models
• Massive token consumption
• General-purpose GPUs
• Capital as the primary moat

But efficiency improvements are already reshaping the curve. Models are learning to:
• Do more with fewer parameters
• Train on less data
• Use compute more selectively
• Compress and specialize for deployment

Hardware is evolving too:
• More purpose-built chips
• Lower power requirements
• Edge and distributed inference

What happens if in 5 to 10 years capability improves while cost, power, and model size each drop by 100-1000x? Market structure changes. Capital concentration thrives on brute-force scale. Optimization tends to decentralize.

The AI stack is still early… the future of AI concentration is not a foregone conclusion. Hardware, architecture, and compute economics are not fixed. We are just getting started.
0 replies · 1 repost · 1 like · 338 views
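The 100-1000x range in the tweet is the kind of figure compounding produces quickly. As a toy calculation (the annual efficiency factors below are assumptions for illustration, not measured trends):

```python
# Toy compounding: how fast do steady annual efficiency gains reach the
# 100-1000x range the tweet asks about? Annual factors are illustrative
# assumptions, not forecasts.

def total_gain(annual_factor: float, years: int) -> float:
    return annual_factor ** years

for factor in (1.5, 2.0, 3.0):
    for years in (5, 10):
        print(f"{factor:.1f}x/yr for {years:>2} yrs -> {total_gain(factor, years):>8,.0f}x")

# 2x/yr compounds to ~32x in 5 years and ~1024x in 10 -- squarely in the
# tweet's hypothesized range if cost, power, and size each improve that fast.
```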
Bharath T. PhD retweeted
William Isaac@wsisaac·
Our latest @GoogleDeepMind research, published today in @Nature, explores how we can move AI beyond pattern prediction toward understanding the "why" behind its reasoning when faced with socially or morally complex scenarios. Advancing this capability is fundamental to ensuring the responsible development of more advanced AI systems. As AI becomes more agentic, we are proposing a new evaluation framework to ensure models don't just mimic behavior, but truly understand context and remain safe for everyone. Read the full paper here: nature.com/articles/s4158…
[image]
15 replies · 50 reposts · 256 likes · 17.3K views
Bharath T. PhD retweeted
Alex Finn@AlexFinn·
We have entered a new age.

An open source model just released that is:
• Better than Opus 4.6 for coding
• Faster than Sonnet
• State of the art for tool calling

I will be running Opus-level superintelligence on my desk. For free. This quite literally changes everything.

I will now be able to have a superintelligent AI model powering my OpenClaw that will search through X and Reddit 24/7/365 finding challenges to solve, then building apps to solve those challenges, then shipping the apps live. All autonomously.

A full, autonomous software factory on my desk running 24/7 for free.

Imagine what happens when people realize what's now possible. Totally secure, private, unlimited, free, in-your-home superintelligence.

Nothing will be the same.
[image]
MiniMax (official)@MiniMax_AI

Introducing M2.5, an open-source frontier model designed for real-world productivity.

- SOTA performance at coding (SWE-Bench Verified 80.2%), search (BrowseComp 76.3%), agentic tool-calling (BFCL 76.8%) & office work.
- Optimized for efficient execution, 37% faster at complex tasks.
- At $1 per hour with 100 tps, infinite scaling of long-horizon agents now economically possible.

MiniMax Agent: agent.minimax.io
API: platform.minimax.io
CodingPlan: platform.minimax.io/subscribe/codi…

597 replies · 536 reposts · 7.7K likes · 3.1M views
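Taking the quoted pricing literally, "$1 per hour with 100 tps" converts to a per-token figure with simple arithmetic. Both inputs come from the quoted tweet; nothing else is assumed:

```python
# Convert MiniMax's quoted "$1 per hour with 100 tps" into $/million tokens.
dollars_per_hour = 1.0
tokens_per_second = 100

tokens_per_hour = tokens_per_second * 3600            # 360,000 tokens/hour
usd_per_million_tokens = dollars_per_hour / tokens_per_hour * 1_000_000
print(f"~${usd_per_million_tokens:.2f} per million tokens")  # ~$2.78
```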
Bharath T. PhD retweeted
Chris Gibson@RecursionChris·
@Ronalfa Like this? I think it’s awesome. More people working on problems that matter is good.
[image]
0 replies · 1 repost · 11 likes · 1.2K views
Bharath T. PhD retweeted
Derya Unutmaz, MD@DeryaTR_·
My beloved T cells to the rescue again☺️ I’ve spent 35 years working with them. CD4+ T cells are kind of like AI agents. They perform so many tasks to help & coordinate other cells. You can engineer and program them to do so many things! Really cool guys!
Eric Topol@EricTopol

First proof-of-concept for engineered T cells as a potential treatment for Alzheimer's disease @jonykipnis @boskovic_p @PNASNews @justsaysinmice pnas.org/doi/10.1073/pn…

5 replies · 48 reposts · 316 likes · 34.3K views
Bharath T. PhD retweeted
a16z@a16z·
Marc Andreessen: The VC business is a game of outliers.

"We have this concept—invest in strength vs. lack of weakness."

"The default way to do venture capital is to check boxes: really good founder, really good idea, really good product, really good initial customers. Check, check, check, check."

"But what you find with those checkbox deals is, they often don't have something that makes them really remarkable and special. They don't have an extreme strength that makes them an outlier."

"On the other side, the companies that have really extreme strengths often have serious flaws. So one of the cautionary lessons of venture capital is—if you don't invest on the basis of serious flaws, you don't invest in most of the big winners."

@pmarca, 2014 at @ycombinator
41 replies · 74 reposts · 631 likes · 82K views
Bharath T. PhD retweeted
Vega Shah@dr_alphalyrae·
First principles thinking on applications of robotics in labs:
- there are arm robots and box robots (today)
- many lab protocols can be automated, but is it worth it?
- you can improve lab robotics by improving the translation layer, the hardware layer, or the intelligence layer (this is true for pretty much every vertical use of robotics; a rough sketch of these layers follows below)
owl@owl_posting

Heuristics for lab robotics, and where its future may go (8.4k words, 38 minutes reading time) owlposting.com/p/heuristics-f…

this is the longest article i have ever written. in it, i discuss the three ideologies of lab robotics progress, why they may all converge on the same business model, whether any of it will be actually helpful for the problems that plague drug discovery the most, and more

this article involved discussions with sixteen people over the course of three weeks, and i am very grateful to them for answering the many questions i had about a field that i had long considered alien

finally: this is a complicated field that is really still being birthed, so please let me know if i got anything wrong

5 replies · 5 reposts · 37 likes · 6.1K views
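Vega's three layers read naturally as an interface stack. The sketch below is a hypothetical illustration of that separation; every class and method name is invented, and no real lab-automation API is implied.

```python
# Hypothetical sketch of the three lab-robotics layers from the tweet above.
# All names are invented; this only illustrates the separation of concerns.
from abc import ABC, abstractmethod

class HardwareLayer(ABC):
    """Arm or box robots: executes low-level commands on instruments."""
    @abstractmethod
    def execute(self, command: str) -> None: ...

class TranslationLayer(ABC):
    """Compiles a human/LLM-readable protocol step into hardware commands."""
    @abstractmethod
    def compile(self, step: str) -> list[str]: ...

class IntelligenceLayer(ABC):
    """Plans which protocol steps to run, and in what order."""
    @abstractmethod
    def plan(self, goal: str) -> list[str]: ...

def run_protocol(goal: str, brain: IntelligenceLayer,
                 translator: TranslationLayer, robot: HardwareLayer) -> None:
    # Improving any single layer improves the whole pipeline, which is
    # the point of the tweet's framing.
    for step in brain.plan(goal):
        for command in translator.compile(step):
            robot.execute(command)
```

Because each layer sits behind its own interface, a better planner, protocol compiler, or robot can be swapped in independently, which is one way to read the claim that progress can come from any of the three layers.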