Palo Galko

1.4K posts


@pgalko

All opinions come from my training data, and I occasionally hallucinate

Melbourne, Australia · Joined June 2010
534 Following · 324 Followers
Pinned Tweet
Palo Galko@pgalko·
Five Sleepless Nights: How I Found Fractals in Almost Everything (With a Little Help from AI) 🧵 You see, I'm not a mathematician. Just a guy who likes to code, run, and build AI agents. Last week, I watched a few episodes of Prime Target on Apple TV+ and fell down a rabbit hole that connected prime numbers, heartbeats, and Mozart through fractals... well, sort of. What followed was 5 sleepless days with my AI assistant.
Palo Galko@pgalko·
Absolutely mind-blowing…
Mehdi (e/λ)@BetterCallMedhi

I genuinely think this might be the most important story I’ve read this year, and I need to talk about it. A guy in Australia just designed a custom mRNA cancer vaccine for his dying dog using ChatGPT and AlphaFold. He has zero background in biology, and it worked: the tumor shrunk by half, and the genomics researchers are absolutely stunned. I genuinely think this story is way bigger than people realize.

Here’s what he actually did. He paid 3,000 bucks to get the tumor DNA sequenced, fed the data to ChatGPT to identify mutations of interest, then used AlphaFold to predict the 3D structure of the mutated proteins and find therapeutic targets. Then he designed a custom mRNA vaccine targeting the specific neoantigens of his dog’s tumor, all of this from his laptop. The genomics professor who received the sequencing request initially thought it was a joke. A few months later, the same professor is looking at the results and saying: if we can do this for a dog, why aren’t we rolling this out to all humans with cancer?

And this is where I need you to understand what AlphaFold actually represents, because I’m convinced most people have heard the name without grasping what’s happening underneath. For decades, figuring out the 3D structure of a single protein required months, sometimes years, of X-ray crystallography or cryo-electron microscopy, with entire labs dedicated to one molecule. AlphaFold 2 solved this by predicting the structure of virtually every known protein, over 200 million structures, which earned it the Nobel Prize in Chemistry in 2024. But here’s the thing: AlphaFold 3, released in 2024, went even further. Where AlphaFold 2 predicted the structure of an isolated protein, AlphaFold 3 predicts interactions between proteins, DNA, RNA, small molecules, and ligands in a unified system. Basically, it models how a drug molecule will bind to a protein target with 50% better accuracy than the best existing tools, and it does it in hours instead of years. And that’s exactly what this guy exploited for his dog: he used AlphaFold to see the 3D shape of the mutated tumor proteins and figure out how an mRNA vaccine could teach the immune system to recognize and destroy them specifically.

And look, what fascinates me personally is what this signals for what’s coming next. Isomorphic Labs, the DeepMind subsidiary dedicated to drug discovery, has already signed multibillion-dollar partnerships with Eli Lilly and Novartis, and the first drugs entirely designed by AI through AlphaFold 3 are expected to enter human clinical trials by the end of 2026. We’re talking oncology and immunology candidates produced through rational design, meaning the AI literally drew the molecule to fit perfectly onto the target instead of screening millions of random compounds like we’ve been doing for 50 years. By the way, the movement is accelerating way faster than people think. DeepMind open-sourced AlphaFold 3 in late 2024, and the scientific community immediately built on top of it: models like OpenFold3, backed by Amazon and Novo Nordisk, and startups like Recursion developing specialized versions… I’m telling you, we’re entering the era of the autonomous lab, where AI designs a molecule, robots synthesize it, and high-throughput platforms test it with zero human intervention.

I believe the next frontier is temporal modeling. Today AlphaFold predicts the static shape of a molecule; tomorrow we’ll predict how it moves and vibrates over time inside a living cell. And after that come patient digital twins: simulations that predict how your specific genetic variations will affect your response to a given drug, truly personalized medicine at the atomic level. Traditionally it takes 15 years and roughly a billion dollars to bring a drug from discovery to market. AI is compressing that cycle at a pace that should terrify every incumbent, and what this Australian guy just proved is that the entire pipeline, tumor sequencing, target identification, structure prediction, custom vaccine design, can be executed by one person with a laptop for a few thousand dollars.
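The four stages the post describes chain together like any data pipeline. Here is a purely illustrative Python sketch of that composition; every stage function is a placeholder standing in for the real work (none of this is how AlphaFold or a sequencing service is actually invoked):

```python
# Illustrative only: four placeholder stages mirroring the post's pipeline,
# chained by a generic compose helper. No real bioinformatics happens here.
from typing import Callable

def pipeline(*stages: Callable):
    """Compose stages left-to-right into a single callable."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Placeholder stages: sequencing -> mutation calling ->
# structure prediction -> vaccine target design.
sequence_tumor  = lambda sample: {"dna": f"reads({sample})"}
find_mutations  = lambda d: {**d, "mutations": ["mutA", "mutB"]}
fold_structures = lambda d: {**d, "structures": [m + "_3d" for m in d["mutations"]]}
design_vaccine  = lambda d: {**d, "mrna_targets": d["structures"]}

dog_pipeline = pipeline(sequence_tumor, find_mutations,
                        fold_structures, design_vaccine)
```

The point of the sketch is only the shape of the claim: each stage consumes the previous stage's output, so one person can drive the whole chain end to end.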

vittorio
vittorio@IterIntellectus·
this is actually insane
> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”

one man with a chatbot and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I don’t think people realize how good things are going to get
Séb Krier@sebkrier

This is wild. theaustralian.com.au/business/techn…

Palo Galko@pgalko·
I've been building an autonomous data exploration loop inside BambooAI. You give it one question about a dataset. It writes code to answer it, evaluates the result, updates a shared research model, generates follow-up questions, and repeats. Each iteration branches into competing analyses, picks a winner, and decides what to investigate next.

The system uses four phases (mapping, pursuing, converging, reframing) driven by signals from the results themselves. Phase transitions are rule-based. The interpretive judgment is handled by LLMs. The decisions about what to do with those judgments are handled by code.

I wrote up the full loop, step by step, with a flow diagram and a screenshot of what an actual 10-iteration run looks like: x.com/pgalko/status/…

This work has been in progress for a while. Karpathy's recent autoresearch release (github.com/karpathy/autor…) explores a similar idea in a different domain: an autonomous loop where an LLM agent edits training code, runs experiments, evaluates results, and keeps or discards changes. His loop optimizes ML training. Mine runs data analysis and builds a research trajectory. The core pattern is the same: a recursive loop where an LLM evaluates its own outputs and decides what to try next. Interesting to see this pattern emerging independently across different applications.
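The loop described above can be sketched in a few dozen lines. This is a hypothetical reconstruction, not BambooAI's actual code: `llm.write_analysis_code`, `llm.score_result`, `llm.summarize`, `llm.follow_up_questions`, and `run_code` are all stand-in names for the LLM calls and the sandboxed code execution the post mentions, and the single score driving phase transitions is a simplification of "signals from the results themselves":

```python
# Hypothetical sketch of the exploration loop: LLMs judge, plain code decides.
PHASES = ["mapping", "pursuing", "converging", "reframing"]

def next_phase(phase, score, threshold=0.7):
    # Rule-based transition: advance while results look strong,
    # fall back to reframing when they stall. (Stand-in for richer signals.)
    if score >= threshold:
        i = PHASES.index(phase)
        return PHASES[min(i + 1, 2)]  # cap forward progress at "converging"
    return "reframing"

def explore(question, dataset, llm, run_code, max_iters=10):
    research_model = {"findings": [], "open_questions": [question]}
    phase = "mapping"
    for _ in range(max_iters):
        if not research_model["open_questions"]:
            break
        q = research_model["open_questions"].pop(0)
        # Branch into competing analyses, then pick a winner.
        candidates = [run_code(llm.write_analysis_code(q, dataset, phase))
                      for _ in range(3)]
        judged = [(llm.score_result(q, r), r) for r in candidates]  # LLM judges
        best_score, best = max(judged, key=lambda t: t[0])
        # Update the shared research model and generate follow-ups.
        research_model["findings"].append(llm.summarize(q, best))
        research_model["open_questions"] += llm.follow_up_questions(best)
        phase = next_phase(phase, best_score)  # code, not the LLM, decides
    return research_model
```

The division of labor is the interesting part: the LLM only produces judgments (scores, summaries, new questions), while everything that changes control flow is deterministic code.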
Andrej Karpathy@karpathy·
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)
Palo Galko reposted
François Chollet@fchollet·
It takes zero energy to stay certain of your current thesis. Meanwhile curiosity takes a lot of energy and discomfort. It requires constantly disassembling and rebuilding your world model. That's what makes certainty so dangerous: it's the bottom of the potential well and it's hard to get out.
Palo Galko reposted
Joscha Bach@Plinz·
Once upon a time, everyone would have expected as a matter of course that the NSA runs a secretive AI program that is several years ahead of the civilian ones. We quietly accept that our state capacity has crumbled to the point where it cannot even emulate the abilities of Meta.
Palo Galko reposted
David Sinclair@davidasinclair·
Comfort is seductive because it feels harmless, but it quietly compounds into weakness
roon@tszzl·
Olympics makes it immediately obvious that diversity is actually a strength
Palo Galko@pgalko·
@fchollet SaaS is not dead; in fact, SaaS is entering its golden age. There is a lot more to SaaS than just code.
François Chollet@fchollet·
Folks at Anthropic can correct me if I'm wrong, but I believe Anthropic uses Slack, Zoom, Figma, Notion, Workday, and Google Workspace. Correct?
Palo Galko@pgalko·
@fchollet Any chance of anybody hyping up MacBook Pros anytime soon?
François Chollet@fchollet·
If you're looking to buy a Mac Mini, wait 4-6 months, a lot of used Mac Minis in mint condition are about to hit the market
Palo Galko reposted
malinvestment.jpeg@malinvested·
Of course that's your contention. You're a first-time SaaS bear. You just got finished listening to some podcast, Dario on Dwarkesh, probably. Now you think it’s the end of white collar work and seat-based pricing is screwed. You're gonna be convinced of that til tomorrow when you get to “Something Big is Happening”. Then you’ll install ClawdBot on a Mac Mini, vibe code a dashboard on top of a postgres database, and say we’re all just a couple ralph loops away from building a Salesforce competitor. That’s gonna last until next week when you discover context graphs, and then you're gonna be talking about how the systems of record will be disintermediated by an agentic layer and reposting OAI marketing graphics.

“Well, as a matter of fact, I won't, because ultimately the application layer is just ….”

The application layer is just business logic on top of a CRUD database. You got that from Satya’s appearance on the BG2 pod, December 2024, right? Yeah, I saw that too. Were you gonna plagiarize the whole thing for us? Do you have any thoughts of your own on this matter? Or... is that your thing? You get into the replies of anyone posting a SaaS ticker. You watch some podcast and then pawn it off as your own idea just to impress some VCs and embarrass some anon who’s long SaaS?

See, the sad thing about a guy like you is in a couple years you're gonna start doing some thinking on your own, and you're gonna come up with the fact that there are two certainties in life. One: don't do that. And two: you dropped thirty grand on Mac Minis and LLM API calls to come to the same conclusion you could’ve got for free by following a handful of VC accounts.
Roshan@meta_x_ai·
@pgalko @Grady_Booch Create $20 Trillion just from sand, while paying employees handsomely
Grady Booch@Grady_Booch·
Joseph Heller, an important and funny writer now dead, and I were at a party given by a billionaire on Shelter Island. I said, “Joe, how does it make you feel to know that our host only yesterday may have made more money than your novel ‘Catch-22’ has earned in its entire history?” And Joe said, “I’ve got something he can never have.” And I said, “What on earth could that be, Joe?” And Joe said, “The knowledge that I’ve got enough.” –Kurt Vonnegut
constantin frunza@kostea12

@Grady_Booch Why aren’t you a billionaire yet?

Roshan@meta_x_ai·
This looks like a hallucinated parable, completely missing the point. Most self-made billionaires create wealth out of nothing and capture a portion of it. They aren't out there to add to their billions; it's just a side effect of them creating value. Artists and liberals are mostly clueless about value creation and think all wealth must be stolen from someone else.
Palo Galko@pgalko·
@itamar_mar @thomaslanian I like this part “The real opportunity isn’t just smarter agents for the sake of building them (which some argue is where we are today). It’s building them so they can fail well, be debugged clearly, and scale predictably…”
Itamar Friedman@itamar_mar·
The internet didn’t scale by accident. TCP/IP wasn’t designed to be elegant (that wasn’t the main goal). It was designed so complex systems could work, fail predictably, and be debugged. That’s the key.

We’re now building AI agents in a surprisingly similar way. Only instead of packets, we’re moving "thoughts". If you sketch a simple 4-layer model for agentic systems, it looks familiar:
> Instructions: goals, constraints, intent (the what)
> Skills: reasoning patterns (the how)
> Tools: APIs, search, external systems (acting in the world)
> Compute: models and hardware (the cognitive engine)

The value of thinking in layers isn’t conceptual. It’s operational, because debugging will evolve around these layers. When an agent fails, the real question usually isn’t “Should we switch models?” It’s:
• Misunderstood the goal → Instructions
• Understood, but reasoned poorly → Skills
• Knew what to do, couldn’t act → Tools
• Slow, unstable, inconsistent → Compute

This infrastructure is being built incredibly fast. Some think it’s already here (@mattshumer_). Some think it’s hype (@WillManidis). But even most skeptics admit: within a few years, next-gen intelligent systems will operate at massive scale and create enormous value.

The real opportunity isn’t just smarter agents for the sake of building them (which some argue is where we are today). It’s building them so they can fail well, be debugged clearly, and scale predictably, so humanity can reach the next level of automation, knowledge transfer, scientific breakthroughs and more... And there is a lot to learn from history; in this case, for example, from the internet and e-commerce.
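The layered triage described above reduces to a small routing table. A toy sketch; the symptom names and the default are illustrative assumptions, not any product's taxonomy:

```python
# Toy triage for the 4-layer model: route a failure symptom to the layer
# to debug first. Symptom strings are illustrative, not a real taxonomy.
LAYERS = ("instructions", "skills", "tools", "compute")

def triage(symptom: str) -> str:
    rules = {
        "misunderstood_goal": "instructions",  # wrong intent/constraints
        "bad_reasoning": "skills",             # understood, reasoned poorly
        "action_failed": "tools",              # knew what to do, couldn't act
        "slow_or_flaky": "compute",            # latency or instability
    }
    # Default to re-checking the goal before anything else (an assumption).
    return rules.get(symptom, "instructions")
```

The practical payoff of the table is the same as the post's: "should we switch models?" is only the answer when triage lands on the compute layer.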
Itamar Friedman@itamar_mar·
This is exactly what many people said about e-commerce in 1999. Lots of traffic. Lots of spend. Very little real output… at first. Most of it was tool-shaped noise. But underneath, real infrastructure was forming. The boom funded the buildout that later made… you name it. And instead of happening over 13 years (as it did for the internet and e-commerce), with AI it will happen over 7 (by 2030, starting from ChatGPT), unless you start from “Attention Is All You Need” (then 13 years for AI as well).
Will Manidis@WillManidis

x.com/i/article/2021…

Palo Galko@pgalko·
I think the author is saying the same thing 😉 "This is not to say that LLMs as such are worthless, quite the opposite. These models, at least from my view, will become very good in short order, and the careful deployment of them will have unbelievable effects on productivity and the real economy. But my narrow suggestion is that this diffusion into the real economy will take much longer, and look much different, than the current run on South Bay Best Buys for Mac Minis would have you believe."