ᘻᗩᖇᖽᐸ

1.6K posts

@youmustfight

🌲🏔️🌲 Joined September 2011
1.8K Following · 291 Followers
ᘻᗩᖇᖽᐸ retweeted
Sebastian Caliri @SebastianCaliri
Evidence-based medicine is a blessing of the 20th century. Evidence-based medicine is also a curse of the 20th century. Medical interventions are studied through randomized controlled trials, and those interventions are assessed for efficacy and safety on a population level. But no individual quite matches some blended average of every trial participant. Rather, everyone's biology is unique.

Sid Sijbrandij just presented the story of his cancer journey at the OpenAI Forum. When Sid ran out of evidence-based treatment options, he didn't accept the boundary but rather began treating his cancer like an engineer:

- multi-omic tumor profiling at extreme depth
- N=1 drug development (vaccines, TCR-T, radioligand therapy)
- parallel treatment strategies
- continuous measurement (ctDNA, single-cell, immune state) and refinement

Rather than protocol-based care, Sid built a learning loop. Maybe the future of medicine, in a world where gathering and interpreting data gets cheaper and cheaper, looks more like a loop.

Thanks for sharing @sytses and @jacobjstern!
ᘻᗩᖇᖽᐸ retweeted
Tim Miller @Timodc
Still trying to wrap my head around Kash firing a unit that specializes in Iranian counter-intel LAST WEEK -- when they knew war was coming -- because the agents were involved in the Trump classified docs case. ms.now/news/kash-pate…
ᘻᗩᖇᖽᐸ retweeted
Jonathan Weisman @jonathanweisman
In 2019, the State of New Mexico launched an investigation of Jeffrey Epstein’s Zorro Ranch, then the Trump 1 Justice Department demanded the state stop and turn it over to the feds. The probe then fizzled. Awesome reporting from @ReisThebault nytimes.com/2026/03/01/us/… via @NYTimes
ᘻᗩᖇᖽᐸ retweeted
Covie @covie_93
Good thing Congress isn't alive to see this.
ᘻᗩᖇᖽᐸ retweeted
Aaron Rupar @atrupar
Crockett: "I don't really know where we are in this country. Why are we even having this conversation? The United States is falling apart right now, partially because he's allowing for the killings of people in the middle of the street, but the other part of it is we have a 34 count convicted felon and there are people that are still shielding him from any type of accountability as it relates to a child sex trafficking ring. I don't understand why we are pretending like any of this is normal."
ᘻᗩᖇᖽᐸ retweeted
Andrew Leyden @PenguinSix
Former Washington Post reporters launch GoFundMe to help repatriate international staff who were fired and are now facing logistical challenges getting out of their countries and back home. gofundme.com/f/support-for-…
ᘻᗩᖇᖽᐸ retweeted
Hayden @the_transit_guy
Me finally being able to ride high speed rail in the United States at the age of 92:
ᘻᗩᖇᖽᐸ retweeted
Mohamad Safa @mhdksafa
Seems like the Epstein files are full of billionaires and not immigrants.
ᘻᗩᖇᖽᐸ retweeted
connor @ConnorEatsPants
anyone else starting to question President Trump’s leadership
ᘻᗩᖇᖽᐸ retweeted
connor @ConnorEatsPants
bitch @JDVance
ᘻᗩᖇᖽᐸ retweeted
Matt McDermott @mattmfm
ICE is now responsible for 66% of the homicides in Minneapolis this year.
ᘻᗩᖇᖽᐸ retweeted
Paul Graham @paulg
Jessica's friend's cousin, a random white guy in Minnesota, had his apartment invaded by ICE with drawn guns. They claimed they had a warrant, but they didn't. They were going door to door in his building.
ᘻᗩᖇᖽᐸ retweeted
Jamie Bonkiewicz @JamieBonkiewicz
Anyone else notice how the “fishing boats full of drugs” storyline disappeared the second Trump got the oil?
ᘻᗩᖇᖽᐸ retweeted
Kaivan Shroff @KaivanShroff
The real Trump Derangement Syndrome… 2016 2026
ᘻᗩᖇᖽᐸ retweeted
derek guy @dieworkwear
crazy to me that last year we said we don't have enough money to cure cancer but now we're trying to buy greenland
ᘻᗩᖇᖽᐸ retweeted
Eric Jang @ericjang11
This is an extremely beautiful plot because it sheds light on why scaling laws are so smooth, and reconciles empirical findings from both scaling laws and grokking. Even though the mean loss across all tasks (red line) decreases smoothly, we see that individual subtask losses drop in a much more phase-transition / grokking-like way. Easier subtasks are learned first; harder subtasks are learned much later. This is also consistent with many LLM evals where you see a sharp inflection as you scale, rather than smooth linear improvement.

It also suggests why robotics models have not yet seen sufficiently convincing scaling laws: even though we may collect enormous numbers of demonstrations and hours of data, the data do not contain enough diversity or "underlying subtask quanta" for the losses to "meld together" into a clean scaling law.

It remains a mystery why natural data, sorted by subtask difficulty, seems to form a Zipfian distribution. Are tasks we perceive as "difficult" actually nothing more than "infrequent" in our training distribution? There is a relationship between the length of the shortest program that can generate some data and the frequency of that data. If a subtask were *more* frequent, then you could actually shorten the program needed to generate it, thereby making the task easier from a Kolmogorov-complexity point of view.

And if you assume my previous claim about robotics scaling laws is true, what makes robotics data have such bad Zipfian coefficients? Does the coefficient only get "good" once you do the tokenizer + dedup + operationalize the data collection just right?

Does a hard subtask take more steps to learn because it requires representations from "easier" subtasks to be learned first? Representational dependency would enforce a strict ordering on the tasks that can be learned (task B cannot be learned until task A is mostly learned; task C cannot be learned until B is mostly learned). This could explain why there is an "ordering" of difficulty: it arises from the natural ordering of dependencies in distributed representations like DNNs.

It would be interesting to study this in a synthetic context, where one has the power to tune the frequency of data independently of its "minimum description length" as defined by P(Internet text), and see if it is possible to "learn hard tasks faster". There is a lot of potential in applying a better mechanistic understanding of how scaling laws form to improving the scaling properties of frontier models.
Eric J. Michaud @ericjmichaud_

How does scaling up neural networks change what they learn? Despite its importance, our understanding of this question remains nascent. I've written a long post reflecting on my model of neural scaling and its relationship to interpretability, etc.: ericjmichaud.com/quanta

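The "smooth mean, sharp subtasks" picture in the thread can be reproduced in a toy model. The sketch below is my own construction, not Michaud's actual code, and every parameter in it (the Zipf exponent, the number of subtasks, the sigmoid sharpness) is an assumption: give each subtask a Zipfian frequency, let each subtask's loss fall in a sharp, grokking-like transition once "compute" passes a threshold inversely proportional to that subtask's frequency, and the frequency-weighted mean loss still declines smoothly.

```python
import numpy as np

# Toy model (assumed, illustrative parameters throughout):
# subtask k has Zipfian frequency p_k ∝ k^(-alpha), and its loss
# drops from 1 toward 0 as a sharp sigmoid once compute exceeds a
# threshold ~ 1/p_k (rarer subtasks are learned later).
alpha = 1.5
K = 1000
k = np.arange(1, K + 1)
p = k ** (-alpha)
p /= p.sum()                       # Zipfian subtask frequencies
threshold = 1.0 / p                # learning threshold per subtask

compute = np.logspace(1, 8, 200)   # sweep of "scale"

# Per-subtask loss: sharp transition in log-compute (exponent 4
# controls sharpness). Shape: (n_compute, n_subtasks).
subtask_loss = 1.0 / (1.0 + (compute[:, None] / threshold[None, :]) ** 4)

# Population mean loss, weighted by subtask frequency.
mean_loss = subtask_loss @ p

# Each column is a near-step function, yet the weighted mean
# decreases smoothly and monotonically across the whole sweep.
assert np.all(np.diff(mean_loss) <= 0)
```

Plotting `mean_loss` against `compute` on log-log axes gives a smooth, roughly power-law-looking curve even though every individual `subtask_loss[:, k]` is close to a step, which is the reconciliation of scaling laws and grokking the post describes.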