Spencer Moore
@SponceyM
336 posts

Scientist @herasight
Woodinville, WA · Joined January 2022
1.4K Following · 1.9K Followers

Pinned Tweet
Spencer Moore @SponceyM ·
Today we reveal CogPGT, the world’s most powerful genetic predictor of IQ. We achieve a correlation with IQ of 0.51 (0.45 within-family). Herasight customers can boost the expected IQ of their children by up to 9 points by selecting the embryo with the highest CogPGT score. 🧵
185 replies · 275 reposts · 2.3K likes · 1.8M views
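For intuition on where a figure like "up to 9 points" can come from: under a simple order-statistics model, picking the top of n embryos by a predictor with within-family correlation r raises expected IQ by roughly r · σ_wf · E[max of n standard normals]. Below is a minimal Monte Carlo sketch, not Herasight's published methodology; every parameter is an illustrative assumption except the tweet's r = 0.45 (σ_wf ≈ 10.6 points corresponds to a population SD of 15 and a sibling correlation of ~0.5).

```python
import numpy as np

# Monte Carlo sketch of the expected gain from picking the top-scoring
# embryo. All numbers are illustrative assumptions, not Herasight's model:
# within-family predictor-IQ correlation r = 0.45 (from the tweet),
# within-family IQ standard deviation sigma_wf ~ 10.6 points, and a batch
# of n viable embryos per family.

rng = np.random.default_rng(0)

def expected_gain(r=0.45, sigma_wf=10.6, n=10, families=200_000):
    # Draw (score, IQ deviation) per embryo as bivariate normal with
    # correlation r; units are within-family standard deviations.
    score = rng.standard_normal((families, n))
    noise = rng.standard_normal((families, n))
    iq_dev = r * score + np.sqrt(1 - r**2) * noise  # corr(score, iq_dev) = r
    # Select the highest-scoring embryo per family and average its IQ
    # deviation, converted to IQ points.
    best = np.take_along_axis(iq_dev, score.argmax(axis=1)[:, None], axis=1)
    return sigma_wf * best.mean()

for n in (2, 5, 10, 20):
    print(f"n={n:2d} embryos: expected gain ~ {expected_gain(n=n):.1f} IQ points")
```

Under these assumptions the expected gain comes out to roughly 2.7, 5.5, 7.3, and 8.9 points for 2, 5, 10, and 20 embryos, so a headline figure near 9 points would correspond to fairly large embryo batches.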
Spencer Moore retweeted
Josh Barzon @JoshuaBarzon ·
Ethnic Composition of the Middle East
359 replies · 409 reposts · 2.6K likes · 383.1K views
Spencer Moore retweeted
Herasight @herasight ·
If polygenic embryo screening is going to exist, it should be grounded in rigorous science, open discussion with the genetics community, and continual scientific validation. That is the approach we take at Herasight.
1 reply · 3 reposts · 20 likes · 449 views
Spencer Moore @SponceyM ·
This was a great conference! It included a panel of platform talks related to embryo screening, and I’m glad we could present our scientific work substantiating the efficacy of PGT-P.
Herasight @herasight

This week, the Herasight team presented three papers at the ACMG Annual Meeting in Baltimore. The work makes three contributions to the science of PGT-P:
1. Validation of polygenic predictors
2. Type 1 diabetes risk modeling
3. Embryo genome imputation 🧵

0 replies · 0 reposts · 5 likes · 159 views
Spencer Moore retweeted
Herasight @herasight ·
Dream team❤️
1 reply · 8 reposts · 30 likes · 2K views
Spencer Moore @SponceyM ·
@alexolegimas It's probably a helpful truism for many. OTOH to what extent do you think the need to get licensed in many professions undermines the point, especially for couples just starting families?
0 replies · 0 reposts · 3 likes · 137 views
shako @shakoistsLog ·
*explaining AI to distant ancestors* "so, the thing is it might be really good. but we don't know. a lot of us are scared we might be in the permanent underclass. we're all losing sleep over it, it's very stressful" "i'm very hungry, do you have any food?"
3 replies · 5 reposts · 160 likes · 3.7K views
Spencer Moore retweeted
aviel @aviel ·
Given that we're now hours away from a vote that will forever change Washington State, I might as well do some truthmaxxing. The income tax is not a doomsday job destroyer. Those jobs are going away whether or not this bill passes. The only real choice is whether we manage the transition well or employ bad tax policy to make it near impossible.

With AI, what's collapsing isn't just "knowledge work", it's "process work". A huge number of jobs (and even companies) existed to serve workflows, not outcomes. In the age of software, busy work changed and knowledge work expanded. In the age of AI, busy work will be gone and knowledge will be democratized. That's the nuance too many people are missing... and it's huge.

This transition is going to be brutal. The workforce will be smaller, and many careers built around managing process rather than producing outcomes are already ending. This is happening now. And cities like Seattle are especially exposed. Bad policy has already begun pushing out innovators, and now our biggest employers are no longer AI-durable businesses but universities and healthcare systems, which have a huge concentration of process-heavy work that is under serious pressure.

I still think that there's a way through this and that the democratization of knowledge work will unlock a new wave of innovation, but places that try to tax, regulate, or moralize their way around this instead of helping people transition are going to get crushed. That transition is also NOT through increased government spending from higher taxes. Let's please not get crushed.
12 replies · 15 reposts · 124 likes · 10.7K views
Spencer Moore retweeted
Richard Ngo @RichardMCNgo ·
I don’t think that my own friendship circles are very responsible for the choice to double down on mass migration in particular—that was made by a pretty small group of elites. However I do think that many in my circles (and my past self!) are morally culpable for their complicity in large-scale preference falsification in favor of the ideological tenets of globalism. For example, many young girls were raped just down the road from Oxford University. Yet Oxford elites (including EA founders and my classmates) largely turned a blind eye because it was politically inconvenient (and because many elites had internalized deep contempt for the native British population, as we saw during the Brexit campaign). Even a few such elites being virtuous enough to investigate further could have led them to uncover the enormous scale of state failure and plausibly accelerated the national reckoning with it by years.
10 replies · 25 reposts · 284 likes · 12.4K views
Spencer Moore retweeted
Jason Furman @jasonfurman ·
Are we finally seeing AI in the productivity data? A big upward revision to earlier data and a strong Q4 bring us 2.2% above CBO's pre-pandemic forecast. Annual rates:
1 year: 2.8%
2 years: 2.5%
6 years: 2.2%
27 replies · 116 reposts · 512 likes · 210.1K views
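For readers who want the arithmetic behind multi-horizon rates like these: annualized growth over a horizon is (level ratio)^(1/years) − 1, and a gap versus a forecast is a ratio of levels. A small sketch with made-up index values chosen only to mimic the tweet's rates, not actual BLS or CBO data:

```python
# Illustrative arithmetic only; the index values below are hypothetical,
# chosen to reproduce rates like those in the tweet.

def annualized(index_now, index_then, years):
    """Average annual growth rate between two index levels."""
    return (index_now / index_then) ** (1 / years) - 1

# Hypothetical productivity index levels (2019 = 100).
index = {2019: 100.0, 2023: 108.4, 2024: 110.8, 2025: 113.9}

print(f"1-year rate: {annualized(index[2025], index[2024], 1):.1%}")
print(f"2-year rate: {annualized(index[2025], index[2023], 2):.1%}")
print(f"6-year rate: {annualized(index[2025], index[2019], 6):.1%}")

# Gap versus a forecast that assumed, say, 1.8%/yr from the same base:
forecast_2025 = index[2019] * 1.018 ** 6
print(f"level gap vs forecast: {index[2025] / forecast_2025 - 1:.1%}")
```

The point of the compounding: a modest-sounding difference in annual rates (2.2% realized vs. ~1.8% assumed) accumulates into a level gap of a couple of percent after six years.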
Spencer Moore retweeted
Jim O’Neill @regardthefrost ·
I’m extremely honored to be nominated by President Trump to serve as Director of the National Science Foundation.

Heroic scientists have always challenged consensus to advance the frontiers of knowledge. Recently, many institutions have weakened academic freedom and lost the trust they once enjoyed. Yet across our country, a new golden age of discovery is dawning. Information is open source and debate is public.

The marketplace of ideas is not an efficient market. Finding and funding independent thinkers and builders has taught me to eliminate bottlenecks and favor rigorous science that replicates. Private funders are developing frontier models and useful technology. Government should take bigger financial risks to pose and answer deeper questions.

NSF’s scientists and staff have built something worth strengthening. Working together, scientists, engineers, investors, research institutions, and businesses can support American genius, enhance national security, enrich our economy, and improve our quality of life.

Entropy is on the march and China is not waiting.
196 replies · 293 reposts · 2.5K likes · 436K views
Spencer Moore retweeted
Daniel Litt @littmath ·
Some thoughts on AI and mathematics, inspired by "First Proof."
48 replies · 200 reposts · 1.1K likes · 330.9K views
Spencer Moore @SponceyM ·
Coming soon to CogPGT by @herasight:
- peer-reviewed publication of v1 results
- v2 model with more variants and improved performance
- a third cohort for validation and analysis
Exciting times! Check back for more on CogPGT developments later this month.
Spencer Moore @SponceyM

Today we reveal CogPGT, the world’s most powerful genetic predictor of IQ. We achieve a correlation with IQ of 0.51 (0.45 within-family). Herasight customers can boost the expected IQ of their children by up to 9 points by selecting the embryo with the highest CogPGT score. 🧵

0 replies · 7 reposts · 28 likes · 2.6K views
Spencer Moore retweeted
Jeremy @jeremyli__ ·
Today we’re announcing an algorithmic breakthrough. Herasight’s ImputePGTA algorithm has enabled couples around the world to access polygenic embryo testing from routine IVF data (PGT-A). Now it yields substantially higher accuracy, especially for underrepresented ancestries.
8 replies · 36 reposts · 146 likes · 44.5K views
Spencer Moore retweeted
Noam Brown @polynoamial ·
There have been fair questions on whether LLM contributions to STEM are overhyped, but I've spoken with physicists about this result and they've told me it is a truly significant research contribution, roughly at the level of a solid journal paper, and GPT-5.2 played a key role.
OpenAI @OpenAI

GPT-5.2 derived a new result in theoretical physics. We’re releasing the result in a preprint with researchers from @the_IAS, @VanderbiltU, @Cambridge_Uni, and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions. openai.com/index/new-resu…

52 replies · 110 reposts · 1.7K likes · 182.1K views
Spencer Moore retweeted
Mark Gadala-Maria @markgadala ·
Seedance 2 is going insanely viral and threatening to dethrone Hollywood. 15 wild examples you have to see to believe, all 100% AI: 1) Titanic alternate ending, Leo is saved 😂
182 replies · 478 reposts · 3.8K likes · 754.8K views
Spencer Moore retweeted
Samuel Hammond 🦉 @hamandcheese ·
In-context learning is (almost) all you need.

The KV cache is normally explained as a content-addressable memory, but it can also be thought of as a stateful mechanism for fast weight updates. The model's true parameters are fixed, but the KV state makes the model behave *as if* its weights updated conditional on the input. In simple cases, a single attention layer effectively implements a one-step gradient-like update rule.

This should change how you think about the learned parameters in a transformer. They are more like the compiler for an update rule / basis vectors or fixed points for fast meta-learning, not just static features.

This is still short of true continual learning, though, since the state is not persistent across sessions, and the updates are bottlenecked by context length and model capacity, especially depth, which limits how many iterative refinement steps the model can express.

Attempts to solve for continual learning will treat KV as a fast episodic state, then learn a consolidation operator / hypernetwork that compiles that state into a small parameter delta, so the next session starts from an empty cache but a slightly updated model, which you can validate matches the old model using probes. This is nontrivial, though, since the KV state is enormous, highly redundant, and not uniquely mappable to a single parameter delta, which can lead to overwriting skills or other kinds of interference.

You also don't need the model to effectively store full conversation histories as long-term memories. So you'd instead first do context compression for just the stuff that's worth retaining, and then do some kind of constrained update like LoRA or knowledge distillation. This is basically the function of sleeping in humans, and thus why you get better / more intuitive at skills you practice after a night's sleep.

In practice, though, this comes pretty close to simply having a library of skills to inject into context on the fly. The biggest downside is that the model can't get cumulatively better at a skill in a compounding way. But that's in a sense what new model releases are for. So long as companies keep putting out new and smarter base models every few months, there may thus not be a huge amount to gain from true continual learning over and above in-context learning.

It also sidesteps the thorny privacy issues implied by models that form persistent memories of users' data, not to mention the arguably greater moral patienthood of AIs that learn from unique experience trajectories to form the sort of continuity of identity we associate with persons.
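The fast-weight reading of attention has a well-known exact form for linear (softmax-free) attention: accumulating rank-1 outer-product updates W += v_t k_tᵀ and then reading out with W @ q gives the same output as attending over the full KV cache. A minimal numpy sketch of that equivalence, with all shapes and names purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # head dimension
T = 16  # tokens seen so far

# Per-token keys, values, and a new query (post-projection activations).
K = rng.standard_normal((T, d))
V = rng.standard_normal((T, d))
q = rng.standard_normal(d)

# View 1: linear attention over the KV cache (no softmax, so the
# equivalence holds exactly): output = sum_t (q . k_t) v_t
out_cache = (K @ q) @ V

# View 2: "fast weights": each token performs a rank-1 update
# W += v_t k_t^T, a one-step Hebbian/gradient-like weight change;
# reading is just a matrix-vector product with the updated weights.
W = np.zeros((d, d))
for k_t, v_t in zip(K, V):
    W += np.outer(v_t, k_t)
out_fast = W @ q

assert np.allclose(out_cache, out_fast)
print("cache view == fast-weight view:", np.allclose(out_cache, out_fast))
```

Softmax attention breaks the exact correspondence, but the intuition that cached keys and values act like input-conditioned weight updates is the basis of the "transformers as fast weight programmers" line of work Hammond is gesturing at.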
Dean W. Ball @deanwball

Codex 5.3 and Opus 4.6 in their respective coding agent harnesses have meaningfully updated my thinking about 'continual learning.' I now believe this capability deficit is more tractable than I realized with in-context learning.

One way 4.6 and 5.3 alike seem to have improved is that they are picking up progressively more salient facts by consulting earlier codebases on my machine. In short, both models notice more than they used to about their 'computational environment', i.e. my computer. In part this is because the computational environment has itself become richer *because of coding agents*. Six months ago there were perhaps a dozen codebases on my machine; today there are hundreds, many of them involving the sophisticated orchestration of complex software systems. So there is more interesting stuff for agents to notice than there would have been even just a few months ago.

Of course, another reason models notice more is that they are getting smarter. When I ask 4.6 in particular to do some complex project, it will look for times when I (/my coding agents) have tackled similar problems, made similar architectural/infrastructural decisions, or even drawn on the same datasets. It will say things like, "I noticed, on this unrelated project from two months ago, that you ran into a problem here because of [e.g.] a non-obvious data preprocessing step required for using this dataset with Tool Y. Since our plan is to use Tool Y again for this project, I'll keep this in mind when I build the data processing pipeline."

This sort of thing would occasionally happen with 4.5 and 5.2, especially if I told them to consult related projects, but they did not usually do this organically. Even when the earlier agents did this, they rarely extracted as salient an insight. Some of the insights I've seen 4.6 and 5.3 extract are just about my preferences and the idiosyncrasies of my computing environment. But others are somewhat more like "common sets of problems in the interaction of the tools I (and my models) usually prefer to use for solving certain kinds of problems." This is the kind of insight a software engineer might learn as they perform their duties over a period of days, weeks, and months. Thus I struggle to see how it is not a kind of on-the-job learning, happening entirely within the 'current paradigm' of AI. No architectural tweaks, no 'breakthrough' in 'continual learning' required.

This seems like a positive feedback loop from agent adoption: more people using coding agents clearly means (a) more examples of in-the-wild software engineering for labs to use for post-training and (b) more examples of the user's prior coding agent projects as a form of in-context learning about the user's computing environment, preferences, etc. There are many directions you can imagine labs taking this positive feedback loop. Perhaps this helps to explain why some lab employees have claimed that they expect continual learning to be largely solved by the end of this year. I am not so sure how satisfyingly in-context learning/memory will actually solve continual learning (more sample-efficient learning algorithms seem straightforwardly better), but I am prepared to believe this will solve a lot.
I've already seen performance improvements from relatively modest examples of this enhanced in-context learning, which again I doubt is the result of some architectural tweak but is simply the product of the models getting smarter *and* diffusion meaning that the models have richer data resources to mine, both in training and at inference time. Overall, 4.6 and 5.3 are both astoundingly impressive models. You really can ask them to help you with some crazy ambitious things. The big bottleneck, I suspect, is users lacking the curiosity, ambition, and knowledge to ask the right questions.

7 replies · 13 reposts · 150 likes · 29.3K views
Spencer Moore retweeted
AnechoicMedia @AnechoicMedia_ ·
I think the decline of thrift is part of the loss of a shared social script for "average people" that revolves around middle-class family formation. Most working people couldn't expect dramatic income gains over their life and so had to build wealth slowly through acts of self-denial.

For those raised online, social comparison and visible inequality have instilled an all-or-nothing mindset about wealth. You're either one of the people destined to 10x your income or you're not. Getting rich makes thrift not matter, and if you're doomed to not be rich, then thrift also doesn't matter because nobody becomes rich through thrift. Thrift only matters if your goal is to be a slightly more respectable person of the same class.

But what are you saving money for today? Probably not to have a family; society isn't demanding you have kids, you don't attend church, there's no tribe that pats you on the back for doing the little things right every day. Home ownership, already seen as an impossible goal in top cities, has also become sharply less affordable in recent years. On the other end, there isn't a penalty for going bust, either; you're never going to be allowed to starve or be refused medical treatment. And being carefree has its own social status, at least while young. So why not enjoy all the luxuries that are within reach and that everyone around you also seems to be enjoying?

Most of my life I adhered to a strict moralizing attitude about personal finance and making the correct decisions to be better than everyone else. This helped avoid some personal crises, but otherwise, bringing your future retirement a few years closer doesn't change life much. I also frequently failed to live up to my own standards and spent more on eating out, then felt guilty for it.

In the end, none of this mattered. Switching jobs a couple of times made a bigger impact on my finances than a decade of buying store-brand milk and bread. So either I was always going to make more money, in which case thrift just made me a neurotic weirdo for no reason, or I wasn't, and I was denying myself for no proximate social reward.
68 replies · 87 reposts · 1.6K likes · 234.9K views