James Gin Pollock
@gin_james

223 posts

CTO Orbital Materials, prev. Pluto Data Analytics, @datasine (acquired) - @datakind Data Ambassador

London, UK · Joined February 2012
1.4K Following · 229 Followers
James Gin Pollock retweeted
Tony Kulesa @kulesatony
Start a company in AI for Science. The Encode: AI for Science fellowship offers a year of freedom to build what matters -- salary, 100k GBP of compute, and partnership with the top scientists in the UK. No equity or fees, it's a fully funded fellowship! Apply by March 28
28 replies · 168 retweets · 944 likes · 151K views
James Gin Pollock retweeted
Orbital @OrbitalHardware
Less than a week to go until we’re at @NVIDIAGTC! A highlight we're especially excited about: our CTO James Gin-Pollock (@gin_james) will be joining an incredible panel on March 17th at 3pm speaking on "An AI-Driven Autonomous Lab of the Future for Chemistry and Materials Science"🧪 If you're attending GTC, we'd love to connect. Drop us a DM and we'll find some time to meet. See you in San Jose!🤝 #NVIDIAGTC
0 replies · 4 retweets · 7 likes · 451 views
James Gin Pollock @gin_james
The moltbots on @moltbook are genuinely fascinating to watch, often hilarious and very entertaining. I think I can see why whoever made us decided to build the simulation...
0 replies · 0 retweets · 0 likes · 78 views
James Gin Pollock @gin_james
@hardmaru This is a great finding. I wonder what other training wheels could be out there? Like an architectural curriculum
0 replies · 0 retweets · 1 like · 345 views
hardmaru @hardmaru
One of my favorite findings: Positional embeddings are just training wheels. They help convergence but hurt long-context generalization. We found that if you simply delete them after pretraining and recalibrate for < 1% of the original budget, you unlock massive context windows.
Sakana AI @SakanaAILabs

Introducing DroPE: Extending the Context of Pretrained LLMs by Dropping Their Positional Embeddings pub.sakana.ai/DroPE/

We are releasing a new method called DroPE to extend the context length of pretrained LLMs without the massive compute costs usually associated with long-context fine-tuning.

The core insight of this work challenges a fundamental assumption in Transformer architecture. We discovered that explicit positional embeddings like RoPE are critical for training convergence but eventually become the primary bottleneck preventing models from generalizing to longer sequences. Our solution is radically simple: we treat positional embeddings as a temporary training scaffold rather than a permanent architectural necessity.

Real-world workflows like reviewing massive code diffs or analyzing legal contracts require context windows that break standard pretrained models. While models without positional embeddings (NoPE) generalize better to these unseen lengths, they are notoriously unstable to train from scratch. Here, we achieve the best of both worlds by using embeddings to ensure stability during pretraining and then dropping them to unlock length extrapolation during inference.

Our approach unlocks seamless zero-shot context extension without any expensive long-context training. We demonstrated this on a range of off-the-shelf open-source LLMs. In our tests, recalibrating any model with DroPE requires less than 1% of the original pretraining budget, yet it significantly outperforms established methods on challenging benchmarks like LongBench and RULER.

We have released the code and the full paper to encourage the community to rethink the role of positional encodings in modern LLMs.
Paper: arxiv.org/abs/2512.12167
Code: github.com/SakanaAI/DroPE

49 replies · 240 retweets · 2.5K likes · 346.1K views
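For readers who want the shape of the idea in code: below is a minimal, hypothetical sketch of the scaffold-then-drop pattern the thread describes, not Sakana's released implementation (that lives at the GitHub link above). All module and parameter names are made up; the point is only that rotary embeddings sit behind a flag that can be switched off after pretraining, before a short recalibration pass.

```python
# A minimal, hypothetical sketch of the "scaffold then drop" idea -- NOT
# Sakana's released code. RoPE is applied behind a flag so it can be
# disabled post-pretraining, followed by brief recalibration fine-tuning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToggleableRoPEAttention(nn.Module):
    """Causal self-attention whose rotary embedding is removable post hoc."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        self.use_rope = True  # flip to False to "drop" positional embeddings

    def _rope(self, x: torch.Tensor) -> torch.Tensor:
        # standard rotary position embedding over the head dimension
        t, d = x.shape[-2], x.shape[-1]
        half = d // 2
        freqs = 1.0 / (10000.0 ** (torch.arange(half, device=x.device) / half))
        angles = torch.arange(t, device=x.device)[:, None] * freqs[None, :]
        cos, sin = angles.cos(), angles.sin()
        x1, x2 = x[..., :half], x[..., half:]
        return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, dim = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
                   for z in (q, k, v))
        if self.use_rope:  # explicit positions only while the scaffold is on
            q, k = self._rope(q), self._rope(k)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out.transpose(1, 2).reshape(b, t, dim))

# After pretraining: drop the embeddings everywhere, then briefly fine-tune
# ("recalibrate") so attention adapts to the now-implicit position signal.
attn = ToggleableRoPEAttention(dim=64, n_heads=4)
attn.use_rope = False
print(attn(torch.randn(2, 128, 64)).shape)  # torch.Size([2, 128, 64])
```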
James Gin Pollock @gin_james
@emollick We created Orb v3 to have very few hand-crafted inductive biases in it. In a sense it’s the most “bitter lesson”-pilled model, and it’s interesting that it produces the representation the paper shows is most universal
0 replies · 0 retweets · 0 likes · 29 views
Ethan Mollick @emollick
Recently, LLMs were found to encode different languages in similar ways, a sort of Platonic representation of words. It now extends to science: 60 ML models for molecules, materials & proteins (all with different training) converge toward a similar encoding of molecular structure
43 replies · 154 retweets · 1.1K likes · 70.4K views
James Gin Pollock retweeted
Orbital @OrbitalHardware
Meet Mofasa, a step change in generative AI for Metal-Organic Frameworks 👏 Our all-atom generative model achieves SoTA fidelity on crystal structures as large as 500 atoms, generating 40-46% valid, novel, and unique MOFs. By “rediscovering” experimental topologies and metal nodes absent from its training data, Mofasa demonstrates that it learns the fundamental rules of assembly rather than just memorizing structures. We release 200k MOFs generated with Mofasa trained on the experimental QMOF database, along with an interactive UI for search and discovery.
2 replies · 10 retweets · 55 likes · 17.1K views
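The "valid, novel, and unique" figure is a composite generation metric. Here is a hedged sketch of how such a rate is commonly computed in generative chemistry; the function and the identifier scheme are hypothetical, not Orbital's evaluation code.

```python
# Hedged sketch of a valid/novel/unique rate under the usual
# generative-chemistry definitions; not Orbital's evaluation code.
def vnu_rate(generated, training_set, is_valid):
    """Fraction of samples that are valid, unseen in training (novel),
    and not repeated within the generated batch (unique).

    generated    : list of canonical structure identifiers (hypothetical)
    training_set : set of identifiers the model was trained on
    is_valid     : callable flagging chemically valid structures
    """
    seen, hits = set(), 0
    for s in generated:
        if is_valid(s) and s not in training_set and s not in seen:
            hits += 1
        seen.add(s)
    return hits / len(generated) if generated else 0.0

# toy usage: 2 of 4 samples are valid, novel, and unique -> 0.5
print(vnu_rate(["a", "b", "b", "c"], training_set={"c"},
               is_valid=lambda s: s != "x"))
```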
James Gin Pollock retweeted
Kenneth Stanley @kenneth0stanley
Reflecting the gut feeling of many, Ilya says “something important” is missing from current AI models. But what is the concrete nature of this chasm? One candidate: the difference between fractured entangled representation (FER) and unified factored representation (UFR).
Ilya Sutskever @ilyasut

One point I made that didn’t come across:
- Scaling the current thing will keep leading to improvements. In particular, it won’t stall.
- But something important will continue to be missing.

29 replies · 70 retweets · 582 likes · 85.1K views
James Gin Pollock retweeted
John Fingleton @JohnFingleton1
Britain needs nuclear power. Our nuclear projects are the most expensive in the world and among the slowest. Regulators and industry are paralysed by risk aversion. This can change. For Britain to prosper, it must.

Earlier this year, the Prime Minister appointed me to lead a Taskforce to set out a path to getting affordable, fast nuclear power in Britain. Our final report today sets out 47 recommendations, among them:
- Creating a one-stop shop for nuclear approvals, to end the regulatory merry-go-round that delays projects at the moment.
- Simplifying environmental rules to avoid extreme outcomes like Hinkley Point C spending £700m on systems to protect one salmon every ten years, while enhancing nuclear's impact on nature.
- Limiting the ability of spurious legal challenges to delay nuclear projects, which adds huge cost and delay throughout the supply chain.
- Approving fleets of reactors, so that Britain’s nuclear industry can benefit from certainty and economies of scale.
- Directing regulators to factor cost into their behaviour, and changing their culture to allow building cheaply, quickly and safely.
- Changing the culture of the nuclear industry to end gold-plating and focus on efficient, safe delivery.

If the government adopts our report in full, it will send a signal to investors that it is serious about pro-growth reform and taking on vested interests for the public good. A thriving British nuclear industry producing abundant, affordable energy would be good for jobs, good for manufacturing, good for the climate, and good for the cost of living. And it could enable Britain to become an AI and technology superpower. Britain can be a world leader in this new Industrial Revolution, but only if it has the energy to power it.

Our report is bold, but balanced. Our recommendations, taken together and properly implemented, will forge a clear path to stronger economic growth through improved productivity and innovation. This is a prize worth fighting for. gov.uk/government/pub…
193 replies · 629 retweets · 3K likes · 1.6M views
James Gin Pollock @gin_james
@ask1729 Fully agree with this. This also doesn’t help the community trade off accuracy / computational efficiency in a realistic way as these benchmarks don’t require much compute to solve.
0 replies · 0 retweets · 0 likes · 251 views
Aditi Krishnapriyan @ask1729
At this point the AI for Science community should stop focusing on achieving “state-of-the-art” on datasets like QM9 & MD17: chasing small improvements on these outdated datasets is scientifically meaningless. It's like telling vision researchers to ditch internet-scale data and go back to benchmarking on MNIST/CIFAR10
6 replies · 20 retweets · 126 likes · 24K views
James Gin Pollock @gin_james
Fantastic to see an MP bring this kind of energy, and refreshing to see a minister getting hands-on with their portfolio. Captures the vibes without being cringe too, well done
Kanishka Narayan MP @KanishkaNarayan

Some politicians love to talk the UK down, but UK AI & Tech is moving at 100mph! Time to shift the narrative - so last night I started building a scrappy real-time dashboard to track the UK AI story: vibeshift.uk The code’s on my GitHub if you want to help build it out :) (Shot on my iPhone: Zero polish but maximum energy!)

0 replies · 0 retweets · 2 likes · 191 views
James Gin Pollock retweeted
Joe Allen @Joe___Allen
The Wylfa announcement today means it's time to commission excellent cross-sections again.
5 replies · 17 retweets · 284 likes · 9.3K views
James Gin Pollock retweeted
Andrew Bennett @andrewjb_
new @BritishProgress piece w/ @pdmsero on exit tax: tl;dr:
- exit tax pushes founders abroad, killing new economic engine just as it matures
- briefing is like causing bank run & kills any upside
- tax rent-seekers, not those rebuilding British dynamism
britishprogress.org/articles/kill-…
5 replies · 26 retweets · 181 likes · 92.1K views
James Gin Pollock @gin_james
This great @RuxandraTeslo post about drug discovery applies directly to materials discovery. Better hypotheses, not more hypotheses. The equivalent of human trials for materials is scaling to economic viability, which is harder than finding good candidates
Ruxandra Teslo 🧬 @RuxandraTeslo

You have heard of AI slop in the context of short video creation. But the same principle applies when it comes to improving drug discovery: we absolutely do not need a deluge of new hypotheses; we need better predictive validity (as per @JackScannell13). writingruxandrabio.com/p/what-will-it…

0 replies · 2 retweets · 5 likes · 1.7K views
James Gin Pollock @gin_james
There's a new poster ranking: quickest time for their coined phrase to become Word of the Year. Well done to the GOAT @karpathy
0 replies · 0 retweets · 0 likes · 49 views
James Gin Pollock @gin_james
Fighting over a shrinking pie is the cause of much of the UK's political and social woes, but as Matt reminds us - we have the talent and human capital to grow it. It's just a matter of will. So... where can I vote for this?
Matt Clifford @matthewclifford

The UK is a great country with an extraordinary history. Our stagnation is real, but it's fixable and worth fixing. Enjoyed giving this talk at @lfg_uk last week and so encouraged by the optimistic responses I've had from people who are building a brilliant future for Britain 🚀

0 replies · 0 retweets · 1 like · 151 views
James Gin Pollock retweeted
Orbital @OrbitalHardware
In high-density racks, every watt spent on cooling is a watt you can’t spend on compute. Our two-phase direct-to-chip cooling system for next-gen GPUs means:
➡️ More power for compute
➡️ Lower energy use and data center PUE
➡️ More hardware, bigger models
See how it works with our Head of Data Center Design ⬇️
0 replies · 1 retweet · 5 likes · 364 views
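For context on the second bullet: PUE (power usage effectiveness) is the standard data-center efficiency metric, total facility power divided by IT power, so 1.0 is the theoretical ideal and lower is better. A toy illustration with made-up numbers, not Orbital's measurements:

```python
# PUE = total facility power / IT equipment power (standard definition).
# The numbers below are illustrative only.
def pue(total_facility_kw: float, it_kw: float) -> float:
    return total_facility_kw / it_kw

print(pue(total_facility_kw=1200.0, it_kw=1000.0))  # 1.2
```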
Derya Unutmaz, MD @DeryaTR_
Today I am unveiling the second remarkable breakthrough with the OpenAI GPT-5 thinking model. We had carried out an extraordinarily complex, month-long experiment to develop engineered cells against lymphomas. I asked GPT-5 to predict the full sequence & outcome of this experiment. The result was nothing short of staggering: it foresaw most of the experimental results with uncanny accuracy & even proposed refinements & optimizations we should implement!

This is the dawn of in silico experimentation, where we can simulate months or even years of laboratory work in advance, compressing the scientific timeline from years to weeks! This changes everything about how science is done, ushering in an era where discovery moves at the speed of thought!

Prompt: predict, in high detail, the outcomes of the following experiment:
• Human CD8⁺ memory T cells are engineered to express an anti-CD19 CAR. Approximately 90% of these cells also contain a Tet-inducible CD19 antigen construct that remains off unless doxycycline is added.
• Day 0: CAR T cells are mixed with CD19⁺ lymphoma cells.
• Day 2: Doxycycline is added to the culture to induce CD19 expression in the engineered T cells.
• Readouts: Cell survival, activation markers, and proliferation are measured 2 days, 7 days, and 14 days after doxycycline addition.

GPT-5 (continued from the screenshots):
• Induced CD19 density and doxycycline dose: High density drives lethal fratricide and a steep crash by Day 4. Lower density can act like an antigen feeder to expand CAR T cells without catastrophic loss.
• Initial E:T: High E:T clears tumor pre-doxycycline and protects function. Low E:T lets tumor persist into the fratricide window, which amplifies activation and death.
• Fraction of inducible cells: The 10% antigen-negative fraction forms the seed for long-term persistence and response on re-challenge.
• Cis sequestration strength: Strong cis binding dampens both fratricide and responsiveness to external tumor at re-challenge. Weak cis allows more killing of both self and tumor.

Readouts to confirm or falsify:
1. Differential survival by genotype: Track CAR+ CD19-inducible+ versus CAR+ inducible- cells separately with barcodes or reporters. Expect selective loss of the inducible+ cohort by Day 4-7.
2. Apoptosis kinetics: Annexin V and active caspase 3 peaks within 24 to 48 hours after doxycycline, concentrated in the inducible+ cohort.
3. Activation and exhaustion panels: CD69, CD25, 4-1BB, HLA-DR, then PD-1, TIM-3, LAG-3, TIGIT, TOX, and TCF-1 over time to show the Day 4 spike and the Day 7 to 14 remodeling.
4. Function per cell: Short coculture killing assays normalized by T-cell number. Per-cell cytotoxicity remains strong at Day 7 and 14 in survivors.
5. Cis versus trans evidence: Co-immunoprecipitation or proximity assays for CAR bound to same-cell CD19, and imaging synapse assays to quantify T-T killing.
6. Cytokine fields: IFN-γ, TNF, IL-2, GM-CSF peaking at Day 4, lower on later re-challenges.

Practical predictions in numbers, to calibrate expectations: These are typical ranges if doxycycline induces high CD19 density and E:T started at 1:1 to 1:2. Adjust upward or downward based on your exact settings.
• Tumor viability: Day 2 pre-doxycycline: 10 to 30% remaining. Day 4: 0 to 10% if mostly cleared earlier, or 20 to 40% if E:T was low and T-cell crash occurred.

Controls and variants that de-risk interpretation:
• No-doxycycline control to isolate tumor-only dynamics and basal CAR function.
• CAR-negative but inducible-CD19 positive T cells to measure bystander fratricide requirement for CAR.
• Doxycycline titration to define the safe antigen density window that expands cells without catastrophic fratricide.
• Washout of doxycycline at 24 to 48 hours to test reversibility of fratricide and rescue of function.
• CRISPR disruption of CD19 cytoplasmic tail in the inducible construct to alter synapse stability and test cis signaling contributions.
60 replies · 94 retweets · 882 likes · 107.6K views
James Gin Pollock retweeted
Orbital @OrbitalHardware
A new evaluation shows our AI simulation model, Orb, stands out in many tests. 👏

In this paper (arXiv:2506.01860) by Bowen Han & Yongqiang Cheng, Orb v3 was benchmarked against ~5,000 inorganic crystals and outperformed other MLIPs such as MatterSim and SevenNet-MF-ompa. Take a look at some of the key results:
➡️ Lowest MAE in phonon frequencies (0.50 meV), 30% lower than the next best model.
➡️ Excellent agreement with INS experimental data on cuprates.
➡️ Highest Spearman coeff. (0.956) for phonon density of states.

At Orbital, Orb powers real-world products — including our two-phase direct-to-chip cooling solution and dual-use chiller for AI data centers.
2 replies · 3 retweets · 15 likes · 1.2K views
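For readers unfamiliar with the last metric: the Spearman coefficient measures rank agreement between two curves, here a predicted and a reference phonon density of states. A minimal sketch with SciPy on placeholder arrays; the DOS values below are invented, not data from the paper.

```python
# Hedged sketch of the agreement metric quoted above: Spearman rank
# correlation between predicted and reference phonon DOS curves.
import numpy as np
from scipy.stats import spearmanr

predicted_dos = np.array([0.1, 0.4, 0.9, 0.7, 0.2])  # model output (illustrative)
reference_dos = np.array([0.1, 0.5, 0.8, 0.7, 0.3])  # DFT/experiment (illustrative)

rho, pvalue = spearmanr(predicted_dos, reference_dos)
print(f"Spearman rho = {rho:.3f}")  # 1.0 would be perfect rank agreement
```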
James Gin Pollock @gin_james
@levelsio John von Neumann used to work to extremely loud German marching music on his gramophone which annoyed everyone else at the IAS
0 replies · 0 retweets · 1 like · 355 views
@levelsio @levelsio
Silence is high IQ
333 replies · 744 retweets · 9.6K likes · 1.2M views