Pinned Tweet
Alan Fregtman
11.9K posts

Alan Fregtman
@alanwritescode
Developer working in the VFX, space, VR and AR industries. Former Rigging TD. Currently Pipeline & Cam Software Developer at @felixandpaul. ~ My opinions are my own.
Montreal, Canada · Joined March 2007
2.7K Following · 970 Followers

@fifteen42_ hi! what happened to the otio.js? npmjs.com/package/otio.js why was it deleted from github? it looked good.
Alan Fregtman retweeted

Animation by Sandro Cleuzo (@InspectorCleuzo) | Molesworth
#penciltest #sakuga #genga #2danimation
HT
Alan Fregtman retweeted

Inspired by Quentin Blake’s multi-colored line drawings, Terence Ng Tat Mew used Moho to create this character. He has worked as a BG and visual development artist in What if, Rick and Morty, Marvel Zombies and more. Now he's proving to be an amazing 2D rigger too! #mohoanimation
Alan Fregtman retweeted

This paper from Harvard and MIT quietly answers the most important AI question nobody benchmarks properly:
Can LLMs actually discover science, or are they just good at talking about it?
The paper is called “Evaluating Large Language Models in Scientific Discovery”, and instead of asking models trivia questions, it tests something much harder:
Can models form hypotheses, design experiments, interpret results, and update beliefs like real scientists?
Here’s what the authors did differently 👇
• They evaluate LLMs across the full discovery loop: hypothesis → experiment → observation → revision (a minimal sketch of this loop follows the list below)
• Tasks span biology, chemistry, and physics, not toy puzzles
• Models must work with incomplete data, noisy results, and false leads
• Success is measured by scientific progress, not fluency or confidence
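For intuition, here is a minimal sketch of what an evaluation harness for that loop could look like. Nothing below comes from the paper; the toy environment, the trivial agent, and every name are hypothetical stand-ins for the protocol the thread describes.

```python
# Hypothetical sketch of the hypothesis -> experiment -> observation -> revision
# loop. A toy "hidden law" stands in for a scientific domain, and a trivial
# least-squares agent stands in for an LLM; only the shape of the loop matters.
import random

class HiddenLaw:
    """Ground truth the agent must discover: y = slope * x, plus noise."""
    def __init__(self, slope: float = 3.0, noise: float = 0.1):
        self.slope, self.noise = slope, noise

    def run_experiment(self, x: float) -> float:
        return self.slope * x + random.gauss(0.0, self.noise)

class ToyAgent:
    """Stand-in for an LLM: proposes a slope and revises it from evidence."""
    def __init__(self):
        self.data: list[tuple[float, float]] = []

    def propose(self) -> float:
        return 1.0  # initial hypothesis: y = x

    def design_experiment(self, step: int) -> float:
        return float(step + 1)  # probe increasing inputs

    def revise(self, x: float, y: float) -> float:
        # Update the hypothesis by least squares through the origin.
        self.data.append((x, y))
        num = sum(xi * yi for xi, yi in self.data)
        den = sum(xi * xi for xi, _ in self.data)
        return num / den

def evaluate(agent: ToyAgent, env: HiddenLaw, budget: int = 10) -> float:
    """Score by closeness to the hidden law, not by fluent explanations."""
    hypothesis = agent.propose()
    for step in range(budget):
        x = agent.design_experiment(step)   # design an experiment
        y = env.run_experiment(x)           # observe a noisy result
        hypothesis = agent.revise(x, y)     # update beliefs
    return abs(hypothesis - env.slope)      # lower error = real progress

print(f"final error: {evaluate(ToyAgent(), HiddenLaw()):.4f}")
```

The point of scoring on final error rather than on the agent's explanations is exactly the thread's distinction: progress toward the hidden rule, not fluency about it.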
What they found is sobering.
LLMs are decent at suggesting hypotheses, but brittle at everything that follows.
✓ They overfit to surface patterns
✓ They struggle to abandon bad hypotheses even when evidence contradicts them
✓ They confuse correlation for causation
✓ They hallucinate explanations when experiments fail
✓ They optimize for plausibility, not truth
Most striking result:
`High benchmark scores do not correlate with scientific discovery ability.`
Some top models that dominate standard reasoning tests completely fail when forced to run iterative experiments and update theories.
Why this matters:
Real science is not one-shot reasoning.
It’s feedback, failure, revision, and restraint.
LLMs today:
• Talk like scientists
• Write like scientists
• But don’t think like scientists yet
The paper’s core takeaway:
Scientific intelligence is not language intelligence.
It requires memory, hypothesis tracking, causal reasoning, and the ability to say “I was wrong.”
Until models can reliably do that, claims about “AI scientists” are mostly premature.
This paper doesn’t hype AI. It defines the gap we still need to close.
And that’s exactly why it’s important.

Alan Fregtman retweeted

THIS IS A LOVE SONG 🎩🪄 animated music video out now youtu.be/G9nUIpRZvc8?si… animated by @exit73studios 🪽

Alan Fregtman retweeted

🌌 Felix & Paul Studios proudly presents #InterstellarArc, opening today at #AREA15, Las Vegas!
Step aboard the Interstellar Arc and begin your journey to humanity's new home, the exoplanet Arcadia.
🎟️Get your boarding pass today on interstellararc.com
Alan Fregtman retweeted

SLAM just got a serious speed boost.
Efficient LoFTR is now integrated into the @huggingface Transformers library.
It’s 2.5× faster than the original LoFTR and can even outperform the SuperPoint + LightGlue pipeline.
Image matching finds correspondences between two images taken from different angles, lighting, or scales.
It’s key for 3D computer vision tasks like:
✅ Structure from Motion (SfM)
✅ SLAM
✅ Visual Localization
Unlike traditional detector-based matchers (SuperGlue, LightGlue) that depend on a separate feature detector, LoFTR works detector-free:
• Coarse pixel-wise dense matching
• Fine-level refinement with a Transformer model
Efficient LoFTR pushes it further with:
✅ Aggregated attention + adaptive token selection
✅ Two-stage correlation for subpixel accuracy
✅ 2.5× speed boost
You can try it now in just a few lines:
pip install transformers
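As a minimal sketch, assuming the same keypoint-matching API that Transformers exposes for SuperGlue/LightGlue, and assuming the truncated model link above points at a checkpoint named `zju-community/efficientloftr` (verify both against the docs linked below):

```python
# Minimal sketch: match keypoints between two views with Efficient LoFTR via
# the Transformers keypoint-matching API. The checkpoint name and threshold
# are assumptions; verify against the official docs linked below.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForKeypointMatching

checkpoint = "zju-community/efficientloftr"  # assumed full name of the linked model
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForKeypointMatching.from_pretrained(checkpoint)

# Two views of the same scene, e.g. consecutive frames from a SLAM run.
images = [Image.open("view1.jpg"), Image.open("view2.jpg")]

inputs = processor(images, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw outputs into matched (x, y) keypoint pairs in original image
# coordinates, keeping only matches above a confidence threshold.
image_sizes = [[(img.height, img.width) for img in images]]
matches = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
print(matches[0]["keypoints0"].shape, matches[0]["keypoints1"].shape)
```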
Thanks for sharing, @Nielsrogge!
Resources:
- model: huggingface.co/zju-community/…
- docs: huggingface.co/docs/transform…
- project page: zju3dv.github.io/efficientloftr/
- LoFTR: zju3dv.github.io/loftr/
Alan Fregtman retweeted

He did it again ♥️
Still no game looking even remotely close to this today.
Desimulate @de5imulate
omw to this castle to meet the team building the game
Alan Fregtman retweeted

At @theworldlabs, we built a new Gaussian splatting web renderer with all the bells and whistles we needed to make splats a first-class citizen of the incredible @threejs ecosystem. Today, we're open sourcing Forge under the MIT license.
Alan Fregtman retweeted

A French film swept Japan in the '50s — and its impact on Japanese animation was immense.
Among its fans were Hayao Miyazaki, Isao Takahata and their co-workers at Toei Doga. We explore how they saw it, why they loved it and what they learned:
─➤animationobsessive.substack.com/p/the-french-f…