Seth M

297 posts

Seth M
@_sethmorton

building superintelligence with thermodynamic chips. i dream of exploring exoplanets.

San Francisco · Joined November 2023
151 Following · 93 Followers

Seth M@_sethmorton·
@theresidency yes! building a full LLM architecture that continually learns from scratch on energy-based models. no transformer, no attention. already at 98% char-level accuracy, need ~$10k in compute to prove it scales
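The setup in the tweet can be sketched in miniature: an energy-based model scores (context, next-char) pairs and predicts by picking the lowest-energy candidate, trained contrastively. Everything below (the pairwise energy table, the push-down/push-up update, the toy corpus) is a hypothetical illustration of the general idea, not the architecture the tweet describes:

```python
import random

random.seed(0)
VOCAB = "abcdefghijklmnopqrstuvwxyz "

# Toy energy-based next-char model: one learned scalar per
# (prev_char, next_char) pair. Lower energy = more compatible.
W = {(a, b): random.gauss(0, 0.01) for a in VOCAB for b in VOCAB}

def energy(prev, nxt):
    return -W[(prev, nxt)]

def predict(prev):
    # Prediction = the candidate character with minimum energy.
    return min(VOCAB, key=lambda c: energy(prev, c))

def train_step(prev, true_next, lr=0.5):
    # Contrastive update: lower the energy of the observed pair,
    # raise the energy of the model's current (wrong) best guess.
    guess = predict(prev)
    if guess != true_next:
        W[(prev, true_next)] += lr
        W[(prev, guess)] -= lr

text = "hello world hello world "
for _ in range(20):
    for a, b in zip(text, text[1:]):
        train_step(a, b)

print(predict("h"))  # learns "e" after "h" on this toy corpus
```

Char-level accuracy on a corpus this small is trivially high; the hard part the tweet is pointing at is making this kind of contrastive loop scale, which is exactly what the compute ask is for.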
the residency@theresidency·
our residents need compute! who wants to offer credits to them? promising startups that are working on things like:
> continual learning
> memory improvements
> geometric models
we can send you 10 applicants, bottlenecked by compute needs right now!
Seth M@_sethmorton·
I def need to spend more time in Zurich
Roland Graser@roland_graser

recap of our zurich visit with @theresidency:
> spent 3 days in zurich, it feels like the place to be in europe
> technical talent, high substance, no noise
> 3 places you must visit:
1. @ethroboticsclub - ETH robotics club. a hangar with 50 students on a saturday, working on robotics & physical ML projects
2. @ETH_agent_lab - an ETH chair that lets european students work on their projects, while giving them their master's thesis + credits -> perfect for students who want to build but NOT drop out
3. @thejfloor - the community that formed at ETH's student project house, now a coworking space for startups in zurich
funniest roadtrip I could have imagined with @ArvindAGI22, @_sethmorton and @chrisbrolin123
the conversations we've been having can be boiled down to one question: will transformers get to AGI faster than other, more exotic models that run on new alternative hardware (e.g. neuromorphic or thermo compute)
until next time!

Seth M@_sethmorton·
Asking someone if they know what TBPN is might be the best signal there is
Roland Graser@roland_graser

update:
> went to zurich
> crashed into @thejfloor
> went to eth student bar
> asked random ppl if they know tbpn
> 5th group we asked said yes
> turns out they’re all building at ‘the lab’ in zurich
> 10/10 culture fit with the residency
> 10/10 serendipity
> immediately decided to stay longer in zurich

Seth M reposted
Roland Graser@roland_graser·
pov: the last thing that you see before being mansplained continual learning
Seth M@_sethmorton·
roadtripping through Europe visiting AI labs with a crew working on
> neuromorphic compute
> spiking neural nets
> self referential neural nets
> thermodynamic compute
we're stopping in Zurich! @giffmana we would love to meet up while we're passing through
Seth M@_sethmorton·
The best AI models in the world scored 0.37 out of 100 on ARC-AGI-3 - this drops you into a game with hidden rules. You have to poke at things, watch what changes, and figure it out through contact. Humans do this naturally. Models can't. Why? sethmorton.com/blog/memory_is…
Seth M reposted
Guri Singh@heygurisingh·
Humans: 100%
Gemini 3.1 Pro: 0.37%
GPT 5.4: 0.26%
Opus 4.6: 0.25%
Grok-4.20: 0.00%

François Chollet just released ARC-AGI-3 -- the hardest AI test ever created. 135 novel game environments. No instructions. No rules. No goals given. Figure it out or fail.

Untrained humans solved every single one. Every frontier AI model scored below 1%.

Each environment was handcrafted by game designers. The AI gets dropped in and has to explore, discover what winning looks like, and adapt in real time.

The scoring punishes brute force. If a human needs 10 actions and the AI needs 100, the AI doesn't get 10%. It gets 1%. You can't throw more compute at this.

For context: ARC-AGI-1 is basically solved. Gemini scores 98% on it. ARC-AGI-2 went from 3% to 77% in under a year. Labs spent millions training on earlier versions. ARC-AGI-3 resets the entire scoreboard to near zero.

The benchmark launched live at Y Combinator with a fireside between Chollet and Sam Altman. $2M in prizes on Kaggle. All winning solutions must be open-sourced.

Scaling alone will not close this gap. We are nowhere near AGI.

(Link in the comments)
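The efficiency penalty described in the thread (10 human actions vs 100 AI actions gives 1%, not 10%) is consistent with squaring the action-efficiency ratio. A hypothetical scoring function matching that example; this is an illustration of the claim, not ARC-AGI-3's published metric:

```python
def efficiency_penalized_score(human_actions: int, ai_actions: int) -> float:
    """Hypothetical score in percent, squaring the efficiency ratio.

    Matches the thread's example: 10 human actions vs 100 AI actions
    gives (10/100)**2 = 1%, not 10%. Illustrative only.
    """
    ratio = min(1.0, human_actions / ai_actions)  # cap at parity with humans
    return ratio ** 2 * 100

print(efficiency_penalized_score(10, 100))  # ~1% for a 10x less efficient agent
print(efficiency_penalized_score(10, 10))   # full credit at human efficiency
```

Under a superlinear penalty like this, brute-force exploration collapses the score quadratically, which is one way a benchmark can make "just throw more actions at it" a losing strategy.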
Punit Arani@punit_arani·
haven't heard anyone mention quant firms in the last 2 years. i wonder how their talent pool has been impacted
Punit Arani@punit_arani·
fast and accurate ci/cd will take you further than your coding agents
Seth M@_sethmorton·
@punit_arani not too sure about this. what kind of use cases are u talking about?
Punit Arani@punit_arani·
agentic search with an embeddings index is more than enough for 99% of use cases

simplicity beats complexity

you don't need a complex vector db + graph db + context graph system to be good
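The "just an embeddings index" claim fits in a few dozen lines: a flat in-memory index ranked by cosine similarity. A minimal sketch; the `embed` function is a toy bag-of-words stand-in for a real embedding model, and all names here are hypothetical:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class EmbeddingsIndex:
    """Flat index: embed every doc once, rank all docs per query."""

    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def add(self, doc: str) -> None:
        self.docs.append((doc, embed(doc)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

index = EmbeddingsIndex()
index.add("postgres btree index tuning")
index.add("embedding models for retrieval")
index.add("roadtrip through zurich")
print(index.search("retrieval with embeddings", k=1)[0])
```

An agent loop wraps this with query rewriting and iteration; the point of the tweet is that the storage layer itself can stay this simple until the corpus is large enough that a brute-force scan per query actually hurts.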
Seth M@_sethmorton·
I listen to Sam on More or Less and I like his thesis on the shitty business of frontier LLMs, but it implies that AI is a static tool you farm around. If continual learning is unlocked, the moat widens.
Seth M@_sethmorton·
This is directionally right but lacks nuance. LLMs kill interface fluency -- not knowledge. As shallow knowing dies, deep judgment and problem framing get more scarce, not less.
sam lessin 🏴‍☠️@lessin

The pride of knowing is dead, long live the farming

(AI's attempt to process my notes of the last week...)

---

There used to be pride in knowing things. The internet rewarded it. If you knew how to code, how markets worked, how networks scaled, how to click the right buttons in the right order—you had leverage. But walking around the zoo the other day, watching animals that don’t know a single fact about the world yet are perfectly adapted to it, it hit me: knowledge itself isn’t the scarce resource anymore. Machines know everything instantly.

The interface is dissolving. There’s no menu, no clicking, no “figuring out the software.” You just state intent and the system does the work. When the interface disappears, knowing how to use the tool stops being impressive. What matters again is deciding what should exist.

In a world where intelligence is cheap and software is trivial to generate, the advantage shifts from the mechanic to the farmer—the person who tends systems over time, curates environments, and patiently grows durable things.

That shift also exposes a mistake the tech industry has made for decades: confusing products with companies. When software was hard, a clever product could masquerade as a business for years. But if AI collapses the cost of building and copying software, “hard to build” stops being a moat overnight. Strategy comes roaring back. Distribution, trust, markets, identity—these become the real defenses.

You can see it everywhere already: the internet now assumes everything is machine-generated until proven otherwise, which means the weird new primitives are things like proving you’re a robot, not a human, or paying for APIs with compute puzzles instead of dollars. The future internet may look less like a collection of apps and more like a living ecosystem—bots negotiating, humans directing, machines paying rent in compute.

In that world the builders who win won’t just ship clever tools. They’ll cultivate environments where value compounds. They’ll think like farmers, not product managers.

Seth M reposted
Punit Arani@punit_arani·
hiring great people as a founder must be so hard. how do you convince people just as smart as you, if not smarter, to come work for you
Ben Borgers@benborgers·
Asked Claude to continuously quiz me on a class - it unexpectedly built a little spaced repetition engine that fires questions every 2 hours
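The engine described in that tweet reduces to a tiny scheduler. A minimal sketch under assumed mechanics (start at the 2-hour cadence from the tweet, double the interval on a correct answer, reset on a miss); the names and numbers are illustrative, not Ben's actual setup:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    question: str
    answer: str
    interval_hours: float = 2.0  # base cadence from the tweet
    due_at: float = 0.0          # hours from "now" on a simple clock

@dataclass
class Deck:
    cards: list = field(default_factory=list)
    clock: float = 0.0  # advance this instead of sleeping, for testability

    def due(self):
        return [c for c in self.cards if c.due_at <= self.clock]

    def review(self, card: Card, correct: bool):
        # Classic spaced-repetition move: double the gap on success,
        # reset to the base cadence on failure.
        card.interval_hours = card.interval_hours * 2 if correct else 2.0
        card.due_at = self.clock + card.interval_hours

deck = Deck()
deck.cards.append(Card("capital of France?", "Paris"))
card = deck.due()[0]
deck.review(card, correct=True)  # next ask in 4 hours
deck.clock += 4.0
print(len(deck.due()))  # the card is due again
```

A real version would replace the manual clock with a timer or cron firing every couple of hours, which is presumably roughly what Claude wired up.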
Tika@iocapon·
Punit Arani@punit_arani·
ICE is the modern day KKK
Seth M@_sethmorton·
we still have a long way to go youtu.be/56HJQm5nb0U?si…
Alex Cui@alexcdot

Okay so, we just found that over 50 papers published at @Neurips 2025 have AI hallucinations

I don't think people realize how bad the slop is right now

It's not just that researchers from @GoogleDeepMind, @Meta, @MIT, @Cambridge_Uni are using AI - they allowed LLMs to generate hallucinations in their papers and didn't notice at all. It's insane that these made it through peer review👇

Seth M@_sethmorton·
@punit_arani The marketing team goes crazy 🙌
Punit Arani@punit_arani·
Databases and B-trees are so beautiful. What do you mean you can find a few bytes of the right information buried in a terabyte of data in a few hundred ms
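That intuition falls out of logarithmic fanout: each page read narrows the search by a factor of ~100, so a terabyte is only a handful of reads away. A back-of-envelope sketch; the row size, page size, and fanout below are assumed round numbers, not figures from any specific database:

```python
# Back-of-envelope: why a B-tree can find a row in a terabyte quickly.
row_size = 100                    # bytes per row (assumed)
total_rows = 10**12 // row_size   # ~10 billion rows in 1 TB
fanout = 100                      # child pointers per 4 KB interior page, roughly

# Tree height = number of page reads from root to leaf: keep
# multiplying capacity by the fanout until the tree can hold every row.
height, capacity = 0, 1
while capacity < total_rows:
    capacity *= fanout
    height += 1

print(height)  # 5 page reads for 10 billion rows
# Even at ~10 ms per random seek on a spinning disk, that's ~50 ms;
# with the upper levels cached in RAM it's one or two reads in practice.
print(height * 10, "ms worst case if every level hits disk")
```

The punchline is that height grows with the logarithm of the row count: a hundredfold bigger table adds just one more page read.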