Alexander Boesgaard
@0xBoesgaard
284 posts

Building @OpenhagenAI || I do complex stuff that my family finds boring, so they don't even ask

Joined April 2025
286 Following · 1.1K Followers
Alexander Boesgaard @0xBoesgaard ·
It's legit crazy how fast we normalized just having a bunch of AI assistants able to help in most aspects of one's work. Notes, data interpretation, spitballing, coding, etc. They're such force multipliers and we're still so, so early.
Replies: 1 · Reposts: 0 · Likes: 3 · Views: 33
jason liu @jxnlco ·
Fuck @tszzl we need to go to Japan. Marty mode let’s go.
Replies: 6 · Reposts: 3 · Likes: 112 · Views: 19.6K
Alexander Boesgaard @0xBoesgaard ·
@mSanterre @birdabo IF your model has a huge jump in capabilities rather than just marginal gains AND inference is expensive as hell, there's literally no reason to run it in arenas. Evals will speak for themselves.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 59
max @mSanterre ·
@birdabo Why would it not run as an anonymous model on the PvP servers if it's truly amazing? Logical reason: it's just marginally better, as usual.
Replies: 2 · Reposts: 1 · Likes: 44 · Views: 2.9K
sui ☄️ @birdabo ·
the Anthropic leak is bigger than we thought. three weeks before Mythos leaked, rumors were circulating that a frontier lab completed its largest training run ever and the model performed roughly 2x above what scaling laws predicted. no lab was named but now we know it was probably Anthropic. they confirmed to Fortune that the new model is a step change. their own leaked blog says it scores dramatically higher than Opus 4.6 in coding and reasoning and is far ahead of every other model right now.

this also explains OpenAI gutting Sora and scrambling to raise capital. if massive training runs are the only way to stay competitive, you cut everything else and stockpile compute.

the part nobody wants to hear is that frontier AI is about to get much more expensive to run and use. instead of getting cheaper, the best models might cost us a house 💀 compute is the new oil. the middleman (Jensen) wins again.
Andrew Curran @AndrewCurran_

Three weeks ago there were rumors that one of the labs had completed its largest ever successful training run, and that the model that emerged from it performed far above both internal expectations and what people assumed the scaling laws would predict. At the time these were only rumors, and no lab was attached to them. But in light of what we now know about Mythos, they look more credible, and the lab was probably Anthropic.

Around the same time there were also rumors that one of the frontier labs had made an architectural breakthrough. If you are in enough group chats, you hear claims like this constantly, and most turn out to be nothing. But if Anthropic found that training above a certain scale, or in a certain way at that scale, produces capabilities that sit far above the prior trendline, then that is an architectural breakthrough.

I think the leaked blog post was real, but still a draft. Mythos and Capybara were both candidate names for the new tier, though Mythos may now have enough mindshare that they end up keeping it. The specific rumor in early March was that the run produced a model roughly twice as performant as expected. That remains unconfirmed. What is confirmed is that Anthropic told Fortune the new model is a 'step change'; a sudden 2x would certainly fit the definition. We will find out in April how much of this is true.

My own view is that the broad shape of this is correct even if some of the numbers are wrong. And if it is substantially accurate, then it also casts OpenAI's recent restructuring in a new light. If very large training runs are about to become essential to staying in the game, then a lot of their recent decisions, like dropping Sora, make even more sense strategically.

For the public, this would mean the best models in the world are about to become much more expensive to serve, and therefore much more expensive to use. That will put pressure on rate limits, pricing, and subscription plans that are already subsidized to some unknown degree. Instead of becoming too cheap to meter, frontier intelligence may be about to become too expensive for most of humanity to afford.

Second-order effects: compute, memory, and energy are about to become much more important than they already are. In the blog they describe the new model as not just an improvement, but as having 'dramatically higher scores' than Opus 4.6 in coding and reasoning, and as being 'far ahead' of any other current models. If this is the new reality, then scale is about to become king in a whole new way. It would also mean, as usual, that Jensen wins again.

Replies: 78 · Reposts: 82 · Likes: 1.4K · Views: 305.6K
Alexander Boesgaard @0xBoesgaard ·
ever since I read "Society of Thought" I had this nagging feeling I had to act on - in the paper they describe what's happening: models reasoning through internal debate. BUT everyone is either observing it emerge from RL or prompting for it externally. Nobody seemed to be asking whether you can enforce it architecturally inside a single forward pass.
[image attached]
Replies: 0 · Reposts: 4 · Likes: 42 · Views: 1.8K
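As a rough illustration of what "enforcing debate architecturally inside a single forward pass" could mean, here is a minimal toy sketch. Everything in it (the branch names, weights, and the fixed cross-talk rule) is a hypothetical assumption for illustration, not the author's actual architecture: two parallel "debater" branches score the input, exchange one hard-wired cross-talk term, and a judge head combines the revised positions, all inside one forward call rather than via RL emergence or external prompting.

```python
# Toy sketch (hypothetical): internal debate as a fixed part of the
# computation graph, executed in a single forward pass.
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def forward(x, w_pro, w_con, w_judge):
    """One forward pass with a built-in debate step.

    x: input features; w_pro / w_con: per-debater weights;
    w_judge: judge-head weights over the two revised positions.
    """
    # Each debater forms its own view of the input (a linear score).
    pro = sum(a * b for a, b in zip(x, w_pro))
    con = sum(a * b for a, b in zip(x, w_con))
    # Debate step: each branch is revised by the other's signal.
    # The cross-talk is hard-wired into the architecture, so the
    # "debate" happens whether or not training ever encourages it.
    pro2 = pro - 0.5 * con
    con2 = con - 0.5 * pro
    # Judge head weighs the two revised positions into a distribution.
    logits = [w_judge[0] * pro2, w_judge[1] * con2]
    return softmax(logits)
```

For example, `forward([1.0, 2.0], [0.5, -0.2], [-0.3, 0.4], [1.0, 1.0])` returns a two-way probability over the debaters' positions. The design point is that the exchange step is a fixed operation in the graph, not a behavior the model has to discover through RL or be prompted into externally.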
Alexander Boesgaard @0xBoesgaard ·
@patrickc @RuxandraTeslo The sort of reply that reminds one that even incredible founders have flaws. The only thing the buildings on the right do better is maximizing developer ROI at the cost of aesthetics.
Replies: 1 · Reposts: 0 · Likes: 4 · Views: 661
Patrick Collison @patrickc ·
@RuxandraTeslo I think this just reflects a lack of expertise on your part -- like, if you'd actually studied architecture, you'd understand that the buildings on the right are better. x.com/PedroCo6744396…
Bob Sacamano @PedroCo67443965

@UrbanCourtyard Call it what you want, but they are in fact imitating old buildings, which doesn't work with modern functionality, materials, and technology. Trust me, I'm an architect. It's popular among nostalgic and sentimental people who aren't fully conscious of architecture.

Replies: 90 · Reposts: 6 · Likes: 483 · Views: 174.3K
Ruxandra Teslo 🧬 @RuxandraTeslo ·
València: street with old buildings vs street with new ones. Why is everything built in the modern era so distasteful?
[two images attached]
Replies: 32 · Reposts: 25 · Likes: 576 · Views: 130.1K
Alexander Boesgaard @0xBoesgaard ·
@clintoptions You're selecting for the same thing: people who have the ability to amass $2–3M and a paid-off house are the sort of people who are never satisfied at $2–3M.
Replies: 3 · Reposts: 0 · Likes: 0 · Views: 25
Clint | Options @clintoptions ·
I have a secret to share. After your first $2–$3 million, a paid-off home and a good car, there is no difference in quality of life between you and Jeff Bezos. Both of you have a limited amount of time on earth; you may have twice as much as Jeff, if not more, so you are richer than him. A cheeseburger is a cheeseburger whether a billionaire eats it or you do. Money is nothing but a piece of paper or a number in your app. Real life is outdoors.

Become financially independent; that's usually $2–3 million. Have good food. Enjoy the relationships. Work out. Sleep well. Call your parents. That's all there is to life. Greed has no end.

Repeat after me: time is the currency of life. Money is not. The sooner you figure this out, the happier you will be.
Replies: 1K · Reposts: 3.2K · Likes: 23.7K · Views: 4M
Shawn @Shawnryan96 ·
@austinc3301 @ControlAI Yes, and it's just belief. There is no prior to use; there is nothing but gut feeling, and that is it. We do not have any evidence at all that intelligence alone is dangerous.
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 51
ControlAI @ControlAI ·
Ex-OpenAI researcher and AI 2027 coauthor Daniel Kokotajlo: There's a 70% chance superintelligence leads to human extinction. "We at the AI Futures Project think that there's a 70% chance of all humans dead or something similarly bad." "All humans dead?" "Correct. Extinction."
Replies: 36 · Reposts: 32 · Likes: 147 · Views: 36.3K
ellen livia ᯅ 🇺🇸🇮🇩
Starting an AI Researcher group chat. The space is growing fast! Comment “literature review” to join.
Replies: 875 · Reposts: 28 · Likes: 756 · Views: 56.9K
Alexander Boesgaard @0xBoesgaard ·
Been working hard on a new reasoning architecture. Chain of Thought, and reasoning in general, was good, but we were missing stuff. I think we've found the formula and I can't wait to present our paper. I keep rechecking evals and doing ablations because I can hardly believe our results.
Replies: 0 · Reposts: 3 · Likes: 9 · Views: 342
Hubert Thieblot @hthieblot ·
Only incredible founders can reply to this tweet
Replies: 447 · Reposts: 1 · Likes: 511 · Views: 44.1K
Brady Long @thisguyknowsai ·
🚨 BREAKING: Meta researchers showed a model 2 million hours of video. No labels. No physics textbook. No supervision at all. It learned gravity. Object permanence. Inertia. And it just beat Gemini 1.5 Pro and GPT-4 level models at physics understanding. Here's what just happened:
[image attached]
Replies: 42 · Reposts: 164 · Likes: 960 · Views: 110.9K
Lovable @Lovable ·
We're aware of recent reporting about Delve’s compliance practices. Lovable is not a Delve customer. We proactively moved to Vanta in late 2025, before any of this came to light. Our SOC 2 Type II was independently audited by Prescient Assurance. We’re currently undergoing an independent internal audit of our ISMS, recertifying ISO 27001, and have our next SOC 2 Type II scheduled for Q3 2026. Security is not an afterthought at Lovable. It's a company-wide commitment backed by a dedicated team and continuous investment. Our current compliance practices are all here: trust.lovable.dev
Replies: 70 · Reposts: 71 · Likes: 2.1K · Views: 360K
Alexander Boesgaard @0xBoesgaard ·
ngl, I prefer the architecture diagrams to the Sunday OOM debugging on H100s
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 61
Alexander Boesgaard @0xBoesgaard ·
@peer_rich The endgame is that all economic activity collapses down to the model provider
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 23
Peer Richelsen @peer_rich ·
i dont really see the endgame of AI labs. hundreds of billions of dollars spent for a SOTA model, just for a random open source model to come around and hit similar benchmarks. does it even matter to be “first” to whatever the goal is? or is it just about surviving long enough? i.e. Apple will simply take what works without burning any cash
Replies: 47 · Reposts: 0 · Likes: 144 · Views: 38.5K