Philip
@phiandersson
328 posts
building... (YC W25)
🇪🇺 Joined January 2022
1.7K Following · 784 Followers

Pinned Tweet
Philip @phiandersson
context is everything
4 replies · 0 reposts · 20 likes · 10K views
Philip @phiandersson
hmmm
[media]
0 replies · 0 reposts · 4 likes · 165 views
Philip @phiandersson
we are so back
[media]
1 reply · 0 reposts · 7 likes · 485 views
Gustaf Alströmer @gustaf
3 years ago we were 50 people at the @ycombinator event in Stockholm. Last night 1,350 showed up. The job of becoming the Silicon Valley of Europe is still up for grabs, and Stockholm is in the running. Thanks. Take care! 🇸🇪
[media]
36 replies · 27 reposts · 532 likes · 70.6K views
Philip @phiandersson
@paulg hands - that’s what passion looks like
[media]
0 replies · 0 reposts · 7 likes · 133 views
Philip @phiandersson
attending the @ycombinator event in Stockholm today? come say hi! happy to chat about YC, the application process, fine-tuning, ICPs, or whatever you're working on
[media]
13 replies · 2 reposts · 54 likes · 3.1K views
Philip @phiandersson
@paulg Nice seeing you yesterday 🇸🇪
0 replies · 0 reposts · 4 likes · 209 views
Paul Graham @paulg
We're in Stockholm. You know how there are some places where you think "Nice place to visit, but I wouldn't want to live there"? Stockholm is the kind of place that makes you want to live there.
488 replies · 144 reposts · 4.2K likes · 588.1K views
Philip @phiandersson
when i asked claude to explain the advisor tool/multi-agent pattern
[media]
1 reply · 0 reposts · 6 likes · 654 views
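For reference, the "advisor tool" multi-agent pattern named above is usually: a primary agent exposes a tool that forwards a question to a second model and folds the reply into its own answer. A minimal sketch with a stub standing in for the second LLM; all names here are illustrative, not any specific SDK:

```python
def advisor_tool(question: str) -> str:
    # In a real system this would be a call to a second LLM;
    # a stub stands in here so the sketch is self-contained.
    return f"advisor: consider the trade-offs of '{question}'"

def primary_agent(user_msg: str) -> str:
    # The primary model decides mid-turn to consult the advisor,
    # then answers with the advice folded into its response.
    advice = advisor_tool(user_msg)
    return f"answer to '{user_msg}' (informed by: {advice})"

print(primary_agent("should we shard the kv cache?"))
```

The point of the pattern is that the advisor never talks to the user directly; it only ever appears as a tool result inside the primary agent's turn.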
Philip @phiandersson
- kv cache/memory caps long context. mla shrunk each token's vector. v4 shrinks the count of vectors with two learned compressors running in parallel: a fine track merging every 4 tokens (csa; 1m → 250k entries) and a coarse track merging every 128 (hca; 1m → 7.8k).
- two attention modes, interleaved every layer: sparse top-k over the fine track for precision, full dense over the coarse (hca) track for cheap global recall, plus a 128-token uncompressed window for local fidelity. the gap between efficient and full attention basically closes. 94% mrcr at 128k!
- fp4 quantization-aware training on the top-k indexer. the selector is trained at the precision it runs at. no train/serve mismatch, the kv savings actually ship.
- trained native 1m from scratch on a 4k → 1m curriculum. long context as a first-class objective, not a bolted-on finetune.
- ~3-4x fewer flops, ~10x less kv cache vs v3.2 at 1m. ~10x more concurrent users per gpu. 80.6 at swe-bench. the open-vs-closed debate is getting hotter by the day.

Quoting Sebastian Raschka @rasbt:
April was a pretty strong month for LLM releases:
- Gemma 4
- GLM-5.1
- Qwen3.6
- Kimi K2.6
- DeepSeek V4
All are now added to the LLM Architecture Gallery. More details once I am fully back in May!

6 replies · 0 reposts · 6 likes · 687 views
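The two-track entry counts in the first bullet are just integer division of the sequence length by each block size. A minimal sketch, with plain mean pooling standing in for the learned csa/hca compressors (the tweet describes trained compressors, not pooling):

```python
import numpy as np

def compress_kv(kv: np.ndarray, block: int) -> np.ndarray:
    """Merge every `block` consecutive token vectors into one entry.

    Mean pooling is a toy stand-in for a learned compressor; only the
    shape arithmetic (1M tokens -> n_blocks entries) matches the tweet.
    """
    n, d = kv.shape
    n_blocks = n // block
    return kv[: n_blocks * block].reshape(n_blocks, block, d).mean(axis=1)

# Illustrative numbers: 1M-token cache, tiny head dim to keep this cheap.
seq_len, head_dim = 1_000_000, 8
kv = np.zeros((seq_len, head_dim), dtype=np.float32)

fine = compress_kv(kv, 4)      # fine track:   1M -> 250,000 entries
coarse = compress_kv(kv, 128)  # coarse track: 1M -> 7,812 entries (~7.8k)

print(fine.shape[0], coarse.shape[0])  # → 250000 7812
```

Sparse top-k attention then reads from the fine track while dense attention reads from the much smaller coarse track, which is where the claimed ~10x KV-cache saving at 1M context would come from.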
Philip @phiandersson
- $10b option on a $60b strike, paid mostly in idle colossus gpus. not an acquisition, a call on one after the ipo mints ai-multiple stock.
- cursor didn't sell from weakness. it sold because escape was impossible. your biggest vendor was your biggest competitor, and anthropic owned the off-switch.
- xai lost all 11 co-founders by march. cursor is the rebuild. $60b prices 300 harness engineers as a lab-in-a-box, not an ide.
- spacex engineers reportedly preferred claude-via-cursor over grok. acquisition as forced internal adoption.
- composer 2 is a fine-tune of kimi k2.5, a chinese open-weight base. a pentagon vendor can't ship that. job one is a grok rebuild; users get worse before better.
- a16z, thrive, and nvidia sit on both sides. intra-portfolio price discovery dressed as strategy. neutrality was never a moat, just a truce the labs ended.

Quoting SpaceX @SpaceX:
SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI. The combination of Cursor’s leading product and distribution to expert software engineers with SpaceX’s million-H100-equivalent Colossus training supercomputer will allow us to build the world’s most useful models. Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.

0 replies · 0 reposts · 5 likes · 2K views
Luca Rossi ꩜ @lucaronin
Introducing Tolaria! 💧 Today I am releasing a macOS desktop app for managing markdown knowledge bases, and helping both AI and humans operate them. It’s free and open source, and always will be.

I have been working on it for three months, and I now use it to run my life and work. I personally have a massive workspace of 10,000 notes — the result of 6 years of Refactoring — which I now operate on Tolaria.

Tolaria is the main collaboration surface with my AI agents: they create new notes there, connect them to what exists, and edit existing ones. Everything is easy to understand for them, because it’s just markdown files. In a way, it’s my implementation of @karpathy's LLM wiki.

Tolaria is also the biggest experiment I have ever run about writing software with AI:
• 2000 commits
• 100K+ lines of code
• 3000+ tests / 85% coverage
• 9.9/10 code health
• 70+ architecture decision records

I am releasing it open source also to use it as a living artifact of how I do AI coding, so you can inspect at any time things like how I write docs, what's in my AGENTS file, what hooks I run, and so on.

You can find it below:
• Newsletter announcement: refactoring.fm/p/introducing-…
• Website: tolaria.md
• Github repo: github.com/refactoringhq/…

Let me know your thoughts!
[media]
219 replies · 191 reposts · 2.8K likes · 337.2K views
Oliver Molander @OliverMolander
People tend to forget how fast progress happens. Just 2 years ago AI models were mediocre coders at best (Sonnet 3.5 was introduced in June '24). Now Codex and Claude are basically superhuman at coding.
1 reply · 1 repost · 5 likes · 252 views
Philip @phiandersson
@imanradjavi damn i must have just missed you guys!
1 reply · 0 reposts · 2 likes · 49 views
Tereza Tizkova @tereza_tizkova
are people really switching from Cursor now?
65 replies · 0 reposts · 67 likes · 17.9K views
Andon Labs @andonlabs
Last week our AI opened a store in SF, this week AI is opening a cafe in Sweden. Meet Mona, our AI tasked with selling coffee and managing European bureaucracy. Visit Andon Cafe at Norrbackagatan 48 in Stockholm.
28 replies · 53 reposts · 493 likes · 116.2K views