Swen Koller

39 posts

@swekol

Building with LLMs. MS / MBA @HarvardHBS | Forbes U30

Boston / San Francisco · Joined November 2016
240 Following · 114 Followers
Pinned Tweet
Swen Koller @swekol
🤔 Where are the moats for "GPT wrappers"? After building multiple LLM-based apps used by thousands of users, here's what I've learned about creating defensible Vertical AI solutions... 🧵👇
Swen Koller @swekol
When are we getting the "@grok is this true" in Zoom calls?
Swen Koller @swekol
@ErnestRyu It'll be like software engineering. The 10% of math research work AI can't do well yet will be 100x more valuable, the other 90% will be automated.
Ernest Ryu @ErnestRyu
10. My career as a mathematician certainly isn't threatened by AI; in fact, I hope to leverage AI to accelerate my work. However, I'm unsure whether "mathematician" will remain a career path for my son’s generation. (10/10)
Ernest Ryu @ErnestRyu
Two cents on AI getting International Math Olympiad (IMO) Gold, from a mathematician. Background: Last year, Google DeepMind (GDM) got Silver in IMO 2024. This year, OpenAI solved problems P1-P5 for IMO 2025 (but not P6), and this performance corresponds to Gold. (1/10)
Swen Koller @swekol
The main bottleneck isn't compute; it's data and exploration. The challenge now is how to turn compute into useful, generalizable knowledge.
Swen Koller @swekol
Next paradigm: pretraining + high-compute RL + environment interaction. Needed for general agents, science, and eventual AGI.
Swen Koller @swekol
@shuchaobi shared perspectives from his OpenAI research at Harvard today. Key insights on what’s next:
Swen Koller @swekol
Software engineers are evolving from being 10% logic designer and 90% translator to being 95% logic designer and 5% translation verifier.
Swen Koller @swekol
For AI apps, context is the moat. OpenAI's memory feature increases the usefulness and switching cost of ChatGPT. We will see the same with other AI apps.
Swen Koller @swekol
With the likes of OpenAI's o3 and its ability to produce truly novel work, it feels like the intelligence takeoff is starting. Models will improve models.
Swen Koller @swekol
@Sprintax_USA Your chatbot suggests exactly the right question. Let me know if I can help you build a chatbot that actually works.
Swen Koller @swekol
Today @JoshuaKushner came to talk at @Harvard. My top three takeaways:
1) Thrive has only invested in seven AI companies over the last 3 years. Josh explained that he doesn't know whether any of the vertical AI companies will still exist in a world with extremely capable AI models.
2) OpenAI today is like Apple before it decided which apps to build itself vs. leave to developers. It needs to look after its ecosystem but also wants to build some apps itself.
3) A big opportunity Josh sees is vertical models, e.g. for life science, physical intelligence, or industry-specific domains such as legal.
Swen Koller @swekol
The limiting factor of AI models today for most knowledge work is lack of context, not model capability. We’ll increasingly see tools to capture context to easily add it into prompts.
Swen Koller @swekol
For a course at Harvard, I developed a game using AI. My assignment was overdue, so I made that the theme of the game. In Overdue, you’re a student dodging an endless wave of homework by firing excuses at professors. You can play it here: overdue.swenkoller.com (desktop only)
Swen Koller @swekol
Projection: In 15 years, we will be talking to AI for 4-6h per day. Analogy: The iPhone was released 18 years ago. I now use it 4-6h per day. We should expect AI to have a similar impact on our day-to-day interaction with the world.
Swen Koller retweeted
Garry Tan @garrytan
For 25% of the Winter 2025 batch, 95% of lines of code are LLM generated. That’s not a typo. The age of vibe coding is here.
Quoting Y Combinator @ycombinator

Andrej Karpathy recently coined the term “vibe coding” to describe how LLMs are getting so good that devs can “give in to the vibes, embrace exponentials, and forget that the code even exists.” In this episode of the @LightconePod, the hosts discuss this new way of programming and what it means for builders in the age of AI.
