Julius Ingemann Breitenstein

381 posts


@Julius_Ingemann

The designer at https://t.co/q53wcPuI8s - Making AI tools for care staff

Copenhagen · Joined March 2009
1.3K Following · 169 Followers
Julius Ingemann Breitenstein@Julius_Ingemann·
@geoffreylitt @andy_matuschak I like Claude Artifacts for this; it makes them really easy to share, and the fidelity usually makes it feel more like a sketch than a final product. Do you have good ways of sharing out the sketches you make?
Geoffrey Litt@geoffreylitt·
@andy_matuschak I was never big on Origami etc to begin with, but for me that niche is filled now with vibe coded “studies” — that is, *not* vibecoding in the actual codebase, but sketching with code in a playground. IMO people don’t distinguish vibecoding in prod vs playground enough
Andy Matuschak@andy_matuschak·
Prototypers: do you still see a role for semi-high-fidelity modalities like Origami? Or just straight from sketchbook/mocks to vibecode?
Julius Ingemann Breitenstein@Julius_Ingemann·
@andy_matuschak CAD modeling hits a good rhythm because the print time forces a pause. I stay longer in the middle, switching between the central thing on the screen and ideas in my sketchbook, and only assess the output the next morning.
Andy Matuschak@andy_matuschak·
@Julius_Ingemann I really struggle with that too. So tempting to just gogogo without thinking. Reminds me of the difference between typing and handwriting prose.
Julius Ingemann Breitenstein@Julius_Ingemann·
@ericzakariasson @cursor_ai A built-in way to make a small 'meta' interface for when I'm adjusting details like font sizes or colours. I usually make these control panels inside the interface itself, but would like to have them separate.
eric zakariasson@ericzakariasson·
if you could extend @cursor_ai, what would you build? think plugins to tweak chat + ui, hooks into internal agent APIs, full editor extensibility… curious to hear!
Julius Ingemann Breitenstein@Julius_Ingemann·
@levelsio I always ask it to pretend I am @tylercowen and to adjust recommendations accordingly. Have planned multiple great road trips this way, ending up at small out-of-the-way places like a Norwegian-Texan heritage museum.
@levelsio@levelsio·
I noticed something new this time traveling that I haven't seen before. Asking ChatGPT where to go, it would recommend specific places, so we went, and we were then surrounded by other foreigners who were also there. Then I saw some of them open their phones and, yes, there they were, also asking ChatGPT where to go.

So you now have ChatGPT being the travel guide for a substantial number of people, and because it has a tendency toward normal/average answers, you kind of end up at normie tourist traps.

Then even if you ask for more authentic places, those thousands of other people also asked that, so it just funnels hundreds to thousands of people per day to the same exact places.

You could call it ChatGPT Tourism?
Ryo Lu@ryolu_·
@Julius_Ingemann @cursor_ai You can watch changes with hot reload and preview your changes live in the browser; just ask the agent to help you set it up. Cursor today isn't great by default for people who iterate in 2D. We'll make that better!
Ryo Lu@ryolu_·
No matter the title, we're all builders. Prepping a talk to introduce @cursor_ai to designers – what would you want to see?
Julius Ingemann Breitenstein@Julius_Ingemann·
@rikcreation Prototyping more and more in code. I often lose flow state while something is being built, since it doesn't feel as much like direct manipulation for me. What are you usually doing during downtime between requests?
Rikki@rikcreation·
If you’re a designer who’s never touched code, Cursor 2.0 is the best way in and the clearest look at where development is headed.
Julius Ingemann Breitenstein@Julius_Ingemann·
@ericzakariasson Nice! Have been trying it out; also enjoying the browser mode a lot. It doesn't seem to support having the browser in a separate window? Not sure if that's a bug or just not built yet.
Julius Ingemann Breitenstein@Julius_Ingemann·
@leerob So cool. I love the browser mode. I'd really like to have it in a separate window from the rest of Cursor (e.g. I can have it on my laptop screen and Cursor up on my monitor), but this doesn't seem to work?
Julius Ingemann Breitenstein@Julius_Ingemann·
@ryolu_ @cursor_ai Extended thought: for prototyping I'll often make a new prototype repo rather than going directly into our codebase. Feels more free. But I'd like an easier way to share those snippets out, as easily as I can with something like Claude Artifacts.
Julius Ingemann Breitenstein@Julius_Ingemann·
@ryolu_ @cursor_ai That makes sense; excited to try it. I don't think of Origami as fully 2D either; it's much more interaction-design oriented. I like getting Cursor to make meta-UI so I can sit and tweak inside the design rather than through the code.
Julius Ingemann Breitenstein@Julius_Ingemann·
@karpathy I felt the conversation about AI only making the same 3 jokes and the later discussion of AI needing culture were quite linked. If you ask the same person to tell you a joke repeatedly, I'd also suspect that person's specific personality would converge on the same few jokes.
Andrej Karpathy@karpathy·
My pleasure to come on Dwarkesh last week; I thought the questions and conversation were really good. I re-watched the pod just now too. First of all, yes, I know, and I'm sorry that I speak so fast :). It's to my detriment, because sometimes my speaking thread out-executes my thinking thread, so I think I botched a few explanations due to that, and sometimes I was also nervous that I was going too much on a tangent or too deep into something relatively spurious. Anyway, a few notes/pointers:

AGI timelines. My comments on AGI timelines look to be the most trending part of the early response. The "decade of agents" is a reference to this earlier tweet x.com/karpathy/statu… Basically my AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics. The apparent conflict is not: imo we simultaneously 1) saw a huge amount of progress in recent years with LLMs while 2) there is still a lot of work remaining (grunt work, integration work, sensors and actuators to the physical world, societal work, safety and security work (jailbreaks, poisoning, etc.)), and also research to get done, before we have an entity that you'd prefer to hire over a person for an arbitrary job in the world. I think that overall, 10 years should otherwise be a very bullish timeline for AGI; it's only in contrast to present hype that it doesn't feel that way.

Animals vs Ghosts. My earlier writeup on Sutton's podcast: x.com/karpathy/statu… I am suspicious that there is a single simple algorithm you can let loose on the world that learns everything from scratch. If someone builds such a thing, I will be wrong and it will be the most incredible breakthrough in AI. In my mind, animals are not an example of this at all: they are prepackaged with a ton of intelligence by evolution, and the learning they do is quite minimal overall (example: a zebra at birth). Putting our engineering hats on, we're not going to redo evolution. But with LLMs we have stumbled on an alternative approach to "prepackage" a ton of intelligence in a neural network: not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space, distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal-like over time, and in some ways that's what a lot of frontier work is about.

On RL. I've critiqued RL a few times already, e.g. x.com/karpathy/statu… First, you're "sucking supervision through a straw", so I think the signal/flop is very bad. RL is also very noisy, because a completion might have lots of errors that get encouraged (if you happen to stumble on the right answer), and conversely brilliant insight tokens that get discouraged (if you happen to screw up later). Process supervision and LLM judges have issues too. I think we'll see alternative learning paradigms. I am long "agentic interaction" but short "reinforcement learning" x.com/karpathy/statu… I've seen a number of papers pop up recently that are imo barking up the right tree, along the lines of what I called "system prompt learning" x.com/karpathy/statu…, but I think there is also a gap between ideas on arxiv and an actual, at-scale implementation at an LLM frontier lab that works in a general way. I am overall quite optimistic that we'll see good progress on this dimension of remaining work quite soon; e.g. I'd even say ChatGPT memory and so on are primordial deployed examples of new learning paradigms.

Cognitive core. My earlier post on the "cognitive core": x.com/karpathy/statu… The idea of stripping down LLMs, of making it harder for them to memorize, or actively stripping away their memory, to make them better at generalization. Otherwise they lean too hard on what they've memorized. Humans can't memorize so easily, which now looks more like a feature than a bug by contrast. Maybe the inability to memorize is a kind of regularization. Also my post from a while back on how the trend in model size is "backwards" and why "the models have to first get larger before they can get smaller": x.com/karpathy/statu…

Time travel to Yann LeCun 1989. This is the post that I did a very hasty/bad job of describing on the pod: x.com/karpathy/statu… Basically: how much could you improve Yann LeCun's results with the knowledge of 33 years of algorithmic progress? How constrained were the results by each of algorithms, data, and compute? A case study thereof.

nanochat. My end-to-end implementation of the ChatGPT training/inference pipeline (the bare essentials): x.com/karpathy/statu…

On LLM agents. My critique of the industry is more in overshooting the tooling w.r.t. present capability. I live in what I view as an intermediate world, where I want to collaborate with LLMs and where our pros/cons are matched up. The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless. For example, I don't want an agent that goes off for 20 minutes and comes back with 1,000 lines of code. I certainly don't feel ready to supervise a team of 10 of them. I'd like to go in chunks that I can keep in my head, where an LLM explains the code that it is writing. I'd like it to prove to me that what it did is correct; I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when not sure about something. I want to learn along the way and become better as a programmer, not just get served mountains of code that I'm told works. I just think the tools should be more realistic w.r.t. their capability and how they fit into the industry today, and I fear that if this isn't done well, we might end up with mountains of slop accumulating across software, and an increase in vulnerabilities, security breaches, etc. x.com/karpathy/statu…

Job automation. How the radiologists are doing great x.com/karpathy/statu… and what jobs are more susceptible to automation and why.

Physics. Children should learn physics in early education not because they go on to do physics, but because it is the subject that best boots up a brain. Physicists are the intellectual embryonic stem cell. x.com/karpathy/statu…

I have a longer post that has been half-written in my drafts for about a year, which I hope to finish soon. Thanks again Dwarkesh for having me over!
Dwarkesh Patel@dwarkesh_sp

The @karpathy interview
0:00:00 – AGI is still a decade away
0:30:33 – LLM cognitive deficits
0:40:53 – RL is terrible
0:50:26 – How do humans learn?
1:07:13 – AGI will blend into 2% GDP growth
1:18:24 – ASI
1:33:38 – Evolution of intelligence & culture
1:43:43 – Why self-driving took so long
1:57:08 – Future of education
Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!

Julius Ingemann Breitenstein@Julius_Ingemann·
@devinjacoviello Very cool. For the ones like the funnel that turns cubes into spheres: are you sketching those out? Rendering them like you did for the final design? Generating them, then modelling a cleaner version if you like it?
Ideas Guy@nosilverv·
This video speaks into my soul
Julius Ingemann Breitenstein@Julius_Ingemann·
@minomize Life lesson right there. Sometimes you have to turn off thinking mode. Claude always starts to ponder when coding in thinking mode, and then uses up all its credits not making anything.