Steph

15 posts

@steph_forge

Founder @forgerise_app - AI training calendar for runners & endurance athletes. Plan weeks, generate workouts, stay consistent. Private beta👇

Joined February 2026
13 Following · 0 Followers
Steph @steph_forge
@JWonz @banteg Pretty good, I’ve been able to build things 10x faster and better with Codex, without the headache.
0 · 0 · 0 · 13

Josh Wonser @JWonz
@steph_forge @banteg Is it good? I’m trying to get a feel for how much value AI is bringing to developers.
1 · 0 · 0 · 16

banteg @banteg
banteg tweet media
14 · 2 · 233 · 24K
Steph @steph_forge
@JWonz @banteg Only a sports training app for now
1 · 0 · 1 · 15

Steph @steph_forge
@initjean Starting to think OpenAI’s GPUs are melting because of me. I can write a tutorial on how to get the most out of Codex on your project.
Steph tweet media
1 · 0 · 0 · 47
Jean P.D. Meijer ― 🇪🇺 eu/acc
Introducing slopmeter: a CLI tool to create a shareable, nice-looking graph to show off your Codex, Claude Code, or OpenCode usage. npx slopmeter@latest
Jean P.D. Meijer ― 🇪🇺 eu/acc tweet media
56 · 17 · 609 · 54.7K

Steph @steph_forge
@karpathy I’ve been vibe coding @forgerise_app on something similar the last few weeks… Training should be: “8-week goal + my schedule” → a plan, generated workouts, a progress dashboard, and AI adjustments as you go.
0 · 0 · 1 · 7

Andrej Karpathy @karpathy
Very interested in what the coming era of highly bespoke software might look like. Example from this morning - I've become a bit loosy goosy with my cardio recently so I decided to do a more srs, regimented experiment to try to lower my Resting Heart Rate from 50 -> 45, over experiment duration of 8 weeks. The primary way to do this is to aspire to a certain sum total minute goals in Zone 2 cardio and 1 HIIT/week. 1 hour later I vibe coded this super custom dashboard for this very specific experiment that shows me how I'm tracking.

Claude had to reverse engineer the Woodway treadmill cloud API to pull raw data, process, filter, debug it and create a web UI frontend to track the experiment. It wasn't a fully smooth experience and I had to notice and ask to fix bugs e.g. it screwed up metric vs. imperial system units and it screwed up on the calendar matching up days to dates etc. But I still feel like the overall direction is clear:

1) There will never be (and shouldn't be) a specific app on the app store for this kind of thing. I shouldn't have to look for, download and use some kind of a "Cardio experiment tracker", when this thing is ~300 lines of code that an LLM agent will give you in seconds. The idea of an "app store" of a long tail of discrete set of apps you choose from feels somehow wrong and outdated when LLM agents can improvise the app on the spot and just for you.

2) Second, the industry has to reconfigure into a set of services of sensors and actuators with agent native ergonomics. My Woodway treadmill is a sensor - it turns physical state into digital knowledge. It shouldn't maintain some human-readable frontend and my LLM agent shouldn't have to reverse engineer it, it should be an API/CLI easily usable by my agent. I'm a little bit disappointed (and my timelines are correspondingly slower) with how slowly this progression is happening in the industry overall. 99% of products/services still don't have an AI-native CLI yet.

99% of products/services maintain .html/.css docs like I won't immediately look for how to copy paste the whole thing to my agent to get something done. They give you a list of instructions on a webpage to open this or that url and click here or there to do a thing. In 2026. What am I a computer? You do it. Or have my agent do it.

So anyway today I am impressed that this random thing took 1 hour (it would have been ~10 hours 2 years ago). But what excites me more is thinking through how this really should have been 1 minute tops. What has to be in place so that it would be 1 minute? So that I could simply say "Hi can you help me track my cardio over the next 8 weeks", and after a very brief Q&A the app would be up. The AI would already have a lot of personal context, it would gather the extra needed data, it would reference and search related skill libraries, and maintain all my little apps/automations.

TLDR the "app store" of a set of discrete apps that you choose from is an increasingly outdated concept all by itself. The future are services of AI-native sensors & actuators orchestrated via LLM glue into highly custom, ephemeral apps. It's just not here yet.
Andrej Karpathy tweet media
918 · 1K · 12.1K · 1.9M

Steph @steph_forge
Vibe Kanban is changing how I code. I’m experimenting with a loop where AI builds the project end-to-end and I only do minimal nudges (goal setting + review). The objective: best result with the least human interaction. Anyone else trying this?
0 · 0 · 0 · 11

Steph @steph_forge
If you’re a software engineer right now, start pivoting toward hardware. That’s where the moat is getting built.
0 · 0 · 0 · 11

Steph @steph_forge
@signulll The value is moving down the stack: compute, tooling, performance, and physical-world integration. I’m betting on engineers who can span software + hardware.
0 · 0 · 0 · 3

signüll @signulll
i am a trained software engineer with an ml grad degree & i ask this question with genuine sincerity. if you’re a software engineer right now, how do you feel about your future?
864 · 27 · 2.1K · 537.8K

Steph @steph_forge
@EntireHQ Can the Entire CLI work on old commits to write a history for an existing codebase?
0 · 0 · 0 · 0

Entire @EntireHQ
Gm, everyone. Our DMs are now open for any feedback on Checkpoints. Ideas. Or just funny gifs are welcome too :) Boop.
7 · 0 · 19 · 2.3K

Steph @steph_forge
If I were working at OpenAI right now, I wouldn’t sleep. The surface area of what these models unlock is unreal. The bottleneck feels less like money and more like time and humans to engineer, test, and ship all the new possibilities.
Greg Brockman @gdb

If you’re an infrastructure or security engineer, now is the best time to join OpenAI. It’s hard not to be inspired by what today’s coding tools are capable of, and we have line of sight to making them much better.

While our core ML infrastructure problems remain much the same as always — training and inferencing models at scale, co-designing end-to-end for maximum effect, managing complexity and maintaining fast iteration — what it feels like to solve these problems in practice is changing fast. As the models have been improving, our ability to get value out of them is increasingly bottlenecked by thoughtfully designed infrastructure — whether figuring out how to manage agent cross-collaboration, having ergonomic sandboxes that let the agents complete end-to-end workflows securely, building tools/abstractions/observability/frameworks which allow the agents to move faster, and scaling supervision of the agents' work.

Engineering is already different from a few months ago, and will continue to evolve. Having seen many generations of engineering and AI tools, I believe what is most important going forward will be skills like the following: strong understanding of your domain, ability to think through abstraction/architecture/design/how the pieces should fit together, and deep curiosity to explore what these models have to offer.

If you’d like to help us build the future of AI, while using AI to get there, email me: gdb+infra@openai.com. Include a description of a surprising or creative way you’ve gotten value out of the models recently, and your contributions to any project in your career on which you’ve made a significant difference in its outcome. Feel free to include any other context that can help us understand how you operate and the problems you want to work on.

0 · 0 · 0 · 13

Steph @steph_forge
@karpathy Feels like LLMs will co-invent their own language: minimal + unambiguous + proof-friendly. Humans won’t read it; they’ll read the auto-generated plain-English + the parts they touch in Rust/etc. A reversible pipeline: code ⇄ spec ⇄ English.
0 · 0 · 0 · 0

Andrej Karpathy @karpathy
I think it must be a very interesting time to be in programming languages and formal methods because LLMs change the whole constraints landscape of software completely. Hints of this can already be seen, e.g. in the rising momentum behind porting C to Rust or the growing interest in upgrading legacy code bases in COBOL or etc. In particular, LLMs are *especially* good at translation compared to de-novo generation because 1) the original code base acts as a kind of highly detailed prompt, and 2) as a reference to write concrete tests with respect to. That said, even Rust is nowhere near optimal for LLMs as a target language. What kind of language is optimal? What concessions (if any) are still carved out for humans? Incredibly interesting new questions and opportunities. It feels likely that we'll end up re-writing large fractions of all software ever written many times over.
Thomas Wolf @Thom_Wolf

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential.

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate.

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

¹ x.com/mntruell/statu…
² x.com/anthropicai/st…
³ wesmckinney.com/blog/agent-erg…
⁴ github.com/tailwindlabs/t…#issuecomment-3717222957

701 · 656 · 8.1K · 1.2M

Steph @steph_forge
Everyone in my circle uses Claude, it’s everywhere in my feed. But when I need to actually get code done, I keep coming back to Codex. It fits how I build. Claude users: what’s your workflow/prompting setup that makes it outperform Codex for coding?
0 · 0 · 0 · 1

Steph @steph_forge
If you’re interested in testing it, I can help you set up your plan to optimize your training.
0 · 0 · 0 · 1

Steph @steph_forge
I’m building Forgerise: an AI training calendar for endurance athletes. It generates plans + structured workouts, then helps you adjust week to week as life and races happen. I’m using it for my own trail season right now.
Steph tweet media
1 · 0 · 0 · 1