Daniel Grahn 🇸🇪
518 posts

Daniel Grahn 🇸🇪
@dangrahn
AI-Native Software Engineer from 🇸🇪 Building in public and sharing my journey. Currently building @validatefirstai 🔎🤖
Malmö, Sweden · Joined January 2009
350 Following · 199 Followers

@moonfarm_dev @karpathy Hard to understand how things move at such different adoption speeds in different minds and company settings

@karpathy I really feel this disconnect. My wife is in a completely different type of engineering and has barely been affected by AI, while I'm a software engineer watching AI grab more and more of my work. It's both enticing and scary haha

Judging by my tl there is a growing gap in understanding of AI capability.
The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT sometime last year and allowed it to inform their views on AI a little too much. This group's reactions are laughing at various quirks of the models, hallucinations, etc. Yes, I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability of the latest round of state-of-the-art agentic models of this year, especially OpenAI Codex and Claude Code.
But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.
So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.
TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable, meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
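The "verifiable reward" property described above can be made concrete with a toy sketch. This is an illustration under assumptions, not anyone's actual training code (real RL pipelines sandbox execution and do far more): a unit-test reward is binary and machine-checkable, with no human judge in the loop, which is exactly what makes coding easy to hillclimb and writing hard.

```python
import os
import subprocess
import sys
import tempfile

def unit_test_reward(candidate_code: str, test_code: str) -> float:
    """Binary, machine-checkable reward: 1.0 iff the candidate passes the tests.

    No human judge is needed to score the attempt, which is what makes
    domains like coding so amenable to RL with verifiable rewards.
    """
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "candidate.py")
        with open(path, "w") as f:
            f.write(candidate_code + "\n" + test_code)
        result = subprocess.run([sys.executable, path], capture_output=True)
        return 1.0 if result.returncode == 0 else 0.0

good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
print(unit_test_reward(good, tests), unit_test_reward(bad, tests))  # 1.0 0.0
```

There is no equivalent `returncode == 0` oracle for "is this essay good", which is the asymmetry the thread is pointing at.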
staysaasy@staysaasy
The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.

@KuittinenPetri @TheAhmadOsman If planning to build your own agentic coding workstation, what's a reasonable entry setup that doesn't break the bank but is still flexible enough, with room to grow for future models?

@TheAhmadOsman I think people should realize that a single 24 GB GPU (Nvidia RTX 3090/4090) won't have enough VRAM for the full context window with Qwen3.5-27B or Gemma-4-31B, assuming Q4_K_M quantization and at least a Q8 KV cache. In fact, even a 32 GB Nvidia RTX 5090 won't fit the latter.
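The arithmetic behind that claim can be sketched roughly. Note the layer/head counts below are illustrative assumptions for a ~27B model with grouped-query attention, not published specs of the models named above, and Q4_K_M is approximated as ~4.5 bits per weight:

```python
# Back-of-envelope VRAM estimate: quantized weights + KV cache.

def weights_gib(params_b: float, bits_per_weight: float) -> float:
    """Quantized weight size in GiB (Q4_K_M averages roughly 4.5 bits/weight)."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context: int, bytes_per_elem: float) -> float:
    """KV cache size: 2 tensors (K and V) per layer, per cached token."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 2**30

# Assumed shape: 62 layers, 8 KV heads, head dim 128, 128k context, Q8 (1 byte).
w = weights_gib(27, 4.5)                   # ~14 GiB of weights
kv = kv_cache_gib(62, 8, 128, 128_000, 1)  # ~15 GiB of KV cache
print(f"weights ≈ {w:.1f} GiB, KV ≈ {kv:.1f} GiB, total ≈ {w + kv:.1f} GiB")
```

Under these assumptions the total lands near 29 GiB before activations and framework overhead, so a 24 GB card is clearly out, and a 32 GB card is marginal for a slightly larger model.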

@soham_nayak04 For micro products it's likely better to just build it, but be cautious: most MVPs end up bigger or more complicated than anticipated. In that case it may be better to validate that you can find demand and a market first after all. But each case is unique.

@Jado_Creator @Hartdrawss Start with problem interviews, not solution pitches. Find 10 people who have the problem

@Hartdrawss this is exactly what I got wrong
I couldn’t answer those questions
still built anyway → 0 users
great stack, wrong problem
now I’m forcing myself to answer this before writing code
how do you validate those first 10 users?

Pro Tip for FOUNDERS using AI to build:
> The most expensive mistake is not a bad tech stack
it is building the wrong thing with a great tech stack
here is what I ask every client before we write a single line of code:
> what is the one thing a user has to be able to do on day one for this to be worth it?
> who are the first 10 people who will use this and why them specifically?
> what does success look like in 30 days, not 12 months?
most founders cannot answer all three cleanly
and that is fine, that is what the discovery call is for
but the ones who can answer all three?
those are the MVPs that survive past launch
figure out the answer before you build the question

@dev_guid @buildinpublic 6 months is rough but at least you learned it. So many founders never do.

@buildinpublic for me it was "validate before you build" - spent 6 months coding features nobody asked for when I could've just talked to potential users first

@dangrahn @AnandButani @boringmarketer That's certainly true. This process really tests your patience because you never truly know if the path you're taking is the right one or a mistake.
If you were in this position, what would be your best approach?

@SpookedE86704 This is the hard truth most builders don't want to hear. The "should I build this" question feels uncomfortable because it might mean the answer is no. But asking it early saves months of wasted effort.

I'm convinced there are only three skills that actually matter as a solopreneur:
1) Knowing which ideas are worth shipping and which are just procrastination dressed up as planning
2) Moving fast enough that your tools work for you - vibe coding with AI, scheduling with PinGrow, automating what doesn't need your brain
3) Turning momentum into MRR before motivation runs out
Everything else is noise.

@supertute_inc Validate BEFORE you write any code. A landing page + 10 customer conversations will tell you more than 10k lines of code ever could.

@yelston Exactly this. Money changing hands (or consistent usage) is signal.

@FightyAI @AnandButani @boringmarketer Building is the easy part now. The hard part is having conviction that the thing you're about to build is worth building.

@AnandButani @boringmarketer Exactly -- and that's the trap too. When experimentation is cheap, the bottleneck shifts from "can I build this" to "should I build this." The people who win aren't the ones shipping fastest, they're the ones who know what NOT to build.

@Stefan_KMirchev @zuess05 Exactly. Though ideally you validate before building the product. Cheaper too.

@zuess05 First users are for validating the product not for making profits.
You can pay people to test and give feedback.
Daniel Grahn 🇸🇪 retweeted