
Nick Fernandez | Product Strategy + AI
@nickf_ai
Fortune 50 product strategy leader. Writing about AI, product, business, and learning technical depth in public.

@garrytan You’re shipping harder than I do these days!

How I AI: decide to teach a weekend executive workshop on AI, spin up an OpenClaw, build a custom student portal, everyone gets AI-powered assessments, instructors get detailed highlights of topics to cover, an agent outlines content and drafts slides, custom Midjourney art, sure why not let's make an AI notetaker, sure why not let's make a Slackbot that answers content questions, sure why not let's build a live polling platform, taking feature requests from the students... wait, did I build another SaaS?

Your daily reminder that you are so early to AI.
- 84% have never meaningfully touched it
- 16% use a free chatbot occasionally
- 0.3% pay $20/month
- 0.04% use a coding scaffold
- 0.01% are just like you
You're building orchestrated agents, running models at 2 am, buying hardware, and compounding your advantage every single day. Meanwhile, 99.9% of people are laughing at Mac mini buyers, OpenClaw users, and home GPU nerds. If you're part of the 0.01%, you are part of the collective building the infrastructure everyone else will depend on. The gap is accelerating. Lock in.

You can reach the top 1% of AI usage in one weekend. I wish I had realized this sooner. It takes 12 hours of dedicated work to never see the world the same again.


Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.


I just pulled over 20 creators' worth of X API data into Claude Code.
Something broke in the last 3 days. Engagement tanked across almost every account I checked.
The weird part is there's no consistency in who got hit. Big accounts, small accounts, different niches. All crushed the same way.
Grok is apparently running the algo now and it's making changes faster than anyone can adapt to.
I've seen 3 major algo shifts in the last 6 months on this platform. Each one resets the game.
If you're running a business on X distribution you already know this. This is infrastructure you don't control. Price that in.
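A minimal sketch of this kind of pull, assuming the X API v2 user-tweets endpoint and a simple likes + reposts + replies + quotes engagement score; the account IDs, cutoff date, and bearer-token environment variable are placeholders for illustration, not the author's actual setup:

```python
# Pull recent tweets for a few creators and compare average engagement
# before/after a cutoff date to spot a sudden drop.
import os
from datetime import datetime, timezone

import requests

BEARER_TOKEN = os.environ["X_BEARER_TOKEN"]          # assumed env var
USER_IDS = ["2244994945", "783214"]                   # placeholder creator IDs
CUTOFF = datetime(2025, 1, 1, tzinfo=timezone.utc)    # stand-in for "3 days ago"

def fetch_tweets(user_id: str) -> list[dict]:
    """Return up to 100 recent tweets with public engagement metrics."""
    resp = requests.get(
        f"https://api.twitter.com/2/users/{user_id}/tweets",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"max_results": 100, "tweet.fields": "public_metrics,created_at"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

def engagement(tweet: dict) -> int:
    m = tweet["public_metrics"]
    return m["like_count"] + m["retweet_count"] + m["reply_count"] + m["quote_count"]

def avg(xs: list[int]) -> float:
    return sum(xs) / len(xs) if xs else 0.0

for uid in USER_IDS:
    before, after = [], []
    for t in fetch_tweets(uid):
        created = datetime.fromisoformat(t["created_at"].replace("Z", "+00:00"))
        (after if created >= CUTOFF else before).append(engagement(t))
    print(f"{uid}: avg engagement before={avg(before):.1f} after={avg(after):.1f}")
```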



Introducing Claude Opus 4.7, our most capable Opus model yet. It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back. You can hand off your hardest work with less supervision.



Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing




LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them. (A rough sketch of this compile step follows after this post.)

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries. (Also sketched after this post.)

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
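A minimal sketch of what the incremental "compile" step could look like, assuming the Anthropic Python SDK; the prompt, the raw/ to wiki/ layout, and the model id are my assumptions for illustration, not the author's actual pipeline:

```python
# Incrementally "compile" raw source documents into a markdown wiki of notes.
from pathlib import Path

import anthropic

RAW, WIKI = Path("raw"), Path("wiki")
WIKI.mkdir(exist_ok=True)
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def compile_article(source: Path) -> None:
    """Summarize one raw document into a wiki note with backlinks."""
    target = WIKI / source.name
    if target.exists():  # incremental: skip documents already compiled
        return
    existing = "\n".join(p.stem for p in WIKI.glob("*.md"))
    prompt = (
        "You maintain a personal research wiki of markdown notes.\n"
        f"Existing note titles:\n{existing}\n\n"
        "Summarize the following source into a new note. Link related "
        "existing notes with [[wikilinks]] and end with a 'Source' section.\n\n"
        f"--- SOURCE ({source.name}) ---\n{source.read_text()[:50_000]}"
    )
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    target.write_text(msg.content[0].text)

for doc in sorted(RAW.glob("*.md")):
    compile_article(doc)
```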

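And a minimal sketch of the naive wiki search engine mentioned under "Extra tools", usable as a CLI tool an agent can call; the term-frequency scoring and file layout are assumptions for illustration:

```python
# Naive keyword search over the wiki's markdown notes, callable from the CLI.
import re
import sys
from collections import Counter
from pathlib import Path

WIKI = Path("wiki")

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def search(query: str, top_k: int = 5) -> list[tuple[float, Path]]:
    q = tokens(query)
    scored = []
    for path in WIKI.rglob("*.md"):
        counts = Counter(tokens(path.read_text()))
        total = sum(counts.values()) or 1
        score = sum(counts[t] for t in q) / total  # crude term-frequency score
        if score > 0:
            scored.append((score, path))
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    # e.g. `python wiki_search.py "rotary position embeddings"`
    for score, path in search(" ".join(sys.argv[1:])):
        print(f"{score:.4f}\t{path}")
```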









