Nick Fernandez

797 posts

@trainable_nick

Building Trainable, an AI running coach that adapts when life happens. Navy vet | Fortune 50 Product | Founder. Follow along: https://t.co/2Kak183Uif

The North(east) 🥶 Joined October 2024
357 Following · 94 Followers
Pinned Tweet
Nick Fernandez
Nick Fernandez@trainable_nick·
I'm a Director of Product Strategy at a Fortune 50 company. On nights and weekends, I'm building an AI running coach. Here's what the Navy taught me about doing both:
3
0
4
580
Nick Fernandez
Nick Fernandez@trainable_nick·
As I pulled on the thread from Karpathy’s post, I realized the existing EPUB → TXT tools were still too ugly and clunky for turning DRM-free books into clean markdown. So I made my own. I’ve only been vibe coding for a few months, and this is my first App Store Connect submission. Feels like a small milestone, but an exciting one. Grateful for this moment... we get to build better and faster than ever.
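The conversion step is easy to make concrete: an EPUB is just a zip of XHTML files, so a minimal converter unzips it and strips each chapter down to markdown. This is a toy sketch of the general idea only, not SourcePrep's actual code; the `TextExtractor` class and its heading handling are illustrative assumptions.

```python
import zipfile
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, turning <h1>-<h3> into markdown headings
    and <p> boundaries into blank lines."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._heading = None
    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._heading = "#" * int(tag[1]) + " "
        elif tag == "p":
            self.parts.append("\n")
    def handle_data(self, data):
        text = data.strip()
        if text:
            self.parts.append((self._heading or "") + text + "\n")
            self._heading = None

def xhtml_to_markdown(xhtml: str) -> str:
    parser = TextExtractor()
    parser.feed(xhtml)
    return "".join(parser.parts).strip() + "\n"

def epub_to_chapters(path: str) -> dict:
    """An EPUB is a zip of XHTML files: return one markdown string
    per chapter file found inside it."""
    with zipfile.ZipFile(path) as z:
        return {
            name: xhtml_to_markdown(z.read(name).decode("utf-8", "replace"))
            for name in z.namelist()
            if name.endswith((".xhtml", ".html", ".htm"))
        }
```

Real EPUBs are messier (nested divs, footnotes, images, spine ordering), which is exactly why a dedicated tool or an agent earns its keep.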
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
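The "auto-maintained index files" idea in the post above can be approximated at toy scale with no model in the loop at all. A hedged sketch: `build_index` and its title-from-first-heading heuristic are my own illustrative choices, not Karpathy's actual tooling.

```python
def build_index(articles: dict) -> str:
    """Compile a simple index.md from {filename: markdown body}: one
    bullet per article, using its first heading line as the title.
    This mimics an auto-maintained index file at toy scale."""
    lines = ["# Index", ""]
    for name in sorted(articles):
        first = articles[name].lstrip().splitlines()[0]
        title = first.lstrip("# ").strip() or name
        lines.append(f"- [{title}]({name})")
    return "\n".join(lines) + "\n"
```

In the workflow described, the LLM would regenerate a file like this after each compile pass, so agent queries can start from one small table of contents instead of scanning every article.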

11
14
242
79.7K
Nick Fernandez
Nick Fernandez@trainable_nick·
here's all 36 books. early overreaction: omg. From claude: "This is what 36 books producing one coherent review looks like. No single framework would have caught the StoryBrand gap AND the UPL risk AND the copy-guideline violation AND the missing Cialdini principles AND the JTBD emotional dimension all at once. The wiki did." For added context, the 2nd photo shows all 1,750+ raw chapter files as noise.
[two images attached]
0
0
0
40
Albert Buchard 🇪🇺
Albert Buchard 🇪🇺@AlbertBuchard·
I built this a few months ago: one-more-epub-converter.com. It never showed up on any search engine. There are probably countless projects like it, sitting completely unnoticed and anonymous in the depths of the web. I hope ASI will finally break the search hegemony for good. We need search approaches that are less biased, more open, and better at surfacing valuable work.
1
0
1
155
Nick Fernandez
Nick Fernandez@trainable_nick·
"How was your weekend?" I don't know where to start. Curated 36 books on product strategy, positioning, and GTM. Bought them all as EPUBs. Didn't like any of the tools for converting them to structured markdown for AI workflows. So I built one and shipped it to the Mac App Store. Andrej Karpathy posted an idea for "LLM Knowledge Bases" on Friday. I quote posted about my app - he responded, driving 50k+ views. So I spent the rest of the weekend compiling all 36 books into an interlinked knowledge base using the Karpathy pattern... a founder/operator wiki I can query against for real product and strategy decisions. This is the Obsidian graph view after 2 of 8 compilation sessions. Anyway. It was good. How was yours?
[image attached]
0
0
1
46
Nick Fernandez
Nick Fernandez@trainable_nick·
ChatGPT pro mode with an unsolicited SourcePrep compliment: “…already unusually usable for LLM workflows…” 🚀
0
0
0
28
Nick Fernandez
Nick Fernandez@trainable_nick·
@karpathy Totally fair 😂 I’m just trying to sell the convenience layer for $2.99. Jokes aside, I’d love to benchmark it: SourcePrep vs. your favorite agent to convert 1,000 files to .md files in 1 chapter chunks.
1
0
11
1.8K
Andrej Karpathy
Andrej Karpathy@karpathy·
@trainable_nick The best epub to txt converter I found is just asking your favorite agent to do it. Epubs can be very diverse; the agent just goes in, figures it out, creates the output markdown, and ensures it looks good. Works great.
44
29
984
46.7K
Nick Fernandez
Nick Fernandez@trainable_nick·
Too much of a good thing is bad, and AI simply gives us too much leverage. I'm out. If you want to reach out, please consider using morse code via flashlight. Semaphore works too.
0
0
0
37
Thereallo
Thereallo@Thereallo1026·
The White House App has OneSignal's full GPS pipeline compiled in, polling your location every 4.5 minutes, syncing your exact coordinates to a third party server.
The White House@WhiteHouse

🇺🇸 🚀 LAUNCHED: THE WHITE HOUSE APP Live streams. Real-time updates. Straight from the source, no filter. The conversation everyone’s watching is now at your fingertips. Download here ⬇️ 📲 App Store: apps.apple.com/us/app/the-whi… 📲 Google Play Store: play.google.com/store/apps/det…

269
4.3K
22.5K
2.1M
Lenny Rachitsky
Lenny Rachitsky@lennysan·
Today I'm releasing my entire newsletter archive (350+ posts) and all podcast transcripts (300+ episodes) as AI-friendly Markdown files. Plus an MCP server and GitHub repo.

A few months ago I shared my podcast transcripts on a whim, and y'all built the most amazing things: an RPG game, a parenting wisdom site, infographics, a Twitter bot, and 50+ other projects. Let's see what happens when I give you even more data.

Grab the data here: LennysData.com. Paid subscribers get all of the data (some 350 posts and 300 transcripts). Free subscribers get a subset. I don't think anyone's ever done anything like this before, and I'm excited to give you this excuse to play with that AI tool you've been meaning to try.

Here's my challenge to you: build something, and let me know about it. I'll pick my favorite and give you a free 1-year subscription to the newsletter. Just post a link to your project in the comments here: lennysnewsletter.com/p/how-i-built-…. If you've already built something, slurp in this new data and submit it, too. I'll pick a winner on April 15th. Check out today's newsletter post for inspiration on what you could build: lennysnewsletter.com/p/how-i-built-…. LFG.
188
300
2.5K
673.3K
Nick Fernandez
Nick Fernandez@trainable_nick·
Want to see the future? Ask ChatGPT for the SF tech-bro stack. Did that this weekend… and I’m blown away by the leverage.
0
0
1
44
Nick Fernandez
Nick Fernandez@trainable_nick·
Correction: 14B (Qwen2.5)
0
0
0
20
Nick Fernandez
Nick Fernandez@trainable_nick·
Installing a local 8B model to train Trainable while I sleep. I have been a little quiet - very much still here making Trainable ready to make a splash when it lands.
1
0
0
30
Nick Fernandez
Nick Fernandez@trainable_nick·
Was talking to a dev manager about this yesterday. He said something funny and very true: "Everyone has access to a hammer, and I know a ton of people that have no business using one." My take: builders who could code before become master builders. Builders who couldn't code before also become master builders, just slower. Consumers remain consumers (and that's totally ok - imagine the infrastructure problems we would have if everyone started to ship at the same time).
@levelsio@levelsio

I think this collective feeling of "I don't enjoy coding anymore because it's so easy with AI" is good to talk about and realize, and I have it too. I miss going to bed with a coding challenge I have to get through, and then I wake up and in the shower I get the answer and I scream EUREKA!!!!!

But then you quickly just have to accept that the world has permanently changed now and it's just not going back, because letting AI code for you is simply so much faster and more effective, and will only get better with every passing year. So the better mental approach for me to these things is to just aggressively embrace it and change myself instead: if the fun in solving the challenges is gone, where else can I find the fun?

I'm lucky a bit, because for me the fun has always been building new things in general, not so much the coding part, although the coding challenges were fun for me too. But having ideas and just building new things was always the most fun. So I have to double down on that now, making more things and making better things and making them much faster than before.

Especially now that literally everyone in the world has access to the same coding skill as everyone else (which is AI), the focus will have to aggressively be on what remains as a differentiator for me as a creator, which is my ideas and the way I execute them, not coding them. So that's what I will try to focus on from now on, I think.

0
0
0
25
Guri Singh
Guri Singh@heygurisingh·
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU.

It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.
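The ternary trick described above corresponds to BitNet's absmean quantization: scale each weight tensor by its mean absolute value, then round and clip every weight to {-1, 0, +1}. A pure-Python sketch of just that step (the function name and list-based layout are mine, for illustration; real kernels operate on packed tensors):

```python
def absmean_ternarize(weights):
    """BitNet b1.58-style quantization: scale by the mean absolute
    value, then round and clip every weight to {-1, 0, +1}. A matmul
    against such weights needs only integer adds and subtracts, plus
    one multiply by `scale` at the end."""
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    quant = [max(-1, min(1, round(w / scale))) for w in weights]
    return quant, scale
```

Three states per weight is log2(3) ≈ 1.58 bits of information, which is where the "1.58-bit" name comes from.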
882
2.7K
15.3K
2.2M
Nick Fernandez
Nick Fernandez@trainable_nick·
This isn’t the future. This is now. 🤯🤯🤯
Ole Lehmann@itsolelehmann

i can't believe nobody caught this. Anthropic's entire growth marketing team was just ONE PERSON (for 10 months, confirmed). a single non-technical person ran paid search, paid social, app stores, email marketing, and SEO for the $380B company behind claude.

here's exactly how one human is doing the job of a full marketing team: it starts with a CSV.
1. he exports all his existing ads from his ad platforms along with their performance metrics (click-through rates, conversions, spend, etc)
2. feeds the whole file into claude code
3. and tells it to find what's underperforming.

claude analyzes the data, flags the weak ads, and generates new copy variations on the spot. this is where he gets clever: he then splits the work into 2 specialized sub-agents:
1. one that only writes headlines (capped at 30 characters)
2. and one that only writes descriptions (capped at 90 characters).
each agent is tuned to its specific constraint, so the quality is way higher than cramming both into a single prompt.

so now he's got hundreds of fresh headlines and descriptions. but that's just the text. he still needs the actual visual ad creative, the images and banners that go on facebook, google, etc. so he built a figma plugin that:
1. takes all those new headlines and descriptions
2. finds the ad templates in his figma files
3. and automatically swaps the copy into each one.
up to 100 ready-to-publish ad variations generated at half a second per batch, what used to take hours of duplicating frames and copy-pasting text by hand.

so now the ads are live. the next question is which ones are actually working. for that he built an MCP server (basically a custom integration that lets claude talk directly to external tools) connected to the meta ads API. so he can ask claude things like:
• "which ads had the best conversion rate this week"
• or "where am i wasting spend"
and get real answers from live campaign data without ever opening the meta ads dashboard.

and the part that ties it all together and closes the loop: he set up a memory system that logs every hypothesis and experiment result across ad iterations. so when he goes back to step one and generates the next batch of variations... claude automatically pulls in what worked and what didn't from all previous rounds. the system literally gets smarter every cycle. that kind of systematic experimentation across hundreds of ads would normally need a dedicated analytics person just to track the numbers.

from the doc: ad creation went from 2 hours to 15 minutes. 10x more creative output. and he's now testing more variations across more channels than most full marketing teams.

a $380 billion company, and their entire growth marketing operation (not GTM) = just one person and claude code lol. truly unbelievable
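The CSV step in the thread above is easy to make concrete. A minimal sketch, assuming an export with a `ctr` column; the function names, the below-median heuristic, and the caps-as-constants are my illustrative choices, not the actual scripts described:

```python
import csv
import io
import statistics

HEADLINE_CAP, DESCRIPTION_CAP = 30, 90  # per-field limits cited in the thread

def flag_underperformers(csv_text: str, metric: str = "ctr"):
    """Read an ad-export CSV and return the rows whose metric falls
    below the median: the candidates handed to the LLM for rewriting."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    median = statistics.median(float(r[metric]) for r in rows)
    return [r for r in rows if float(r[metric]) < median]

def fits_caps(headline: str, description: str) -> bool:
    """Validate generated copy against the character caps each
    sub-agent is constrained to."""
    return len(headline) <= HEADLINE_CAP and len(description) <= DESCRIPTION_CAP
```

Splitting headline and description generation into separate constrained agents, then validating with a hard check like `fits_caps`, keeps the platform limits out of the prompt-quality tradeoff.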

1
0
0
37
Nick Fernandez
Nick Fernandez@trainable_nick·
Inspired by @lennysan’s conversation with @pmarca, I had OpenClaw build me an AI agent that teaches me how Trainable actually works. Right now, these are just impressive words: “Deterministic, event-sourced backend. Contract-first iOS frontend. Idempotency. Async operations. Safety gates. Entitlement gates. Replayable audit trails.” If I can’t explain them, I don’t really understand them. So before any of that, it assigned: Day 1: bits, bytes, binary and why your computer thinks in 1s and 0s. It quizzes me. Waits for my answer. Only unlocks the next lesson when I earn it. I’m not outsourcing my thinking. I’m compounding it. The agent’s name is Professor Xavier (yes, like X-Men). See you on the other side 🧬
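The Day 1 lesson (bits, bytes, binary) boils down to base-2 positional notation; a tiny illustration in Python (the helper name is my own, purely for the example):

```python
def to_bits(n: int, width: int = 8) -> str:
    """Render an integer as a fixed-width binary string: one byte is
    eight bits, each position worth a power of two."""
    return format(n, f"0{width}b")

# 42 = 32 + 8 + 2, so bits 5, 3, and 1 are set: "00101010"
```

Reading it back with `int(bits, 2)` recovers the original number, which is the whole point: the 1s and 0s are just another spelling of the same value.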
1
0
0
92