Kevin Ho

1.5K posts

@kho

Making wonderful things.

Joined August 2009
261 Following · 599 Followers
Pinned Tweet
Kevin Ho
Kevin Ho@kho·
Had to jump in and experiment with @_chenglou's Pretext. BioMap is 52 biomarker blocks that expand as you explore, reflowing text across every block every frame. 0.04ms for all 52 layouts, only possible with Pretext turning text measurement into pure math. No DOM reads, no reflows. kevinho.com/experiments/bi…
34
114
2K
122.1K
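The "text measurement as pure math" idea can be sketched like this. This is a hedged illustration, not Pretext's actual API: assume per-glyph advance widths are precomputed once from font metrics, so measuring and line breaking become plain arithmetic with no DOM reads and no layout thrashing.

```typescript
// Hypothetical per-glyph advance table (px at a fixed font size),
// precomputed from font metrics; empty here, so every glyph falls
// back to an assumed width.
const ADVANCE: Record<string, number> = {};
const FALLBACK = 8; // assumed width for glyphs not in the table

// Measuring is a sum of advances: pure math, no DOM reads.
function measure(text: string): number {
  let w = 0;
  for (const ch of text) w += ADVANCE[ch] ?? FALLBACK;
  return w;
}

// Greedy line breaking as pure arithmetic; cheap enough to rerun for
// dozens of blocks on every frame.
function breakLines(words: string[], maxWidth: number, space = 4): string[] {
  const lines: string[] = [];
  let line = "";
  let w = 0;
  for (const word of words) {
    const ww = measure(word);
    if (!line) {
      line = word; w = ww;
    } else if (w + space + ww > maxWidth) {
      lines.push(line); line = word; w = ww;
    } else {
      line += " " + word; w += space + ww;
    }
  }
  if (line) lines.push(line);
  return lines;
}
```

Because nothing here touches layout, the browser never has to synchronously reflow between measurements, which is what makes per-frame relayout of many blocks feasible.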
Kevin Ho retweeted
Robin
Robin@solarise_webdev·
Another brilliant example of how pretext can deliver smooth, real-time content manipulation in an interface that actually makes sense!
Kevin Ho@kho

Had to jump in and experiment with @_chenglou's Pretext. BioMap is 52 biomarker blocks that expand as you explore, reflowing text across every block every frame. 0.04ms for all 52 layouts, only possible with Pretext turning text measurement into pure math. No DOM reads, no reflows. kevinho.com/experiments/bi…

2
10
76
7.1K
Denis Turbin
Denis Turbin@den_turbin·
@kho @_chenglou Whoa, this is insane—exactly what I need! How did no one think of this before? My respect
1
1
2
15
Kevin Ho retweeted
MasteringMachines AI
MasteringMachines AI@MstrMachines·
A BioMap that allows you to submit your own blood work. Interesting project @kho
1
1
1
41
Kevin Ho
Kevin Ho@kho·
@joulee I find it helpful too if you start at very high level (market analysis) and walk through step by step to the details (conversion metrics).
0
0
0
37
Julie Zhuo
Julie Zhuo@joulee·
A recent unlock for me on AI + data analysis: think less about prompting. Think more about cooking.

See a lot of people use AI like a microwave. They drop in one chart, one problem statement, one KPI dip, and type: “Think like a senior analyst. What should I do?” Then they hit analyze and act surprised when what comes back is lukewarm slop.

But good analysis is not microwave work. It’s chef work. If you give a great chef a microwave and say “make dinner,” you should not be shocked if the result is random. A chef needs more than that. They need a pantry. They need various tools. They need to know who they’re cooking for. They need to know whether this is Tuesday dinner or a wedding. They need to know what was already served. They need to taste as they go. They need constraints.

Same with AI. Most people give AI one slice of the situation: “My growth is slowing. What should I do?” “Our retention is down. What’s happening?” “Revenue is up. Is that good?” That is not enough. Because a good answer depends on other context that narrows what is actually true. For example: What exactly is the metric? How is it defined? What changed recently? Which segments matter most? What are we optimizing for? What happened the last time this moved? What constraints are real?

That’s what I mean by orthogonal context (which is a fancy way of saying context that comes at right angles, i.e. pieces of context that are independent of each other). Different kinds of context that rule things in and out.

This is why “better prompts” are overrated. “Act like a strategic analyst” is basically: “Cook like a Michelin chef.” The problem is not that the model is dumb. It’s that you gave it one thing and asked it to invent the meal.

A better question is: What are the 5–7 things my best analyst would want to know before making a recommendation? Then, answer those questions. Give your AI the pantry and tools that it needs.
10
6
94
9K
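The "answer those questions first" step can be made concrete with a small sketch. All field names below are illustrative, not from any real tool: the point is to force the orthogonal context into the prompt instead of sending a bare question.

```typescript
// Illustrative shape for the "pantry": the context a senior analyst
// would want before making a recommendation.
interface AnalysisContext {
  metricDefinition: string; // what exactly is the metric and how is it defined?
  recentChanges: string[];  // what changed recently?
  keySegments: string[];    // which segments matter most?
  objective: string;        // what are we optimizing for?
  priorIncidents: string;   // what happened the last time this moved?
  constraints: string[];    // what constraints are real?
}

// Assemble a prompt that carries the full context, not one slice of it.
function buildPrompt(question: string, ctx: AnalysisContext): string {
  return [
    `Question: ${question}`,
    `Metric definition: ${ctx.metricDefinition}`,
    `Recent changes: ${ctx.recentChanges.join("; ")}`,
    `Key segments: ${ctx.keySegments.join(", ")}`,
    `Optimizing for: ${ctx.objective}`,
    `Last time this moved: ${ctx.priorIncidents}`,
    `Constraints: ${ctx.constraints.join("; ")}`,
  ].join("\n");
}
```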
Kevin Ho
Kevin Ho@kho·
38 biomarkers. 6 need attention. What does your blood say about you? Get your free BioMap ottolab.com/u/kho
0
0
1
58
Kevin Ho retweetou
LΞO
LΞO@LeoAlmighty·
I just mapped 38 biomarkers. Here’s my BioMap — an interactive visualization of my bloodwork. ottolab.com/u/ed57dbad-14f…
0
1
3
105
Kevin Ho
Kevin Ho@kho·
This gets me thinking about how to make OttoLab.com more of a persistent, user-visible body of health knowledge that grows over time: making the entire health profile a living, LLM-maintained document, combining raw data (labs, stacks, health metrics) with LLM integration (chat, extraction, recommendations).
0
0
0
44
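One possible shape for that living document, sketched in TypeScript. These type and field names are hypothetical (OttoLab's real data model is not public); the idea is that raw data is appended by the app, while the narrative layers are owned and regenerated by the LLM.

```typescript
// Hypothetical data model: raw, append-only lab data plus
// LLM-maintained derived layers.
interface LabResult {
  marker: string;   // e.g. "ApoB"
  value: number;
  unit: string;     // e.g. "mg/dL"
  drawnAt: string;  // ISO date
}

interface HealthProfile {
  labs: LabResult[];         // raw data, appended as results arrive
  stack: string[];           // supplements / medications
  summary: string;           // LLM-maintained narrative, regenerated on new data
  recommendations: string[]; // LLM-extracted, traceable back to labs
}

// New labs are appended immutably; an LLM pass would then rewrite
// `summary` and `recommendations` to keep the document current.
function onNewLabs(profile: HealthProfile, labs: LabResult[]): HealthProfile {
  return { ...profile, labs: [...profile.labs, ...labs] };
}
```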
Kevin Ho
Kevin Ho@kho·
My main takeaway: don't treat the LLM as a chatbot. Treat it as a compiler that builds persistent, queryable knowledge from raw data.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

1
0
1
99
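The "LLM as compiler" loop can be sketched in a few lines. This is a minimal, in-memory illustration under stated assumptions, not Karpathy's actual tooling: `summarize` stands in for an LLM call, and the wiki is a map of markdown entries plus a small index file so an agent can navigate without RAG.

```typescript
// Raw source documents (articles, papers, notes) collected in raw/.
type RawDoc = { id: string; text: string };

// The compiled wiki: markdown entries plus an auto-maintained index.
type Wiki = { entries: Map<string, string>; index: string };

// "Compile" raw data into markdown via an LLM-like summarizer.
// In a real pipeline `summarize` would be an LLM call over the doc.
function compileWiki(raw: RawDoc[], summarize: (text: string) => string): Wiki {
  const entries = new Map<string, string>();
  for (const doc of raw) {
    entries.set(`${doc.id}.md`, `# ${doc.id}\n\n${summarize(doc.text)}`);
  }
  // Maintain a brief index of all entries, the queryable "table of
  // contents" the tweet describes.
  const index = ["# Index", ...Array.from(entries.keys()).map((k) => `- [[${k}]]`)].join("\n");
  return { entries, index };
}
```

Running this incrementally on each new document in raw/ is what makes the knowledge base "add up" over time: queries read the index first, then the relevant entries.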
Kevin Ho
Kevin Ho@kho·
@MarkoP16594845 @_chenglou The web platform is proprietary, but I published an OpenClaw extension so anyone can build on top of the data layer.
0
0
1
53
Ari
Ari@MarkoP16594845·
@kho @_chenglou nice! do u plan to open source it?
1
0
0
57
Kevin Ho
Kevin Ho@kho·
@akiffpremjee @reddy2go @_chenglou That's a great point and where I want to take the project. Not "order everything" but "here are the 5 that matter most for you based on your history and data." Your thread + the AHA link are really helpful context. DM me if you ever want to try Otto!
0
0
2
38
Akiff Premjee, MD
Akiff Premjee, MD@akiffpremjee·
all good i think your tool is cool but always worth understanding that preventative care isn't one size fits all

re your q i talk about it here: x.com/akiffpremjee/s… tldr: patient specific, and i think most of these labs companies are overkill vs what the patient actually needs

example: I'm south asian with fam hx of DM2 and heart disease and high cholesterol, so getting Lp(a) + ApoB + even A1c at an earlier age is more helpful for me

also i do agree that guidelines likely change at a slower pace and so we should be thinking where the puck is going to go, not where it is, but that doesn't mean order everything for everyone. ACC just updated their cholesterol screening guidelines for example, which now do match what people were doing on their own, but still doesn't fully cover a need for all the labs companies like function order, for example: ahajournals.org/doi/10.1161/CI…
Akiff Premjee, MD@akiffpremjee

if it’s cheaper it makes sense to get but would ask which labs are actually necessary. i agree with things like Lpa and ApoB for the right people and agree those should be cheaper and available my issue with companies like function is that they don’t say that and blanket statement say that everyone needs 400 biomarkers

1
0
2
394
Kevin Ho
Kevin Ho@kho·
You're right, and appreciate the catch. HOMA-IR is a calculated score (fasting insulin x fasting glucose / 405), not a direct lab order. I should have been clearer: "ask for fasting insulin and fasting glucose so you can calculate HOMA-IR" rather than listing it as a standalone test. I'll hold the clinical details to the same standard I'd hold code to, and tighten it up! Curious, as a doc, what would your ideal "order this beyond the standard panel" list look like? Would love to get a physician's take on what's actually actionable for someone optimizing proactively.
1
0
2
139
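The formula from the reply above in code form, showing that HOMA-IR is derived from two ordered labs rather than being a lab itself (function name is mine; the formula is the standard one cited in the tweet):

```typescript
// HOMA-IR = (fasting insulin [µU/mL] × fasting glucose [mg/dL]) / 405.
// A derived score, computed from two directly orderable labs.
function homaIR(fastingInsulin: number, fastingGlucoseMgDl: number): number {
  return (fastingInsulin * fastingGlucoseMgDl) / 405;
}
```

For example, a fasting insulin of 5 µU/mL and fasting glucose of 90 mg/dL give 5 × 90 / 405 ≈ 1.11.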
Akiff Premjee, MD
Akiff Premjee, MD@akiffpremjee·
this is cool but i think you should proofread the AI generated article - especially for things like recommended labs and thresholds. people may not know it’s AI generated and take it at face value

example: HOMA-IR isn’t a lab, it’s a score from labs (as a doc i’ve never even used this so don’t know its utility)

people should have access to their data and track it, but AI articles without some checking aren’t the best way to actually help people understand their health
1
0
4
297
Kevin Ho
Kevin Ho@kho·
Yeah, there are API costs, but they're manageable for now. Eventually, you could let users pick between different models (Google, Anthropic, OpenAI) depending on what they need. But if you're just getting started, Gemini Flash is a good option. Cool to see more people building in this space, happy to chat if you want to compare notes 🤝
1
0
1
185
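The "let users pick a provider" idea might look like this. The provider and model names here are illustrative placeholders, not real SDK identifiers; the real integrations (Google, Anthropic, OpenAI) each have their own client libraries.

```typescript
// Illustrative provider tiers: default to a cheaper model for routine
// queries, reserve the stronger one for high-stakes analysis.
type Provider = "google" | "anthropic" | "openai";

function chooseModel(provider: Provider, highStakes: boolean): string {
  const tiers: Record<Provider, { cheap: string; strong: string }> = {
    google:    { cheap: "flash", strong: "pro" },
    anthropic: { cheap: "haiku", strong: "opus" },
    openai:    { cheap: "mini",  strong: "full" },
  };
  const tier = tiers[provider];
  return highStakes ? tier.strong : tier.cheap;
}
```

Routing most traffic to the cheap tier is one way to keep the API costs mentioned above manageable.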
Coen Deurloo
Coen Deurloo@coendeurloo·
@kho @_chenglou Very cool! How did you integrate the chat with Google? Doesn't it cost you money to use the API? (I'm building labtracker.app, so I'm looking for a good way to integrate AI as well)
1
0
0
235