Kevin Ho

1.5K posts

@kho

Making wonderful things.

Joined August 2009
261 Following · 596 Followers
Pinned Tweet
Kevin Ho @kho
Had to jump in and experiment with @_chenglou's Pretext. BioMap is 52 biomarker blocks that expand as you explore, reflowing text across every block every frame. 0.04 ms for all 52 layouts, only possible with Pretext turning text measurement into pure math. No DOM reads, no reflows. kevinho.com/experiments/bi…
34 replies · 114 reposts · 2K likes · 121.8K views
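The "pure math" claim is worth unpacking: if per-glyph advance widths are known up front, measuring and line-breaking text reduces to arithmetic over a lookup table, with no canvas or DOM measurement per frame. A minimal sketch of that idea (the metrics table, values, and function names here are hypothetical illustrations, not Pretext's actual API):

```typescript
// Hypothetical per-glyph advance widths in px at a reference font size.
// A real system would derive these from font metrics; values are made up.
const ADVANCES: Record<string, number> = { " ": 4 };
for (const ch of "abcdefghijklmnopqrstuvwxyz") ADVANCES[ch] = 8;

// Measuring a string is a pure reduction over the table -- no DOM reads.
function measure(text: string): number {
  let w = 0;
  for (const ch of text) w += ADVANCES[ch] ?? 8;
  return w;
}

// Greedy line-breaking is also pure math, so reflowing dozens of blocks
// per frame is arithmetic, never a browser layout pass.
function breakLines(text: string, maxWidth: number): string[] {
  const lines: string[] = [];
  let line = "";
  for (const word of text.split(" ")) {
    const candidate = line ? line + " " + word : word;
    if (line && measure(candidate) > maxWidth) {
      lines.push(line);
      line = word;
    } else {
      line = candidate;
    }
  }
  if (line) lines.push(line);
  return lines;
}
```

Because every call is a pure function of its inputs, all 52 block layouts can be recomputed each frame without triggering a single browser reflow.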
Kevin Ho @kho
38 biomarkers. 6 need attention. What does your blood say about you? Get your free BioMap ottolab.com/u/kho
0 replies · 0 reposts · 1 like · 38 views
Kevin Ho reposted
LΞO @LeoAlmighty
I just mapped 38 biomarkers. Here’s my BioMap — an interactive visualization of my bloodwork. ottolab.com/u/ed57dbad-14f…
0 replies · 1 repost · 3 likes · 83 views
Kevin Ho @kho
This gets me thinking about how to make OttoLab.com into more persistent, user-visible health knowledge that grows over time: the entire health profile as a living, LLM-maintained document, with raw data (labs, stacks, health metrics) on one side and the LLM integration (chat, extraction, recommendations) on the other.
0 replies · 0 reposts · 0 likes · 33 views
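The raw-data/LLM-integration split described above can be sketched as a data model: the user-owned raw layer is append-only, while the LLM only ever rewrites the narrative layer compiled from it. All names here are illustrative assumptions, not OttoLab's actual schema:

```typescript
// One lab measurement, as it would arrive from bloodwork.
interface LabResult {
  marker: string;  // e.g. "ApoB"
  value: number;
  unit: string;    // e.g. "mg/dL"
  drawnAt: string; // ISO date
}

// Raw layer: labs, supplement/medication stack, health metrics.
interface RawHealthData {
  labs: LabResult[];
  stack: string[];
  metrics: Record<string, number>; // e.g. { restingHR: 52 }
}

// The living document: raw data plus an LLM-maintained narrative.
interface HealthProfileDoc {
  raw: RawHealthData;
  narrative: string;   // markdown, rewritten by the LLM on each compile
  lastCompiledAt: string;
}

// Raw data is append-only, so the narrative can always be recompiled
// from source; returning a new object keeps history intact.
function appendLab(doc: HealthProfileDoc, lab: LabResult): HealthProfileDoc {
  return { ...doc, raw: { ...doc.raw, labs: [...doc.raw.labs, lab] } };
}
```

Keeping the raw layer append-only is what makes the document "living": the LLM can regenerate the narrative from scratch at any point without losing data.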
Kevin Ho @kho
My main takeaway: don't treat the LLM as a chatbot. Treat it as a compiler that builds persistent, queryable knowledge from raw data.
Andrej Karpathy @karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

1 reply · 0 reposts · 1 like · 84 views
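The "LLM as compiler" framing has a deterministic half worth making concrete: the pipeline maintains index files over per-document summaries so queries can start from one small file instead of scanning the whole wiki. A minimal sketch of that half, with the LLM call stubbed out and every name hypothetical:

```typescript
// One entry per compiled source document in raw/.
interface WikiEntry {
  slug: string;    // e.g. "apob-overview"
  title: string;
  summary: string; // one-liner the LLM produced during compile
}

// In the real pipeline this would be an LLM call that summarizes the raw
// file; stubbed here (first line, truncated) so the plumbing is visible.
function summarize(rawText: string): string {
  return rawText.split("\n")[0].slice(0, 80);
}

// Rebuild index.md from scratch on every compile, in deterministic order,
// using Obsidian-style [[wikilink]] syntax.
function compileIndex(entries: WikiEntry[]): string {
  const lines = ["# Index", ""];
  for (const e of [...entries].sort((a, b) => a.slug.localeCompare(b.slug))) {
    lines.push(`- [[${e.slug}|${e.title}]]: ${e.summary}`);
  }
  return lines.join("\n");
}
```

The design choice matches the quote: regenerating the index wholesale on each compile keeps it consistent with the wiki, since the LLM, not the user, is the only writer.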
Denis Turbin @den_turbin
@kho @_chenglou Whoa, this is insane—exactly what I need! How did no one think of this before? My respect
1 reply · 0 reposts · 2 likes · 6 views
Kevin Ho @kho
@MarkoP16594845 @_chenglou The web platform is proprietary, but I published an OpenClaw extension so anyone can build on top of the data layer.
0 replies · 0 reposts · 1 like · 48 views
Ari @MarkoP16594845
@kho @_chenglou Nice! Do you plan to open source it?
1 reply · 0 reposts · 0 likes · 53 views
Kevin Ho @kho
@akiffpremjee @reddy2go @_chenglou That's a great point and where I want to take the project. Not "order everything" but "here are the 5 that matter most for you based on your history and data." Your thread + the AHA link are really helpful context. DM me if you ever want to try Otto!
0 replies · 0 reposts · 2 likes · 37 views
Akiff Premjee, MD @akiffpremjee
All good, I think your tool is cool, but it's always worth understanding that preventative care isn't one size fits all. Re your question, I talk about it here: x.com/akiffpremjee/s… TL;DR: it's patient-specific, and I think most of these lab companies are overkill vs what the patient actually needs. Example: I'm South Asian with a family history of DM2, heart disease, and high cholesterol, so getting Lp(a) + ApoB + even A1c at an earlier age is more helpful for me. I also agree that guidelines likely change at a slower pace, so we should be thinking about where the puck is going, not where it is, but that doesn't mean order everything for everyone. ACC just updated their cholesterol screening guidelines, for example, which now do match what people were doing on their own, but still don't fully cover the need for all the labs that companies like Function order, for example: ahajournals.org/doi/10.1161/CI…
Akiff Premjee, MD @akiffpremjee

If it's cheaper it makes sense to get, but I would ask which labs are actually necessary. I agree with things like Lp(a) and ApoB for the right people, and agree those should be cheaper and available. My issue with companies like Function is that they don't say that, and instead blanket-state that everyone needs 400 biomarkers.

1 reply · 0 reposts · 2 likes · 390 views
Kevin Ho @kho
You're right, and I appreciate the catch. HOMA-IR is a calculated score ((fasting insulin × fasting glucose) / 405), not a direct lab order. I should have been clearer: "ask for fasting insulin and fasting glucose so you can calculate HOMA-IR" rather than listing it as a standalone test. I'll hold the clinical details to the same standard I'd hold code to; I'll tighten it up! Curious, as a doc, what would your ideal "order this beyond the standard panel" list look like? I'd love to get a physician's take on what's actually actionable for someone optimizing proactively.
1 reply · 0 reposts · 2 likes · 131 views
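The correction above is easy to encode: HOMA-IR is derived from two fasting labs, not ordered directly. A one-function sketch using the conventional formula (insulin in µU/mL, glucose in mg/dL; the function name is illustrative):

```typescript
// HOMA-IR = (fasting insulin [µU/mL] * fasting glucose [mg/dL]) / 405.
// A derived score, not a lab order itself.
function homaIR(fastingInsulinUuMl: number, fastingGlucoseMgDl: number): number {
  return (fastingInsulinUuMl * fastingGlucoseMgDl) / 405;
}

// e.g. insulin 5 µU/mL with glucose 81 mg/dL gives 405 / 405 = 1.0
```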
Akiff Premjee, MD @akiffpremjee
This is cool, but I think you should proofread the AI-generated article, especially for things like recommended labs and thresholds. People may not know it's AI-generated and take it at face value. Example: HOMA-IR isn't a lab, it's a score from labs (as a doc I've never even used this, so I don't know its utility). People should have access to their data and track it, but AI articles without some checking aren't the best way to actually help people understand their health.
1 reply · 0 reposts · 4 likes · 285 views
Kevin Ho @kho
Yeah, there are API costs, but they're manageable for now. Eventually you could let users pick between different models (Google, Anthropic, OpenAI) depending on what they need. But if you're just getting started, Gemini Flash is a good option. Cool to see more people building in this space, happy to chat if you want to compare notes 🤝
1 reply · 0 reposts · 1 like · 180 views
Coen Deurloo @coendeurloo
@kho @_chenglou Very cool! How did you integrate the chat with Google? Doesn't it cost you money for using the API? (I'm building labtracker.app, so I'm looking for a good way to integrate AI as well)
1 reply · 0 reposts · 0 likes · 230 views
Kevin Ho @kho
@TheGregYang @deliprao Hey Greg! I've been following your journey, and it partly inspired me to build Otto (launched just today). It's not a finished product by any means, and I would love your feedback. Let me know if you're open to DMs, thanks!
Kevin Ho @kho

Thousands of you liked the BioMap I built with @_chenglou's Pretext last week. Now you can upload your own blood work and get yours at ottolab.com. I also wrote about why I'm building this — preventive health is still stuck in 1985.

0 replies · 0 reposts · 0 likes · 38 views
Greg Yang @TheGregYang
not entirely sure because I don't have a baseline and the improvement is gradual, but maybe accumulation of more deliberate rest (no meeting people, minimizing computer and phone usage), better management of symptoms and avoiding crashes to give my body more chance to heal, neurofeedback, supplements, HBOT
3 replies · 1 repost · 4 likes · 755 views
Kevin Ho @kho
@_chenglou Haha, I posted my experiment right before hopping on a flight. And the post started blowing up right after I landed. Thanks for breaking the internet!
1 reply · 0 reposts · 4 likes · 595 views
Cheng Lou @_chenglou
Next time maybe I shouldn’t release a side project in the middle of vacations…
39 replies · 9 reposts · 802 likes · 30.9K views
David @DavidSHolz
@kho this sounds cool! What kind of data would you put in it? DM me sometime!
1 reply · 0 reposts · 1 like · 80 views
Kombucho @Pedro2076194039
@kho @_chenglou Nice man, what are you using for chat? Do you rely on web search (Google API)?
1 reply · 0 reposts · 1 like · 332 views