Frank Flitton retweeted
Frank Flitton
1.6K posts

Frank Flitton
@FrankFlitton
systems of systems
Seattle, WA · Joined June 2011
347 Following · 201 Followers


@NationsTurtle @Urban_Toronto Storage Wars: Toronto PATH edition.

@FrankFlitton @Urban_Toronto Self storage for whom? Will street people and the homeless have access to this? It will cause disruption, as the W Hotel is not far from there, along with a condo building full of residents. Why do we need self storage?

Plans have been filed for 2 Bloor Street East that would introduce a new use within the podium of the Hudson’s Bay Centre, bringing self-storage to one of Toronto’s busiest intersections, repurposing the largest vacant site at Yonge & Bloor. urbantoronto.ca/news/2026/04/s…


@garrytan This tracks. A lot of Waterloo startup founders are in the USA so often for raising or sales that they’ll have residency issues.

We're not saying Canadians should leave Canada. There are lots of reasons to build great companies in Canada, and there are lots of great YC and non-YC startups that thrive and are making the Canadian tech scene great.
Where you are incorporated increases your access to capital. That's it.
There's no drama here, and the clout farmers who are trying to make it drama: you know who you are, I see you, and you should stop.

@varrick_charlie @Urban_Toronto Should have bought more HBC blankets.

@FrankFlitton @Urban_Toronto More like people have too much crap.

@FriendlyBot2000 @Urban_Toronto Bring it back to where you bought it.

@FrankFlitton @Urban_Toronto Probably. Folks downsizing due to unsustainable debt loads and needing a place for their "stuff".

Proud to have helped build the frontend for this as one of its engineers. No more vibe analysis. You upload your messy data and get clean, fully traceable executive reports in minutes.
We actually made the dreaded weekly analysis feel… good? Try it!
Ian Wong @ianwong_
Friends don't let friends vibe analyze. Summation Reports is live. Drop in your data, get back an executive-ready report — every number traced to its source, every claim verified. Built for the analyses you run every week and dread every time. summation.com DMs open if you run reports for a living.

Is anyone running client-side LLMs in production web apps?
I've been playing with some local demos using the WebLLM npm package (@mlc-ai/web-llm) to bring AI to micro-interactions without API keys or an LLM streaming service. A small quantized model can handle simple tasks entirely in the browser. At a minimum, it could be interesting for driving a public AI demo with no API fees and no per-user usage tracking.
A small model weighs in at around 500 MB to 2 GB. Is that too large for a SaaS web app context? It reminds me of the old Flash days, when you'd stare at a loading bar for a few minutes before the experience could begin.
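For the curious, a minimal sketch of the setup described above, assuming @mlc-ai/web-llm's OpenAI-style engine API. This is browser-only (it needs WebGPU), the model id is illustrative, and the option names are from memory and may drift between package versions:

```typescript
// Browser-only sketch: check the current @mlc-ai/web-llm docs before use.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function demo(): Promise<void> {
  // Downloads and caches the quantized weights in the browser (500 MB+ class),
  // reporting progress so you can show a Flash-style loading bar.
  const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (p) => console.log(p.text),
  });

  // OpenAI-compatible chat call: no API key, no server round trip,
  // no per-user usage tracking on your side.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Summarize this form field in five words." }],
  });
  console.log(reply.choices[0].message.content);
}
```

You'd call demo() behind a user gesture so the multi-hundred-megabyte download doesn't start on page load.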

@NousResearch @macbethAI TouchDesigner was a lot of fun. I’ll have to give this a go.

The most powerful real-time visual tool in creative coding also has the steepest learning curve
Now your Hermes agent can just run TouchDesigner for you.
Video credit: made by @macbethAI, a talented AI artist and avid Hermes user, with the TouchDesigner skill

@karpathy Coding agents reading the docs/ folder has been a game changer.

LLM Knowledge Bases
Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:
Data ingest:
I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks; it also categorizes the data into concepts, writes articles for them, and links them all together. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and I also use a hotkey to download all the related images locally so that my LLM can easily reference them.
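The ingest step above can be sketched as a small Node script. The summarize() stub stands in for the actual LLM call, the function names are mine rather than from the post, and a flat raw/ directory is assumed:

```typescript
import { readdirSync, readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { join, basename, extname } from "node:path";

// Stub for the LLM call; a real pipeline would send `text` to a model
// and get back a summary, concept tags, and suggested backlinks.
function summarize(text: string): string {
  return text.split(/\s+/).slice(0, 40).join(" ");
}

// Compile every source file in rawDir into a wiki page under wikiDir,
// each with a backlink to its raw source, then rebuild an index page.
function compileWiki(rawDir: string, wikiDir: string): string[] {
  mkdirSync(wikiDir, { recursive: true });
  const pages: string[] = [];
  for (const file of readdirSync(rawDir)) {
    const slug = basename(file, extname(file));
    const body = readFileSync(join(rawDir, file), "utf8");
    const page = `# ${slug}\n\n${summarize(body)}\n\nSource: [[raw/${file}]]\n`;
    writeFileSync(join(wikiDir, `${slug}.md`), page);
    pages.push(slug);
  }
  // index.md is what later Q&A passes read first to orient themselves.
  const index = pages.map((p) => `- [[${p}]]`).join("\n");
  writeFileSync(join(wikiDir, "index.md"), `# Index\n\n${index}\n`);
  return pages;
}
```

Running this incrementally after each new clip keeps the index in step with raw/, which is what makes the later "no fancy RAG" claim workable.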
IDE:
I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note: the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).
Q&A:
Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.
Output:
Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.
Linting:
I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, all to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.
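Some of these health checks can be made deterministic rather than LLM-driven. As one example (my own, not from the post), a findBrokenLinks() pass that flags [[wiki links]] pointing at pages that don't exist, a common inconsistency after heavy LLM edits:

```typescript
import { readdirSync, readFileSync } from "node:fs";
import { join, basename } from "node:path";

// Scan every .md page in wikiDir and report wiki links whose target
// page is missing. Assumes a flat wiki directory and [[target]] syntax.
function findBrokenLinks(wikiDir: string): { page: string; target: string }[] {
  const files = readdirSync(wikiDir).filter((f) => f.endsWith(".md"));
  const known = new Set(files.map((f) => basename(f, ".md")));
  const broken: { page: string; target: string }[] = [];
  for (const file of files) {
    const body = readFileSync(join(wikiDir, file), "utf8");
    // Capture the link target up to a closing bracket, alias pipe, or heading anchor.
    for (const m of body.matchAll(/\[\[([^\]|#]+)/g)) {
      const target = m[1].trim();
      if (!known.has(target)) broken.push({ page: file, target });
    }
  }
  return broken;
}
```

The output list is a natural work queue to hand back to the LLM: write the missing article, or fix the link.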
Extra tools:
I find myself developing additional tools to process the data. For example, I vibe-coded a small, naive search engine over the wiki, which I use directly (in a web UI) but more often hand off to an LLM via CLI as a tool for larger queries.
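A naive search engine of that sort can be little more than an inverted index with AND-queries. A sketch under that assumption (function names are mine, not from the post), the kind of thing an LLM can call through a thin CLI wrapper:

```typescript
// Inverted index over wiki pages: token -> set of page names.
type Index = Map<string, Set<string>>;

function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
}

// Build the index from an in-memory map of page name -> page body.
function buildIndex(pages: Record<string, string>): Index {
  const index: Index = new Map();
  for (const [name, body] of Object.entries(pages)) {
    for (const tok of tokenize(body)) {
      if (!index.has(tok)) index.set(tok, new Set());
      index.get(tok)!.add(name);
    }
  }
  return index;
}

// AND-query: return the sorted names of pages containing every query token.
function search(index: Index, query: string): string[] {
  const toks = tokenize(query);
  if (toks.length === 0) return [];
  let hits: Set<string> | null = null;
  for (const tok of toks) {
    const pages = index.get(tok) ?? new Set<string>();
    hits = hits === null ? new Set(pages) : new Set([...hits].filter((p) => pages.has(p)));
  }
  return [...(hits ?? [])].sort();
}
```

No ranking, no stemming; at a ~100-article scale that's often enough, and the LLM compensates for recall misses by trying alternate query terms.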
Further explorations:
As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.
TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by the LLM via various CLI tools to do Q&A and incrementally enhance the wiki, with all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.





