Frank Flitton

1.6K posts

@FrankFlitton

systems of systems

Seattle WA · Joined June 2011
347 Following · 201 Followers
Frank Flitton retweeted
little-scale
little-scale@littlescale·
breakout-esque mechanic for Ableton
DiscussingFilm
DiscussingFilm@DiscussingFilm·
New teaser for the Star Wars racing game ‘STAR WARS: GALACTIC RACER’. Releasing on October 6.
Turtle Island/First Nations
Turtle Island/First Nations@NationsTurtle·
@FrankFlitton @Urban_Toronto Self storage for whom? Will the street and homeless have access to this? It will cause disruption, as the W Hotel is not far from there, along with a condo building of residents. Why do we need self storage?
UrbanToronto
UrbanToronto@Urban_Toronto·
Plans have been filed for 2 Bloor Street East that would introduce a new use within the podium of the Hudson’s Bay Centre, bringing self-storage to one of Toronto’s busiest intersections, repurposing the largest vacant site at Yonge & Bloor. urbantoronto.ca/news/2026/04/s…
Frank Flitton
Frank Flitton@FrankFlitton·
@garrytan This tracks. A lot of Waterloo startup founders are in the USA so often for raising or sales that they’ll have residency issues.
Garry Tan
Garry Tan@garrytan·
We're not saying Canadians should leave Canada. There are lots of reasons to build great companies in Canada, and there are lots of great YC and non-YC startups that thrive and are making the Canadian tech scene great. Where you are incorporated increases your access to capital. That's it. There's no drama here, and the clout farmers who are trying to make it drama: you know who you are, I see you, and you should stop.
Frank Flitton
Frank Flitton@FrankFlitton·
GitHub search is down again.
Kirill Skrygan
Kirill Skrygan@kskrygan·
Would you be interested if JetBrains released a totally local AI agent, working 100% on your laptop, using our code insight engine and deeply integrated into the IDE? Yes, it will probably be about a month behind the most recent frontier models, but no more token blood bath. WDYT?
Frank Flitton
Frank Flitton@FrankFlitton·
Proud to have helped build the frontend for this as one of the FE engineers. No more vibe analysis. You upload your messy data and get clean, fully traceable executive reports in minutes. We actually made the dreaded weekly analysis feel… good? Try it!
Ian Wong@ianwong_

Friends don't let friends vibe analyze. Summation Reports is live. Drop in your data, get back an executive-ready report — every number traced to its source, every claim verified. Built for the analyses you run every week and dread every time. summation.com. DMs open if you run reports for a living.

Frank Flitton retweeted
Ian Wong
Ian Wong@ianwong_·
Friends don't let friends vibe analyze. Summation Reports is live. Drop in your data, get back an executive-ready report — every number traced to its source, every claim verified. Built for the analyses you run every week and dread every time. summation.com. DMs open if you run reports for a living.
Hugeicons
Hugeicons@huge_icons·
Drop your portfolio URL. Consider this marketing.
Frank Flitton
Frank Flitton@FrankFlitton·
Is anyone running client-side LLMs in production web apps? I've been playing with some local demos using the npm package Web-LLM to bring AI to micro-interactions without needing API keys or an LLM streaming service. A quantized small model can handle simple tasks without touching an API at all. At a minimum, it could be an interesting way to drive a public AI demo with no API fees or per-user usage tracking. A small model runs around 500 MB to 2 GB. Is that too large for a SaaS web app context? It reminds me of the old Flash days, when you'd stare at a loading bar for a few minutes before the experience could begin.
Nous Research
Nous Research@NousResearch·
The most powerful real-time visual tool in creative coding also has the steepest learning curve. Now your Hermes agent can just run TouchDesigner for you. Video credit: made by @macbethAI, a talented AI artist and avid Hermes user, with the TouchDesigner skill.
Frank Flitton
Frank Flitton@FrankFlitton·
Why does every junior designer's app mockup feature sneakers? 👟 I noticed it reviewing portfolios at work. Clean UI, great flows… but always a pair of hype sneakers on a phone screen. What's the origin story here? Trend or template default?
Frank Flitton
Frank Flitton@FrankFlitton·
@karpathy Coding agents reading the docs/ folder has been a game changer.
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on via various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, with all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
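The "small and naive search engine over the wiki" mentioned above can be sketched in a few dozen lines. This is a hypothetical minimal version, not Karpathy's actual tool: a bag-of-words scorer with a rough IDF weight over a directory of .md files (the directory layout and scoring scheme are assumptions for illustration).

```python
import math
import re
from collections import Counter
from pathlib import Path


def tokenize(text: str) -> list[str]:
    # Lowercase alphanumeric tokens; good enough for a naive wiki search.
    return re.findall(r"[a-z0-9]+", text.lower())


def build_index(wiki_dir: str) -> dict[str, Counter]:
    # Map each markdown file path to its term counts.
    return {
        str(p): Counter(tokenize(p.read_text(encoding="utf-8")))
        for p in Path(wiki_dir).rglob("*.md")
    }


def search(index: dict[str, Counter], query: str, k: int = 5) -> list[str]:
    # Score each document by term frequency times a smoothed IDF weight,
    # summed over the query terms; return the top-k matching paths.
    n_docs = len(index)
    terms = tokenize(query)
    df = {t: sum(1 for counts in index.values() if t in counts) for t in terms}
    scores = {
        doc: sum(
            counts[t] * math.log((n_docs + 1) / (df[t] + 1)) for t in terms
        )
        for doc, counts in index.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [doc for doc in ranked if scores[doc] > 0][:k]
```

Used directly it answers a query with file paths; handed to an LLM as a CLI tool, those paths become the documents the agent opens next — which is the point of keeping the index dumb and the agent smart.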