Tianzhen Lin
@tangentlin
2.4K posts

Ideas, thoughts about UX, design, code, geek stuff or just life while I still own my thoughts. Father of two, engineer at @SandboxAQ

Joined April 2007
166 Following · 116 Followers
Tianzhen Lin retweeted
Jeremy Yudkin @yudDIDit
Today, @editframe emerges from stealth. Agents need video. Editframe Agent Skills: npm create @editframe@latest Just prompt Claude Code, Cursor, or Codex and get a working video or a full interactive GUI. This video was created just by prompting 👇
146 replies · 121 retweets · 1.3K likes · 309.5K views
Tianzhen Lin retweeted
Charly Wargnier @DataChaz
🚨 This is absolute GOLD. The @AnthropicAI engineer who literally wrote "Building Effective Agents" just dropped a 14-minute masterclass. It saves you months of headaches trying to figure this out alone. Bookmark it for the weekend and read @Av1dlive's great guide below 👇
Quoting Avid @Av1dlive: x.com/i/article/2044…
43 replies · 828 retweets · 5.4K likes · 895.1K views
Tianzhen Lin @tangentlin
I use Claude for work and intentionally turn off memory because it interferes with assessing the quality of prompts. Memory can either enhance or diminish a prompt's outcome quality, which makes it hard to share a prompt with colleagues with confidence in the outcome, because memory becomes another hidden part of the source code. For personal use, however, it's a different story.
0 replies · 0 retweets · 1 like · 49 views
Esther Crawford ✨ @esthercrawford
Whenever Claude says, "Saving a memory so I don't repeat this mistake" I can feel my usefulness for future corrections and steers being reduced.
2 replies · 0 retweets · 30 likes · 2.9K views
Tianzhen Lin retweeted
TANSTACK @tan_stack
We replaced our entire smoke-test suite with 147 deterministic E2E tests powered by aimock. 🧪 Every PR now verifies every feature across 7 LLM providers in about 2 minutes. No tokens burned. No flakiness. ✅ Here's how → tanstack.com/blog/how-we-te…
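To make the technique concrete, here is a minimal TypeScript sketch of the idea behind deterministic LLM tests. This is not aimock's actual API; MockProvider and the Fixtures shape are hypothetical names for illustration. The trick is to hash each prompt and replay a recorded completion, so every CI run sees identical outputs.

    import { createHash } from "node:crypto";

    type Fixtures = Record<string, string>; // prompt hash -> recorded completion

    class MockProvider {
      constructor(private fixtures: Fixtures) {}

      // Replay the recorded completion for this prompt. Unknown prompts fail
      // loudly so prompt drift surfaces as a test failure, not flakiness.
      complete(prompt: string): string {
        const key = createHash("sha256").update(prompt).digest("hex");
        const canned = this.fixtures[key];
        if (canned === undefined) {
          throw new Error(`no recorded completion for prompt ${key}`);
        }
        return canned; // same answer on every run, zero tokens spent
      }
    }

Pointing the test suite's provider clients at a stub like this is what makes "2 minutes, no tokens burned, no flakiness" plausible: the E2E tests exercise the app's behavior, not the providers' latency.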
9 replies · 48 retweets · 832 likes · 63.4K views
Tianzhen Lin retweeted
Om Patel @om_patel5
THIS GUY VIBE CODED A TOOL THAT TURNS ANY SVG INTO A 3D OBJECT YOU CAN SPIN, ANIMATE, AND EMBED. Drag in an SVG, type some text, or draw pixel art, and it becomes a 3D object instantly. Spin it around, animate it, and embed it on your site. Export as a 4K image or video. It runs entirely in your browser, nothing gets uploaded to any server, and no account is needed. 100% free AND it's open source. This is one of those tools you didn't know you needed until you see it.
98 replies · 257 retweets · 3.7K likes · 227.1K views
Tianzhen Lin retweeted
Roan @RohOnChain
This 2-hour Stanford lecture shows exactly how Stanford trains its engineers to build AI systems. It's more practical than all the Claude tutorials & prompting threads you've seen. Bookmark it & give it 2 hours, no matter what. It'll be the most productive thing you do this weekend.
159 replies · 1.9K retweets · 13.7K likes · 1.6M views
Tianzhen Lin retweeted
Jen Zhu @jenzhuscott
As I build my own 2nd brain 🧠 on Obsidian using @karpathy's wiki idea, it suddenly dawned on me: one day when we're gone, our kids could inherit an interactive map of our minds, passions, obsessions, work, fascinations… It's a kind of beautiful way to think about your 2nd 🧠.
85 replies · 100 retweets · 1.8K likes · 201.1K views
Tianzhen Lin retweeted
Andrej Karpathy @karpathy
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need in sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for their specific needs. So here's the idea in gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it, etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And of course, people can adjust the idea or contribute their own in the Discussion, which is cool.
Quoting Andrej Karpathy @karpathy:

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
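As a concrete illustration of the "small and naive search engine over the wiki", here is a minimal sketch in TypeScript (Node). It is not Karpathy's actual script, just the obvious inverted-index approach over a directory of .md files; the file name and usage line are hypothetical.

    import { readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    // term -> set of wiki files containing it
    const index = new Map<string, Set<string>>();

    function walk(dir: string): void {
      for (const entry of readdirSync(dir, { withFileTypes: true })) {
        const path = join(dir, entry.name);
        if (entry.isDirectory()) walk(path);
        else if (entry.name.endsWith(".md")) {
          const terms = readFileSync(path, "utf8").toLowerCase().match(/[a-z0-9]+/g) ?? [];
          for (const term of terms) {
            if (!index.has(term)) index.set(term, new Set());
            index.get(term)!.add(path);
          }
        }
      }
    }

    // Usage (hypothetical): tsx search.ts <wiki-dir> <query terms...>
    walk(process.argv[2] ?? "wiki");
    const scores = new Map<string, number>();
    for (const term of process.argv.slice(3).map((t) => t.toLowerCase()))
      for (const file of index.get(term) ?? []) scores.set(file, (scores.get(file) ?? 0) + 1);
    console.log([...scores].sort((a, b) => b[1] - a[1]).slice(0, 10)); // top 10 files

A CLI like this is useful both directly and as a tool the agent can call when a query spans more files than fit in its context window.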

1.1K replies · 2.8K retweets · 26.6K likes · 6.9M views
Tianzhen Lin retweeted
edwin @edwinarbus
Matt Maher tested frontier models in Cursor vs. other harnesses. Cursor boosted model performance by 11% on average:
- Gemini: 52% → 57%
- GPT-5.4: 82% → 88%
- Opus: 77% → 93%
His benchmark measures how well models implement a 100-feature PRD. @cursor_ai consistently outperformed.
122 replies · 124 retweets · 1.4K likes · 896.6K views
Tianzhen Lin retweeted
Rimsha Bhardwaj @heyrimsha
🚨BREAKING: A dev just open-sourced the #1 ranked OCR model on Earth. It's called GLM-OCR and it just hit 94.62 on OmniDocBench V1.5, beating every OCR model in existence. Only 0.9B parameters. One pip install. Handles documents no other model could touch. 100% Open Source.
47 replies · 392 retweets · 2.9K likes · 206.3K views
Tianzhen Lin retweeted
Boris Cherny @bcherny
3/ Two of the most powerful features in Claude Code: /loop and /schedule. Use these to schedule Claude to run automatically at a set interval, for up to a week at a time. I have a bunch of loops running locally:
- /loop 5m /babysit, to auto-address code review, auto-rebase, and shepherd my PRs to production
- /loop 30m /slack-feedback, to automatically put up PRs for Slack feedback every 30 mins
- /loop /post-merge-sweeper, to put up PRs to address code review comments I missed
- /loop 1h /pr-pruner, to close out stale and no-longer-necessary PRs
- lots more!
Experiment with turning workflows into skills + loops. It's powerful. code.claude.com/docs/en/schedu…
46 replies · 52 retweets · 1.6K likes · 321K views
Tianzhen Lin retweeted
Cheng Lou @_chenglou
My dear front-end developers (and anyone who's interested in the future of interfaces): I have crawled through the depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation, then certainly at least in concept): a fast, accurate, and comprehensive userland text-measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow.
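To make the concept concrete, a toy sketch (my illustration, not Cheng Lou's implementation): if you carry a table of per-glyph advance widths in userland, measuring a string becomes a pure function of data, so layout can be computed without touching the DOM. The ADVANCES values below are hypothetical; a real version would be generated from font metrics and handle kerning, ligatures, and line breaking.

    // Advance width of each glyph in em units (hypothetical values).
    const ADVANCES: Record<string, number> = { a: 0.49, b: 0.51, " ": 0.25 };
    const FALLBACK = 0.5; // for glyphs missing from the table

    // Width of a single line in CSS pixels: a pure function of data,
    // no canvas, no hidden DOM node, no forced reflow.
    function measureLine(text: string, fontSizePx: number): number {
      let em = 0;
      for (const ch of text) em += ADVANCES[ch] ?? FALLBACK;
      return em * fontSizePx;
    }

    console.log(measureLine("ab a", 16)); // deterministic on every platform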
1.3K replies · 8.3K retweets · 65.5K likes · 23.8M views
Tianzhen Lin retweeted
Tom Turney @no_stp_on_snek
Google dropped the TurboQuant paper yesterday morning. 36 hours later it's running in llama.cpp on Apple Silicon, faster than the baseline it replaces.

The numbers:
- 4.6x KV cache compression
- 102% of q8_0 speed (yes, faster; smaller cache = less memory bandwidth)
- PPL within 1.3% of baseline (verified, not vibes)

The optimization journey:
- 739 > starting point (fp32 rotation)
- 1074 > fp16 WHT
- 1411 > half4 vectorized butterfly
- 2095 > graph-side rotation (the big one)
- 2747 > block-32 + graph WHT, faster than q8_0

3.72x speedup in one day, from a paper I read at dinner last night.

What I learned along the way:
- The paper's QJL residual stage is unnecessary; multiple implementations confirmed this independently
- Metal silently falls back to CPU if you mess up shader includes; cost me hours
- "Coherent text" output means nothing. I shipped PPL 165 thinking it worked; always run perplexity
- ggml stores column-major, C arrays are row-major; this will ruin your afternoon

Everything is open source: the code, the benchmarks, the speed investigation logs, the debugging pain, all of it. github.com/TheTom/turboqu…

Paper to parity in 36 hours. What a time to be alive.
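For context on the "WHT" and "butterfly" steps: schemes like this rotate each block with a Walsh-Hadamard transform before quantizing, which spreads outliers across the block. A minimal sketch of the in-place transform in TypeScript (my illustration of the standard algorithm, not the llama.cpp Metal kernel):

    // In-place fast Walsh-Hadamard transform; x.length must be a power of 2.
    // O(n log n) additions/subtractions, which is why it is cheap enough to
    // apply to every KV-cache block before quantization.
    function fwht(x: Float32Array): void {
      for (let h = 1; h < x.length; h *= 2) {
        for (let i = 0; i < x.length; i += h * 2) {
          for (let j = i; j < i + h; j++) {
            const a = x[j], b = x[j + h];
            x[j] = a + b;     // butterfly sum
            x[j + h] = a - b; // butterfly difference
          }
        }
      }
      // Normalize so the rotation is orthonormal: it preserves vector norms
      // and is exactly invertible when the cache is dequantized.
      const scale = 1 / Math.sqrt(x.length);
      for (let i = 0; i < x.length; i++) x[i] *= scale;
    }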
65 replies · 177 retweets · 2K likes · 127.3K views
Tianzhen Lin @tangentlin
@Teslarati This is exciting news nonetheless. I do wish the banner image weren't showing a GM vehicle.
1 reply · 1 retweet · 2 likes · 613 views
Tianzhen Lin @tangentlin
@AmericanAir Love that free Wi-Fi is now part of the experience. As a million miler, I’d say the next real differentiator for business travelers is connection quality. @Starlink has set the bar for fast, reliable onboard internet. On many flights, AA’s Wi-Fi still feels too spotty for sustained productivity, with interruptions that require reconnecting and retrying work. Closing that gap would meaningfully improve the travel experience.
1 reply · 0 retweets · 0 likes · 38 views
Tianzhen Lin retweeted
Nav Toor @heynavtoor
🚨 OpenAI charges $0.006/minute. Google charges $0.024. AWS charges $0.024. Someone just open sourced a tool that does it for $0. And it's faster than all of them.

It's called Insanely Fast Whisper. And that's not hype; that's the benchmark. 150 minutes of audio. 98 seconds to transcribe. On your own machine. No API key. No cloud. No per-minute billing.

Here's what the numbers look like:
→ Whisper Large v3 + Flash Attention 2: 150 min of audio in 98 seconds
→ Distil Whisper + Flash Attention 2: 150 min in 78 seconds
→ Standard Whisper without optimization: 31 minutes for the same job
→ That's a 19x speedup. Same model. Same accuracy. Just faster.

Here's what it does:
→ One command to transcribe any audio file or URL
→ Speaker diarization: knows WHO said WHAT
→ Transcription AND translation to other languages
→ Runs on NVIDIA GPUs and Mac (Apple Silicon)
→ Flash Attention 2 for maximum speed
→ Clean JSON output with timestamps
→ Works with every Whisper model variant

Here's the wildest part: Otter.ai charges $100/year. Rev charges $1.50/minute. Descript charges $24/month. Enterprise transcription contracts cost thousands. Podcasters, journalists, researchers, lawyers, content creators: anyone still paying for transcription is lighting money on fire.

8.8K GitHub stars. 633 forks. MIT License. 100% Open Source. (Link in the comments)
154 replies · 498 retweets · 5.7K likes · 503.6K views