Sung Woo Kim

13.9K posts

@mindfulruncoach

I help 3,000+ people change their lives through Mindful Running / Learned from the best in Kenya / Author of 2 📖 / @lululemon ambassador / MS @stanford 🇰🇷

San Francisco, CA · Joined October 2015
2.5K Following · 2.6K Followers
Pinned Tweet
Sung Woo Kim @mindfulruncoach
At @veecon 2024, I met a number of people who wanted help and coaching to run more consistently and to become the runner they want to be. I want to help you all with personalized 4-week schedules. Sign up below! forms.gle/zj2u8qhWpan36g…
13 replies · 7 reposts · 67 likes · 6.9K views
Sung Woo Kim reposted
Up Workout @Upworkout
Save this workout and try it later. This is a 13-drill core sequence inspired by Russian and Soviet gymnastics training methods, focused on building real core strength, control, and coordination.
15 replies · 557 reposts · 4.2K likes · 247.9K views
Sung Woo Kim reposted
Nav Toor @heynavtoor
🚨 ElevenLabs charges $5 to $99/month for AI voice cloning. Their Business plan costs $1,320/month.

Someone open sourced a voice AI that clones any voice from a short clip. 30 languages. Studio quality. Free.

It's called VoxCPM2.

Give it a short clip of anyone's voice. It clones their accent, emotion, tone, and pacing. Then generates any speech you want in their exact voice. 48kHz studio quality.

Type "A young woman, gentle and sweet voice" and it creates that voice from scratch. No reference audio. No voice actor. No recording. You describe a voice in words. It builds it.

2 billion parameters. Trained on 2 million hours of speech. 30 languages. One command to install: pip install voxcpm

Here's what VoxCPM2 does:
→ Voice Design: describe any voice in words. Gender, age, tone, emotion, pace. AI creates it from nothing. No reference audio needed.
→ Voice Cloning: upload a short audio clip. AI clones the voice perfectly. Timbre, accent, rhythm, pacing.
→ Controllable Cloning: clone a voice AND control the emotion. "Slightly faster, cheerful tone." Done.
→ Ultimate Cloning: provide audio + transcript. Every vocal nuance faithfully reproduced.
→ 30 languages. Arabic, Chinese, English, French, German, Hindi, Japanese, Korean, Spanish, and 21 more. No language tags needed.
→ Context-aware. It reads the text and adjusts emotion and rhythm automatically. News sounds like news. Stories sound like stories.
→ Real-time streaming. RTF as low as 0.13 on an RTX 4090. Faster than playback speed.
→ Runs on 8GB of VRAM.
→ Fine-tune with 5 to 10 minutes of your own audio using LoRA. Build a custom voice model.
→ 48kHz output. Studio quality. No external upsampler needed.

Here's the wildest part. On the Minimax-MLS voice similarity benchmark:
→ English: VoxCPM2 scores 85.4%. ElevenLabs scores 61.3%.
→ Chinese: VoxCPM2 scores 82.5%. ElevenLabs scores 67.7%.
→ Arabic: VoxCPM2 scores 79.1%. ElevenLabs scores 70.6%.

A free, open source model is producing more realistic voice clones than a service that charges up to $1,320/month.

Professional voice actors charge $250 to $1,000+ per project. AI voice platforms charge $5 to $100/month. Recording studios charge $200/hour. This runs on your GPU. Locally. No API costs. No per-character pricing. No subscription. Free forever.

Already hit #1 on GitHub Trending. Built by OpenBMB and Tsinghua University. 2 billion parameters. Apache 2.0 License. Free for commercial use. 100% Open Source.
102 replies · 615 reposts · 4.6K likes · 510.5K views
ALEX SUZUKI @X_FINALBOSS
I paid Tai Lopez $60K to spend 6 hours with me today in LA to teach me how to make $800M asap like he did. I took 7 pages full of notes. Comment "X" and I'll share it with you too
1.7K replies · 135 reposts · 1.5K likes · 139.7K views
Sung Woo Kim @mindfulruncoach
@HusKerrs Congrats. Enjoy every moment and take many photos :)
0 replies · 0 reposts · 0 likes · 5 views
HusKerrs @HusKerrs
Calling on all dads: Ali and I are about 3 weeks out from the birth of our first baby boy! Give me your #1 piece of advice for a new dad.
4.8K replies · 32 reposts · 4.1K likes · 1.2M views
Sung Woo Kim reposted
👉M-Û-R-Č-H👈 @TheEXECUTlONER_
This man's name is Kim. He is 77 years old. These guys on the beach were challenging people to see who could climb the rope to the top. 77 years old, or in his case young, and making it look as easy as if he were still in college. I would surmise that he is in the top .01% of his age bracket. I hope to be half as agile as he is at that age. Phenomenal shape. Do you think you'll be able to climb that when you're his age?
585 replies · 2.6K reposts · 25.7K likes · 1.3M views
Sung Woo Kim reposted
Kyle Sanok @ksanok10
I built my first website :) I'm running the Boston Marathon in two weeks and wanted to see all my training in one place. Didn’t know where to look, so I just…tried to build it myself?
18 replies · 2 reposts · 124 likes · 11.7K views
Sung Woo Kim reposted
Riri｜Fix your back pain with 5 minutes of self-care a day
This is the routine that saved me when back pain left me unable to move after retiring from competition. Just lie face down and rock, and a rock-stiff lower back loosens up: ① swing your legs alternately inward and outward, ② move your legs up and down, ③ keep both legs together and swing them side to side. Moving the hip joints spreads the load, and your lower back feels noticeably lighter. How to lie down and how to get up are in the quoted article ↓
Riri｜Fix your back pain with 5 minutes of self-care a day @seitai_tennis4

x.com/i/article/2041…

14 replies · 1.6K reposts · 8.9K likes · 912.1K views
Sung Woo Kim reposted
Guri Singh @heygurisingh
Holy shit... A guy got laid off, built an AI job search system on Claude Code, evaluated 740+ job offers with it, and landed a Head of Applied AI role. Then he open-sourced the entire thing.

It's called career-ops. One slash command. Full pipeline. Paste a job URL → get back a structured A-F evaluation, an ATS-optimized PDF tailored to that exact role, salary research, interview prep, and a tracker entry. All in one shot. No spreadsheets. No copy-pasting. No spray-and-pray.

Here's what's inside:
→ 14 skill modes (evaluate, scan, pdf, batch, apply, deep research, negotiation scripts, LinkedIn outreach)
→ Portal scanner pre-loaded with 45+ companies — Anthropic, OpenAI, ElevenLabs, Mistral, Cohere, Stripe, Retool, Vercel, Decagon, the works
→ 19 search queries across Ashby, Greenhouse, Lever, Wellfound, Workable
→ ATS-optimized PDF generation via Playwright with Space Grotesk + DM Sans
→ Go terminal dashboard built with Bubble Tea to browse your pipeline
→ Batch mode that evaluates 10+ offers in parallel using Claude sub-agents
→ An interview Story Bank that accumulates STAR+Reflection stories across evaluations until you have 5-10 master answers for any behavioral question
→ Auto-fill for application forms

The wildest part isn't the automation. It's the philosophy. Career-ops is explicitly NOT a spray-and-pray tool. It's a filter. The system literally refuses to recommend applying to anything scoring below 4.0/5. The whole point is to find the few offers worth your time out of hundreds, not to flood recruiters with garbage. It evaluates fit by reasoning about your CV vs the JD. Not keyword matching.

And because it's all built on Claude Code skills, you can ask Claude to rewrite the system itself. "Change the archetypes to backend roles." "Add these 10 companies." "Translate the modes to English." It reads the same files it uses, so it knows exactly what to edit.

8.2k stars already. 100% Open Source. MIT licensed. (Link in the replies)
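The "refuses to recommend anything below 4.0/5" behavior is just a threshold filter over scored evaluations. A minimal sketch, assuming a hypothetical schema (career-ops' real data format is not shown in the post):

```python
from dataclasses import dataclass

# Hypothetical schema: career-ops' actual evaluation format isn't public here.
@dataclass
class Evaluation:
    company: str
    score: float  # overall fit, 0.0-5.0

def worth_applying(evals, threshold=4.0):
    """Keep only evaluations at or above the cutoff, best first."""
    return sorted((e for e in evals if e.score >= threshold),
                  key=lambda e: e.score, reverse=True)

evals = [
    Evaluation("Acme AI", 4.6),
    Evaluation("Widget Co", 3.2),
    Evaluation("ExampleCorp", 4.1),
]
print([e.company for e in worth_applying(evals)])  # → ['Acme AI', 'ExampleCorp']
```

The point of the design is that the shortlist shrinks as the threshold rises, which is the opposite of spray-and-pray.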
113 replies · 429 reposts · 4.5K likes · 955.9K views
Sung Woo Kim reposted
Om Patel @om_patel5
THIS GUY GOT TIRED OF MANAGING AI AGENTS THROUGH TERMINALS AND DASHBOARDS SO HE BUILT THEM AN RPG WORLD

5 agents, and each one has a pixel character, a station, and they actually walk around the space.

When enough unresolved issues pile up, the agents walk to a meeting point and hold a council session. Four different models debating what to do next, not scripted. Each one reads the live system state independently.

In one session an agent pushed for cold outreach to close leads at 2am. Another one said that's a terrible look for an autonomous system contacting strangers while the operator sleeps. They ended up pivoting to an inbound strategy that none of them originally proposed.

Single HTML file, node bridge, and Phaser. Runs on a Mac Mini. Instead of reading logs and checking dashboards you just watch your little pixel agents walk around and talk to each other.

This is the most creative way I've seen anyone manage AI agents so far.
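The council mechanic described above is essentially a threshold trigger on an issue queue. A toy sketch of that one piece; every name and number here is illustrative, since the project's code isn't shown in the post:

```python
# Toy sketch of the council trigger: once unresolved issues cross a threshold,
# every agent is routed to a shared meeting point in the pixel world.
COUNCIL_THRESHOLD = 5
MEETING_POINT = (10, 7)  # hypothetical tile coordinates

def should_convene(unresolved):
    return len(unresolved) >= COUNCIL_THRESHOLD

def tick(agents, unresolved):
    """One game tick: send agents to the meeting point if a council is due,
    otherwise leave each at its station."""
    if should_convene(unresolved):
        return {name: MEETING_POINT for name in agents}
    return dict(agents)

agents = {"claude": (1, 2), "gpt": (4, 9), "gemini": (8, 3), "llama": (2, 6)}
issues = [f"lead-{i} unanswered" for i in range(6)]
print(tick(agents, issues))
```

With 6 unresolved issues, every agent's next position becomes the meeting point; with an empty queue they stay put.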
311 replies · 736 reposts · 7.7K likes · 641.8K views
Sung Woo Kim reposted
Ihtesham Ali @ihtesham2005
An MIT student figured out how to compress an entire semester of lecture content into one 90-minute study session. He calls it "context stacking," and it's the most unfair thing I've seen done with NotebookLM.

I asked him to walk me through it. He did. I haven't studied the same way since. Here's exactly what he does.

Two days before each lecture, he uploads everything into NotebookLM: the assigned readings, the previous week's slides, 3 or 4 related papers he finds himself, and any problem sets that are still open. Most students wait for the lecture to explain the material. He walks in having already built a mental model of it. That's step one. But it's not the move that makes it unfair.

The first prompt he runs across all of it: "What are the 5 core concepts this week's content is built on, and how do they connect to what I studied last week?" Not summarize. Not define. Connect. NotebookLM pulls threads across everything he uploaded simultaneously. It surfaces relationships between ideas that would take a normal student weeks of review to notice. He gets that map before the lecture even starts.

Then he runs the prompt that does most of the work: "What would I need to genuinely understand about this material to be able to teach it to someone with zero background in this subject?" That question is doing something most students never force themselves to do. It exposes exactly where his understanding is solid and exactly where it's hollow. The gaps show up immediately, and he spends the rest of the 90 minutes filling only those gaps. Not reviewing what he already knows. Only fixing what he doesn't.

The final prompt is the one that separates context stacking from every other study method I've heard of: "What question could a professor ask about this material that would expose a student who understood the surface but missed the underlying logic?" He's not studying for the exam he expects. He's studying for the exam designed to catch people who only think they understood it.

By the time he sits in the lecture hall, the professor is not teaching him anything new. The professor is confirming what he already mapped, filling in a few details, and occasionally surprising him with something he didn't anticipate. That surprise is the only thing he writes down.

Most students leave a lecture hoping the material will eventually click. He walks in with it already clicked, and uses the lecture to find out what he missed. That's not a study hack. That's a completely different relationship with learning.
121 replies · 1K reposts · 6.1K likes · 567.1K views
Sung Woo Kim reposted
Anish Moonka @anishmoonka
Your kid's piano teacher was reshaping their brain.

A Harvard-led team tracked children from age 6 to 9 and found that kids who practiced an instrument at least 2.5 hours a week grew the corpus callosum (the cable connecting the left and right halves of the brain) by about 25% in the region that handles movement planning. Kids who practiced less or quit showed zero growth there.

USC ran a separate study starting in 2012 that followed children from low-income LA neighborhoods. One group learned violin through the LA Philharmonic's youth orchestra program. A second did soccer. A third had no structured after-school program. Two years in, only the music group showed brain changes: stronger white-matter connectivity, faster maturation of auditory processing, and greater activation in networks involved in decision-making and impulse control. The soccer and no-program groups looked the same on brain scans.

A randomized trial at the University of Toronto tested 144 six-year-olds assigned to keyboard lessons, voice lessons, drama, or nothing for a full school year. The music kids gained about 7 IQ points on average. Drama and no-lessons kids gained 4-5. That roughly 3-point gap showed up across every subtest, including reading and math.

Now the language side. Bilingual kids outperform monolingual kids on task-switching tests (jumping between different sets of rules quickly), and it holds regardless of which second language they speak. Brain scans of nearly 1,300 children and young adults from a 2021 Georgetown and University of Reading study showed that bilinguals kept more grey matter (the layer where the brain's processing cells live) as they grew up than kids who spoke one language.

The long game is where this gets serious. A 2025 Monash University study of 10,893 Australians over 70 found that people who regularly played an instrument had 35% lower odds of developing dementia. Bilingualism shows an even sharper effect. Studies across India, Canada, and the US consistently find that bilingual adults develop dementia symptoms 4 to 5 years later than monolingual adults. A 2024 door-to-door survey of 1,234 people over 60 in Bengaluru, India, found dementia in 4.9% of monolinguals and just 0.4% of bilinguals.

Both piano and a second language work through a similar mechanism. They force the brain to manage competing systems at once: left hand versus right hand, one language versus another. That constant switching strengthens the frontal regions responsible for planning, focus, and filtering distractions, building what neurologists call cognitive reserve: a buffer that lets the brain keep working even as age-related damage accumulates.

Those parents running their kids between piano on Tuesdays and Mandarin on Thursdays were basically running a two-front neuroplasticity program without knowing it.
67 replies · 933 reposts · 7.5K likes · 1.2M views
Sung Woo Kim reposted
Jason Howerton @jason_howerton
My youngest son, 6, has started doing one of those things that's so special you just know you'll remember it until you die. He grabs my hand, drags me to his bedroom, closes the door and then pulls out a notepad. "Daddy, will you write 'loving notes' with me?" He will then write his message. It's usually something like, "I love daddy so much" — except imagine it in 6-year-old handwriting. Then it's my turn: "Austin is strong and brave and I love him to the moon and back." He'll want to repeat this process about 10 times before I call it. Ngl, "loving notes" is my new favorite thing. One day, God willing, I'll be an old man lying in bed reminiscing about doing "loving notes" with my little son. I'm in the good old days.
183 replies · 321 reposts · 8K likes · 149.4K views
Mike Futia @mikefutia
I just built a Claude Code skill that ships 50 static ad concepts to my desktop every morning 🤯

Feed it your reviews, your winning ads, and your top comments → it studies what's working → generates 50 fresh static ad concepts in your brand voice while you sleep. All inside Claude Code.

Perfect for DTC brands and agencies who are still briefing designers, waiting 3 days for 4 mediocre options, and burning hours in Canva trying to keep up with creative volume.

If you're running Meta Ads in 2026, you already know the math — the brands that win aren't the ones with the best single ad, they're the ones testing 20-50 new concepts per week. Most teams ship 5 if they're lucky.

This skill solves it:
→ Drop in your customer reviews, top comments, and winning ad screenshots
→ The skill studies what hooks, angles, and pain points are actually converting
→ Pulls from a library of 15 proven DR templates (us vs them, stat callouts, review cards, testimonial stacks, headline ads)
→ Writes 50 new concepts in your brand voice every morning on a schedule
→ Fires the prompts to Nano Banana 2 for finished images
→ Drops everything into a dated folder on your desktop, ready to upload to your CBO

No briefing designers. No 3-day turnarounds. No starting from scratch every Monday.

What you get:
- 50 fresh static ad concepts every single morning
- Concepts grounded in your real customer language and winning ads
- 15 proven DR templates baked in, customized to your brand
- Scheduled to run while you sleep — wake up, pick winners, upload
- One skill file you install once and use forever

Built 100% in Claude Code. I put together a full playbook with the skill file, the 15 templates, and the exact setup to get this running on a schedule.

Want it for free?
> Like this post
> Comment "STATICS"
And I'll send it over (must be following so I can DM)
809 replies · 38 reposts · 938 likes · 60.6K views
Sung Woo Kim reposted
Corey Ganim @coreyganim
Best breakdown of Karpathy's "second brain" system I've seen. My co-founder turned it into an actual step-by-step build.

The 80/20:
1. Three folders: raw/ (dump everything), wiki/ (AI organizes it), outputs/ (AI answers your questions)
2. One schema file (CLAUDE.md) that tells the AI how to organize your knowledge. Copy the template in the article.
3. Don't organize anything by hand. Drop raw files in, tell the AI "compile the wiki." Walk away.
4. Ask questions against your own knowledge base. Save the answers back. Every question makes the next one better.
5. Monthly health check: have the AI flag contradictions, missing sources, and gaps.
6. Skip Obsidian. A folder of .md files and a good schema beats 47 plugins every time.

He includes a free skill that scaffolds the whole system in 60 seconds.
Nick Spisak@NickSpisak_

x.com/i/article/2040…
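Steps 1-2 of the 80/20 above amount to creating three directories and a schema file. A minimal sketch; the folder names come from the tweet, while the schema text is illustrative, not the article's actual template:

```python
from pathlib import Path

# Starter schema: illustrative wording, not the template from the article.
SCHEMA = """\
# CLAUDE.md: how to maintain this knowledge base
- raw/: I drop source material here; never reorganize it.
- wiki/: you (the AI) maintain these .md articles, with backlinks.
- outputs/: answers to my questions; file the good ones back into wiki/.
"""

def scaffold(root="second-brain"):
    """Create raw/, wiki/, outputs/ and a starter CLAUDE.md under root."""
    base = Path(root)
    for name in ("raw", "wiki", "outputs"):
        (base / name).mkdir(parents=True, exist_ok=True)
    (base / "CLAUDE.md").write_text(SCHEMA)
    return base

scaffold()
```

After this runs, everything else in the list is the AI's job: you drop files into raw/ and ask for "compile the wiki."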

71 replies · 305 reposts · 3.3K likes · 582.2K views
Sung Woo Kim reposted
Archive @ArchiveExplorer
"Claude usage limit reached. Your limit will reset at 7pm" every. fucking. day. I was about to pay $200 for Max. Then I read this article.

98.5% of tokens: wasted. You're not paying for answers. You're paying for Claude to re-read its own homework 30 times.

I spent months blaming Anthropic for being greedy. Turns out the problem was how I write prompts. Five minutes of reading, and the Basic plan now handles more than my old Max did.
kaize@0x_kaize

x.com/i/article/2037…

310 replies · 785 reposts · 10.9K likes · 4.8M views
Sung Woo Kim reposted
Nav Toor @heynavtoor
🚨 Someone reverse-engineered the design systems of Apple, Spotify, Airbnb, and 30+ billion-dollar companies. Packed each one into a single file. Free.

It's called Awesome Design MD.

Drop one file into your project. Your AI agent builds UI that looks like Spotify. Or Apple. Or Airbnb. Instantly.

Not screenshots. Not Figma links. A single DESIGN.md file that captures every color, font, spacing value, button style, and layout pattern from a real website. In a format AI agents read and reproduce.

Here's the difference: tell Claude Code "build me a landing page" and it gives you generic UI. Tell Claude Code "build me a landing page" with Spotify's DESIGN.md in your project and it gives you Spotify.

Here's what's inside:
→ Apple. Premium white space, SF Pro typography, cinematic imagery.
→ Spotify. Vibrant green on dark, bold type, album-art-driven layout.
→ Airbnb. Warm coral accent, photography-driven, rounded UI.
→ Linear. Ultra-minimal, precise spacing, purple accent.
→ SpaceX. Stark black and white, full-bleed imagery, futuristic.
→ BMW. Dark premium surfaces, precise German engineering aesthetic.
→ NVIDIA. Green-black energy, technical power aesthetic.
→ Uber. Bold black and white, tight type, urban energy.
→ Sentry, PostHog, Raycast, Cursor, ElevenLabs, and 20+ more.

Here's how to use it:
→ Pick a design system from the collection
→ Copy the DESIGN.md file into your project root
→ Tell your AI agent to use it
→ Get UI that matches the design language of a billion-dollar company

That's it. One file. Your AI agent now has the design taste of a $200/hour design consultant.

Designers charge $5,000+ for a custom design system. Companies spend $50,000+ building one from scratch. This is free. 31 design systems. Copy. Paste. Ship beautiful UI.

Works with Claude Code, Cursor, Codex, and any AI coding agent that reads project files. 100% Open Source. MIT License.
138 replies · 741 reposts · 8K likes · 1.4M views
Sung Woo Kim reposted
Andrej Karpathy @karpathy
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need to share the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it, etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
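The "small and naive search engine over the wiki" mentioned above can be sketched in a few lines. This is my sketch, not Karpathy's code: plain term-frequency scoring over a folder of .md files, with a tiny demo corpus standing in for a compiled wiki:

```python
import re
from collections import Counter
from pathlib import Path

# Deliberately naive keyword search: score each page by the total count of
# the query's words. No ranking model, no index, just a walk over the files.
def search(wiki_dir, query, top_k=5):
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    hits = []
    for page in Path(wiki_dir).glob("**/*.md"):
        counts = Counter(w.lower() for w in re.findall(r"\w+", page.read_text()))
        score = sum(counts[t] for t in terms)
        if score:
            hits.append((score, page.name))
    return sorted(hits, reverse=True)[:top_k]

# Tiny demo corpus standing in for a compiled wiki.
Path("wiki").mkdir(exist_ok=True)
Path("wiki/attention.md").write_text("attention layers route attention across tokens")
Path("wiki/optimizers.md").write_text("adam and sgd schedule the learning rate")
print(search("wiki", "attention"))  # → [(2, 'attention.md')]
```

At the ~100-article scale described in the tweet, this kind of brute-force scan is fast enough that, as the author notes, fancy RAG machinery isn't needed.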

1K replies · 2.7K reposts · 25.8K likes · 6.5M views