swollenlikezyzz.eth
@shockingClit
piece of shit /// intern @flash_fortune



I implemented @GoogleResearch's TurboQuant as a CUDA-native compression engine on Blackwell B200. 5x KV cache compression on Qwen 2.5-1.5B, near-lossless attention scores, generating live from compressed memory. 5 custom cuTile CUDA kernels ft:
- fused attention (with QJL corrections)
- online softmax
- on-chip cache decompression
- pipelined TMA loads
Try it out: devtechjr.github.io/turboquant_cut…
s/o @blelbach and the cuTile team at @nvidia for lending me Blackwell GPU access :) cc @sundeep @GavinSherry
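Mechanically, this style of KV-cache compression comes down to quantizing the cached key/value tensors to a handful of bits and dequantizing them on the fly right before the attention math. Below is a minimal NumPy sketch of that idea under illustrative assumptions (per-channel symmetric 4-bit quantization, keys only); it is not the TurboQuant algorithm or the fused cuTile kernels from the post.

```python
# Minimal sketch of low-bit KV-cache quantization + on-the-fly dequantization.
# Illustrative only: per-channel symmetric 4-bit quantization of the key cache,
# NOT TurboQuant or the fused cuTile kernels described in the post.
import numpy as np

def quantize_per_channel(x: np.ndarray, bits: int = 4):
    """Quantize a [seq, head_dim] tensor to signed ints, one scale per channel."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 7 for 4-bit
    scale = np.abs(x).max(axis=0, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)          # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
seq, head_dim = 512, 64
k_cache = rng.standard_normal((seq, head_dim)).astype(np.float32)
query = rng.standard_normal((head_dim,)).astype(np.float32)

q_k, k_scale = quantize_per_channel(k_cache, bits=4)

# Attention logits from the full-precision cache vs. the compressed cache,
# dequantizing "on read" the way a fused kernel would before the dot product.
logits_ref = k_cache @ query / np.sqrt(head_dim)
logits_q = dequantize(q_k, k_scale) @ query / np.sqrt(head_dim)

print("max |logit error|:", np.abs(logits_ref - logits_q).max())
```

In a fused-kernel version, the dequantization happens in on-chip memory as tiles are loaded, which is what makes "generating live from compressed memory" feasible.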

Some pictures of cats in the DPRK


BREAKING 🚨: This is Matthew Gallagher, who created 800+ fake-doctor Facebook accounts to run ads, then went on to build a GLP-1 telehealth company with just $20,000, AI, and only one full-time teammate: his brother. It generated $401M in 2025 and could reach $1.8B in 2026.

The male G-spot revealed - and everyone who guessed it's in the rear was wrong: 'Intensely pleasurable' trib.al/apa2iEK

a simple Propolis toothpaste could do more for your 🍆 than low-dose Cialis

first vibecoded billion-dollar company?

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
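As a concrete illustration of the "small and naive search engine over the wiki" mentioned in the Extra tools step, here is a minimal stdlib-only Python CLI sketch; the wiki/ layout of .md files and the TF-IDF-style scoring are assumptions for illustration, not the author's actual tool.

```python
#!/usr/bin/env python3
# Naive keyword search over a directory of markdown files (the "wiki").
# Illustrative sketch only; scoring is simple term-frequency * IDF per document.
import math
import re
import sys
from collections import Counter
from pathlib import Path

TOKEN_RE = re.compile(r"[a-z0-9]+")

def tokenize(text: str) -> list[str]:
    return TOKEN_RE.findall(text.lower())

def build_index(wiki_dir: Path) -> dict[Path, Counter]:
    """Map each .md file in the wiki to its term counts."""
    return {p: Counter(tokenize(p.read_text(errors="ignore")))
            for p in sorted(wiki_dir.rglob("*.md"))}

def search(index: dict[Path, Counter], query: str, k: int = 10):
    terms = tokenize(query)
    n_docs = len(index) or 1
    # document frequency per query term, for a crude IDF weight
    df = {t: sum(1 for counts in index.values() if t in counts) for t in terms}
    scored = []
    for path, counts in index.items():
        score = sum(counts[t] * math.log(1 + n_docs / (1 + df[t])) for t in terms)
        if score > 0:
            scored.append((score, path))
    return sorted(scored, reverse=True)[:k]

if __name__ == "__main__":
    wiki, query = Path(sys.argv[1]), " ".join(sys.argv[2:])
    for score, path in search(build_index(wiki), query):
        print(f"{score:8.2f}  {path}")
```

Usage would look like `python search_wiki.py wiki/ "kv cache quantization"`; an LLM agent can call the same script via CLI and read the ranked paths before opening the files it needs.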

They clamped both carotid arteries in a rat’s neck shut. For 20 minutes. Zero blood to the brain.

Brain damage. Hippocampal lesions. Memory wiped. Motor coordination destroyed. The untreated rats never recovered. The brain never even tried to repair itself.

The only thing that reversed the damage was BPC-157. Memory fully restored. Coordination fully restored. Hippocampal neurons recovered at both 24 AND 72 hours. Not compensated. Not retrained. Reversed. (PMID: 32558293)

Stroke is the #1 cause of long-term disability in the US. 700,000 Americans every year. Most survivors never return to baseline. Ever.

You survived. Everyone told you that’s what matters. But surviving a stroke and recovering from one are two completely different things. You relearned how to button your shirt at 58. You do speech therapy 3 times a week. You write lists for things you used to remember without thinking. You tell people you’re doing great because you’re tired of the look on their faces when you say you’re not. You stopped expecting to get better. You just adapted. And everyone around you called that recovery.

Your neurologist prescribed rehab. Your PT retrains your muscles. Your speech therapist retrains your words. Every single one of them is teaching your brain to work around damage that nobody tried to repair. Your aspirin prevents the next clot. Your statin manages cholesterol. Your blood pressure medication adjusts the number. They’re protecting you from the NEXT stroke while nobody repairs the damage from the FIRST one.

Researchers cut blood flow to a rat’s brain completely. 20 minutes. The exact model for human stroke. BPC-157 reversed both early and delayed brain damage and achieved full functional recovery. A rat had zero blood to its brain for 20 minutes and BPC-157 brought its memory back. Your post-stroke fog is a simpler ask.

→ Blood to brain cut off completely: reversed
→ Brain damage: repaired at 24h AND 72h
→ Memory: fully restored
→ Motor coordination: fully restored
→ Side effects: zero

Your rehab retrains the brain around what’s broken. Your medication prevents the next event. Neither repairs the damage from the one that already happened. That brain damage isn’t permanent. It’s unrepaired. Your rehab adapts to the damage. BPC-157 reversed it.

Not FDA-approved. Preclinical evidence. Not medical advice.
