swollenlikezyzz.eth

8.5K posts


@shockingClit

piece of shit /// intern @flash_fortune

Joined March 2022
2.4K Following · 231 Followers
Pinned Tweet
swollenlikezyzz.eth @shockingClit
it was a psyop set up from the beginning
4 replies · 0 retweets · 13 likes · 25.4K views
swollenlikezyzz.eth retweeted
XLadyGlow @xladyglow
Luxury Container Cliff House Build Overlooking River 🏝️😍
7 replies · 189 retweets · 1.1K likes · 215.6K views
swollenlikezyzz.eth retweeted
Saamir Mithwani @ssaaammiirr
Hot take: the Medvi story has nothing to do with AI. A guy built a $1.8B company with 2 people. Cool. But strip away the AI headline and look at what's underneath: telehealth subscriptions.

I've been in this space. I've seen the numbers up close. The LTVs are borderline unfair compared to every other business model.

Traditional ecom: $50 AOV, maybe $80 LTV if you're lucky, 20% returns, warehouse headaches, margin compression.
Telehealth: $200+/mo, $1,200-2,400 LTV, pharmacy ships direct, no inventory, no returns, recurring revenue.

AI made him efficient. Telehealth made him a billionaire. Those are two very different things.

And the craziest part? He only did weight loss. ONE vertical. ED alone is a $5B+ market. Hair loss. Hormones. Peptides. Anti-aging. Skincare. Mental health. Each one of these is a billion-dollar telehealth company waiting to be built.

The reason more people haven't done it is that the infrastructure is brutal: doctors, pharmacies, compliance, prescriptions, patient management. That's why you need a platform like Rimo.co. Full-stack telehealth OS. Everything you need to launch a brand like this without building it from scratch. AND YOU OWN ALL YOUR DATA AND TOKENS.

The next wave of billionaires is coming out of telehealth. Not SaaS. Not ecom. Telehealth. rimo.co
nic carter@nic_carter

first vibecoded billion-dollar company?

20 replies · 34 retweets · 424 likes · 72.1K views
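For context on the arithmetic in the post above: the quoted $1,200-2,400 telehealth LTV range falls out of a simple price-times-retention calculation. A minimal sketch, assuming 6-12 months of average retention at $200/mo; the retention figures are my assumption, not the author's:

```python
# Illustrative reading of the LTV comparison in the post above.
# Retention assumptions are hypothetical, chosen only to reproduce
# the $1,200-2,400 range quoted in the post.

def subscription_ltv(monthly_price: float, months_retained: float) -> float:
    """Lifetime value of a recurring subscriber: price x average retention."""
    return monthly_price * months_retained

telehealth_low = subscription_ltv(200, 6)    # $1,200 at an assumed 6 months retained
telehealth_high = subscription_ltv(200, 12)  # $2,400 at an assumed 12 months retained
ecom_ltv = 80.0                              # "$50 AOV, maybe $80 LTV" per the post

print(f"telehealth LTV: ${telehealth_low:,.0f} - ${telehealth_high:,.0f}")
print(f"ecom LTV:       ${ecom_ltv:,.0f} "
      f"({telehealth_low / ecom_ltv:.0f}x-{telehealth_high / ecom_ltv:.0f}x smaller)")
```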
swollenlikezyzz.eth retweeted
elvis @omarsar0
Building a personal knowledge base for my agents is increasingly where I spend my time these days. Like @karpathy, I also use Obsidian for my MD vaults.

What's different in my approach is that I curate research papers on a daily basis and have spent months tuning a Skill to find high-signal, relevant papers. I was reviewing and curating papers manually for some time, but now it's all automated, as it has gotten so good at capturing what I consider the best of the best. There are so many papers these days, so this is a big deal. You all get to benefit from that with the papers I feature in my timeline and on @dair_ai.

The papers are indexed using @tobi qmd cli tool (all of it in markdown files along with useful metadata). So good for semantic search and surfacing insights, unlike anything out there.

I am a visual person, so I then started to experiment with how to leverage this personal knowledge base of research papers inside my new interactive artifact generator (mcp tools inside my agent orchestrator system). The result is what you see in the clip. 100s of papers with all sorts of insights visualized.

I keep track of research papers daily, so believe me when I tell you that this system is absolutely insane at surfacing insights. This is the result of months of tinkering on how to index research and leverage agent automations for wikification and robust documentation.

But this is just the beginning. The visual artifact (which is interactive too) can be changed dynamically as I please. I can prompt my agent to throw any data at it. I can add different views to the data. Different interactions.

I feel like this is the most personalized research system I have ever built and used, and it's not even close. The knowledge that the agents are able to surface from this basic setup is already extremely useful as I experiment with new agentic engineering concepts. I feel like this knowledge layer and the higher-level ones I am working on will allow me to maximize other automation tools like autoresearch.

The research is only as good as the research questions. And the research questions are only as good as the insights the agents have access to.

Where I am spending time now is on how to make this more actionable. I am obsessed with the search problem here. The automations, autoresearch, ralph research loop (I built one months ago) are easier to build but are only as good as what you feed them.

Work in progress. More updates soon. Back to building.
Andrej Karpathy@karpathy

LLM Knowledge Bases. Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. [quoted tweet, reproduced in full in the retweet below]

107 replies · 334 retweets · 3.4K likes · 311.5K views
swollenlikezyzz.eth retweeted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
1.7K replies · 3.7K retweets · 34K likes · 8.1M views
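The ingest-and-compile loop Karpathy describes (documents in raw/ incrementally turned into a markdown wiki that the LLM maintains) can be pictured with a small sketch. This is not his code: summarize() is a hypothetical stand-in for whatever LLM call does the writing, the raw/, wiki/ and index.md layout follows the tweet's description, and everything else is an assumption for illustration.

```python
# Minimal sketch of a raw/ -> wiki/ "compile" step, per the workflow above.
# summarize() is a placeholder for an LLM call; layout follows the tweet.
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def summarize(text: str) -> str:
    """Placeholder for an LLM that returns a markdown summary of a source doc."""
    first_line = text.strip().splitlines()[0] if text.strip() else ""
    return f"(summary placeholder) {first_line[:120]}"

def compile_wiki() -> None:
    RAW.mkdir(exist_ok=True)
    WIKI.mkdir(exist_ok=True)
    index_lines = ["# Index", ""]
    for src in sorted(RAW.glob("*.md")):
        dst = WIKI / src.name
        # Incremental compile: only refresh articles whose source changed.
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            body = src.read_text(encoding="utf-8")
            article = f"# {src.stem}\n\n{summarize(body)}\n\nSource: raw/{src.name}\n"
            dst.write_text(article, encoding="utf-8")
        index_lines.append(f"- [[{dst.stem}]]")  # Obsidian-style wikilink
    (WIKI / "index.md").write_text("\n".join(index_lines) + "\n", encoding="utf-8")

if __name__ == "__main__":
    compile_wiki()
```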
swollenlikezyzz.eth retweeted
Nick Durham @pnickdurham
The most expensive way to build a house is in a factory. Prefab housing costs more than building by hand. After centuries of attempts, new technology waves, literal robots working in factories, and billions of dollars, it's crazy that's still true.

We've tried vertical integration down to the nail, we've tried automated factories, we've tried reimagining the wall itself, we've tried panel plants from large building product companies. Since the 1970s, we've seen over 30 prefab companies in the US alone take a swing. When they inevitably fail, the value narrative quickly changes to "production capacity," "efficiency," and "quality." Cop-out answers, respectfully, because cost has always been the reason one ventures into prefab.

What have we learned from the collective trauma of centuries of failure? What assumptions can we question? One common thread is the belief that production must be centralized, a la manufacturing environments. Scale comes from centralization, of course. However, in construction, centralized factories run into three huge constraints.

1/ CapEx. A traditional factory can cost $25M+ and needs high utilization for decades to amortize. Housing starts can swing 30-40% cycle to cycle, so being able to withstand periods when demand slows and fixed costs don't is nonnegotiable. When demand inevitably picks back up, the factory's capacity is also fixed and can't chase it.

2/ Transport. Finished wall panels are bulky, fragile, and expensive to ship. Kitting is promising; however, the more finished the product, the bigger the risk. A fixed factory is locked inside a ~150-200 mile delivery radius. Panelizers, for example, need ~180 plants just to serve a fraction of the US market.

3/ Capacity mismatch. Builders need flexible, local capacity that flexes with their pipeline. Factories need standardization and steady volume. These needs are always at odds.

As a result of these constraints, 85% of US residential construction buys manual stick-frames.

A shippable microfactory is an attempt to try something new in prefab. Instead of shipping bulky wall panels from a distant plant, you ship the factory itself and do construction on-site. The factory is located on or near the job site, so transport costs for finished components drop to nearly zero. And when the project ends, the factory moves to the next one or the next builder. Need 50 homes? Deploy one unit. Need 500? Deploy a fleet. Capacity matches the pipeline, not the other way around.

Seems promising enough, right? A few companies are running with this concept.

@GillesRetsin and AUAR sell structural framing capacity. They ship a containerized robotic cell to your jobsite, integrate with your design process, and fabricate a full home's structural shell in under 8 hours (roughly 80% less framing labor than a manual crew). The builder never owns the factory. AUAR owns and deploys it, and the builder pays per unit of output. CapEx goes to literal zero.

@AGampel1 and Cuby Technologies are doing shippable microfactories but taking a full vertical-integration approach. They JV with a developer and deploy a ~$10M microfactory that produces a complete kit of parts, foundation through finishes. Their first US factory is launching in Nevada, tied to a 3,000+ home pipeline.

(The 3DP companies are also an example of this, shipping printers to jobsites, but we'll leave them in a separate category for now.)
[3 media attachments]
70 replies · 60 retweets · 601 likes · 72.1K views
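To make constraint 1/ in the post above concrete: a fixed factory's overhead per home swings hard with housing-start cycles. A back-of-envelope sketch; only the $25M CapEx and the 30-40% demand swing come from the post, while the amortization period and output figures are assumptions for illustration.

```python
# Illustrative only: how a ~40% swing in output moves per-home factory overhead.
# $25M CapEx and the 30-40% demand swing are from the post above; the 20-year
# amortization and 500 homes/yr peak output are assumed.

def fixed_cost_per_home(annual_fixed_cost: float, homes_per_year: float) -> float:
    """Factory overhead absorbed by each home produced in a given year."""
    return annual_fixed_cost / homes_per_year

capex = 25_000_000
annual_fixed = capex / 20                              # straight-line, ignoring other fixed opex

peak = fixed_cost_per_home(annual_fixed, 500)          # assumed peak-cycle output
trough = fixed_cost_per_home(annual_fixed, 500 * 0.6)  # output after a 40% drop in starts

print(f"per-home overhead at peak:   ${peak:,.0f}")
print(f"per-home overhead in trough: ${trough:,.0f} (+{trough / peak - 1:.0%})")
```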
swollenlikezyzz.eth retweeted
Mikronous @mikronous
On Meganisi: "A little island in the shadow of Lefkada, with a magical view of the Ionian and a hidden residence"
[2 media attachments]
14 replies · 26 retweets · 220 likes · 20.1K views
swollenlikezyzz.eth retweeted
Tech Tech China @techtechchina
A few facts:
1️⃣ The person who pulled 510k lines of #Claude Code source code from a 60MB source map is a 25-year-old from China.
2️⃣ His LinkedIn: UCSB in 3 years, 4.0 GPA. His comment: "too easy." Dropped out of a Berkeley PhD after 2-3 years. Comment: "lol."
3️⃣ White hat. Found bugs in X and Chrome. $1.9M in bounties.
4️⃣ He called out Anthropic last year for scraping user code under the guise of "safety reviews."
5️⃣ After the leak, his take: "Claude's code is nowhere near as interesting as OpenCode's."
[3 media attachments]
56 replies · 289 retweets · 3.4K likes · 373.2K views
swollenlikezyzz.eth retweeted
Dave Font @davefontenot
Introducing 997.ai
The residency for repeat-unicorn Chinese founders. Based in Shenzhen. First batch this fall. DM if there's a founder you think we should meet.
[1 media attachment]
66 replies · 22 retweets · 535 likes · 59.2K views
swollenlikezyzz.eth retweeted
Mr living @living001155211
What a wonderful life this can be! For some, like me, this is the dream ✨️
[1 media attachment]
2 replies · 15 retweets · 103 likes · 1.6K views
swollenlikezyzz.eth retweeted
Bizlet @bizlet7
BPC-157 is like the fountain of youth and that’s the issue. Everyone thinks it’s too good to be true and will cause turbo cancer.
Alfred 🏄🏻‍♀️@HealthyAlfred

They clamped both carotid arteries in a rat's neck shut. For 20 minutes. Zero blood to the brain. Brain damage. Hippocampal lesions. Memory wiped. Motor coordination destroyed. The untreated rats never recovered. The brain never even tried to repair itself.

The only thing that reversed the damage was BPC-157. Memory fully restored. Coordination fully restored. Hippocampal neurons recovered at both 24 AND 72 hours. Not compensated. Not retrained. Reversed. (PMID: 32558293)

Stroke is the #1 cause of long-term disability in the US. 700,000 Americans every year. Most survivors never return to baseline. Ever. You survived. Everyone told you that's what matters. But surviving a stroke and recovering from one are two completely different things.

You relearned how to button your shirt at 58. You do speech therapy 3 times a week. You write lists for things you used to remember without thinking. You tell people you're doing great because you're tired of the look on their faces when you say you're not. You stopped expecting to get better. You just adapted. And everyone around you called that recovery.

Your neurologist prescribed rehab. Your PT retrains your muscles. Your speech therapist retrains your words. Every single one of them is teaching your brain to work around damage that nobody tried to repair. Your aspirin prevents the next clot. Your statin manages cholesterol. Your blood pressure medication adjusts the number. They're protecting you from the NEXT stroke while nobody repairs the damage from the FIRST one.

Researchers cut blood flow to a rat's brain completely. 20 minutes. The exact model for human stroke. BPC-157 reversed both early and delayed brain damage and achieved full functional recovery. A rat had zero blood to its brain for 20 minutes and BPC-157 brought its memory back. Your post-stroke fog is a simpler ask.

→ Blood to brain cut off completely: reversed
→ Brain damage: repaired at 24h AND 72h
→ Memory: fully restored
→ Motor coordination: fully restored
→ Side effects: zero

Your rehab retrains the brain around what's broken. Your medication prevents the next event. Neither repairs the damage from the one that already happened. That brain damage isn't permanent. It's unrepaired. Your rehab adapts to the damage. BPC-157 reversed it.

Not FDA-approved. Preclinical evidence. Not medical advice.

30 replies · 68 retweets · 1.7K likes · 178.2K views
swollenlikezyzz.eth retweeted
Philip Oldfield @SustainableTall
Apartment design in Zurich.
Three apartments per core, with every living space being dual aspect, ensuring good access to light and cross ventilation.
By Blattler Dafflon Architekten
[2 media attachments]
5 replies · 55 retweets · 584 likes · 22.3K views
swollenlikezyzz.eth retweeted
StarPlatinum @StarPlatinum_
This is insane. A man alone built a $1.8B company in months using AI.

Matthew Gallagher
- founder of Medvi
- based in LA
- self-taught programmer

before this
- builds websites
- experiments with online businesses
- fails multiple times

then AI arrives
- ChatGPT, Claude, Grok for code and text
- Midjourney, Runway for visuals
- he uses everything

Medvi begins
- telemedicine platform
- focused on weight loss drugs (GLP-1)
- built in 2 months
- starting capital is $20,000

first traction
- 300 customers in month 1
- +1,000 customers in month 2

growth explodes
- 2025: $401M in revenue (first full year)
- 2026: projected $1.8B revenue, 500,000+ patients

team
- basically just him
- plus his younger brother
- minimal contractors

how it runs
- AI writes the code
- AI creates the ads
- AI handles support
- AI analyzes performance
- he still oversees everything, fixes chatbot mistakes and manages marketing decisions

today
- generating millions personally
- donating to charity
- planning a foundation for homeless youth

what this proves
- you don't need a big team anymore
- you don't need huge capital
- AI removed the barrier

the man who scaled faster than anyone expected
[2 media attachments]
83 replies · 126 retweets · 1.6K likes · 213.6K views
swollenlikezyzz.eth retweeted
Nate Lorenzen @anatelorenzen
The new King of DTC Twitter - @galligator. 3 ads. $1.8 Billion valuation.
[3 media attachments]
18 replies · 55 retweets · 804 likes · 84.5K views
swollenlikezyzz.eth retweeted
Brian Blum @brian_blum1
Storytime for people who don't think this is real.

Last year I launched a GLP-1 biz as well. We did ~$20M in rev in 6 months. It's surprisingly easy to set up but ruthlessly competitive, because there is virtually no differentiation. Every brand is selling the same stuff. As a result it is a pure marketing arms race, and Medvi was the best.

They are known for a few things:
- Shadowy billing practices
- Highest-converting lead funnel
- Running thousands of AI UGC / theme pages

When you search Medvi in the Meta Ads Library, you'd almost never see something running from their page. They heavily rely on partnership ads, whitelisting, and listicles / advertorials. On top of leaning heavily into publisher affiliate (Forbes "Best GLP1 Providers") and TikTok's beta for telehealth.

Our funnel was primarily Meta Ads using TikTok Shop-style UGC. Worked well til it didn't. The customer is very price conscious and, as a result, switches between several brands' intro offers. So tons of brands spent into CACs expecting LTVs that didn't materialize. Huge revenue numbers but not a ton of super profitable companies.

Anyways, I've never seen the speed with which we got to $4M/mo in rev. The market was that good. And behind the scenes everyone knew Medvi, Remedy Meds, and Amble were doing $300M+.

Everything in the article is true and this guy is a dog.
nic carter@nic_carter

first vibecoded billion-dollar company?

108 replies · 160 retweets · 3.2K likes · 552.6K views
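The "spent into CACs expecting LTVs that didn't materialize" dynamic in the thread above is easy to see with a toy model of intro-offer churn. Every number here is a hypothetical assumption for illustration, not a figure from the thread.

```python
# Toy model of the unit-economics trap described above: a CAC justified by a
# modeled retention curve, versus customers who hop away after the intro offer.
# All inputs are hypothetical assumptions, not figures from the thread.

def realized_ltv(intro_price: float, full_price: float,
                 months_retained: float, gross_margin: float) -> float:
    """Gross profit actually collected from one subscriber (month 1 at intro price)."""
    full_price_months = max(months_retained - 1, 0)
    return gross_margin * (intro_price + full_price * full_price_months)

cac = 250.0                              # what the ad auction gets bid up to
modeled = realized_ltv(99, 299, 8, 0.6)  # the retention everyone underwrote
actual = realized_ltv(99, 299, 2, 0.6)   # offer-hoppers gone after ~2 months

print(f"modeled LTV: ${modeled:,.0f} -> profit per customer ${modeled - cac:,.0f}")
print(f"actual LTV:  ${actual:,.0f} -> profit per customer ${actual - cac:,.0f}")
```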