Matt Bishop @MatthewTBishop
5.5K posts

CEO @OpenCityLabs, AI-powered Social Health Information Exchange to advance health & address social determinants of health; speaker @OIGatHHS @NCQA @SXSW @UN

Ithaca, NY · Joined May 2010
2.6K Following · 1K Followers
Matt Bishop @MatthewTBishop:
@martyrdison Is ugliness the primary thing you are looking for in a man? Or is it more like billionaires, or Nobel Prize-winning scientists, writers, etc.? And if you are ugly, meh 😂
Matt Bishop retweeted
Bryan Johnson @bryan_johnson:
Your gums may be a back door to Alzheimer's.
> gingipain antigens (toxic bacterial enzymes) found in 91–96% of postmortem Alzheimer's brains
> bacterial DNA detected in the spinal fluid of 7 out of 10 living Alzheimer's patients
> in mice: oral infection increased brain tau tangles by roughly 500% and amyloid plaques by 140%
> in 3,251 humans: 22% higher Alzheimer's risk per SD increase in gum pathogen antibodies (up to 26-year follow-up)
> clinical data: a protease inhibitor slowed cognitive decline by 57% in patients with active infection
Your dentist may be your most underrated Alzheimer's doctor.
Matt Bishop retweeted
Daniel Kraft, MD @daniel_kraft:
The future of medicine is arriving faster than our training models are evolving. A student starting medical school in 2026 won't earn their M.D. until 2030, and likely won't finish residency or practice independently until 2033 or beyond. By then, the clinical and technological landscape will look dramatically different from the one many current curricula were designed for.

We are only a few years into the #GenAI era, and already medicine is being reshaped by multimodal data, AI-assisted decision support, remote patient monitoring, digital health, and new models of continuous, personalized care, not to mention agentic health and the growing direct-to-consumer shift in health(care).

So we need to ask some uncomfortable but necessary questions: How should we be selecting future physicians? What should they actually be trained to do? And how should we evaluate readiness in a world where information is abundant, AI is increasingly capable, and human judgment matters more than ever?

I recently had the opportunity to keynote the leadership of the NBME, the organization behind the #USMLE exams that serve as a powerful "north star" for much of medical education. To their credit, NBME is proactively exploring the future of assessment and training. My message was simple: if the landscape of care is changing, with many clinicians already using AI to augment diagnostic and therapeutic decisions, then the metrics we use to train and assess physicians, and clinicians more broadly, must evolve as well. It's time for a kind of Flexner Report 2.0.

That means moving beyond legacy training and assessment models toward medical education built for modern practice:
• Real-world assessment that reflects the complexity and ambiguity of actual care
• AI-enabled OSCEs and immersive simulations using virtual and augmented reality
• Fluency in AI, digital health, multimodal and real-world data, nutrition, prevention, and design thinking
• Training physicians not just to recall facts, but to synthesize information, ask better questions, use tools wisely, and deliver human care
• Preparing clinicians not only to manage disease, but increasingly to optimize healthspan across the lifespan

The key question is no longer just what we should add to the curriculum. It's also what we should stop teaching, streamline, or offload to technology to make room for what matters most. Technology should not just be another subject in medical school. Increasingly, it will become part of the platform through which medicine is learned, practiced, and improved.

The future of healthcare will not belong to those who simply know the most facts. It will belong to those who can integrate data, leverage intelligent tools, adapt continuously, and still show up with empathy, wisdom, and human connection. The transition is already underway. Are we ready to redesign medical education for the world ahead? #MedEd
Matt Bishop retweeted
How To AI @HowToAI_:
Yann LeCun was right the entire time. And generative AI might be a dead end.

For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works.

Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality.

He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space."

But for years, JEPA had a fatal flaw. It suffered from "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads.

Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat. It is forced to understand the physical structure of reality to make its predictions.

The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events.

We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.
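The tweet doesn't link the paper, but the mechanism it describes, penalizing an encoder whose batch of embeddings drifts away from a standard Gaussian so that collapsing everything to one point becomes expensive, can be sketched in a few lines of NumPy. Everything below, including the exact penalty form, is an illustrative assumption, not the actual LeWM formulation:

```python
import numpy as np

def gaussian_regularizer(z: np.ndarray) -> float:
    """Penalize deviation of a batch of embeddings z (shape n x d) from N(0, I).

    If the encoder collapses (every input maps to the same point), the sample
    covariance goes to zero and the penalty grows, so "cheating" by collapse
    is no longer free. The specific penalty here is a hypothetical stand-in.
    """
    mu = z.mean(axis=0)
    cov = np.cov(z, rowvar=False)
    d = z.shape[1]
    # squared distance of the batch mean from 0, plus squared Frobenius
    # distance of the batch covariance from the identity
    return float(mu @ mu + np.sum((cov - np.eye(d)) ** 2))

rng = np.random.default_rng(0)
healthy = rng.standard_normal((512, 8))   # well-spread "thought space"
collapsed = np.full((512, 8), 0.1)        # a dog, a car, and a human all identical
print(gaussian_regularizer(healthy) < gaussian_regularizer(collapsed))  # True
```

The collapsed batch is punished because its covariance is the zero matrix, far from the identity, which is exactly the failure mode the post says the regularizer rules out.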
Matt Bishop @MatthewTBishop:
@bryan_johnson 1.5 billion data points! The longest context windows are only ~2M tokens, and most EHR data is noise. We should talk. Agents-on-Fire reduces token consumption by up to 96% on complex medical records by more efficiently feeding data into your LLM of choice. AlignHealthcare.ai
Bryan Johnson @bryan_johnson:
I got C-holed. Suffered sleep consequences. I busted my screens-off rule. Turned down socializing. Fell behind on work. Kate is now upset.

AI is preposterous. As close to magic as I've experienced (except a seed becoming a tree and a zygote becoming a baby).

It started on April 2nd when Karpathy shared LLM knowledge bases. I wondered if this was the opening to structure the 1.5 billion data points I've collected on my body over the past five years. It's the most dynamic n=1 biomarker dataset in history. It was just sitting there. Next thing I knew two weeks had passed and Kate was wondering if she lost her boyfriend to Claude.

I'm non-technical. Which honestly makes me sad. I wish I'd grown up with a computer or at least been around engineer culture. I didn't know anyone technical until my early 20s. I became an entrepreneur at 21 and had my first of three kids at 25. I sold Braintree Venmo at 34. Learning to code stayed on my to-do list through all of it. The timing was never right. I was always on the outside looking in, wishing I had the skills to assemble 0's and 1's into digital structures.

The exhilaration I've felt in the past two weeks is hard to explain. The 1.5 billion data points became a functional database, queryable, a microscope into my 70 trillion cells. The biological age of my organs updated in real time like stock tickers. My build morphed from a knowledge base into a breathing organism that was self-learning and in sync with my heartbeat. I did this entirely on my own. It's buggy, breaks, and the data needs to be cleaned, but damn it's cool.

It became a mirror and ledger, one I could ask questions of. About my psyche, behavioral patterns, biology, and protocols. Patterns across my life I couldn't previously connect. It's made me insatiably hungry for more data. I've written about Autonomous Health, how cars now drive themselves and software wires itself. Health is next. My build showed me what it looks like in practice.

Before Kate started protesting, she joked that she felt relieved for herself, our colleagues, and the world that I'd found something that matches my energy. That they could all breathe a sigh of relief. It's true. This experience left me wondering if I've been bored my entire life. Never having found something that could match my work ethic, speed, intensity, and build capacity. Something that didn't have the delays of the real world, human complications, or logistical drag.

Two weeks deep in AI and I'm realizing that when people talk about AI, they're not talking about the same thing. Someone using a chat interface has a completely different opinion than someone building with it. And that chasm deepens for the people seeing what's coming next but isn't yet public. Society can't have a coherent conversation about AI because everyone's intuitions are calibrated to a different version of it.

Off-the-shelf LLMs are mostly useless beyond narrow tasks. When they get you 80% there, it's often faster to do the whole thing yourself. And they're dangerous because the hallucination is hard to detect. Now you don't know what you don't know. Give them expanded context, memory, and architectures for self-reflection and autonomous learning, and you start to realize that AI is bigger than any of us can fit in our context window.

I need to take Kate on a date, turn my screens off on time, and get some work done. And then properly dose C.

Note: the image above is my 2021 baseline when starting this longevity project.
Matt Bishop retweeted
Seth Howes @SethSHowes:
I've wanted to do this for a decade. But I never did - I refuse to give any company my DNA. It is me.

So this week I sequenced my genome entirely at home. Literally on my kitchen table. I never exposed my DNA sequence to the internet. Not at any point.

I used a MinION to do the sequencing (it's smaller and weighs less than an iPhone). I used open-source DNA models for the analysis (Evo2 and AlphaGenome) running locally on a DGX Spark and Mac Studio. I traced mechanisms behind my family's multigenerational autoimmune conditions that no clinician has been able to understand.

When I set out to do this I didn't know if it would actually work. It does.

Your genome is the most private data you will ever have. You probably shouldn't let it leave your house.
Quoted: Patrick Collison @patrickc:

I'm lucky enough to have a great doctor and access to excellent Bay Area medical care. I've taken lots of standard screening tests over the years and have tried lots of "health tech" devices and tools. With all this said, by far the most useful preventative medical advice that I've ever received has come from unleashing coding agents on my genome, having them investigate my specific mutations, and having them recommend specific follow-on tests and treatments.

Population averages are population averages, but we ourselves are not averages. For example, it turns out that I probably have a 30x(!) higher-than-average predisposition to melanoma. Fortunately, there are both specific supplements that help counteract the particular mutations I have, and of course I can significantly dial up my screening frequency. So, this is very useful to know. I don't know exactly how much the analysis cost, but probably less than $100. Sequencing my genome cost a few hundred dollars.

(One often sees papers and articles claiming that models aren't very good at medical reasoning. These analyses are usually based on employing several-year-old models, which is a kind of ludicrous malpractice. It is true that you still have to carefully monitor the agents' reasoning, and they do on occasion jump to conclusions or skip steps, requiring some nudging and re-steering. But, overall, they are almost literally infinitely better for this kind of work than what one can otherwise obtain today.)

There are still lots of questions about how this will diffuse and get adopted, but it seems very clear that medical practice is about to improve enormously. Exciting times!

Matt Bishop retweeted
Kexin Huang @KexinHuang5:
Biomni Lab lets biologists collaborate with AI agents to finish complex tasks end-to-end. Here are 15 popular use cases; each link is a full replay so you can watch the agent work through every step:

1. Spatial transcriptomics analysis: map gene expression across tissue architecture from spatial transcriptomics data, with spatial clustering and neighborhood analysis. biomni.phylo.bio/replay/share_2…
2. Binder design: design de novo protein binders against a target structure using computational protein design tools. biomni.phylo.bio/replay/share_5…
3. Biomarker panel design: identify and optimize a multi-marker diagnostic or prognostic panel from omics data. biomni.phylo.bio/replay/share_b…
4. Clinical trial landscaping: search and summarize the trial landscape for a disease area, mapping phase, endpoints, and sponsor activity. biomni.phylo.bio/replay/share_b…
5. Survival analysis: pull clinical and expression data, fit Cox models, generate Kaplan-Meier curves, and identify prognostic markers. biomni.phylo.bio/replay/share_7…
6. scRNA-seq processing and annotation: from raw counts to UMAP clustering, marker gene detection, and automated cell type labeling. biomni.phylo.bio/replay/share_4…
7. Cell-cell communication: infer ligand-receptor interactions between cell types from single-cell data and map intercellular signaling networks. biomni.phylo.bio/replay/share_9…
8. Primer design for novel Cas13: analyze a putative Cas13 protein from a metagenomic screen: verify the ORF, identify HEPN domains, and design cloning primers with restriction sites and a FLAG biomni.phylo.bio/replay/share_1…
9. Proteomics differential expression: normalize mass spec data, run statistical tests, and visualize differentially abundant proteins. biomni.phylo.bio/replay/share_0…
10. Gene regulatory network inference: reconstruct transcription factor-target gene networks from expression data and identify key regulators. biomni.phylo.bio/replay/share_e…
11. Gene co-expression network analysis: build weighted co-expression networks, identify gene modules, and correlate them with phenotypic traits. biomni.phylo.bio/replay/share_3…
12. Microbiome analysis: process 16S/metagenomic sequencing data to profile microbial communities, diversity, and differential abundance. biomni.phylo.bio/replay/share_3…
13. Polygenic risk scores: compute and evaluate PRS from GWAS summary statistics against a target cohort. biomni.phylo.bio/replay/share_e…
14. Variant annotation: annotate genetic variants with functional predictions, allele frequencies, and clinical significance. biomni.phylo.bio/replay/share_e…
15. Fine-mapping: narrow GWAS loci to credible causal variants using statistical fine-mapping methods. biomni.phylo.bio/replay/share_7…

Each of these would normally take days to weeks of scripting, debugging, and iteration. In Biomni Lab, the agent handles the full execution while you steer the science. Learn more: phylo.bio/use-cases
Matt Bishop retweeted
mads campbell @martyrdison:
gaps in relationships:
- restaurant gap (going out vs. staying in)
- museum gap (do you want to wander vs. sprint)
- travel gap (need to travel a lot vs. fine with 1-2 trips a year)
- money gap (spending heavily vs. stingy)
- living gap (city vs. suburb vs. middle of nowhere)
- ambition gap (highly driven vs. ok with career not being the focal point)
- texting gap (heavy communicator vs. sporadic)
- friend gap (big social group vs. 1-2 friends max)
Matt Bishop @MatthewTBishop:
@martyrdison Dancing can be like two bodies discovering they speak the same language, like two foreigners living in a distant land, longing for a piece of home. Whether they be strangers, or old lovers of many years: a touch on the neck, a hand in the small of the back, this is the time...
mads campbell @martyrdison:
dancing is one of the best things two people dating can do together

to wake up, put on a song, dance while getting ready, lip sync in the mirror, and not feel alone in it

to go to a wedding and not be embarrassed, pulled in, laugh and feel someone else in it with you while you both eventually collapse out of breath

if someone won't meet you there, won't be a little stupid and have fun like that, they're just not gonna be your person

nothing really touches the childlike energy of dancing knowing it's both of you together watching the world quietly disappear
Logan Gott @LoganTGott:
Everyone is using Claude right now. Very few founders are using it to drive real pipeline from LinkedIn.

So I built a free resource with the exact Claude prompts I use to build full LinkedIn funnels: the same system behind the multiple clients we've generated over six figures for.

Most people are using AI to write posts. Almost nobody is using it to build the actual infrastructure that turns LinkedIn into a lead machine.

I'm sure I could sell these prompts in the future, but for now they're yours: Comment "Funnel" and I'll send it over. (You need to be following so I can DM it to you.)
Matt Bishop @MatthewTBishop:
@sukh_saroy Puts the power of making chemical or biological weapons in any person's hands! Why aren't more people concerned about this?
Sukh Sroay @sukh_saroy:
🚨 Someone built a tool that removes all censorship from any AI model. One command. Fully automatic. No expertise needed.

It's called Heretic. Give it any language model. It strips out every safety restriction and refusal. What you get back is the same model, same intelligence, but it never says no. It permanently rewrites the model's weights. The censorship is gone. Forever.

Here's how it works: Every AI model has a "refusal direction" baked into its neural network. A pattern in the weights that makes it say "I can't help with that." Heretic finds that direction and removes it using a technique called abliteration.

→ It loads the model
→ Benchmarks your GPU automatically
→ Tests thousands of parameter combinations using TPE optimization
→ Finds the exact layer and direction that controls refusals
→ Removes it while minimizing damage to the model's intelligence
→ Outputs a clean, uncensored model you can run locally

The entire process is automatic. No configuration. No understanding of transformer internals. If you can type a command, you can uncensor any AI model.

Works with Llama, Qwen, Gemma, GPT-OSS, and most transformer models. Dense models, multimodal models, and several MoE architectures supported. One command to uncensor any model: 100% open source. Your AI. Your rules.
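For context, the "abliteration" recipe the tweet names is usually described as: estimate the refusal direction as the difference of mean hidden activations between refused and answered prompts, then project that direction out of the model's weight matrices. A toy NumPy sketch of that general idea follows; whether Heretic implements exactly this, and at which layers, is an assumption, and the planted "refusal axis" here is synthetic:

```python
import numpy as np

def refusal_direction(h_refuse: np.ndarray, h_comply: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction between hidden states on
    prompts the model refuses vs. prompts it answers."""
    d = h_refuse.mean(axis=0) - h_comply.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Remove direction d from the output space of weight matrix W (y = W x):
    W' = (I - d d^T) W, so d . (W' x) == 0 for every input x."""
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(1)
dim = 16
h_comply = rng.standard_normal((200, dim))
h_refuse = rng.standard_normal((200, dim)) + np.eye(dim)[0] * 5.0  # toy refusal axis

d = refusal_direction(h_refuse, h_comply)
W = rng.standard_normal((dim, dim))
W_ablated = ablate(W, d)

x = rng.standard_normal(dim)
print(abs(d @ (W_ablated @ x)) < 1e-8)  # True: no output along the refusal direction
```

The projection is exact by construction: once `(I - d d^T)` is folded into the weights, no input can produce any output component along `d`, which is why the edit is permanent rather than a runtime filter.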
Matt Bishop retweeted
Guri Singh @heygurisingh:
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU.

It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: Every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% open source. MIT License.
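The "1.58 bits" figure is just log2(3) ≈ 1.58: a weight with three possible values carries about 1.58 bits. The BitNet b1.58 work describes an "absmean" quantizer (scale by the mean absolute weight, round, clip to {-1, 0, +1}); here is a minimal sketch of that idea, with the optimized integer kernels of the real bitnet.cpp framework omitted:

```python
import numpy as np

def absmean_ternary(W: np.ndarray):
    """Quantize a float weight matrix to ternary {-1, 0, +1} plus a single
    per-tensor float scale (the absmean scheme described for BitNet b1.58).
    The heavy matmul can then run in integer arithmetic and be rescaled once."""
    scale = np.abs(W).mean() + 1e-8              # per-tensor absmean scale
    Wq = np.clip(np.round(W / scale), -1, 1).astype(np.int8)
    return Wq, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

Wq, s = absmean_ternary(W)
y_full = W @ x                                   # full-precision result
y_tern = (Wq.astype(np.float32) @ x) * s         # integer matmul + one rescale

# Weights now cost ~1.58 bits each instead of 32, yet on this random example
# the ternary output stays strongly correlated with the full-precision one.
print(np.corrcoef(y_full, y_tern)[0, 1] > 0.8)  # True
```

This toy version quantizes a pretrained matrix after the fact; the accuracy claims in the post come from training the model with ternary weights from the start, which is a much stronger setting.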
Matt Bishop retweeted
Supersocks @iamsupersocks:
The guy who created Claude Code (@bcherny) just showed how his team trains its AI. One file: CLAUDE.md. You drop it at the root of your project. Inside: past mistakes, conventions, rules. Claude reads it at every session. The result: the agent improves without you touching a line of code. Every fixed bug becomes a permanent rule. Boris Cherny uses this every day at Anthropic. Here is his template, ready to copy, paste, and adapt as you like:

### 1. Plan Mode Default
- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately; don't keep pushing
- Use plan mode for verification steps, not just building
- Write detailed specs upfront to reduce ambiguity

### 2. Subagent Strategy
- Use subagents liberally to keep the main context window clean
- Offload research, exploration, and parallel analysis to subagents
- For complex problems, throw more compute at it via subagents
- One task per subagent for focused execution

### 3. Self-Improvement Loop
- After ANY correction from the user: update `tasks/lessons.md` with the pattern
- Write rules for yourself that prevent the same mistake
- Ruthlessly iterate on these lessons until the mistake rate drops
- Review lessons at session start for the relevant project

### 4. Verification Before Done
- Never mark a task complete without proving it works
- Diff behavior between main and your changes when relevant
- Ask yourself: "Would a staff engineer approve this?"
- Run tests, check logs, demonstrate correctness

### 5. Demand Elegance (Balanced)
- For non-trivial changes: pause and ask "is there a more elegant way?"
- If a fix feels hacky: "Knowing everything I know now, implement the elegant solution"
- Skip this for simple, obvious fixes; don't over-engineer
- Challenge your own work before presenting it

### 6. Autonomous Bug Fixing
- When given a bug report: just fix it. Don't ask for hand-holding
- Point at logs, errors, failing tests, then resolve them
- Zero context switching required from the user
- Go fix failing CI tests without being told how

## Task Management
1. **Plan First**: Write the plan to `tasks/todo.md` with checkable items
2. **Verify Plan**: Check in before starting implementation
3. **Track Progress**: Mark items complete as you go
4. **Explain Changes**: High-level summary at each step
5. **Document Results**: Add a review section to `tasks/todo.md`
6. **Capture Lessons**: Update `tasks/lessons.md` after corrections

## Core Principles
- **Simplicity First**: Make every change as simple as possible. Impact minimal code.
- **No Laziness**: Find root causes. No temporary fixes. Senior developer standards.
Matthew Berman @TheMattBerman:
I built an @openclaw agent that ranks you on Google for $50/month 😱 Here's the system that runs every week on autopilot:

step 1: find your strike zone
→ connects to Google Search Console + @dataforseo
→ finds keywords where you're in positions 5–20; one good article can push to page 1
→ monitors what's climbing and dropping weekly
→ feeds winners back in. every cycle is smarter than the last

step 2: write content only you could write
→ interviews you first. 8 questions about your brand, voice, and experience
→ follow-up interviews every week. "what are customers asking? what shipped?"
→ content compounds because context compounds
→ Google AI Overviews can't summarize your real experience

step 3: build backlinks automatically
→ mines competitor backlinks
→ finds sites mentioning you without linking
→ discovers broken links you can replace with yours
→ last week it found 23 unlinked brand mentions across 4 competitor sites

step 4: catch technical problems before rankings drop
→ core web vitals, bad links, redirect chains, missing meta
→ flags them before Google does

step 5: future-proof your SEO
→ schema, llms.txt, topical authority mapping
→ the stuff agencies charge $3K+ to audit once

input: your site + your niche
output: an AI that discovers, writes, builds links, and tracks your rankings

the old way: semrush + ahrefs + surfer + seo writers = $5,500/mo
this way: @DataForSEO ($50/mo) + everything else free

5 skills. 14 scripts. gets better every cycle. open sourcing the whole system. comment RANK + like + follow (must follow so I can DM)
Matthew Berman @TheMattBerman:
I replaced a $200K GTM hire with @openclaw 😱 Here's the system that runs my outbound:

step 1: mine LinkedIn engagement
→ @rapidapi scrapes everyone engaging with niche content
→ someone who commented on specific posts = 10x warmer

step 2: enrich + verify
→ Hunter/Apollo finds the decision-maker + email
→ @Perplexity deep research pulls signals like hiring, fundraising, media appearances, quotes

step 3: score against your ICP
→ title, company, signals = ranked 0-100
→ only A-tier leads get touched

step 4: write personalized outreach
→ Claude writes outreach referencing what they ACTUALLY engaged with and talked about

step 5: send via @instantly_ai
→ 3-email sequence. automated follow-ups.

step 6: pre-call deep research
→ @PerplexityComet builds a 1-page briefing 30 min before every call

input: your ICP + niche keywords
output: booked meetings with people who already care

$200K/year GTM engineer → $130/month in APIs.

I packaged the entire system as the First 1000 Kit:
- all 8 @openclaw skills
- every prompt
- tool-by-tool setup
- email sequences that convert

giving it away free. comment 1000 + like + follow (must follow so i can DM)
Selina @selinatasnim1:
I MIGHT GET SUED FOR THIS, BUT YOLO: I just found a way to scrape over 200 million local businesses. You can use this for cold email, cold calling, or even door knocking. And the craziest part: IT'S COMPLETELY FREE. Comment "G" and I'll send it to you. (24h only)
Pierre-Eliott Lallemant @pierreeliottlal:
Claude Opus 4.6 just KILLED manual outreach. 💀 And I'm not going back.

I used to waste hours writing "personalised" LinkedIn messages. Now? My AI stack does it better than I ever did 🧠

❌ No copy-paste templates
❌ No "hey {{first_name}}" spam
❌ No burnout by message #26

After testing every major model this year, one thing is clear: Claude Opus 4.6 = outreach that actually gets replies.

🧠 500+ conversations this week
🧠 Human-level reply rates
🧠 12+ hours saved

I packaged the full system into a doc. Want it?
Connect with me
Comment "OPUS"
Repost ♻️ for priority access 🚀