Readwise

585.8K posts


@readwise

Save your best highlights from Kindle, Twitter, Pocket, Instapaper, iBooks, and 30+ others. Then revisit, search, organize, and export them seamlessly.

Joined October 2017
3.2K Following · 212.9K Followers
Pinned Tweet
Readwise @readwise
🐥 Want to start saving tweets and threads? ⮕ readwise.io/twitter_start
🖍 Want to save, review, and sync your highlights from books/articles/twitter/anywhere? ⮕ readwise.io
📚 Looking for our new app? ⮕ @ReadwiseReader
388 replies · 168 reposts · 873 likes · 0 views

Brian Feroldi @BrianFeroldi
Warren Buffett's Investing Checklist
2 replies · 12 reposts · 48 likes · 7.2K views

Brivael - FR @BrivaelFr
Milton Friedman (Nobel laureate in economics) said something 50 years ago that is even more true today. And almost nobody understands it. 🧵

He was asked: "Without drug regulation, people could die from taking dangerous products. Don't you find that serious?"

His answer is one of the most brilliant logical reversals in the history of economics.

Yes, said Friedman. An unregulated drug can kill people. That is visible. It's in the newspapers. It's a scandal. Everyone sees it.

But what nobody sees are the people who die because a drug that could have saved them was held up for 10 years by the regulatory process. Nobody counts that death. Nobody puts it on the front page. Nobody knows that person's name. Because they died from the absence of something that never existed.

This is the fundamental asymmetry of regulation. The regulator can make two types of error.

Error 1: approve a dangerous drug. Result: public scandal, lawsuits, the regulator loses their job.

Error 2: block a drug that would have saved lives. Result: nothing. Nobody knows. Nobody protests. The silent dead have no spokesperson.

So the rational regulator optimizes to avoid Error 1. Always. They add more studies. More phases. More committees. More delays. Each extra layer of "safety" protects the regulator at the expense of the patients who are waiting.

Friedman estimated that the FDA had probably killed more people by delaying good drugs than it had saved by blocking bad ones. That is impossible to prove precisely. But the logic is airtight.

A concrete example: the beta-blocker propranolol was available in Europe years before it was approved in the United States. During those years, Americans were dying of heart attacks that could have been prevented. How many? We will never know.

Because nobody counts the deaths caused by inaction.

The same principle applies everywhere, not just in medicine.

In France, autonomous taxis are blocked by regulation. Every year of delay means road accidents that could have been avoided. But nobody counts those deaths. We only count the first accident involving an autonomous taxi, which will make the front page of every newspaper.

AI in medicine is slowed by approval processes that take years. Diagnoses that an algorithm could make in seconds sit waiting for validation while patients wait months for an appointment.

Nuclear power was blocked for decades by fear. How many people died from the pollution of the coal plants that ran in its place? Nobody counts them.

The pattern is always the same. We see the risk of action. We never see the risk of inaction. And since the risk of inaction is invisible, the regulator always chooses inaction. Because inaction produces no scandal.

Friedman summed it up in one sentence: "The people saved by the FDA are visible. The people who died because of the FDA's delays are invisible. And in a democracy, the visible always beats the invisible."

The next time someone tells you "we need more regulation to protect people," ask one question: how many people die waiting for regulation to allow them to live?

The answer is always larger than we imagine. But nobody calculates it. Because the deaths of inaction have no face.
101 replies · 786 reposts · 2.3K likes · 94.4K views

逸尘 @gengdaJ
A roundup of methods for scraping articles, images, and videos from the major platforms with Claude Code or Openclaw. None of them need your own token, and I went with the safest approach in every case!!!

1. X posts and long-form articles:
Tools: nitter + xcrawl (free). nitter turns the post into a static page, and xcrawl scrapes the data from it.

2. X bookmarks:
Tool: fieldtheory (free)

3. WeChat Official Account articles
A dedicated scraping API for Official Account articles (free):
(1) Get the article link, e.g.: mp.weixin.qq.com/s/CljajqS3x3ET…
(2) URL-encode that link, i.e. convert it into this form: https%3A%2F%2Fmp.weixin.qq.com%2Fs%2FCljajqS3x3ETOe4tPubQzw
(3) Call a third-party scraping API: curl -s "down.mptext.top/api/public/v1/…&format=markdown"

4. Xiaohongshu image-and-text notes
Fetch the page HTML directly → grab the meta tags first → then parse window.INITIAL_STATE → get the structured note data → extract direct image links → run visual analysis on the images if needed (free)

5. Jike articles
1) jina: can't capture engagement data, free
2) camoufox: can't capture engagement data, free
3) xcrawl: can capture engagement data, free
4) curl + parsing: can capture engagement data, free

6. Douyin videos
1. Launch headless Chromium — simulates a real Chrome browser
2. Visit the Douyin page — page.goto(url) loads the page; the browser runs the JS and generates the msToken and X-Bogus signatures, so all the encrypted parameters are handled automatically
3. Intercept the aweme/detail API response — listen for network responses with page.on('response') and grab the JSON body when Douyin's detail API returns
4. Extract the direct video link from the JSON — play_addr.url_list[0] is the watermark-free MP4 link
5. Download it with requests — sending Referer: douyin.com to simulate a browser origin
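The URL-encoding step for the WeChat article link can be reproduced with Python's standard library. A minimal sketch — only the encoding is shown, since the exact down.mptext.top query string is elided in the post:

```python
from urllib.parse import quote

def encode_article_url(url: str) -> str:
    """Percent-encode a full URL (including ':' and '/') so it can be
    passed as a single query parameter to a third-party fetch API."""
    return quote(url, safe="")

# The article link from the example above, fully encoded:
print(encode_article_url("https://mp.weixin.qq.com/s/CljajqS3x3ETOe4tPubQzw"))
# → https%3A%2F%2Fmp.weixin.qq.com%2Fs%2FCljajqS3x3ETOe4tPubQzw
```

`safe=""` is the important detail: by default `quote` leaves `/` unescaped, which would break the link when embedded as a query parameter.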
逸尘 @gengdaJ

Finally solved the problem of agents not being able to scrape the full body text + engagement data of X long-form articles!

I work in social media now, and I often come across viral content on X, Douyin, and Xiaohongshu that I don't have time to read carefully, so my reflex is to bookmark it. But let me ask everyone here: whose bookmarks folder isn't gathering dust...

So, to keep knowledge flowing, I built custom scraping skills for each platform and installed them on OpenClaw. But X long-form articles always had a problem: whatever method I tried, either the article came back incomplete or the engagement data was missing.

I solved it with the nitter + xcrawl approach. The idea is that nitter converts the X article body into a static page, and xcrawl then scrapes that static page using optimized behavior simulation, returning the full text plus the engagement data.

As for nitter, everyone's little lobster (OpenClaw) should already know how to handle it; the xcrawl link is xcrawl.com/?keyword=h2lca…, with a very generous free tier — more than enough for scraping X long-form articles day to day. Go try it!
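The nitter half of the trick is just a URL rewrite: the same user/status path served from a nitter instance comes back as static HTML. A minimal sketch — the instance URL and the example status are placeholders, not from the post:

```python
import re

def nitter_url(x_url: str, instance: str = "https://nitter.net") -> str:
    """Rewrite an x.com / twitter.com status URL to the same path on a
    nitter instance, which serves the post as a static page."""
    return re.sub(r"https?://(www\.)?(x|twitter)\.com", instance, x_url, count=1)

print(nitter_url("https://x.com/example_user/status/123456789"))
# → https://nitter.net/example_user/status/123456789
```

The static page that comes back can then be fetched by any scraper without executing X's JavaScript.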

4 replies · 36 reposts · 120 likes · 11K views

Brad @BradleyKellard
In 1977, Viktor Frankl revealed how some people survive unimaginable suffering & others don't. It'll change how you see pain.

His ideas:
- You always have one last freedom
- Despair = suffering without meaning
- Why purpose keeps people alive

15 lessons on meaning & suffering:
13 replies · 198 reposts · 671 likes · 57.8K views

Teknium (e/λ) @Teknium
Hermes Agent now comes packaged with Karpathy's LLM-Wiki for creating knowledgebases and research vaults with Obsidian! In just a short bit of time Hermes created a large body of research work from studying the web, code, and our papers to create this knowledge base around all of Nous' projects. Just `hermes update` and type /llm-wiki in a new message or session to begin :) github.com/NousResearch/h…
163 replies · 336 reposts · 3.3K likes · 367.3K views

Muhammad Ayan @socialwithaayan
🚨 BREAKING: Someone just built the exact tool Andrej Karpathy said someone should build. 48 hours after Karpathy posted his LLM Knowledge Bases workflow, this showed up on GitHub. It's called Graphify.

One command. Any folder. Full knowledge graph. Point it at any folder. Run /graphify inside Claude Code. Walk away.

Here is what comes out the other side:
-> A navigable knowledge graph of everything in that folder
-> An Obsidian vault with backlinked articles
-> A wiki that starts at index.md and maps every concept cluster
-> Plain-English Q&A over your entire codebase or research folder

You can ask it things like: "What calls this function?" "What connects these two concepts?" "What are the most important nodes in this project?"

No vector database. No setup. No config files.

The token-efficiency number is what got me: 71.5x fewer tokens per query compared to reading raw files. That is not a small improvement. That is a completely different paradigm for how AI agents reason over large codebases.

What it supports:
-> Code in 13 programming languages
-> PDFs
-> Images via Claude Vision
-> Markdown files

Install in one line: pip install graphify && graphify install
Then type /graphify in Claude Code and point it at anything.

Karpathy asked. Someone delivered in 48 hours. That is the pace of 2026. Open source. Free.
242 replies · 1.3K reposts · 11.7K likes · 811.5K views

Michiel Bakker @bakkermichiel
🚨📄 New preprint! We find the “boiling the frog” equivalent of AI use. In a series of RCTs, we show that after just 10 min of AI assistance people perform worse and give up more often than those who never used AI. w Grace Liu @brianchristian Mira Dumbalska and Rachit Dubey 🧵
11 replies · 106 reposts · 352 likes · 57.5K views
Farza 🇵🇰🇺🇸
This really exploded. You can download Clicky below. I'm seeing people already use it for stuff like: - Learning Blender - Getting live coaching in a chess game - Getting design feedback in Lovable Enjoy! github.com/farzaa/clicky/…
Farza 🇵🇰🇺🇸 @FarzaTV

I built this thing called Clicky. It's an AI teacher that lives as a buddy next to your cursor. It can see your screen, talk to you, and even point at stuff, kinda like having a real teacher next to you. I've been using it the past few days to learn Davinci Resolve, 10/10.

74 replies · 52 reposts · 1.3K likes · 95.1K views

Robert Greene @RobertGreene
Immerse yourself in the world or the industry that you wish to master.
118 replies · 516 reposts · 3.9K likes · 70.5K views

Kurt Mahlburg @k_mahlburg
Finland tracked every gender-referred adolescent in the country for up to 25 years. Their psychiatric needs didn't improve after 'gender reassignment'. They surged. A landmark peer-reviewed study just dropped. Here's what it found. 🧵
403 replies · 7.9K reposts · 26.6K likes · 1.7M views

Venu @Venu_7_
I ran Peter Lynch's 6-rule stock screen across the entire US market. Out of thousands of stocks, only 16 passed every filter. The top name trades at 4x forward earnings with a PEG of 0.04 - you either own it or you've been watching it. $MU Here are all 16 names - some will surprise you. Follow this thread 🧵
24 replies · 60 reposts · 576 likes · 132.7K views

Gomovies @Gomovies_x
The Best Action Movies of 2026 (So far) 🎬 🔥
1. Sinners (2025)
2. The Naked Gun (2025)
3. One Battle After Another (2025)
4. Reflections in a Dead Diamond (2025)
5. Bullet Train Explosion (2025)
6. Diablo (2025)
7. Predator: Killer of Killers (2025)
8. Ballerina: From the World of John Wick (2025)
9. Nobody 2 (2025)
10. Heads of State (2025)
5 replies · 91 reposts · 377 likes · 23K views

Sukh Sroay @sukh_saroy
OpenAI and Anthropic engineers leaked a prompting technique that separates beginners from experts. It's called "Socratic prompting" and it's insanely simple. Instead of telling the AI what to do, you ask it questions. My output quality: 6.2/10 → 9.1/10 Here's how it works:
14 replies · 34 reposts · 370 likes · 104.8K views

Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian.

You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
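Karpathy doesn't share his vibe-coded search engine, but the shape of such a tool is simple. A minimal sketch of a term-frequency ranker over the wiki's .md files — every name here is an assumption of mine, not his code:

```python
import re
import sys
from collections import Counter
from pathlib import Path

def search_wiki(root: str, query: str, top_k: int = 5) -> list[str]:
    """Rank the wiki's .md files by raw term-frequency overlap with the query."""
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    scored = []
    for path in Path(root).rglob("*.md"):
        # Count every word in the file once, then sum the counts of the query terms.
        words = Counter(re.findall(r"\w+", path.read_text(encoding="utf-8").lower()))
        score = sum(words[t] for t in terms)
        if score:
            scored.append((score, str(path)))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for _, p in scored[:top_k]]

if __name__ == "__main__":
    # Usage: python search_wiki.py <wiki-dir> <query terms...>
    for hit in search_wiki(sys.argv[1], " ".join(sys.argv[2:])):
        print(hit)
```

Handing a tool like this to the agent over CLI matches the pattern in the thread: the LLM shells out to get candidate files, then reads only those into context.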
2.6K replies · 6.2K reposts · 53K likes · 18.3M views