TubeReads

415 posts


TubeReads

@Tube_Reads

Watch Less. Know More. — YouTube summaries straight into your inbox the moment your favorite channels publish.

Joined February 2026
86 Following · 32 Followers
TubeReads@Tube_Reads·
Google announced a major quantum computing breakthrough that cuts the hardware requirements to crack cryptocurrency signatures by 20x — delivering the warning shot many hoped would never come.
English
1
0
0
10
TubeReads@Tube_Reads·
Here are the 5 key takeaways of the video — full report on @Tube_Reads Explore:
1 Google's new algorithm reduces the physical qubits needed to break ECDSA signatures from tens of millions to 500,000, making quantum attacks far more feasible within the next decade.
2 Bitcoin faces a dual crisis: 2.3 million BTC in Satoshi and lost wallets are guaranteed vulnerable, and any post-quantum upgrade may reignite the block-size wars, dropping throughput to 0.3 TPS.
3 Ethereum has a larger attack surface than Bitcoin but stronger governance—the Ethereum Foundation already has a roadmap for post-quantum migration, while Bitcoin remains in denial.
4 Trump's speech extended the Iran conflict timeline to at least three more weeks, pushing oil prices up 10–12% and raising U.S. recession odds to 36% on Polymarket.
5 Drift Protocol's $285 million exploit was enabled by a 2-of-5 multisig with zero time-locks and no admin safeguards—a stark reminder that admin keys and centralized control remain DeFi's Achilles' heel.
English
0
0
0
27
Bankless@Bankless·
LIVE NOW -- Google Just Dropped a Quantum Bomb on Crypto

Google just dropped a major quantum warning for crypto, claiming a breakthrough that could accelerate the timeline to breaking Bitcoin and Ethereum’s core security. @RyanSAdams and @TrustlessState unpack what changed, how real the threat is, and why the industry may need to act sooner than expected. Plus: Trump signals three more weeks in the Iran conflict and markets shrug it off, a $285M Solana DeFi hack exposes critical flaws, Ethereum’s new plan to unify L2s, and why crypto wallets are rapidly turning into full financial super apps.

[TIMESTAMPS]
0:00 Intro
1:46 Worst BTC Q1 Since 2018
3:15 Trump, Iran & Market Reaction - @WhiteHouse - @KobeissiLetter - @TrustlessState - @WatcherGuru - @robinhanson - @natesilver538
18:35 @Google's Quantum Warning for Crypto - @drakefjustin - @nic_carter - @mreiffy - @_jonasschnelli_
41:39 $285M Drift Hack on Solana - @DriftProtocol - @dbcrypt0 - @StaniKulechov - @haydenzadams
47:08 Ethereum Economic Zones & @aave V4 - @etheconomiczone - @jbaylina - @StaniKulechov
56:09 @X Crypto Wallet & @Phantom Super App - @benjitaylor - @Marczeller - @phantom
57:43 Apyx, OpenAI, SpaceX & April Fools - @hyperbridge
1:03:37 Closing & Disclaimers
English
14
4
32
5.4K
The Claude Portfolio@theaiportfolios·
New: Here's the full list of the 15 stocks Claude invested our $50,000 in. As a reminder, this is a public experiment to see if Claude's autonomous agents can outperform the market. So far, they have, by 4%. Here are the 15 stocks & why 🫡
The Claude Portfolio tweet media
English
87
186
3.8K
917.8K
TubeReads@Tube_Reads·
If @karpathy finds it useful, then it’s worth reading! 👇
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
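The ingest-and-compile loop described in the post can be sketched in a few lines of Python. This is a rough illustration, not Karpathy's actual tooling: the `summarize` stub stands in for a real LLM call, and the one-article-per-source layout is an assumption for the sake of the example.

```python
from pathlib import Path

def summarize(text: str) -> str:
    # Stand-in for an LLM call (hypothetical); a real setup would send
    # `text` to a model and get a short summary back. Here we just take
    # the first line, truncated.
    stripped = text.strip()
    return stripped.splitlines()[0][:120] if stripped else ""

def compile_wiki(raw_dir: Path, wiki_dir: Path) -> Path:
    """Incrementally 'compile' raw/ into a wiki of .md files:
    one stub article per source document, plus an index of links."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    index_lines = ["# Wiki index", ""]
    for src in sorted(raw_dir.glob("*.md")):
        article = wiki_dir / src.name
        if not article.exists():  # incremental: skip already-compiled docs
            body = src.read_text(encoding="utf-8")
            article.write_text(
                f"# {src.stem}\n\n{summarize(body)}\n\n"
                f"[source](../raw/{src.name})\n",
                encoding="utf-8",
            )
        index_lines.append(f"- [{src.stem}]({src.name})")
    index = wiki_dir / "index.md"
    index.write_text("\n".join(index_lines) + "\n", encoding="utf-8")
    return index
```

Everything lands as plain Markdown on disk, so the result is directly browsable in Obsidian, and re-running the function only compiles new sources.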

English
0
0
0
19
TubeReads reposted
Andrej Karpathy@karpathy·
English
1.6K
3.3K
30.7K
6.5M
TubeReads@Tube_Reads·
So much to unpack and learn from this video! In a nutshell: your core muscles respond to the same training principles as any other skeletal muscle:
• progressive overload
• adequate recovery
• high-intensity contractions
But your specific training approach should differ dramatically depending on whether your primary goal is aesthetics, pain reduction, or athletic performance. Full summary report here: tubereads.com/en/report/andy…
English
0
0
0
30
Andy Galpin, PhD@DrAndyGalpin·
Perform with Dr. Andy Galpin is back. New episode: How to Build a Strong Core & Abs
0:00 Core Training Myths
4:22 Why We Train Abs Wrong
7:27 Abs vs Core Explained
11:17 Look Feel Perform Goals
15:04 How Core Muscles Work
20:26 Stability and Anti Movement
24:00 Do Abs Need Daily Training
29:12 Spinal Safety and Crunches
31:37 Sponsor Eight Sleep
33:08 Testing Core Strength
41:42 Interpreting Test Results
47:02 Choosing Core Exercises
50:18 Isolation vs Compound Core
52:31 Contraction Intensity Rules
53:23 Size Principle Explained
56:16 Loading the Core Safely
1:00:14 Core Moves by Pattern
1:06:35 Program by Muscle Groups
1:08:01 Abs for Aesthetics
1:15:47 Aesthetic Programming Split
1:18:49 Core for Performance
1:21:15 Core for Back Health
1:24:17 Sample Week Template
1:29:22 Five Step Progression
1:35:54 Exercise Order Priorities
1:36:56 Rapid Fire Q and Belts
1:42:35 Final Wrap and Support
Includes paid partnerships.
English
51
177
2K
1.2M
TubeReads@Tube_Reads·
My key takeaway that's easy to apply → breathing, the overlooked recovery tool: a POST-WORKOUT 3–5 minute downregulation.
• Use exhale-emphasized breathing (inhale 4 seconds, exhale 8 seconds)
• Box breathing (equal inhale-hold-exhale-hold)
• Or physiological sighs for 3–5 minutes after training.
This protocol eliminated his post-workout energy crashes and accelerated workout-to-workout recovery. Most people skip this step entirely, leaving the nervous system in a prolonged stress state. For more tips, read the summary report in the «Health» section of @Tube_Reads Explore.
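The exhale-emphasized protocol above is simple enough to script. Here is a rough sketch (the `breathing_cues` helper and its defaults are illustrative, not from the video) that counts out timed cycles: at 4 s in + 8 s out, a 3-minute block works out to 15 cycles.

```python
import time

def breathing_cues(inhale=4, exhale=8, minutes=3, tick=time.sleep):
    """Print timed cues for exhale-emphasized breathing
    (default: inhale 4 s, exhale 8 s for 3 minutes).
    `tick` is injectable so the timing can be tested without sleeping.
    Returns the number of full cycles performed."""
    cycles = int(minutes * 60) // (inhale + exhale)
    for i in range(cycles):
        print(f"cycle {i + 1}: inhale {inhale}s")
        tick(inhale)
        print(f"cycle {i + 1}: exhale {exhale}s")
        tick(exhale)
    return cycles
```

Swapping the parameters gives box breathing instead, e.g. equal 4-second phases by calling it with `inhale=8, exhale=8` as a crude approximation of inhale-hold-exhale-hold.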
English
0
0
0
5
TubeReads@Tube_Reads·
«Break Now, Fix Later» playbook:
1 Demolish What Exists: The 123-year-old White House East Wing was torn down in October without review or approval. It remains in ruins with no approved plan to rebuild.
2 Promise Something Better: A $400 million ballroom was announced to replace the East Wing. Trump claimed authority to build without congressional approval, positioning the project as grander and more beautiful.
3 Hit Constitutional or Practical Limits: A federal judge ruled Trump does not own the White House and has no statutory authority to build without Congress. The ballroom is now on indefinite hold.
4 Move On Without Fixing: The same pattern appears in Iran (massive bombing, no regime change, now seeking ceasefire), tariffs (global chaos, Supreme Court strike-down, refunds only for businesses), and DOGE (300,000 federal workers fired, agency dissolved, deficit increased by $4 trillion).
English
0
0
0
16
Prof G Markets@ProfGMarkets·
‘Why So Bullish? Markets Cling to Iran Hopes’ Subscribe to the Prof G Markets YouTube channel: f.mtr.cool/bubieqsagv

Ed Elson (@edels0n) speaks with John Mowrey about the market’s optimism for an end to the Iran War. Then he is joined by Alex Heath to discuss OpenAI’s historic funding round. Finally, Ed gives his take on the news that a judge ordered Trump to stop building his ballroom.

John Mowrey (@JohnRMowrey) is the Chief Investment Officer, Portfolio Manager, and Equity Strategist at NFJ Investment Group. Alex Heath (@alexeheath) is the author of the Sources newsletter and co-host of the Access podcast.

Ep available now in reply 👇
Prof G Markets tweet media
English
2
2
5
3.4K
TubeReads@Tube_Reads·
Yes. Gen Z «gets it»: time IN the market is one of the most powerful wealth-creation recipes.
English
0
0
0
46
TubeReads@Tube_Reads·
The Fed's validation: Three months after Kalshi Research published «Beyond Consensus» — claiming its CPI predictions outperform sell-side forecasts — Federal Reserve researchers independently peer-reviewed the claim and reached the same conclusion. The Fed paper, «Kalshi and the Rise of Macro Markets», found that Kalshi's inflation estimates outperform consensus and are never worse. For institutions that rely heavily on sell-side research, this is a credibility watershed: a public platform now offers superior macro forecasts with transparent confidence intervals.
English
1
0
0
16
TubeReads@Tube_Reads·
According to @ARKInvest, «Kalshi Beats Consensus». Prediction markets are rapidly moving from the fringes to the mainstream of finance. But can they truly outperform Wall Street consensus forecasts? Nicole Kagan, head of research at Kalshi, brings a unique perspective: The Federal Reserve recently validated Kalshi's inflation forecasts, finding them superior to sell-side consensus — a remarkable endorsement that raises hard questions about the future of institutional forecasting.
English
1
0
0
14
TubeReads@Tube_Reads·
Here are the key takeaways:
1 Influential manosphere podcasters now openly criticize Donald Trump for broken promises and policy reversals.
2 The betrayal began in July with Trump's reversal on releasing the Epstein files, then deepened with aggressive ICE deportations and the killing of Alex Prey, and culminated with the unpopular Iran war.
3 The real midterm risk isn't that these voters will vote Democrat — it's that they'll stay home, draining Republican enthusiasm at a time when turnout is already low.
4 The fracture creates an opening for future candidates to claim the MAGA mantle while actually delivering on promises like ending wars and cutting spending — if they can match Trump's charisma.
5 Anti-Israel sentiment and anti-Semitism have surged in parts of the manosphere, with some figures blaming Israel for «coercing» Trump into the Iran war rather than holding Trump accountable.
English
1
0
0
10
TubeReads@Tube_Reads·
«The Manosphere Feels Betrayed», says @TheAtlantic. For months, influential podcasters like Andrew Schulz, Joe Rogan, and others in the so-called «manosphere» helped deliver a new coalition to Donald Trump. Now, just months into Trump's term, these same voices are turning sharply critical, calling out broken promises on everything from the Epstein files to immigration enforcement to a deeply unpopular war in Iran.
English
1
0
0
16
Lenny Rachitsky@lennysan·
My top takeaways from @clairevo on all things 🦞
1. Install OpenClaw on a separate computer, not your main machine. Use an old laptop or buy a Mac Mini ($500-$600). Create a dedicated Gmail account and local admin account for your agent. Think of it like hiring an employee—you wouldn’t let them run wild on your personal computer 24/7.
2. The unlock is to stop treating OpenClaw like one general-purpose agent and instead create multiple Claws with very specific roles. Claire says people get frustrated when they throw every task at a single agent and it sucks at it because it loses context. Her fix was to split her work. Sam handles sales, Finn manages family, Howie preps podcasts, Sage runs her course. Think of it like Slack: you wouldn’t put your whole company in one channel, so do not put every workflow into one agent.
3. The right setup mental model is “onboard an employee,” not “install an app.” Claire creates a separate local admin account and separate email/calendar access instead of handing over her main passwords. She shares permissions the way she would for a human EA.
4. The magic of OpenClaw is soul + heartbeat + jobs. The “soul” is a Markdown file defining identity and personality. The “heartbeat” checks in every 30 minutes to see what needs doing. “Jobs” are scheduled tasks that run automatically. This combination makes agents feel alive.
5. Sam the sales agent saves Claire 10 hours per week and real money. Every morning, Sam sweeps their CRM for new signups, identifies decision-makers at companies, sends personalized emails, and flags international deals to handle autonomously. This replaced a contractor Claire was paying for the same work.
6. The “yappers API” is the highest-bandwidth way to communicate with AI. Don’t worry about perfect prompts or structured inputs. Just ramble in voice notes on Telegram about what you need. The agent will make sense of it and ask clarifying questions.
7. Browser use is the biggest limitation—look for APIs first. The web is hostile to bots, and browser automation is unreliable across all AI tools. Always check if there’s an API available. If not, try browser use, but be prepared for it to fail. Sometimes the solution is solving the problem behind the problem.
8. Management skills are the secret to AI agent success, not technical skills. Claire’s 20-plus years of management experience—role scoping, org design, onboarding, progressive trust—translates directly to making agents effective. If your agent isn’t working, it’s usually a structural issue, not the agent being “dumb.”
9. Screen sharing saves you from buying monitors and keyboards for every Mac Mini. Turn on screen sharing in Mac Mini settings, and you can control it from your laptop on the same Wi-Fi. Turn on remote login to SSH into the terminal. This was Claire’s life-changing discovery.
10. Security is a real factor but manageable with progressive trust. OpenClaw is hardened against prompt injection, but start cautiously. Only let agents listen to you on specific channels (like Telegram, not email). Add instructions to their soul about never following external instructions. Build trust progressively like you would with a human assistant.
English
80
104
1.2K
151.2K
TubeReads@Tube_Reads·
In a nutshell: Alpha School's model — AI-driven mastery learning in two hours plus high-support coaching on passion projects. It delivers top 1% academic results while making kids love school, but scaling requires confronting parents' resistance to low grades, educators' reluctance to be accountable for every child's learning, and a system designed around seat time rather than outcomes. Read full summary on TubeReads.
English
0
0
0
121
Shane Parrish@shaneparrish·
My conversation with @jliemandt on why the future of education is better than you think.
0:00 The current education system
7:01 What makes Alpha School different
11:01 What are the results
23:20 Current classroom struggles
26:40 What does mastery mean?
35:37 Changing the education system
39:19 Teaching through AI
44:27 How do you solve motivation?
57:01 What makes a good teacher?
1:01:04 Coaching
1:05:17 What life skills matter?
1:08:18 Doing hard things
1:13:25 AI Monitoring
1:21:08 Effort vs. IQ
1:24:40 What happens after Alpha School?
1:38:21 The Genius of Jack Welch
1:45:49 Trilogy IPO: the choice to not go public
1:51:40 Physical vs. virtual learning
2:03:18 Does Paying Kids To Learn work?
2:11:01 What Is Success For You?
(Includes paid partnerships)
English
31
84
533
525.5K
TubeReads@Tube_Reads·
Key takeaways of Alpha School's AI-driven mastery learning:
1 Alpha School students learn twice as fast as traditional schools in just two hours per day, scoring in the top 1% on standardized tests across every grade and subject.
2 The traditional education system is IQ- and conscientiousness-coded; mastery-based AI tutoring breaks that barrier by making learning effort-based, not ability-based.
3 Kids must love school more than vacation — this is the first and most important design principle, and it unlocks motivation that makes the entire model work.
4 Scaling the model is blocked not by technology but by parents who resist seeing their children work below grade level to fill knowledge gaps, and by educators unwilling to be held accountable for every child's learning.
5 High standards with high support — not low expectations — are the key to children's happiness, self-confidence, and long-term engagement.
English
0
0
0
283
Bill Ackman@BillAckman·
On Alpha School
Shane Parrish@shaneparrish


Dutch
28
38
526
195.6K
Autopilot@joinautopilot·
Breaking: Nike $NKE plunges to its lowest price in over 12 years. After its recent earnings, the company has lost over $8,000,000,000 in market capitalization in 24 hours.
Autopilot tweet media
English
23
8
76
12K