Howarrd

9.8K posts

@howarrd_li

Applied Math @Penn | Comments for fun | prev @Bybit_Official @Mirana @Mantle_official Co-founder @DoobroCN | @BowdoinCollege Alum

New York, NY · Joined July 2020
1K Following · 8.9K Followers
Pinned Tweet
Howarrd
Howarrd@howarrd_li·
A quick update on Doodles CN🇨🇳:
✔️ >500 Chinese holders
✔️ >30 Twitter Spaces
✔️ Daily non-stop alpha and WL
✔️ Official Twitter accounts established: @doobroCN and @Doobro_CN
✔️ >4,000 members in Discord (come check it out)
✔️ Active, Funky, and Resilient
✔️ WE ALL LOVE DOODLES
Howarrd@howarrd_li

Not only is @doodles scaling with @jholguin stepping up as CEO, the Doodles Chinese community on WeChat is also scaling (and vibing)! 🇨🇳
✔️ >380 Chinese holders
✔️ 9999+ messages per day
✔️ Alpha channel established
✔️ Friendly, Smart, and Real
✔️ WE ALL LOVE DOODLES

10 replies · 88 reposts · 135 likes · 15.2K views
Howarrd retweeted
Pickle Cat
Pickle Cat@0xPickleCati·
Gonna share my 2026 hedging thesis (long tweet warning). I call it: how to get paid even if crypto bleeds and tech beta starts vomiting into year end.

Strictly my personal opinion. All info below is based on PUBLIC sources. Not financial advice, DYOR.

With crypto potentially facing another 20-40% drawdown into year-end, I'm increasingly convinced that select oil and tanker equities are one of the cleaner hedges right now. AND NO, this isn't another tweet about gambling long or short on crude. The play is shareholder yield: dividends, supplemental dividends, and buybacks, backed by strong free cash flow, manageable leverage, and real asset exposure. Sized right, the basket could return 20-30% cash this year.

My thesis is not "which oil stock does 2x or 5x." It's defensive: these companies are generating exceptional cash in the current freight and energy setup. Many run with low single-digit net debt to EBITDA, and select names can deliver double-digit shareholder yield through 2026 if rates stay firm. That's real cash flow while crypto chops, and honestly I'd rather have that than be all-in on tech growth names that offer zero yield buffer when risk assets correct.

Buying now can still qualify you for upcoming quarterly dividends, but you need to own shares before the official ex-date. Make sure you check share buyback policies too, because that's where the real combo comes from: dividends + buybacks + potential share price gains.

ALSO AN IMPORTANT TAX NOTE everyone should know:
> US taxpayers: want the lower qualified-dividend tax rate instead of getting cooked at ordinary income rates? Usually you need to hold shares unhedged for 61+ days within the 121-day window around the ex-date.
> Non-US investors: normal US dividends can get hit with a 30% withholding tax slap. BUT many tanker names are foreign-domiciled, so the tax haircut can be much lighter. Don't be lazy though, check domicile, broker, and local tax before celebrating.
The near-term dividend window is worth watching, but I'm separating confirmed declarations from forecasted ex-dates.

Confirmed/recent shareholder-return updates:
> ASC announced on April 29 (literally yesterday) that it is doubling its payout ratio to two-thirds of adjusted earnings, effective Q1 2026. Q1 MR spot TCE was around 33.7k/day, and Q2-to-date was around 50k/day. Dividend amount/date still needs official declaration.
> Var Energi (OSL:VAR/VARRY) has a confirmed 300M Q1 2026 distribution payable June 12, with another 300M guided for Q2.
> Eni (E/ENI.MI) confirmed a 2026 dividend of €1.10/share and raised its buyback plan by about 90% to €2.8B.
> TTE raised its first 2026 interim dividend by 5.9% to €0.90/share and doubled Q2 buybacks to $1.5B. Not a May/June capture name, but good shareholder-return ballast.

For the tanker watchlist:
> DHT has one of the cleanest payout formulas: 100% of ordinary net income as quarterly cash dividends. Q1 payout/date still needs declaration.
> TRMD's last official distribution was $0.70/share. Any May dates floating around are watchlist inputs until TORM officially declares.
> FRO paid $1.03/share for Q4, and Q1 looks strong with VLCC days booked around 107.1k/day. But the next dividend is still pending.
> INSW's most recent payout was $2.15/share combined ($0.12 regular + $2.03 supplemental) for Q4 2025. Next payout depends on Q1 results.
> HAFN (product/chemical tankers) raised its latest quarterly dividend to $0.1762/share and is seeking a new 10% buyback mandate at the 2026 AGM. Next payout pending.
> STNG is more buyback + quality product tanker exposure than a huge dividend-capture name.
> NAT has visible variable yield, but I'd treat it as higher risk.
The basket has 4 buckets:
> Variable/formula-based tanker payouts: ASC, DHT, TRMD, HAFN, FRO, INSW, NAT (highest dividend torque in the basket, but also the most variable)
> Product tanker buyback discipline: STNG (still shipping exposure, but more buyback + quality operator than huge dividend capture)
> Big energy shareholder-return ballast: SU, TTE, E/ENI.MI, CNQ, REPYY/REP.MC, OSL:VAR/VARRY (less sexy, but a more grown-up hedge: dividends, buybacks, scale, and balance sheet durability)
> Buyback/growth oil names: VIST, ATH.TO (not dividend names, but buybacks can still create shareholder yield without sending you a cash dividend)

See the table below for the full visual overview with qualification/timing notes on every name (including higher-risk examples like PBR).

IMPORTANT: this is not a free dividend glitch. Stocks often adjust down around the ex-date, sometimes by more than the dividend itself. Variable dividends can disappear if rates collapse. Buybacks only matter if management buys at sane prices.

So, the setup I like: own cash-return machines while the market is still underpricing how long energy cash flow can stay strong.

Why this hedge over the usual alternatives:
> Tech stocks: still risk-on beta, no yield buffer
> Bonds: help in a recession, messy if inflation/oil risk stays sticky
> Cash: safe, but real returns are unexciting
> Long-dated puts: clean hedge, but expensive theta bleed if timing is wrong

The tanker angle is different because strong Q1/Q2 cash flow can come back as dividends, supplemental dividends, and buybacks. (Not fixed, but in the right rate environment, cash returns fast.)

Even if Hormuz reopens tomorrow, the system doesn't reset overnight:
> Inventories still need to rebuild
> Refined products can stay tight
> Trade routes can stay inefficient
> Q1 cash flow already happened
> Q2 rates are the next thing to watch

Crypto for asymmetric growth, oil-linked yield for cash flow ballast.
I don't need every hedge to 5x, sometimes the boring trade just keeps paying you while crypto does whatever crypto does.
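The 61-of-121-day qualified-dividend rule from the tax note above is mechanical enough to sketch. This is my own toy helper, not from the thread: real IRS day-counting conventions (which boundary days count, hedged positions) are more subtle, and none of this is tax advice.

```python
from datetime import date, timedelta

def qualified_dividend_window(ex_date: date):
    """The 121-day window begins 60 days before the ex-dividend date."""
    start = ex_date - timedelta(days=60)
    return start, start + timedelta(days=120)

def is_holding_qualified(buy: date, sell: date, ex_date: date) -> bool:
    """Rough check: were the shares held (unhedged) for more than 60 days
    inside the 121-day window around the ex-date? Boundary-day counting
    is simplified here."""
    win_start, win_end = qualified_dividend_window(ex_date)
    held_start = max(buy, win_start)    # overlap of holding period...
    held_end = min(sell, win_end)       # ...with the 121-day window
    days_held = (held_end - held_start).days
    return days_held > 60               # the "61+ days" in the thread
```

A five-month hold straddling the ex-date passes; a two-week dividend-capture flip does not, which is exactly why the quick-capture trade gets taxed at ordinary rates.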
Pickle Cat tweet media
40 replies · 39 reposts · 262 likes · 123.8K views
Howarrd retweeted
DeepSeek
DeepSeek@deepseek_ai·
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.

🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.

Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!

📄 Tech Report: huggingface.co/deepseek-ai/De…
🤗 Open Weights: huggingface.co/collections/de…

1/n
DeepSeek tweet media
1.6K replies · 7.7K reposts · 44.9K likes · 9.4M views
Howarrd retweeted
How To AI
How To AI@HowToAI_·
Yann LeCun was right the entire time. And generative AI might be a dead end.

For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works.

Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality.

He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space."

But for years, JEPA had a fatal flaw. It suffered from "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads.

Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat. It is forced to understand the physical structure of reality to make its predictions.

The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events.
We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.
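The tweet never shows the regularizer itself, so here is a toy stand-in for the general idea of a Gaussian-shaping penalty: score a batch of latent vectors on how far they drift from zero mean and unit variance per dimension. A collapsed encoder (every input mapped to the same vector) has zero variance and gets a large loss. The function and the exact penalty are my own illustration, not LeWM's actual formula.

```python
def gaussian_reg(latents):
    """Toy anti-collapse regularizer over a batch of latent vectors.

    Penalizes each dimension for nonzero mean and for variance away
    from 1.0, pushing the batch toward a standard-Gaussian shape.
    """
    n, d = len(latents), len(latents[0])
    loss = 0.0
    for j in range(d):
        col = [vec[j] for vec in latents]          # one latent dimension
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        loss += mean ** 2 + (var - 1.0) ** 2       # mean -> 0, var -> 1
    return loss / d
```

A spread-out batch like `[[1.0], [-1.0]]` scores zero; a collapsed batch like `[[0.5], [0.5]]` is heavily penalized, which is the cheating the tweet says the model can no longer do.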
How To AI tweet media
428 replies · 2.1K reposts · 12.2K likes · 1.3M views
Howarrd retweeted
Akshay 🚀
Akshay 🚀@akshay_pachaar·
Claude Code fully dissected!

Researchers from UCL reverse-engineered the leaked Claude source. What they found changes how you should think about agent design.

Only 1.6% of the codebase is AI decision logic. The other 98.4% is operational infrastructure. Permission gates, tool routing, context compaction, recovery logic, session persistence. The model reasons. The harness does everything else.

This is the opposite of what most agent frameworks do today. LangGraph routes model outputs through explicit state machines. Devin bolts heavy planners onto operational scaffolding. Claude Code gives the model maximum decision latitude inside a rich deterministic harness, and invests all its engineering effort in that harness.

The core loop is a simple while-true. Call model, run tools, repeat. But the systems around that loop are where the real design lives:

> A permission system with 7 modes and an ML classifier. Users approve 93% of prompts anyway, so the architecture compensates with automated layers instead of adding more warnings.
> A 5-layer context compaction pipeline. Each layer runs only when cheaper ones fail. Budget reduction, snip, microcompact, context collapse, auto-compact.
> Four extension mechanisms ordered by context cost. Hooks (zero), skills (low), plugins (medium), MCP (high). Each answers a different integration problem.
> Subagents return only summary text to the parent. Their full transcripts live in sidechain files. Agent teams still cost roughly 7x the tokens of a standard session.
> Resume does not restore session-scoped permissions. Trust is re-established every session. That friction is the point.

The bet behind all of this is simple. As frontier models converge on raw coding ability, the quality of the harness becomes the differentiator, not the model.

Paper: Dive into Claude Code (arXiv:2604.14228)

In the next tweet, I've shared an article I wrote on Agent Harness and what every big company is building. Do check.
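The "simple while-true: call model, run tools, repeat" core can be sketched in a few lines. This is a generic illustration of the pattern, not Claude Code's actual source; `call_model`, the tool registry, and the message shapes are all assumptions.

```python
def agent_loop(call_model, tools, user_msg, max_turns=10):
    """Minimal sketch of an agent core loop: call the model, execute
    any tool it requests, feed the result back, repeat until the model
    answers with plain text. Permissions, compaction, and recovery
    would all live in the harness around this loop."""
    history = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        reply = call_model(history)
        if reply.get("tool") is None:              # plain answer: done
            return reply["content"]
        name, args = reply["tool"], reply.get("args", {})
        result = tools[name](**args)               # harness runs the tool
        history.append({"role": "tool", "content": str(result)})
    raise RuntimeError("turn budget exhausted")
```

With a stub model that first requests an `add` tool and then echoes the tool result, the loop terminates after two model calls, which is the whole trick: the loop stays dumb and the model decides when it is finished.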
Akshay 🚀 tweet media
73 replies · 301 reposts · 1.7K likes · 175.7K views
Howarrd retweeted
Ian (伊恩)
Ian (伊恩)@ianneo_ai·
Scary: people have already started batch-distilling other people's brains. In this repo, Buffett, PG, Karpathy, Zhang Yiming, the Selected Works of Mao, and MrBeast have all been broken down into skills. What you get isn't just their opinions, but how they judge, how they break down problems, and how they make decisions. It looks like a reference library, but in practice it's more like having a row of digital strategists at your back. Going forward, the gap between people may come down to how many of these plug-ins are standing behind you 😂
Ian (伊恩) tweet media
120 replies · 762 reposts · 3.4K likes · 304.8K views
Howarrd retweeted
Claude
Claude@claudeai·
We're bringing the advisor strategy to the Claude Platform. Pair Opus as an advisor with Sonnet or Haiku as an executor, and get near Opus-level intelligence in your agents at a fraction of the cost.
Claude tweet media
1K replies · 2.8K reposts · 38.5K likes · 4.7M views
Howarrd retweeted
Cursor
Cursor@cursor_ai·
We’re introducing Cursor 3. It is simpler, more powerful, and built for a world where all code is written by agents, while keeping the depth of a development environment.
657 replies · 872 reposts · 9K likes · 2.4M views
Howarrd retweeted
HankAI
HankAI@hank_aibtc·
Guys, this Claude Code source leak is hilarious. It's basically Anthropic open-sourcing it.

Here's what happened: when they published the npm package, they never filtered the source map files in .npmignore. So a bunch of developers installed the package and found a huge .map file in node_modules containing the complete TypeScript source mapping. One quick restoration later, 1,900+ files are sitting there intact: the terminal CLI architecture, 40+ tools, some 50 commands, all laid bare.

Someone has already packaged it and uploaded it to GitHub. The Bun runtime, how the Anthropic SDK is wired in, how permission control works, the natural-language-to-code flow... all of it is there to dig through. The same thing happened once with an older version, and they've repeated the mistake. Pure amateur hour.

For those of us building AI tools, this is a windfall. If you want to build a similar agentic coding CLI, you can just copy the homework and save yourself a ton of trial and error.

If you want a look, head straight to this repo: github.com/instructkr/cla…

Don't just freeload after you've learned from it: give the original author a star, or tweak it and ship a more fun version of your own. The AI scene loves this kind of accidental open-source spirit.
72 replies · 389 reposts · 2.2K likes · 395K views
Howarrd retweeted
DAN KOE
DAN KOE@thedankoe·
You can learn anything in 2 weeks. You can't master it, obviously, but if you obsess over it, you can become better at it than most people ever will. You'd be surprised how fast your life can change when you understand this.
638 replies · 2.6K reposts · 19.4K likes · 696K views
Howarrd retweeted
Claude
Claude@claudeai·
Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.
2.6K replies · 4.9K reposts · 59.5K likes · 16.1M views
Howarrd retweeted
Boris Cherny
Boris Cherny@bcherny·
I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I'll focus on the ones I use the most. Here goes.
554 replies · 2.5K reposts · 23.2K likes · 3.9M views
Howarrd retweeted
chiefofautism
chiefofautism@chiefofautism·
someone at ANTHROPIC just showed CLAUDE finding ZERO DAY vulnerabilities in a live conference demo. claude found a zero day in Ghost: 50,000 stars on GitHub, never had a critical security vulnerability in its entire history... it found the blind SQL injection in 90 minutes, stole the admin API key, then did the exact same thing to the linux kernel
305 replies · 1.4K reposts · 11.8K likes · 1.9M views
Howarrd retweeted
Sprinter Press Agency
Sprinter Press Agency@SprinterPress·
"The Shocking Speed of China's Scientific Advancement"

The Atlantic notes China's rapid technological and scientific development. For instance, China's spending on research and development has increased from $13 billion in 1991 to over $800 billion annually today, second only to the United States. The country plans to increase its science budget by 7% annually over the next five years, and it is expected that by 2029, China's public spending on scientific research will exceed that of the United States.

According to Professor Caroline Wagner of Ohio State University, in 2023 Chinese scientists published 58,000 of the approximately 190,000 most influential scientific papers in the world, the second-largest contribution after the United States. In scientific research conducted in collaboration between the United States and China, the proportion of leaders affiliated with Chinese institutions has increased from 30% in 2010 to 45% in 2023. It is predicted that China will achieve parity with the United States in this regard no later than 2027 or 2028.

The magazine notes that China has made significant progress in applied sciences, catching up with or surpassing the United States in the development and production of advanced batteries, electric vehicles, and solar cells, taking a leading position in key technologies of the 21st century.
Sprinter Press Agency tweet media
8 replies · 23 reposts · 108 likes · 68K views
Howarrd retweeted
Priyanka Vergadia
Priyanka Vergadia@pvergadia·
💀 Fine-tuning is DEAD.

A new paper, Agentic Context Engineering (ACE): a self-improving AI that learns entirely from execution feedback. 87% reduction in adaptation latency. Open-source model. Beats proprietary enterprise agents.

→ Generator does the work
→ Reflector analyzes every failure
→ Curator applies surgical "delta updates", one rule at a time
→ Zero human labels. Zero retraining. Zero fine-tuning bills.

You no longer pay to retrain a model every time your use case shifts. The prompt becomes a living playbook that rewrites itself.

Tested on the AppWorld leaderboard. Matched the most expensive enterprise agents. On the hardest benchmarks, it beat them.

I think the economics of AI just changed.
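The Generator → Reflector → Curator cycle can be sketched as one round of playbook editing. Everything here (the function names, representing the playbook as a list of rules, the one-rule delta) is my own minimal illustration of the described pattern, not the paper's code.

```python
def ace_round(playbook, task, generator, reflector):
    """One sketch round of Generator -> Reflector -> Curator.

    Run the task with the current playbook; on failure, distill one
    rule from the trace and append it (a 'delta update') instead of
    rewriting the whole context. No labels, no retraining.
    """
    ok, trace = generator(playbook, task)      # Generator does the work
    if ok:
        return playbook                        # nothing to learn this round
    new_rule = reflector(trace)                # Reflector analyzes the failure
    if new_rule and new_rule not in playbook:  # Curator: surgical, idempotent
        playbook = playbook + [new_rule]
    return playbook
```

Starting from an empty playbook, a failed run adds exactly one rule, and a later run that succeeds with that rule leaves the playbook untouched, which is the "living playbook" behavior the tweet describes.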
Priyanka Vergadia tweet media
43 replies · 86 reposts · 496 likes · 33.6K views
Howarrd retweeted
Jianyang Gao
Jianyang Gao@gaoj0017·
The TurboQuant paper (ICLR 2026) contains serious issues in how it describes RaBitQ, including incorrect technical claims and misleading theory/experiment comparisons. We flagged these issues to the authors before submission. They acknowledged them, but chose not to fix them. The paper was later accepted and widely promoted by Google, reaching tens of millions of views. We’re speaking up now because once a misleading narrative spreads, it becomes much harder to correct. We’ve written a public comment on openreview (openreview.net/forum?id=tO3AS…). We would greatly appreciate your attention and help in sharing it.
Google Research@GoogleResearch

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI

98 replies · 978 reposts · 6.5K likes · 1M views