jason
@jasonth0

monitoring the sitch

SF · Joined November 2012
1.3K Following · 1.6K Followers
16.2K posts
jason
jason@jasonth0·
imagine inventing the engine then putting a billion on every car company being wrong. gotta respect that level of conviction
Ricardo@Ric_RTP

The man who INVENTED modern AI just made a billion dollar bet that ChatGPT, Claude, and every AI company on earth is building the wrong technology.

Yann LeCun won the Turing Award in 2018 for creating the neural networks that made AI possible. He spent a decade running AI research at Meta. Oversaw the creation of Llama and PyTorch, the tools that half the AI industry runs on.

Then he quit. And raised $1.03 billion in a seed round. The LARGEST seed round in European history. $3.5 billion valuation before generating a single dollar of revenue.

Bezos wrote the check. So did Nvidia. Samsung. Toyota. Temasek. Eric Schmidt. Mark Cuban. Tim Berners-Lee (the guy who invented the web).

His new company is called AMI Labs. And it's built on one thesis: every AI company spending billions on large language models is wasting its money.

ChatGPT, Claude, Gemini, Grok. They all work the same way. They predict the next word in a sequence. See "the cat sat on the" and predict "mat." Scale that to trillions of words and you get something that sounds intelligent.

But LeCun says it doesn't UNDERSTAND anything. It can't reason. It can't plan. It can't predict what happens when you push a glass off a table. A two year old can do that. GPT-5 cannot. That's why AI hallucinates. It doesn't have a model of how the world actually works. It just predicts words.

His solution? Something called JEPA. Instead of predicting words, it learns how the PHYSICAL WORLD works. Abstract representations of reality. Not language but physics.

Think about what that means. Current AI can write your emails. LeCun's AI could design a car, run a factory, operate a robot, or diagnose a patient without hallucinating and killing someone. The CEO of AMI said it perfectly: "Factories, hospitals, and robots need AI that grasps reality. Predicting tokens doesn't cut it."

And here's what's really crazy to me... LeCun isn't some outsider throwing rocks. He literally built the foundations that ChatGPT runs on. He knows exactly how these systems work because he helped create them. And after watching the entire industry sprint in one direction for three years, he raised a billion dollars to run the OPPOSITE way.

No product. No revenue. No timeline. Just pure research. He told investors it could take YEARS to produce anything commercial. But they funded it anyway in just four months.

Meanwhile OpenAI just raised $120 billion and still can't stop their models from making things up. Anthropic is building AI so dangerous they're afraid to release it. Google is burning billions trying to catch up. And the guy who started it all says they're all solving the wrong problem.

Two Turing Award winners raised $2 billion in three weeks betting AGAINST the entire LLM approach. LeCun at AMI. Fei-Fei Li at World Labs. The smartest people in AI are quietly building the exit from the technology everyone else is betting their future on.

Either they're wrong and the trillion dollar LLM industry keeps printing. Or they're right and every AI company on earth just built on a foundation that's about to crack.

jason
jason@jasonth0·
openai shipping a claude code plugin is the most expensive compliment in tech right now
宝玉@dotey

OpenAI has officially released a Claude Code plugin, codex-plugin-cc, that lets developers call Codex directly from inside Claude Code for code review, adversarial review, or even to hand an entire task over to Codex to execute.

What makes this interesting: OpenAI is voluntarily shipping its own tool into competitor Anthropic's territory. Claude Code has its own plugin ecosystem, and OpenAI is now entering it in an official capacity, packaging Codex as an "on-call second opinion" inside the Claude Code workflow.

The plugin provides three core commands: /codex:review runs a standard read-only code review; /codex:adversarial-review runs an adversarial review that deliberately challenges the hidden assumptions of the current implementation, suited to high-risk operations like migrations, auth changes, and infrastructure scripts; /codex:rescue hands the task over to Codex entirely, for when a thread is stuck or you need a different agent to start over.

All three commands support background execution, managed via /codex:status and /codex:result. There is also an optional review-gate feature that keeps Claude Code from exiting until the Codex review finishes, though Srivastav warns this can send the two agents into a call loop that burns through usage quota fast.

Technically, the plugin relays through the local Codex CLI and app server, reusing existing auth, config, and MCP settings without spinning up an extra runtime. Prerequisites: a ChatGPT subscription (including the free tier) or an OpenAI API key, plus Node.js 18.18 or later.
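The commands described above can be sketched as a session transcript. This is a hypothetical illustration based only on the command names given in the post; actual plugin behavior, install flow, and output will differ:

```
# Inside a Claude Code session with codex-plugin-cc installed:
/codex:review               # standard read-only code review
/codex:adversarial-review   # challenge the implementation's hidden assumptions
/codex:rescue               # hand the whole task over to Codex

# Background-run management:
/codex:status               # check on a running Codex task
/codex:result               # fetch the finished review or result
```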

jason
jason@jasonth0·
someone fine-tuned qwen 27B on distilled opus 4.6 data and it beats sonnet on SWE-bench. runs locally on 16gb. anthropic is accidentally the best open-source training data provider in AI
ℏεsam@Hesamation

this model is an agentic treasure. it has been #1 trending for 3 weeks on @huggingface as mentioned by @danielhanchen. it's Qwen 3.5 27B fine-tuned on Opus 4.6 distilled data and beats Sonnet 4.5 on SWE-bench verified and more. "Runs locally on 16GB in 4-bit or 32GB in 8-bit."

jason
jason@jasonth0·
someone had to reverse engineer their own claude binary to find cache bugs potentially burning tokens at 20x. max plan subscribers doing free QA work for anthropic in 2026
Alex Volkov@altryne

PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.

Some insane Redditor reverse engineered the Claude binaries with MITM to find 2 bugs that could have caused cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota. If you're using your API keys with Claude this is even worse. This is also likely why the problem isn't uniform: while over 500 folks replied to me and said "me too", many (including me) didn't see this issue.

There are 2 issues compounded here (per the Redditor; I haven't independently confirmed this):

1st bug he found is a string-replacement bug in bun that invalidates cache. Apparently this has to do with the custom @bunjavascript binary that ships with the standalone Claude CLI. The workaround there is to use Claude with `npx @anthropic-ai/claude-code`.

2nd bug is worse: he claims that --resume always breaks cache. And there doesn't seem to be a workaround there, except pinning to a very old version (that will miss out on tons of features). This bug is also documented on GitHub and confirmed by other folks.

I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$. They actively benefit from everyone hitting as many cached tokens as possible, so this is absolutely a great find, and it does align with my thoughts earlier. The very sudden spike in reporting, and the non-uniform nature (some folks are completely fine, some are hitting quotas after saying "hey"), definitely points to a bug.

cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.
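The mitigation described in the thread boils down to two moves. A minimal sketch, assuming the unconfirmed Reddit report is accurate (the npx invocation is the one quoted above; everything else here is just restating the thread's advice as commands):

```
# Bug 1 workaround: skip the standalone binary (and its bun
# string-replacement bug) by launching Claude Code through npx:
npx @anthropic-ai/claude-code

# Bug 2 has no known workaround: avoid `--resume` while the
# cache-invalidation issue is open, since resuming reportedly
# always breaks the cache and burns uncached-token quota.
```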

jason
jason@jasonth0·
been asking everyone who says they vibe coded their startup to demo it for me. zero for twelve. starting to think vibe coding is just vibe posting