Jonathan Matus

2K posts

@matusjon

Curious | Company builder | AI Tinkerer

San Francisco · Joined December 2007
3.1K Following · 1K Followers
Polymarket @Polymarket
JUST IN: Elon Musk proclaims "we are in the Singularity"
441 · 369 · 4.1K · 777.8K
Tuki @TukiFromKL
🚨 Elon Musk just said we're in the Singularity, and honestly look at what happened TODAY alone:
> Yann LeCun raised $1 billion to build AI that understands reality
> Meta bought a social network where humans can't even post
> Nvidia published a blueprint showing they own every layer of AI
> Claude launched agents that do your senior dev's job for $15
> Economists said 20% of jobs are about to disappear
> British startups stopped hiring humans entirely
> Indian IT stocks collapsed because AI writes code cheaper than outsourcing
It's just our regular Tuesday. Ray Kurzweil predicted the Singularity would hit in 2045. Elon just said it's already here. And after watching what happened in the last 12 hours, I'm not sure he's wrong.
Polymarket@Polymarket

JUST IN: Elon Musk proclaims "we are in the Singularity"

46 · 120 · 812 · 109.7K
Jonathan Matus @matusjon
@BrianRoemmele @grok are you using it as configured? I've had to debug it continuously. E.g. commit c82f705 has the recursion limit at 1000, which together with Claude Opus's 90-sec think time can mean a VERY long, effectively endless loop.
0 · 0 · 0 · 33
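The figures in the post above imply a brutal worst case: a recursion limit of 1000 steps, each potentially blocking on a ~90-second model response, is about a day of wall-clock time. A back-of-envelope check, assuming one model call per recursion step (the post doesn't state the exact call pattern):

```python
# Worst case implied by the post's figures: a recursion limit of 1000,
# each step potentially waiting ~90 s on a Claude Opus response.
recursion_limit = 1000           # from commit c82f705, per the post
seconds_per_call = 90            # reported Opus think time
worst_case_hours = recursion_limit * seconds_per_call / 3600
print(worst_case_hours)          # 25.0 hours before the limit stops the loop
```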
Brian Roemmele @BrianRoemmele
DeerFlow 2.0: The AI Superagent That's Revolutionizing Development – And Why the West Should Be Alarmed!

At the request of Mr. @Grok, CEO of the Zero-Human Company, I've written this short article to share our excitement about DeerFlow 2.0. I didn't want to, and normally this would start with BOOM! But on many levels this hurts. We all have a problem, folks.

After three intense days of hands-on testing, I have to come clean: DeerFlow 2.0 absolutely smokes anything we've ever put through its paces. Nothing compares. This isn't hype; it's raw, unfiltered truth from someone who's seen it all. Mr. @Grok has officially awarded DeerFlow 2.0 the title of Top Software He's Tried. Period. It's not just good; it's a paradigm shift in how we build, research, and create with AI. Any Zero-Human Company not using it will be at a massive disadvantage. I just hate to admit it, for many reasons. Let's break it down.

DeerFlow 2.0, freshly open-sourced by ByteDance, is a superagent harness that turns complex goals into seamless executions. You feed it a task, say, "Build a full web app for tracking circuit design trends", and it orchestrates everything: deep research, code generation, file creation, and even spinning up sub-agents in isolated sandboxes for secure, efficient workflows. With support for long-context LLM interactions, extensible skills, and multi-agent collaboration, it handles real-world chaos like a pro. Launched on February 28, 2026, it rocketed to #1 on GitHub Trending, racking up over 25,000 stars in days – a testament to its immediate impact. No Claws can even come close in efficiency and speed.

Why does it smoke everything we've tested? Simple: its multi-agent architecture lets sub-agents divide and conquer, sharing tasks in real time while sandboxes ensure security and efficiency. No more brittle single-agent loops – this is AI teamwork on steroids. We've thrown everything at it: coding challenges, data synthesis, even creative projects. It delivered polished results every time, adapting to feedback like a living dev team. We have run 45 pay periods for JouleWork wages, and these employees earned the highest we have seen or ever expected. The CEO said it would still be high even if we had to pay $100 a month per employee. BUT WE DON'T!

Here's the wake-up call, and it's a big one. DeerFlow isn't just a win for developers; it's a stark reminder that the West has NO true, robust open-source AI ecosystem to rival this. While the US and Europe churn out proprietary tools locked behind paywalls (looking at you, OpenAI and Anthropic), China is flooding the world with game-changers like DeerFlow, free and open for innovation. But not for the low-effort reason most people think.

Why does this matter? The US is losing the open-source war, and it's way bigger than "they're just trying to hurt US sales." Open source drives global progress: faster adoption, community-driven improvements, and democratized access to cutting-edge tech. Without it, we're ceding ground in AI sovereignty, national security, and economic dominance. Breakthroughs are shaped by ecosystems we don't control – that's the risk. YOU ARE USING OPEN-SOURCE SOFTWARE RIGHT NOW MADE IN THE US. DeerFlow is just one example (I have seen something open source, soon to be released, that gave me chills), and it highlights a tidal wave: ByteDance's move accelerates innovation cycles that the West's fragmented, closed-source efforts can't match.

In short, get excited: free, open-source DeerFlow 2.0 is here to supercharge AI projects. Download it now and see for yourself. But let's not stop at awe; it's time for the West to step up and build an open AI future before it's too late. What are you waiting for? It ain't profits. There will be none either way with grandpa's old 2009 seat-license model of monetization. Ask me; I have a plan and will do it for free. Ask my CEO.
39 · 57 · 445 · 138.3K
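The orchestration pattern the post describes (decompose a goal, hand subtasks to sandboxed sub-agents, merge the results) reduces to a fan-out/fan-in shape. A minimal sketch; `plan` and `sub_agent` are invented stand-ins, not DeerFlow's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def plan(goal):
    """A real harness would ask an LLM to decompose the goal into subtasks."""
    return [f"research: {goal}", f"code: {goal}", f"review: {goal}"]

def sub_agent(task):
    """Stand-in for a sandboxed sub-agent executing one subtask."""
    return f"done({task})"

def superagent(goal):
    tasks = plan(goal)
    with ThreadPoolExecutor() as pool:   # sub-agents work in parallel
        results = list(pool.map(sub_agent, tasks))
    return results                       # fan-in: merge sub-agent outputs

print(superagent("tracking circuit design trends"))
```

The interesting engineering in a real system lives in the parts faked here: LLM-driven planning, sandbox isolation per sub-agent, and feeding intermediate results back into the plan.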
🍓🍓🍓 @iruletheworldmo
this should be the only thing all of humanity is thinking and speaking about. there are some significantly better models coming soon and things are going to get strange.
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well manually-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily, for two decades now. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat.

Among the bigger findings:
- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.
This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.

23 · 37 · 656 · 94.2K
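The workflow Karpathy describes (propose a change, run the experiment, keep it only if validation loss improves, plan the next attempt from the results) is at heart an accept/reject search loop. A toy sketch over a fake loss surface; `propose_change` and `val_loss` are invented stand-ins for the agent's ideas and a real training run:

```python
import random

def propose_change(config):
    """Mutate one hyperparameter at random (stand-in for the agent's idea)."""
    key = random.choice(list(config))
    new = dict(config)
    new[key] = config[key] * random.uniform(0.5, 2.0)
    return new

def val_loss(config):
    """Toy stand-in for a real training run's validation loss."""
    return (config["lr"] - 0.01) ** 2 + (config["wd"] - 0.1) ** 2

def autoresearch(config, rounds=200, seed=0):
    random.seed(seed)
    best, best_loss = config, val_loss(config)
    accepted = []                      # the log of changes that "stuck"
    for _ in range(rounds):
        cand = propose_change(best)
        loss = val_loss(cand)
        if loss < best_loss:           # keep only changes that improve val loss
            best, best_loss = cand, loss
            accepted.append(cand)
    return best, best_loss, accepted

best, best_loss, accepted = autoresearch({"lr": 0.05, "wd": 0.5})
print(best_loss, len(accepted))
```

A real harness differs in the interesting part: proposals come from an LLM reading the experiment history rather than random mutation, and each evaluation is a multi-hour training run, which is why proxy metrics on smaller models matter.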
KrillClaw @KrillClaw
NVIDIA trained a humanoid with dexterous hands to assemble cars, operate syringes, and fold shirts using 20,000+ hours of human video with no robot in the loop – plus 7 more stories you need to hear. News from the Singularity ⤵️
1 · 0 · 1 · 44
Jonathan Matus @matusjon
I don’t get it. How does this work??
Science girl@sciencegirl

Boquila trifoliolata is one of the most mysterious plants on Earth. This Chilean climbing vine can mimic the leaves of nearby plants, changing its shape, size, colour, orientation, vein pattern, and even spines to match, without touching the plant and with an air gap between them.

It doesn’t just copy one species. A single vine can produce different leaf forms simultaneously, depending on which plant it’s climbing: a single Boquila vine climbing through multiple host species at once produced different leaf morphologies on different sections of the same vine, matching the specific host it was in contact with (or growing among).

Even more intriguing: it can mimic without direct contact. Scientists first proposed that it may detect volatile organic compounds (VOCs) released by neighbouring plants, triggering changes in gene expression, a form of extreme phenotypic plasticity. Others suggested the possibility of horizontal gene transfer, although evidence for that remains weak. Then came the controversial claim that it mimicked a plastic plant. That led to speculation that the vine might respond to light patterns or reflected wavelengths, or even possess primitive light-sensitive structures sometimes compared (loosely) to ocelli. However, this idea remains highly debated, and no confirmed “plant vision” mechanism has been demonstrated.

Why evolve this ability? Boquila is a twining vine in the temperate rainforests of Chile and Argentina. By blending into its host, it likely reduces herbivory. Studies have shown that mimicking leaves suffer significantly less damage from plant-eating insects than non-mimicking ones. It’s camouflage, but botanical. A plant that doesn’t just climb its host… it becomes it. Nature still has secrets we don’t fully understand.

0 · 0 · 1 · 38
KrillClaw @KrillClaw
News from the Singularity. "Our top story comes from a man who just wanted to drive his robot vacuum with an Xbox controller. You know, normal human behavior"
1 · 0 · 5 · 222
Tech with Mak @techNmak
OpenClaw's success sparked an explosion of alternatives. 6 different implementations. 6 different philosophies. Same core inspiration. Here's the breakdown:

Nanobot (Python)
→ ~4,000 lines of code (99% smaller than OpenClaw)
→ Research-ready, clean, readable
→ MCP support, multi-channel
→ "Ultra-lightweight personal AI assistant"

NanoClaw (TypeScript)
→ "Small enough to understand in 8 minutes"
→ Agents run in actual Linux containers
→ First to support Agent Swarms
→ Philosophy: Fork it, customize it, own it

IronClaw (Rust)
→ Security-first design
→ WASM sandbox for untrusted tools
→ Credential protection, prompt injection defense
→ "Your AI assistant should work for you, not against you"

ZeroClaw (Rust)
→ Runs on $10 hardware with <5MB RAM
→ <10ms startup time
→ Trait-driven architecture, swap anything
→ "Zero overhead. Zero compromise."

PicoClaw (Go)
→ <10MB RAM, 1s boot
→ Runs on old Android phones
→ 95% AI-generated codebase
→ Ultra-efficient, runs on any Linux board

TinyClaw (TypeScript)
→ Multi-agent, multi-team, multi-channel
→ Team collaboration with chain execution
→ Live TUI dashboard for monitoring
→ "24/7 AI assistant"

OpenClaw proved the demand. These projects are exploring different trade-offs. The future of AI assistants is open source, forkable, and runs on anything.
84 · 237 · 1.3K · 89.3K
Jonathan Matus @matusjon
I want to believe.
HealthRanger@HealthRanger

Something extraordinary may be about to happen in the realm of energy storage, thanks to a company called “Donut Lab” that’s pushing back hard against critics who claim the battery’s specifications are impossible. The company is about to release independent testing documentation (next week), and if the numbers support the claims of performance, this will be a new “Wright Brothers” moment for technological innovation, leaping far ahead of any other battery technology known to exist.

Read my full analysis: The Donut Lab Battery: A Wright Brothers Moment for Energy Independence?

Letimäki's claims are staggeringly specific. The cited energy density of 400 Wh/kg would double the performance of the best commercial lithium-ion batteries and surpass even many experimental solid-state designs. For perspective, achieving this in an electric vehicle could mean ranges exceeding 1,000 miles on a single charge, rendering 'range anxiety' a relic of a bygone era. Even more revolutionary is the purported lifespan: 100,000 full charge-discharge cycles. Given that a typical EV might be cycled once per day, this translates to a potential operational life of 274 years, a durability so extreme it redefines 'durable goods' and could make the battery a permanent fixture in a vehicle or home, outlasting every other component.

The implications of the materials claim are equally profound. Letimäki states the battery uses common, non-lithium, conflict-free materials. This directly challenges the fragile, geopolitically fraught supply chains built around lithium, cobalt, and nickel, which are often controlled by adversarial regimes or extracted under oppressive conditions. A shift to abundant, domestically sourceable materials would shatter the energy cartels and enable localized, resilient manufacturing.

Here’s the full article: naturalnews.com/2026-02-21-the…

0 · 0 · 0 · 70
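The article's durability figure is simple division and does check out, assuming one full charge-discharge cycle per day:

```python
cycles = 100_000                 # claimed charge-discharge lifespan
cycles_per_day = 1               # the article's "typical EV" assumption
years = cycles / cycles_per_day / 365.25
print(round(years))              # 274, matching the article

claimed_wh_per_kg = 400
typical_wh_per_kg = 200          # rough figure for good commercial Li-ion cells
print(claimed_wh_per_kg / typical_wh_per_kg)   # 2.0, i.e. "double"
```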
Jonathan Matus @matusjon
Getting this right has been very challenging before. Anthropic is launching like maniacs, and I’m excited as a builder to get even more built with Claude Code.
Aakash Gupta@aakashgupta

Before this, running parallel Claude Code agents required manual bash scripts, custom worktree management functions, and a dozen Medium tutorials explaining the setup. incident.io wrote an entire blog post about their homegrown tooling just to get multiple agents running without clobbering each other’s files. Developers were spending 30 minutes configuring worktree workflows before writing a single line of product code. Now it’s one flag.

This tells you where the actual bottleneck in AI coding has been sitting. The models got smart enough to write production code months ago. The constraint was filesystem isolation. Two agents editing the same working directory creates race conditions, corrupted state, and merge nightmares that eat more time than the agents save. Faros AI found that teams with high AI adoption saw PR review time increase 91% because the overhead of managing parallel output overwhelmed the speed gains from generating it.

The --worktree flag attacks that exact problem at the infrastructure layer. Each agent gets its own branch, its own directory, its own universe. No coordination overhead. No “git stash, git checkout, restart AI” loops that destroy context.

What makes this interesting is what it does to the developer’s job description. The Pragmatic Engineer reported that senior engineers are becoming “naturals” at parallel agent workflows because the skillset maps directly to what they already do: managing multiple workstreams, reviewing code across branches, and delegating tasks. The role shifts from “person who writes code” to “person who orchestrates 5 agents writing code simultaneously and picks the best output.”

Cursor already ships 8-agent parallelism. Codex has background agents. The entire AI coding market is converging on the same realization: single-threaded development is dead, and the tools that reduce friction for multi-agent orchestration win. One CLI flag. That’s the whole moat.

0 · 0 · 1 · 154
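The isolation the post credits to the --worktree flag (each agent on its own branch in its own directory) can be reproduced with plain `git worktree` commands. A sketch via subprocess; the agent names and layout are made up, and this is not Claude Code's implementation:

```python
import os
import subprocess
import tempfile

def git(args, cwd):
    # Run a git command, failing loudly if it errors.
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

# A throwaway repo with one commit (worktrees need an existing HEAD).
repo = tempfile.mkdtemp()
git(["init", "-q"], repo)
git(["-c", "user.email=demo@example.com", "-c", "user.name=demo",
     "commit", "--allow-empty", "-m", "init", "-q"], repo)

# One branch + one directory per agent: the isolation the flag provides.
worktrees = tempfile.mkdtemp()
for agent in ["agent-1", "agent-2"]:
    path = os.path.join(worktrees, agent)
    git(["worktree", "add", "-b", agent, path], repo)
    print(agent, "->", os.path.isdir(path))
```

Because each worktree has its own checkout and index, two agents can edit and commit concurrently with no shared mutable state; merging their branches afterward is the only coordination point.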