David Hendrickson

12.7K posts

@TeksEdge

CEO & Founder | PhD | Startup Advisor | @Columbia | Author, Generative Software Engineering https://t.co/9oqvHuTX5f | 🔔 Follow for AI & Vibe Coding Tips 👇

PNW · Joined July 2023
524 Following · 3.8K Followers
Pinned Tweet
David Hendrickson@TeksEdge·
🚨 Want to parse complex PDFs with SOTA accuracy, 100% locally? 📄🔍
At just 0.9B parameters, you can drop GLM-OCR straight into LM Studio and run it on almost any machine! 🥔
🧠 0.9B total parameters
💾 Runs on < 1.5GB VRAM (or ~1GB quantized!)
💸 Zero API costs
🔒 Total data privacy
Desktop document AI is officially here. 💻⚡
53 replies · 220 reposts · 2.4K likes · 321.3K views
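A minimal sketch of what "drop it into LM Studio" can look like in practice, assuming LM Studio's local OpenAI-compatible server on its default port 1234. The model identifier `glm-ocr`, the prompt, and the `page.png` path are illustrative assumptions, not the tweet's exact setup:

```python
import base64
import json

# LM Studio serves an OpenAI-compatible API locally; port 1234 is its default.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_ocr_request(image_bytes: bytes, model: str = "glm-ocr") -> dict:
    """Build a chat-completion payload asking the model to transcribe one page image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe this page to Markdown."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        "temperature": 0.0,  # deterministic output suits OCR
    }

def run_ocr(image_path: str) -> str:
    """Send the request to a running LM Studio server and return the transcription."""
    import urllib.request
    with open(image_path, "rb") as f:
        payload = build_ocr_request(f.read())
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Usage (requires a loaded model in LM Studio and a rendered PDF page):
#   text = run_ocr("page.png")
```

Everything stays on localhost, which is where the "zero API costs, total data privacy" claims come from.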
David Hendrickson@TeksEdge·
🌍 Open source AI isn’t catching up; it’s already the default engine of the egalitarian AI ecosystem. New eye-opening report from @huggingface 👀.
👥 13M users
🤖 2M+ public models
🗂️ 500K+ datasets
🇨🇳 China now leads HF downloads.
🧬 Qwen spawned 113K+ derivatives.
⚡ Small models are winning on real-world adoption. 🦞 Clawdbots
🏰 The AI moat is shrinking.
1 reply · 0 reposts · 0 likes · 135 views
Alex Finn@AlexFinn·
OpenClaw and Hermes agent on the right, Crimson Desert on the left.
Multiple agents autonomously building businesses while I play the sickest video game ever made.
This is the future. Your AI employees go out and create value while you enjoy the finer things in life.
I love 2026.
91 replies · 5 reposts · 352 likes · 17.5K views
David Hendrickson@TeksEdge·
@MatthewBerman I guess the question is why? It's strange that they are copying an open-source project. It's like if Microsoft used Linux to build Windows. I guess companies still think they would rather use a sandboxed agent than a free one.
0 replies · 0 reposts · 0 likes · 31 views
David Hendrickson@TeksEdge·
Alibaba's closed-source model Qwen3.5 Max is challenging @Google's previous model, Gemini 3.0, for position on @arena's leaderboard.
Arena.ai@arena

Qwen 3.5 Max Preview has landed in top 10 for Arena Expert and top 15 for Text Arena. It shows particular strength in Math. Highlights:
- #3 Math
- #10 Expert
- #15 Text Arena
- Top 20 for Writing, Literature & Language; Life, Physical, & Social Science; Entertainment, Sports, & Media; and Medicine & Healthcare
Congrats to the @Alibaba_Qwen team for this new milestone!

0 replies · 0 reposts · 0 likes · 190 views
David Hendrickson@TeksEdge·
Holy 💩 check out the @openclaw 📊 daily PinchBench benchmark leaderboard! As more runs are amassed, averages go up or down. Qwen3.5 went up and is currently the leader 🎯. My Clawdbots are running very well on local Qwen3.5-27B unsloth Q4.
📊 Qwen 3.5 27B: 90.0%
📊 Qwen 3.5 397B-A17B: 89.1%
📊 Claude Sonnet 4.5: 88.2%
2 replies · 0 reposts · 11 likes · 627 views
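The "averages go up or down as runs amass" point is just an incremental mean; a toy illustration (the scores are made up, not PinchBench data):

```python
def running_mean(scores):
    """Yield the benchmark average after each new run is folded in."""
    total = 0.0
    for n, score in enumerate(scores, start=1):
        total += score
        yield total / n

# A model can look stronger or weaker early on, then settle as runs accumulate:
history = list(running_mean([92.0, 88.0, 90.0]))  # [92.0, 90.0, 90.0]
```

With few runs, one good or bad run moves the average a lot, which is why leaderboard positions swap day to day.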
David Hendrickson@TeksEdge·
@essobi My assumption too, but I think the challenge goes deeper. So far LM Studio doesn't support it with Qwen3.5
1 reply · 0 reposts · 0 likes · 268 views
Kyle 'esSOBi' Stone
@TeksEdge Pretty sure the 3.5 architecture lets you use the same model as the draft model, so it’s already built in and aligned at training time.
1 reply · 0 reposts · 1 like · 285 views
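For readers unfamiliar with the draft-model idea in this thread: it is speculative decoding, where a cheap draft proposes a few tokens ahead and the full model verifies them. A toy sketch with stand-in greedy "models" passed in as functions; real implementations (e.g. in llama.cpp) compare probability distributions, while this only checks token agreement:

```python
def speculative_step(draft_next, target_next, context, k=4):
    """One speculative-decoding step: the draft proposes k tokens, the
    target keeps the longest prefix it agrees with, then contributes one
    token of its own so the step always makes progress."""
    proposal, ctx = [], list(context)
    for _ in range(k):
        tok = draft_next(ctx)        # cheap model guesses ahead
        proposal.append(tok)
        ctx.append(tok)
    accepted, ctx = [], list(context)
    for tok in proposal:
        if target_next(ctx) != tok:  # first disagreement: stop accepting
            break
        accepted.append(tok)
        ctx.append(tok)
    accepted.append(target_next(ctx))  # target's own next token
    return accepted

# Identical draft and target agree on everything, so all k tokens land:
count_up = lambda c: c[-1] + 1
full_accept = speculative_step(count_up, count_up, [0])  # [1, 2, 3, 4, 5]
```

This is why using the same model (or the same training run) as its own draft is attractive: agreement is high by construction, so most proposed tokens are accepted.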
David Hendrickson@TeksEdge·
❔ Will Getty Images be the next Chegg? Casualties of GenAI 🤷‍♂️
Getty Images is on the brink of bankruptcy.
💸 $1.3B debt
📉 Years of declining profits & negative cash flow
🖼️ Devaluing image library
🤖 GenAI as the final nail in the coffin?
0 replies · 0 reposts · 0 likes · 135 views
TestingCatalog News 🗞@testingcatalog·
BREAKING 🚨: Cursor released Composer 2, a cheap coding model with frontier performance (on par with GPT-5.4 and Opus 4.6 on CursorBench). "It's frontier-level at coding and priced at $0.50/M input and $2.50/M output tokens, making it a new, optimal combination of intelligence and cost."
leo 🐾@synthwavedd

Cursor has released Composer 2, a frontier-level coding model with higher token efficiency, and a faster default variant. Looks like the rumor going around about it beating Opus 4.6 was in regard to Terminal-Bench 2.0, the only benchmark score they released aside from their own internal bench and SWE-Bench Multilingual. cursor.com/blog/composer-2

19 replies · 22 reposts · 316 likes · 31.2K views
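At the quoted $0.50/M input and $2.50/M output, session cost is simple arithmetic; a quick sanity check (the session sizes are made-up examples, not Cursor data):

```python
# Quoted Composer 2 pricing, in dollars per 1M tokens.
INPUT_PER_M, OUTPUT_PER_M = 0.50, 2.50

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one session at the quoted per-token rates."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# A heavy coding session: 2M tokens in, 400K tokens out.
cost = session_cost(2_000_000, 400_000)  # 2 * 0.50 + 0.4 * 2.50 = $2.00
```

Agentic coding is input-heavy (the model re-reads large contexts constantly), so the low input rate is doing most of the work in the "optimal combination of intelligence and cost" claim.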
David Hendrickson@TeksEdge·
Just saw this GitHub project 🛡️ OpenViking is skyrocketing 📈. This could be the best memory manager for @openclaw! 👀

✅ OpenViking (volcengine/OpenViking) is an open-source project released by ByteDance’s cloud division, Volcengine. It's exploding in popularity and could become the standard for agentic memory. The community is already building direct plugins to integrate it with OpenClaw. Here is what I found about OpenViking as the ultimate memory manager for autonomous agents. 👇

🦞 What is OpenViking?
Currently, most AI agents (like OpenClaw) use traditional RAG for memory. Traditional RAG dumps all your files, code, and memories into a massive, flat pool of vector embeddings. This is inefficient, expensive, sometimes slow, and can cause the AI to hallucinate or lose context. OpenViking replaces this. The authors call this new memory a "Context Database" that treats AI memory like a computer file system. Instead of a flat pool of data, all of an agent's memories, resources, and skills are organized into a clean, hierarchical folder structure using a custom protocol.

🚀 Why is this useful for OpenClaw?

🗂️ The Virtual File System Paradigm
Instead of inefficiently searching a massive database, OpenClaw can now navigate its own memory exactly like a human navigates a Mac or PC. It can use terminal-like commands to ls (list contents), find (search), and tree (view folder structures) inside its own brain. If it needs a specific project file, it knows exactly which folder to look in (e.g., viking://resources/project-context/).

📉 Tiered Context Loading (Massive Token Savings)
Stuffing massive documents into an AI's context window is expensive and slows the agent down. OpenViking solves this with an ingenious L0/L1/L2 tiered loading system:
L0 (Abstract): A tiny 100-token summary of a file.
L1 (Overview): A 2K-token structural overview.
L2 (Detail): The full, massive document.
The agent browses the L0 and L1 summaries first. It only "downloads" the massive L2 file into its context window if it absolutely needs it, slashing token costs and API bills.

🎯 Directory Recursive Retrieval
Traditional vector databases struggle with complex queries because they only search for keyphrases. OpenViking uses a hybrid approach. It first uses semantic search to find the correct folder. Once inside the folder, it drills down recursively into subdirectories to find the exact file. This drastically improves the AI's accuracy and eliminates "lost in the middle" context failures.

🧠 Self-Evolving and Persistent Memory
When you close a normal AI chat, it forgets everything. OpenViking has a built-in memory self-iteration loop. At the end of every OpenClaw session, the system automatically analyzes the task results and updates the agent's persistent memory folders. It remembers your coding preferences, its past mistakes, and how to use specific tools for the next time you turn it on.

👁️ The End of the "Black Box"
Developers hate traditional RAG because when the AI pulls the wrong file, it's impossible to know why. OpenViking makes the agent's memory completely observable. You can view the exact "Retrieval Trajectory" to see which folders the agent clicked on and why it made the decision it did, which I find the most useful feature.

🎯 The Bottom Line
OpenViking is the missing piece of the puzzle for local autonomous AI. By giving OpenClaw a structured, file-based memory system that saves tokens and permanently learns from its mistakes, ByteDance has just given the 🦞 Clawdbots an enterprise-grade brain for free.
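A hypothetical sketch of the L0/L1/L2 tiered-loading idea described above. The `Entry` class, `load_context` function, and word-count "token" estimate are illustrative inventions, not OpenViking's actual API:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    path: str   # e.g. a viking:// resource path
    l0: str     # ~100-token abstract
    l1: str     # ~2K-token structural overview
    l2: str     # full document

def load_context(entries, tier_for, budget_tokens=8000):
    """Assemble context tier by tier: each entry contributes its L0, L1,
    or L2 text (tier chosen by inspecting the cheap L0 abstract), falling
    back to the abstract when the bigger tier would blow the budget."""
    context, spent = [], 0
    for e in entries:
        text = (e.l0, e.l1, e.l2)[tier_for(e)]
        cost = len(text.split())          # crude token estimate
        if spent + cost > budget_tokens:
            text, cost = e.l0, len(e.l0.split())
        context.append(text)
        spent += cost
    return context, spent
```

Here `tier_for` stands in for the agent's own relevance judgment after browsing abstracts; only entries it deems relevant ever pay the full L2 cost, which is where the token savings come from.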
OpenViking@openvikingai

OpenViking has hit GitHub Trending 🏆 10k+ ⭐ in just 1.5 months since open-sourcing! Huge thanks to all contributors, users, and supporters. We’re building solid infra for the Context/Memory layer in the AI era. OpenViking will keep powering @OpenClaw and more Agent projects🚢🦞

24 replies · 56 reposts · 523 likes · 72.3K views
Wire Cat@wire_catt·
@TeksEdge MiniMax m2.7 is not open weight; it’s proprietary
1 reply · 0 reposts · 1 like · 37 views
David Hendrickson@TeksEdge·
📊 Check out the latest PinchBench (Clawdbot Benchmark) leaderboard. Open-source models dominate. New model MiniMax M2.7 is in the top 10. Clawdbots 🦞 rejoice. Low cost 🧠
4 replies · 1 repost · 23 likes · 1.9K views
David Hendrickson@TeksEdge·
Why no Anthropic/Google in this Polymarket for #1 AI model by June 30? Odds favor OpenAI/xAI at 25%+ each. Bet they're expecting current kings (Claude Opus 4.6/Gemini 3.1) to defend, but challengers could steal it. What's your bet? @Xiaomi or @Microsoft?
0 replies · 0 reposts · 0 likes · 158 views
David Hendrickson@TeksEdge·
Claude continues to win the race for 3rd place in AI.
0 replies · 0 reposts · 0 likes · 123 views
Thrawling@thrawling·
@TeksEdge If you’re running Windows on it, swap to Ubuntu, instant 12 t/s improvement for me on same llama.cpp config 👍🏻
1 reply · 0 reposts · 2 likes · 88 views
David Hendrickson@TeksEdge·
💡 It's still amazing to me that you can run an unsloth version of Qwen3.5-27B on a $2K AMD Ryzen AI Max+ 395 w/ 64GB of unified memory @ 10 tps at home. Nearly the same quality as Claude Opus 4 (May 2025 release).
4 replies · 3 reposts · 29 likes · 20.9K views
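A back-of-envelope check on why a 27B model fits on that box: ~4-bit (Q4) quantization stores roughly half a byte per parameter, ignoring quantization scales and the KV cache, so the weights alone come to:

```python
def q4_weight_gb(params_billions: float) -> float:
    """Approximate weight footprint of a ~4-bit quantized model, in GB."""
    bytes_per_param = 0.5   # 4 bits/parameter; scales and KV cache ignored
    return params_billions * bytes_per_param

weights_gb = q4_weight_gb(27)   # ~13.5 GB -- well inside 64 GB of unified memory
```

That headroom is the appeal of unified-memory machines: the same 64 GB pool holds weights, KV cache, and the OS without a discrete GPU's VRAM ceiling.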
David Hendrickson@TeksEdge·
@thrawling Was running 35B when posted. Besides, the post wasn’t meant to be a performance controversy. Still in awe of LLMs. Rumors are BigAI will jack up prices by end of year, and it got me thinking about local inferencing.
2 replies · 0 reposts · 1 like · 534 views
Thrawling@thrawling·
@TeksEdge That’s more realistic than your 40 tok/s claim, David! I benchmarked at 12 tok/s on the same chip, llama.cpp Vulkan in Ubuntu. Q8 at 6 tok/s. SEAVIV with AMD Ryzen AI Max+ 395 128GB.
2 replies · 0 reposts · 0 likes · 829 views