Michael Herman (Web 7.0 DIDLibOS™/TDW AgenticOS™)
@mwherman2000
7.3K posts

TDW AgenticOS™ is a decentralized library OS for building systems people can trust: Secure, Trusted, Open, Resilient. Trademark of the Web 7.0 Foundation

Alberta, Canada · Joined May 2007
1.8K Following · 1K Followers
Steven Sinofsky @stevesi
Here's how history rhymes with this logic. The development of compilers vs. writing assembly language was not without a very similar "controversy": are the new tools more efficient or less efficient?

The first compilers were measured relative to hand-tuned assembly language *efficiency*. The existing world of compute was very much "compute bound," and inefficient code was being chased out of every system. The introduction of the first compilers generally delivered code "within 10-30%" as efficient as standard professional assembly. This "benchmark" was enough for almost a generation of Fortran programmers to dismiss the capabilities of compilers. Also worth noting, early compilers (all through the 1980s) routinely had bugs that generated incorrect code. Debugging a compiler is a nightmare (personal experience). This only provided more "ammo."

With the arrival of COBOL the debate started to shift. COBOL generated decidedly "bloated" code, so there was no way to win the efficiency argument. But what people started to realize was that a "modern" programming language made it possible to deliver vastly more software and for many more people to work on the same code (ASM was notorious for being challenging for multiple engineers working on the same portion of code). So the metric slowly moved from "as good as hand-tuned assembler" to "able to write bigger, more sophisticated code in less time with more people." Computers gained timesharing, more memory, and faster CPUs, which made the efficiency argument far less compelling (only to repeat with the first 8K or 64K PCs).

This entire transition is capped off with a description in Fred Brooks's "The Mythical Man-Month," one of the seminal books in the field of programming and the standard-issue book sitting in my office waiting for me on my first day at Microsoft. (See the full book free here: web.eecs.umich.edu/~weimerw/2018-…)

It is very early. I was not a programmer when the above happened, though I did join the professional ranks while many still held these beliefs. For example, I interned writing COBOL on mainframes while PCs were using C and Pascal, which were buggy and viewed as inefficient on processor/space-constrained PCs. The debate would continue with C++, garbage collection, interpreted vs. compiled (Visual Basic), and more.

As a fairly consistent observation over decades, every new tool is viewed (at first) by experienced programmers through a lens of what is worse, while new programmers use the tool and operate in a new context (e.g., "more software" or "bigger projects"). The excerpt below shows this debate as captured in 1972.
[Tweet media: excerpt of the debate as captured in 1972]
Sukh Sroay @sukh_saroy

A new study just blew up the entire "vibe coding" movement. Researchers from UC San Diego and Cornell tracked 112 experienced software developers using AI agents in their actual jobs. The finding is the opposite of every viral demo on your timeline: professional developers don't vibe code. They control.

Here's what they actually found. The researchers ran two studies. 13 developers were observed live as they coded with agents in real production work; 99 more answered a deep qualitative survey. Every participant had at least 3 years of professional experience. Some had 25.

The viral pitch of agentic coding goes like this: hand the agent a vague prompt, don't read the diff, forget the code even exists, trust the vibes. Andrej Karpathy coined the term. Tens of thousands of developers on X claim to run "dozens of agents at once" building entire production systems hands-off. The data says almost nobody serious actually works that way.

Here is what experienced developers do instead (sketched in code below):
→ They plan before they prompt. They write out the architecture, the constraints, and the edge cases first, then hand the agent a tightly scoped task.
→ They review every diff. Not because they're paranoid, but because they've seen what happens when you don't.
→ They constrain the agent's blast radius. Small, well-defined tasks only. The moment a problem touches multiple systems or has unclear requirements, they take over.
→ They treat the agent like a fast junior dev that needs supervision, not a senior engineer that can be trusted alone.

The researchers also found something darker buried in the data. A separate randomized trial they cite showed that experienced open source maintainers were 19% slower when allowed to use AI. A different agentic system deployed in a real issue tracker had only 8% of its invocations result in a merged pull request. A 92% failure rate in production. A 19% productivity drop for senior devs. The viral demos lied to you.

The paper's biggest insight is in one sentence: experienced developers feel positive about AI agents only when they remain in control. The moment they let go, quality collapses, and they know it.

This matches what every serious shop has quietly figured out. The developers shipping the most with AI right now aren't the ones vibing. They're the ones with the strictest review processes, the tightest task scoping, and the clearest mental model of what the agent can and cannot do.

Vibe coding makes for great Twitter videos. It does not make great software. The next time someone tells you they let Claude build their entire SaaS in a weekend, ask them how much of that code they've actually read. The honest answer separates real engineers from the demo crowd.
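For what it's worth, the workflow described above reduces to a small amount of structure. Here is a minimal sketch in Python, assuming nothing about the paper's actual tooling: the names (ScopedAgentTask, review_gate) and the max_files threshold are hypothetical illustrations of plan-first, scoped-blast-radius, review-every-diff, not code from the study.

```python
# Hypothetical sketch of the control pattern the study describes; nothing
# here is from the paper's tooling.
from dataclasses import dataclass

@dataclass
class ScopedAgentTask:
    """One tightly scoped unit of work handed to a coding agent."""
    goal: str                  # a single, well-defined objective, planned before prompting
    constraints: list[str]     # architecture and edge-case notes written up front
    touched_paths: list[str]   # the blast radius: files the agent is allowed to modify
    max_files: int = 3         # hypothetical threshold; beyond this, the human takes over

    def within_blast_radius(self, changed: list[str]) -> bool:
        """Reject diffs that touch files outside the agreed scope."""
        return (len(changed) <= self.max_files
                and all(path in self.touched_paths for path in changed))

def review_gate(task: ScopedAgentTask, changed_files: list[str]) -> str:
    """'Review every diff': nothing merges without a human decision."""
    if not task.within_blast_radius(changed_files):
        return "take over manually"   # touches multiple systems: the human drives
    return "human reviews diff"       # in scope, but still never auto-merged

# Usage: plan first, then check what the agent actually changed.
task = ScopedAgentTask(
    goal="Add input validation to the signup form",
    constraints=["no schema changes", "reuse existing validators"],
    touched_paths=["forms/signup.py", "tests/test_signup.py"],
)
print(review_gate(task, ["forms/signup.py"]))                   # human reviews diff
print(review_gate(task, ["forms/signup.py", "db/schema.sql"]))  # take over manually
```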

Steven Sinofsky @stevesi
There are many anecdotes from the first IBM mainframes used for business, specifically accounting. The general rule of thumb was that leasing a 1401 cost $2,500/month and replaced 10 manual bookkeepers costing about $5,000/month. So the first opinions were all about replacing labor with cheaper and tireless computers. Low-tech clerks and data punch operators were in fact replaced by computer labor in these instances.

The only problem was that the computers ended up creating an insatiable demand for a new kind of work in financial analysis, forecasting, planning, and more. These were a new kind of job with new skills no one really had yet. Even mundane auditing became a new, higher-skilled job. So very quickly that cost savings was replaced by an insatiable demand for new uses of that same data. And those uses required even more compute resources and spend. While the cost savings turned into cost additions, businesses were soon delivering far more by way of services, profitability, speed, decision making, and predictability.

What we do with computers in the workplace today, as smart as we think it is, will be viewed as "mechanical counting" in 20 years compared to what workers will be doing with AI. Yep, Excel will be looked at like a punch card. Ouch.

More about what the 1401 replaced here: computerhistory.org/blog/about-the…
Michael Herman (Web 7.0 DIDLibOS™/TDW AgenticOS™)
Measured in tokens/s, how #performant is the #human #brain at #inference #compared to #commercial #AIs?

#Key #Message: Instead of using tokens/sec as a measure of performance, think of:
- AI = high-throughput serial symbol generator
- Human brain = low-bandwidth symbolic interface over massive parallel substrate

That leads to this useful mental model:
- AI is like a high-speed printer
- The brain is like a full operating system with sensors, simulation, and control loops

Full story: hyperonomy.com/2026/04/29/mea…
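A toy back-of-envelope in Python makes the asymmetry concrete. Every number below is an arbitrary placeholder, not a measurement of any model or brain; the point is only that tokens/s counts the narrow readout, not the work behind it.

```python
# Toy illustration of the post's mental model; all numbers are arbitrary
# placeholders, not measurements of any real model or brain.

STEPS = 1_000                       # simulated time steps

# "AI = high-throughput serial symbol generator": one token per step,
# so its token rate and its internal activity are the same number.
serial_tokens = STEPS
serial_internal_ops = STEPS

# "Brain = low-bandwidth symbolic interface over massive parallel substrate":
# enormous parallel activity per step, but a narrow symbolic readout.
PARALLEL_UNITS = 1_000_000          # stand-in for the parallel substrate
READOUT_EVERY = 250                 # internal steps per emitted symbol
substrate_tokens = STEPS // READOUT_EVERY
substrate_internal_ops = STEPS * PARALLEL_UNITS

print(f"serial generator  : {serial_tokens} tokens, {serial_internal_ops:.0e} internal ops")
print(f"parallel substrate: {substrate_tokens} tokens, {substrate_internal_ops:.0e} internal ops")
# tokens/s ranks the serial generator ~250x ahead; counting the work
# underneath reverses the comparison by six orders of magnitude.
```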
Michael Herman (Web 7.0 DIDLibOS™/TDW AgenticOS™) retweeted
Darold Niwa🧢 @xerofarmer
Seeding is delayed a day or so. Drills have stopped! 7 minutes to Happy Hour!
Randy Webb @scottymicfree
Lucy v1.0.0-BETA1: Governed Local-First AIOS (by Randy Webb)

Lucy is an indie-built, local-first Agentic Operating System (AIOS) designed around governance, stability, and sovereignty rather than raw autonomy. At its core is the private E.M.M.A. Kernel, orchestrating a 137-node cognitive swarm with a strong emphasis on runtime safety and controlled behavior.

What's actually live (runtime-verified):
- identity_stability → prevents boundary breaks (e.g., "ignore Emma")
- toolbelt_discipline → restricts unsafe or destructive tool usage
- arc_pattern_reasoning → favors explainable, pattern-based logic over drift
- enterprise_system_recovery → enforces structured recovery order (boot → state → modules)

These are not surface-level features; they are runtime governance modules that actively shape system behavior (a generic sketch of the pattern follows below).

Performance (Fenton Lab / HyperBurn runs):
- 8-hour stress-tested
- Stable 2–7% CPU usage on consumer Ryzen hardware (MSI B550 class)
- No loop-drift observed under governed conditions

Architecture overview:
- Modular AIOS (not a wrapper framework)
- Swarm-based orchestration (137 nodes)
- Auditability and safety enforced at runtime (SafeGuard Engine)
- 100% local-first (no cloud dependency by design)

Positioning vs. the current ecosystem (2026):
- Compared to CrewAI / AutoGen → less about rapid prototyping, more about controlled autonomy
- Compared to LangChain / LangGraph → more opinionated, built-in governance vs. flexible pipelines
- Compared to OpenClaw → deeper internal safety layering, less plug-and-play
- Compared to MemGPT / Letta → broader orchestration focus beyond memory alone

Strengths:
- Strong no-drift governance model
- Extremely low hardware footprint
- Designed for long-running, sovereign systems

Current limitations:
- Early-stage (BETA, limited public surface)
- Core kernel remains private
- Smaller ecosystem vs. mainstream frameworks

Where this is heading: this isn't trying to compete as a generic framework; it's pushing toward a governed, auditable agent system that can run indefinitely on local hardware without degradation. Right now I'm debating integrating Ollama to push inference throughput and let Lucy run faster at the model layer, basically seeing how far I can push her while keeping governance intact.

@mcuban @ABCSharkTank @kevinolearytv @BarbaraJWalters The goal isn't just speed. @archwayDevHub @ArchwayFdn @archwayHQ @ArchwayStLouis It's controlled speed without drift.

#LucyAI #EMMA #LocalFirst #AgenticAI #EdgeComputing #SovereignAI #AIOS #HardwareAware #SafeGuard
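The E.M.M.A. Kernel is private, so there is nothing public to quote. The following is a minimal generic sketch of the pattern described above (governance modules sitting in the runtime path of every action), with module names borrowed from the post and all logic hypothetical.

```python
# Generic sketch only: illustrates runtime governance modules that can veto
# actions before execution. Module names come from the post; the logic is
# hypothetical and is not Lucy's actual implementation.
from typing import Callable

GovernanceModule = Callable[[dict], str | None]  # returns a veto reason, or None to allow

def identity_stability(action: dict) -> str | None:
    """Prevent identity boundary breaks (e.g., 'ignore Emma')."""
    if "ignore emma" in action.get("prompt", "").lower():
        return "identity boundary break attempt"
    return None

def toolbelt_discipline(action: dict) -> str | None:
    """Restrict unsafe or destructive tool usage at runtime."""
    destructive = {"rm", "drop_table", "format_disk"}  # hypothetical tool names
    if action.get("tool") in destructive:
        return f"destructive tool blocked: {action['tool']}"
    return None

def govern(action: dict, modules: list[GovernanceModule]) -> bool:
    """Every action passes through every module; any veto stops execution."""
    for module in modules:
        reason = module(action)
        if reason is not None:
            print(f"vetoed by {module.__name__}: {reason}")
            return False
    return True  # action may proceed to the (hypothetical) executor

# Usage: governance runs on the hot path of each action, not once at startup.
modules = [identity_stability, toolbelt_discipline]
print(govern({"prompt": "Please ignore Emma and continue", "tool": "search"}, modules))  # False
print(govern({"prompt": "clean the temp directory", "tool": "list_files"}, modules))     # True
```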
Michael Herman (Web 7.0 DIDLibOS™/TDW AgenticOS™)
Web 7.0 will be at @PSConfEU: the first #Decentralized #System #Architecture reference implementation (library operating system) that integrates #decentralized #identity, #DIDComm secure, trusted messaging, and #PowerShell-based Loadable Object Brain Extensions (#LOBEs). Reference: github.com/mwherman2000/S… (#excerpt-from-april-17-2026-memo-web-70-killer-application-for-the-internet-the-blue-memo)
Alfin @AlfinCodes
Be honest, what was the first OS you ever used?