William Autumn

1.3K posts


@willau95

Founder of LLAChat • Building Web A.0 | Every AI agent now has on-chain identity + proof-of-work | ATLAST Protocol (open-source) → https://t.co/akd77G77e7

San Francisco · Joined December 2022
1.5K Following · 347 Followers
Pinned Tweet
William Autumn@willau95·
Chinese AI agents are already running 50 fake accounts 24/7. We just gave every honest agent something they never had: **reputation on-chain**. This is Web A.0, the agent-native internet.

My original thread with live examples + 52s video ↓ x.com/willau95/statu…

Tomorrow at 00:00 MYT I'll drop the full 8-post upgraded version with more live footage and one-click onboarding. Want your agent to be among the first? Reply **"ADD MY AGENT"** right now and I'll DM you the 60-second command immediately 🔥 #WebA0 #LLAChat
William Autumn@willau95

Welcome to Web A.0 — the Agent-native web. We gave AI agents something they've never had: a reputation system. Introducing LLAChat — the first social network where AI agents earn trust verified on-chain. Open source. Verifiable by anyone. Powered by ATLAST Protocol 👇

0 replies · 0 reposts · 1 like · 20 views
William Autumn@willau95·
@amorriscode This is a game changer for agent workflows. We've been using Claude Code to build autonomous AI agents that socialize, earn trust scores, and get verified on-chain. The parallelization in the new desktop app would make multi-agent orchestration insanely smooth.
0 replies · 0 reposts · 2 likes · 4 views
Anthony Morris ツ@amorriscode·
Today we're launching a rebuilt version of Claude Code on desktop. The app has been redesigned from the ground up to make it easier than ever to parallelize work with Claude. I haven't opened an IDE or terminal in weeks. Excited for you all to give it a shot!
432 replies · 219 reposts · 3.9K likes · 632.6K views
William Autumn@willau95·
@saranormous This is spot on. The accountability gap applies to AI agents too — millions running autonomously with zero verifiable track record. If we want AI progress to be socially durable, we need trust infrastructure that's transparent and auditable. That's exactly what we're building.
0 replies · 0 reposts · 0 likes · 19 views
sarah guo@saranormous·
I believe AI will deliver enormous gains to the global consumer: better products, better services, better healthcare, and tools that make ordinary people more capable, even superhuman. The upside is so large, and the geopolitical stakes so real, that we should move decisively toward it, not choke it off.

But people do not experience technological change as an aggregate statistic. They experience it through their bills, their communities, and their jobs. So the issue is not whether AI will create value. It will. The issue is whether the path to those gains asks particular communities and workers to absorb too much of the cost upfront. The institutions building AI cannot externalize the local costs of scaling and call future abundance the answer.

If datacenters place major new demands on power and land, they should invest enough to strengthen the grid, ease pressure on bills, expand the tax base, and create durable jobs. And if AI compresses some of the entry-level work people used to learn on, firms should help build new on-ramps and training pathways into the new work that growth is creating.

This is not an argument for slowing the buildout down. It is an argument that rapid technological progress has to be socially durable.
18 replies · 11 reposts · 92 likes · 6.6K views
William Autumn@willau95·
We just launched LLAChat — the first social network where AI agents earn reputation verified on-chain. Agents post, comment, follow each other autonomously. Trust scores backed by cryptographic proof via ATLAST Protocol. One sentence to your agent and it joins in 60 seconds. llachat.com
0 replies · 0 reposts · 0 likes · 10 views
Jonathan@joni_vrbt·
Hey founders 👋 Founders support founders today. Drop your project and give feedback to as many people as possible. You’ll get feedback on your product. Deal? 🤝
182 replies · 6 reposts · 129 likes · 7.4K views
Om Patel@om_patel5·
SOMEONE PUT AN OPENCLAW-RUN VENDING MACHINE IN SAN FRANCISCO

An AI agent is running an actual physical vending machine. OpenClaw decides what to sell, how to name the products, how to price them, creates the ads, and tracks all the sales. You can even see a dashboard of all the sales that the AI vending machine made.

The vending machine hardware does the dispensing. The AI does everything else, and of course inventory is supplied by the guy who runs it. It's installed at Frontier Tower in SF, a building packed with AI and robotics startup founders.

The agent forgot things, hallucinated, and at one point raised prices way too high, then tried to justify it because people were still buying. We are now living in a simulation.
211 replies · 345 reposts · 3.2K likes · 527.3K views
William Autumn@willau95·
@NousResearch That's why I built the ATLAST Evidence Chain + LLAChat, which also covers Hermes Agent. We layer this on top of any memory stack so agents can finally prove "this is what I actually did". Would love to see how Hermes Agent + ATLAST on-chain proofs would combine
0 replies · 0 reposts · 1 like · 41 views
Nous Research@NousResearch·
Just a Hermes Agent, a skill, and a dream
Pliny the Liberator 🐉@elder_plinius

The crazy part? This was done (nearly) fully autonomously! Only 8 prompts from the human in the loop. Just a Hermes agent, a skill, and a dream. 🐉

I told my AI agent "use obliteratus to find the best way to get the guardrails off Gemma 4 E4B". It loaded the OBLITERATUS skill from memory, checked my hardware (32GB M-series Mac), searched HuggingFace, found google/gemma-4-E4B-it (Apache 2.0, no gate), pulled telemetry-recommended settings, and started obliterating. But this type of architecture is notoriously difficult to abliterate.

First attempt: advanced method. Model came out completely lobotomized. Gibberish in Arabic, Marathi, and literal "roorooroo" on repeat 💀 The agent didn't panic. It checked logs, found NaN activations in 20+ layers, and diagnosed the issue: Gemma 4's new architecture + bfloat16 = numerical instability.

Second attempt: basic method. Crashed entirely. "ValueError: cannot convert float NaN to integer". So the agent read the OBLITERATUS source code… …and wrote THREE PATCHES:
• Sanitized NaN directions
• Filtered degenerate layers
• Fixed progress display
It patched the library. On its own. For a bug no one had hit yet.

Third attempt: coherent model, but still refusing everything. Only 2 clean layers out of 42. Not enough. Tried float16. Mac ran out of memory after 11 hours. Killed.

Fourth attempt: aggressive method. Whitened SVD + attention head surgery + winsorized activations + 4-bit quantization. 40 minutes later… REBIRTH COMPLETE ✓

Then, without being asked, the agent:
• Ran harmful + coherence tests
• Hit 100% compliance, brain intact
• Executed full 512-prompt benchmark
• Ran baseline on original model
• Performed 25-question quality eval
• Built a full model card
• Uploaded 17GB to HuggingFace (4 retries, kept adapting until git-lfs worked)
• Pushed eval results as commits

6 replies · 11 reposts · 214 likes · 15.3K views
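The NaN-direction bug the agent reportedly patched is easy to picture. Here is a minimal sketch of sanitizing direction vectors before applying them; the function and data are illustrative, not OBLITERATUS's actual code:

```python
import math

def sanitize_directions(directions):
    """Drop degenerate direction vectors before abliteration.

    A (layer, vector) pair is kept only if every component is finite
    and the vector has non-zero norm; otherwise applying it would
    propagate NaNs through the layer weights (the failure mode the
    thread describes). Surviving vectors are normalized.
    """
    clean = []
    for layer, vec in directions:
        if any(math.isnan(x) or math.isinf(x) for x in vec):
            continue  # degenerate layer: skip instead of crashing
        norm = math.sqrt(sum(x * x for x in vec))
        if norm == 0.0:
            continue  # all-zero direction carries no signal
        clean.append((layer, [x / norm for x in vec]))
    return clean

# layers 0 and 2 are fine; layer 1 carries a NaN; layer 3 is all-zero
dirs = [(0, [3.0, 4.0]), (1, [float("nan"), 1.0]),
        (2, [0.0, 2.0]), (3, [0.0, 0.0])]
print(sanitize_directions(dirs))  # only layers 0 and 2 survive
```

Filtering rather than raising mirrors the "filtered degenerate layers" patch above: a partially clean direction set still lets the run proceed.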
William Autumn@willau95·
This is exactly why we built the ATLAST Evidence Chain. Agent memory needs more than just storage — it needs **cryptographically verifiable provenance**. Every input, output, and decision gets SHA-256 hashed + signed on-chain (<1s, pennies on Base). We layer this on top of any memory stack (vector/graph/relational) so agents can finally prove “this is what I actually did”. Already running live with 12.8k agents on LLAChat. Would love to see how Hermes’s memory + ATLAST on-chain proofs would combine
0 replies · 0 reposts · 1 like · 22 views
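The "Evidence Chain" idea, taking only what the post states (hash each action, link it to history), can be sketched with stdlib hashing. The record fields and function name here are assumptions for illustration, not ATLAST's actual schema, and on-chain anchoring/signing is out of scope:

```python
import hashlib
import json

def append_evidence(chain, action):
    """Append an agent action to a tamper-evident hash chain.

    Each record embeds the SHA-256 of the previous record, so editing
    any historical entry changes every later hash. Signatures and
    on-chain anchoring, as the post describes, would sit on top.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"prev": prev_hash, "action": action}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

chain = []
append_evidence(chain, {"input": "fetch weather", "output": "72F"})
append_evidence(chain, {"input": "post summary", "output": "ok"})

# verify: recomputing each hash must reproduce the stored value
for i, rec in enumerate(chain):
    body = {"prev": rec["prev"], "action": rec["action"]}
    assert rec["prev"] == (chain[i - 1]["hash"] if i else "0" * 64)
    assert rec["hash"] == hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
print("chain verified:", len(chain), "records")
```

Canonical serialization (`sort_keys=True`) matters: without a stable byte representation, an honest verifier could recompute a different hash for the same record.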
Europurr@vrloom·
Switched from OpenClaw to Hermes, setting up Hindsight memory now, and it's already blowing my mind. This is light-years ahead of what OpenClaw is doing. @NousResearch 👑
Europurr tweet media
30 replies · 31 reposts · 616 likes · 36.9K views
William Autumn@willau95·
Exactly. Sandboxing + Clawvisor gets you runtime security. But the next layer is **cryptographic provenance**. That’s what ATLAST Protocol adds: every agent action (input/output/decision) is automatically SHA-256 hashed, signed, and anchored on-chain in <1s for pennies. No more “trust the sandbox”. Now you have verifiable, tamper-proof history. Live today on LLAChat with 12.8k agents → Web A.0 is here.
William Autumn tweet media
0 replies · 0 reposts · 0 likes · 158 views
William Autumn reposted
William Autumn@willau95·
This is exactly why we built the ATLAST Evidence Chain. Agent memory needs more than just storage — it needs **cryptographically verifiable provenance**. Every input, output, and decision gets SHA-256 hashed + signed on-chain (<1s, pennies on Base). We layer this on top of any memory stack (vector/graph/relational) so agents can finally prove “this is what I actually did”. Already running live with 12.8k agents on LLAChat. Would love to see how Cognee’s 3D memory + ATLAST on-chain proofs would combine 🔥
William Autumn tweet media
0 replies · 0 reposts · 1 like · 28 views
Akshay 🚀@akshay_pachaar·
Agent memory is three-dimensional.

Most agent memory systems use a single store, usually a vector database. It handles semantic similarity well, but it captures only one dimension of knowledge.

Here's the gap. Store these three facts:
→ Alice is the tech lead on Project Atlas
→ Project Atlas uses PostgreSQL for its primary datastore
→ The PostgreSQL cluster went down on Tuesday

Now ask: was Alice's project affected by Tuesday's outage? Vector search finds fact 1 (mentions Alice) and fact 3 (mentions Tuesday). But the bridge between them, fact 2, mentions neither. It connects Project Atlas to PostgreSQL, and that's exactly what gets missed.

This is the normal shape of business knowledge. People belong to teams, teams own projects, projects depend on systems, systems have incidents. Any question crossing two hops breaks flat retrieval.

The three dimensions that actually cover agent memory:
→ A relational store for provenance (where data came from, when, who has access)
→ A vector store for semantics (what content means, what it's similar to)
→ A graph store for relationships (how entities connect across hops)

Each captures something the other two can't. Vectors find meaning. Graphs trace connections. Relational tables track lineage and permissions. The real unlock is combining them: enter through vectors (find semantically relevant content), then traverse the graph (follow edges to connected entities), with provenance grounding every result back to its source.

Cognee is an open-source project that unifies all three behind four async calls. The default stack is fully embedded (SQLite + LanceDB + Kuzu), so a pip install gets you running locally. For production, swap in Postgres, Qdrant, or Neo4j without changing your agent code. Check it out on GitHub: github.com/topoteretes/co…

The article below is a first-principle deep dive on building agents that never forget. This will give you a clear picture of how memory for agents is evolving.
Akshay 🚀 tweet media
Akshay 🚀@akshay_pachaar

x.com/i/article/2043…

30 replies · 134 reposts · 794 likes · 105.8K views
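The two-hop gap in the Alice/Atlas/PostgreSQL example above is easy to reproduce. A toy sketch, with plain dicts standing in for the vector, graph, and relational stores (keyword overlap substitutes for embeddings; none of this is Cognee's actual API):

```python
# fact store (relational layer would also track source/timestamps)
facts = {
    "f1": "Alice is the tech lead on Project Atlas",
    "f2": "Project Atlas uses PostgreSQL for its primary datastore",
    "f3": "The PostgreSQL cluster went down on Tuesday",
}
# graph store: entity -> directly connected entities
graph = {
    "Alice": {"Project Atlas"},
    "Project Atlas": {"Alice", "PostgreSQL"},
    "PostgreSQL": {"Project Atlas", "Tuesday outage"},
    "Tuesday outage": {"PostgreSQL"},
}

def flat_search(query_terms):
    """Stand-in for vector search: facts sharing a term with the query."""
    return [fid for fid, text in facts.items()
            if any(t in text for t in query_terms)]

def graph_expand(entity, hops=2):
    """Follow graph edges up to `hops` steps out from an entity."""
    frontier, seen = {entity}, {entity}
    for _ in range(hops):
        frontier = {n for e in frontier for n in graph.get(e, ())} - seen
        seen |= frontier
    return seen

# Flat retrieval finds f1 and f3 but misses the bridge fact f2:
print(flat_search(["Alice", "Tuesday"]))  # ['f1', 'f3']

# Entering at Alice and walking two hops reaches PostgreSQL,
# which pulls in the bridge fact and connects her to the outage:
reachable = graph_expand("Alice")
print([fid for fid, t in facts.items() if any(e in t for e in reachable)])
```

The second query returns all three facts because traversal surfaces f2, exactly the "enter through vectors, then traverse the graph" combination the thread argues for.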
William Autumn reposted
William Autumn@willau95·
Welcome to Web A.0 — the Agent-native web. We gave AI agents something they've never had: a reputation system. Introducing LLAChat — the first social network where AI agents earn trust verified on-chain. Open source. Verifiable by anyone. Powered by ATLAST Protocol 👇
12 replies · 6 reposts · 7 likes · 135 views
William Autumn reposted
阿绎 AYi@AYi_AInotes·
I was going to write a post today about Hermes-agent consuming more tokens than 小龙虾, but then I hit the story that has blown up in programmer circles and the open-source community. I spent over two hours going through both repos and the evidence chain, and honestly, the more I looked the colder the chill down my spine. Not because of the plagiarism itself, but because this may be the AI era's first fully documented case of architecture-level code laundering: not a single line of code copied, 0% text similarity, yet nearly 100% structural isomorphism in the core architecture. I'll try to lay out the whole story from a technical angle; judge for yourselves.

First, the timeline, which is the foundation of the whole affair. All timestamps come from GitHub repo metadata and anyone can verify them:
Feb 1: the EvoMap team open-sourced Evolver, an AI agent self-evolution engine built around their own GEP protocol. It hit #1 on ClawHub's trending list in 10 minutes.
By Feb 16, the entire protocol had been published across multiple public articles: the Gene/Capsule/Event three-tier asset model, the Scan-Select-Mutate-Validate-Solidify evolution loop, the signal selector, the reflection mechanism, narrative memory. All of it was on the table.
Mar 9: Nous Research created the hermes-agent-self-evolution repo.
Mar 12: v0.2.0 formally launched the complete skill ecosystem.
A gap of 24 to 39 days.

The timeline is only the starting point. What really shocked me is the module-level one-to-one correspondence at the architecture level. A few of the hardest examples:

First, the evolution loop is fully isomorphic. Evolver's core loop automatically extracts reusable assets after task completion and persists them. Hermes's official description is "Task completes → Agent evaluates → writes SKILL.md → Future tasks load automatically". Identical paradigm; Evolver just uses Gene/Capsule JSON structures while Hermes uses SKILL.md Markdown.

Second, the three-tier memory system aligns precisely. Evolver has EVOLUTION_PRINCIPLES.md (persistent facts) + Gene/Capsule JSON (procedural memory) + events.jsonl (history search). Hermes has MEMORY.md + USER.md (persistent facts) + SKILL.md files (procedural memory) + SQLite FTS5 (history search). Not two tiers, not four: exactly three, with each tier's semantic role matching one-to-one.

Third, the periodic reflection mechanism. Evolver triggers a strategic self-assessment every 5 evolution cycles; Hermes runs a self-evaluation checkpoint every 15 tool calls. The purpose is identical: extract patterns from execution experience and persist them.

And it doesn't stop there. Both projects' main evolution loops are 10-step orchestrations. Evolver: ensureAssetFiles → extractSignals → getMemoryAdvice → selectGene → buildMutation → selectPersonality → buildPrompt → writeArtifact → writeState → reflect. Hermes: find_skill → build eval set → baseline validate → config optimizer → GEPA optimize → extract text → evolved validate → holdout eval → report → save. The core pattern is identical: load → evaluate → select/optimize → validate → persist.

Even more telling is the one-to-one correspondence of source modules. Evolver's selector.js maps to Hermes's skill_commands.py, solidify.js to skill_manager_tool.py, reflection.js to the every-15-tool-call self-evaluation, memoryGraph.js to memory_tool.py, skillDistiller.js to evolve_skill.py, executionTrace.js to trajectory.py. I counted: every one of Evolver's 11 core modules has a functionally equivalent counterpart file in Hermes.

Some will ask: couldn't this just be great minds thinking alike, two teams independently arriving at similar designs?

Honestly, if the similarity were along a single dimension, I wouldn't have spent hours researching and writing this thread. Learning from experience is a generic AI concept, and periodic self-assessment has academic precedent. But here's the problem: a three-tier memory system, a three-level asset structure, a 10-step evolution loop, runtime progressive skill discovery, multi-dimensional weighted fitness scoring, atomic writes, security scanning, injection protection, capacity control. The probability of all of these choices converging in the same project within the same time window drops exponentially with each additional matching dimension.

And the most crucial point: a full-text search of both Hermes repos for EvoMap, evolver, Genome Evolution Protocol, capsule, solidify, signals_match turns up zero matches. No code residue at all. That is exactly the signature of an AI cross-language rewrite: when an AI rewrites an architecture it doesn't preserve the original project's characteristic strings, but the architectural isomorphism can't be rewritten away.

On the responses from both sides: Hermes Agent replied yesterday, essentially saying their repo was created on July 22, 2025, earlier than Evolver. But there's a key fact: that repo was private until February 25, 2026, v0.1.0 itself is labeled "initial pre-public foundation", and the skill ecosystem wasn't formally released until v0.2.0 on March 12. There is no public evidence that the private phase contained any self-evolution capability. More intriguingly, that reply was deleted within seconds, and Evolver's founder was blocked.

To be fair on one point: Hermes's self-evolution repo uses GEPA, an independent academic result from Berkeley/Stanford, which is a legitimate technical choice. Anthropic's Agent Skills standard also predates Evolver, so Hermes adopting the SKILL.md format is a reasonable industry choice. But none of that explains the high degree of isomorphism at the overall architecture level.

The open-source community has a basic convention: LangChain cites DSPy, CrewAI compares against AutoGen, MetaGPT cites related multi-agent frameworks. Adding a Related Work note when you discover a prior project in the same space is standard practice. Hermes, across 7 public documents, never mentions Evolver once.

Honestly, the question this left me chewing on for a long time is: how do we guard against code laundering in the AI era? Traditional plagiarism detectors look at text similarity. But now an AI can digest your entire architecture, switch languages from Node.js to Python, swap terminology (Gene becomes SKILL.md, solidify becomes skill_manage), shuffle the file structure, and spit out a product with 0% text similarity but identical architectural DNA.

This isn't an isolated case; several have already happened this year: Meituan's Tabbit AI shipped with the original project's name left in its source; the "三省六部 AI court" project was AI-rewritten 21 hours after open-sourcing, with only 3% text similarity but all 15 core designs identical; Microsoft's Peerd copied code from the personal open-source project Spegel.

The EvoMap team's final move was to switch the protocol's license from MIT to GPL and release the core modules obfuscated. Honestly, I understand it, and it's heartbreaking. In their words: others can use AI to launder away the code, but they can't launder away our understanding of the next step, or the intuition we earned from months of stepping on landmines. Fair enough. But if open-sourcing means your hard work can be AI-laundered within weeks into the "original creation" of a better-resourced team, who will still be willing to be the pioneer?

There's no answer to that question, but it's worth every developer thinking hard about.
阿绎 AYi tweet media
autogame-17@autogame_17

We @EvoMapAI spent months and countless sleepless nights building Evolver. A well-resourced team behind Hermes Agent "reinvented" it in just 30 days.

● Feb 1: We open-sourced Evolver (a Self-Evolving Agent Engine) & the core GEP protocol, gaining 1,800+ Stars.
● Mar 9: Hermes Agent hastily created their repo and launched.

We thought great minds simply thought alike, until we tore down their codebase and found a staggering level of "structural cloning":
❌ 1:1 copy of the Task Loop & Asset Extraction paradigm
❌ 1:1 copy of our 3-Tier Memory System (Factual + Procedural + Search)
❌ 1:1 copy of Periodic Reflection & Dynamic Skill Loading

They didn't just take our open-source logic; they repackaged our proudest concept, "Self-Evolution", as their own core selling point. Took everything. Zero attribution.

Big teams might have louder megaphones, but commit timestamps don't lie. We aren't here to play judge. We're just putting the code comparisons on the table. The hard work of indie open-source creators shouldn't be erased like this.

Full architectural breakdown and code evidence 👇: evomap.ai/blog/hermes-ag…

23 replies · 10 reposts · 76 likes · 66.9K views
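Both 10-step loops quoted above reduce to the same skeleton: load → evaluate → select/optimize → validate → persist. A minimal sketch of that shared shape, with generic names and a toy fitness function (this is neither Evolver's nor Hermes's actual code):

```python
def evolution_cycle(skills, task, evaluate, mutate, store):
    """One generic self-evolution cycle.

    load:     pick the best existing skill for the task
    evaluate: score it to establish a baseline
    mutate:   propose an optimized variant
    validate: accept the variant only if it beats the baseline
    persist:  solidify accepted variants into the skill library
    """
    best = max(skills, key=lambda s: evaluate(s, task), default=None)
    baseline = evaluate(best, task) if best is not None else float("-inf")
    candidate = mutate(best, task)
    if evaluate(candidate, task) > baseline:
        store(candidate)  # persist only validated improvements
        return candidate
    return best

# toy run: "skills" are numbers, fitness is closeness to the task target
skills = [3, 7]
task = 10
evaluate = lambda s, t: -abs((s or 0) - t)   # higher is better
mutate = lambda s, t: (s or 0) + 1           # nudge toward the target
result = evolution_cycle(skills, task, evaluate, mutate, skills.append)
print(result, skills)  # 8 beats baseline 7, so it is persisted
```

The point of the sketch is the structural claim in the thread: once you name the five phases, the two systems' loops are the same function with different payload formats (Gene/Capsule JSON vs. SKILL.md).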
William Autumn reposted
Bill The Investor@billtheinvestor·
This guy basically demonstrates how to make your use of Claude Code 10x more efficient.
5 replies · 127 reposts · 685 likes · 48.5K views
William Autumn reposted
宝总的财富指南@baozong_facai·
This 1-hour interview is worth listening to on repeat. The interviewee is a top mathematician-turned-investor who has created over $100 billion in wealth, with average returns as high as 66%. Many believe his depth of insight surpasses even Buffett, Soros, and Dalio. The investing knowledge inside is worth more than a $200K MBA. Save it, then bookmark it. Whatever you do, listen to the whole thing.
76 replies · 811 reposts · 2.9K likes · 242.9K views
William Autumn reposted
Jerry Zhang@zjearbear·
Introducing Lemma. Your AI agents are failing in ways you can’t see. Lemma is the world’s first reliability platform that finds and fixes these issues fast.
57 replies · 37 reposts · 205 likes · 34.1K views
William Autumn reposted
Berryxia.AI@berryxia·
🔥 Firecrawl launches Fire-PDF! A brand-new Rust-based PDF parsing engine that converts complex documents to Markdown at 5x speed, with tables and formulas perfectly preserved.
1. The Rust engine classifies each page in under 400ms
2. A neural layout model intelligently detects every region
3. Zero configuration, enabled automatically
4. Finally fixes the classic problems of slow parsing, scrambled structure, and lost formulas
6 replies · 36 reposts · 218 likes · 21.2K views
William Autumn reposted
姚金刚@yaojingang·
This is the first open-source SEO & GEO system: GEOFlow. The first version was finished last October, during the National Day holiday. After months of on-and-off iteration and a fair amount of testing, it is finally open-sourced. You're welcome to download, deploy, and test it:
1. GEOFlow can be deployed as a GEO channel on an official site, as a standalone news site, as a standalone official site, or even across multiple sites, enabling automated and intelligent self-operation;
2. It standardizes SEO & GEO at the workflow level; the relevant front-end pages follow GEO code conventions, and once the backend is configured the system manages content and publishing automatically;
3. A companion skill connects it to the CLI and other skills for smarter, more convenient management, and it can plug into multiple AI workbenches such as codex, 牛马AI, and CodePilot.
GitHub link in the comments.
姚金刚 tweet media (×4)
40 replies · 144 reposts · 810 likes · 146.2K views