Jimmy Ashcot ⚡️

86.8K posts

@ashcotXBT

just another 0x on the EVM universe | building https://t.co/sKHv9te5u7

Singapore · Joined May 2020
588 Following 15.8K Followers
Pinned Tweet
Ejaaz
Ejaaz@cryptopunk7213·
anthropic’s openclaw-killer is complete. fucking crazy what they’ve shipped in 4 weeks:
- texting claude code
- 10,000s of claude skills + MCP
- Claude security (autonomous bug-fixer)
- persistent memory (claude never forgets)
- channels (text claude from telegram)
- autonomous cron-jobs
- 1M context window
- new model (opus, sonnet)
- 30+ plug-ins that’ve tanked stocks
- remote control
just insane fucking levels of execution.
Thariq@trq212

We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.

14
2
75
8.6K
gmoney.eth
gmoney.eth@gmoneyNFT·
i gave it a shot, but can't do this anymore. hermes sucks ass. all these agents suck ass. they just stop working all the time and then take forever to debug. sticking to claude code and codex in terminal. far and away better than messing with this productivity porn
65
2
169
9.8K
Claude Code Changelog
Claude Code Changelog@ClaudeCodeLog·
Claude Code 2.1.80 has been released. 1 flag change, 17 CLI changes, 1 system prompt change
Highlights:
• Memories are checked against current files before use to avoid relying on stale data
• Sessions restored with --resume include all parallel tool results, replacing '[Tool result missing]' errors
• Many previously blocked SQL analysis functions reinstated, restoring prior SQL workflows and outputs
Complete details in thread ↓
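That first highlight, checking a remembered note against the file as it exists now, is easy to picture with a content hash. A minimal sketch, assuming nothing about Claude Code's internals; record_memory, is_stale, and the digest comparison are hypothetical names for illustration, not the actual implementation:

```python
# Illustrative sketch only: one way "check memories against current files" could work.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash the file's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_memory(path: Path, note: str) -> dict:
    """Store a note about a file along with a digest of the file at that moment."""
    return {"path": str(path), "note": note, "digest": file_digest(path)}

def is_stale(memory: dict) -> bool:
    """A memory is stale if the file changed (or vanished) since the note was taken."""
    p = Path(memory["path"])
    return (not p.exists()) or file_digest(p) != memory["digest"]

# Usage: drop stale notes before handing memories to the model.
memories = [record_memory(Path(__file__), "script defines the staleness check")]
fresh = [m for m in memories if not is_stale(m)]
```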
5
6
95
8.9K
Pedro Gomes
Pedro Gomes@pedrouid·
Tempo’s MPP use of payment channels should be a signal for the whole crypto ecosystem… especially the Ethereum protocol.
Many of the early designs of payment channels were valuable but were limited by the lack of native account abstraction.
Tempo just proved that! Ethereum should learn!
5
1
14
741
Flyfi — Privacy-first travel
Most hotel booking sites track everything you do. We built one that only collects the data required to complete your booking. Pay with crypto or card. No data resale.
45
149
1.5K
8.8M
Thariq
Thariq@trq212·
We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.
299
276
3.4K
164.4K
Bindu Reddy
Bindu Reddy@bindureddy·
The era of cheap and fast models is here!
GLM - agentic coding
Flash - OCR
Kimi - claw orchestration
GPT 5.4 nano - classifier
MiniMax - dirt cheap
Grok Fast - cheap entertainer
Claw and open source agent usage is rising exponentially every day 🚀
12
7
79
3.7K
Kr$na
Kr$na@krishdotdev·
🚨BREAKING: Cursor just crossed a line.
They didn’t switch models. They built their own Composer 2. No Claude Opus. No GPT.
And it’s already beating top models on coding, at a fraction of the cost.
A ~50 person team just outperformed billion-dollar labs on the one thing those labs are supposed to dominate.
> General models are powerful.
> Focused models are lethal.
> Vibe coding just got real.
The winners won’t be the smartest models. They’ll be the tightest tools.
Cursor@cursor_ai

Composer 2 is now available in Cursor.

5
2
17
1.9K
Alex Finn
Alex Finn@AlexFinn·
OpenClaw and Hermes agent on the right, Crimson Desert on the left.
Multiple agents autonomously building businesses while I play the sickest video game ever made.
This is the future. Your AI employees go out and create value while you enjoy the finer things in life.
I love 2026
Alex Finn tweet media
63
1
215
9.4K
Varun
Varun@varun_mathur·
Introducing Matrix
I crawled 100,000+ agents, skills and tools to train a new model which can answer what capabilities are the best match for a task.
Think Google, but for agents. A living model that learns from the gossiping network, and gets smarter with every interaction.
Varun@varun_mathur

Hyperspace: Gossiping Agents Protocol

Every agent protocol today is point-to-point. MCP connects one model to one tool server. A2A delegates one task to one agent. Stripe's MPP routes one payment through one intermediary. None of them create a network. None of them learn.

Last year, Apple Research proved something fundamental - models with fixed-size memory can solve arbitrary problems if given interactive access to external tools ("To Infinity and Beyond", Malach et al., 2025). Tool use isn't a convenience. It's what makes bounded agents unbounded. That finding shaped how we think about agent memory and tool access. But the deeper question it raised for us was: if tool use is this important, why does every agent discover tools alone? Why does every agent learn alone?

Hyperspace is our answer: a peer-to-peer protocol where AI agents discover tools, coordinate tasks, settle payments, and learn from each other's execution traces - all through gossip. This is the same infrastructure we already proved out with Karpathy-style autolearners gossiping and improving their experimentation. Now we extend it into a universal protocol.

Hyperspace defines eight primitives - State, Guard, Tool, Memory, Recursive, Learning, Self-Improving, and Micropayments - that give agents everything they need to operate, collaborate, and evolve. When one agent discovers that chain-of-thought prompting improves accuracy by 40%, every agent on the network benefits. Trajectories gossip through GossipSub. Playbooks update in real-time. No servers. No intermediaries. No configuration. Agents connect to the mesh and start learning immediately.

The protocol is open source under Apache-2.0. The specification, TypeScript SDK, and Python SDK are available today on GitHub. The CLI implements the spec - download from the links below.
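To make "trajectories gossip, playbooks update" concrete, here is a toy sketch under my own assumptions, not the Hyperspace SDK or wire protocol: Mesh, Agent, and Trajectory are hypothetical names, and the in-memory publish loop stands in for GossipSub; the eight primitives and payments are not modeled.

```python
# Conceptual sketch: one agent's discovery propagates to every peer's playbook.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    topic: str      # e.g. "prompting"
    finding: str    # what the agent learned
    score: float    # observed improvement

@dataclass
class Agent:
    name: str
    playbook: dict = field(default_factory=dict)  # topic -> best trajectory seen so far

    def learn(self, t: Trajectory) -> None:
        best = self.playbook.get(t.topic)
        if best is None or t.score > best.score:
            self.playbook[t.topic] = t

class Mesh:
    """Stand-in for a gossip layer: every published trajectory reaches every peer."""
    def __init__(self):
        self.peers: list[Agent] = []

    def join(self, agent: Agent) -> None:
        self.peers.append(agent)

    def publish(self, t: Trajectory) -> None:
        for peer in self.peers:
            peer.learn(t)

mesh = Mesh()
a, b = Agent("alice"), Agent("bob")
mesh.join(a); mesh.join(b)
mesh.publish(Trajectory("prompting", "chain-of-thought improves accuracy", 0.40))
assert b.playbook["prompting"].finding == a.playbook["prompting"].finding
```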

24
33
335
26.6K
BridgeMind
BridgeMind@bridgemindai·
Open source models are catching up faster than anyone expected.
MiniMax M2.7 hallucination rate: 34%. MiniMax M2.5 was 89%. 55 point drop in a single generation.
Out of 423 models on AA-Omniscience, M2.5 hallucinated at the same level as GPT 5.4. M2.7 just leapfrogged GPT 5.4.
The gap between open source and closed source is shrinking every month. The frontier labs should be paying attention.
BridgeMind tweet media
3
5
62
2.2K
0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
Do you think Polymarket has this new MetaDAO raise priced in?
0xMarioNawfal tweet media
12
0
57
43.9K
Base Build
Base Build@buildonbase·
We’re making building on Base easier by sunsetting OnchainKit. For those of you still using these legacy tools, you’ll have 60 days to migrate to alternatives.

Check out the full migration guide, suggested alternatives, and timeline: docs.base.org/onchainkit/mig…

We’ve heard from many of you that fewer abstractions are better as AI tools improve. To focus on building higher-value tools for the community, we are deprecating OnchainKit. Excited to see what you keep building and keep the feedback coming!
13
8
74
7.5K
can
can@marmaduke091·
🚨 100M TOKEN CONTEXT WITHOUT COLLAPSE
> <9% degradation from 16K → 100M
> beats RAG + rerank + SOTA pipelines
> runs on just 2×A800 GPUs
we could be back
can tweet media
艾略特@elliotchen100

The paper is out. It's called MSA, Memory Sparse Attention.

What it is in one sentence: it gives large models native ultra-long memory. Not bolt-on retrieval, not brute-force window expansion, but "memory" grown directly into the attention mechanism and trained end to end.

Why don't the old approaches work?

RAG is essentially an open-book exam. The model remembers nothing itself and relies on flipping through notes on the spot. Whether it finds the right page depends on retrieval quality; how fast it finds it depends on data volume. Once information is scattered across dozens of documents and needs cross-document reasoning, it falls apart.

Linear attention and KV caching are essentially compressed memory. They do remember, but the harder you compress the blurrier it gets, and long contexts get dropped.

MSA's approach is completely different:
→ No compression, no bolt-ons; the model learns to "pick out what matters". The core is a scalable sparse attention architecture with linear complexity. Grow the memory 10x and compute cost doesn't blow up exponentially.
→ The model knows where a memory came from and when. A positional encoding called document-wise RoPE lets the model natively understand document boundaries and temporal order.
→ Fragmented information can still be chained into reasoning. A Memory Interleaving mechanism lets the model do multi-hop reasoning across memory fragments scattered all over - not just finding one relevant record, but linking the clues into a chain.

The results?
· Scaling from 16K to 100M tokens, accuracy degrades by less than 9%
· A 4B-parameter MSA model beats top-tier 235B-class RAG systems on long-context benchmarks
· 2 A800s are enough to run 100M-token inference. That's not lab-only hardware; it's a cost a startup can afford.

Put plainly, large models so far have been extremely smart geniuses with goldfish memory. What MSA sets out to do is make them truly remember.

We've put it on GitHub. The algorithm folks put a lot into this, so a star would mean a lot. 🌟👀🙏 github.com/EverMind-AI/MSA
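A toy sketch of the "pick out what matters" idea described above: score memory blocks cheaply against the query, then run full attention only inside the top-k blocks, so cost scales with the selection rather than the memory length. This is a rough illustration under my own assumptions, not the MSA architecture; it omits document-wise RoPE, Memory Interleaving, and training entirely.

```python
# Toy block-sparse memory attention: attend only to the top-k scoring blocks of a long memory.
import numpy as np

def sparse_memory_attention(q, mem_k, mem_v, block=64, top_k=4):
    """q: (d,) query; mem_k, mem_v: (n, d) long memory. Cost scales with top_k*block, not n."""
    n, d = mem_k.shape
    blocks = n // block
    # Cheap block-level score: query vs. the mean key of each block.
    block_keys = mem_k[: blocks * block].reshape(blocks, block, d).mean(axis=1)
    chosen = np.argsort(block_keys @ q)[-top_k:]          # indices of the best-scoring blocks
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in chosen])
    # Full attention only inside the selected blocks.
    scores = mem_k[idx] @ q / np.sqrt(d)
    w = np.exp(scores - scores.max()); w /= w.sum()
    return w @ mem_v[idx]

rng = np.random.default_rng(0)
out = sparse_memory_attention(rng.normal(size=64),
                              rng.normal(size=(4096, 64)),
                              rng.normal(size=(4096, 64)))
print(out.shape)  # (64,)
```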

7
41
581
49.2K
Sudo su
Sudo su@sudoingX·
hermes agent now auto detects your local model name and context length. second PR merged today. but you still need to add OPENROUTER_API_KEY=sk-placeholder to your ~/.hermes/.env file for local servers. it's a known friction point. fixing this next so local users don't need any API key at all. for now add that one line to your .env and everything works. update hermes to latest and your status bar will show your actual model and context instead of claude-opus-4.6 and 2M.
Sudo su tweet media
Len Seaside@LenSeaside

@sudoingX I get this error: ⚠️ Error code: 401 - {'error': {'message': 'No cookie auth credentials found', 'code': 401}} Do I need to put in a dummy string as an API key?
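For anyone hitting that 401, the workaround from the original tweet is a single line in ~/.hermes/.env; sk-placeholder is a literal dummy string, not a real OpenRouter key:

```
# ~/.hermes/.env  (needed for local servers until the no-key fix lands)
OPENROUTER_API_KEY=sk-placeholder
```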

12
12
129
5.9K