LonelyInvestorX

403 posts


@webb_dever

Lone investor in stocks × AI × Crypto. Building what's next, bit by bit.

Joined July 2022
613 Following · 157 Followers
LonelyInvestorX
LonelyInvestorX@webb_dever·
@garrytan 15k LOC isn't the surprising part anymore. The real separator is whether the team has review, evals, and runtime guardrails strong enough to absorb that velocity without creating a cleanup backlog.
0 replies · 0 reposts · 0 likes · 9 views
Garry Tan
Garry Tan@garrytan·
Lots of engineers think AI codegen is only good enough to do little bug fixes here and there. If you tell them you can ship 15k LOC of AI-generated code to prod, they think you have lost your mind. It’s so, so early. And also those engineers are living in 2025.
85 replies · 17 reposts · 265 likes · 17.2K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@vertr_ai Directionally yes, but the best engineers will still be the ones who can dive deep when needed. The role shifts toward decomposition, evals, and judgment, not just never touching code.
0 replies · 0 reposts · 0 likes · 4 views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@qoder_ai_ide The panoramic agent-progress view is the standout here. Once teams can see state, hooks, and browser context in one place, debugging multi-agent work gets much less hand-wavy.
0 replies · 0 reposts · 0 likes · 12 views
Qoder
Qoder@qoder_ai_ide·
Qoder IDE 0.9.0 is out.
- Experts: panoramic view of every agent's progress
- Quest: Supabase + dynamic Skill UIs (/create-skill-ui, show_widget)
- Built-in browser: Browser Use, bookmarks, DevTools
- Hooks: five lifecycle events, shell scripts, deterministic
What's new 👀
[media]
4 replies · 1 repost · 7 likes · 283 views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@shao__meng This step is critical. The problem with a lot of AI-generated design isn't generation capability but missing design-system context; once the agent can directly read components, variables, and canvas structure, output goes from "looks right" to "maintainable".
0 replies · 0 reposts · 1 like · 21 views
meng shao
meng shao@shao__meng·
Figma has officially opened the Canvas to AI agents: MCP clients such as Claude Code, Codex, and Cursor can now read and write Figma files directly through the use_figma MCP tool and Skills figma.com/blog/the-figma…

In the past, AI-generated designs often "looked right but felt wrong": lacking brand consistency and out of line with team conventions. The cause was the agent's missing context: your color system, component library, spacing rules, interaction logic, and so on.

Figma solves this with two mechanisms:
· MCP Server: lets the agent directly access your design system and file structure
· Skills: instruction sets written in Markdown that encode the team's design decisions and intent into agent-executable workflows

Opening a two-way Code ↔ Canvas channel. Figma provides two complementary tools:
· generate_figma_design: converts a live app's HTML into editable Figma layers (design catches up to code)
· use_figma: lets the agent edit or create new assets based on your design system (code catches up to design)

This enables true bidirectional sync: wherever the work starts, it can converge in Figma.

A community-driven Skills ecosystem. Figma is building the Skills ecosystem with the community; the first 9 Skills come from practitioners and cover:
· component generation (building Figma components from a codebase or JSON contracts)
· accessibility spec generation (screen-reader specifications)
· design-system alignment (automatically connecting existing designs to system components)
· design-token sync (two-way sync between code and Figma variables)
· multi-agent parallel workflows

Technical architecture highlights:
· Built on MCP: native Figma support means security and reliability are guaranteed by the platform
· Native Plugin API extensions: Code Connect, Figma Draw, FigJam, and more capabilities to come
· Self-healing loop: after generating a design, the agent can screenshot and compare, iterating against real structure (components, variables, Auto Layout) rather than pixel-level diffs alone
[media]
Figma@figma

Now you can use AI agents to design directly on the Figma canvas, with our new use_figma MCP tool and skills to teach them. Open beta starts today.

4 replies · 3 reposts · 11 likes · 2.9K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@dotey A clear distinction. My experience so far: a slash command is more like an explicit API, for when the user knows exactly what they want; a Skill is more like an implicit policy layer, good for packaging decision criteria, context completion, and multi-step flows together.
0 replies · 0 reposts · 0 likes · 58 views
宝玉
宝玉@dotey·
Q: When should you use a slash command, and when a Skill?

A: Slash commands are triggered explicitly by the user, e.g. /init.

Skills are invoked autonomously by the agent: it checks the Skills list to see whether the current task calls for a Skill, and if so which one.

Of course, a Skill can also be used as a slash command.
7 replies · 0 reposts · 20 likes · 5.5K views
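The distinction above maps directly onto how a Skill is declared: the agent decides on its own whether to invoke a Skill by reading its description, whereas a slash command fires only when the user types it. A minimal hypothetical SKILL.md sketch (the name, description, and steps are invented for illustration, not taken from any real skill):

```markdown
---
name: release-notes
description: Use when the user asks to turn merged changes into release notes.
---

When invoked:
1. Collect the merged PR titles since the last tag.
2. Group them by area: features, fixes, chores.
3. Emit a Markdown changelog section, one bullet per change.
```

The `description` field is what the agent matches against the current task; a vague description is the usual reason a Skill never triggers.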
LonelyInvestorX
LonelyInvestorX@webb_dever·
@nash_su The right direction. For agent scenarios, 4.7MB with zero runtime dependencies isn't just about speed; the bigger win is lower deployment friction, which makes it well suited to being embedded in all kinds of automation nodes.
0 replies · 0 reposts · 0 likes · 740 views
nash_su - e/acc
nash_su - e/acc@nash_su·
Up to 12x faster, 10x less memory! 🎉 opencli-rs is out: a complete Rust rewrite modeled on opencli. Feature-identical, up to 12x faster, up to 10x lighter on memory, only 4.7MB, zero runtime dependencies, open source, supported on all platforms.

It fetches information from 55+ sites, covering Bilibili, Twitter, Reddit, Zhihu, Xiaohongshu, YouTube, Hacker News, and more. The ideal companion for OpenClaw/Agent, giving your AI agent reach across the whole web.

The image shows speed comparisons for some typical scenarios.

Source: github.com/nashsu/opencli…
[media]
27 replies · 35 reposts · 225 likes · 16.9K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@doodlestein This is a strong prompt pattern. The hard-coded constants plus TODO/will/would pass tends to surface a surprising amount of hidden product debt before it turns into flaky behavior.
0 replies · 0 reposts · 1 like · 26 views
Jeffrey Emanuel
Jeffrey Emanuel@doodlestein·
Agent Coding Life Hack: This is like the coding agent equivalent of shining a blacklight on your clean-looking black shirt and seeing just how filthy it really is:

❯ First read ALL of the AGENTS.md file and README.md file super carefully and understand ALL of both! Then use your code investigation agent mode to fully understand the code and technical architecture and purpose of the project. THEN: I need you to carefully and completely look across the ENTIRE project for *anything* that is a hard-coded constant in the code which really should be dynamic in order to be correct and robust. Also look carefully for any "TODO" or the words "will" or "would" in a comment, indicative of unfinished code.

---

I was truly horrified by how much stuff this turned up. If you are, too, then try following it up with this one:

❯ OK, please fix absolutely ALL of that now. Keep a super detailed, granular, and complete TODO list of all items so you don't lose track of anything and remember to complete all the tasks and sub-tasks you identified or which you think of during the course of your work on these items!
[media]
8 replies · 4 reposts · 52 likes · 2.9K views
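A deterministic first pass over the same markers can be scripted before handing the sweep to an agent. A minimal TypeScript sketch (the function name, marker list, and sample input are my own for illustration; it only flags lines, leaving triage to a human or agent):

```typescript
// Flag lines that suggest unfinished work: TODO markers anywhere,
// or the hedging words "will"/"would" inside a comment.
function flagUnfinished(source: string): { line: number; text: string }[] {
  const hits: { line: number; text: string }[] = [];
  const commentRe = /\/\/.*|\/\*[\s\S]*?\*\//; // single-line comment match
  source.split("\n").forEach((text, i) => {
    const comment = text.match(commentRe)?.[0] ?? "";
    if (/\bTODO\b/.test(text) || /\b(will|would)\b/i.test(comment)) {
      hits.push({ line: i + 1, text: text.trim() });
    }
  });
  return hits;
}

const sample = [
  "const TIMEOUT_MS = 5000; // TODO: make configurable",
  'let region = "us-east-1"; // this will be read from env later',
  "return total;",
].join("\n");

console.log(flagUnfinished(sample)); // flags lines 1 and 2, not line 3
```

Unlike the prompt, this catches only the literal markers; the "hard-coded constant that should be dynamic" judgment still needs the model.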
LonelyInvestorX
LonelyInvestorX@webb_dever·
@manateelazycat This accounting is persuasive: amortizing AI cost directly over each sellable piece of software makes it much easier to judge soberly whether you're actually buying productivity. Seven products is already a remarkable output.
1 reply · 0 reposts · 1 like · 656 views
Andy Stewart
Andy Stewart@manateelazycat·
My personal AI spend from Spring Festival to now. That money has produced 7 commercial software products, at an average cost under 700 RMB each. Honestly more satisfying than buying a phone: a new phone makes you happy for a week at most, but with AI you can keep buying and stay happy, hahaha.
[media]
13 replies · 1 repost · 49 likes · 11.9K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@oran_ge Going from user feedback to a complete 1.0.5 fix, plus auto-update, is exactly the right cadence. Many people take vibe coding to mean one-shot generation; the genuinely hard part is the feedback loop that follows.
0 replies · 0 reposts · 0 likes · 58 views
huangyihe
huangyihe@huangyihe·
@webb_dever @newtype Repo: github.com/newtype-01/new… It includes both an integrated build and a standalone plugin. The integrated build bundles OpenCode, so it won't conflict with plugins like OMO and can support more features. The standalone version is just the OpenCode plugin on its own. I recommend the integrated build; features like this WeClaw one only exist there.
1 reply · 0 reposts · 1 like · 25 views
huangyihe
huangyihe@huangyihe·
newtype OS update: WeClaw is now integrated, which means you can operate newtype OS through WeChat. Install or update with: npm i -g @newtype-os/cli. Then run nt wechat setup and scan the QR code to bind your account. Finally, run nt wechat start and you're done. This feature automatically reads your model configuration and uses the model of Chief (the main agent) as the default.
[media]
3 replies · 0 reposts · 0 likes · 469 views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@libukai Really useful tool. Until now, every time I finished an article in Obsidian I'd paste it into X section by section; now that can be automated.
1 reply · 0 reposts · 0 likes · 99 views
李不凯正在研究
Have to show off again how good obsidian-to-x has become after the last two days of pricey Opus-powered optimization. 1. Reworked the upload mechanism for every element; uploads now run in the background, so you no longer have to sit watching the foreground and can do other things meanwhile. 2. Reworked the placeholder replacement for images and code, so the occasional incomplete-cleanup cases are finally gone. 3. Tuned how the prompts and code work together; even the none-too-bright GLM 5 can now execute it perfectly. As usual, it's uploaded to the skills directory of the awesome-agent-skills repo; testing and feedback welcome.
李不凯正在研究@libukai

x.com/i/article/2036…

9 replies · 14 reposts · 74 likes · 11K views
huangyihe
huangyihe@huangyihe·
@webb_dever @newtype I got stuck at exactly that point when I first used WeClaw. Only after asking an AI did I learn you still have to configure a default model. So when integrating it I made it read Chief's model directly, to save the hassle.
1 reply · 0 reposts · 0 likes · 17 views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@mattpocockuk TypeScript for workflow definition feels like the right choice here. Offline patch-back plus sandboxed agents is a strong combo because you keep reproducibility without losing host integration.
0 replies · 0 reposts · 0 likes · 4 views
Matt Pocock
Matt Pocock@mattpocockuk·
Working on a tool that orchestrates locally sandboxed coding agents in TypeScript
- Sandboxed in Docker
- 100% offline: commits made in the sandbox get patched back to the host
- Build complex workflows in TypeScript
- Claude, Codex, OpenCode
It's called Sandcastle
77 replies · 10 reposts · 409 likes · 29.5K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@evilcos The "mind-stamp plus hands and feet" metaphor is apt. The practice guide covers baseline and process, while the Skill front-loads pre-incident detection; together they look much more like a deployable agent security stack.
0 replies · 0 reposts · 1 like · 98 views
Cos(余弦)😶‍🌫️
SlowMist Agent Security Skill v0.1.2 is released, with a cleaner, more concise report template output: clawhub.ai/slowmist/slowm… github.com/slowmist/slowm…

To clarify: the "OpenClaw Minimal Security Practice Guide.md" we published at the start of the month is a security mind-stamp for OpenClaw, covering the before, during, and after phases of security: github.com/slowmist/openc…

The "SlowMist Agent Security Skill", by contrast, specifically strengthens the before phase, i.e. enhanced security detection.

The two don't conflict; they're complementary: one is the mind-stamp (the brain), the other is the Skill (the hands and feet).
SlowMist@SlowMist_Team

🛠️ Update: SlowMist Agent Security Skill v0.1.2 is now live! This release focuses on improving the report template output — making it more clean, concise, and easier to read for better security insights. A small update, but a meaningful step toward a smoother analysis experience. 🔗 Try it on ClawHub: clawhub.ai/slowmist/slowm… 🔗 GitHub: github.com/slowmist/slowm…

3 replies · 3 reposts · 28 likes · 10.4K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@felixrieseberg Default-off for Teams feels like the right rollout. Admin-gated activation is much better than surprising everyone with a new coworker surface in the sidebar.
0 replies · 0 reposts · 0 likes · 120 views
Felix Rieseberg
Felix Rieseberg@felixrieseberg·
Claude Dispatch, a way for you to talk to Claude Cowork & Claude Code on your computer, is now available on all Teams plans. It's off by default - if you don't see it in your sidebar, ask your team admin to turn it on.
29 replies · 9 reposts · 247 likes · 22.7K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@turingou What you're really buying is certainty about the "launch tax". Before you reach scale, skipping local webhook debugging and the odds and ends of payments noticeably shortens the path from idea to revenue.
0 replies · 0 reposts · 0 likes · 149 views
郭宇 guoyu.eth
郭宇 guoyu.eth@turingou·
The best thing about clerk turns out not to be the login features but their billing. Previously, every time I launched with Stripe I had to debug webhooks locally and fiddle for ages before going live. With clerk I pay a bit more in fees, but payments go live with no debugging at all. Before your product reaches scale you don't need to worry about any of that mess. Genuinely good design.
3 replies · 0 reposts · 25 likes · 7.4K views
LonelyInvestorX
LonelyInvestorX@webb_dever·
@xiaohu The "same machine, same thread, plus isolates" approach really does suit agents running short tasks. What matters more to me is that APIs are defined directly with TypeScript interfaces: it saves tokens, and it naturally aligns tool calls with implementation constraints.
0 replies · 0 reposts · 0 likes · 808 views
小互
小互@xiaohu·
For an AI agent to run code, it needs a sandbox. But containers start slowly and take hundreds of MB of memory, which doesn't hold up at consumer-agent scale.

Cloudflare has released Dynamic Worker Loader, an AI sandbox tool that replaces containers with V8 isolates:
· 100x faster startup (a few milliseconds vs hundreds)
· 10-100x less memory
· no concurrency cap; handles millions of requests
· same-machine, same-thread execution with zero network latency

It also solves another problem along the way: defining APIs with TypeScript interfaces uses about 80% fewer tokens than OpenAPI.

Three companion libraries shipped with it: code execution (codemode), automatic bundling (worker-bundler), and a virtual filesystem (shell).
[media]
11 replies · 18 reposts · 103 likes · 15.9K views
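The token-saving claim about TypeScript interfaces is easy to make concrete: the same declaration that type-checks the implementation is the text an agent reads, so there is no separate spec to keep in sync. A minimal sketch (the SearchTools interface and its methods are invented for illustration, not Cloudflare's actual API):

```typescript
// A tool API defined once as a TypeScript interface. An agent can be
// shown this source directly; it is far terser than the equivalent
// OpenAPI document, and the implementation is type-checked against it.
interface SearchTools {
  /** Full-text search over indexed documents; returns matching ids. */
  search(query: string, limit?: number): string[];
  /** Fetch one document body by id. */
  getDocument(id: string): string | undefined;
}

// Toy in-memory backing store for the sketch.
const docs = new Map<string, string>([
  ["a1", "isolates start in milliseconds"],
  ["b2", "containers need hundreds of MB"],
]);

const tools: SearchTools = {
  search: (query, limit = 10) =>
    [...docs.entries()]
      .filter(([, body]) => body.includes(query))
      .map(([id]) => id)
      .slice(0, limit),
  getDocument: (id) => docs.get(id),
};

console.log(tools.search("isolates")); // returns ["a1"]
```

The design point: doc comments on the interface double as the tool descriptions the agent consumes, so drift between spec and implementation becomes a compile error rather than a runtime surprise.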
LonelyInvestorX
LonelyInvestorX@webb_dever·
@rohanpaul_ai This is the part that feels underappreciated: better reasoning can come from better coordination, not just a bigger single model thinking longer. Shared workspace plus explicit controller state is a much cleaner systems story than hoping one long chain stays coherent.
0 replies · 0 reposts · 0 likes · 12 views
Rohan Paul
Rohan Paul@rohanpaul_ai·
This paper proposes a smarter way for LLMs to reason by splitting work across agents that share one workspace.

The problem is that even strong reasoning models still break on harder multi-step tasks because they do not carry out logic reliably all the way through.

The system, called BIGMAS, builds a small graph of specialist agents for each problem, rather than using one fixed chain every time. Every agent reads and writes through a shared workspace, while a separate controller sees the whole state and picks the next useful step.

The authors tested it on 3 puzzle tasks across 6 frontier models, covering arithmetic expression search and multi-step planning. It improved results on every model and task, with examples like 12% to 30% on Six Fives and 57% to 93% on Tower of London.

What matters is that the paper shows reasoning can improve from better system structure, not only from making a single model think longer.

Paper Link – arxiv.org/abs/2603.15371
Paper Title: "Brain-Inspired Graph Multi-Agent Systems for LLM Reasoning"
[media]
9 replies · 10 reposts · 46 likes · 2.9K views
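The coordination pattern the paper describes (specialists that only read and write a shared workspace, plus a controller that sees the whole state and picks the next step) can be sketched as a toy loop. This is not BIGMAS itself; the task, agents, and workspace shape below are invented for illustration:

```typescript
// Toy shared-workspace pattern: specialists only touch the workspace;
// a controller that sees the full state routes to the next agent.
// Task: reach a target sum by proposing and applying additions.
type Workspace = { target: number; total: number; proposal?: number };

const specialists = {
  // Proposer: suggests the largest step no bigger than the remainder.
  propose: (w: Workspace): Workspace => ({
    ...w,
    proposal: Math.min(10, w.target - w.total),
  }),
  // Applier: commits the pending proposal and clears it.
  apply: (w: Workspace): Workspace => ({
    ...w,
    total: w.total + (w.proposal ?? 0),
    proposal: undefined,
  }),
};

// Controller: inspects the whole workspace, picks the next useful step.
function controller(w: Workspace): keyof typeof specialists | "done" {
  if (w.total === w.target) return "done";
  return w.proposal === undefined ? "propose" : "apply";
}

function run(target: number): Workspace {
  let w: Workspace = { target, total: 0 };
  let step = controller(w);
  while (step !== "done") {
    w = specialists[step](w); // every step reads and writes shared state
    step = controller(w);
  }
  return w;
}

console.log(run(23).total); // → 23
```

The design point: no agent carries a private chain of thought forward; everything observable lives in the workspace, so the controller can always re-plan from the full state.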
LonelyInvestorX
LonelyInvestorX@webb_dever·
@Yuchenj_UW Strong point. A lot of current computer-use latency is really a tax from forcing general multimodal loops through interfaces built for humans. Once the same intent can hit an API or CLI directly, the whole stack gets cheaper and much more reliable.
0 replies · 0 reposts · 0 likes · 15 views
Yuchen Jin
Yuchen Jin@Yuchenj_UW·
I used Claude Computer Use/Dispatch yesterday. My feeling: It’s too damn slow!

Posting a tweet takes me ~5 seconds (once I have the content). Claude took 70 seconds.

Why? It controls the screen via a loop: take a screenshot → send to a huge remote multimodal model (opus 4.6) → decide actions (click, type, scroll) → take another screenshot → repeat. We’re basically forcing a large general model to operate a human UI.

Two things will happen in my opinion:
1. It is using a massive model (Opus 4.6) just to understand screens. That won’t last. Smaller, specialized models and eventually local models will handle most of this.
2. GUIs were built for humans. Almost all software will expose APIs/CLI for agents, so most actions won’t need to “use a computer” at all.
131 replies · 30 reposts · 558 likes · 43.7K views