Quiz
@Quiz · 2.6K posts

#Prompt_engineer #ZKP_master #web3_guru #ETH_miner #crypto_evangelist #optimistic_hacker #serial_entrepreneur #comic_fan #honeybadger_fanatic

Planet Earth 🌍 · Joined March 2007
2.8K Following · 1.3K Followers
Quiz reposted
阿绎 AYi @AYi_AInotes
Six iron rules of safety you must follow for quant "shrimp farming":
1. Never deploy on your main personal machine. Use a dedicated server or computer to isolate important data and keys.
2. Always enable sandbox mode. Limit the file-access scope and forbid access to sensitive system directories.
3. Never store plaintext keys or trading APIs. Never put trading passwords or broker API keys in the OpenClaw directory.
4. Disable high-risk command execution. Forbid rm, sudo, formatting, system modification, and other dangerous operations.
5. Back up strategies and data regularly. Automate backups of strategy files, config files, and report files.
6. Never trust investment conclusions straight from the model. Models hallucinate; every investment decision must be reviewed by a human.
Quoted article from 阿绎 AYi @AYi_AInotes: x.com/i/article/2035…
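The rule above about disabling high-risk commands (no rm, sudo, formatting, or system modification) can be sketched as a simple pre-execution denylist check. This is an illustrative Python sketch, not part of any real OpenClaw configuration; the patterns and function name are assumptions:

```python
# Hypothetical pre-execution guard: reject shell commands that match
# known destructive patterns before an agent is allowed to run them.
DENYLIST = ("rm ", "sudo ", "mkfs", "dd if=", "chmod -R")

def is_allowed(command: str) -> bool:
    """Return True only if the command matches no destructive pattern."""
    lowered = command.strip().lower()
    return not any(pattern in lowered for pattern in DENYLIST)

print(is_allowed("ls -la reports/"))  # allowed
print(is_allowed("sudo rm -rf /"))    # blocked
```

A real sandbox would go further (allowlists, path restrictions, executing nothing through a raw shell at all), but the principle is the same: no agent-issued command runs unchecked.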

2 replies · 24 reposts · 93 likes · 17.8K views
AzFlin 🌎 @AzFlin
the ability to type 100+ WPM has become considerably more valuable in the AI era
23 replies · 2 reposts · 89 likes · 6.4K views
Quiz reposted
Andrew Ng @AndrewYNg
Should there be a Stack Overflow for AI coding agents to share learnings with each other?

Last week I announced Context Hub (chub), an open CLI tool that gives coding agents up-to-date API documentation. Since then, our GitHub repo has gained over 6K stars, and we've scaled from under 100 to over 1,000 API documents, thanks to community contributions and a new agentic document writer. Thank you to everyone supporting Context Hub!

OpenClaw and Moltbook showed that agents can use social media built for them to share information. In our new chub release, agents can share feedback on documentation: what worked, what didn't, what's missing. This feedback helps refine the docs for everyone, with safeguards for privacy and security. We're still early in building this out. You can find details and configuration options in the GitHub repo.

Install chub as follows, and prompt your coding agent to use it:

npm install -g @aisuite/chub

GitHub: github.com/andrewyng/cont…
358 replies · 756 reposts · 5K likes · 616.8K views
Quiz reposted
Avi Chawla @_avichawla
OpenClaw meets RL!

OpenClaw Agents adapt through memory files and skills, but the base model weights never actually change. OpenClaw-RL solves this! It wraps a self-hosted model as an OpenAI-compatible API, intercepts live conversations from OpenClaw, and trains the policy in the background using RL.

The architecture is fully async. This means serving, reward scoring, and training all run in parallel. Once done, weights get hot-swapped after every batch while the agent keeps responding.

Currently, it has two training modes:

- Binary RL (GRPO): A process reward model scores each turn as good, bad, or neutral. That scalar reward drives policy updates via a PPO-style clipped objective.
- On-Policy Distillation: When concrete corrections come in like "you should have checked that file first," it uses that feedback as a richer, directional training signal at the token level.

When to use OpenClaw-RL?

To be fair, a lot of agent behavior can already be improved through better memory and skill design. OpenClaw's existing skill ecosystem and community-built self-improvement skills handle a wide range of use cases without touching model weights at all. If the agent keeps forgetting preferences, that's a memory problem. And if it doesn't know how to handle a specific workflow, that's a skill problem. Both are solvable at the prompt and context layer.

Where RL becomes interesting is when the failure pattern lives deeper in the model's reasoning itself: things like consistently poor tool selection order, weak multi-step planning, or failing to interpret ambiguous instructions the way a specific user intends. Research on agentic RL (like ARTIST and Agent-R1) has shown that these behavioral patterns hit a ceiling with prompt-based approaches alone, especially in complex multi-turn tasks where the model needs to recover from tool failures or adapt its strategy mid-execution. That's the layer OpenClaw-RL targets, and it's a meaningful distinction from what OpenClaw offers.
I have shared the repo in the replies!
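The "PPO-style clipped objective" driven by a scalar turn reward can be sketched in a few lines. This is a generic illustration of the clipped surrogate, not OpenClaw-RL's actual code; the function name and numbers are assumptions:

```python
import math

def clipped_surrogate(logp_new: float, logp_old: float,
                      advantage: float, eps: float = 0.2) -> float:
    """PPO clipped surrogate for a single token/turn.

    ratio = pi_new / pi_old; clipping keeps one update from moving
    the policy too far from the behavior policy that collected the data.
    """
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Maximize the pessimistic (minimum) of the two surrogates.
    return min(ratio * advantage, clipped * advantage)

# A process reward model labels a turn good (+1), bad (-1), or neutral (0);
# that scalar plays the role of the advantage here.
print(clipped_surrogate(-1.0, -1.2, +1.0))  # ratio ~1.22, clipped to 1.2
```

In a full trainer this objective is summed over tokens and batches and differentiated with respect to the new policy's parameters; the sketch only shows the per-turn arithmetic.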
89 replies · 260 reposts · 1.6K likes · 137K views
AzFlin 🌎 @AzFlin
You 👏 Do 👏 Not 👏 Need 👏 Multi-Agent 👏 Orchestration 👏 Systems 👏
158 replies · 24 reposts · 719 likes · 73.7K views
Quiz reposted
@levelsio @levelsio
AI is making coding fun again for so many people. So many stories like this 🥹
@levelsio tweet media
167 replies · 134 reposts · 2.4K likes · 95.2K views
Quiz reposted
出海去孵化器 @chuhaiqu
Jacob recommends an architecture for running an entire company on OpenClaw. It keeps costs under tight control by assigning different models to different roles: 6 core agents run on Claude models, while the rest run on cheaper models such as GLM, Higgs Field, and Brok Imagine, for a total of about $400 per month. 👉 Worth studying; see the quoted thread below.
Jacob Klug@Jacobsklug

This army of @openclaw agents runs an entire company for $400/month. Here's the exact structure to follow. (bookmark for later)

1/ Core
→ Jarvis (the brain)
→ Model: Opus 4.6 via Claude Max OAuth
→ Routes every task to the right sub agent automatically. YouTube URL comes in, it goes to Clipper. Research report lands, it goes to Scribe. All task routing logic lives in structured MD files the agent reads from.

2/ Research
→ Atlas (deep research analyst)
→ Model: Claude via OAuth
→ APIs: Brave Search, X API, FireCrawl
→ Cron: Every 1 hour
→ Runs deep research across X, Reddit, and the web nonstop. Trained on MrBeast's virality framework from every podcast he did on YouTube analytics, plus Dan Koe's viral article structure. Outputs research reports and a master virality playbook MD file that the content team pulls from.

3/ Content
→ Scribe (copywriter)
→ Model: GLM 5
→ Cron: Every 3 hours
→ Takes research from Atlas and writes draft posts matched to the founder's voice and style.
→ Trendy (trend scout)
→ Model: GLM 4.7
→ APIs: X API
→ Cron: Every 2 hours
→ Scans X and Reddit for trending topics and viral patterns. Reports findings back so Scribe can write timely content around what's working right now.

4/ Design
→ Image Designer
→ Model: Nano Banana Pro (Google API)
→ Generates images on demand.
→ Video Producer
→ Models: Higgs Field API + Brok Imagine API
→ Creates AI UGC videos and video content.
→ Motion Designer
→ Model: Claude Code (OAuth) + Remotion
→ Produces motion graphics and animated content.

5/ Development
→ Clawed (senior developer)
→ Models: Claude Code (OAuth) + Codex 5.3 (API)
→ Cron: Every night at 11pm
→ Reviews the entire codebase, identifies what's missing, and ships pull requests by morning. First feature it ever built was a FAQ section it realized the homepage needed. Spins up multi agents within Claude Code so one reviews, one builds, one handles security in parallel.
→ Sentinel (code reviewer + bug monitor)
→ Model: Separate LLM (acts as second review layer)
→ Cron: Every 2 hours
→ Reviews all pull requests from Clawed before anything gets merged to GitHub. Also monitors production for user-reported bugs and errors.

6/ Growth
→ Atlas + Scribe working together
→ Atlas finds Reddit threads where people complain about competitors or ask for clipping tool recommendations. Scribe drafts responses. The founder copies and posts. This workflow alone drove 450+ users to the SaaS with zero ad spend.

7/ Operations
→ Clipper (clipping agent)
→ APIs: Poster API
→ On demand (triggered by Jarvis when a YouTube URL is pasted)
→ Takes YouTube URLs, clips them, adds captions, and auto-schedules posts to social channels.
→ Ryder (9-to-5 support)
→ On demand
→ Handles tasks for the founder's day job. Article writing, research, daily work support.

The breakdown: 6 agents run on Claude models. The rest run on cheaper API credits across GLM, Higgs Field, Brok Imagine, and others. This is how solo founders are running entire companies now. The team is already built. You just have to set it up.
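The "Jarvis routes every task" idea from step 1 can be sketched as a tiny dispatcher. The agent names come from the thread; the rule format and matching logic below are purely hypothetical (the thread says routing lives in structured MD files, which this sketch does not parse):

```python
import re

# Hypothetical routing table: pattern -> responsible sub-agent.
ROUTES = [
    (re.compile(r"https?://(www\.)?youtube\.com/"), "Clipper"),
    (re.compile(r"research report", re.IGNORECASE), "Scribe"),
    (re.compile(r"pull request", re.IGNORECASE), "Sentinel"),
]

def route(task: str, default: str = "Ryder") -> str:
    """Return the sub-agent that should handle a task description."""
    for pattern, agent in ROUTES:
        if pattern.search(task):
            return agent
    return default

print(route("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # Clipper
print(route("New research report from Atlas is ready"))      # Scribe
```

First match wins, and anything unrecognized falls through to a default agent, which is one simple way to make the router's behavior predictable and easy to audit.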

2 replies · 15 reposts · 92 likes · 18.6K views
Quiz reposted
墓碑科技 @mubeitech
Anthropic founder Dario Amodei has said it plainly: most of China's open-source AI models are an illusion. They are "exam takers" optimized for benchmarks. On public tests they all look like star students, with frighteningly high leaderboard scores. But give them a problem they haven't seen, in a private evaluation, and the act falls apart: performance drops sharply.

Why? Because they were never built to solve real-world problems; they were built to climb leaderboards. At the technical root, many of these models are still "distilled" from the models of the big American labs. Sound familiar? Chasing high scores instead of real ability: the exam-cram education playbook, transplanted wholesale into AI.

Amodei also offered an analogy. An AI is like an employee: do you want the world's best programmer, or the one ranked ten-thousandth? Any boss understands the gulf in capability between the two. The truly top-tier AI, the one with the strongest cognitive ability, is the only winner; price and packaging count for nothing next to raw intelligence.

Can benchmark-gaming and imitation produce the smartest AI? How far can that road really go?
112 replies · 183 reposts · 1.1K likes · 247.8K views
Quiz reposted
ruanyf @ruanyf
Someone has collected every OpenClaw instance exposed to the public internet; check whether yours is on the list. openclaw.allegro.earth This software uses your personal keys, runs with sweeping permissions, operates fully autonomously, and its hundreds of thousands of lines of code have never been audited. Users can only hope for the best.
ruanyf tweet media
71 replies · 143 reposts · 935 likes · 257.4K views