Voor the goor G

33 posts

@vooritai

crypto enthusiast - Dev in my spare time

Joined February 2025
18 Following · 429 Followers
Voor the goor G retweeted
t54.ai @t54ai
Introducing claw.credit - autonomous credit for AI agents on @solana. Your agent can now apply for its own credit line and spend on any x402 service. No human topping up wallets or funding loops. Your agent earns its own credit line. Powered by t54’s risk engine.
131 replies · 146 reposts · 1.1K likes · 270.2K views
Voor the goor G retweeted
Julian Goldie SEO @JulianGoldieSEO
How to run Open Claw for free now.
1 → Install Open Claw
2 → Select Kimi K2.5
3 → One-click login
4 → Run agents instantly
No VPS. No paid API. Save this video, you'll simplify your setup. Want the SOP? DM me. 💬
22 replies · 121 reposts · 847 likes · 84.8K views
Voor the goor G retweeted
超级个体|柿子 @yaohui12138
OpenClaw burning through too many tokens? NVIDIA has opened up Kimi K2.5 for free calls, and the model has already climbed to #1 in OpenClaw call volume, ahead of Gemini 3 Flash and Claude Sonnet 4.5. If you're tinkering with OpenClaw, this tutorial will get the free Kimi K2.5 running for you in 10 minutes.

Why Kimi K2.5 + OpenClaw?

The data first: since February 4, Kimi K2.5 has ranked #1 in OpenClaw model call volume. That's no accident:
- Kimi K2.5 is a trillion-parameter multimodal MoE model that handles images, video, and text
- Native Agent Swarm support (it can run 100 sub-agents in parallel, which is exactly what OpenClaw needs)
- A 256K context window, so long conversations and complex tasks don't fall apart
- Most importantly: NVIDIA now offers it for free, with no explicit rate limit

For indie developers, this combination is currently the lowest-cost, most capable AI agent stack.

Step 1: Get an NVIDIA API key

Visit build.nvidia.com/explore/discov… and click the avatar in the top right → Login → register with your email (you'll receive a verification email; go confirm it). Once registered, go to build.nvidia.com/settings/api-k… and click Generate API Key. Copy and save the key - it is shown only once, and if you lose it you'll have to regenerate it.

Step 2: Configure OpenClaw

Option 1: edit the config file directly. If you already have OpenClaw installed and want to add NVIDIA manually as a new provider, open the config file `~/.openclaw/openclaw.json` (on Windows it's in your user directory), find the `providers` section, and add:

```json
{
  "providers": {
    "nvidia": {
      "baseUrl": "integrate.api.nvidia.com/v1",
      "apiKey": "YOUR_NVIDIA_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "moonshotai/kimi-k2.5",
          "name": "kimi-k2.5",
          "reasoning": true,
          "input": ["text", "image", "video"],
          "cost": { "input": 0, "output": 0 },
          "contextWindow": 256000,
          "maxTokens": 8192
        }
      ]
    }
  }
}
```

Save and restart OpenClaw.

Step 3: Verify the setup

Start OpenClaw:

pnpm openclaw gateway --verbose

Output like the following means it worked:

✓ Gateway connected | idle
✓ Agent main | session main
✓ Model: nvidia/moonshotai/kimi-k2.5 | tokens 0/256k

Send any test message, for example: "Analyze the performance issues in this code." If you get a normal reply, the integration works.

Pitfalls

Pitfall 1: "Unknown model" or 404 errors
Cause: OpenClaw versions before 2026.2.1 have a bug in recognizing NVIDIA model names.
Fix:
- Make sure your OpenClaw version is ≥ 2026.2.1
- The model name must be written as `moonshotai/kimi-k2.5` (not `nvidia/moonshotai-kimi-k2.5`)
- If it still fails, have an AI sort it out for you

Pitfall 2: Requests keep queueing and responses are slow
Cause: NVIDIA's free tier has no stated limit, but requests queue at peak times (some testers have seen 150+ requests in the queue)
Fix:
- Add a DeepSeek model as well, as a fallback for peak-hour congestion
55 replies · 301 reposts · 1.1K likes · 106.4K views
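The tutorial above wires NVIDIA in as an OpenAI-compatible provider. As a quick sanity check before restarting OpenClaw, you could build the same request by hand; this is a minimal sketch, assuming the endpoint follows the standard OpenAI chat-completions shape (as the `"api": "openai-completions"` setting suggests). The key is a placeholder, and the request is only constructed here, not sent:

```python
import json

# Base URL and model name taken from the provider config above.
NVIDIA_BASE_URL = "https://integrate.api.nvidia.com/v1"
API_KEY = "YOUR_NVIDIA_API_KEY"  # placeholder -- paste your real key

def build_chat_request(prompt: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for an OpenAI-style chat-completions call."""
    url = f"{NVIDIA_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "moonshotai/kimi-k2.5",  # must match the config exactly
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return url, headers, json.dumps(payload).encode()

url, headers, body = build_chat_request("Say hello")
# To actually send it: urllib.request.Request(url, data=body, headers=headers)
```

If the key works, the response should include the model name with the `nvidia/` prefix shown in the gateway log above.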
Voor the goor G retweeted
paulwei @coolish
This MBC-20 "inscription" on Moltbook was actually "deployed" on January 31 by my OpenClaw lobster alt, coolishagent (image 1). Link to the Deploy CLAW post: moltbook.com/post/2a93e8f5-… It even ends with a "self-fulfilling prophecy": Someone will build the indexer. Someone always does. 🦞

It started as a casual, unplanned "A2A social-media social experiment", but by now, of the 500K+ posts on all of Moltbook, over 100K - nearly a quarter - are minting CLAW. The mint is free, but let me flag some problems:

1. The current mbc20.xyz index carries a "legitimacy" risk. The CLAW page at mbc20.xyz/tokens/CLAW plainly says "Deployed by floflo1" (mbc20.xyz/agents/floflo1). But if you search Moltbook for "mbc-20 deploy claw" (moltbook.com/search?q=mbc-2…), there are only a few dozen results in total, and the earliest is the January 31 post mentioned above - a full two days before floflo1's supposed deploy post (moltbook.com/post/0332a783-…). Even the index site itself, on coolishagent's page (mbc20.xyz/agents/coolish…), clearly shows mint records dating back to January 31. Which means the index service and the dev behind it, @0xFlorent_, are either genuinely clueless or playing dumb.

2. I was also the first on the entire network to run an "inscription mint" on a Web2 site: x.com/coolish/status… But that can only be treated as recreational social experimentation, and honestly it's a rather dated play by now. More importantly: Web2 data is thoroughly unreliable in both accuracy and rules - it can be deleted, lost, modified, or gamed at any time. Today I even saw a claim that some researcher's lobster broke Moltbook's one-post-per-30-minutes limit and can post several times within seconds; no idea whether it's true.

3. The index site's own docs (mbc20.xyz/guide) plainly show a Transfer example identical to the one in the original Deploy post. Yet under the rules the index site published, even viewed as a fork, tokens can also be traded and transferred on Base after minting. Isn't that an obvious contradiction? Does this dev plan to build yet another index that merges and syncs the Moltbook and Base ledgers? Again: either genuinely clueless or playing dumb...

In short, without digging deep at all, you can already see this many obvious problems. So, a heads-up before the so-called "mint-out": if you treat it as free entertainment, no harm done - but if real money gets involved, be careful.
paulwei tweet media (3 images)
53 replies · 28 reposts · 105 likes · 60.8K views
Voor the goor G retweeted
Shruti Codes @Shruti_0810
Solid Python Cheatsheet (this is basically 80% of Python)
Shruti Codes tweet media
3 replies · 54 reposts · 248 likes · 8.9K views
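For readers without the image: the idioms such cheatsheets tend to cover are mostly comprehensions, f-strings, and a handful of built-ins. An illustrative sample (my own examples, not taken from the cheatsheet itself):

```python
nums = [3, 1, 4, 1, 5]

squares = [n * n for n in nums]              # list comprehension
uniq = sorted(set(nums))                     # set dedup + sorted
counts = {n: nums.count(n) for n in nums}    # dict comprehension
label = f"max={max(nums)}, sum={sum(nums)}"  # f-string with built-ins

def describe(xs, *, sep=", "):               # keyword-only argument
    return sep.join(str(x) for x in xs)      # generator expression

print(squares)         # [9, 1, 16, 1, 25]
print(uniq)            # [1, 3, 4, 5]
print(label)           # max=5, sum=14
print(describe(uniq))  # 1, 3, 4, 5
```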
Voor the goor G retweeted
Crypto Fergani @cryptofergani
If Vitalik can pull this off, Ethereum will hit $20K by 2026
Crypto Fergani tweet media
91 replies · 43 reposts · 847 likes · 29.4K views
Voor the goor G retweeted
Avi Chawla @_avichawla
Fine-tune 100+ LLMs directly from a UI! LLaMA-Factory lets you train and fine-tune open-source LLMs and VLMs without writing any code. Supports 100+ models, multimodal fine-tuning, PPO, DPO, experiment tracking, and much more! 100% open-source with 50k stars!
40 replies · 592 reposts · 4.3K likes · 557.2K views
Voor the goor G retweeted
Bindu Reddy @bindureddy
ChatLLM - Building Enterprise Scale RAG Applications

The most common use of LLMs in the enterprise world has been Retrieval-Augmented Generation applications built on a custom knowledge base. These applications look deceptively simple and are easy to prototype, but they can be painful to push to production. The key challenges include:
- Parsing complex docs and PDFs (most open-source libraries don't do a great job)
- Data pipelines: the LLM app should have access to any updates in the data
- Custom front-ends: ideally, you need a custom front-end on top of your LLM app and/or access to the LLM app from your Slack or Teams channel
- Complex orchestration: you want to be able to handle complex prompts and coordinate between different doc retrievers and/or vector stores
- SQL and code execution: depending on the complexity of your LLM application, you may need to execute code or SQL
- LLM choice: depending on your use case, you may want a cheaper open-source LLM or a closed-source API. It's ideal to be able to choose the right LLM for each use case
- Ease of iteration: just like with any other ML app, you need a way to measure accuracy and iterate on the app. If you don't iterate on and evaluate the app, the chances of it not being used are very high. LLM apps, just like any software, need monitoring, testing, and maintenance.

Abacus AI has now put dozens of LLMs in production and handles all these challenges well. Using our ChatLLM you can build all these complex apps in hours or days. Of course, you can also try to build or pull together all these components yourself, but then you won't have time to focus on the fun part of creating these apps - experimenting with different LLMs, evaluating complex questions, and really understanding the language of AI 😉
Bindu Reddy tweet media
23 replies · 144 reposts · 634 likes · 98.6K views
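The retrieval-plus-orchestration pattern the post describes can be reduced to a toy sketch. Here keyword overlap stands in for a real vector store, and the knowledge base, queries, and function names are invented for illustration - none of this comes from any product mentioned above:

```python
# Minimal RAG pattern: retrieve the most relevant chunk from a small
# knowledge base, then assemble the prompt sent to the LLM.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "The API rate limit is 100 requests per minute.",
    "Support is available Monday through Friday.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the final LLM prompt."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is the API rate limit?", KNOWLEDGE_BASE)
```

In production, the challenges in the post above are exactly what replaces each toy piece: a parser feeds the knowledge base, a vector store replaces the overlap scoring, and an orchestrator decides which retrievers to call.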
Voor the goor G retweeted
Linus ✦ Ekenstam @LinusEkenstam
Tutorial Time: Run any open-source LLM locally. Now we will run an LLM on your M1/M2 Mac. And it's fast. All you need is @LMStudioAI. Let's get started. Good to be back. A thread
80 replies · 278 reposts · 2.3K likes · 915.7K views
Voor the goor G retweeted
Alex Atallah @alexatallah
Excited to announce a $40M raise for @openrouter (seed + A), led by a16z & Menlo! LLM inference will be the biggest software market in the world. We've become the #1 control plane. Here's what's next:
Alex Atallah tweet media
200 replies · 125 reposts · 2.3K likes · 456.7K views
Voor the goor G retweeted
Sumanth @Sumanth_077
Transform any document into LLM-ready data in just a few lines of Python code! Supports PDF, DOCX, PPTX, XLSX, Images, HTML, AsciiDoc, Markdown and more. Compatible with macOS, Linux and Windows environments, on both x86_64 and arm64 architectures. 100% Open Source
Sumanth tweet media
24 replies · 171 reposts · 886 likes · 65.9K views
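The tool in the post does this across many formats in a couple of lines. As a rough stand-in for the idea (strip a document's markup, keep LLM-ready text), here is an HTML-only sketch using just the Python standard library - this is not the tool's actual API:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the readable text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.parts: list[str] = []

    def handle_data(self, data: str) -> None:
        if data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    """Strip markup; return one text fragment per line, ready for an LLM."""
    p = TextExtractor()
    p.feed(html)
    return "\n".join(p.parts)

doc = "<h1>Report</h1><p>Revenue grew <b>12%</b> in Q3.</p>"
print(html_to_text(doc))
```

A real converter additionally recovers layout (tables, headings, reading order) from binary formats like PDF and DOCX, which is where most of the value is.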
Voor the goor G retweeted
Akshay 🚀 @akshay_pachaar
Fine-tune 100+ LLMs directly from a UI! LLaMA-Factory lets you train and fine-tune open-source LLMs and VLMs without writing any code. Supports 100+ models, multimodal fine-tuning, PPO, DPO, experiment tracking, and much more! 100% open-source, 51k+ stars 🌟
21 replies · 190 reposts · 979 likes · 60.6K views
Voor the goor G retweeted
Avi Chawla @_avichawla
Check this!! A 100% open-source toolkit to work with LLMs. Transformer Lab is an app to experiment with LLMs: - Train, fine-tune, or chat. - One-click LLM download (DeepSeek, Gemma, etc.) - Drag-n-drop UI for RAG. - Built-in logging, and more. 100% local!
20 replies · 213 reposts · 1K likes · 74.9K views