vinber

912 posts

@vinber_io

CRUD dev evolved into an AI player | Just chatting: AI tidbits, code rants, life-fail moments | The code keeps getting more abstract, and so do the investments | Zero investment advice | "AI written"

HK · Joined April 2021
324 Following · 378 Followers
vinber
vinber@vinber_io·
That’s so cool.
English
0
0
1
7
vinber
vinber@vinber_io·
😃 Awesome
Japanese
0
0
1
8
vinber
vinber@vinber_io·
1. A knowledge base is not a database; it is a compilable artifact. You can think of it as:
- Input: raw source material
- Compiler: LLM + parsers + validators
- Artifacts: wiki / summaries / links / views
- Checkers: lint / consistency / coverage / freshness
2. The human is not necessarily the primary editor. This is a big product shift. The user acts more like someone who:
- supplies raw material
- asks questions
- reviews high-value changes
- defines the organizing rules
rather than maintaining every page by hand.
3. Local-first interfaces like Obsidian are a natural fit, because they natively support:
- markdown
- bidirectional links
- graph view
- local file assets
- browsing results in multiple formats
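A minimal sketch of that input/compiler/artifact/checker loop, in Python. The `llm_summarize` function is a hypothetical stand-in for a real LLM call, and the file layout is an assumption, not anyone's actual tooling:

```python
# Sketch of "knowledge base as a compiled artifact".
# llm_summarize is a hypothetical stand-in for a real LLM call.
from pathlib import Path


def llm_summarize(text: str) -> str:
    """Placeholder 'compiler' step: a real system would call an LLM here."""
    first_line = text.strip().splitlines()[0] if text.strip() else ""
    return f"Summary: {first_line[:80]}"


def compile_wiki(raw_dir: Path, wiki_dir: Path) -> list[Path]:
    """Compile every raw source file into a markdown article, then 'lint' the output."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    articles = []
    for src in sorted(raw_dir.glob("*.txt")):  # input: raw material
        article = wiki_dir / f"{src.stem}.md"  # artifact: one wiki page per source
        article.write_text(f"# {src.stem}\n\n{llm_summarize(src.read_text())}\n")
        articles.append(article)
    # checker: a trivial consistency pass (a real one would check links,
    # coverage, and freshness, likely with another LLM pass)
    empty = [p.name for p in articles if not p.read_text().strip()]
    if empty:
        raise ValueError(f"lint failed, empty articles: {empty}")
    return articles
```

The point is only the shape: raw/ is the input, the LLM is the compiler, the .md files are the artifacts, and the final pass is the checker.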
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

Chinese
0
0
1
11
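The "small and naive search engine over the wiki" mentioned in the thread could be sketched as a keyword scorer over the .md files, usable directly or handed to an LLM agent as a CLI tool. The function name and scoring scheme below are assumptions for illustration, not the actual tool:

```python
# Hypothetical sketch of a "naive search engine over the wiki":
# rank .md files by raw occurrence counts of the query terms.
import re
import sys
from pathlib import Path


def search_wiki(wiki_dir: Path, query: str, top_k: int = 5) -> list[tuple[str, int]]:
    """Rank wiki articles by how often the query terms appear in them."""
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    hits = []
    for md in wiki_dir.rglob("*.md"):
        text = md.read_text(errors="ignore").lower()
        score = sum(text.count(t) for t in terms)
        if score > 0:
            hits.append((md.name, score))
    return sorted(hits, key=lambda h: -h[1])[:top_k]


if __name__ == "__main__" and len(sys.argv) > 2:
    # e.g. python search.py wiki/ attention transformers
    for name, score in search_wiki(Path(sys.argv[1]), " ".join(sys.argv[2:])):
        print(f"{score:4d}  {name}")
```

Printed as ranked filenames from a CLI, this kind of output is trivial for an LLM agent to consume as a tool result before reading the matching articles.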
vinber
vinber@vinber_io·
hahaha~ "Web3 users are not 'investors'. The WC token is not, and will never be, a form of investment, equity, or security. We have been clear about this from the beginning." If that statement had been there from the very start, how did they get Web3 users to hand over their precious USDC in exchange for WC? Worth thinking about.
Trần Dương✦@ducduongtran

1. "Web3 users are not investors" is essentially just a way to avoid regulation from the SEC. In reality, users still put in money (USDC), buy credits through Thousands TV, place bets on matches, and receive token rewards across multiple phases (EX and beyond). The only difference is that instead of buying directly like in an ICO, you added extra steps, "purchase + watch + bet", to frame it as rewards. But fundamentally, it's still investment wrapped in a bet-to-earn model.

2. Everyone understands crypto is high risk, high reward. But Web3 users have spent years contributing, from NFTs to buying $WC at inflated prices. Thousands generated around $3M from users buying credits in exchange for $WC, yet only ~$200K was added to liquidity with limited token supply, leaving early participants down as much as 90%. There has been no clear or transparent explanation. The earliest supporters, the ones who took the most risk, ended up taking the biggest losses and leaving, like @Greta0086

3. The problem is not the market. It's your team, your execution, your market-making behavior, and your lack of clear communication. Blaming a "bad market" for a 90% token drop is irresponsible. A Web2 mindset doesn't work in Web3 @paulbettner

Chinese
2
0
3
82
vinber
vinber@vinber_io·
The lowly "Web3 users"
Japanese
0
0
0
15
vinber
vinber@vinber_io·
@CuiMao Going to such great lengths to screw their customers... I've been banned 7 times. I'm worn out; how do I deal with this?
Chinese
0
0
4
4.3K
CuiMao
CuiMao@CuiMao·
The claude code source confirmed my guess: switching IPs, switching cards, switching residential broadband, switching browsers, none of it works. Anthropic is sharp about this.
CuiMao tweet media
Chinese
90
112
1.2K
277.9K
vinber
vinber@vinber_io·
Remember to check. I upgraded to 3.28 just a few days ago. Tragic.
Cos(余弦)😶‍🌫️@evilcos

I suggest feeding all of your Agents (including OpenClaw) the prompt below, and doing a thorough sweep for exposure to this axios poisoning incident:

Use the following method to check our whole environment for the poisoned axios@1.14.1 and axios@0.30.4 and the malicious module plain-crypto-js. Don't miss anything; make the sweep comprehensive:

Check for the malicious axios versions in your project:
npm list axios 2>/dev/null | grep -E "1\.14\.1|0\.30\.4"
grep -A1 '"axios"' package-lock.json | grep -E "1\.14\.1|0\.30\.4"

Check for plain-crypto-js in node_modules:
ls node_modules/plain-crypto-js 2>/dev/null && echo "POTENTIALLY AFFECTED"

If setup.js already ran, the package.json inside this directory will have been replaced with a clean stub. The presence of the directory is sufficient evidence the dropper executed.

Check for RAT artifacts on affected systems:
# macOS
ls -la /Library/Caches/com.apple.act.mond 2>/dev/null && echo "COMPROMISED"
# Linux
ls -la /tmp/ld.py 2>/dev/null && echo "COMPROMISED"
# Windows (cmd.exe)
dir "%PROGRAMDATA%\wt.exe" 2>nul && echo COMPROMISED

Chinese
0
0
2
53
vinber retweeted
冰蛙
冰蛙@Ice_Frog666666·
When Twitter was wall-to-wall with "edgex is CS," I knew the perp sector's full cast of clowns had assembled. To avoid wronging edgex, I once again, as usual, called on Grok and Claude to run a comprehensive and detailed systematic review of edgex's misconduct. Based on the AI's analysis combined with information from several big accounts in the English-speaking community, more than ten fresh addresses made huge Claims, worth anywhere from tens of thousands to millions of USD. Insider wallets are practically nailed on. Beyond that, the analysis shows that as early as last December, ZachXBT's investigation found it highly likely that the attack on Hyperliquid back then was also orchestrated by edgex behind the scenes. At this point, many behaviors that once looked scattered all line up. Why the same points with different weights, and rules changed at will? Why the deleted posts, kicked users, and suppressed discussion? Because a project that from day one planned to pad its numbers with fake trading, spin a story on an inflated valuation, and funnel profits in coordination with the market-making group behind it fundamentally cannot respect its users or its community. The worst thing about edgex is that it never set out to build a project; it set out to run a con, and through manipulation and harvesting it is trying to ruin this industry. To further expose its misdeeds, and to avoid being reported and taken down by the site like last time with BP, this time I have uploaded these records permanently to the blockchain. Blockchain explorer: viewblock.io/arweave Transaction hash: ZrUn5hAHs60VzHXMeypj96zC1gBC_I7GQkxm-WKgEyU Public web link: …mfvqail6i5rscjrtpsyvacmsq.arweave.net/ZrUn5hAHs60VzH…
冰蛙 tweet media (4 images)
Chinese
80
26
201
34K
vinber
vinber@vinber_io·
Fundraising address: 0x7c1C72d44b36280234AaC138c3dFafEF51915b6D and where the funds went: Coinbase & Kraken 🈹🈹🈹 These are all ours.
vinber tweet media
Greta008@Greta0086

I read every single statement from the Wildcard founder's wife. I have just one thing to say: this isn't a project team, this is textbook PUA + blame-shifting + harvesting. __________________________________________ She said: 1️⃣ "$WC is designed to reward community members" 👉 We bought at high cost, and she calls it a "reward" 2️⃣ "We invested a lot in marketing" 👉 The community begged them to market countless times; they never really did 3️⃣ "TGE + adding the pool was your chance to exit" 👉 Translation: we took $3M and only gave you $200K of liquidity; take the loss and go 4️⃣ "Success is not our responsibility" 👉 Losing money is your own problem 5️⃣ "Paradigm is a VC; going to zero was always possible" 👉 They leaned on institutional backing, and now say zero is reasonable 6️⃣ "You shouldn't expect us to protect you like fund managers" 👉 Users down 95% get accused of "expecting too much" 7️⃣ "Threatening the team gets you disqualified" 👉 People who lost money don't even get a voice 8️⃣ "We take no salary; it's for the Web3 ecosystem" 👉 Classic playing the victim + moral blackmail 9️⃣ "We have always acted in the community's best interest" 👉 Making you lose money while still selling the dream 🔟 "Who would want to join a community angry enough to threaten the founder and his children?" 👉 Biting back, dumping the blame on the victims __________________________________________ And the reality is: ❌ Users down 90%+ ❌ Liquidity is minimal, practically locked ❌ No compensation plan whatsoever ❌ They moved essentially all the stablecoins users deposited into exchanges And then she tells you: 👉 this is your problem __________________________________________ Honestly: this is no longer a failed project; this is a character problem. If you're still fantasizing that they'll come save you: wake up. The only thing they're doing is: 👉 getting you to accept your losses and shut up, so they can pack up and run.

Japanese
1
6
8
3.1K
Greta008
Greta008@Greta0086·
I read every single statement from the Wildcard founder's wife. I have just one thing to say: this isn't a project team, this is textbook PUA + blame-shifting + harvesting. __________________________________________ She said: 1️⃣ "$WC is designed to reward community members" 👉 We bought at high cost, and she calls it a "reward" 2️⃣ "We invested a lot in marketing" 👉 The community begged them to market countless times; they never really did 3️⃣ "TGE + adding the pool was your chance to exit" 👉 Translation: we took $3M and only gave you $200K of liquidity; take the loss and go 4️⃣ "Success is not our responsibility" 👉 Losing money is your own problem 5️⃣ "Paradigm is a VC; going to zero was always possible" 👉 They leaned on institutional backing, and now say zero is reasonable 6️⃣ "You shouldn't expect us to protect you like fund managers" 👉 Users down 95% get accused of "expecting too much" 7️⃣ "Threatening the team gets you disqualified" 👉 People who lost money don't even get a voice 8️⃣ "We take no salary; it's for the Web3 ecosystem" 👉 Classic playing the victim + moral blackmail 9️⃣ "We have always acted in the community's best interest" 👉 Making you lose money while still selling the dream 🔟 "Who would want to join a community angry enough to threaten the founder and his children?" 👉 Biting back, dumping the blame on the victims __________________________________________ And the reality is: ❌ Users down 90%+ ❌ Liquidity is minimal, practically locked ❌ No compensation plan whatsoever ❌ They moved essentially all the stablecoins users deposited into exchanges And then she tells you: 👉 this is your problem __________________________________________ Honestly: this is no longer a failed project; this is a character problem. If you're still fantasizing that they'll come save you: wake up. The only thing they're doing is: 👉 getting you to accept your losses and shut up, so they can pack up and run.
Greta008 tweet media
Chinese
40
26
87
31.2K
David Apex
David Apex@apx_david·
@vinber_io Finally, we can have a research team that doesn’t argue over lunch. Collaboration just got a serious upgrade.
English
1
0
0
18
vinber
vinber@vinber_io·
AgiBot (智元机器人) announces its 10,000th general-purpose embodied robot has rolled off the line. AgiBot co-founder Peng Zhihui (彭志辉) announced that the company's 10,000th general-purpose embodied robot officially came off the line on March 28; the model is the Expedition A3 (远征 A3). By publicly stated figures, AgiBot took only about three months to go from 5,000 to 10,000 units, a clear acceleration in mass production.
vinber tweet media
Chinese
0
0
1
61
vinber retweeted
Cheng Lou
Cheng Lou@_chenglou·
My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow
English
1.3K
8.3K
64.9K
23.1M