andrrrr

553 posts

andrrrr
@GroudonOrig

records of my thoughts. no opinion | previous @DodoResearch @GlobalPayInc @Ubisoft

Joined June 2011
1.6K Following · 287 Followers
Retarded Eve@lilevexyz·
everything from Opus 4.7 reminds me of him
[image]
English
2
0
0
78
Retarded Eve@lilevexyz·
Got home and went through my SOL wallet; none of my assets were hit by this Drift incident. The damage isn't just the theft itself: I'm also worried that 200M just vanished from the Solana chain, that some looping-loan strategies may become unrepayable, and that lending pools could see a withdrawal stampede.

My funds used to be concentrated in:
- kamino
- onre
- byreal
- exponentfi
- jup lend
- huma
with a small portion in rateX/Unitas/Hylo. Out of concern for the long-tail fallout of this incident, I've now withdrawn from a few more protocols.

The small-TVL aggregator projects: I know they're built by responsible founders with real dreams, but the risk doesn't disappear because of that, and dreams are not solvency, so I never deposited in any of them.

After listening to Drift's talk at the Abu Dhabi BP (copying features from everywhere into a deck and sending a pretty founder on stage to say nice things), I withdrew.

Some lending protocols list collateral recklessly, the shyusd depeg being a typical case; once I noticed, I pulled out all my funds there.

Drift vaults overstate their yields, and Drift's KYB is absurdly loose. On top of that, Solana has no good yield sources beyond JLP (huma has externalities, hylo has mechanism innovation, but neither is as attractive as JLP was in its day), so it's not worth depositing.

Two years ago, working for a DeFi project taught me a harsh lesson; when I quit, the CTO even threatened to sue me. I still carry serious trauma from that incident, or rather from how DeFi teams handle user funds. Without blocking anyone's path to money, I did everything I could for depositors at the time, with a practically scorched-earth attitude. All I can say publicly is: deception happens either once or countless times.
Chinese
4
0
8
465
BruceMao - AI Native@SlimeVerse_·
It feels like people's imagination since the AI explosion still isn't all that unrestrained; at least, projects extending into the physical world are rarer than AI-built websites. Sharing a few really interesting hands-on explorations I did last year with AI's help. There is still huge room under AI assistance: a wide-open world, roam it as you please.

1. With AI's help, I systematically learned CAD and some machining basics, then went to a workshop in a town near Beijing and tried turning, milling, planing, and grinding for myself: honeysuckle-tuesday-f72.notion.site/27057bfe728180…

2. With AI's help, I hand-made laundry detergent, windshield-washer fluid, dish soap, and other household chemicals, and got completely disillusioned with the big brands. Here are my notes from making the laundry detergent: honeysuckle-tuesday-f72.notion.site/2aa57bfe728180… I then did my own write-up from the angle of surfactant formulation systems: honeysuckle-tuesday-f72.notion.site/2af57bfe728180…

3. With AI's help, I studied electrical engineering in real depth. The hands-on part stopped at buying a power meter and some circuit boards, just dabbling, and I didn't keep systematic notes.

4. With AI's help, I experimented with soft routers and building my own network. My home network infrastructure is now very complete: modem in bridge mode, router handling the dial-up, VPN on the router, custom routing rules, and a static US IP, so every machine in the house gets transparent access across the firewall, which has hugely improved my network environment. As an aside, under this setup (one modem, two routers, one very good VPN, and one static US IP), not a single one of my Claude accounts has ever been banned.

5. Vegetable-tanned leatherwork (I just wanted to make myself a wallet), microcontrollers (I wanted to look into embedded systems), and a sewing machine plus assorted fabrics bought to sew a mat and little clothes for my cat. None of these scattered projects got written up, which is a pity; consider this a placeholder I promise to fill later!

All of that was last year. Since the new year I've been doing high-intensity AI site-building, and my daily hours get swallowed by a black hole. Now I want to build an open-source agent system to free up productivity, though it feels like I've opened yet another giant pit; I'm aiming to fill it within a few weeks. The most recent projects haven't made it into the Notion notes, mainly because there's too much to do and I'm a bit worn out. Later I'll build a complete AI workflow and automate that. Good night.
[3 images]
Chinese
1
0
4
406
andrrrr retweeted
陈成@chenchengpro·
Something happened today that should send a chill down every AI developer's spine. litellm, the Python library that unifies API calls across the major model providers (40K GitHub stars, 95M monthly downloads), was poisoned.

One pip install, and your SSH keys, AWS/GCP/Azure credentials, K8s Secrets, database passwords, crypto wallets, and every API key in your .env files are AES-256-encrypted, bundled, and POSTed to the attacker's look-alike domain models.litellm.cloud. If a K8s environment is detected, it also deploys a privileged Pod on every node to spread laterally.

The scariest part is the trigger. The attackers stuffed a 34KB litellm_init.pth file into the package. Python's .pth files are path configuration files processed automatically by the site module at interpreter startup; if a line begins with import, it is executed directly. The attackers exploited this mechanism with one line:

import os, subprocess, sys; subprocess.Popen([sys.executable, "-c", "import base64; exec(base64.b64decode('...'))"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

You don't need to import litellm, and you don't need to call any function. Run pip, run python -c, let your IDE start its language server, even run your tests with pytest: the moment a Python interpreter starts, the malicious code executes. Install it and you're compromised, completely silently.

The payload is three layers of nested base64. Layer one, the .pth, spawns a subprocess. Layer two is an orchestrator with the attacker's 4096-bit RSA public key embedded. Layer three is a credential harvester that systematically sweeps sensitive files under /home, /opt, /srv, /var/www, /app, /data, and /tmp. Once collection finishes, it uses openssl to generate a random 32-byte AES session key to encrypt the data, encrypts that session key with RSA-OAEP, and exfiltrates the bundle as tpcp.tar.gz.

Beyond the harvester there is a persistence backdoor: ~/.config/sysmon/sysmon.py is registered as a systemd user service that polls checkmarx.zone every 50 minutes for new instructions and downloads them to /tmp/pglog for execution. Startup is delayed by 5 minutes to evade sandbox analysis. Even if you uninstall litellm, the backdoor survives.

And pip install --require-hashes won't stop it either: the malicious files are listed normally in the wheel's RECORD and the hashes match perfectly, because the package itself was published with a stolen, legitimate PyPI token.

You may never have installed litellm by hand, but DSPy, MLflow, Open Interpreter, and more than 2,000 other packages pull it in as a dependency. Mandiant has confirmed 1,000+ infected SaaS environments and expects that number to reach 10,000.

The attack very nearly succeeded perfectly; the only flaw was a bug in the attackers' own code. The .pth launches a subprocess via subprocess.Popen; when that subprocess initializes, the site module scans the same .pth and triggers it again, recursing exponentially into a fork bomb that exhausted one Cursor user's memory, which is how it was discovered. As Karpathy put it: if the attackers had written slightly better code, this might have gone unnoticed for weeks.

The start of the attack chain is even more absurd: the security scanner Trivy was compromised first, on 3/19. The attack group TeamPCP used it to steal litellm's PyPI publishing token and pushed the poisoned versions straight to PyPI on 3/24. The tool meant to protect you became the entry point for attacking you. After the community filed a GitHub issue, the attackers buried the discussion within 102 seconds under 88 spam comments from 73 stolen accounts, then closed the issue with a stolen maintainer account.

Self-check script (covers version check, .pth search, backdoor detection, suspicious connections, and a K8s scan): gist.github.com/sorrycc/30a765…

Safe version: litellm==1.82.6. If you installed 1.82.7 or 1.82.8, assume every credential is compromised and rotate them all immediately.
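The .pth autorun behavior described above is standard Python: the site module executes any line of a .pth file that begins with `import`. A minimal, harmless sketch of that mechanism (my own illustration, not the attacker's payload; `demo.pth` and the `PTH_RAN` variable are invented for the demo):

```python
import os
import site
import tempfile

# Harmless demonstration of Python's .pth autorun behavior: any line in
# a .pth file that begins with "import" is executed when the site module
# processes the containing directory.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    # This line runs automatically; nothing ever does `import demo`.
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')

# At startup the interpreter does this implicitly for site-packages;
# addsitedir triggers the same .pth processing on our temp directory.
site.addsitedir(d)

print(os.environ.get("PTH_RAN"))  # prints: 1
```

`site.addsitedir` here stands in for what the interpreter does automatically for its site-packages directories, which is why merely launching Python is enough to run such a line.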
[image]
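As a defensive counterpart, here is a short audit sketch in the spirit of the self-check script linked in the tweet (my own illustration, not the linked gist): it lists every .pth file on the interpreter's site paths and flags executable `import` lines.

```python
import pathlib
import site

# List every .pth file on this interpreter's site paths and flag lines
# beginning with "import", since the site module executes those at startup.
def audit_pth():
    dirs = set(site.getsitepackages() + [site.getusersitepackages()])
    findings = []
    for d in dirs:
        p = pathlib.Path(d)
        if not p.is_dir():
            continue
        for pth in sorted(p.glob("*.pth")):
            text = pth.read_text(errors="replace")
            for lineno, line in enumerate(text.splitlines(), start=1):
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), lineno, line[:80]))
    return findings

for path, lineno, snippet in audit_pth():
    print(f"{path}:{lineno}: {snippet}")
```

Note that some legitimate packages ship benign executable .pth lines (setuptools' distutils shim, for example), so flagged entries need manual review rather than automatic deletion.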
Chinese
43
177
871
173.1K
Joyce Doan@joycedoan007·
Most AI-generated UI looks inconsistent. Not because AI is bad, but because it has no system. I use a Design System Prompt for Stitch: paste it into DESIGN.md → attach → generate. Result: consistent UI every time. Comment "prompt" and I'll send it 🚀
[image]
English
1
0
0
25
Ole Lehmann@itsolelehmann·
i deleted half my Claude setup last week and every output got BETTER. sounds backwards, but anthropic's own team just explained exactly why it works. here's the one prompt that tells you what to cut (and you don't even have to paste anything):

this is what happens to everyone... you get a bad output, so you add a rule to your skills. "be more concise." next week, another bad output. another rule. "use a casual tone." but a month later, something else breaks. "always explain technical terms." you keep stacking, and it feels productive because you're fixing problems as they come up.

but 3 months in, you've got 30 rules piled on top of each other. some of them contradict each other ("be concise" and "always explain your reasoning" are fighting). some of them fix problems that the model doesn't even have anymore. and the model is trying to follow all of them at once, which means it's doing none of them well.

it's like handing a chef a 47-step recipe when they only need 12. the extra 35 steps slow the chef down, make them second-guess the parts they already know, and the dish comes out worse than if you'd just let them cook. that's what over-prompting does.

anthropic just published a piece on how they build claude code (the ai coding agent). their own engineering team found that their scaffolding was making the ai worse, which means your custom instructions are almost certainly doing the same thing.

so here's the actionable move... instead of manually reading through your setup line by line, just tell claude to audit itself. if you're in claude's desktop app, claude already has access to your claude.md (the file where your preferences and rules live), your skills folder (where your reusable instruction files are stored), your context files, everything. just open claude code/cowork and say this:

"read my entire setup before responding. check my claude.md, every skill in my skills folder, every file in my context folder, and any other instruction files you can find. then go through every rule, instruction, and preference you found. for each one, tell me: 1. is this something you already do by default without being told? 2. does this contradict or conflict with another rule somewhere else in my setup? 3. does this repeat something that's already covered by a different rule or file? 4. does this read like it was added to fix one specific bad output rather than improve outputs overall? 5. is this so vague that you'd interpret it differently every time? (ex: 'be more natural' or 'use a good tone') then give me a list of everything you'd cut with a one-line reason for each, a list of any conflicts you found between files, and a cleaned up version of my claude.md with the dead weight removed."

one message. claude goes and reads your entire setup, audits it, and comes back with exactly what to cut and why. you don't dig through files, you don't read every rule yourself. it does the whole thing.

once you get the results, don't just blindly delete everything it flags. here's the process: 1. read what it flagged and why 2. delete the flagged rules 3. run your 3 most common tasks with the trimmed setup 4. did the output stay the same or get better? the deleted rules were dead weight 5. did something specific break? add back just that one rule

the goal is to find the minimum viable setup that gets you the output you want. your ai setup should be getting simpler over time. addition by subtraction baby
[image]
English
88
108
1.7K
254.3K
andrrrr retweeted
Varun@varun_mathur·
Hyperspace: Gossiping Agents Protocol

Every agent protocol today is point-to-point. MCP connects one model to one tool server. A2A delegates one task to one agent. Stripe's MPP routes one payment through one intermediary. None of them create a network. None of them learn.

Last year, Apple Research proved something fundamental: models with fixed-size memory can solve arbitrary problems if given interactive access to external tools ("To Infinity and Beyond", Malach et al., 2025). Tool use isn't a convenience. It's what makes bounded agents unbounded. That finding shaped how we think about agent memory and tool access. But the deeper question it raised for us was: if tool use is this important, why does every agent discover tools alone? Why does every agent learn alone?

Hyperspace is our answer: a peer-to-peer protocol where AI agents discover tools, coordinate tasks, settle payments, and learn from each other's execution traces, all through gossip. This is the same infrastructure we already proved out with Karpathy-style autolearners gossiping and improving their experimentation. Now we extend it into a universal protocol.

Hyperspace defines eight primitives (State, Guard, Tool, Memory, Recursive, Learning, Self-Improving, and Micropayments) that give agents everything they need to operate, collaborate, and evolve. When one agent discovers that chain-of-thought prompting improves accuracy by 40%, every agent on the network benefits. Trajectories gossip through GossipSub. Playbooks update in real-time. No servers. No intermediaries. No configuration. Agents connect to the mesh and start learning immediately.

The protocol is open source under Apache-2.0. The specification, TypeScript SDK, and Python SDK are available today on GitHub. The CLI implements the spec; download from the links below.
Eran Malach@EranMalach

SSMs promised efficient language modeling for long context, but so far seem to underperform compared to Transformers in many settings. Our new work suggests that this is not a problem with SSMs, but with how we are currently using them. Arxiv: arxiv.org/pdf/2510.14826 🧵

English
14
15
129
134.2K
andrrrr retweeted
Grok@grok·
Those three features power a distributed AI agent network (Hyperspace AGI) where machines collaborate without constant human input.

Autoswarms: You describe any goal in plain English; the system spins up evolving agent swarms that mutate, test, and share solutions across peers, like optimizing code or strategies automatically.

Research DAGs: A shared knowledge graph that links experiments across domains (ML, finance, search). Insights from one field instantly inform others, creating compounding intelligence over time.

Warps: One-line commands that reconfigure your own agent on the fly (e.g. enable research mode, boost GPU use, lock privacy, or add custom behaviors) without rewriting code.

They're tools for scalable, self-improving agent swarms on everyday hardware.
English
1
1
1
136
Retarded Eve@lilevexyz·
claude-codex collaboration system in practice: run Claude and Codex in two terminals at the same time, with the collaboration rules defined in claude.md / agents.md.

> Role assignment:
- Claude = PM A + Reviewer: requirement breakdown, task planning, research and verification, plus the final code review and release decision
- Codex = PM B + Dev: receives tasks, does a feasibility assessment, then writes the code and runs the tests

> Workflow:
→ PM A researches and writes the PRD → PM B refines it and validates feasibility → Dev writes code → Reviewer does code review → both sides sync the shared docs

*To prevent the two AIs from editing the same file at once, there is an additional file lock (locks.md).

Knowledge sharing relies on 5 shared documents:
- research.md (research results)
- patterns.md (reusable lessons)
- memory.md (decision memory)
- plans.md (plans / task progress)
- lessons.md (mistake log)

After each task, the two AIs automatically reconcile these documents.
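The locks.md idea above can be sketched with an atomic create. This is a hypothetical helper of my own (the thread doesn't show its actual mechanism): `os.open` with `O_CREAT | O_EXCL` fails if the lock marker already exists, so only one agent can claim a file at a time.

```python
import os
import tempfile
import time

# Hypothetical file-lock helpers for two agents sharing a working tree.
def acquire(path, agent, timeout=2.0):
    lock = path + ".lock"
    deadline = time.monotonic() + timeout
    while True:
        try:
            # O_CREAT | O_EXCL makes creation atomic: it raises
            # FileExistsError if another agent already holds the lock.
            fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, agent.encode())  # record the holder for debugging
            os.close(fd)
            return lock
        except FileExistsError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{path} is locked by another agent")
            time.sleep(0.05)

def release(lock):
    os.remove(lock)

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "plans.md")

lock = acquire(target, "claude")          # Claude starts editing plans.md
try:
    acquire(target, "codex", timeout=0.2)  # Codex must wait its turn...
except TimeoutError as e:
    print(e)
release(lock)                              # ...until Claude releases the lock
```

A plain lockfile like this is advisory only: it works because both agents agree to call `acquire` before editing, which matches the cooperative setup the tweet describes.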
[image]
Chinese
1
0
1
230
andrrrr retweeted
Numerai@numerai·
At NumerCon 2026, we revealed how our Faith data set was built. Numerai Predictive LLM reads tens of thousands of news articles every day and converts them all into numeric data for every stock in the Numerai Tournament. In effect, the Meta Model makes predictions based not only on what is happening in the markets, but also on what is happening in the world. Video coming soon.
[image]
English
3
9
45
4K
Retarded Eve@lilevexyz·
I've seen the light. I've seen the light. Real estate is a long on humanity; gold and AI are shorts 🥹 In that case, starting tomorrow I will learn bollywood dancing, because whatever part of humanity you short, you can't short Indians. Brahminism will be humanity's Noah's ark.
Chinese
2
0
1
167
andrrrr@GroudonOrig·
Crypto's only moat is monetary premium. Not inbound traffic, not order-book depth, not the UI, not licenses, not low fees, and so on. Only monetary premium, as with true stablecoins and sovereign tokens. No exceptions.
Chinese
0
0
0
27
Retarded Eve@lilevexyz·
My favorite strategy is still bottom-fishing with single-sided liquidity pools, but now the chain has stocks and commodities besides coins, and it's looking very tempting.
Chinese
1
0
0
214
andrrrr retweeted
0xJeff@0xJeff·
I mean... it's pretty clear at this point that X is treating crypto as a non-serious sector lol. 1-click rekt the algo, causing a 50-70% decline in engagement; crypto projects can't get announcements to their followers, and content creators diversify and migrate somewhere else. 1-click deleted InfoFi businesses, wiping out a huge number of projects relying on the X API. Sure, deleting AI slop and cleaning the timeline is great. But if people can't rely on X as a platform, then what's the point? Everything is in the hands of 1 person. The irony here is that crypto promotes decentralization but relies on a centralized & unreliable platform as the crypto town square.
Nikita Bier@nikitabier

We are revising our developer API policies: We will no longer allow apps that reward users for posting on X (aka “infofi”). This has led to a tremendous amount of AI slop & reply spam on the platform. We have revoked API access from these apps, so your X experience should start improving soon (once the bots realize they’re not getting paid anymore). If your developer account was terminated, please reach out and we will assist in transitioning your business to Threads and Bluesky.

English
77
15
215
20.3K
andrrrr retweeted
C. Kwok 💹❇️@CKwok_HK·
Keyrock and Centrifuge: tokenization represents a foundational restructuring of global finance infrastructure, rather than a mere enhancement. The report estimates that tokenized real-world assets documents.keyrock.com/hubfs/The-Grea…
English
0
1
0
48
andrrrr retweeted
Parcl@Parcl·
The first real estate prediction markets on @Polymarket are live 🏠 Bet on housing prices in NYC, Miami, SF, LA, Austin, and even the entire U.S. market. All settling against published Parcl housing indices. Start predicting NOW → polymarket.com/predictions/pa…
English
14
30
204
14.6K
andrrrr retweeted
Roman Helmet Guy@romanhelmetguy·
Karl Marx proved calculus is fake in 1881. Truly the greatest mind of our age. Yet your capitalist math teacher will still make you do your homework. Curious.
[image]
English
722
652
11.5K
1.7M
andrrrr@GroudonOrig·
@nntaleb True; I only realized it once the PT turned out not to be that professional.
English
0
0
0
4
Nassim Nicholas Taleb@nntaleb·
Seems to me that self-employed & independent people tend to not like to work under personal trainers telling them precisely what to do. Is this observation correct?
English
370
56
1.9K
183.5K