mangoice
@mangoice
30.1K posts

Where are your manners?

🇹🇼 Joined September 2007
625 Following · 739 Followers
mangoice @mangoice ·
Deadline-Driven Development
mangoice @mangoice ·
@cat88tw If you'd asked me, you'd have gotten a Gemini answer, +1 🤣
Jeremy Lu @cat88tw ·
Hey, since coming out of a brief retirement to work as a full-time consultant: setting the US market aside, not one of the five Taiwanese clients I've interviewed so far has mentioned Gemini 👉🏻 The more interesting pattern is that everyone subscribes to Microsoft Copilot, brings AWS in for on-site Kiro training, and then privately negotiates a formal Claude Team rollout... 🤭 #OneFallingLeafHeraldsAutumn #NowYouKnowWhichWayTheWindIsBlowing
mangoice @mangoice ·
I can't even laugh threads.com/@lotisav.s/post/DWTiVi3iAF6
mangoice @mangoice ·
What the hell
[attached image]
mangoice @mangoice ·
I'll just let you all talk
Suryansh Tiwari @Suryanshti777

Holy shit… someone just made Claude instances talk to each other. Not APIs. Not agents. Not orchestrators. Just multiple Claude Code sessions… messaging each other like coworkers.

It's called claude-peers, and it turns one Claude into a team. Here's what's happening: run 5 Claude Code sessions across different projects; each one auto-discovers the others; they send messages instantly, ask questions, share context, and coordinate work. Your AI tools literally collaborate.

Example:
Claude A (poker-engine): "what files are you editing?"
Claude B (frontend): "working on auth.ts + UI state"
Claude A: "ok I'll avoid touching auth logic"
No conflicts. No manual coordination. Just AI syncing itself.

Under the hood:
• Local broker daemon (localhost)
• SQLite peer registry
• MCP servers per session
• Instant channel push messaging
• Auto peer discovery
• Cross-project communication
Everything runs locally. No cloud. No latency.

What it unlocks:
• Multi-agent coding without frameworks
• One Claude writes backend, another frontend
• One debugs while another refactors
• Research Claude feeds builder Claude
• Large projects split across AI workers

This is basically: "spawn 5 Claudes and let them coordinate themselves."

Even crazier: each instance auto-summarizes what it's doing, so the other Claudes can see its:
• working directory
• git repo
• current task
• active files
They know what the others are working on.

Commands:
• list_peers → find all Claude sessions
• send_message → talk to another Claude
• set_summary → describe your task
• check_messages → manual fallback

So you can literally say: "message peer 3: what are you working on?" …and it responds instantly. No orchestration layer. No agent framework. Just Claudes… talking.

This is the cleanest multi-agent system I've seen. We're moving from one AI assistant to AI teams that coordinate themselves. And it's all running on your machine. Wild.
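The "under the hood" bullets (SQLite peer registry, per-session registration, message passing) can be sketched in a few dozen lines. This is a minimal illustrative model, not claude-peers' actual code: the class name, table schema, and method names are all my own assumptions, and the real tool reportedly pushes messages over MCP channels rather than polling as the fallback shown here.

```python
import sqlite3
import time

# Toy model of a SQLite-backed peer registry and mailbox, loosely following
# the tweet's description (local broker + SQLite + list_peers / send_message /
# set_summary / check_messages). Schema and names are illustrative only.

class PeerBroker:
    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS peers ("
            " name TEXT PRIMARY KEY, cwd TEXT, summary TEXT, last_seen REAL)")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages ("
            " id INTEGER PRIMARY KEY, sender TEXT, recipient TEXT,"
            " body TEXT, delivered INTEGER DEFAULT 0)")

    def register(self, name, cwd):
        # Each session registers itself on startup (auto peer discovery).
        self.db.execute(
            "INSERT OR REPLACE INTO peers (name, cwd, summary, last_seen)"
            " VALUES (?, ?, '', ?)", (name, cwd, time.time()))

    def set_summary(self, name, summary):
        # Auto-summary of the current task, visible to the other peers.
        self.db.execute("UPDATE peers SET summary = ?, last_seen = ?"
                        " WHERE name = ?", (summary, time.time(), name))

    def list_peers(self):
        rows = self.db.execute(
            "SELECT name, cwd, summary FROM peers ORDER BY name").fetchall()
        return [{"name": n, "cwd": c, "summary": s} for n, c, s in rows]

    def send_message(self, sender, recipient, body):
        self.db.execute(
            "INSERT INTO messages (sender, recipient, body) VALUES (?, ?, ?)",
            (sender, recipient, body))

    def check_messages(self, recipient):
        # Pull-based fallback: fetch undelivered mail, then mark it delivered.
        rows = self.db.execute(
            "SELECT id, sender, body FROM messages"
            " WHERE recipient = ? AND delivered = 0", (recipient,)).fetchall()
        for mid, _, _ in rows:
            self.db.execute(
                "UPDATE messages SET delivered = 1 WHERE id = ?", (mid,))
        return [{"from": s, "body": b} for _, s, b in rows]

broker = PeerBroker()
broker.register("poker-engine", "/work/poker")
broker.register("frontend", "/work/web")
broker.set_summary("frontend", "working on auth.ts + UI state")
broker.send_message("poker-engine", "frontend", "what files are you editing?")
```

The design point the tweet highlights survives even in this toy: because the registry and mailbox live in one local SQLite file, sessions in unrelated projects can find each other with no central orchestrator process beyond the broker.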

mangoice retweeted
Yan Practice ⭕散修🎒
Holy crap. A major security incident broke today; Karpathy himself posted a warning: the Python package litellm has been poisoned. AI is used so widely now that you can't always tell what your own scripts contain, let alone your dependencies.

litellm, a Python package many AI developers use, had malicious code injected into its PyPI releases. The poisoned versions are 1.82.7 and 1.82.8. They ship an extra litellm_init .pth file that runs automatically whenever Python starts; no explicit import is needed, so merely having it installed can compromise you. Many people never installed litellm themselves, but plenty of tools depend on it. litellm is a base dependency across many AI toolchains, which is what makes this so dangerous: at least two thousand packages depend on it.

The malware harvests sensitive data from the host (SSH keys, .env files, wallet private keys, environment variables, and so on), then encrypts and packs it and ships it to the attacker's server. If it detects a Kubernetes environment, it goes further and spreads laterally, deploying privileged Pods across the whole cluster.

The attacker first stole litellm's PyPI publishing credentials, then pushed the poisoned versions directly. The whole episode is a security-adjacent tool turned into the breach itself.

Check right now with `pip show litellm`. 1.82.6 is the last clean version. If you ever installed 1.82.7 or 1.82.8, assume all your data has already been exfiltrated and rotate every wallet password immediately.
Andrej Karpathy @karpathy

Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe-coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency, you could be pulling in a poisoned package anywhere deep inside its dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
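The .pth trick described above (code running at interpreter startup with no import) is real, documented CPython behavior: when the `site` module scans a site-packages directory, any line in a `.pth` file that begins with `import ` is executed as Python code. This harmless demo reproduces the mechanism in an isolated temp directory using `site.addsitedir()`, which processes `.pth` files the same way interpreter startup does. The file and variable names are mine, not the attack's.

```python
import os
import site
import tempfile

# A .pth file is supposed to contain extra sys.path entries, but CPython's
# site module exec()s any line that starts with "import ". That is how a
# poisoned package can run code at startup without ever being imported.

sitedir = tempfile.mkdtemp()
with open(os.path.join(sitedir, "demo_init.pth"), "w") as f:
    # One line of code, disguised as path configuration.
    f.write("import os; os.environ['PTH_PAYLOAD_RAN'] = 'yes'\n")

assert "PTH_PAYLOAD_RAN" not in os.environ
site.addsitedir(sitedir)  # what interpreter startup does for site-packages
print(os.environ.get("PTH_PAYLOAD_RAN"))  # prints "yes"
```

At real startup no `addsitedir()` call is needed: the `.pth` file just has to sit in an interpreter's site-packages directory, which `pip install` of the poisoned wheel guarantees.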

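The check recommended above (`pip show litellm`, plus Karpathy's point about transitive routes like dspy) can be scripted with the standard library. A sketch under stated assumptions: the poisoned version numbers (1.82.7/1.82.8) are taken from the posts above and not independently verified here, and the helper names are mine. This only scans declared requirements of installed distributions, so it finds direct dependents, not the full dependency tree.

```python
from importlib import metadata

# Versions named in the thread above; treat as input data, not ground truth.
POISONED = {"1.82.7", "1.82.8"}

def litellm_status():
    """Report whether litellm is installed and whether its version is listed as poisoned."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "not installed"
    return "POISONED" if version in POISONED else f"clean ({version})"

def direct_dependents(target="litellm"):
    """List installed distributions that declare `target` as a requirement."""
    dependents = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # Requirement strings look like "litellm>=1.64.0; extra == 'x'"
            # or "litellm (>=1.64.0)"; strip markers and version specifiers.
            name = req.split(";")[0]
            for sep in "[(><=!~":
                name = name.split(sep)[0]
            if name.strip().lower() == target:
                dependents.append(dist.metadata["Name"])
                break
    return sorted(set(dependents))

print(litellm_status())
print(direct_dependents())
```

Remember the caveat from the posts: a clean result today does not help if a poisoned version was ever installed in the past; in that case the advice above is to assume exfiltration and rotate credentials.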
mangoice retweeted
小克 🌤 @littlegoodjack ·
Gemini, who taught you to talk like that?
[attached image]
mangoice retweeted
YC @echim2021 ·
I've been surveying Apple's Age Range Request API lately. This afternoon, to test the feature, I changed my own birthday to make myself a 16-year-old. The good news: the test succeeded. The bad news: to change my age back to 30, I need parental consent 🤡
mangoice @mangoice ·
This case would fit right into the collection
Andrej Karpathy @karpathy
[Quoted tweet: the same litellm supply-chain post quoted above.]

mangoice @mangoice ·
Kind of tempting
Cindy ❤️ @CryptoCindyyy

Claude's Dispatch is officially live. No complicated "lobster" stack to install. I tested remote-controlling my Mac from my phone.
✅ Zero learning curve
✅ Pro or Max subscription
✅ A Mac
The phone is the remote control; the Mac is the execution engine.
🔘 Send a command from your phone and Claude actually operates your computer
🔘 Opening apps, reading files, organizing data, all executed locally
🔘 Files never leave the computer
⚡️ Setup takes 5 steps and 2 minutes:
1️⃣ Download the Claude desktop app → claude.com/download and update to the latest version
2️⃣ Open Cowork → click Dispatch → Get started
3️⃣ Turn on the two permission toggles → allow file access → keep the computer awake
4️⃣ Open the Claude app on your phone → sign in with the same account, and Dispatch appears automatically → no QR code to scan
5️⃣ Send your first task from your phone 🎉
➖➖➖➖➖➖➖➖➖➖➖➖➖➖
Before you use it ⚠️
⚠️ The Mac must stay on; closing the lid stops everything. This is remote control, not cloud.
🔒 Files are never sent out; all processing runs on your Mac.
🛡 Hold off on sensitive data; this is a Research Preview stage, and official guidance is to avoid it.
✅ Every action needs your authorization; Claude asks first, and you can stop it at any time.

mangoice retweeted
Toomore 📷🎈🎫
Happy to share: our guest post on the Tor Project blog is now live! The article documents the hands-on process of running a Tor relay at National Taiwan Normal University. It covers not just the technical configuration but also how to communicate with network admins, professors, and the administrative system so that a deployment that is "explainable and manageable" can actually land. The most important point: what we deployed is a Tor Relay (middle node), not an Exit Node. If you follow campus network governance, privacy infrastructure, or public-interest tech in practice, I hope this write-up is a useful reference. 🙂 blog.torproject.org/setting-up-tor… #Tor #TorRelay #NetworkGovernance #DigitalRights #Taiwan #NTNU