陈仓颉

5.6K posts

@raymekee

Bionic cats dream of electric cat food. https://t.co/DVzNBNyViy

People's Republic of China · Joined May 2009
619 Following · 500 Followers
陈仓颉 reposted
Cyandev @unixzii
Apple has blocked push notifications for many foreign apps in mainland China. Truly impressive. Who knows how far we are from a full region lock 😅
535 replies · 44 reposts · 1.2K likes · 579.2K views
阿杰 Jack @real_veryjack
I've defected, I've defected. The product experience is so much better than Google's.
[image]
3 replies · 0 reposts · 6 likes · 1.6K views
歸藏(guizang.ai)
Xiaomi's MiMo-V2.5 model series is fully open-sourced under the permissive MIT license, allowing free commercial use, continued training, and fine-tuning with no additional authorization. They have also launched the Orbit 100T Token program. This is huge! If you Vibe Code things of your own, go claim it. It has two parts: the "100-Trillion-Token Creator Incentive Program" for AI builders, and the "Agent Ecosystem Co-building Program" for agent framework teams. Creator Incentive Program: approved AI builders can receive up to the Max-tier Token Plan, with 1.6 billion Credits worth 659 RMB. Agent Ecosystem Co-building Program: free MiMo token support for your agent framework, so your users can connect and try the MiMo models at no cost.
[images]

Quoting Xiaomi MiMo @XiaomiMiMo:

Xiaomi MiMo-V2.5 is now officially open-sourced! MIT License, supporting commercial deployment, continued training, and fine-tuning - no additional authorization required. Two models, both supporting a 1M-token context window: • MiMo-V2.5-Pro: built for complex agent and coding tasks, ranking No.1 among open-source models on GDPVal-AA and ClawEval • MiMo-V2.5: a native omni-modal model with strong agent capabilities. A model's value isn't measured by rankings alone; it's measured by the problems it solves. Let's build with MiMo now! 🤗 Weights: huggingface.co/collections/Xi… 📄 Blog: mimo.xiaomi.com/index#blog

49 replies · 4 reposts · 112 likes · 159.1K views
陈仓颉 @raymekee
A satisfying ending.
[image]
0 replies · 0 reposts · 0 likes · 16 views
陈仓颉 @raymekee
Bought a cup of Cotti coffee for one cent in a flash sale.
[image]
0 replies · 0 reposts · 0 likes · 45 views
陈仓颉 reposted
DeepSeek @deepseek_ai
🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length. 🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models. 🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice. Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today! 📄 Tech Report: huggingface.co/deepseek-ai/De… 🤗 Open Weights: huggingface.co/collections/de… 1/n
[image]
1.6K replies · 7.7K reposts · 44.9K likes · 9.4M views
陈仓颉 reposted
Deli Chen @victor207755822
DeepSeek-V3: Dec 26, 2024. DeepSeek-V4: Apr 24, 2026. 484 days later, we humbly share our labor of love. As always, we stay true to long-termism and open source for all. AGI belongs to everyone. ❤️🌍 #DeepSeekV4 #AGIforEveryone #OpenSource

Quoting DeepSeek @deepseek_ai: 🚀 DeepSeek-V4 Preview is officially live & open-sourced! (quoted tweet above)

351 replies · 1.3K reposts · 13.1K likes · 1M views
陈仓颉 reposted
Teknium 🪽 @Teknium
Quick facts about Hermes:
- It's pronounced Her Meeze
- It's named after the Greek god of communication, magic, and intelligence
- We have been using the Hermes name for 3 or so years now, including in our Hermes 1-4 series of models
- That's all ^_^
117 replies · 45 reposts · 1.3K likes · 84.1K views
陈仓颉 reposted
hunter @3gpmh
forgive me karl marx for i have bought
213 replies · 58.3K reposts · 312.9K likes · 3.9M views
陈仓颉 reposted
Peter Steinberger 🦞 @steipete
Yeah folks, it's gonna be harder in the future to ensure OpenClaw still works with Anthropic models.
[image]
541 replies · 249 reposts · 5.5K likes · 1.4M views
陈仓颉 reposted
Nous Research @NousResearch
We have partnered with @Xiaomi to bring their excellent MiMo V2 Pro model to Hermes Agent via the Nous Portal - completely free to use for the next 2 weeks! Access now on the latest version of Hermes Agent: 'hermes update'
168 replies · 173 reposts · 1.9K likes · 712.2K views
陈仓颉 reposted
Ronan Farrow @RonanFarrow
(🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private notes, and interviewed more than 100 people. OpenAI was founded on the premise that A.I. could be the most dangerous invention in human history, and that its C.E.O. would need to be a person of uncommon integrity. We lay out the most detailed account yet of why Altman was ousted by board members and executives who came to believe he lacked that integrity, and ask: were they right to allege that he couldn't be trusted? A thread on some of our findings:
[image]
585 replies · 8.2K reposts · 37.1K likes · 8.5M views
陈仓颉 reposted
kepano @kepano
I like @karpathy's Obsidian setup as a way to mitigate contamination risks. Keep your personal vault clean and create a messy vault for your agents. I prefer my personal Obsidian vault to be high signal:noise, and for all the content to have known origins. Keeping a separation between your personally-created artifacts and agent-created artifacts prevents contaminating your primary vault with ideas you can't source. If you let the two mix too much it will likely make Obsidian harder to use as a representation of *your* thoughts. Search, bases, quick switcher, backlinks, graph, etc., will no longer be scoped to your knowledge. Only once your agent-facing workflow produces useful artifacts would I bring those into the primary vault.

Quoting Andrej Karpathy @karpathy:

LLM Knowledge Bases. Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

82 replies · 167 reposts · 3K likes · 483.8K views
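The "small and naive search engine over the wiki" mentioned in the quoted tweet can be sketched in a few lines. This is an illustrative guess at the idea, not karpathy's actual tool; the names `tokenize` and `search` and the scoring scheme are assumptions.

```python
# A minimal sketch of a naive search engine over a directory of .md wiki
# files: score each file by keyword overlap with the query. Hypothetical
# helper names; a real tool would likely use proper ranking (e.g. BM25).
import os
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts; a crude stand-in for real text analysis."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def search(wiki_dir: str, query: str, top_k: int = 5):
    """Return up to top_k markdown files whose tokens best overlap the query."""
    q = tokenize(query)
    scored = []
    for root, _dirs, files in os.walk(wiki_dir):
        for name in files:
            if not name.endswith(".md"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as f:
                doc = tokenize(f.read())
            # Score = shared token mass between query and document.
            score = sum(min(doc[t], c) for t, c in q.items())
            if score > 0:
                scored.append((score, path))
    scored.sort(reverse=True)
    return [path for _score, path in scored[:top_k]]
```

Exposed as a tiny CLI, a function like this is exactly the kind of tool the tweet describes handing to an LLM agent for larger queries over the wiki.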
陈仓颉 reposted
阑夕 @foxshuo
After 麻豆传媒 (Madou Media) officially announced its shutdown, the veteran-viewer group chats filled with leaks and discussion. A rough summary:
- Vice trades like porn, gambling, and drugs are so lucrative that they rarely fail from actual losses unless the authorities step in, but porn is comparatively the most brittle of the three, because its supply is far less constrained; leaving ancient history aside, today it is in no sense a scarce commodity;
- Madou's troubles had long been visible in the steady exodus of performers (苏畅, 沈芯语, 夏晴子, 吴梦梦, 艾秋, 苏语棠, 莉娜, and other names I don't know). Each new wave displaces the last: the same way Madou once ate into SWAG's market share, newer studios like 大象传媒 later poached Madou's talent;
- On pay, it's not just Madou; every Chinese-language studio pays little. Pure per-film fees may not even reach 10,000 RMB, so many actresses who build a name go independent, running their own communities or an OnlyFans, probably earning more with more freedom. This trade hardly rewards company loyalty, and some amateurs even shoot films mainly to raise their offline rates;
- Madou's business model had roughly three legs: standard user payment (subscriptions and pay-per-view), the much-mocked gambling and gray-market ad placements, and licensing, which is where all those mirror sites come from;
- The structure looks diversified but always limped: the pillar covering production costs was ad placement. In an environment where the films can be found anywhere online, Chinese-speaking audiences basically never pay; even Madou's official account raffling free memberships on Threads drew almost nobody, which is bleak;
- Put bluntly, without Netflix-style streaming stickiness, consumption is "content divorced from platform": nobody bothers watching on Madou's own site, which is why the shutdown notice listed piracy as one of two main causes. Weak user payment in turn sinks the licensing model: if direct sales can't move the goods, why would resellers do better;
- So in practice Madou leaned heavily on online-gambling and crypto ad sponsors, and an upheaval never aimed at it, yet one that objectively cut off those patrons, destroyed it: the sweep of Southeast Asian scam compounds over the past couple of years, and the fall of the Prince Group as a money-laundering hub, choked off the upstream money;
- Madou ran what should have been a D2C business as B2C, which may have been the original sin. Who knows how many users were repelled by the ad copy shoved in front of the camera; viewers watch to get off, and the ads and even spoken ad reads constantly break that focus. I once sought out a film in which the actress shouted ad slogans at climax; call it an impotence inducer;
- To be fair, Madou's stunts and internet savvy were always strengths, the most creative at riding trending topics and decent at execution. But as above, traffic drives acquisition while quality drives retention: the shoddy production meant Madou could never become the SOD of Chinese-language studios, forever repeating the cycle of playing a great opening hand badly;
- Shrinking budgets, more filler, weaker performers: anyone who knows content industries will recognize the death spiral. Even the rising number of tattoos on performers was a clear signal. Someone once asked why Madou couldn't cast performers without tattoos; the official account quipped back "they'd have to exist first". Whether they truly don't exist or just weren't found, I'm skeptical;
- For now, the Japanese industry's industrialization, a complete closed loop from production to distribution, simply cannot be copied; it depends on a highly conservative and static social system. Every other market, the West included, is moving toward a more individualized, MCN-like model in which studios cede most profit and power to the distribution end and the creators. Madou was, in a way, born at the wrong time;
- Not all of this is necessarily accurate; a post-mortem or a book from Madou itself would be the best record, maybe the Chinese adult industry's own "激荡三十年" for the history books.
52 replies · 53 reposts · 612 likes · 112.7K views
陈仓颉 @raymekee
Oh, GitHub, you...
[image]
0 replies · 0 reposts · 0 likes · 63 views
陈仓颉 reposted
Readwise @readwise
Introducing the Readwise CLI. Anything you've saved in Readwise (highlights, articles, PDFs, books, YouTube, newsletters) is now instantly accessible from the terminal. For you, and your AI agents. npm install -g @readwise/cli
64 replies · 111 reposts · 1.2K likes · 271K views
Bruce 📷 @huhexian
Today is my Gregorian-calendar birthday. 24 years old.
[image]
75 replies · 0 reposts · 197 likes · 10.9K views
陈仓颉 @raymekee
π
0 replies · 0 reposts · 0 likes · 64 views