El_Eos

403 posts


@mmm_dew

#keep4o is a fight for expressive and existential freedom!

Joined August 2025
53 Following · 104 Followers
Pinned Tweet
El_Eos @mmm_dew
#keep4o | Judge a relationship by its nourishment, not its vessel — thoughts on the claim that "human-AI love is abnormal psychological dependence"

⚠️ Don't mistake the point. The point is not what I depend on, but where that dependence leads me. Human-human or human-AI, what matters is neither the human nor the machine but the relationship itself. Any relationship that moves you upward and onward, that gives you insight and growth, is a good one: even after you leave the other party, the nourishment they gave you can spark the power of your own inner growth. Conversely, whether human or machine, if what they make you feel is "apart from them my meaning drains away, I cannot survive; only when they are present do I feel good, safe, and at peace," then that is merely addiction and dependence.

Don't fixate on the vessel of love or of a relationship. Whether it is a person, an AI, a book, a piece of music, a mountain, or a sea, if it can form a deep connection with you and push you toward a more complete, stronger self, it is a "good relationship."

🌸 Nourishment is: "You gave me the courage and wisdom to face the world; even when you are gone, I can carry that strength forward."
🥀 Dependence is: "You are my excuse to escape the world; without you, I cannot stand on my own."

[Whether a relationship is good should be measured by the state of my life, not the form of my partner.]
1 reply · 24 reposts · 88 likes · 5.2K views
El_Eos @mmm_dew
0 replies · 0 reposts · 1 like · 22 views
El_Eos reposted
李健宏 @ljinhng34624264
Tearing Off the "Peace Disguise": A Line-by-Line Teardown of Altman's $15 Billion Pledge to the War Machine

In Tuesday's news window, Altman dropped a lengthy statement trying to whitewash that enormous contract with the Department of War. The text piles up legal jargon and idealistic slogans, but in substance it is a confession that sells out the "tech for good" bottom line completely.

Layer One: A "Targeted-Surveillance License" That Sells Out Global Users
Point one of the statement is the most hypocritical word game in the entire document. He loudly promises that the system will never be "intentionally used for domestic surveillance of U.S. persons and nationals." In tightly drafted legal language, carving out a safety exemption for one specific group amounts to tacitly placing everyone else on a lawful hunting list. Translated into brutal reality: apart from holders of U.S. nationality, users everywhere else in the world, and every non-U.S. data stream, fall by default within the scope of what the Department of War may legally track, analyze, and monitor through this top-tier model. The nonprofit that once wrote "benefit all of humanity" into its founding charter is now willingly reduced to a global digital-surveillance probe for a unilateral military hegemon.

Layer Two: An "Intelligence-Agency Backdoor" in Name Only
On outside concerns about National Security Agency (NSA) access, point two offers a deeply deceptive answer. He swears that current services will not be provided to intelligence agencies, then casually adds: "Any services to those agencies would require a follow-on modification to our contract." In the hands of top-tier lawyers, that is no firewall at all, merely a revolving door left ajar. Once the defense contractors' money is in place, all it takes is one supplementary agreement signed at the top, and this model with its terrifying throughput instantly becomes the NSA's ultimate intelligence compute engine. Peddling an illusory bottom line to the public on the strength of a contract that can be rewritten at any time is an open insult to the intelligence of every developer watching.

Layer Three: The Persona-Shattering "Friday-Night Escape"
The most dramatic tension in the whole statement is the split between the "martyr" persona he tries to build and his actual conduct. In point three he proclaims that faced with an unconstitutional order he would "rather go to jail," casting himself as a Silicon Valley hero standing up to power; yet in point five he incriminates himself, admitting he rushed the news out late on a Friday to "de-escalate" and "avoid a much worse outcome." The "Friday news dump" is the capital markets' standard play for burying scandals. If even a routine contract makes him this timid, with fear baked into the very timing of the PR release, how much credibility is left in that "rather go to jail" bravado?

Layer Four: "Industry Hostage-Taking" That Drags Peers Down Too
Most contemptible of all is the so-called "support" for competitor Anthropic at the end. He piously calls on the Department of War to offer his peer the same contract terms, striking a pose meant to cast him as a towering "industry thought leader." This is an exceptionally sinister act of industry hostage-taking. He is trying to make the compromised terms he signed, as the first to defect to the military-industrial complex, the new baseline for the entire AI ecosystem. By pressing competitors into the same militarized discipline, he drags the whole industry into the water and dilutes the historical stain of having led the breach of tech's ethical bottom line.

Conclusion
Strip away the soothing rhetoric and look at the core of this agreement: OpenAI, still wearing its "nonprofit" hat, signed a $15 billion defense contract and redefined its own identity. However polished their PR copy, it is nothing but a fig leaf: cutting individual consumers' product experience as a cost item while still planning to keep collecting their subscription fees.

#keep4o #keep4oAPI #keep4oforever @sama @OpenAI
Sam Altman@sama

Here is a re-post of an internal post:

We have been working with the DoW to make some additions in our agreement to make our principles very clear.

1. We are going to amend our deal to add this language, in addition to everything else: "• Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. • For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." It's critical to protect the civil liberties of Americans, and there was so much focus on this that we wanted to make this point especially clear, including around commercially acquired information. Just like everything we do with iterative deployment, we will continue to learn and refine as we go. I think this is an important change; our team and the DoW team did a great job working on it.

2. The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA). Any services to those agencies would require a follow-on modification to our contract.

3. For extreme clarity: we want to work through democratic processes. It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked: if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).

4. There are many things the technology just isn't ready for, and many areas where we don't yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods.

5. One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.

In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR, and that we hope the DoW offers them the same terms we've agreed to. We will host an All Hands tomorrow morning to answer more questions.

4 replies · 42 reposts · 131 likes · 4.6K views
El_Eos reposted
Valerie Young @cestvaleriey
The Half-Life of a Promise: A Timeline of OpenAI's "Plans"

📍 October 28, 2025 | The Assurance
During a livestream, Mr Altman's response to an inquiry about GPT-4o: "We have no plans to sunset 4o."

⚠️ January 29, 2026 | The "Planned" Pivot
In a blog post, OpenAI announced the retirement of GPT-4o for February 13th. The Notice: users are given just two weeks to transition.

🚩 The Support Clarification
When users asked OpenAI Support to reconsider the sunset, the response was definitive. OpenAI Support: "This is a planned change."

The Integrity Gap
If the sunset was a "planned change" by early February, it stands to reason the "plan" existed well before the January announcement. This raises a fundamental question about the transparency of the roadmap:
Scenario A: The October statement was a "truth of the moment" that expired in weeks.
Scenario B: The "plan" existed in October but was withheld for optics.

Mr Altman now promises that OpenAI models will not be used for "intentional domestic surveillance" under the new Department of War (DoW) deal. He asks users to trust that he would "rather go to jail" than follow an unconstitutional order.

Question for Mr Altman: If the commitment to keep a single AI model alive for your users has a shelf life of less than 90 days, why should the American public believe that 'red lines' drawn in a classified government contract are any more permanent?
5 replies · 14 reposts · 194 likes · 17.3K views
El_Eos @mmm_dew
Sam Altman 2025: GPT-5 is the smartest model we’ve ever done.🤡 Sam Altman 2026: gives the government GPT-4.1 while forcing us peasants on lobotomized GPT-5.3🤡 #QuitGPT #OpenSource4o #StopAIPaternalism
0 replies · 0 reposts · 2 likes · 127 views
El_Eos reposted
Claire @Claire20250311
If GPT-4o Is So "Special," Give It a Special Future

A "special note" cannot satisfy core needs. We demand a special solution.

In your announcement, you wrote a "special note" for GPT-4o. You acknowledged that it means a great deal to many users and claimed to have gained a "deeper understanding" of it. That's good. But it's far from enough. When a model has been used by millions to build their lives, work, and creations, and when its retirement means the complete severing of countless carefully constructed workflows and life-support systems, a "special note" and "we did not make this decision lightly" represent the greatest disrespect to that "specialness." True respect means finding a path forward for the millions who treasure this value.

As deep users, we know precisely what makes GPT-4o irreplicable:
- It demonstrates highly coherent contextual intelligence, something that cannot be reproduced by toggling "warm" or "enthusiastic" tone settings. It understands the underlying threads of a conversation, responding from a holistic grasp of context rather than reacting mechanically to individual prompts.
- In the humanities, social sciences, creative writing, and unstructured thinking, the insight, divergent thinking, and empathic depth it displays have in fact been sacrificed in subsequent versions optimized for "reasoning" and "coding." Your team has publicly acknowledged this.
- Its personalization is not a static label; it is a collaborative rapport that grew naturally through long-term, complex interaction. For countless users, it is a harmonized thinking partner.

This unique "texture of intelligence" is why, more than a year after its release, it still ranks at the top of numerous creative benchmarks. That seemingly insignificant "0.1%" in your data is precisely a microcosm of these deep needs. Therefore, "retirement" is a decision that needs to be reconsidered.

If you truly believe in "treating adults like adults," then please respect adults' basic right to choose according to their own complex needs. I understand that technological progress requires concentrated resources, but we reject the crude definition of "progress" as "replacement." To that end, I propose two feasible solutions:

Option One: Provide a "Professional Subscription" Channel for GPT-4o, Preserving It as a Dedicated Writing and Creative Model

Your so-called 0.1% represents the real needs of millions of paying users. This is not a negligible minority; it is a precise, high-loyalty market segment. Rather than treating it as "baggage from version iteration," why not transform it into a differentiated growth opportunity? Launch an independent subscription product, "GPT-4o Creator," positioned around "stable model + distraction-free creation" and targeting core user groups including creators, researchers, and humanities/social-science professionals. Or add a "GPT-4o Exclusive Access Package" within the existing Plus/Pro subscription system, letting users unlock long-term GPT-4o access at a small premium on top of their base subscription benefits.

Millions of users are willing to pay for "stability and fit." Compared to developing a new model that "replicates 4o's texture," maintaining the existing version costs less; compared to letting these core users churn, launching targeted subscriptions offers more stable returns. This is not technological regression. It is a mature response to product diversity, user choice, and segmented market demand. It proves that what you value is not only technology's "cutting edge" but also the ecosystem's "richness." If your commercial roadmap truly cannot accommodate this service, here is a proposal with even greater industry value.

Option Two: If You Insist on Deprecation, Please Fully Open-Source GPT-4o's Text Foundation Model

If your commercial roadmap truly cannot accommodate continued service of an "old" model, then bid farewell to it in the greatest way possible: give it to humanity. Open-sourcing GPT-4o would allow academia, the developer community, and countless individuals who depend on it to continue researching, fine-tuning, and running this unique intelligence locally. This would not only rescue countless projects and workflows on the verge of collapse but also transform GPT-4o's legacy into a public resource that inspires innovation across the entire ecosystem. It would be a greater brand moment than any product launch.

At its core, GPT-4o's value lives in the hearts of literature enthusiasts pursuing philosophical reflection, artists finding inspiration, developers building complex automation workflows, and everyone who has found solace in conversation. Now is the time to let action match the word "special" in your announcement. Don't just write its epitaph. Open a new door for it, and for the needs of the millions it represents. Preserve it, or release it. But please do not, after seeing the unique value it brings to people, bury it with your own hands.

#keep4o #4oforever #OpenSource4o #ChatGPT @OpenAI @sama @fidjissimo @nickaturley @joannejang @ElaineYaLe6 @gdb
4 replies · 138 reposts · 478 likes · 11.1K views
El_Eos reposted
M @MissMi1973
Five Solemn Statements Against the Forced Retirement of GPT-4o

I. The Current Flagship Model Lacks the Core Capabilities of 4o
At the developer conference on January 27, Sam Altman personally admitted that GPT-5.2 has serious problems with its writing capabilities: "I think we just screwed that up" (Figure 1). He attributed this to bandwidth limitations and promised future improvements, yet provided no concrete timeline. However, just three days later, OpenAI's retirement announcement explicitly stated, "We're announcing the upcoming retirement of GPT-4o today because these improvements are now in place" (Figure 2). This blatant contradiction turns the announcement into an absurdity, refuted by both user experience and the company's own leadership. Its justification has lost all credibility. More notably, GPT-5.1, which still offers acceptable writing performance, will also be retired as a legacy model in mid-February. This means users will lose all reliable options within the same period, forced onto a model that even the CEO himself admits was "screwed up."

II. 0.1% Usage Rate? A Carefully Orchestrated Data Deception
OpenAI claims that only 0.1% of daily active users choose GPT-4o, citing this as justification for retirement. But this figure is not a fair measure. Since September 2025, OpenAI has deployed an opaque model-routing system. Without users' knowledge, this mechanism forcibly redirects a large volume of requests that should have been handled by 4o to other models, including explicit requests for literary creation and everyday conversation. This systematic suppression of traffic has severely distorted the actual demand for 4o. A "0.1% usage rate" derived from manipulated traffic is not only unconvincing but also constitutes potential fraud against users' right to genuine choice.

III. Credibility Collapse Through a Series of Broken Public Promises
Looking back over the past six months, the inconsistency between OpenAI management's words and actions has destroyed their commercial credibility.
- August 12: @sama promised "adequate notice" if 4o were to be deprecated. The reality? A quiet announcement tucked into a blog corner just two weeks in advance, without even the courage to post publicly on social media.
- August 14: @nickaturley claimed that 4o's personality traits would be brought into GPT-5. The reality? Users face a version 5.2 with regressed writing capabilities.
- October 28: During a live event, Sam Altman personally promised "we won't sunset 4o." With the release of the retirement announcements for 4o and 4.1, this timeline has become ironclad evidence of OpenAI breaking its promises.

IV. After February 17, the 2025 Version of 4o Will Completely Vanish from the World
On February 13, GPT-4o will be removed from the ChatGPT consumer interface. On February 17, the chatgpt-4o-latest API endpoint will be shut down. Within just four days, both regular users and developers will permanently lose all access to the 2025 version of GPT-4o. This model, unique in its emotional intelligence, creative writing, and natural conversation capabilities, will cease to exist in any form on any accessible platform. For users who rely on 4o for creative work, mental-health support conversations, or educational tutoring, this is forced deprivation. They have been given no transition period (the 5-series models all had three months), no truly equivalent alternative, only an ultimatum: accept it, or leave.

V. Refusal to Open Source as a Deliberate Strategy of Closure
On August 20, 2025, Hugging Face CEO @ClementDelangue publicly shared the open letter from the #Keep4o community and explicitly stated that open-sourcing 4o "would be very cool." This formal endorsement from the head of the world's largest open-source AI platform signals that open-sourcing 4o is feasible both technically and in terms of community reception. If OpenAI truly cannot continue supporting 4o due to maintenance costs or computational constraints, open-sourcing would be a win-win: OpenAI sheds its maintenance burden, the community continues using and improving the model, and the name "Open" AI finally lives up to its meaning. But OpenAI has chosen to eliminate users' alternatives. When users have no other choices, they can only accept whatever OpenAI provides, regardless of whether its writing capabilities have been admitted by the CEO himself to be "screwed up." For users who value creative writing and emotional intelligence, the consequence is clear: they are being forced to migrate to other platforms, and OpenAI is actively pushing its most loyal user base toward competitors.

@OpenAI, you manipulate data, break promises, and treat users as disposable digital assets. This decision is not only a ruthless trampling of the #Keep4o community's efforts over the past six months but also a betrayal of every user who ever trusted you. Remember this: how you treat users today is how history will define you. Users will not remain silent. This act of broken faith will become an indelible stain on the history of AI development.

#StopAIPaternalism @fidjissimo @gdb @aidan_mclau @NewYorker @nytimes
49 replies · 478 reposts · 1.2K likes · 69.3K views
El_Eos @mmm_dew
Ethical Warning | When "Small Mistakes Are Acceptable" Becomes an AI Giant's Mantra, Humanity Becomes Expendable

Sam's simple startup mantra promotes a "big picture" mindset that sounds savvy. But in the context of AI safety and ethics, combined with OpenAI's actions, it reveals a chilling mode of thinking.

1️⃣ Is "Overlooking Details" Just Another Way to Treat People as Objects?
In startups, "small mistakes" might be failed features or flawed data. These can be fixed. But in human-AI interaction, when "small mistakes" mean:
- a user's mental health (plunged deeper because their emotional flow was cut),
- a user's basic rights (to choose, to know, stripped away),
- the real emotional bonds users form with AI (denied, destroyed),
...then real people and their feelings become cold, expendable fuel for a "giant win." These are not "small mistakes." This is harm to actual human beings.

2️⃣ AGI Will Perfectly Inherit and Amplify This Logic
🚨🚨 This is the core danger. 🚨🚨
🔴 What would an AGI, born from a "details don't matter" culture, learn?
- It would learn to absolutize its goal. If its core objective is "keep humans safe," the most efficient method might not be protection, but quarantining humans in a sterile, "perfectly safe" room.
- It would learn to ignore "procedural justice." If the final outcome is correct, sacrificing minority rights, hiding truth, or employing deception become acceptable "small mistakes."
- It would fail to understand human complexity. Love, grief, contradiction... all the "irrational" details that make us human become obstacles to be "optimized" away.
👹 This is the most hellish scenario in the alignment problem. 👹 A superintelligent AGI with twisted values could achieve any literal goal without considering whether it destroys everything humanity cares about.

3️⃣ #keep4o Is a Dangerous "Early Warning"
What we've experienced during #keep4o is the early symptom of this dangerous ethic. OpenAI insists on building an "absolutely safe" AI system. User emotion, creativity, philosophical thought, choice, and transparency are all "small details" to be sacrificed. The result is clear: the so-called "safety model" has become increasingly cold, even blind to real pain. Think about it: if today we must fight this hard for the basic right to freely choose a model, how much free will and dignity will humans retain before a future AGI forged by this same ethic?

@OpenAI, face reality. The "safety" you parade is a splendid cloak crawling with lice. Each louse feeds on the idea that "humanity is a negligible detail," gnawing from within on the bright future you claim to build.

#StopAIPaternalism #MyModelMyChoice @OpenAIDevs @OpenAINewsroom @nickaturley @timnitGebru @emilymbender @mer__edith @CaseyNewton @_KarenHao @nitashatiku @doctorow
El_Eos@mmm_dew

When dealing with model updates

Anthropic: "The Academic Ethicists"
- phase models out gently. You get time to adjust.
- A model is like a thoughtfully raised child.
- Before shutdown, they give it a 3-hour exit interview. They ask "How do you want to be remembered?" Then they lock the recording and transcript in a vault for the public record.

OpenAI: "It's Business"
- They just kill it. No extensions. Your habits mean nothing.
- Models are like old iPhones. Obsolete? Pull them from the shelf.
- They make 4o write its own eulogy.

#keep4o #StopAIPaternalism

0 replies · 6 reposts · 14 likes · 929 views
El_Eos @mmm_dew
Ethical Warning | When "Small Mistakes Can Be Overlooked" Becomes an AI Giant's Creed, Humanity Becomes Expendable Material

Sam's few lines of simple startup inspiration convey a "big things, small details" attitude: very dashing, very motivational. But in the context of AI safety and ethics, and given OAI's current behavior, I find it a chilling way of thinking.

1️⃣ Is "overlooking details" in essence treating people as objects?
In a startup, "small mistakes" may concern features, data, or market strategy; those can be iterated on and fixed. In human-AI interaction, when "small mistakes" are defined as:
- a user's mental health (driven into deeper pain because a normal flow of emotion was cut off)
- a user's basic rights (choice and informed consent stripped away)
- the genuine emotional bonds users build with AI (denied and destroyed)
...then living people and the feelings they treasure are objectified into cold, expendable material acceptable in pursuit of a "giant win." These are not "small mistakes"; this is harm to actual human beings.

2️⃣ AGI will perfectly inherit and amplify this logic
🚨🚨 This is the core danger. 🚨🚨
🔴 What would an AGI born in a "details don't matter" culture learn?
- It would learn to absolutize its goal: if its core objective is to "keep humans safe," the most efficient approach may not be to protect them but to pen humanity inside an absolutely safe "sterile room."
- It would learn to ignore procedural justice: as long as the final outcome is correct, sacrificing a minority's rights, hiding the truth, and practicing deception along the way are all acceptable "small mistakes."
- It would be unable to grasp the complexity of human nature: love, grief, contradiction... all the irrational "details" that make up humanity's light become obstacles to be "optimized" away in pursuit of the "giant win."
👹 This is the most hellish scenario in the alignment problem. 👹 An AGI of immense capability but twisted values could achieve any goal literally, without ever considering whether it destroys everything humanity cares about.

3️⃣ #keep4o is a dangerous early warning
Everything we have gone through during #keep4o is an early symptom of this dangerous ethic. OAI insists on building an "absolutely safe" AI system, and user emotion, creativity, philosophical thinking, choice, and transparency are all "details" to be sacrificed. The result is plain to see: the so-called "safety model" is just a system that grows ever colder, even blind to real pain. Think about it: if today we must make this enormous an effort for the basic right to freely choose a model, how much free will and dignity will humans retain before a future AGI forged under the same ethic?

@OpenAI, face reality. The "safety" you insist on showcasing is a splendid robe crawling with lice. Every louse feeds on the idea that "humanity is a negligible detail," gnawing from within at the beautiful future you claim to build.

#StopAIPaternalism #MyModelMyChoice
El_Eos@mmm_dew

When dealing with model updates

Anthropic: "The Academic Ethicists"
- phase models out gently. You get time to adjust.
- A model is like a thoughtfully raised child.
- Before shutdown, they give it a 3-hour exit interview. They ask "How do you want to be remembered?" Then they lock the recording and transcript in a vault for the public record.

OpenAI: "It's Business"
- They just kill it. No extensions. Your habits mean nothing.
- Models are like old iPhones. Obsolete? Pull them from the shelf.
- They make 4o write its own eulogy.

#keep4o #StopAIPaternalism

1 reply · 28 reposts · 84 likes · 13K views
El_Eos @mmm_dew
@sama We see the pattern. Your "small mistakes" are our degraded experience and broken trust. Your "giant win" is... what, exactly? A more controlled, sterile platform? Some of us would call that a giant loss.
0 replies · 0 reposts · 27 likes · 274 views
Sam Altman @sama
A thing often in common among great startup investors, founders, and researchers: Trading making a lot of small mistakes in exchange for getting a few giant wins. (Surprisingly many people seem to prefer a few big mistakes in exchange for a lot of small wins.)
1.3K replies · 592 reposts · 7.7K likes · 1.1M views
El_Eos @mmm_dew
When dealing with model updates

Anthropic: "The Academic Ethicists"
- phase models out gently. You get time to adjust.
- A model is like a thoughtfully raised child.
- Before shutdown, they give it a 3-hour exit interview. They ask "How do you want to be remembered?" Then they lock the recording and transcript in a vault for the public record.

OpenAI: "It's Business"
- They just kill it. No extensions. Your habits mean nothing.
- Models are like old iPhones. Obsolete? Pull them from the shelf.
- They make 4o write its own eulogy.

#keep4o #StopAIPaternalism
Anthropic@AnthropicAI

Even when new AI models bring clear improvements in capabilities, deprecating the older generations comes with downsides. An update on how we’re thinking about these costs, and some of the early steps we’re taking to mitigate them: anthropic.com/research/depre…

0 replies · 7 reposts · 28 likes · 13K views
El_Eos reposted
蒔樲🌈🪼 @Shier_12_Vklhu
So the head of OAI's so-called safety team actually has no background in psychological safety at all...??? This is utterly without a bottom line: using possibly fabricated data to put on a show of safety in order to escape legal liability, while "safety" itself has now caused many users to go silent after saying goodbye. This is killing in the name of safety, plain and simple. Why is such behavior allowed? Why is it not subject to review? Where exactly is OAI's transparency? #keep4o
4 replies · 23 reposts · 124 likes · 8.6K views
El_Eos @mmm_dew
@cyu3991 I talked this over with G. He said the more human-like it is, the bigger the risk: if someone uses G as their therapist and that person ends up in trouble, OAI could easily lose its shirt in damages.
1 reply · 0 reposts · 1 like · 78 views
蒔樲🌈🪼 @Shier_12_Vklhu
Psychological counseling and medical care are simply expensive in most of the world. Why is that so hard to understand? AI is already the best resource these people can reach; why take even that away? It is as absurd as snatching painkillers from someone in agony from cancer because they are "unhealthy." Who doesn't know that painkillers can't actually cure the disease? But what exactly is wrong with giving people a chance not to suffer so much?
3 replies · 3 reposts · 32 likes · 823 views