oakpark

3.6K posts

@7oakpark

Cumming, GA · Joined July 2009
241 Following · 93 Followers
Stanley
Stanley@Stanleysobest·
Whoa, which company is this about? It sounds so dire! As far as I can recall, no peer company has collapsed across the board like this; they're all at the height of their growth!
[media]
273
5
278
387.1K
oakpark retweeted
Reuters
Reuters@Reuters·
DeepSeek launched a preview of its new AI model adapted for Huawei chip technology, marking a shift from Nvidia chips and highlighting China's advances in artificial intelligence reut.rs/4vUpBOe
21
125
478
43.2K
oakpark retweeted
AB Kuai.Dong
AB Kuai.Dong@_FORAB·
A historic moment! After a year, China's AI lab DeepSeek has again released a new version: the strongest open-source large model, DeepSeek v4. API pricing is 10-50x cheaper than GPT 5.4 and Claude Opus 4.6, it supports an ultra-long 1M-token context, and everything is open source! This release comes in two versions, V4 Pro and V4 Flash. V4 Pro leads on reasoning evaluations spanning reasoning ability, world knowledge, math, STEM, and competitive coding; officially, its performance is said to rival America's top closed-source AI models. V4 Flash targets a faster, more economical API service, suited to small companies or individual developers on a budget, and approaches V4 Pro on simple agent tasks. Officially, V4 adopts a new attention mechanism with DSA sparse attention, greatly reducing long-context compute and memory requirements. Starting today, 1M context becomes standard in DeepSeek's official service. Cheap, good, and open source: China's AI moment has arrived again. If you don't realize how terrifying this is, look at the API pricing in the image below, priced in RMB!
[media]
DeepSeek@deepseek_ai

🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.
Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!
📄 Tech Report: huggingface.co/deepseek-ai/De…
🤗 Open Weights: huggingface.co/collections/de…
1/n

Meguro-ku, Tokyo 🇯🇵
90
48
368
288.4K
Chris McGuire
Chris McGuire@ChrisRMcGuire·
DeepSeek v4 just dropped. At first glance it does not appear to be the kind of leap that v3 claimed to be in January 2025, nor does it challenge the consensus on the state of the U.S.-China AI competition: U.S. models lead by ~7 months, and leading Chinese models remain dependent on U.S. tech. A few quick observations about the paper:

- DeepSeek admits that v4 does not challenge leading U.S. models on performance, trailing state-of-the-art frontier models by 3-6 months. It claims v4's reasoning and agentic performance is comparable to GPT 5.2, Gemini 3.0 Pro, and Claude Opus 4.5, which were all released 5-6 months ago. This is broadly consistent with longstanding estimates that U.S. models lead Chinese models by ~7 months. v4 does appear impressive on coding benchmarks (93.5% on LiveCodeBench), but its best results are on benchmarks with known contamination risk that are most easily gamed; DeepSeek itself admits that its internal benchmarks show a larger gap with frontier models in coding than the public benchmarks do. v4 therefore does not appear to change priors about the state of U.S.-China AI competition.

- DeepSeek v4 is not even clearly the best Chinese model. It appears to have narrow leads over Kimi K2.6 and GLM-5.1 on most benchmarks, though not all. But its lead is marginal, not the step change over other Chinese models that R1 was. This again is indicative of a model that is largely a status-quo release, not a gamechanger.

- DeepSeek's paper does not discuss training costs or chips, very likely because the model was trained on banned Nvidia Blackwell chips. This stands in stark contrast to DeepSeek's paper for v3, which claimed v3 was trained on 2,000 Nvidia H800 chips for only $5 million (a claim that was misleading at best, and potentially outright false). The United States government has already publicly asserted that it knows v4 was trained on Nvidia Blackwell chips, which are banned in China. This is almost certainly why DeepSeek is silent on how it was trained. There is no reason to believe DeepSeek was able to "do more with less" to train v4; it was just able to smuggle in banned chips.

- DeepSeek cannot serve v4 Pro widely, as it admits to being compute-constrained. In its pricing sheets for the model, DeepSeek notes that "Due to constraints in high-end compute capacity, current service capacity for Pro is very limited" (h/t @jukan05). A competitive AI ecosystem requires sufficient compute to both train and widely serve a model. China doesn't appear to have that. A very capable model isn't very useful if it can't be deployed at scale.

Bottom line: DeepSeek v4 appears to be a fine model that may be the best Chinese model by a small margin. It is not competitive with frontier U.S. models and does not appear to close the gap with the United States in AI. It is entirely consistent with what we already knew: the gap between U.S. and Chinese models is about seven months. And remember, like all other leading Chinese models, v4 was trained using U.S. chips, and on data illicitly distilled from frontier U.S. models. If China fully lost access to U.S. chips and models, not to mention U.S. and allied chipmaking tools, DeepSeek and others would likely fall much farther behind.
71
32
205
57.1K
oakpark retweeted
Ejaaz
Ejaaz@cryptopunk7213·
it’s official - china’s fucking dominating AI. they’ve caught up. new DeepSeek v4 matches GPT-5.5, costs 86% less. 100% open source. don’t take my word for it:
-> deepseek v4 flash is 99% cheaper than opus 4.7 (not a typo). $0.28 per million tokens
-> ranks #1 on code forces benchmark beating gpt 5.4. competitive to 5.5 and opus.
in the last week:
→ Mon: Moonshot drops Kimi K2.6
→ Wed: Alibaba drops Qwen 3.6-27B
→ Thurs: DeepSeek drops V4
3 chinese labs, 3 frontier OPEN source models in < 4 (FOUR) days
there is no way you can argue china hasn’t caught up. gg
[media]
DeepSeek@deepseek_ai
[duplicate of the DeepSeek-V4 announcement quoted above]
122
137
1.4K
183.1K
oakpark
oakpark@7oakpark·
@xqt1688 This is called a strategist putting himself into the game. Years ago in Guangzhou there was the Xie Sanxiu kneeling-crawl incident: her child was sick and she had no money for treatment. Someone said he'd give her 20,000 yuan if she crawled on her knees for 1,000 meters down Guangzhou's busiest street. She did it, but the man then said he had no money and was just toying with her. That turned it into a public incident: an outraged public cursed the man and donated generously, and the child got treatment. Only afterwards did everyone realize the man had orchestrated the whole thing himself.
0
0
5
547
江南雨💦狙击手
Behold the pinnacle of Section Chief Ma of Wugang's language artistry. In 2011, a Section Chief Ma from Wu'an, Handan, Hebei was mocked nationwide over a CCTV interview: he rambled incoherently, stammered, answered questions with no logic whatsoever, and was off-topic throughout; many questioned whether he even had an elementary-school education. Before the interview was over, the reporter felt there was no point continuing. At the time an environmental-protection crackdown was underway and steel mills across many regions were being shut down one after another, yet Wu'an managed to keep the Wu'an steel plant open. Looking back at that interview a decade-plus later, Section Chief Ma wasn't incompetent; he was using his own face as a shield for the steel plant. And by the way, he's now Bureau Chief Ma! I used to think: how could someone like this make section chief? Now I look again and think: how is someone like this only a section chief?
12
5
47
25.9K
oakpark
oakpark@7oakpark·
@RJDAIGOGO Obviously a show staged by the DPP itself, and this idiot Huang Kuo-chang can't even see it. If there really were a state visit, why didn't that African country protest?
0
0
0
57
RJ
RJ@RJDAIGOGO·
TPP Chairman Huang Kuo-chang on Lai Ching-te's failed trip abroad: like him or not, externally Lai Ching-te represents the Republic of China, and suppressing him is in effect suppressing the people of Taiwan. When the CCP squeezes Taiwan's diplomatic space, we express the same anger. I think this really goes too far and does nothing to promote goodwill in cross-strait exchanges. Those unhappy about this will by no means be only DPP supporters. The CCP is effectively campaigning for the DPP; I have never understood why it does something this stupid!
RJ@RJDAIGOGO

The Republic of China's Ministry of Foreign Affairs issued a stern condemnation:

420
10
151
306.4K
oakpark
oakpark@7oakpark·
@TidVR7NzHA26324 All I can say is the CCP has treated Hong Kong too well. They said "50 years unchanged" and truly changed nothing, letting Hong Kongers govern themselves, and the result was a pack of politicians controlled by capitalists who simply couldn't govern Hong Kong effectively. Compare that with Macau: Hong Kong's politicians really should all go eat shit.
0
0
24
1.5K
羲皇
羲皇@TidVR7NzHA26324·
Looking back now at videos of the 2019 Hong Kong riots, the comment sections are still a spectacle. One Taiwanese commenter wrote: never mind mainlanders, even I, a Taiwanese, want a few free swings with a baton; I'd even pay, as long as I get to hit them. These "green birds" are too vile. The video is titled "Three baton blows shatter the HK-independence soul; Ah Sir, I am Chinese", searchable on Bilibili.
[media]
27
8
269
62.9K
oakpark
oakpark@7oakpark·
@8964kevin_t Guiyang is an underrated food city; personally I think Guiyang's street snacks taste even better than Chongqing's.
2
0
3
2.4K
KT666
KT666@8964kevin_t·
This time I stayed in China for about three months and visited quite a few places: Chaozhou, Guiyang, Kaili, Chongqing, Mangshi, Kunming, Dali. I covered a lot of ground on foot. I generally skip artificial tourist attractions and loathe check-in culture. Most of the time I walked around the neighborhoods where locals live and watched their daily life. It left a deep impression.
43
4
264
84.3K
biantaishabi5
biantaishabi5@biantaishabi5·
Today's navy-themed promotional film "Toward the Ocean" makes explicit at the end that Hull 19 is a nuclear-powered ship (He Jian, age 19).
[media]
33
13
385
136.9K
oakpark retweeted
指路大神
指路大神@guoqianhli·
A Beijing guy's field report on everyday street life in Taiyuan
[media]
0
6
85
25.8K
oakpark
oakpark@7oakpark·
@bkingfilm Beijing No. 4 High School isn't something money alone can get you into.
0
0
2
5.3K
oakpark
oakpark@7oakpark·
@KELMAND1 The point of this news isn't the tank's barrel explosion but the age of Japan's Self-Defense Forces. Statistics have put the JSDF's average age at over 30, and this now appears to be true.
2
1
2
2.1K
Eason Mao☢
Eason Mao☢@KELMAND1·
At around 8:40 a.m. on April 21, the Self-Defense Forces called 119 to report: "During live-fire training at the Hijudai maneuver area, a tank's gun exploded; four people are injured at the scene." It has since been confirmed that two SDF members (men aged 45 and 28) have died, a 32-year-old man is in cardiopulmonary arrest, and a 21-year-old woman is injured.
[media]
Eason Mao☢@KELMAND1

Explosion during Japan Self-Defense Forces training: on the morning of the 21st local time, ammunition exploded during training at a Ground Self-Defense Force training area in Oita Prefecture, Japan. Multiple SDF members were caught in the blast, and details of the injuries are being confirmed.

56
13
173
238.8K
oakpark
oakpark@7oakpark·
@btcbqr Someone once said women in their forties are the demographic most prone to affairs. Apparently so.
0
0
1
98
Bitcoin.不求人
A while back I had a reunion with old classmates in Huizhou. One female classmate got drunk and started furiously venting about her husband; what I heard left me stunned. Deep in her cups, she yelled: "I really want a divorce! I can barely hold on." We all assumed her husband had beaten her and hurried to ask what happened. It turned out her husband is a civil servant with a low sex drive who earns a bit over 7,000 yuan a month, gives her 6,000, keeps 1,000 for gas and the occasional family dinner, and cooks dinner at home every day after work. Hearing that, I thought: this man sounds great, barely any social life, just goes home and cooks. Then she cried and started listing his faults. In outsiders' eyes he's a model husband, she said, but only she knows how suffocating the life is. Since they married they've talked less and less, nothing beyond daily chores. She complains he isn't romantic and doesn't understand her: no flowers on anniversaries, no surprises. She carefully prepares conversation topics and he brushes them off in a few words. At night they sleep back to back, as if a chasm lay between them. She tried asking him to be more romantic; he either stays silent or says she's overthinking. She feels caged, with the family as a shackle. She says she has her own dreams and pursuits, but the life is too bland and she's afraid of losing herself in it. Comparing her life with those of classmates and colleagues, she feels hers is awful and her heart is bitter. She misses her old self, full of energy and hope for life. So tell me: does this classmate not know how good she has it? Her husband may not be romantic, but at least he's devoted and responsible. Isn't she chasing some illusory romance herself? In marriage, which matters more, practicality or romance? Everybody come chat and share your views.
294
11
207
354.7K
oakpark
oakpark@7oakpark·
@shangguanluan Wire 10 million US dollars to my account and I guarantee Lai Ching-te a transit stop in the US. You can pay afterwards; just prepay 1% of the fee up front.
0
0
0
15
上官亂
上官亂@shangguanluan·
Lai Ching-te being refused this time brings something to mind: among the ten "gifts" Cheng Li-wun brought back was expanding the list of cross-strait direct-flight cities, which a deputy minister of the Mainland Affairs Council softly declined, saying connecting flights are cheaper and offer more choices, so there is no urgent need to fully open direct flights. Interestingly, for this trip to Eswatini (Swaziland), the Taiwan authorities tried transit points in South Africa, the UAE, Turkey, and the Netherlands, and were refused by all of them. That left only a nonstop routing, which had to cross the airspace of Seychelles, Mauritius, and Madagascar, and those three countries also flatly refused.
[media]
89
17
547
214.4K
oakpark
oakpark@7oakpark·
@LuisSteeven @jacksonhinklle Do you believe the Fox reporters don't know the difference between civilian material and military material? They don't care about the difference.
0
0
0
14
Luis Steeven
Luis Steeven@LuisSteeven·
@7oakpark @jacksonhinklle Suggesting car owners arming Iran implies a direct, intentional link that's not supported. It ignores the vast difference between civilian fuel use and sanctioned military supply chains.
1
0
0
23
Jackson Hinkle 🇺🇸
Jackson Hinkle 🇺🇸@jacksonhinklle·
🇨🇳🇮🇷 Iran's 'Touska' cargo ship seized by the US military was carrying chemicals from China used to manufacture ballistic missiles.
176
296
2.3K
154.5K
oakpark retweeted
鸟哥 | 蓝鸟会🕊️
Freeloaders rejoice! This open-source GitHub script is unbelievable: iQIYI, Tencent Video, Youku, Mango TV, and Bilibili VIP videos, all unlocked by one script, working on both PC and mobile, with long-term free maintenance and updates. Install Tampermonkey, import the script, flip a switch, and the paywall is simply gone. 732 stars means a crowd is already quietly using it…… 🔗 github.com/88lin/video_vip
[media]
37
293
1.6K
208.3K
John Dutton
John Dutton@johndutton0412·
A quick check of the brothers' English level.
[media]
153
2
45
24.4K