Pei
@alas_chn
150 posts
Joined September 2012
344 Following · 16 Followers
Pei @alas_chn:
@SadlyItsBradley Hey Bradley, curious what you think about this “Vision Air”-style XR reference design from China. It’s apparently a ~90g glasses-form device using pancake optics, VST + OST, ~90° FOV, and a split-compute setup powered by GravityXR’s X100 chip. Link: bilibili.com/video/BV1qsRTB…
Brad Lynch @SadlyItsBradley:
When it's hard to make XR headsets work even with a lot of fans and a tethered battery, how do you expect "AR glasses" to randomly solve those problems? They won't. It's for those reasons, and the laws of physics, that I'm convinced XR headsets will be pursued at each breakthrough.
Brad Lynch @SadlyItsBradley:
XR devices are clunky because it's so early. Many of the pieces needed for change can't be mass-produced yet. Vision Pro brute-forced us closer, but it's still not enough; it will take years for the next step. That reality is hard for short-term gratification, but AR/VR is hard.
Pei @alas_chn:
@fuxianyi Seen this way, local governments had a motive to over-report.
易富贤 Yi Fuxian 《大国空巢》:
The 2000 census counted 13.79 million people at age 0. Officials at the National Bureau of Statistics and the family-planning commission claimed that over-quota births had gone unreported and, citing primary school enrollment figures, revised the 2000 birth count up to 17.71 million; sure enough, 17.29 million children enrolled in primary school in 2006. In fact, because education funding is split between the central and local governments, with the central government paying the larger share, local governments and schools routinely inflated enrollment numbers by 20-50% to claim extra education funds. In 2014 there were 14.26 million students in the final year of junior high, and 14.18 million graduated from junior high in 2015.

[Quoting] Pei @alas_chn:
@fuxianyi I found that 2006 primary school enrollment was 17.2936 million. How do you explain this figure as being closer to 17.71 million?

Pei @alas_chn:
@fuxianyi I found that 2006 primary school enrollment was 17.2936 million. How do you explain this figure as being closer to 17.71 million?
易富贤 Yi Fuxian 《大国空巢》:
China achieved universal nine-year compulsory education long ago; the gross junior high enrollment rate in 2015 was 104%. Converting the 14.18 million junior high graduates into 15-year-olds gives only 13.63 million, on par with the 13.79 million age-0 population in the 2000 census and far below the officially announced 17.71 million births.

[Quoting] Jonesun @Jonesun252779:
@fuxianyi There was certainly over-reporting, but the gap isn't as large as you claim. Plenty of rural kids still never make it to junior high graduation.

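Yi Fuxian's conversion from graduates to cohort size is a one-line calculation: a gross enrollment rate above 100% means the graduate count overcounts the age cohort, so dividing by the rate recovers the cohort. A minimal sketch using the figures quoted in the thread (the conversion step is my reading of the argument, not his published method):

```python
# Figures quoted in the thread, in millions of people.
grads_2015 = 14.18        # junior high graduates in 2015
gross_rate = 1.04         # 2015 gross junior high enrollment rate (104%)
census_2000_age0 = 13.79  # age-0 population in the 2000 census
official_births = 17.71   # officially revised 2000 birth count

# A gross rate over 100% means graduates overcount the cohort,
# so divide it out to estimate the number of 15-year-olds in 2015.
age15_est = grads_2015 / gross_rate
print(round(age15_est, 2))                    # ~13.63 million, near the census figure
print(round(official_births - age15_est, 2))  # ~4.08 million gap vs official births
```

The estimate lands within 0.2 million of the 2000 census age-0 count, which is the core of the over-reporting argument.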
Pei retweeted
TVアニメ「バーテックスフォース」【公式】:
▸▸ Giveaway campaign 🎉!! ◂◂ Answer a short survey for a chance to win one of 10 <<not-for-sale B2 posters>> in a drawing 🎁! vertexforce.jp/enquete/ ❚ Survey open until Sunday 4/26 at 23:59. We look forward to lots of entries 🤖 #バーテックスフォース
Pei @alas_chn:
@Engineer_Wong I've always wanted one with only a UV lamp and no filter, to put in the bathroom.
Adam Wong @Engineer_Wong:
Many commercial air purifiers are equipped with a built-in UV lamp and claim that it can kill airborne bacteria. However, this claim is misleading. Most commercial purifiers use H12 or higher-grade HEPA filters, which alone capture over 99.9% of viruses and bacteria in a single pass. Even if the UV lamp could kill 100% of the remaining microbes within the one second that air passes through it, the overall improvement in efficiency would be less than 0.1%. Therefore, the assertion that the UV lamp helps disinfect the air is merely a marketing gimmick.

The true purpose of the UV lamp is to irradiate the filter surface to inhibit fungal growth. Bacteria and viruses generally cannot survive long on the filter, but if the filter becomes damp and is left without airflow for an extended period, mold can proliferate on the dust accumulated in the HEPA filter under suitable temperature and humidity, occasionally causing unpleasant odors.

The downsides of UV lamps include the generation of trace amounts of ozone and accelerated degradation of the filter material, which shortens the filter's lifespan. Consequently, I still do not recommend using UV lamps inside air purifiers. Simply running the air purifier 24/7 can resolve the issue of mold growth on the filter.
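The "less than 0.1%" bound in that tweet follows from simple arithmetic on single-pass efficiency. A quick sketch under the tweet's own best-case assumption, namely that the UV lamp kills every microbe the filter misses:

```python
hepa_capture = 0.999  # H12+ HEPA: over 99.9% single-pass capture (from the tweet)
uv_kill = 1.0         # best case: UV kills 100% of what slips past the filter

# Fraction of microbes removed per pass, without and with the lamp.
without_uv = hepa_capture
with_uv = hepa_capture + (1 - hepa_capture) * uv_kill
gain = with_uv - without_uv
print(f"{gain:.3%}")  # at most 0.100 percentage points of improvement
```

Even in this idealized case the lamp's ceiling on improvement is the 0.1% the filter already lets through, which is why the disinfection claim adds essentially nothing.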
Pei @alas_chn:
@COMICFUZ Where can I get the COMIC FUZ bonus "color illustration"? Does it come with a purchase of the digital edition?
COMIC FUZ(コミックファズ):
🌟 Bonus info updated! 🌟 Bonus details are now out for volume 8 of Jaga-sensei's 『恋文と13歳の女優(アクトレス)』, on sale Tuesday 4/28 🧁 Another batch of super-cute, newly drawn deluxe bonuses ✨ Don't miss the paid-bonus acrylic stand either ⛩️🍈 Chapter 41 is scheduled for 5/5! comic-fuz.com/manga/3149?pos…
Pei @alas_chn:
@fuxianyi My feeling is that high fertility requires social security to be poor. The Nordic welfare state, and China's approach of providing a basic floor while paying to maintain stability, are both mistakes. Raising fertility takes a social system that "devours people", like feudal society or American capitalism.
易富贤 Yi Fuxian 《大国空巢》:
Norway's fertility rate has also fallen, from 1.98 in 2009 to 1.48 in 2025. However wealthy Norway is, it cannot fill the fiscal black hole created by aging. The living elderly have a voice in decision-making; the unborn have none, so they are always the weaker party in the bargaining. The ideal model is still the Confucian one: family first, with social welfare as a supplement; building on "each caring for their own parents and raising their own children" toward "caring not only for one's own parents, and raising not only one's own children".

[Quoting] WWG @wednesd332:
@fuxianyi Even Norway can't manage? Their sovereign wealth fund has set aside US$340,000 for every Norwegian; its ability to generate returns is excellent!

Pei @alas_chn:
@kagibangou029 I saw the news about the new release! I can hardly wait...! I'm really glad we'll get to see it at the large A4 size. It's bigger than a magazine's B5 and much closer to the B4 of the original drawings, so I'm excited to savor the work's appeal even more deeply!
きい @kagibangou029:
Eggplants are food. Which mouth you eat them with is another matter, though. Good night 😴
Pei @alas_chn:
@onevcat Three cobblers with their wits combined beat a Zhuge Liang (i.e., ordinary minds together can outdo a mastermind).
onevcat @onevcat:
A fairly intuitive before-and-after comparison of arguing: expecting the model to produce a truly eye-opening answer is still asking too much, but this does demonstrate some of the basic value of multi-round debate and multi-angle review. For the same question, the ordinary answer and the post-argue answer are no longer on the same level in depth or dialectical perspective. I'm also attaching ChatGPT 5.4 Pro's answer after nearly ten minutes of thinking... How to put it: maybe I can brag that this gets close to GPT Pro results at ordinary-model cost? 😂 (Bragging is tax-free anyway...) github.com/onevcat/argue/
Infuse @infuse:
Infuse 8.4.2 now available! 🙌 New Bwdif deinterlacing option (highest quality), improved handling of extras and bonus content, better support for PiP features, and more! community.firecore.com/t/infuse-8-4-2…
Pei retweeted
少女☆歌劇 レヴュースタァライト 9th Anniversary EXHIBITION:
✨ Follow & repost campaign ✨ A chance to win, at random, a visual poster newly drawn by めばち-sensei 🎯 See the attached image for details 👀 【How to enter】 ① Follow @starlight_9thEX 👤 ② Repost this post 🔁 Entry deadline: Sunday, April 19 at 23:59 #スタァライト9thEX #スタァライト
Pei @alas_chn:
@Balder13946731 It was inevitable. Apple has hundreds of millions of users worldwide; even a 1% rate of edge cases and hallucinations is unacceptable. From an engineering standpoint, they have to put a floor under what large models can do at their worst.
Balder @Balder13946731:
I get the feeling Apple has a research team dedicated to picking holes in LLMs. "If I can't have it, I'll destroy it"? 🌚
[Quoting] Nav Toor @heynavtoor:
🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves. And the way they proved it is devastating.

Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers. Every model's performance dropped. Every single one. 25 state-of-the-art models tested.

But that wasn't the real experiment. The real experiment broke everything. They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly.

Here's the actual example from the paper: "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?" The correct answer is 190. The size of the kiwis has nothing to do with the count. A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are.

But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185. Llama did the same thing. Subtracted 5. Got 185. They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction. The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all.

Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing. The results are catastrophic. Phi-3-mini dropped over 65%. More than half of its "math ability" vanished from one irrelevant sentence. GPT-4o dropped from 94.9% to 63.1%. o1-mini dropped from 94.5% to 66.0%. o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%.

Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause. This means it's not a prompting problem. It's not a context problem. It's structural.

The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense. The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data." And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts."

They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse. A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash.

This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world. You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.

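The kiwi problem in the quoted thread is plain addition, which is what makes the failure so easy to see. A sketch of the correct computation next to the pattern-matched one (the "subtract 5" behavior is as reported in the thread):

```python
friday = 44
saturday = 58
sunday = 2 * friday  # "double the number of kiwis he did on Friday"

# The "five were a bit smaller" clause changes nothing about the count:
# it is a no-op, which is where the GSM-NoOp name comes from.
correct = friday + saturday + sunday
pattern_matched = correct - 5  # what o1-mini and Llama reportedly produced

print(correct, pattern_matched)  # 190 185
```

Seeing a number near "smaller" and emitting a subtraction, rather than asking whether the clause affects the count, is exactly the pattern-matching failure the tweet describes.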
Pei @alas_chn:
@ponpon_sensya Thank you for considering it! I'm delighted. Being able to read it all collected as a book is truly wonderful, so I'll be looking forward to it!
イラストぽんぽん戦車 @ponpon_sensya:
@alas_chn Almost all the pictures can already be seen on pixiv and elsewhere, so I'd wondered whether there was any point, but if you put it that way, I'll go ahead and release it. There's almost no upfront cost anyway. Preparing the data and filing the application will probably take a few days, so please bear with me a little until then.
@ariannabetti.bsky.social @ariannabetti:
@Engineer_Wong: for flying (if neck/muscles allow), AirFanta Wear is a game changer. Many thanks for engineering it. Unexpected bonus: it can be made to look more like high tech cool headphones. KLM crew has remarked on the 4lite a couple of times but so far... Not on the Wear.
🌿 lithos @lithos_graphein:
H200 unblocked in January. US/China summit in May. NXT:1970~2050i tools on the table. How's that SSA800 coming along?