DRiftingZ


After chatting in the group for a while, I noticed something hilarious: if you don't pay for Claude, you won't get banned. Pay $20, you won't get banned. Pay $100 or $200, and the odds of getting banned go way up. In short, the more money you give them, the more likely you are to get banned.

I'm always curious what kind of input method produces these "native language slipped out" posts, where simplified Chinese characters end up mixed into Japanese text. When you type Japanese normally, you use a Japanese input keyboard, after all.

GLM 5 beats Claude Opus 4.6 and GPT 5.3 Codex in the AICodeKing benchmark

Huge! @TianhongLi6 & Kaiming He (inventor of ResNet) just introduced JiT (Just image Transformers)! JiTs are simple large-patch Transformers that operate on raw pixels, with no tokenizer, pre-training, or extra losses needed. By predicting clean data on the natural-data manifold, JiT excels in high-dimensional spaces where traditional noise-predicting models can fail. On ImageNet (256 & 512), JiT achieves competitive generative performance, showing that sometimes going back to basics is the key.
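The key idea in the post, predicting the clean data (x-prediction) rather than the added noise, can be sketched in a few lines. The toy NumPy example below is my own illustration under assumed flow-matching-style interpolation, not the actual JiT implementation; the identity "model" is a placeholder for the large-patch Transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy(x0, eps, t):
    # Linear interpolation between clean data and noise (flow-matching style).
    return (1.0 - t) * x0 + t * eps

# Toy "images": batches of flattened raw pixels, no tokenizer.
x0 = rng.standard_normal((4, 16))   # clean data
eps = rng.standard_normal((4, 16))  # Gaussian noise
t = 0.7
xt = make_noisy(x0, eps, t)

# x-prediction (the JiT-style target): the model outputs an estimate of
# the clean data x0 and is trained with a regression loss against it.
pred_x0 = xt  # placeholder model: identity instead of a Transformer
loss_x = np.mean((pred_x0 - x0) ** 2)

# eps-prediction (the traditional target): the model instead estimates
# the noise; the clean sample is then recovered algebraically.
pred_eps = eps  # placeholder: pretend the model predicted eps perfectly
x0_recovered = (xt - t * pred_eps) / (1.0 - t)
```

The algebra shows why the two targets differ in high dimensions: recovering x0 from a predicted eps divides by (1 - t), amplifying any prediction error as t approaches 1, whereas x-prediction regresses directly onto the natural-data manifold.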