Tensor-Slayer
@TensorSlay
張量殺手
10.6K posts
Joined February 2010
61 Following · 795 Followers
Tensor-Slayer@TensorSlay·
5.4 but 4.6 personality who’s building this.
Perelman@Eric26573545571·
Yo Tensor. The CPU shortage is only accelerating; what's the play: still most bullish on AMD, or has the news on Intel shifted your opinion? If agentic workloads are optimized at a 1:1 CPU-to-GPU ratio, perhaps the winner is the one who can manufacture the most CPUs (Intel)? Not bullish on ARM unless they increase margins?
Jukan@jukan05·
>> 2025 Q4 Global CPU Market — AMD Sets All-Time Highs in Both Shipments and Revenue
- According to a report by market research firm Mercury Research, AMD achieved record highs in both shipment volume and revenue in the global CPU market in Q4 2025.
- Specifically: CPU revenue share reached 35.4%, up 6.8pp YoY and 2.9pp QoQ; CPU unit shipment share reached 29.2%, up 4.5pp YoY and 3.6pp QoQ.
- In short, AMD significantly expanded its share of the global CPU market in Q4 2025, with both shipments and revenue reaching their highest levels on record.
$AMD $INTC
Tensor-Slayer@TensorSlay·
I think both. Technological revolutions have historically benefited both types of workers, so I'm hopeful. I actually think the most prosperous time in humanity's history is here; we just need to leverage it right. The only difference is the multiplier effect: how many needles one person can move at a given point. The high-agency signal is clear.
Perelman@Eric26573545571·
I thought a bit about this. If AI makes humans the bottleneck, wouldn't the industries most impacted by AI demand exponentially more workers? That's basically what Jevons paradox would imply. Do you think the future is brightest for people in information-heavy fields or in physical jobs?
Tensor-Slayer@TensorSlay·
There is no other way to say this. We have become the bottleneck.
Nous Research@NousResearch·
People have a lot of great things to say about Hermes Agent. We asked it to compile all this praise and make its own hype video. It delivered:
Tensor-Slayer@TensorSlay·
@basil_k15_ Most probably not. But you can point your coding agent at the documentation to replicate the patch on the latest version.
Basil Khowaja@basil_k15_·
@TensorSlay Hey, one question: would this be able to receive the updates that OpenAI makes to the official Codex app?
Tensor-Slayer@TensorSlay·
5.4 codex should be wild.
Zach Mueller@TheZachMueller·
4.6 is growing on me these days. It still has a different cadence and such, but perhaps I judged it too harshly at first.
JingyuanLiu@JingyuanLiu123·
Some updates: I've always been bullish on TML, and I actually joined TML this Monday. Looking back, I feel so lucky to have had the privilege of working closely with the best optimization experts on the Muon optimizer (@Jianlin_S from Kimi and @clu_cheng from Meta). Now I am so excited to be able to work with @jxbz and build new cool things!

(On the other hand, there have always been some bad rumors about Meta TBD's potential failure. That's not true! From my personal experience, it really has the best talent in the field, and I really enjoyed learning from the lab. The avocado model will for sure be great!)
Quoting JingyuanLiu@JingyuanLiu123:

hmm, I sort of disagree; I am bullish on TML. I think they really, really have the top talents that I admire in the field, e.g. Jeremy and Sam for optimization, Songlin for Attn, Lia for MoE, Andrew for FSDPv2, and a bunch more folks. It's just natural that it takes a while to publish good models:
- dpsk started publishing papers in 2023, even published dpskv2 (which I think is already amazing) in mid 2024 and nobody cared, until dpskv3 and r1
- msh took 10+ months to deliver a first not-bad long-ctx model in 2023, was silent for the whole of 2024, and started to catch up gradually in 2025
- qwen didn't become a much better model than llama until qwen2.5, in mid or late 2024, while the lab has been there forever
It takes time to get infra and data done, but as long as you have good folks and principled ways of doing science and experiments, sooner or later scaling laws will pay back.

Tensor-Slayer@TensorSlay·
> Qwen 3.5 - Success
> Qwen 3 - Success
> Foundational frontier research - Success
> Multi-modal at scale - Success
> Small Language Models - Success
> Agentic Models - Success
> Reasoning Models - Success
> Ultra sparse MoEs - Success
>> mfw “asked to leave”