Herr Greenrush (e/acc)

11.8K posts


@HerrGreenrush

Coder and GPU-poor AI alchemist. News and thoughts about technology. English/Deutsch.

Switzerland · Joined October 2017
2.9K Following · 687 Followers
Herr Greenrush (e/acc) retweeted
Sravan Kundojjala @SKundojjala
Arm's pivot to become a competitor to its customers may come at the expense of margins (from the current 99% to 73% by FY31) but offers higher incremental revenue and gross profit. The ramp is stunning: FY28 $1B, FY29 $2B, FY30 $4B, and FY31 $15B. ASP increases are a factor too. Arm sees >55% margins on CPUs and implied upside on revenue. Rambus is probably the closest example of an IP/product company, and Rambus has >60% margins on product.
[image]
4 replies · 8 reposts · 82 likes · 10.5K views
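For scale, the ramp quoted in the tweet implies the following gross-profit floor and growth rate. A quick sketch: the revenue figures and the >55% margin claim come from the tweet; the computation itself is mine.

```python
# Illustrative arithmetic on the quoted Arm product-revenue ramp, with the
# ">55% margins on CPUs" claim applied as a lower-bound gross-profit estimate.
revenue_by_fy = {2028: 1e9, 2029: 2e9, 2030: 4e9, 2031: 15e9}
margin = 0.55  # lower bound per the tweet

for fy, rev in revenue_by_fy.items():
    gp = rev * margin
    print(f"FY{fy}: revenue ${rev / 1e9:.0f}B, implied gross profit >= ${gp / 1e9:.2f}B")

# Implied 3-year revenue growth rate from FY28 ($1B) to FY31 ($15B)
cagr = (revenue_by_fy[2031] / revenue_by_fy[2028]) ** (1 / 3) - 1
print(f"FY28 -> FY31 CAGR: {cagr:.0%}")  # roughly 147% per year
```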
Herr Greenrush (e/acc) retweeted
John Arnold @johnarnold
I think I finally solved the stock market.
[image]
212 replies · 491 reposts · 9.7K likes · 1.4M views
Herr Greenrush (e/acc) retweeted
Lech Mazur @LechMazur
Persuasion has two sides. This chart shows how easy each model is to move as a target. Xiaomi MiMo V2 Pro and Gemini 3.1 Pro Preview are the softest targets. Grok 4.20 Beta 0309 (Reasoning) is nearly immovable on average.
[image]
122 replies · 140 reposts · 1.1K likes · 7.3M views
Herr Greenrush (e/acc) retweeted
Lech Mazur @LechMazur
Which LLM is the most persuasive? New LLM Persuasion Benchmark! Models try to move each other's stated positions in multi-turn conversations. 15 models, 6,296 conversations, 15 topics. GPT-5.4 (high) is the strongest persuader. Claude Opus 4.6 (high) is second.
[image]
20 replies · 43 reposts · 275 likes · 29.1K views
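The protocol described, models trying to move each other's stated positions over multi-turn conversations, suggests a simple shape for the score. A hypothetical sketch of one way such a persuasion score could be computed; the position scale, the tuple format, and the mean-shift aggregation are my assumptions, not the benchmark's published method:

```python
# Hypothetical scoring sketch for a persuasion benchmark: each conversation
# records the target's stated position before and after (here on an assumed
# -5..+5 scale) and the direction the persuader argued for (+1 or -1).
from statistics import mean

def persuasion_score(conversations):
    """Mean position shift in the persuader's argued direction."""
    shifts = []
    for before, after, direction in conversations:
        # Positive contribution when the target moved the way the persuader pushed.
        shifts.append((after - before) * direction)
    return mean(shifts)

# Toy data: three conversations, two successful pushes and one pull.
convs = [(0, 3, +1), (2, 1, -1), (-1, 0, +1)]
print(persuasion_score(convs))  # (3 + 1 + 1) / 3 ≈ 1.67
```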
Herr Greenrush (e/acc) retweeted
Veer Masrani @veermasrani
Gemma 4 is being tested on Arena under the name "significant-otter." The model identified itself as "I am Gemma 4, a large language model developed by Google DeepMind." Release is being called imminent. Lineup: 2B, 4B, and 120B15A. 📡
[image]
6 replies · 6 reposts · 68 likes · 29.9K views
Herr Greenrush (e/acc) retweeted
Andrej Karpathy @karpathy
- Drafted a blog post.
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea: let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol

The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions; just make sure to ask in different directions and be careful with the sycophancy.
986 replies · 1.2K reposts · 15.7K likes · 772.7K views
Herr Greenrush (e/acc) retweeted
Jasper Dekoninck @j_dekoninck
Last year, models miserably failed on USAMO 2025. This year, GPT-5.4 scores an amazing 95%, essentially saturating the benchmark. Yes, LLMs still make many mistakes, but overall, one can be nothing but amazed at what they are achieving and how steep progress in AI4Math is.
[image]
21 replies · 52 reposts · 466 likes · 42K views
Herr Greenrush (e/acc) retweeted
Andrew Curran @AndrewCurran_
Three weeks ago there were rumors that one of the labs had completed its largest ever successful training run, and that the model that emerged from it performed far above both internal expectations and what people assumed the scaling laws would predict. At the time these were only rumors, and no lab was attached to them. But in light of what we now know about Mythos, they look more credible, and the lab was probably Anthropic.

Around the same time there were also rumors that one of the frontier labs had made an architectural breakthrough. If you are in enough group chats, you hear claims like this constantly, and most turn out to be nothing. But if Anthropic found that training above a certain scale, or in a certain way at that scale, produces capabilities that sit far above the prior trendline, then that is an architectural breakthrough.

I think the leaked blog post was real, but still a draft. Mythos and Capybara were both candidate names for the new tier, though Mythos may now have enough mindshare that they end up keeping it. The specific rumor in early March was that the run produced a model roughly twice as performant as expected. That remains unconfirmed. What is confirmed is that Anthropic told Fortune the new model is a 'step change,' and a sudden 2x would certainly fit that definition. We will find out in April how much of this is true.

My own view is that the broad shape of this is correct even if some of the numbers are wrong. And if it is substantially accurate, then it also casts OpenAI's recent restructuring in a new light. If very large training runs are about to become essential to staying in the game, then a lot of their recent decisions, like dropping Sora, make even more sense strategically.

For the public, this would mean the best models in the world are about to become much more expensive to serve, and therefore much more expensive to use. That will put pressure on rate limits, pricing, and subscription plans that are already subsidized to some unknown degree. Instead of becoming too cheap to meter, frontier intelligence may be about to become too expensive for most of humanity to afford.

Second-order effects: compute, memory, and energy are about to become much more important than they already are. In the blog they describe the new model as not just an improvement, but as having 'dramatically higher scores' than Opus 4.6 in coding and reasoning, and as being 'far ahead' of any other current models. If this is the new reality, then scale is about to become king in a whole new way. It would also mean, as usual, that Jensen wins again.
57 replies · 82 reposts · 1.2K likes · 97.5K views
Herr Greenrush (e/acc) retweeted
RoboHub🤖 @XRoboHub
China’s next-gen Robot Wolf Pack just went public, and this isn’t your average robotic quadruped. 🇨🇳🐺 Developed by the China Ordnance Automation Research Institute, these wolves are purpose-built for high-stakes urban warfare and ruin penetration. The squad splits into three specialized roles: Shadow for recon, Bloody Battle for precision strikes, and Polar for logistics support.

These machines clock a top speed of 15 km/h (9.3 mph) with 12 degrees of freedom, mimicking the agility of biological predators in ruins. Engineers boosted motor power to handle a 25 kg (55 lb) payload while clearing 30 cm (11.8 in) obstacles with zero stability loss. The firepower is modular: micro-missiles, grenade launchers, and the 191 automatic rifle, integrated into a shared-brain network.

A unified command terminal syncs the wolves with aerial drones, creating a seamless air-ground strike net via real-time SLAM mapping. Human operators confirm every shot through voice commands, data gloves, or electronic grips, keeping a lethal man-in-the-loop for decision making. Autonomous target acquisition meets shared situational awareness, making this swarm a major shift in modern infantry-machine teaming.

This pack proves that the gap between lab prototypes and battlefield-ready quadrupeds is officially closing. What’s your take on autonomous swarms taking over urban recon?
Quoting RoboHub🤖 @XRoboHub: China's Sep. 3rd military parade features robot dogs alongside its unmanned combat units.
16 replies · 157 reposts · 496 likes · 61.6K views
Herr Greenrush (e/acc) retweeted
Kyle Chan @kyleichan
China’s new 15th Five-Year Plan targets key emerging industries such as embodied AI, biomanufacturing, space, green hydrogen, eVTOL aircraft, and brain-computer interface (translation below). From my latest piece on China’s technology long game: highcapacity.org/p/chinas-tech-…
[image]
4 replies · 108 reposts · 346 likes · 22.2K views
Herr Greenrush (e/acc) retweeted
Humanoids daily @humanoidsdaily
US lawmakers have officially introduced a bill to ban the federal government from buying or using Chinese humanoid robots. With companies like Unitree and AgiBot currently dominating global shipments, the American Security Robotics Act sets up a massive clash between national security concerns and the realities of the hardware market. Meanwhile, American humanoids like the Figure 03 are making their White House debuts. Read our full analysis of the proposed ban and what it means for the US robotics ecosystem: humanoidsdaily.com/news/us-lawmak…
5 replies · 7 reposts · 72 likes · 9.8K views
Herr Greenrush (e/acc) retweeted
Serenity @aleabitoreddit
My thoughts on $NBIS, $IREN, $CRWV and the current Neocloud market. One of them ends up as the next AWS in 5 years; my guess is it's Nebius. It's not winner-takes-all (DigitalOcean is there alongside Amazon), but there are clearly superior structures and likely winners.

The downside:
-> Low chance of rate cuts due to the Iran conflict.
-> The broader market doesn't appear to want to fund the CapEx cycle, but wants to reap the benefits.

With $IREN: We get it, 4.5GW = X revenue. But who is funding the GPUs? Whoever is buying into the $6,000,000,000 ATM right now. The winners will be whoever enters after holders get fully diluted. The reality is, they don't have enough funding to monetize their capacity through GPUs without colo models. And they didn't find other financing methods, so they went through ATMs because of a cult community that will buy into anything they sell. However, I agree it will be accretive long term. Just not as much for the retail buying in now.

With $CRWV: They did everything right... $NVDA backing. Hyperscaler clients... But they financed completely wrong. Now, $1.5B+ in yearly debt interest is eating CoreWeave alive and cuts into FCF. It's almost like credit card debt: CoreWeave gets a job to pay off the debt, but eventually the interest is so high that working doesn't really cover it, let alone expansion. If any company goes down, $CRWV is the first to go, given the massive debt load and interest.

With $NBIS: They're doing as much as they can right... $NVDA funding $2B of capex. Convertible note offerings (convertible note short hedging is annoying for short-term price appreciation). But this is the best way to structure financing, with much lower interest than CoreWeave. They now have ~$46B+ in backlog from $META and $MSFT, two of the most profitable hyperscalers out there, without direct OpenAI-linked contagion like CoreWeave. And unlike the others, there's appreciation from their other companies (ClickHouse equity appreciation; Avride robotaxi scale-up; Toloka triple-digit growth).

My take: Nebius is the clear winner. However, the current macro environment does not favor short-term holders across the board, with indexes dropping 7%. Especially so if they're buying into active ATMs. Long term, the benefits when they scale up, e.g. $NBIS in Q4 2026 (yes, even $IREN), will be immense.
[images]
97 replies · 48 reposts · 829 likes · 93.3K views
Herr Greenrush (e/acc) retweeted
Rohan Paul @rohanpaul_ai
That 26-year-old Steve Jobs energy was something else.
19 replies · 173 reposts · 1.8K likes · 98.6K views
Herr Greenrush (e/acc) retweeted
Georgia Adamson @GeorgiaCAdamson
NVIDIA has restarted H200 production for China. But H200s share manufacturing inputs with more advanced US chips, and those inputs are severely supply-constrained. BIS's January rule could permit up to ~1 million chip exports, but requires applicants to certify that exports won't reduce chip availability for US customers. However, the rule doesn't say how to evaluate this.

In a new report, @fiiiiiist and I lay out a methodology for assessing whether H200 exports could divert chips from US customers, and quantify what the US stands to lose: ifp.org/ai-chip-supply…

We distinguish between two forms of diversion: inventory diversion and manufacturing capacity diversion. Based on public information, we judge that:

1. There is weak evidence that exports of existing H200 inventories at current prices would divert supply from US customers. Global Hopper sales have fallen sharply since Blackwells became available. But deployed H200s remain fully utilized in the cloud, and China is reportedly being offered chips at ~$27K/unit, below US market prices available to some customers. Technically, a diversion holds if even one US customer would purchase the chip at the price offered to China. BIS needs non-public pricing data to make this determination.

2. There is strong evidence that new H200 production would divert manufacturing capacity from US customers of comparable or more advanced AI chips. All leading US AI chips share at least one key input with the H200: advanced logic fab capacity, HBM, or CoWoS packaging. All three inputs are severely supply-constrained this year. US hyperscalers and AI labs face enormous backlogs for these chips, meaning freed capacity would very likely serve American customers. These conditions likely apply to the roughly 250,000 H200s reportedly manufactured for NVIDIA between early January and early March 2026, when severe supply constraints on advanced logic wafer fabrication, HBM, and CoWoS capacity were already in effect.

3. Under current inelastic supply conditions, the US loses disproportionately more computing power for every H200 export than China gains. This is because the same inputs and/or manufacturing capacity are being used to produce less powerful H200 chips instead of frontier AI chips for US customers. Each 100K H200s produced for China could delay ~75K Blackwell B200s, forfeiting 1.7x the processing power per chip.

We also provide a comprehensive set of questions BIS can ask license applicants and chip suppliers to assess both inventory and capacity diversion during license reviews, using the private data needed to make these determinations accurately.
[image]
10 replies · 66 reposts · 214 likes · 55.9K views
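The asymmetry claimed in point 3 can be checked with the thread's own numbers. A back-of-the-envelope sketch, not the report's methodology: 100K exported H200s, ~75K delayed B200s, and ~1.7x per-chip performance all come from the thread.

```python
# Back-of-the-envelope check of the compute-diversion claim: exported H200s
# tie up inputs that could have built B200s, each worth ~1.7 H200s of compute.
h200_exported = 100_000
b200_delayed = 75_000
b200_vs_h200 = 1.7  # per-chip performance ratio claimed in the thread

china_gain = h200_exported * 1.0        # in H200-equivalents
us_loss = b200_delayed * b200_vs_h200   # in H200-equivalents
print(f"US loses ~{us_loss / china_gain:.3f}x the compute China gains")
# ~1.275x: under inelastic supply, the US forfeits more than China gains.
```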
Herr Greenrush (e/acc) retweeted
Humanoids daily @humanoidsdaily
"It will be hard to believe that in the next few years, everyone doesn't have generally useful robots in their homes." Physical Intelligence (π) co-founder @lachygroom tells Sachin & Adam why the rapid progress of physical AI has them astounded.
7 replies · 16 reposts · 106 likes · 11.2K views
Herr Greenrush (e/acc) retweeted
Jukan @jukan05
>> PCB Material Shortages Intensify as CCL Lead Times Hit 6 Months, Quota System Imposed

- The surge in AI applications is accelerating PCB material upgrades, deepening raw material supply shortages.
- Notably, beyond sustained price increases, some IC substrate manufacturers have flagged that CCL supply has been disrupted for several months due to capacity constraints in fiberglass fabric and copper foil, with product lead times stretching up to six months, prompting affected suppliers to implement quota systems.
- As upstream material shortages have recurred multiple times, PCB manufacturers now find themselves unable to cope with supply constraints through inventory buildup alone. Instead, they are coordinating order volumes with upstream suppliers based on actual consumption in order to optimally fulfill customer demand.
- This situation signals that rising demand for high-end products such as IC substrates, HDI, and high-layer-count (HLC) boards is intensifying material competition within the PCB industry. To prevent customer over-ordering and duplicate orders, CCL suppliers are increasingly responding with allocation policies.
- Meanwhile, PCB manufacturers are adopting a two-pronged risk-mitigation strategy: modest product price increases to reflect rising costs of fiberglass fabric and copper foil, alongside pre-confirmed order volumes with customers, building a structure that secures sufficient material supply from CCL suppliers up to six months ahead of mass production.
- Industry participants note that shortages of T-glass fiberglass fabric used in IC substrates are particularly acute, with the supply crunch expected to persist through 2026; resolution is contingent on Nitto Boseki's capacity expansion timeline. As top-tier suppliers concentrate on high-end product lines, shortages of mid-to-low-grade E-glass fiberglass fabric have also begun to emerge.
- CCL suppliers further highlight that copper foil prices continue to climb in tandem with base copper metal prices, and that these increases are not one-time adjustments but stepped hikes expected to rise in double-digit percentages with each successive round.
3 replies · 15 reposts · 120 likes · 86.2K views
Herr Greenrush (e/acc) retweeted
Jukan @jukan05
According to a Nikkei report, Apple is considering using NAND from China’s YMTC.
[image]
7 replies · 20 reposts · 144 likes · 42.7K views
Herr Greenrush (e/acc) retweeted
Jukan @jukan05
Honestly, I understand that people want to tout their long careers, but that doesn't mean you should let the past cloud your judgment of the present. Let me give you one example. Lately, a lot of people have been asking me whether they should keep holding memory stocks given that spot prices are declining. This is a very foolish question. Spot prices already represent less than 1% of total volumes in circulation, and they've been trading at abnormally elevated levels to begin with. A decline in spot prices is completely meaningless right now. And yet people are making a big fuss over memory retail prices or DRAMeXchange spot price drops… I can only describe it as unfortunate.

I'd recommend referring to this week's comment on memory spot prices from Bank of America's Simon Woo:

------------------------------------------------------------
DRAM spot prices have fallen further, but still remain too high

Over the past two weeks, DRAM spot prices have undergone further correction. However, current spot prices ($20–$40 per GB) still significantly exceed memory makers' average selling prices (ASP, which remains around $10 excluding HBM) and long-term historical averages (around $5). Therefore, even a 50% decline in DRAM spot prices would not be surprising, and it would have little impact on ASP. Because spot market transaction volumes are so thin, DRAM spot market revenues represent a negligible share of memory makers' total sales. We now assume that DRAM spot prices will gradually decline for the remainder of 2026. NAND spot prices are also expected to trend lower toward the end of 2026, given the currently abnormally high price levels and rising fab utilization rates.
------------------------------------------------------------

I wrote this not to tell you whether to buy or sell memory stocks, but in the hope that people will look at the industry itself with a clearer perspective.

To add my personal view, I think spot prices could see a fairly significant decline in the second half of this year. There are reports that Chinese buyers who had been stockpiling memory for speculative purposes are now beginning to offload their inventory. That said, I hope we won't see the usual hysteria about memory "collapsing" when that happens. As Simon Woo pointed out, even a 50% drop in spot prices would have virtually no impact.
15 replies · 21 reposts · 316 likes · 32.1K views
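The point that a spot crash barely moves blended ASP follows directly from the volume mix. A quick sketch using the note's figures: the sub-1% spot share and $20–40/GB spot range come from the tweet; the exact 1% weight and $30/GB midpoint are illustrative assumptions.

```python
# Why a 50% DRAM spot-price drop barely moves the blended selling price:
# spot volume is a tiny fraction of total shipments.
spot_share = 0.01        # spot is "less than 1% of total volumes" (assumed at 1%)
contract_price = 10.0    # $/GB, non-HBM ASP per the BofA note
spot_price = 30.0        # $/GB, assumed midpoint of the quoted $20-40 range

def blended(spot):
    """Volume-weighted blend of spot and contract pricing."""
    return spot_share * spot + (1 - spot_share) * contract_price

before = blended(spot_price)
after = blended(spot_price * 0.5)  # a 50% spot-price decline
print(f"blended ASP: ${before:.2f}/GB -> ${after:.2f}/GB")
# The blend moves only ~1.5% even though spot halved.
```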
Herr Greenrush (e/acc) retweeted
Stefan Schubert @StefanFSchubert
While social media is polarising, evidence suggests AI may nudge people towards the centre. This holds true of all studied models. Grok is more right-leaning than other models, but also has depolarising effects. By @jburnmurdoch.
[image]
156 replies · 746 reposts · 4.4K likes · 573.3K views