
@gradientintern
push past limits | real niu for @Gradient_HQ | part time troll | full time janitor | professional edging specialist



Morgan Stanley has again raised its capex forecasts for the five hyperscalers (Amazon, Alphabet, Meta, Microsoft, and Oracle). It now expects them to spend about $805bn this year, up from a previous estimate of $765bn. For next year, the forecast has been lifted from $951bn to $1.1 trillion. To put that into perspective, their 2026 spending alone would be roughly equal to what all non-tech companies in the S&P 500 spent combined in 2025. The expected ~$800bn for 2026 is nearly double 2025 levels and about three times what was spent in 2024.
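A back-of-envelope check on the ratios above. The ~$805bn figure is from the post; the implied prior-year levels are my arithmetic from the post's "nearly double 2025" and "three times 2024" claims, not Morgan Stanley figures:

```python
capex_2026_bn = 805  # Morgan Stanley's raised hyperscaler estimate, per the post

# Implied prior-year spend if 2026 is ~2x 2025 and ~3x 2024
implied_2025_bn = capex_2026_bn / 2  # ~402.5bn
implied_2024_bn = capex_2026_bn / 3  # ~268bn

print(f"implied 2025: ${implied_2025_bn:.0f}bn, implied 2024: ${implied_2024_bn:.0f}bn")
```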



xAI has launched Grok 4.3, scoring 53 on the Artificial Analysis Intelligence Index with improved agentic performance, ~40% lower input price, and ~60% lower output price than Grok 4.20. The release places @xAI just above Muse Spark and Claude Sonnet 4.6 on the Intelligence Index, and 4 points ahead of the latest version of Grok 4.20. Grok 4.3 improves its Intelligence Index score while reducing the cost to run the benchmark suite.

Key Takeaways:

➤ Grok 4.3 improves on cost-per-intelligence relative to Grok 4.20 0309 v2: it scores higher on the Intelligence Index while costing less to run the full benchmark suite. Grok 4.3 costs $395 to run the Artificial Analysis Intelligence Index, around 20% less than Grok 4.20 0309 v2, despite using more output tokens. This makes it one of the lower-cost models at its intelligence level.

➤ Large increase in real-world agentic task performance: the biggest single-benchmark improvement is on GDPval-AA, where Grok 4.3 scores an Elo of 1500, up 321 points from Grok 4.20 0309 v2's 1179, surpassing Gemini 3.1 Pro Preview, Muse Spark, GPT-5.4 mini (xhigh), and Kimi K2.5. Grok 4.3 narrows the gap to the leading model on GDPval-AA, but still trails GPT-5.5 (xhigh) by 276 Elo points, an expected win rate of ~17% against GPT-5.5 (xhigh) under the standard Elo formula.

➤ Grok 4.3 performs strongly on instruction following and agentic customer support tasks. It gains 5 points on 𝜏²-Bench Telecom to reach 98%, in line with GLM-5.1, and maintains Grok 4.20 0309 v2's 81% IFBench score.

➤ Grok 4.3 gains 8 points on AA-Omniscience Accuracy, but its AA-Omniscience Non-Hallucination Rate drops by 8 points, so Grok 4.20 0309 v2 still leads on Non-Hallucination Rate, followed by MiMo-V2.5-Pro, in line with Grok 4.3.

Congratulations to @xAI and @elonmusk on the impressive release!
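For the curious, the ~17% figure falls out of the standard logistic Elo expectation. A minimal sketch (the 276-point gap is from the post; the formula is the usual Elo expected-score rule):

```python
def elo_win_prob(rating_gap: float) -> float:
    """Expected score for the lower-rated player, given how many
    Elo points the opponent is ahead (standard logistic Elo formula)."""
    return 1.0 / (1.0 + 10.0 ** (rating_gap / 400.0))

# Grok 4.3 (1500) vs. GPT-5.5 (xhigh), which is 276 Elo points ahead per the post
prob = elo_win_prob(276)  # expected win rate of roughly 17%
print(f"{prob:.1%}")
```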

We're hiring at Gradient. Building open-source environment infrastructure for our distributed RL training stack: reproducible, and scalable to thousand-GPU runs.

Looking for 1–2 RL Environments engineers / tech leads: you've designed verifiers, built sandboxes for agentic RL rollouts, or shipped RL training data pipelines that survived contact with real training. Domain depth in math, code, agents, tools, or GUI is a plus. PhD not required.

Also hiring research interns: PhD / Masters students with hands-on RLHF / RLVR / GRPO / DPO / agentic RL experience. Open-source footprint matters more than paper count. Most intern roles convert post-grad. No age cap.

Founding-team-level equity for the right people. DMs open.
