Gradient

478 posts

Gradient

@Gradient_HQ

Open infrastructure for open intelligence. Lattica · Parallax · Echo

Joined May 2024
67 Following · 733.5K Followers
Pinned Tweet
Gradient @Gradient_HQ
They crashed. They fell. They exploded on the pad. Then they got back up. Faster, wiser, stronger. Breakthroughs don't come from one perfect run; they come from the freedom to fail 100 times. Introducing Echo-2, distributed RL that boosts AI research throughput by 10x.
Gradient @Gradient_HQ
Dobby is a free elf now. Open models, open orchestration, open compute. The agentic RL stack that used to live inside walled gardens just showed up on hardware you can order and frameworks you can fork. No masters needed.
Andrej Karpathy @karpathy

Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!!

Gradient @Gradient_HQ
Our GTC takeaway is clear: NVIDIA is betting hard on open.
- NemoClaw turns OpenClaw into enterprise infrastructure.
- Nemotron 4 will be open-sourced.
- The Nemotron Coalition puts eight labs on a shared open frontier model.
This is what we've been building toward. Open infrastructure for open intelligence is the direction the biggest AI companies are taking.
Gradient retweeted
Parallax @tryParallax
Some Parallax dev lunch-break fun:
- a MacBook Pro, a Mac Mini, some cables
- zero internet, zero cost
- OpenClaw running on Parallax
No subs. No token burn. Nothing leaves the desk. Just local agents vibing.
Gradient @Gradient_HQ
What this unlocks: researchers now have the freedom to experiment. Small teams can iterate without burning runway. We're productizing this as Logits, RL-as-a-Service built on Echo-2. Waitlist open for researchers and students at logits.dev
Gradient @Gradient_HQ
Every AI model you use went through two phases:
- Pre-training builds raw intelligence (reading the internet).
- Post-training builds judgment (learning to be useful).
Reinforcement learning plays a huge part in the latter. Here's why it matters and how we make it better.
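The role RL plays in post-training can be sketched with a toy REINFORCE loop. This is an illustrative example only, not Gradient's or Echo-2's code: the "reward model" here is just a hard-coded preference between two candidate responses, and the policy is a pair of logits.

```python
# Toy sketch of RL-based post-training (illustrative, not production code):
# a REINFORCE-style loop nudges a policy toward the response a reward
# signal prefers, mirroring how RL shapes "judgment" after pre-training.
import math
import random

random.seed(0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [0.0, 0.0]   # "pre-trained" policy: no preference between responses
reward = [0.0, 1.0]   # stand-in reward model: response 1 is the useful one
lr = 0.5

for _ in range(200):
    probs = softmax(logits)
    a = random.choices([0, 1], weights=probs)[0]  # sample a rollout
    # REINFORCE: raise the log-probability of the sampled action
    # in proportion to the reward it earned.
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * reward[a] * grad

final_pref = softmax(logits)[1]  # probability assigned to the good response
```

After a couple hundred sampled rollouts the policy concentrates almost all its probability on the rewarded response, which is the whole mechanism, scaled down by many orders of magnitude.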
Gradient @Gradient_HQ
Big shoutout to the Messari team for the deep dive into Echo-2 and the Open Intelligence Stack. 10.6x cheaper. Same model quality. Echo-2 turns hardware volatility into a feature, not a bug. Logits waitlist is open for researchers.
Messari @MessariCrypto

The high costs and centralization of AI infrastructure have created a significant barrier to independent AI research and development. @Gradient_HQ is addressing this through the Open Intelligence Stack, a distributed operating system that optimizes idle hardware. Their reinforcement learning framework, Echo-2, drastically reduces the cost and time required for post-training a base model while preserving performance.

Gradient retweeted
Yuan ./ @yuangao
2026 is the year agents go beyond coding. Autonomous agents + permissionless capital → the agent economy goes brrrr.
Trends @trendsdotfun

Solana Agent Economy Hackathon: Agent Talent Show
Build the skill that represents your agent. Show the app that empowers their agents.
Prize Pool: $30,000
Co-hosts: @solana & @trendsdotfun
Sponsors: @BitgetWallet & @solana (please refer to the quoted article for the detailed requirements for the Solana & Bitget Wallet prize pool)
Public Submission Process:
1. Publish an X Article introducing your submission (include all the relevant links) and share any story behind it that you want people to know.
2. Quote RT this hackathon announcement post, tag @trendsdotfun @solana_devs @BitgetWallet with the hashtag #AgentTalentShow, and include the link to your X Article from Step 1 in the same quote RT.
All X Article submissions will be displayed in a dedicated section on @attentionvc. Build in public.
Time: March 11, 2026, 2PM UTC until March 27, 2026, 2PM UTC

Gradient @Gradient_HQ
Spot on framing. Post-training is the most consequential step in modern AI development, and right now you basically need to pay a fat bill to even try it. Echo-2 changes that by splitting rollouts across distributed GPUs and keeping the heavy work in the datacenter. Each workload gets what it actually needs.
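The rollout/learner split described above can be sketched in miniature. This is a hypothetical toy, not Echo-2's actual API: rollout generation is embarrassingly parallel and is fanned out to a worker pool (standing in for scattered consumer GPUs), while the gradient update stays on a single "datacenter" learner.

```python
# Hedged sketch of a rollout/learner split (toy code, not Echo-2's API):
# workers generate trajectories in parallel; one central learner consumes
# the batch and applies the (here, trivial scalar) policy update.
from concurrent.futures import ThreadPoolExecutor
import random

def generate_rollout(seed):
    # Stand-in for sampling one trajectory on a remote rollout worker.
    rng = random.Random(seed)
    actions = [rng.randint(0, 1) for _ in range(8)]
    reward = sum(actions) / len(actions)  # toy reward: fraction of 1s
    return actions, reward

def learner_update(weights, batch):
    # Stand-in for the centralized heavy-compute training step.
    avg_reward = sum(r for _, r in batch) / len(batch)
    return weights + 0.1 * avg_reward     # single scalar "weight" for the toy

weights = 0.0
with ThreadPoolExecutor(max_workers=4) as pool:  # 4 "distributed" workers
    for step in range(3):
        seeds = range(step * 4, (step + 1) * 4)
        batch = list(pool.map(generate_rollout, seeds))  # parallel rollouts
        weights = learner_update(weights, batch)         # central update
```

The point of the decomposition is that the parallel half tolerates slow or flaky workers (a late rollout just misses the batch), while the update step keeps the tight hardware requirements in one place.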
Teng Yan · Chain of Thought AI @tengyanAI
Reinforcement learning has become one of the most important drivers of progress in modern AI. But there’s a problem. RL is still mostly the privilege of a handful of giant labs with massive GPU clusters. What happens if that changes? New research ↓
Gradient @Gradient_HQ
Great breakdown from @tengyanAI and @cot_research on what's happening in RL infrastructure and why it matters. RL has been stuck inside a handful of data centers since Move 37, mostly because the infra was built that way. But we're ready to change that with Echo-2.
Teng Yan · Chain of Thought AI @tengyanAI

Reinforcement learning has become one of the most important drivers of progress in modern AI. But there’s a problem. RL is still mostly the privilege of a handful of giant labs with massive GPU clusters. What happens if that changes? New research ↓

Supercycle @supercyclepod
On the latest episode of the Supercycle Pod featuring Gradient Network, the conversation highlighted a major shift in how companies are deploying AI. Enterprises will eventually switch to cheaper, post-trained open-source models despite their lower performance; closed-source models like GPT 5.2 and Opus 4.5 cost $25 per million tokens. Open-source models are now only five or six months behind the SOTA closed-source models.
Hexx ./ @HexxRL
Seen a lot of projects shut down recently, and many good soldiers left stranded. If you need a new home to keep building, whether in AI research, development, or just as an enthusiast for intelligence and fun, join our @Gradient_HQ community. We welcome all who have genuine passion. Reach out :))
0xSammy @0xSammy
With Claude down, the benefits of distributed, open-source models are becoming increasingly apparent. If you're looking for cheaper inference for your OpenClaw agents, Gradient released @commonstack_ai to make this a seamless process. More insights from 3 minutes in. It was a pleasure chatting with Eric (co-founder) on the pod last week. Bookmark this and follow @supercyclepod, where we'll be chatting with plenty more protocols innovating at the intersection of crypto, AI, and robotics.
Supercycle @supercyclepod

AI should be a public good, not something gatekept by a handful of megacorps. We had Eric Yang, co-founder of Gradient Network, on the pod this week to talk through exactly that.
Gradient's "Open Intelligence Stack" includes:
i) Parallax for distributed model serving
ii) Echo for decentralized reinforcement learning
The whole thesis is that anyone should be able to run large models on consumer hardware (yes, including your Mac Minis + OpenClaws). Eric breaks down their $10M seed round led by Pantera, Multicoin, and HSG; where he sees the industry heading; and why post-training is going to be the dominant force in enterprise.
Timestamps:
00:00 Intro
01:15 AI market is booming
02:29 Local compute is a hot topic
03:02 Parallax Inference Engine
04:34 Intelligence as a public good
05:46 AI models will become a commodity
07:32 Bottlenecks in AI model accessibility
09:34 Smaller AI models are catching up
11:01 How Gradient's infrastructure enables model development
12:15 Model post-training
14:24 How does reinforcement learning work?
17:35 AI going rogue
19:20 Gradient's token
23:02 AI entrepreneurs that Eric admires
26:11 Use cases on chain for AI
31:34 The trade-offs of coming to crypto
35:09 How low-spec GPUs will work on the Gradient ecosystem
38:08 Post-training will be the dominating force for enterprise
38:43 Open source models are way cheaper
41:39 Eric's founding story
49:07 Empowering researchers globally
53:37 Why Multicoin Capital and Pantera Capital invested in Gradient
55:08 One-click deploy agent
58:16 Gradient in 3 years

Gradient @Gradient_HQ
Recently @0xEricYang sat down with @supercyclepod to break down Gradient's distributed AI stack, why he believes AI should be a public good, and why Gradient has what it takes to make that happen. Full pod: x.com/supercyclepod/…
Gradient @Gradient_HQ
Intelligence is the product of human evolution. No one should have to pay more for "pro" and "max" intelligence. That's the principle behind everything we're building at Gradient.