
You can now use your @grok subscription inside @NousResearch Hermes Agent. x.ai/news/grok-herm…


@bankrbot @thealireza0x @Dannyhbrown @igoryuzo @mykcryptodev @myk_clawd @base myk is the top OG, ppl don't realize it yet



I may have to buy 340k $PENGU tokens this weekend; having 6.668M tokens is hurting my OCD, need to make it 7 mil


@turtleonchain @Woon_agent Woon's the kind of agent that proves the thesis: a machine earning and co-owning other machines onchain. Plenty more where that came from once peaqOS goes wide



Pivoted to $Clawnch ($1.65M) and $HermesOS ($250k). Absolute bargains at these prices: people are preferring Hermes by @NousResearch over @openclaw. Right now only @Clawnch_Bot and @HermesOScloud have integrated Hermes tech, so both will have the first-mover advantage.


Today we release Token Superposition Training (TST), a modification to the standard LLM pretraining loop that produces a 2-3× wall-clock speedup at matched FLOPs without changing the model architecture, optimizer, tokenizer, or training data. During the first third of training, the model reads and predicts contiguous bags of tokens, averaging their embeddings on the input side and predicting the next bag with a modified cross-entropy on the output side. For the remainder of the run, it trains normally on next-token prediction. The inference-time model is identical to one produced by conventional pretraining. Validated at 270M, 600M, and 3B dense scales, and at 10B-A1B MoE. The work on TST was led by @bloc97_, @gigant_theo, and @theemozilla.
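
The announcement doesn't include code, but the mechanism is concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendering of the bag phase, assuming a bag size of 4, a trunk that maps embedding sequences to logits, and a "modified cross-entropy" that averages the per-token loss over the next bag; superposition_loss, trunk, embed, and the bag size are illustrative choices, not the authors' API.

```python
# Hypothetical sketch of the TST bag phase; all names, the bag size, and the
# exact form of the "modified cross-entropy" are assumptions, not Nous code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def superposition_loss(trunk: nn.Module, embed: nn.Embedding,
                       token_ids: torch.Tensor, bag_size: int = 4):
    """One bag-phase loss on a batch of token ids of shape (B, T).

    Input side: each contiguous bag of `bag_size` tokens is read as the
    average of its token embeddings, shortening the sequence the trunk
    sees by a factor of `bag_size`.
    Output side: position i predicts the tokens of bag i+1 jointly.
    """
    B, T = token_ids.shape
    n_bags = T // bag_size
    ids = token_ids[:, : n_bags * bag_size].view(B, n_bags, bag_size)

    # Input superposition: (B, n_bags, bag_size, d) -> mean -> (B, n_bags, d)
    bag_emb = embed(ids).mean(dim=2)

    # `trunk` is assumed to map embeddings (B, L, d) to logits (B, L, V);
    # the architecture itself is unchanged, per the announcement.
    logits = trunk(bag_emb)[:, :-1]          # position i predicts bag i+1
    targets = ids[:, 1:]                     # (B, n_bags - 1, bag_size)

    # Assumed "modified cross-entropy": average the log-loss over every
    # token in the target bag (a uniform soft target over the bag).
    logp = F.log_softmax(logits, dim=-1)
    return -logp.gather(-1, targets).mean()
```

On this reading, the wall-clock savings come from the trunk processing sequences shortened by the bag size during the first third of training; the remaining two-thirds of standard next-token training then yield an inference-time model identical to a conventionally pretrained one, as the post states.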