

Teos

@Teoss__
OnChain Chef | Cooking in @lacocinacrypto and @onchainbd. | Data Science and Math degree in the oven



Life is fucking electric bro. Don't fall for the doomer shit.



Releasing INTELLECT-2: We’re open-sourcing the first 32B parameter model trained via globally distributed reinforcement learning:
• Detailed Technical Report
• INTELLECT-2 model checkpoint
primeintellect.ai/blog/intellect…



I think AI is going to massively widen the cognitive divide. Those who already know how to think clearly, logically, and critically have gained immense leverage to do so at unimaginable scales. For those who don't already have that cognitive capacity, it's going to be harder than ever to acquire it. The only way to learn to think better now, paradoxically, is to refuse to use AI tools so you can strengthen your natural intellect. But that essentially means opting out of humanity's most powerful invention and walking when jet engines are available.

Claude for prompting + ChatGPT for static generation is insane. I made all of these entirely with AI. Vibe marketing is HERE.


This is wild: UC Berkeley shows that a tiny 1.5B model beats o1-preview on math via RL! They applied simple RL to DeepSeek-R1-Distill-Qwen-1.5B on 40K math problems, trained at 8K context, then scaled to 16K & 24K. 3,800 A100 hours (~$4,500) to beat o1-preview on math! Best of all, they open-sourced everything: the model, the training code (based on ByteDance's verl library), and the dataset.
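A quick back-of-the-envelope check of the cost figures above — the per-GPU-hour rate is derived from the two numbers in the post, not something the post states:

```python
# Figures quoted in the post: 3,800 A100 hours for ~$4,500 total.
a100_hours = 3_800
total_cost_usd = 4_500

# Implied rental rate (an inference, not a quoted price).
rate_per_gpu_hour = total_cost_usd / a100_hours
print(f"~${rate_per_gpu_hour:.2f} per A100-hour")  # ~$1.18 per A100-hour
```

That's roughly in line with spot-market A100 pricing, which is what makes the "$4,500 to beat o1-preview on math" claim plausible as a compute budget.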

Long $TAO (dTAO launch may cause some hype)
Long $LDO (staked ETF may cause some hype)
Long $ONDO or $CHEX (US tokenization may cause some hype)
Very important: sell when they pump. You're welcome.

In 2024, zero-knowledge verification for machine learning seemed impossible: the latency overhead was too high. We did multiple research reports on it. But it is 2025. The year of AGI. And we have made it possible. Today, we open-source our original research, ZKLoRA: a zero-knowledge protocol that verifies LoRA fine-tuning of open-source AI models in 1-2 seconds. And not just for toy models, but for state-of-the-art open-source models like Llama 3.3, with tens or hundreds of billions of parameters. Want to try it yourself? It is OPEN SOURCE. GitHub link is below:
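For context on what gets verified: LoRA fine-tuning leaves the base weights frozen and learns only two small low-rank factors per layer, so the artifact to attest is tiny compared to the full model. A minimal NumPy sketch of that update — toy shapes for illustration only, not ZKLoRA's actual API or proof system:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 8   # toy dimensions; real layers use thousands
alpha = 16                   # standard LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized, learned in fine-tuning

# Fine-tuned layer computes W x + (alpha / r) * B (A x);
# only A and B need to be shared and verified, not all of W.
delta = (alpha / r) * B @ A
W_tuned = W + delta

full_params = W.size          # 64 * 64 = 4096
lora_params = A.size + B.size # 8 * 64 + 64 * 8 = 1024
print(f"full: {full_params}, LoRA: {lora_params}")  # full: 4096, LoRA: 1024
```

Even at these toy shapes the LoRA factors are 4x smaller than the base matrix; at billion-parameter scale the gap is orders of magnitude, which is why proving correctness of just the LoRA update in 1-2 seconds is a much easier target than proving a full fine-tune.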
