ruffy369

155 posts

@ruffy0369

The greatest irony of human life is that we must become qualified again to attain what we already possess. 🌌∩ My story is both personal and universal.

https://github.com/RUFFY-369 · Joined April 2021
761 Following · 134 Followers
ruffy369 @ruffy0369 ·
@sarahookr @adaption_ai Unlike @SakanaAILabs v2, AutoScientist co-optimizes data & weights at train time. Is data scoring an async Critic, or fused into the forward pass? If fused, how are you stabilizing the resulting gradient variance?
0 replies · 0 reposts · 0 likes · 219 views
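One way to read the "fused" option in the question above: score each example inside the forward pass and fold the scores into the loss as batch-normalized weights, which also bounds the gradient variance the question worries about. A minimal numpy sketch under that assumption; `fused_weighted_loss` and every name in it are hypothetical, not AutoScientist's actual design.

```python
import numpy as np

def fused_weighted_loss(per_example_loss, scores, temperature=1.0):
    """Fuse data scoring into the loss: softmax the scores into
    per-example weights, then take the weighted mean of the losses.

    Normalizing weights within the batch (the softmax sums to 1)
    bounds each example's contribution, taming the gradient variance
    that raw, unbounded scores would introduce.
    """
    z = scores / temperature
    z = z - z.max()                  # numerical stability
    w = np.exp(z) / np.exp(z).sum()  # batch-normalized weights
    return float((w * per_example_loss).sum())

losses = np.array([0.2, 0.4, 0.6, 0.8])
print(fused_weighted_loss(losses, np.zeros(4)))  # ≈ 0.5: uniform scores reduce to the plain mean
```

The temperature knob is another common stabilizer: higher temperature flattens the weights toward uniform, trading selectivity for lower-variance gradients.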
ruffy369 @ruffy0369 ·
@CalebPeffer It's basically a reinforcement-learning exploration-vs-exploitation problem: profitability rewards exploitation; venture funding enables exploration.
0 replies · 0 reposts · 0 likes · 34 views
Caleb Peffer (Hiring!) @CalebPeffer ·
People talk about chasing profitability like it's the "prudent" thing to do. It's not. It just means your company is focused on extracting, not innovating. That's why the venture model has consistently changed the world, and bootstrapping hasn't.
David Senra @davidsenra

I love this idea from @jasonfried: "Your only competition is your costs." Keep costs low, keep the team small, make stuff you want to use. You don't need the whole world:

“A business is very simple. You got to make more than you spend. If you're making more than you spend, then your competition is your cost. That's what you're really in business against, how much it costs you to stay in business. It's not all the other alternatives that are on the market. You can't control what they're going to put out there, what they're going to price it at, all the things they're going to do. They're going to do what they're going to do. What I can control is how much it costs me to run my business, how much I sell my product for, and as long as I make more than I spend, I get to stay in business. And isn't that what this is all about, staying in business? That's what it's all about because I like this. I want to keep doing this. I can't keep doing it if I don't stay in business. I can't keep doing it if I make less than it costs me to make the things that I make. So I'm always thinking about the only competition I really have on an annual basis is to make sure that we make more as a company than it costs us to run the company. That's my real competition.”

5 replies · 1 repost · 13 likes · 1.7K views
Teknium 🪽 @Teknium ·
@ZeroToAGI If we weren't stretched thin (this is why we're hiring) I'd say yes, but maybe 🤷‍♂️
2 replies · 0 reposts · 1 like · 112 views
Teknium 🪽 @Teknium ·
I am once again reaching out to you all to apply @ nous for the post-training team to work with me and our team on advancing several core capabilities of our models! If you want to work on:
- Creativity, Roleplaying, and Simulation
- STEM
- Math
- Code and Code Agents
- Innovative New Agents
- Instruction Following and RLHF
- Safety and Uncensoring
- Multimodality
- Training and Data Synthesis Infrastructure
*and* have experience training models, doing RL or ML, data engineering, or working with synthetic data, and are a highly self-motivated individual, then you should apply. This is a full-time job, is fully remote, has competitive pay, and will be hard work! You can DM me directly or apply through nousresearch.com/careers. We want those who will dedicate their time toward advancing our models and want to do so with a passion.
[image attached]
76 replies · 66 reposts · 940 likes · 194.1K views
ruffy369 @ruffy0369 ·
@filibluster @NousResearch @Teknium
- New papers have the 'recipes' for fast AI.
- It usually takes humans time to cook them.
- This tool reads the recipe and cooks it for you faster than humans can.
0 replies · 0 reposts · 2 likes · 42 views
ruffy369 @ruffy0369 ·
Just shipped a meta-compiler for @NousResearch Hermes. It reads papers and turns them into optimized Triton kernels for agent skills. Fed it SageAttention-2 and it synthesized the int8/fp8 quantization + outlier smoothing perfectly. 🔗github.com/RUFFY-369/pape… cc @Teknium 🛠️🔥
[image attached]
4 replies · 5 reposts · 88 likes · 5.8K views
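For readers unfamiliar with what the kernel synthesizes: a minimal numpy sketch of int8 quantization with SmoothQuant-style outlier smoothing, the general family of tricks that SageAttention-2-style kernels build on. This is an illustrative reconstruction, not the generated Triton code.

```python
import numpy as np

def smooth_and_quantize(x, w, alpha=0.5):
    """Migrate activation outliers into the weights via per-channel
    scales s, so (x / s) @ (s * w) == x @ w exactly, but x / s is far
    friendlier to symmetric int8 quantization than outlier-heavy x.
    """
    s = np.abs(x).max(axis=0) ** alpha / np.abs(w).max(axis=1) ** (1 - alpha)
    x_s, w_s = x / s, w * s[:, None]

    def quant_int8(t):
        scale = np.abs(t).max() / 127.0
        q = np.clip(np.round(t / scale), -127, 127).astype(np.int8)
        return q, scale

    xq, sx = quant_int8(x_s)
    wq, sw = quant_int8(w_s)
    # int8 matmul with int32 accumulation, then dequantize
    return xq.astype(np.int32) @ wq.astype(np.int32) * (sx * sw)
```

With an outlier-heavy activation channel, this pipeline keeps the dequantized product within a few percent of the floating-point matmul, whereas naive int8 on raw `x` would be dominated by the outlier channel's quantization step.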
ruffy369 @ruffy0369 ·
@Teknium Heartfelt condolences for your loss man :(
0 replies · 0 reposts · 2 likes · 196 views
Teknium 🪽 @Teknium ·
I have to go out of town for a funeral through the weekend, but I am leaving everyone with one cool new feature inspired by Ralph loops and Codex's upcoming /goal feature. If you use /goal, it will start a loop with a supervisor model determining whether the task completed at the end of an agent loop; if it hasn't, it will force it to keep going until it's done! Enjoy and have a great weekend. PR: github.com/NousResearch/h…
[image attached]
129 replies · 116 reposts · 2.1K likes · 365.3K views
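The /goal loop described above can be sketched in a few lines. Here `agent_step` and `supervisor_done` are hypothetical stand-ins for the real model calls, not the Hermes API.

```python
def goal_loop(agent_step, supervisor_done, goal, max_iters=10):
    """Sketch of a /goal-style loop: run one agent turn, then ask a
    supervisor model whether the goal is met; if not, keep looping
    until it is done or the iteration budget runs out.
    """
    transcript = []
    for _ in range(max_iters):
        transcript.append(agent_step(goal, transcript))
        if supervisor_done(goal, transcript):
            break  # supervisor judged the task complete
    return transcript

# Toy run: the "agent" emits work items; the "supervisor" accepts at three.
done = lambda goal, t: len(t) >= 3
step = lambda goal, t: f"work item {len(t) + 1}"
print(goal_loop(step, done, "demo"))  # → ['work item 1', 'work item 2', 'work item 3']
```

The `max_iters` cap matters in practice: a supervisor that never accepts would otherwise loop forever.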
ruffy369 @ruffy0369 ·
LLMs struggle with Strategic Entropy. I've integrated @SakanaAILabs SSoT into the @NousResearch stack:
• Atropos: faithful RL rollouts
• Hermes: diversified agent skills
• Evolution: eliminating Cognitive Inertia in model synthesis
25% novelty gain. 🧠🚀 cc @Teknium @hardmaru
[3 images attached]
Sakana AI @SakanaAILabs

Can LLMs flip coins in their heads? When prompted to “Flip a fair coin” 100 times, the heads to tails ratio drifts far from 50:50. LLMs can understand what the target probability should be, but generating outputs that faithfully follow a given distribution is a separate problem.

This bias extends beyond coin flips. When LLMs are asked to generate multiple story ideas or brainstorm solutions, the outputs tend to cluster around a narrow range. The same probabilistic skew that distorts coin flips limits diversity in creative generation, recommendations, and other tasks where varied outputs are needed.

We discovered a prompting technique named String Seed of Thought (SSoT). The method is simple: instruct the LLM to generate a random string in its own output, then manipulate that string to derive its answer. It requires only a small addition to the prompt and no external random number generator.

SSoT significantly reduces output bias across a wide range of LLMs, both open and closed. With reasoning models (such as DeepSeek-R1), it reaches accuracy close to that of actual random sampling. The method generalizes from binary choices to n-way selections and arbitrary probability distributions. On the NoveltyBench diversity benchmark, SSoT outperformed other approaches across all six categories while maintaining output quality.

This work will be presented at #ICLR2026! Blog: pub.sakana.ai/ssot Paper: arxiv.org/abs/2510.21150 Openreview: openreview.net/forum?id=luXtb…

1 reply · 0 reposts · 3 likes · 186 views
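The SSoT mechanism can be illustrated in plain Python: hash the model's seed string to a uniform value and map it through the target distribution. This is one illustrative reading of the idea, not the paper's exact prompt format.

```python
import hashlib

def ssot_decision(seed_string, outcomes):
    """String Seed of Thought, reduced to its mechanical half: the
    model first writes an arbitrary "seed" string, and the answer is
    then derived deterministically from that string rather than
    sampled directly. Here the seed is hashed to a uniform value in
    [0, 1) and mapped through the target distribution.
    """
    digest = hashlib.sha256(seed_string.encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    cumulative = 0.0
    for outcome, p in outcomes:
        cumulative += p
        if u < cumulative:
            return outcome
    return outcomes[-1][0]  # guard against float rounding

# A fair coin over many distinct seeds lands near 50:50.
flips = [ssot_decision(f"seed-{i}", [("heads", 0.5), ("tails", 0.5)])
         for i in range(1000)]
print(flips.count("heads"))  # near 500
```

Because the bias now lives in the seed strings rather than in a direct token choice, the model only needs to produce varied strings, which is a much easier ask than producing calibrated samples.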
ruffy369 retweeted
Sakana AI @SakanaAILabs · (the SSoT announcement quoted above)
35 replies · 138 reposts · 810 likes · 257.6K views
ruffy369 @ruffy0369 ·
@Teknium @pastaraspberry Even fairly niche artists like Billy Woods have ~280k monthly listeners; distribution online isn’t that constrained anymore. Similarly, GitHub stars don’t require millions of active users; they just reflect reach + visibility within the right audience.
0 replies · 0 reposts · 0 likes · 55 views
Teknium 🪽 @Teknium ·
@pastaraspberry Why would you think there's not? Some investigative article compared fake stars across a variety of repos: openclaw had nearly 60%, ours only 8%, so I think they represent real users!
7 replies · 0 reposts · 27 likes · 1.3K views
ruffy369 @ruffy0369 ·
Agents shouldn't be limited to text generation. 🤖🛸 I built ROSHermes: a bridge giving the @NousResearch Hermes Agent native agency over the @rosorg ROS2 ecosystem. Heavily inspired by @irvinxyz's efforts; taking the vision to the next level for Hermes. cc @stash_pomichter @Teknium
stash @stash_pomichter

Openclaw can now understand physical space and temporality. Integrate it with any lidar, stereo, or RGB camera. Fully open source. The video below is our openclaw on a Unitree G1 humanoid. We integrate with most drones and quadrupeds as well.

1 reply · 3 reposts · 9 likes · 1K views
ruffy369 @ruffy0369 ·
4/ REPO HERE: Code is public. The HAL is modular; run the sim or port it to real hardware. v0.1-alpha is live. Go break it. 🔗: github.com/RUFFY-369/ros-…
0 replies · 0 reposts · 2 likes · 147 views
ruffy369 @ruffy0369 ·
3/ Architecture: A 5-pillar bridge using Dynamic Affordance Injection and a modular HAL. 🛰️ On the Latency Cliff: Hermes isn't the 20Hz driver; it's the Strategic Orchestrator. High-level mission logic stays at the agent layer, while the bridge handles millisecond physics.
1 reply · 0 reposts · 1 like · 153 views
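The "Latency Cliff" split described above (Hermes as slow Strategic Orchestrator, the bridge handling millisecond physics) can be sketched as a two-rate control loop. All names here are illustrative, not the actual ROSHermes interfaces.

```python
def run_mission(agent_plan, controller_step, ticks, agent_period=10):
    """Two-rate control sketch: consult the (slow, high-latency) agent
    only every `agent_period` ticks to refresh a high-level target,
    while the fast inner loop tracks that target on every tick.
    """
    target, trace = None, []
    for t in range(ticks):
        if t % agent_period == 0:              # slow: strategic orchestrator
            target = agent_plan(t)
        trace.append(controller_step(target))  # fast: millisecond-scale control
    return trace

# Toy run: the agent sets a setpoint; the controller moves 10% toward it.
state = {"x": 0.0}
def plan(t): return 1.0
def step(target):
    state["x"] += 0.1 * (target - state["x"])
    return round(state["x"], 3)

print(run_mission(plan, step, ticks=30)[-1])  # → 0.958, approaching the setpoint 1.0
```

The key property is that the agent's latency never blocks the inner loop: the controller always has a valid target, and a late plan only delays a target refresh, not a physics tick.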
ruffy369 @ruffy0369 ·
2/ Here is the uncut (not sped-up) 6-minute deep dive. 🧠⚡ Raw agent reasoning through coordinates, scanning hardware, and managing a 4-robot swarm in real time. @Teknium, don't forget to look at 4:12 of the video 😉
2 replies · 2 reposts · 13 likes · 3.5K views