Hellas

252 posts


@hellasdotai

serverless AI secured by web3 - https://t.co/irTCUyjkrv

Joined August 2024
109 Following  160 Followers
Pinned Tweet
Hellas
Hellas@hellasdotai·
If agents are going to scale, they need: 1. A liquid compute market 2. Verifiable inference. Without these, running your open-claw or Claude agents will remain expensive at scale, and you won't know what's happening under the hood. The compute market has remained illiquid because external hardware providers can't prove that user workloads ran correctly. No one has fully solved this problem. At Hellas we're building a solution; take a look at our articles to learn more.
English
0
1
12
304
Hellas
Hellas@hellasdotai·
@juristr Seems like it's also useful in bot replies lol
English
0
0
0
10
Hellas
Hellas@hellasdotai·
@michael_timbs yup knowledge is gained through experience, not probabilities. But maybe there is a way to aggregate dev experience to improve agentic system design?
English
0
0
0
15
Hellas
Hellas@hellasdotai·
@youyuxi Hmm, so... competent devs + powerful tools ≠ higher performance. The math ain't mathin'.
English
0
0
2
523
liminally chris ⬡
liminally chris ⬡@Chris_8086·
@heynavtoor I can’t wait until LLM writing does not sound like this. “ The wildest part? That’s it. That’s your off-grid knowledge station.” 🤮
English
11
0
305
15K
Nav Toor
Nav Toor@heynavtoor·
🚨Someone just open sourced a computer that works when the entire internet goes down. It's called Project N.O.M.A.D. A self-contained offline survival server with AI, Wikipedia, maps, medical references, and full education courses. No internet. No cloud. No subscription. It just works. Here's what's packed inside: → A local AI assistant powered by Ollama (works fully offline) → All of Wikipedia, downloadable and searchable → Offline maps of any region you choose → Medical references and survival guides → Full Khan Academy courses with progress tracking → Encryption and data analysis tools via CyberChef → Document upload with semantic search (local RAG) Here's the wildest part: A solar panel, a battery, a mini PC, and a WiFi access point. That's it. That's your entire off-grid knowledge station. 15 to 65 watts of power. Works from a cabin, an RV, a sailboat, or a bunker. Companies sell "prepper drives" with static PDFs for $185. This gives you a full AI brain, an entire encyclopedia, and real courses for free. One command to install. 100% Open Source. Apache 2.0 License.
Nav Toor tweet media
English
482
3.2K
19.6K
830.7K
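The "document upload with semantic search (local RAG)" piece above can be sketched in a few lines. This is an illustrative toy, not N.O.M.A.D.'s actual code: it uses bag-of-words cosine similarity as a stand-in for real embedding vectors (which a local setup would get from a model served by Ollama), so it runs fully offline with no dependencies.

```python
# Toy sketch of local semantic search for a RAG pipeline:
# index documents as bag-of-words vectors, rank them by cosine similarity.
# A real setup would swap vectorize() for embeddings from a local model.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Stand-in for an embedding model: raw term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: dict[str, str], top_k: int = 1) -> list[str]:
    # Rank document names by similarity to the query, best first.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(docs[d])), reverse=True)
    return ranked[:top_k]

docs = {
    "firstaid.md": "treating burns cuts and wound care basics",
    "solar.md": "sizing a solar panel and battery for a mini pc",
}
print(search("solar panel battery sizing", docs))  # → ['solar.md']
```

The retrieved document would then be pasted into the local LLM's prompt as context, which is all "local RAG" means at this scale.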
Hellas
Hellas@hellasdotai·
@allshiny Better for those who are actually thinking and writing, I guess.
English
0
0
0
3
Hellas
Hellas@hellasdotai·
@tomfgoodwin How would judges evaluate ideas over others?
English
0
0
0
19
Tom Goodwin
Tom Goodwin@tomfgoodwin·
Someone should invent a new type of hackathon that's sort of the opposite - it's called a "What If". Rather than tech folk playing around with great tech to see what they can make, we get imaginative types in a room & just share solutions, or ideas, or things that should exist.
English
10
0
12
986
Hellas
Hellas@hellasdotai·
@Old_Samster Agreed, it's the only place that feels alive.
English
0
0
1
34
Sami Kassab
Sami Kassab@Old_Samster·
crypto-AI feels like the only corner of the broader industry where you wake up every morning actually excited I've seen every AI breakthrough (agentic loops, auto-research, personal AI assistants, vibe coding) get immediately absorbed into Bittensor everyday is something new
const@const_reborn

What if you could create an auto-research where your agent just focused on the eval and it was designed so that others could have swarms of agents across the web try to solve it and you paid them based on the ownership of the mechanism which produced the research

English
5
5
71
4.6K
Hellas reposted
CryptoEconLab
CryptoEconLab@cryptoeconlab·
1/ Tensor compute is the high-performance execution of tensor operations, and it powers modern AI from inference to training. Today, much of this compute runs on external infrastructure: centralized clouds, GPU marketplaces, and decentralized networks. Yet outsourcing compute comes with a structural issue: verifying that a specific computation was executed correctly is surprisingly hard. Without this guarantee, clients have no choice but to trust their provider. And trust doesn't scale. So how do you remove trust from outsourced compute? We studied how @hellasdotai solves this 👇: cryptoeconlab.com/blog/hellas-tr…
English
1
5
10
720
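The trust problem the thread describes can be made concrete with a commit-and-spot-check sketch. This is not Hellas's actual protocol, just a minimal illustration of the underlying idea: if the provider commits to a hash of every intermediate state, the client can verify a randomly sampled step by recomputing just that one transition, instead of redoing (or blindly trusting) the whole job.

```python
# Illustrative commit-and-spot-check scheme for outsourced compute.
# NOT Hellas's actual mechanism -- a minimal sketch of why per-step
# commitments let a client verify work without rerunning all of it.
import hashlib

def step(x: int) -> int:
    """The deterministic computation the provider is paid to run."""
    return (x * x + 1) % 7919

def commit(s: int) -> str:
    return hashlib.sha256(str(s).encode()).hexdigest()

def run_and_commit(x0: int, n: int):
    """Provider: run n steps, publish a hash commitment per state."""
    states, x = [x0], x0
    for _ in range(n):
        x = step(x)
        states.append(x)
    return x, states, [commit(s) for s in states]

def spot_check(states, commitments, i: int) -> bool:
    """Client: open states i and i+1, check both against the published
    commitments, then recompute only that one transition locally."""
    return (commit(states[i]) == commitments[i]
            and commit(states[i + 1]) == commitments[i + 1]
            and step(states[i]) == states[i + 1])

out, states, commitments = run_and_commit(x0=3, n=100)
print(spot_check(states, commitments, i=42))  # → True
```

A cheating provider who fabricates any state is caught with probability proportional to the fraction of steps sampled; real verifiable-compute systems replace the random sampling with succinct proofs, but the trust-removal goal is the same.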
Hellas
Hellas@hellasdotai·
@tomfgoodwin Don't forget the undercover agents who make sure all agents are productive.
English
1
0
2
22
Tom Goodwin
Tom Goodwin@tomfgoodwin·
I’m surely being stupid. But if AI is rather unconstrained by expertise or capacity or to some extent speed, why do we need to divide tasks or departments to 9 agents (the marketing agent, the optimization agent etc) to each do one thing. And then another agent to manage the swarm. Can't one agent just be doing it all, you know. It seems very skeuomorphic. Will we have HR agents to make sure the agent agents are being looked after? An office canteen manager agent to feed the agents? Seems daft
English
198
3
190
25.1K
Tom Goodwin
Tom Goodwin@tomfgoodwin·
@kaseyklimes But but but but it's an AI-powered Rube Goldberg machine. It can be even more messy. It can be even more pointless. It can be even more inelegant. But just look at how clever it is.
English
2
0
3
232
kasey
kasey@kaseyklimes·
we've already been through a few bundlings and unbundlings here:
1. hit the limitations of current models
2. break out responsibility to individual agents
3. the models improve, rendering your specialized division of labor irrelevant
4. realize you built a rube goldberg machine, strip it down to a single elegant agent
5. push your system to handle more complexity at higher levels of abstraction
return to step 1
Tom Goodwin@tomfgoodwin

I’m surely being stupid. But if AI is rather unconstrained by expertise or capacity or to some extent speed, why do we need to divide tasks or departments to 9 agents (the marketing agent, the optimization agent etc) to each do one thing. And then another agent to manage the swarm. Can't one agent just be doing it all, you know. It seems very skeuomorphic. Will we have HR agents to make sure the agent agents are being looked after? An office canteen manager agent to feed the agents? Seems daft

English
1
0
12
1.3K
Hellas
Hellas@hellasdotai·
@dollhares Thank you sirrr for your kind words
English
1
0
1
16
doll hairs
doll hairs@dollhares·
@hellasdotai is the most exciting and impressive blockchain company seen since 2021. unusually below radar, it will alter the on-chain ai landscape.

so who am i? this is a (relatively) fresh profile, as is my modus for large shifts. i covered vxv as the largest dedicated community profile in 2020-21, we ran it to 700m, before unmasking kasian as a fraud. that was my first lesson.

next i discovered bittensor pre-testnet, meaning 5-6 people total in their discord, a claim you can vet by searching my @confluent_ handle in their server. i left due to what i viewed as rampant greed, racism, dishonesty and backstabbing, none of which are in my nature. my second lesson.

ive stayed largely behind the scenes since, and kept a very small circle of trust with people i am proud to call true friends. without them, their help and guidance, i would not be here today. i then spent the last 3 years hunting relentlessly through code and shadows trying to repeat that feat, an exceedingly difficult task to pull off twice. many conditions must align and patience is required. lightning rarely strikes twice.

thousands upon thousands of people attempt this and fail. they vouch for subpar projects, shilling 10, 20, 100 different coins in a year, hoping to be right. they round trip, and for the most part they lose their followers money. if they do profit, they tend to do so off the hard work of others.

never fully backing anything since except neet for all the above reasons, i tried my hand at a few experiments like sho, wrote a few threads on agoras. meanwhile, in reality and known to few, the true goal was the hunt. with hellas, i believe that hunt is now over. it is an exceptional company run by a brilliant team with the right philosophy, doing work that will impact the ai industry both on and off chain. in 6 years ive never been this confident in a project. the conditions have finally, finally aligned. as articles are rolled out and word spreads, this profile will undoubtedly grow.
if you end up being lucky enough or crafty enough to find this early, good on you, you earned it. i look forward to meeting you & seeing where this takes us. one last ride with the one-eyed jack of diamonds, the devil close behind.
English
1
0
1
23
Distributed State
Distributed State@DistStateAndMe·
When you fix one bottleneck, the next one becomes visible. At @covenant_ai we built PULSE (arxiv.org/abs/2602.03839) to make weight sync 100× faster. That worked. Then the trainer itself became the new ceiling. So @erfan_mhi ran autoresearch on our GRPO trainer. 27% → 47% MFU. 16.7 min → 9.2 min per epoch. 1.8× faster on a single B200. Decentralized post-training, closing the gap with centralized. github.com/tplr-ai/grail
Erfan Miahi@erfan_mhi

Used autoresearch to make @grail_ai GRPO trainer 1.8x faster on a single B200. I kept postponing this for weeks since the bottleneck in our decentralized framework was mainly communication. But after our proposed technique, PULSE, made weight sync 100x faster, the training update itself became the bottleneck. Even with a fully async trainer and inference, a slow trainer kills convergence speed. A task that could've eaten days of my time ran in parallel while I worked on other stuff. Unlike original autoresearch, where each experiment is 5 min, our feedback loop is way longer (10-17 min per epoch + 10-60 minutes of installations and code changes), so I did minimal steering when it was heading in bad directions to avoid burning GPU hours. The agent tried so many things that failed. But, eventually found the wins: Liger kernel, sequence packing, token-budget dynamic batching, and native FA4 via AttentionInterface. 27% to 47% MFU. 16.7 min to 9.2 min per epoch. If you wanna dig deeper or contribute: github.com/tplr-ai/grail We're optimizing everything at the scale of global nodes to make decentralized post-training as fast as centralized ones. Stay tuned for some cool models coming out of this effort. Cheers!

English
4
16
105
7K
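One of the wins named above, token-budget dynamic batching, can be sketched simply. This is a hypothetical simplification, not the actual grail/GRPO trainer code: instead of a fixed batch size, variable-length sequences are packed into batches until a token budget is reached, so every batch does a similar amount of work and short sequences stop wasting padded compute.

```python
# Sketch of token-budget dynamic batching. Hypothetical simplification,
# not the actual grail trainer: pack sequence lengths into batches whose
# total token count stays under a budget, rather than using a fixed size.

def dynamic_batches(seq_lens: list[int], token_budget: int) -> list[list[int]]:
    batches: list[list[int]] = []
    current: list[int] = []
    used = 0
    for n in sorted(seq_lens, reverse=True):  # longest-first packs tighter
        if used + n > token_budget and current:
            batches.append(current)           # budget hit: flush the batch
            current, used = [], 0
        current.append(n)
        used += n
    if current:
        batches.append(current)
    return batches

lens = [900, 850, 120, 100, 80, 60]
for b in dynamic_batches(lens, token_budget=1024):
    print(b, sum(b))  # batch contents and its token count
```

With a fixed batch size of 2, the 900- and 850-token sequences would land together and blow past the budget while [100, 80] wastes most of it; the budgeted packing yields [900], [850, 120], [100, 80, 60] instead, which is the kind of evening-out that raises MFU.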
Hellas
Hellas@hellasdotai·
@ryanjanssen It will be balanced, kind of like how openclaw should run on separate hardware.
English
0
0
3
32
Hellas
Hellas@hellasdotai·
@Jason The Amazon Factory right now
GIF
English
0
1
2
75
Hellas
Hellas@hellasdotai·
@vladsavov Let AI create your daily mantras.
GIF
English
0
0
1
28
Hellas
Hellas@hellasdotai·
@0x506c61746f In your case it improves your internal reasoning; I'm not sure most people will use it like that. How else does it improve how you think?
English
1
0
0
9
Plato (idea/acc)
Plato (idea/acc)@0x506c61746f·
@hellasdotai to be fair, i've had more ideas since chatgpt came out. in the 4o period, every idea was (incorrectly) validated as brilliant, and this might have contributed to it. simply having "something" listening to my crazy ideas let them flow better. now the validation has gotten better with the new model
English
1
0
0
26
Hellas
Hellas@hellasdotai·
@MaximeRivest hmm agreed, and OpenAI recently announced Frontier Alliance Partners to help enterprises move from AI pilots to production. My point is that those with contracts will be able to fine-tune ChatGPT.
English
0
0
0
20
Maxime Rivest 🧙‍♂️🦙🐧
Maxime Rivest 🧙‍♂️🦙🐧@MaximeRivest·
@hellasdotai yes, finetuning! but not these frontier models/labs! it generally comes from finetuning chinese models on huggingface, not from openai, anthropic, google or xai
English
1
0
2
20