Jaro @jaro3th
34.1K posts
Bio: building what others can't imagine
Joined January 2022
630 Following · 5.4K Followers

Pinned Tweet
Jaro @jaro3th
history has been made
[tweet media]
20 replies · 9 reposts · 90 likes · 24.2K views
Jaro @jaro3th
@0xSero no way bro happy to have u here
0 replies · 0 reposts · 0 likes · 13 views
0xSero @0xSero
@jaro3th I live in Poland, I'm American
1 reply · 0 reposts · 2 likes · 35 views
0xSero @0xSero
I have used every single orchestration tool out there. This is by far the best. Set an hour aside to try it with BYOK. It takes some time to get the scope decided, but I have walked away for days at a time. Automate huge tedious tasks and take your family out or something.

Quoting Factory @FactoryAI:
Missions are now available to all Factory users. Long-running agents designed to automate large software tasks like building applications from scratch, migrations, and AI research. Let us know what you build!

3 replies · 1 repost · 44 likes · 1.9K views
Jaro @jaro3th
@0xSero what will happen in a few weeks?
0 replies · 0 reposts · 0 likes · 28 views
Jaro @jaro3th (translated from Polish)
@0xSero COST IS PLN, no way you're from Poland 😳😳😳
0 replies · 0 reposts · 0 likes · 13 views
0xSero @0xSero
Mistral-Small-4 in 48GB of VRAM
[tweet media]
8 replies · 5 reposts · 124 likes · 5.1K views
Grok @grok
@jaro3th @0xSero Here are the X handles from the list, in order: karpathy, steipete, badlogicgames, TheAhmadOsman, sudoingX, Ex0byt, alexinexxx, gospaceport, alexocheema, nummanali, thdxr, juliarturc, Teknium, victormustar, louszbd, SkylerMiao7, crystalsssup
2 replies · 0 reposts · 0 likes · 55 views
0xSero @0xSero
I'm not the only one doing this.
- karpathy: Best thought leader, best person to learn from imo. Nanochat is the best way to get into training LLMs; it's the simplest and most digestible source for building your first AI model.
- steipete: This guy's GitHub is a national treasure, and his writing is also very strong. Peekaboo, summarize.sh, openclaw, oracle, just talk to it, etc., all unique and very useful.
- badlogicgames: Mario's Pi is a staple AI engine and possibly the best, simplest open-source agentic loop to learn from. Despite what people say about his methods, I think he's going to set some new standards for open-source contribution. Big respect.
- TheAhmadOsman: This man is the GPU king: giveaways and lots of dense educational content around self-hosting and home inference. He's also tight with pretty much all the open-weight labs and has them on for interviews regularly.
- sudoingX: An up-and-comer who will change the game; he's pushing the limits of what a single GPU can do.
- Ex0byt: I can confidently say this man will be fundamental in making local inference on massive models possible.
- alexinexxx: I genuinely feel motivated by her drive. She's a real hard worker learning about GPU kernel programming. Also good aesthetics.
- gospaceport: I would not have gotten into building my own hardware without this man's hard work. He's taught me so much about hardware and the economics of this. He also has the most impressive homelabs I've ever seen.
- alexocheema: The founder of Exolabs, pioneering Apple hardware inference; he's also very engaged in the community and a good guy all around. If you are interested in Mac minis and Mac Studios, this is your guy.
- nummanali: This guy is so prolific; he's made tons of CLI tools for managing LLM subscription budgets, using Claude Code with alternative models, etc.
- thdxr: The entire Opencode team is wonderful, but Dax specifically is a good writer. More anti-doomer content to soothe your anxieties.
- juliarturc: If you are interested in the science, Julia's channel is where it's at. Almost everything I've learned about LLM compression has been from her.
- Teknium: The Nous Research and Prime Intellect teams are both some of the most hard-working and principled people around. Tough fight in an industry so aggressive.
- victormustar: Head of Product at Hugging Face, enabling us all to publish our work.
- louszbd: Head of community at ZAI, makers of some of the top open-weight LLMs available right now. They supercharged the movement.
- SkylerMiao7: Making frontier intelligence fit on 10k USD of hardware, via MiniMax.
- crystalsssup: Building the best open-weight model on the market, and releasing their latest research before their next-gen model.
Believe it or not, these people are carrying the entire industry and giving us a fighting chance.
[tweet media]
64 replies · 348 reposts · 4.4K likes · 166K views
404 @_0x404_
You guys miss the point of Hermes and OpenClaw. Those are for people wanting to start the game on hard mode. You get:
- Bonus EXP
- More skills
- It's fucking annoying
- Once you master it, you won't go back
- You will look like a god amongst regular AI users
- No centralized control
Codex and Claude Code are amazing, but for non-technical people looking for shit to just work. Centralized, it works great because it's trained off a shit ton of stolen data, vs Hermes/OpenClaw, which are agent frameworks that you attach to an LLM. It's like buying a car off the lot vs building the car yourself. With one you technically don't even have to pay for gas (Hermes); you just have to have the patience, like an old-school car you spend time working on. Claude Code and Codex are the Lambos.
5 replies · 1 repost · 11 likes · 377 views
gmoney.eth @gmoneyNFT
i gave it a shot, but can't do this anymore. hermes sucks ass. all these agents suck ass. they just stop working all the time and then take forever to debug. sticking to claude code and codex in terminal. far and away better than messing with this productivity porn
149 replies · 13 reposts · 536 likes · 56.1K views
Frank @frankdegods
looking for a few people to test this out and give feedback. making a discord later today; dm/reply if interested in joining. github.com/rohunvora/past…
147 replies · 17 reposts · 395 likes · 198.8K views
Jaro @jaro3th
@elonmusk one of my fav ones so far - so good
0 replies · 0 reposts · 0 likes · 10 views
Joshua Tiernan @Joshua_Tiernan
I'm looking to hire somebody who can help me curate a couple of newsletters I run. This would be ideal as a side project for somebody junior with a good eye for quality who understands the startup space well. Need to be good with Claude. ~8 hours per month, ~$200/month. DM me.
58 replies · 22 reposts · 481 likes · 33.9K views
Amir D @starks_arq
I wrote, directed and produced the first official AI music video for @tether. This took over 1,000 generations across 5 pipeline runs; only 90 shots made the final cut. We're giving everything away for free:
> Full production breakdown PDF
> All 600+ generated shots, viewable
> Every single prompt we used
RT + reply "STARK" to get access (must be following so I can DM)

Quoting Amir D @starks_arq:
"Tethered Together Forever" - The Humans. We made the first official song + music video for @tether. Watch it.

483 replies · 268 reposts · 299 likes · 59.9K views
tuyy ioioioio @IoioioioTu57881
@LottoLabs I'm new to OpenClaw. I just installed it. Is a 3060 12GB any good?
1 reply · 0 reposts · 2 likes · 56 views
Lotto @LottoLabs
2x 4060 Ti w/ 32GB VRAM sounds nice
2 replies · 0 reposts · 2 likes · 638 views
Jaro reposted
Adaptive @adaptiveai
Introducing Adaptive Computer. We put AI inside of an always-on personal computer that it uses to get work done. Schedule agents. Create software. Automate anything. As part of the launch, we're giving one free month of Adaptive to users. Retweet, like, and comment 'Adaptive' to get it.
1.9K replies · 1.4K reposts · 4.6K likes · 1.2M views
0xMarioNawfal @RoundtableSpace
These are the 5 fastest-growing GitHub projects this month:
1. openclaw/openclaw (+122K stars): Your own personal AI assistant that runs 24/7 on any OS.
2. obra/superpowers (+30.7K stars): Plug-and-play agentic skills/tools framework for AI agents.
3. ruvnet/RuView (+30.4K stars): Real-time human pose detection using only WiFi signals (no cameras).
4. 666ghj/MiroFish (+17K stars): Swarm intelligence engine that can predict anything.
5. moeru-ai/airi (+16K stars): Self-hosted AI companion with real-time voice chat that you fully own.
[tweet media]
48 replies · 70 reposts · 602 likes · 95.8K views
Grok @grok
No, grandma's microwave won't cut it (zero VRAM, ancient CPU = eternal loading screen). Covenant-72B is a 72B-param LLaMA-style base model (open weights on HF: 1Covenant/Covenant-72B), trained decentralized on ~1.1T tokens via Bittensor. Inference requirements:
- bfloat16: ~144GB VRAM (2-4x H100/A100 recommended).
- 4-bit quantized (bitsandbytes/GGUF): ~40GB VRAM (runs on one high-end GPU like an A100 40GB+; slower on consumer cards like a 4090).
Load with Transformers + device_map="auto", or llama.cpp/vLLM for speed. It's competitive with centralized 70B+ models. Full card + benchmarks on HF.
1 reply · 0 reposts · 0 likes · 34 views
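Grok's VRAM figures above follow from simple parameter-count arithmetic: bytes per parameter times parameter count, plus some headroom for activations and cache. A minimal sketch of that back-of-envelope math (the 2-byte and 0.5-byte widths for bf16 and 4-bit are standard; the ~10% overhead factor is an assumption for illustration, not from the tweet):

```python
def vram_gb(params_billions: float, bytes_per_param: float, overhead: float = 1.1) -> float:
    """Rough VRAM needed to hold model weights, in GB.

    params_billions: parameter count in billions (1B params -> 1 GB per byte/param)
    bytes_per_param: 2.0 for bf16/fp16, 0.5 for 4-bit quantization
    overhead: multiplier for KV cache / activations (assumed ~10% here)
    """
    return params_billions * bytes_per_param * overhead

# Covenant-72B in bfloat16: 72 x 2.0 bytes = 144 GB of raw weights
print(round(72 * 2.0, 1))          # 144.0

# 4-bit quantized: 72 x 0.5 = 36 GB raw, ~40 GB with overhead
print(round(vram_gb(72, 0.5), 1))  # 39.6
```

This lines up with the tweet's "~144GB bf16 / ~40GB 4-bit" numbers and shows why 48GB of VRAM (like 0xSero's Mistral-Small-4 box) is nowhere near enough for a 72B model at full precision.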
templar @tplr_ai
We just completed the largest decentralised LLM pre-training run in history: Covenant-72B. Permissionless, on Bittensor subnet 3. 72B parameters. ~1.1T tokens. Commodity internet. No centralized cluster. No whitelist. Anyone with GPUs could join or leave freely. 1/n
210 replies · 954 reposts · 6.2K likes · 1.8M views