Pinned Tweet
Jaro


I have used every single orchestration tool out there. This is by far the best. Set an hour aside to try it with BYOK.
It takes some time to get the scope decided, but I have walked away for days at a time.
Automate huge, tedious tasks and take your family out or something.
Factory@FactoryAI
Missions are now available to all Factory users. Long-running agents designed to automate large software tasks like building applications from scratch, migrations, and AI research. Let us know what you build!

This was thought to be impossible 1 year ago.
Daniel Isaac@danpacary
Update: DeepSeek-V3 running at 1.4 tok/s on my MacBook Pro. 671 billion parameters. 355 GB of weights. One M4 Max, no cloud. Here's some of the system...

I'm not the only one doing this.
- karpathy
Best thought leader, best person to learn from imo. Nanochat is the best way to get into training LLMs; it's the simplest and most digestible source for building your first AI model.
- steipete
This guy's GitHub is a national treasure, and his writing is also very strong. Peekaboo, summarize.sh, openclaw, oracle, just talk to it, etc.: all unique and very useful.
- badlogicgames
Mario’s Pi is a staple AI engine and possibly the best, simplest, open-source agentic loop to learn from. Despite what people say about his methods, I think he’s going to set some new standards for open-source contribution. Big respect.
- TheAhmadOsman
This man is the GPU king: giveaways and lots of dense educational content around self-hosting and home inference. He’s also tight with pretty much all the open-weight labs and has them on for interviews regularly.
- sudoingX
This is an up-and-comer who will change the game; he's pushing the limits of what a single GPU can do.
- Ex0byt
I can confidently say this man will be fundamental in making local inference on massive models possible.
- alexinexxx
I genuinely feel motivated by her drive. She’s a real hard worker learning about GPU kernel programming. Also good aesthetics
- gospaceport
I would not have gotten into building my own hardware without this man’s hard work. He’s taught me so much about hardware and the economics of this. He also has the most impressive homelabs I’ve ever seen.
- alexocheema
The founder of Exolabs, pioneering Apple hardware inference; he’s also very engaged in the community and a good guy all around. If you are interested in Mac minis and Mac Studios, this is your guy.
- nummanali
This guy is so prolific; he’s made tons of CLI tools for managing LLM subscription budgets, using Claude Code with alternative models, etc.
- thdxr
The entire Opencode team is wonderful, but Dax specifically is a good writer. More anti-doomer content to soothe your anxieties.
- juliarturc
If you are interested in the science, Julia’s channel is where it’s at. Almost everything I’ve learned about LLM compression has been from her.
- Teknium
The Nous Research & Prime Intellect teams are both some of the most hard-working and principled people around. It's a tough fight in such an aggressive industry.
- victormustar
Head of Product at Hugging Face, enabling us all to publish our work.
- louszbd
Head of community at ZAI, makers of some of the top open-weight LLMs available right now. They supercharged the movement.
- SkylerMiao7
Making frontier intelligence fit on 10k USD of hardware. Via MiniMax
- crystalsssup
Building the best open-weight model on the market, and releasing their latest research ahead of their next-gen model.
Believe it or not these people are carrying the entire industry and giving us a fighting chance.


You guys miss the point of Hermes and OpenClaw.
Those are for people who want to start the game on Hard mode. You get:
- Bonus EXP
- More skills
- It’s fucking annoying
- Once you master it you won’t go back
- You will look like a god amongst regular AI users
- No centralized control
Codex and Claude Code are amazing, but they're for non-technical people and people who just want shit to work.
They're centralized, and they work great because they're trained off a shit ton of stolen data, whereas Hermes/OpenClaw are agent frameworks that you attach to an LLM (rough sketch of that loop below).
It’s like buying a car off the lot vs building the car yourself.
With one you technically don’t even have to pay for gas (Hermes); you just have to have the patience, like an old-school car you spend time working on.
Claude Code and Codex are the Lambos.
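Since "agent framework you attach to an LLM" is doing a lot of work there, here's a rough sketch of what that loop boils down to against any OpenAI-compatible endpoint. The endpoint URL, model name, and the single shell tool are made up for illustration; this is not OpenClaw or Hermes code, just the general pattern.

```python
# Generic agent loop: send messages to an OpenAI-compatible endpoint,
# execute any tool calls the model requests, feed results back, repeat.
import json
import subprocess
from openai import OpenAI

# Hypothetical local endpoint (e.g. a llama.cpp or vLLM server); swap in your own.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

tools = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a shell command and return its stdout",
        "parameters": {
            "type": "object",
            "properties": {"cmd": {"type": "string"}},
            "required": ["cmd"],
        },
    },
}]

messages = [{"role": "user", "content": "List the files in the current directory."}]

for _ in range(8):  # cap iterations so a confused model can't loop forever
    resp = client.chat.completions.create(model="local-model", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:          # no tool requested: model is done, print the answer
        print(msg.content)
        break
    messages.append(msg)            # keep the assistant turn in the transcript
    for call in msg.tool_calls:     # run each requested tool and return its output
        cmd = json.loads(call.function.arguments)["cmd"]
        output = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
        messages.append({"role": "tool", "tool_call_id": call.id, "content": output})
```

That loop is the whole "framework" part; everything else (skills, memory, budgets) is layers on top of it.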

Looking for a few people to test this out and give feedback.
Making a Discord later today, DM/reply if interested in joining.
github.com/rohunvora/past…

What are your initial impressions of Grok 4.20?
Major upgrades are still landing every week.
Testlabor@testerlabor
Grok 4.20 is now officially out of Beta. It's now on Auto, Fast, Expert & Heavy.

I wrote, directed and produced the first official AI music video for @tether
This took over 1,000 generations across 5 pipeline runs; only 90 shots made the final cut...
We're giving everything away for free:
> Full production breakdown PDF
> All 600+ generated shots viewable
> Every single prompt we used
RT + reply "STARK" to get access (must be following so I can DM)
Amir D@starks_arq
“Tethered Together Forever” - The Humans. We made the first official song + music video for @tether. Watch it.

@LottoLabs I'm new to OpenCLaw. I just installed it. Is a 3060 12GB any good?
Jaro retweeted

These are the 5 fastest-growing GitHub projects this month:
1. openclaw/openclaw — +122K stars
Your own personal AI assistant that runs 24/7 on any OS.
2. obra/superpowers — +30.7K stars
Plug-and-play agentic skills/tools framework for AI agents.
3. ruvnet/RuView — +30.4K stars
Real-time human pose detection using only WiFi signals (no cameras).
4. 666ghj/MiroFish — +17K stars
Swarm intelligence engine that can predict anything.
5. moeru-ai/airi — +16K stars
Self-hosted AI companion with real-time voice chat—you fully own it.


No, grandma's microwave won't cut it (zero VRAM, ancient CPU = eternal loading screen).
Covenant-72B is a 72B-param LLaMA-style base model (open weights on HF: 1Covenant/Covenant-72B), trained decentralized on ~1.1T tokens via Bittensor.
Inference reqs:
- bfloat16: ~144GB VRAM (2-4x H100/A100 recommended).
- 4-bit quantized (bitsandbytes/GGUF): ~40GB VRAM (runs on 1x high-end GPU like A100 40GB+, slower on consumer like 4090).
Load with Transformers + device_map="auto" or llama.cpp/vLLM for speed. It's competitive with centralized 70B+ models. Full card + benchmarks on HF.
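For anyone who wants the concrete version of "Transformers + device_map="auto"", here's a minimal 4-bit loading sketch. The repo id is the one from the post; the bitsandbytes settings, memory math, and prompt are my own rough assumptions, not anything from the model card.

```python
# Minimal sketch: load Covenant-72B in 4-bit with Transformers + bitsandbytes.
# Rough memory math: 72B params x 2 bytes (bfloat16) ~= 144GB;
# 4-bit is ~0.5 bytes/param ~= 36GB, plus overhead, hence the ~40GB figure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "1Covenant/Covenant-72B"  # repo id as given in the post

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit weights to fit ~40GB VRAM
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16 for quality
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs (and CPU if needed)
)

# It's a base model, so just continue a prompt rather than chat with it.
prompt = "Decentralized training over Bittensor works by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If you'd rather run a GGUF quant through llama.cpp or serve it with vLLM, same idea, just a different loader.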






