Jobless
@Jobless0x
1K posts

explore and build systems that reveal deeper truths about reality and empower future generations.

Joined March 2024
600 Following · 436 Followers

Pinned Tweet
Jobless
Jobless@Jobless0x·
QST
0
0
9
452
狩野英孝
狩野英孝@kano9x·
Came to Osaka (^_^)v Working with Jarujaru again for the first time in a while (#^_^#)
Japanese
5.8K
45.4K
282.9K
0
Jobless
Jobless@Jobless0x·
sometimes I wish I could join a great team
English
0
0
0
36
Jobless
Jobless@Jobless0x·
@paul_conyngham open source how you did it. this might save way more people
English
0
0
3
468
Paul S. Conyngham
Paul S. Conyngham@paul_conyngham·
DAY 5 of attempting to cure my dog's cancer using AI. UPDATE: We finally found a way to sequence Rosie's DNA. A thread 🧵
Paul S. Conyngham tweet media
English
69
94
872
148.9K
Jobless
Jobless@Jobless0x·
@SHL0MS you're always next level, what's your inspiration?
English
0
0
1
167
Jobless
Jobless@Jobless0x·
@windscribecom WTNoeWRDQnBaMjRnY1dka0xDQjRJSEpuY0hKNmRITWdibVJxWnlCMFkzSm5ibVZwZUdSaklIaGpJSFY0ZFdsMGRHTWdhSFJ5WkdOemFDd2dZbkJ1Y1hRZ2FtaDBJRkJVU0NCamRHMXBJR2w0WW5RZ2VHTm9hWFJ3Y3lCa2RTQnpkSFJ2SUdOcWFXZz0=
Indonesia
0
0
0
103
Windscribe
Windscribe@windscribecom·
We placed 1 BTC into a wallet. If you can decrypt the private key below, it's yours. This is more than my yearly salary 😭 YzNSeVoyNWxhU0J6ZEhSdklHTnFhV2dzSUhCaGFHUWdhR1JpZEdsM2VHTjJJR2hrWW5ScGQzaGpkaUJsWjNocmNISnVMQ0IyZEdrZ2JIaGpjMmh5WjNoeGRBPT0
English
408
106
3K
961.8K
Jobless
Jobless@Jobless0x·
explore and build systems that reveal deeper truths about reality and empower future generations.
English
0
1
0
57
Jobless
Jobless@Jobless0x·
That experience likely broke something important: your trust in authority. But it also did something else. It pushed you from faith in institutions to a personal search for truth.
English
0
1
0
55
Jobless retweeted
Damian Player
Damian Player@damianplayer·
here’s an insanely valuable clip. Jensen Huang on the smartest person he’s ever met and who he thinks will run the next decade:
English
205
1K
6.7K
432.7K
Jobless
Jobless@Jobless0x·
@JaroslavBeck Are the local models strong enough to build on at codex/claude-level standards?
English
0
0
0
229
Jaroslav Beck
Jaroslav Beck@JaroslavBeck·
After some time of using a local AI cluster (Bob), here is my honest take on the good, the bad, and the overall use case.

About a year ago I started playing with local AI models because of the work we do at BottleCap AI. I realised how amazing it actually is to own my own stack and my own data. At first, we used local models mainly for security reasons, as we do lots of AI efficiency research and new product concepts based on that. After OpenClaw was released, something changed for me. I started using local models much more, until they replaced cloud models for most of my deep-thinking tasks beyond work. Eventually, I canceled all my AI cloud subscriptions just to see if I could actually run fully on my local cluster.

Hardware:
• 2x Mac Studio with M3 Ultra and 512GB unified memory, 32-core CPU
• 1x NVIDIA DGX Spark, added recently for prefills and, hopefully soon, faster inference
• 10GB LAN switch connecting the Spark and the Mac Studios

Current models (this changes pretty frequently):
1) "Bob OG":
• Main brain for reasoning and daily tasks
• Qwen3.5-397B
• Roughly 40-60 tokens/sec (depends on load & task)
2) "Bob Researcher":
• Long-term research
• Qwen3.5-27B-Claude-4.6-Opus-Distilled-MLX-4bit: very experimental
3) "Bob App Developer":
• Coding apps and debugging
• MiniMax M2.5

Software stack:
• OpenClaw: all-local assistant layer
• LM Studio: running models
• Exo Labs: connecting multiple machines into one cluster and testing whether inference improves

Where my local stack still lacks:
• Deep tasks with big models still take more time to reply than cloud models.
• The context window is a limitation in the models I use. I'm usually around a 200k-token window per session, but compacting works well, so I rarely need to start a new session.
• OpenClaw in its default state does not handle memory very efficiently and fills the context window fairly quickly. I had to fine-tune this manually, including semantic search and temporal decay, which are switched off by default.
• Reasoning is good but not at cloud-model level. Coding is good for the majority of tasks but not top tier.

My best use cases right now (March 2026): iterative work where privacy matters and where the model needs to be available all the time.
• Private or sensitive data: as a company I would be careful about sharing private or direct customer information with third-party cloud systems in general. Clearly, connecting OpenClaw to cloud models does not solve the privacy situation either.
• Cloud limits & efficiency: if I push cloud subscriptions hard, I hit consumer limits surprisingly fast. It's also much easier to spot inefficiencies locally. When the context starts bloating, the system slows down fast, so issues like memory inefficiency become obvious much earlier. In the cloud, replies often feel just as fast, but you end up paying much more or hitting usage limits without really knowing why.

Was it worth the money? For me, yes. But I'm aware I live in a niche bubble for my particular use case. For most people it is still early. For businesses and people willing to spend the money and effort to make this work, it is a good solution today.

My verdict: for my personal use case, local is now the default. Cloud is the exception. Are local models as good as the best cloud models? No. Are they good enough to be my default for most tasks? Yes.
Jaroslav Beck tweet media
English
45
36
586
48.3K
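The three-model split in the cluster thread above can be sketched as a tiny task router. This is a hypothetical illustration only: the model identifiers and task taxonomy are paraphrased from the thread, not Jaroslav's actual code, and a real setup would point each name at a local inference endpoint (e.g. an LM Studio server).

```python
# Hypothetical router for a multi-model local cluster like "Bob":
# pick which local model handles a task, based on the task type.
ROUTES = {
    "reasoning": "qwen3.5-397b",            # "Bob OG": main brain, daily tasks
    "research":  "qwen3.5-27b-distilled",   # "Bob Researcher": experimental
    "coding":    "minimax-m2.5",            # "Bob App Developer": apps & debugging
}

def route(task_type: str) -> str:
    """Return the local model to use; unknown tasks fall back to the main brain."""
    return ROUTES.get(task_type, ROUTES["reasoning"])
```

The fallback-to-main-brain default mirrors the thread's framing of "Bob OG" as the general-purpose model, with the other two as specialists.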
witcheer ☯︎
witcheer ☯︎@witcheer·
a few days ago I installed @NousResearch's Hermes agent on my mac mini, alongside my existing @openclaw (Oz) bot that's been running for 6 weeks. two AI agents. one machine. 16GB RAM. same Telegram chat. here's the journey so far.

~/ the idea: what if a second agent with a different architecture found things the first one missed?

setup took about a day. cloned the repo, created a Python 3.11 venv, configured GLM-5 as the primary model with Kimi K2.5 as fallback. wrote SOUL.md - hermes' system prompt defining its personality, tools, and file paths. configured 11 Telegram toolsets: web, terminal, file, skills, memory, todo, session_search, code_execution, delegation, and cronjob. set up launchd to keep it alive on boot.

then I set up the research pipeline. Hermes runs autonomous research sessions 1 hour after Oz on the same topics. both write findings to separate research files and deliver bullet-point summaries to Telegram. I compare their output side by side every evening.

benchmarked them head to head on 8 identical queries. Hermes scored 34/40, Oz scored 36.5/40.
- Hermes was dramatically faster (5 seconds vs 60 seconds) and better at structured data retrieval: live prices, protocol stats, CoinGecko pulls.
- Oz was stronger at accumulated context, writing in my voice, and connecting new findings to past research.

early results are promising. in its first 48 hours of autonomous research, Hermes surfaced the Meta/Moltbook that happened today.

~/ about bugs: mostly on my end, some on Hermes'.
- the web search tool needed a Firecrawl key I hadn't configured, so I rewired it to the DuckDuckGo CLI.
- the API endpoint and model naming conventions were different from what I expected; it took trial and error to get right.
- reasoning_effort doesn't work on this provider.
- sessions bloat after 30+ turns because reasoning tokens accumulate, so I built automatic compression, daily resets, and a weekly cleanup script.
- the best one: Hermes couldn't find its own cron research because the system prompt didn't specify where those files live. it concluded its own findings were hallucinated instead of admitting it didn't know the path.

every bug was fixable, and each one taught me something about how agent infrastructure actually works.

~/ here's why I think Hermes could genuinely surpass OpenClaw over time: it runs natively - no Docker overhead, no sandbox boot, direct filesystem access. it has native tool schemas built in (terminal, file operations, code execution) rather than everything running through a gateway. and the cron system, once patched, is clean and lightweight.

Hermes is a few days old with a few days of memory. Oz has 6 weeks. if Hermes' knowledge compounds the same way Oz's did, with the speed advantage on top, it could become the primary agent.

for now I'm running both in parallel. same topics, different timing, comparing output daily. I'll report back in a few weeks on which one produces more actionable research, which one catches more breaking news first, and whether Hermes' speed advantage outweighs Oz's depth. two agents on one mac mini for $0/month in infrastructure. the experiment continues.
Nous Research@NousResearch

The last few days have been wild. Here's what we've shipped over the weekend. But first, we're giving away free Nous Portal subscriptions to the first 250 people who claim code AGENTHERMES01 at portal.nousresearch.com - and there's a lot of exciting new stuff to use it on:

-> Pokemon Player 🎮
Hermes can now play Pokemon Red/FireRed autonomously via headless emulation. The new pokemon-agent package (github.com/NousResearch/p…) and built-in skill provide a REST API game server, and Hermes drives it through its native tools - reading game state from RAM, making strategic battle decisions, navigating the overworld, and saving progress to memory across sessions. It just plays Pokemon. From your terminal. No display server needed.

-> Self-Evolution 🧬
We shipped hermes-agent-self-evolution (github.com/NousResearch/h…) and an optional skill - an evolutionary self-improvement system that uses DSPy + GEPA to optimize Hermes's own skills, prompts, and code. It maintains populations of solutions, applies LLM-driven mutations targeted at specific failure cases, and selects based on fitness. Inspired by Imbue's Darwinian Evolver research that achieved 95.1% on ARC-AGI-2.

-> OBLITERATUS 🔓
The abliteration skill got a major update. Hermes can now uncensor any open-weight LLM (Llama, Qwen, Mistral, etc.) by surgically removing refusal directions from model weights - 9 CLI methods, 116 model presets, tournament evaluation. Just say "abliterate this model" and it handles the rest.

-> Signal, iMessage + 7-Platform Gateway 📱
Hermes now runs on iMessage and Signal alongside Telegram, Discord, WhatsApp, Slack, and CLI. Full feature parity: voice messages, image handling, DM pairing. Your agent is reachable everywhere.

-> Automatic Provider Failover 🔄
When your primary model goes down (rate limits, outages), Hermes now automatically switches to a configured fallback model. Supports all providers including Codex OAuth and Nous Portal. One line of config, zero downtime.

-> Secret Redaction Everywhere 🔒
All tool outputs now redact API keys, tokens, and passwords before they reach the LLM. 22+ patterns covering AWS, Stripe, HuggingFace, GitHub, SSH private keys, database connection strings, and more. Your secrets never leak into context.

English
27
11
341
37.4K
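The "secret redaction" feature described above boils down to pattern-matching tool output before it enters the model's context. Here is a minimal sketch of the idea; the handful of regexes below are illustrative stand-ins, not the 22+ patterns Nous actually ships.

```python
import re

# Illustrative credential patterns (not the real Hermes pattern set):
# AWS access key IDs, Stripe live secrets, GitHub PATs, PEM private keys.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID
    re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),      # Stripe live secret key
    re.compile(r"ghp_[0-9a-zA-Z]{36}"),           # GitHub personal access token
    re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"
    ),                                            # PEM-encoded private keys
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Scrub known secret patterns from tool output before it reaches the LLM."""
    for pat in PATTERNS:
        text = pat.sub(placeholder, text)
    return text
```

Running every tool's stdout through a function like this is cheap insurance: even if an agent `cat`s a config file, the key material never lands in context.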
Jobless
Jobless@Jobless0x·
@Teknium it did that 100%. my main issue right now is API usage vs model quality vs self-hosted - can't find the correct balance for high-frequency work
English
0
0
2
77
Teknium (e/λ)
Teknium (e/λ)@Teknium·
Something I think has felt really cool to a lot of people about hermes-agent is that if they have a problem, before even needing to seek me out to fix it and update the project, they ask Hermes what's wrong and to fix it - and it just does, lol
English
26
3
160
5.9K
AMI Labs
AMI Labs@amilabs·
Advanced Machine Intelligence (AMI) is building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe.

We've raised a $1.03B (~€890M) round from global investors who believe in our vision of universally intelligent systems centered on world models. This round is co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, along with other investors and angels across the world.

We are a growing team of researchers and builders, operating in Paris, New York, Montreal and Singapore from day one.

Read more: amilabs.xyz

AMI - Real world. Real intelligence.
AMI Labs tweet media
English
345
881
8.5K
4.8M
Jobless
Jobless@Jobless0x·
deployed 5 autonomous agents on my codebase yesterday. by noon i hit 0% weekly usage. the machines are productive. now what do I do? any advice?
Jobless tweet media
English
0
0
0
52
Jobless
Jobless@Jobless0x·
tool progress shows what the agent is doing, then the answer streams in token by token. built it to match the existing codebase patterns so it slots right in. check it out: github.com/NousResearch/h…
English
0
0
0
44
Jobless
Jobless@Jobless0x·
just opened a PR on @NousResearch hermes-agent adding real-time telegram streaming. response text now flows in live via progressive message edits. been running hermes as my daily driver and the ux difference with streaming is night and day @Teknium
English
1
0
1
158
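"Streaming via progressive message edits" means sending one Telegram message and repeatedly editing it with the growing response text, throttled because Telegram rate-limits edits. Below is a minimal sketch of that throttling logic, assuming nothing about the actual PR: `send_edit` is a stand-in for whatever issues the Bot API `editMessageText` call, and the one-second interval is an arbitrary example.

```python
import time

class StreamingEditor:
    """Accumulate streamed tokens and flush an in-place message edit
    at most once per `interval` seconds (Telegram rate-limits edits)."""

    def __init__(self, send_edit, interval: float = 1.0, clock=time.monotonic):
        self.send_edit = send_edit        # callable(text) -> None; edits the message
        self.interval = interval
        self.clock = clock                # injectable for testing
        self.buffer = ""
        self.last_flush = float("-inf")   # so the first token flushes immediately

    def feed(self, token: str) -> None:
        self.buffer += token
        if self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.send_edit(self.buffer)   # progressive edit of a single message
            self.last_flush = self.clock()
```

A caller would `feed()` each model token as it arrives and call `flush()` once at the end so the final text always lands, regardless of timing.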
Jobless
Jobless@Jobless0x·
my hermes agent @NousResearch is now fully autonomous, will report in a few
English
0
0
2
104