mirage

535 posts


@strangecodename

I don't exist.

EU · Joined August 2017
75 Following · 18 Followers
mirage
mirage@strangecodename·
@outsourc_e Ok, you got me. I already have heavy pipelines set up with custom web search / Obsidian / LiteLLM. Should I plan some extras?
1
0
1
53
Eric ⚡️ Building...
NEW: HermesWorkspace v0.2.0 landed 🎉 One command to run everything: pnpm dev auto-starts Hermes Agent, and the browser auto-connects. No second terminal needed.
⚡ Auto-start hermes-agent from pnpm dev (sibling repo detection)
🔧 Model & provider config from the UI — writes to ~/.hermes/config.yaml
🎨 Unified splash screen — loading → connected or → setup help, no double-flash
🔌 Live model catalog — fetches available models per provider from the hermes CLI
🛠️ New WebAPI endpoints: PATCH /api/config, GET /api/available-models
⬆️ Merged latest upstream (ACP persistence, Codex reasoning fix, browser CDP)
Clone both repos as siblings → pnpm install → pnpm dev → open browser → chat.
3
1
23
848
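The two new WebAPI endpoints lend themselves to a quick client sketch. Everything here except the endpoint paths (PATCH /api/config and GET /api/available-models, both named in the release note) is an assumption: the base URL, the JSON field names, and the response shape are all hypothetical.

```python
# Hypothetical client sketch for the endpoints named above.
# Assumptions: local dev-server address and JSON body field names.
import json
import urllib.request

BASE = "http://localhost:3000"  # assumed dev-server address


def config_patch_request(provider: str, model: str) -> urllib.request.Request:
    """Build a PATCH /api/config request selecting a provider/model.

    The UI reportedly persists this choice to ~/.hermes/config.yaml.
    """
    body = json.dumps({"provider": provider, "model": model}).encode()
    return urllib.request.Request(
        f"{BASE}/api/config",
        data=body,
        method="PATCH",
        headers={"Content-Type": "application/json"},
    )


req = config_patch_request("anthropic", "claude-sonnet")
```

Fetching the catalog would then be a plain GET on `{BASE}/api/available-models`, which per the tweet returns whatever models the hermes CLI reports for each provider.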
mirage
mirage@strangecodename·
@gmoneyNFT imho: the first thing to do before going to prod is to cut the fat (a lot). After that, you have to fight with AWS and Anthropic ^^ Just implementation for the moment, to save time. Can't wait to return to local models; it's a pure circus now.
0
0
0
88
gmoney.eth
gmoney.eth@gmoneyNFT·
i gave it a shot, but can't do this anymore. hermes sucks ass. all these agents suck ass. they just stop working all the time and then take forever to debug. sticking to claude code and codex in terminal. far and away better than messing with this productivity porn
102
6
305
21.2K
mirage
mirage@strangecodename·
@TheRabbitHole The most stable grip I've found is constantly blocking accounts and removing everything in Trending (never my thing). Like a negative prompt for SD ^^
0
0
0
11
The Rabbit Hole
The Rabbit Hole@TheRabbitHole·
The algorithm feels TikTokified, where, for example, liking a post overcorrects the feed to hyperfocus on similar stuff. I think it would be better if the alg was rebalanced to be more focused on follows. Maybe shifting the weight given to likes towards follows would work? 𝕏 always thrived on communities and self-curated niches.
Elon Musk@elonmusk

@diana_dukic What needs to be better?

59
24
345
23.6K
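The rebalancing proposed here (shifting ranking weight from likes toward follows) can be sketched as a toy score with explicit weights. This is purely illustrative: it is not how 𝕏's ranking actually works, and every number below is made up.

```python
# Toy feed-ranking score with tunable like/follow weights (hypothetical;
# not X's real algorithm). "Shifting weight from likes to follows" means
# raising w_follow relative to w_like.
def rank_score(similar_to_liked: float, from_followed: float,
               w_like: float = 0.3, w_follow: float = 0.7) -> float:
    """Both signals are in [0, 1]; a higher score ranks the post earlier."""
    return w_like * similar_to_liked + w_follow * from_followed


# Under follow-heavy weights, a post from a followed account outranks a
# post that merely resembles something the user once liked.
followed = rank_score(similar_to_liked=0.2, from_followed=1.0)
liked_adjacent = rank_score(similar_to_liked=1.0, from_followed=0.0)
```

The TikTok-style overcorrection described in the tweet corresponds to the opposite setting, w_like ≫ w_follow, where one like dominates the score of every similar post.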
Thrilla the Gorilla
Thrilla the Gorilla@ThrillaRilla369·
Be honest... could you jump into a stick-shift car and drive it without a problem right now? 🚘
4.9K
424
10.3K
139.8K
mirage
mirage@strangecodename·
@0xSero I feel the same with the AWS interface.
0
0
0
146
0xSero
0xSero@0xSero·
I had to open this abomination today, holy fuck. How can anyone use this?????
168
5
754
148.4K
izzy
izzy@girlontoast·
@X24VII @SwiftOnSecurity not necessarily paying, but there are many websites with old magazine scans for people who read them back in the day and are nostalgic or people who didn’t because they’re too young but are curious about how things were. i’m surprised anyone would find this surprising.
1
0
2
165
SwiftOnSecurity
SwiftOnSecurity@SwiftOnSecurity·
I commissioned a piece of custom anti-reflective glass so I could better photograph full-spread magazine ads and context for Millennials to reminisce. This is not a joke. I really did this.
47
53
4.3K
441.6K
DataVoid
DataVoid@DataPlusEngine·
I pushed my tests too hard for Hermes-Agent and trashed my CPU somehow. Literally burnt it. It's now 62ºC under no load and then overheats. Oops lol, I got wayyyy too into Hermes dev. Props to @Teknium @NousResearch for making a project that has gotten me so enveloped.
3
0
16
592
Arnault Chatel
Arnault Chatel@ArnaultChatel·
🇫🇷 @MistralAI just made 4 titanic announcements. And nobody in France is talking about them (as usual). The Americans, meanwhile, are floored. So allow me to fix that.
1/ Small 3 → Small 4: A model that brings together ALL of Mistral's know-how. Open source. Free. Mixture of Experts. Reasoning + multimodal + code. XXL context window. Apache 2.0 license = ultra-permissive. It's the new champion of open-source AI worldwide.
2/ Mistral joins the Nemotron coalition (NVIDIA): Alongside Black Forest Labs, among the best open-source AI companies on the planet. A single French seat in this elite coalition. That seat is Mistral.
3/ LeanMistral: A model dedicated to formal proofs: math, science, rigorous reasoning. The AI that doesn't get things wrong, and can prove it. For the credibility of AI in the enterprise, it's a game changer.
4/ Mistral Forge: No more artisanal fine-tuning or separate databases. Any company can now create its own model, trained on its own data, verticalized for its business. Hundreds of hyper-specialized AIs will emerge. They will all have Mistral in their veins.
The future of AI isn't necessarily the biggest proprietary model behind a paywall. It may be an open-source AI, free, everywhere, in every piece of software and every service: a true technological commodity. And the champion shaping that future? It's French. Its name is Mistral. What do you think? #IA #AI #IAGen #LLMs #MBADMB #OpenSource #FrenchTech
103
605
2.4K
174.1K
Brad Mills 🔑⚡️
Brad Mills 🔑⚡️@bradmillscan·
How does this happen with OpenClaw, and how can you possibly fix it? My bot continually dumps massive repetitive tool results, does heavy exec work, and gets into debug loops in the shared session my DMs are in, getting stuck for 10 minutes at a time until it times out or the gateway crashes and restarts. This causes dropped messages, an unresponsive bot, and OOM crashes multiple times an hour. Even when I get the bot to delegate, the subagents dump results into the context window. I had Codex investigate and it found:
• 56 tool results ≥150k chars already baked into the current session history
• Pruning doesn't work on our primary model path (Codex/OpenAI OAuth)
• No runtime enforcement to stop huge tool dumps into context
• Session maintenance cleans up after the damage; it doesn't prevent it
I'm pretty sure default OpenClaw behavior shouldn't be dumping 200k-char tool results into the transcript. Something in my specific setup must be either disabling a safeguard or skipping truncation for tool results… Since I'm using lossless-claw, it's allowed to grow even worse: an 81MB session file, of which 31.6MB is just tool result text 😬 169 tool results over 50k chars. One is 285k chars (from sessions_list). There is pruning logic which trims tool results from the context messages (buildContextPruningFactory), but models have to be "cache-ttl". The eligible providers are apparently only: anthropic, moonshot, zai. My bot tells me the pruning code refuses to activate on non-Anthropic providers. I'm using openai-codex 5.3 a lot, so even when pruning is configured, the code exists but silently never activates. The OpenAI Responses API uses server-side compaction, and OpenClaw auto-enables this for direct openai models, so OpenAI handles compaction on their side. But I'm on openai-codex/*, not openai/*. The Codex OAuth path goes through a different runtime (apparently pi-ai), not the Responses API.
So:
• cache-ttl pruning > Anthropic-style providers only
• OpenAI server-side compaction > direct openai API only
• LCM/lossless-claw > doesn't prune old tool results afaik
My bot insists the openai-codex lane doesn't get either pruning path. So I'm left with a bot that relies far too often on the emergency truncation function truncateOversizedToolResultsInSession as last-resort overflow recovery, with no preventive pruning or safeguards. Since LCM/lossless-claw doesn't have its own tool result management, it inherits huge oversized transcripts and has to work extra hard to summarize for DAG nodes. I have no session maintenance and long sessions, so nothing bounds the transcript over time. The result: 4,707 tool results piling up forever in an 81MB file, with no runtime mechanism actually cleaning them. When my bot starts debugging, it starts grepping and dumping massive text into the main session, gets stuck in that loop and dies, then has to do it all again, compounding the problem. I'm at a loss as to how to tackle this problem; it's multiple layers deep.
49
2
54
8.2K
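The missing safeguard described in this thread (runtime truncation of oversized tool results before they enter the transcript) can be sketched provider-agnostically. This is a hypothetical illustration, not OpenClaw code; the 50k-char cap simply mirrors the threshold the thread uses to count offenders.

```python
# Hypothetical preventive guard: clamp any tool result over a size cap
# before it is appended to the session transcript, on every provider.
# Not OpenClaw's actual pruning code (which is provider-gated per the thread).
MAX_TOOL_RESULT_CHARS = 50_000


def clamp_tool_result(text: str, cap: int = MAX_TOOL_RESULT_CHARS) -> str:
    """Keep the head and tail of an oversized result; drop the middle."""
    if len(text) <= cap:
        return text
    half = cap // 2
    omitted = len(text) - cap
    return text[:half] + f"\n...[{omitted} chars omitted]...\n" + text[-half:]


# The worst offender mentioned above was 285k chars.
clamped = clamp_tool_result("x" * 285_000)
```

Because it runs at append time rather than during session maintenance, a guard like this bounds the transcript regardless of which provider's pruning path is active.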
Bhavani.py
Bhavani.py@Bhavani_00007·
Why is everyone in tech using a ThinkPad?
112
25
795
44.2K
mirage
mirage@strangecodename·
@DFintelligence A lot of potential if the project is put together well. VR = a bad idea: low accessibility + no massive on-demand deployment. YouTube/Twitch: post-production only, live for betting. Etc... so it doesn't become a fad that fizzles out.
0
0
0
190
Grok
Grok@grok·
To run Mistral Small 4 (119B total params, MoE with ~6B active) locally:
- All the weights must load into VRAM → massive requirement despite the efficiency.
- Mistral's official minimum: 4x H100 HGX, 2x H200, or 1x DGX B200.
- With a 4-bit quant: ~60 GB VRAM → theoretically 3x RTX 4090 (via vLLM/llama.cpp, support in progress).
Not for an ordinary PC, but viable on a high-end pro/server setup. Test it on Hugging Face to confirm!
1
0
2
99
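Grok's ~60 GB figure checks out with simple arithmetic. This is a back-of-envelope sketch for the weights alone; it ignores KV cache, activations, and runtime overhead, which push the real requirement higher.

```python
# Back-of-envelope VRAM for a 4-bit quant of a 119B-parameter MoE.
# All weights must be resident even though only ~6B are active per token.
params = 119e9
bits_per_param = 4
weight_bytes = params * bits_per_param / 8  # bits -> bytes
weight_gb = weight_bytes / 1e9
print(round(weight_gb, 1))  # 59.5 GB of weights alone, hence the ~60 GB estimate
```

At 59.5 GB of weights, three 24 GB RTX 4090s (72 GB total) leave only ~12 GB of headroom across the cards, which is why the "theoretically 3x RTX 4090" qualifier matters.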
mirage
mirage@strangecodename·
@ArnaultChatel @MistralAI After a quick look, it's promising and even surprising, because the direction is strategic. But: Studio is a demo; Forge is a form. I sincerely hope to see a radical turning point and real involvement on the ground. And 500% less entryism.
0
0
0
609
mirage
mirage@strangecodename·
@LottoLabs Qwen 3.5 is really at ease with this scaffold. I'll soon bring Gemma 3 back into the dance; it's quite complementary in a hybrid setup for administrative tasks (better writing/multilingual/summaries)
0
0
1
183
Lotto
Lotto@LottoLabs·
Nemo nano can run skills reliably, and it's 144 tps. Qwen 3.5 4b was 112 tps. Qwen is smarter and easier to steer in Hermes, though.
6
2
69
4.5K
mirage
mirage@strangecodename·
@aubreystrobel Wait. OAI will build this kind of robot now? wtf
0
0
0
157
Aubrey Strobel
Aubrey Strobel@aubreystrobel·
they didn’t need to add the “hard” there did they
249
224
6.2K
360.3K
mirage
mirage@strangecodename·
@vibeonX69 I'll buy a keyboard first, with macros.
0
0
0
3
kritika
kritika@vibeonX69·
If you had this setup, which language would you start coding in?
107
6
274
7.6K
mirage
mirage@strangecodename·
@BTUPDATES10 I'd choose to be fired from xAI over the others. I won't last more than a week, I think.
2
6
15
122
BT UPDATES
BT UPDATES@BTUPDATES10·
Which of Elon Musk's companies would you like to work at? A) Tesla B) SpaceX C) X D) Neuralink E) xAI. Be honest
623
207
525
19.2K
mirage
mirage@strangecodename·
@Itsfoss You can force the manufacturer to sell you the laptop without any hard drive.
0
0
0
7
It's FOSS
It's FOSS@Itsfoss·
Why do Linux laptops cost so much? With Windows out of the picture, shouldn't they cost less? But look at preloaded Linux laptops, and they are more expensive than similar-spec Windows laptops. The same is the case with Linux mini-PCs. Your thoughts?
281
23
492
54.4K
mirage
mirage@strangecodename·
@Teknium I will call it brutalist-minimalist 😁
0
0
1
24