elderorb.lens 🆓🏴‍☠️🫡

3.3K posts


@elderorb

Joined April 2009
5K Following · 353 Followers
elderorb.lens 🆓🏴‍☠️🫡
@AIatAMD @NousResearch @lmstudio Did exactly this a few days ago - still fighting with slow compaction, long delays as the context grows, delegating tasks to a model faster than Qwen3.6, etc. Would appreciate more articles like this, but for advanced Hermes fine-tuning on Ryzen AI Max+.
replies 1 · reposts 0 · likes 7 · views 325
Václav Pavlín | λ
@elderorb Yes! I have just hit 240K context and Codex kicked off compaction - it takes a looooong time because it actually has to reprocess the whole context, but so far so good :) And Codex is still going after 40 minutes of tool calling, analyzing the outputs, fixing things... pretty good stuff :)
replies 1 · reposts 0 · likes 0 · views 97
Václav Pavlín | λ
Ok, I think that with Codex CLI + Lemonade (llama.cpp + Vulkan) + Qwen3.6 35B I finally have a usable setup! I can go up to 200K context with decent speed and quality! And no "full reprocessing" errors! Lemonade llama.cpp args: -np 1 --chat-template-kwargs '{"preserve_thinking": true}' -b 4096 -ub 1024 --temp 0.7 --top-p 0.8 --top-k 20 --min-p 0.0 --presence-penalty 1.5 --repeat-penalty 1.0 --ctx-checkpoints 64
replies 3 · reposts 0 · likes 5 · views 282
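For reference, the flag list in the post above can be assembled into a single server invocation. A minimal sketch, assuming Lemonade forwards these arguments verbatim to a llama.cpp `llama-server` binary; the binary name is an assumption, not stated in the post:

```python
import shlex

# Flags copied from the post above; values are llama.cpp sampling/batching options.
flags = [
    ("-np", "1"),                                              # single parallel slot
    ("--chat-template-kwargs", '{"preserve_thinking": true}'),
    ("-b", "4096"),                                            # logical batch size
    ("-ub", "1024"),                                           # physical (micro) batch size
    ("--temp", "0.7"),
    ("--top-p", "0.8"),
    ("--top-k", "20"),
    ("--min-p", "0.0"),
    ("--presence-penalty", "1.5"),
    ("--repeat-penalty", "1.0"),                               # 1.0 disables repeat penalty
    ("--ctx-checkpoints", "64"),   # KV-cache checkpoints, helps avoid full reprocessing
]

# Flatten the pairs into an argv list and print a shell-safe command line.
cmd = ["llama-server"] + [tok for pair in flags for tok in pair]
print(shlex.join(cmd))
```

`shlex.join` quotes the JSON argument so the command can be pasted into a shell as-is.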
vLLM
vLLM@vllm_project·
🎉 We just shipped a major redesign of recipes.vllm.ai. "How do I run model X on hardware Y for task Z?" now has a clickable answer.
What's new:
- URLs mirror HuggingFace: just swap huggingface.co for recipes.vllm.ai in any model URL to jump straight to its recipe (e.g. recipes.vllm.ai/Qwen/Qwen3.6-3…)
- Interactive command builder: pick hardware, variant, strategy (tensor, tensor+expert, or data+expert parallelism; single or multi-node; or a prefill/decode disaggregated cluster), toggle features → get the exact `vllm serve` command
- Pluggable hardware: NVIDIA + AMD already integrated. One-click switch between Hopper/Blackwell and MI300X/MI355X, and the right flags and env are applied automatically
- JSON API for agents: every recipe is also published as JSON (e.g. recipes.vllm.ai/Qwen/Qwen3.6-3…), so tools and agents can consume recipes without scraping
- Contribute a new recipe end-to-end with the agent skill shipped in the repo: github.com/vllm-project/r…
🔗 recipes.vllm.ai
Enjoy! ✨
replies 33 · reposts 113 · likes 760 · views 71.1K
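The URL-mirroring rule above ("swap huggingface.co for recipes.vllm.ai in any model URL") is simple enough to script. A minimal sketch; the helper name and the example model are illustrative, not part of the announcement:

```python
def hf_to_recipe_url(hf_url: str) -> str:
    """Map a HuggingFace model URL to its vLLM recipe URL by swapping the host."""
    return hf_url.replace("https://huggingface.co/", "https://recipes.vllm.ai/", 1)

# Example: jump from a model card straight to its recipe page.
print(hf_to_recipe_url("https://huggingface.co/Qwen/Qwen3-32B"))
```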
Sandro
Sandro@pupposandro·
After countless requests on our megakernel article asking us to optimize for bigger models, we're excited to show how we built a standalone C++/ggml speculative decoder for Qwen3.5-27B Q4_K_M with a DFlash block-diffusion draft. We were able to get a 207.6 tok/s run (5.46x over AR); the HumanEval 10-prompt bench averages 129.5 tok/s at DDTree budget=22 on a single RTX 3090 (24 GB), which is 3.43x over autoregressive and 2.8x over the best public SGLang AWQ number. Full write-up with @davideciffa below:
x.com/i/article/2046…

replies 9 · reposts 13 · likes 117 · views 13.6K
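The draft-and-verify idea behind the speedup numbers above can be shown with a toy greedy speculative-decoding loop. This is a generic sketch of the technique, not the DFlash block-diffusion drafter from the article; `target_next` and `draft_next` stand in for the large and small models:

```python
def speculative_decode(target_next, draft_next, prompt, n_tokens, k=4):
    """Greedy speculative decoding: the draft proposes k tokens, the target
    accepts the longest prefix it agrees with, then emits one token itself."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # Draft model proposes k tokens autoregressively (cheap).
        ctx = list(out)
        proposal = []
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target verifies the proposal token by token; in a real engine this
        # is one batched forward pass, which is where the speedup comes from.
        ctx = list(out)
        accepted = []
        for t in proposal:
            if target_next(ctx) != t:
                break
            accepted.append(t)
            ctx.append(t)
        out.extend(accepted)
        # The target always contributes one token: the correction after a
        # rejection, or a bonus token after a full acceptance.
        out.append(target_next(out))
    return out[len(prompt):len(prompt) + n_tokens]
```

Because every emitted token is checked against the target, the output matches plain autoregressive decoding regardless of how bad the draft is; a good draft just gets more tokens accepted per target step.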
elderorb.lens 🆓🏴‍☠️🫡
Klif Orłowski near Gdynia, Poland. Steep moraine slopes drop straight into the Baltic Sea, with amazing views over the bay, forest on top and the long sandy beach below. Much calmer and more natural than the crowded piers in Sopot. @travalacom
replies 0 · reposts 0 · likes 2 · views 64
elderorb.lens 🆓🏴‍☠️🫡
Keep enjoying Smart member benefits on Travala. Tried car rental on my last trip and it did work. Paying with $AVA gives nice extra discount, and travel credits help too. After years of using Travala, these perks make trips noticeably cheaper. @AVAFoundation @Travalacom
replies 0 · reposts 0 · likes 1 · views 36
elderorb.lens 🆓🏴‍☠️🫡 reposted
Travala
Travala@travalacom·
📢 Introducing the New Travala Logo Since 2017, Travala has been connecting the worlds of travel and Web3. Today, we proudly unveil our new logo and brand identity. A future-focused design built for travellers without borders. Learn more in our blog: travala.com/blog/introduci…
replies 33 · reposts 662 · likes 195 · views 42.4K
0xSero
0xSero@0xSero·
Let me make your life better. Do you spam Claude Code all day? Are you struggling to keep track of all your projects, terminals, windows, etc.? Do you need a browser in your terminal? A fully functional browser with its own tabs and bookmarks. Do you multitask? Do you like notifications? You won't believe how long I have been looking for this: cmux.dev
--------
1. VSCode has all the bells and whistles but performs like shit
2. Trae & Cursor are amazing but have the same issue as above
3. Zed is awesome but hard for most people to figure out. Plus it doesn't have a browser; fun fact, I tried to build one for it!
This is just ghostty, with everything you need prebuilt.
replies 27 · reposts 10 · likes 335 · views 24.9K
Hunter Hammonds
Hunter Hammonds@hunterhammonds·
I’m starting a community for cracked AI builders. Claude Code. Codex. Cursor. Conductor. It doesn’t matter. All I care about is that you’re building or you’re hungry to learn. We’re sharing workflows, skills, repos, plugins, etc. Want to join? Comment below and I’ll reach out.
replies 2.2K · reposts 55 · likes 2.6K · views 182.2K
zak.eth
zak.eth@0xzak·
Just shipped adversarial-spec, a Claude Code plugin for writing better product specs.
The problem: You write a PRD or tech spec, maybe have Claude review it, and ship it. But one model reviewing a doc will miss things. It'll gloss over gaps, accept vague requirements, and let edge cases slide.
The fix: Make multiple LLMs argue about it. adversarial-spec sends your document to GPT, Gemini, Grok, or any combination of models you want. They critique it in parallel. Then Claude synthesizes the feedback, adds its own critique, and revises. This loops until every model agrees the spec is solid.
What actually happens in practice: requirements that seemed clear get challenged. Missing error handling gets flagged. Security gaps surface. Scope creep gets caught. One model says "what about X?" and another says "the API contract is incomplete" and Claude adds "you haven't defined what happens when Y fails." By the time all models agree, your spec has survived adversarial review from multiple perspectives.
Features:
- Interview mode: optional deep-dive Q&A before drafting to capture requirements upfront
- Early agreement checks: if a model agrees too fast, it gets pressed to prove it actually read the doc
- User review period: after consensus, you can request changes or run another cycle
- PRD to tech spec flow: finish a PRD, then continue straight into a technical spec based on it
- Telegram integration: get notified on your phone, inject feedback from anywhere
Works with OpenAI, Google, xAI, Mistral, Groq, DeepSeek. Leveraging more models results in stricter convergence.
If you're building something and writing specs anyway, this makes them better. Check it out and let me know what you think! github.com/zscole/adversa…
replies 77 · reposts 68 · likes 1.2K · views 82.4K
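The critique-synthesize-revise loop described above can be sketched generically. This is an illustration of the pattern, not the plugin's actual code: `critics` stand in for the reviewer models (each returns a critique string, or `None` when satisfied), and `synthesize` plays the role Claude plays in the plugin:

```python
def adversarial_review(spec, critics, synthesize, max_rounds=10):
    """Loop until every critic model signs off on the spec or rounds run out.

    Returns the revised spec and the number of rounds it took to converge.
    """
    for round_no in range(1, max_rounds + 1):
        # The real plugin queries the models in parallel; serial is fine here.
        critiques = [c(spec) for c in critics]
        critiques = [c for c in critiques if c is not None]
        if not critiques:                 # consensus: every model agrees it's solid
            return spec, round_no
        spec = synthesize(spec, critiques)  # merge the feedback and revise
    return spec, max_rounds
```

Adding more critics tightens convergence, since consensus requires every one of them to return `None` in the same round.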
CloudAI-X
CloudAI-X@cloudxdev·
Everyone loves @opencode. Now what? I have published a universal setup to start your project with opencode, similar to the one I published for Claude Code. github.com/CloudAI-X/open…
replies 11 · reposts 18 · likes 315 · views 24.5K
elderorb.lens 🆓🏴‍☠️🫡
Paying for travel with crypto? Easy. I've been doing it since 2021 via Travala. Just give it a try: travala.com/ref/F88113 ... Once your friend makes their first hotel booking of US$400 or more and completes their stay, you'll both receive US$50 in Bitcoin to your account wallets.
replies 0 · reposts 0 · likes 1 · views 35