EF BlaackZero
787 posts

EF BlaackZero
@BlaackZero
Owner/Founder @EternalFlameCL Full-Stack Developer
Santiago, Chile · Joined August 2013
519 Following · 173 Followers
Pinned Tweet

🐉
Eternal Flame@EternalFlameCL
🔥 We're looking for a Jungler for our 2026 LoL roster and upcoming qualifiers. We want players with commitment, time, and the drive to compete. We have an experienced coach and support staff available. If you're interested, fill out the form: 📄 forms.gle/xZZaopjpYP6sf9… #GOEF

I'd like them to explain why EF and not some other organization.
Alejandro Gomis@anonimotum
Sources Val: Eternal Fire 🇹🇷 is set to replace ULF Esports 🇹🇷 in VCT EMEA 🇪🇺 Most of the current roster is expected to remain under the new team. ULF decided to withdraw following unresolved payment issues affecting players and staff @Sheep_VALORANT ⬇️ sheepesports.com/en/all/article…

just a few days left before we start closing and reviewing proposals/CVs
EF BlaackZero@BlaackZero
looking for a CM for @EternalFlameCL, someone passionate who knows the esports scene. More info via DM 🤓
EF BlaackZero retweeted


We're looking for a short-format editor specializing in League of Legends! 🙌
If you're an expert on the algorithm and have LoL experience, we want to meet you.
Comment with your portfolio or past work, or recommend your favorite editor! 📝
#GoISG 🦈 #BeTheShark


OpenFang v0.3.46 is out! here's everything since v0.3.42:
- NEW: Trading Hand (8th bundled hand), full autonomous trading agent with 8-phase pipeline: state recovery, portfolio setup, market intelligence scan, multi-factor analysis (RSI/MACD/Bollinger/VWAP/ATR), adversarial bull/bear debate, risk management gate with circuit breakers, trade execution (analysis/paper/live via Alpaca API), and analytics reporting. 12 configurable settings, 10 dashboard metrics, refuses to bypass risk management
- Chutes.ai provider with 5 models (DeepSeek-V3, DeepSeek-R1, Llama-4-Maverick, Qwen3-235B, Llama-3.3-70B)
- Z.AI Coding models (glm-5-coding, glm-4.7-coding), Kimi for Code, kimi-k2.5-0711 alias
- MiniMax-M2.5-highspeed (1M context, tools+vision) and abab7-chat added
- Telegram forum topic thread_id support, replies go back to the correct topic
- 11 community PRs reviewed and implemented, all authors credited
- IME composition guard prevents Enter from sending during CJK input
- default User-Agent header on all LLM drivers, fixes 403s from Moonshot/Qwen
- default token quota changed to unlimited (was 1M/hour, caused quota errors on fresh installs)
- hourly token rate limit now configurable via [budget] in config.toml
- XML-attribute tool call recovery (Pattern 9) for Groq/Llama models
- web_fetch now handles gzip/deflate/brotli decompression, finance sites work
- voice messages no longer lost when sent quickly, bridge dispatch is now concurrent
- capture reasoning_content from local LLMs (Qwen3, DeepSeek-R1), synthesize response from thinking when content is empty
- fix streaming think tag leak, StreamingThinkFilter buffers reasoning content so it never reaches the client
- fix cron jobs orphaned after agent deletion
- fix Gemini thought_signature missing in function calls, provider_metadata carries opaque data through round-trip
- fix Gemini tool schema rejecting oneOf/const/type-array
- fix generic "Verify API key" error hiding real cause, error messages now include sanitized raw excerpts
- fix UTF-8 boundary panic with CJK characters across 9 slicing sites
- fix safe string slicing in channel bridge + desktop server shutdown race condition
- fix custom model API keys lost on reload
- fix embedding driver respects provider_urls config + auto-selects model per provider
- fix wizard generates proper multi-line TOML strings for system_prompt
- fix Moonshot base URL
- OpenClaw plugin install rate limit: 5-attempt exponential backoff with jitter
github.com/RightNow-AI/op…
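The UTF-8 boundary fix above comes down to one rule: never slice a Rust `&str` at a byte offset that is not a char boundary, since CJK characters occupy multiple bytes and an unaligned index panics. A minimal sketch of the technique (the helper name is illustrative, not OpenFang's actual code):

```rust
/// Truncate `s` to at most `max_bytes` bytes, backing up to the
/// nearest char boundary so multi-byte CJK text never panics.
fn truncate_safe(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = max_bytes;
    // is_char_boundary is O(1); this loop runs at most 3 times for UTF-8.
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}

fn main() {
    // "日本語" is 9 bytes (3 per char); a naive &s[..7] would panic.
    assert_eq!(truncate_safe("日本語", 7), "日本");
    assert_eq!(truncate_safe("abc", 7), "abc");
    println!("ok");
}
```

The same guard applies anywhere a byte index arrives from outside the string itself (log truncation, channel bridges): check `is_char_boundary` before indexing instead of slicing blindly.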

OpenFang v0.3.29 is out! here's everything since v0.3.25:
- auto-detect LLM provider on fresh install, scans env for any configured API key instead of failing when Anthropic key is missing
- custom provider setup UI in Settings tab, add any OpenAI-compatible endpoint from the dashboard
- Claude Code added to wizard popular providers with install instructions
- Gemini 2.5+ thinking models: parse thoughtSignature field on functionCall parts
- hand activation state persists across daemon restarts, no more manual reactivation after reboot
- image display preserved in dashboard session history, images no longer lost on page reload
- auto-retry on temperature rejection for any model that returns unsupported_parameter
- strip temperature parameter for o-series reasoning models (o1/o3/o4), fixes 400 errors from OpenAI
- IMAP email channel: fallback to AUTHENTICATE PLAIN when server rejects LOGIN (fixes Lark/Larksuite)
- remove fake OpenRouter models (optimus/nitro/auto), replaced with real model IDs like openrouter/anthropic/claude-sonnet-4
- model change via /model command now correctly updates the provider in the dashboard UI
- doctor command shows correct status for connected database
- ZHIPU/GLM API: empty tool result content replaced with "(empty)" to avoid format rejection
- Linux: suppress Chromium sandbox EPERM error on xdg-open, always print dashboard URL as fallback
- model_studio recognized as DashScope/Qwen provider alias, custom provider_urls work again
- ResourceQuota default max_cost=0.0 (unlimited, stops blocking fresh agents)
- session reset clears scheduler usage tracking
- web search returns errors not fake success
- MCP stdio Content-Length framing + correct version
- non-TTY init auto-falls back to quick mode
- python3 to python fallback for Windows Browser Hand
- embedding driver passes custom base URL from provider_urls
- Bluesky seenAt parameter URL-encoded
- serde alias on ModelConfig.model field, TOML manifests with name = "..." under [model] now work correctly
- 1863 tests passing, 0 clippy warnings
github.com/RightNow-AI/op…
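The Content-Length framing mentioned above is the LSP-style convention: an ASCII header carrying the payload's byte length, a blank line, then the JSON-RPC body. A hedged sketch of one side of it (the function name is illustrative, not OpenFang's actual API; note the length is the byte count, not the char count):

```rust
use std::io::Write;

/// Write one JSON-RPC message with Content-Length framing over a
/// stdio-style transport. `payload.len()` is the UTF-8 byte count,
/// which is what the header must carry.
fn write_framed<W: Write>(out: &mut W, payload: &str) -> std::io::Result<()> {
    write!(out, "Content-Length: {}\r\n\r\n{}", payload.len(), payload)
}

fn main() -> std::io::Result<()> {
    let mut buf: Vec<u8> = Vec::new();
    write_framed(&mut buf, r#"{"jsonrpc":"2.0","id":1,"method":"ping"}"#)?;
    let framed = String::from_utf8(buf).unwrap();
    // The 40-byte body yields a "Content-Length: 40" header.
    assert!(framed.starts_with("Content-Length: 40\r\n\r\n"));
    println!("{framed}");
    Ok(())
}
```

Getting the byte count (rather than a character count) right is exactly the kind of detail that breaks once non-ASCII arguments appear in a tool call.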