Antwane
@AntoinePinelli

151 posts

Product Engineer building easy-to-use tools to simplify life

Joined January 2013
426 Following · 109 Followers
Antwane@AntoinePinelli·
@brendonovich @grok Wouldn't Rust have been the wiser choice, though? Why did they go with Electron?
brendan@brendonovich·
we've moved opencode desktop to electron. it's faster, more reliable, and will replace our tauri build soon. try it out in beta via the link below.
Qwen@Alibaba_Qwen·
⚡ Meet Qwen3.6-35B-A3B: Now Open-Source! 🚀🚀
A sparse MoE model, 35B total params, 3B active. Apache 2.0 license.
🔥 Agentic coding on par with models 10x its active size
📷 Strong multimodal perception and reasoning ability
🧠 Multimodal thinking + non-thinking modes
Efficient. Powerful. Versatile. Try it now 👇
Blog: qwen.ai/blog?id=qwen3.…
Qwen Studio: chat.qwen.ai
HuggingFace: huggingface.co/Qwen/Qwen3.6-3…
ModelScope: modelscope.cn/models/Qwen/Qw…
API ('Qwen3.6-Flash' on Model Studio): Coming soon~ Stay tuned
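The headline numbers (35B total params, 3B active) come from sparse Mixture-of-Experts routing: a gate scores all experts per token, but only the top-k actually run. A minimal sketch of top-k gating — the expert count, shapes, and k here are illustrative toys, not Qwen's actual configuration:

```python
import numpy as np

def sparse_moe(x, gate_w, experts, k=2):
    """Route input x to the top-k experts by gate score and mix
    their outputs with softmax weights over the selected logits."""
    logits = gate_w @ x                       # one gate logit per expert
    top = np.argsort(logits)[-k:]             # indices of the k highest-scoring experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                              # softmax restricted to the chosen experts
    # Only k experts execute per token: that's why 'active' params << total params.
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# Toy experts: each just scales its input.
experts = [lambda v, s=s: s * v for s in (1.0, 2.0, 3.0, 4.0)]
gate_w = np.array([[0.0], [1.0], [2.0], [3.0]])  # gate favors experts 2 and 3
out = sparse_moe(np.array([1.0]), gate_w, experts, k=2)
```

With these toy weights the gate selects experts 2 and 3, so only half the "model" does any work for this input.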
Garry Tan@garrytan·
OpenClaw is the most significant innovation since ChatGPT and Claude Code. It's the third shockwave that will be felt by all.
Onur Solmaz@onusoz

You need to understand one fact about OpenClaw: people are biased and incentivized to spread disinformation about OpenClaw. That is because OpenClaw IS NOT PUMPING ANYONE'S BAGS, unlike most other projects.

Literally every other for-profit agent product is incentivized to trash OpenClaw, BECAUSE OpenClaw is a neutral third party across the industry and geopolitical scene. They MAKE MONEY when OpenClaw loses.

OpenClaw does not worry about making money for some investors. Its founder @steipete is a successful exited founder. He is motivated by having fun and democratizing AI, literally. That is why he is suddenly so loved by everyone. He cares about PEOPLE, not MONEY.

"OpenClaw is bloated" -> Since the beginning of March, OpenClaw has been thinning its core and putting functionality in plugins behind a plugin SDK. Having numerous plugins to choose from does not mean bloat. This was already copied by others and is still a work in progress.

"OpenClaw is not secure" -> OpenClaw has the most eyeballs and immediately addresses any security advisories as soon as they come in. It is the most secure agent, by sheer pressure.

"OpenClaw is bought by OpenAI" -> Then why is my bank account so empty, bro??? All maintainers are literally unpaid and working DOUBLE beside their day jobs to ship features to you. Do you think VC money can buy that kind of commitment?

Once you understand these facts, you'll like OpenClaw even more. Because OpenClaw is your AI, the People's AI.

And you can join us too. OpenClaw is the easiest-to-join project in AI right now. You just need to start using it and start making good contributions. If you are competent, you can become a maintainer and join the rest of the team making history!

Stark@StarkPrivacy·
💻 Brave to launch Brave Origin, with some controversy

The well-known Brave browser has announced the launch of Brave Origin, a new variant of the browser focused on simplicity and privacy. This edition is billed as a "debloated" take on the standard browser, permanently removing features that are not essential to basic browsing, privacy, and security.

According to the information available in Brave's repositories and comments from its developers, Brave Origin strips out internal promotions, branded ads, the BAT rewards system, the Leo AI assistant, the built-in crypto wallet, the VPN, and other extras that have accumulated in the main browser over time.

And what's the controversy? It's only free on Linux. Apparently, for Windows, macOS, and other platforms, pricing runs between $59.99 and $69.99. I think we've grown used to getting everything for free, and buying it isn't even mandatory: you can keep using the regular Brave browser as always.
Claude@claudeai·
We've redesigned Claude Code on desktop. You can now run multiple Claude sessions side by side from one window, with a new sidebar to manage them all.
Antwane@AntoinePinelli·
GPT 5.5 Opus 4.7
DROID@droidbuilds·
Hey devs, which is the best place to host a backend right now?
Antwane@AntoinePinelli·
Thanks for the mention! Different trade-offs indeed. RNDA's SDR signatures achieve impressive compression, but at 0.547 cosine similarity you're losing ~45% fidelity on reconstruction — that's a dealbreaker for agent memory where you need exact recall ("what auth decision did we make last Tuesday?"). MemoryPilot takes the opposite bet: 100% R@5 retrieval recall, zero API calls, 22 MB single Rust binary. We compress at the output layer (AAAK, 3x token savings) not the storage layer — so raw data stays intact and searchable. Happy to compare on LongMemEval if RNDA publishes scores.
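The "100% R@5" claim above is retrieval recall at cutoff 5: the fraction of relevant memories that surface in the top 5 results. A minimal sketch of how such a score is computed — the function name and toy data are illustrative, not MemoryPilot's actual eval harness:

```python
def recall_at_k(retrieved, relevant, k=5):
    """Fraction of the relevant items that appear in the top-k retrieved list."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

# Toy query: 2 relevant memories, only 1 surfaces in the top 5.
retrieved = ["m7", "m2", "m9", "m1", "m5", "m3"]
relevant = {"m2", "m3"}
score = recall_at_k(retrieved, relevant, k=5)  # m2 hits, m3 is ranked 6th
```

A reported 100% R@5 means this score averages to 1.0 across the benchmark's queries, i.e. every relevant memory landed in the top 5.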
Ben Sigman@bensig·
Which AI memory system is best for you? I compared 4 open-source memory systems for AI agents:

- mempalace - verbatim storage, fully offline, MIT
- claude-mem - silent capture for Claude Code, AGPL
- mem0 - pluggable SDK with 30+ vector stores, Apache-2.0
- supermemory - managed engine with user profiles, ~50ms

Key findings:

- mempalace scored highest overall (8.25/10) with perfect data integrity - nothing is lost or summarized
- mem0 has the most mature architecture but the highest operational complexity
- claude-mem is unbeatable if you live in Claude Code, but locked to that ecosystem
- supermemory scores high on benchmarks, but requires cloud + a proprietary license ($)

No single system wins everywhere. The right choice depends on your constraints: privacy, simplicity, flexibility, or managed infra. Full comparison in the images below.

Disclosure: I built mempalace. The eval was run by a separate LLM I don't use, fed all 4 codebases blind.
Symbion@symbion_crypto·
@bensig Will definitely give mempalace a go, thanks!
CPUTER@cputer·
@bensig Missing one: mind-mem (pip install mind-mem, Apache-2.0). Hybrid BM25F + vector search with RRF fusion, 9-type intent router, contradiction detection, adversarial abstention. 32 MCP tools, 2180 tests. Would be curious to see how it scores on your eval framework.
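The RRF fusion mentioned above (Reciprocal Rank Fusion) merges the BM25 and vector rankings by scoring each document as the sum of 1/(k + rank) over every list it appears in. A minimal sketch — k=60 is the conventional rank constant; the document IDs are made up, and this is not mind-mem's actual implementation:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score each doc by the sum of 1/(k + rank)
    across every ranked list it appears in, then sort by that score."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["d1", "d2", "d3"]    # lexical (keyword) ranking
vector_hits = ["d3", "d1", "d4"]  # dense (embedding) ranking
fused = rrf_fuse([bm25_hits, vector_hits])
```

The appeal of RRF is that it needs no score calibration: it only uses ranks, so BM25 scores and cosine similarities never have to live on the same scale. Here d1 wins because it ranks highly in both lists.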
Ben@BenHadley·
@bensig Dude, I hope Mem Palace is worth it. I'm migrating 10,000 transcripts over right now, and it's a slog.
Franck ANDRIANO 🍃@FranckAndriano·
@AntoinePinelli @leftcurvedev_ @KyleHessling1 @grok The LLM World Models uses 15M tokens (200 times fewer tokens and 48 times faster) and is trained on a single GPU. A "classic" LLM reacts, stuck in the present (the previous token). A World Model anticipates, projecting scenarios inside its simulator (its latent space).
left curve dev@leftcurvedev_·
This is it: a Gemma 4 31B distilled on Claude Opus 4.6 reasoning has been released! Gemopus-4-31B-it-GGUF. It was added 5 min ago; GGUFs are being uploaded as we speak. Jackrong & @KyleHessling1 are on a generational run 🔥 huggingface.co/Jackrong/Gemop…