

Gerald Morrison
@GeraldMorrison
Engineer | Data Science | Quantitative Finance | https://t.co/S8XZ7tEyhg

We’ve signed Zero Data Retention agreements with all providers for Go. All models now follow a zero-retention policy: your data is not used for training.

This piece on Palantir, death-tech, and the kill chain is breathtakingly good: theguardian.com/news/2026/mar/…

Kimi-K2.5 (1T-parameter MoE) running coherently on 25 GB of GPU memory (on a unified 128 GB machine)!
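
For anyone wondering how a 1T-parameter model fits in that budget, here is a minimal sketch of the usual recipe: aggressive quantization plus partial GPU offload, shown with llama-cpp-python. The GGUF file name, quant level, and layer split below are assumptions for illustration, not official artifacts.

```python
# Minimal sketch: serving a very large MoE model within a ~25 GB GPU budget
# by combining a low-bit GGUF quantization with partial GPU offload.
# Assumptions: a GGUF quant of Kimi-K2.5 exists locally; the file name,
# quant level, and layer split are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="kimi-k2.5-IQ1_S.gguf",  # hypothetical low-bit quant file
    n_gpu_layers=20,  # offload only some layers to GPU; the rest stay in unified RAM
    n_ctx=4096,       # modest context window keeps the KV cache small
)

out = llm("Explain mixture-of-experts routing in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

With MoE models this works better than the raw parameter count suggests, since only a few experts are active per token, so most weights can sit in slower memory without stalling generation.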

Introducing Base44 Superagents: AI agents built with managed infrastructure, secured by default, one-click integrations, and 24/7 execution from the start.

Everything is taken care of so you can focus on what your agent does, not how to get it running. That means no API keys to juggle, no config files, no security setup, and no maintenance. We handle all of it.

Your Superagent connects to all the tools you already use in one click, runs on schedules and triggers, remembers context across sessions, acts proactively on your behalf, and keeps working around the clock. All from wherever you already are: WhatsApp, Telegram, Slack, or your browser.

The AI agent everyone's been waiting for, with everything you need already built in. We're excited to get this into your hands, so we're giving free credits to everyone who comments and reposts in the next 24 hours.

Huge report on @OpenAI's new launch, which happened minutes ago. My news system wrote this report by reading all 50,000 of you here on X. This is a superpower that Levangie Labs has given me. Thanks @blevlabs. docs.google.com/document/d/19l… It shows everyone on X who has posted something about @OpenAI's GPT-5.4. No one else can do this. No one else has a cognitive architecture. No one else has every single person in AI and every company in lists. Your OpenClaw can't do this.

🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B

✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation: native multimodal, improved architecture, scaled RL.
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models

And yes, we're also releasing the Base models. We hope this better supports research, experimentation, and real-world industrial innovation.

Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
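
A minimal text-only usage sketch with Hugging Face transformers. The repo id "Qwen/Qwen3.5-4B" is an assumption, since the collection links above are truncated; check the actual collection for the published names.

```python
# Minimal sketch: running one of the small Qwen3.5 models with transformers.
# Assumption: the checkpoint is published as "Qwen/Qwen3.5-4B"; the exact
# repo id is hypothetical. Requires `transformers` and `accelerate`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-4B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize mixture-of-experts in one line."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```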
