AlphaSignal AI

204 posts

@AlphaSignalAI

The latest news from the top 100 companies in AI. Over 280,000 devs read our newsletter.

Joined February 2010
285 Following · 12K Followers
AlphaSignal AI @AlphaSignalAI
Math just got a compiler. Math, Inc. open-sourced OpenGauss, an AI agent that translates human math into machine-verifiable Lean proofs.
→ Beats rival agents that had no time limit, while using only 4 hours
→ Formalized the Strong Prime Number Theorem in 3 weeks vs. 18+ months for human experts
→ Runs many subagents in parallel
Think of it like a compiler for mathematical truth: you write the idea, and it generates code a machine can verify is correct. Math proofs that took top experts years can now be verified automatically, making formally verified AI training data finally scalable.
github.com/math-inc/OpenG…
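For readers new to Lean, here is a toy example of what "machine-verifiable" means — unrelated to OpenGauss's actual output, and assuming a standard Mathlib setup. The human statement "the sum of two even numbers is even" becomes a proof that Lean's kernel checks step by step:

```lean
import Mathlib

-- Human statement: "the sum of two even numbers is even."
-- Machine-checkable form: every step below is verified by the kernel.
theorem even_add_even {m n : ℕ} (hm : Even m) (hn : Even n) :
    Even (m + n) := by
  obtain ⟨a, ha⟩ := hm      -- hm unpacks to: m = a + a
  obtain ⟨b, hb⟩ := hn      -- hn unpacks to: n = b + b
  exact ⟨a + b, by omega⟩   -- witness: m + n = (a + b) + (a + b)
```

If any step were wrong, the file simply would not compile — that is the "compiler for mathematical truth" idea.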
1 · 32 · 146 · 7.1K
AlphaSignal AI @AlphaSignalAI
An AI model just helped build the next version of itself. MiniMax released M2.7, a coding agent that handles 30-50% of its own training workflow autonomously.
→ 56% on SWE-Pro, near top global models
→ 66.6% medal rate on ML research competitions
→ Fixes live production bugs in under 3 minutes
→ 97% skill adherence across 40+ complex tasks
The trick: it runs experiments, reads logs, debugs failures, then rewrites its own code over 100+ iteration loops. You can now point it at a real codebase and let it plan, execute, and self-correct end-to-end without babysitting every step.
opencode.ai/go
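The loop the post describes — run an experiment, read the failure log, rewrite, repeat — can be sketched in a few lines. Everything here is a stand-in (the functions are hypothetical, not the M2.7 API); it only shows the control flow:

```python
# Minimal sketch of an experiment -> read logs -> rewrite loop.
# run_experiment and propose_fix are illustrative stubs, not MiniMax's API.
def run_experiment(code):
    # Stand-in "training run": fails until the code contains the fix.
    return ("ok", None) if "fix" in code else ("error", "missing fix")

def propose_fix(code, log):
    # Stand-in for the model reading the log and rewriting its own code.
    return code + "  # fix: " + log

def self_correct(code, max_iters=100):
    for i in range(max_iters):
        status, log = run_experiment(code)
        if status == "ok":
            return code, i             # converged: experiments pass
        code = propose_fix(code, log)  # rewrite based on the failure log
    return code, max_iters

patched, iters = self_correct("train()")
```

The real system replaces both stubs with model calls and real test harnesses, but the plan-execute-self-correct shape is the same.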
0 · 1 · 8 · 394
AlphaSignal AI @AlphaSignalAI
Someone just built an OpenClaw robot that is shockingly aware.
> Tracks who visited a room and when
> Answers questions like "where did I last see my keys?"
> Understands cause and effect across hours of footage
Standard AI memory runs on text tokens; it has no sense of time or physical space. That breaks the moment you add video, depth sensors, or moving objects. OpenClaw solves this by storing space, objects, and time in a structured memory the robot can query later.
Robots usually operate only in the present moment, reacting to sensor input without remembering the past. Now they can build a running record of their environment and reason about it later. If this technology keeps evolving, robots could begin to understand environments, track events, and reason about the physical world in ways that previously required human perception.
Fully open-source.
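A structured space/object/time memory can be sketched as a log of timestamped observations plus a query over them. The names below are illustrative, not the OpenClaw API:

```python
from dataclasses import dataclass

# Sketch of a spatial-temporal memory: events carry a timestamp and a
# location, so "the past" becomes queryable. Illustrative, not OpenClaw.
@dataclass
class Event:
    t: float      # timestamp in seconds
    obj: str      # object or person observed
    place: str    # where it was seen

class SpatialMemory:
    def __init__(self):
        self.events = []

    def observe(self, t, obj, place):
        self.events.append(Event(t, obj, place))

    def last_seen(self, obj):
        # "Where did I last see my keys?" -> most recent matching event.
        hits = [e for e in self.events if e.obj == obj]
        return max(hits, key=lambda e: e.t).place if hits else None

mem = SpatialMemory()
mem.observe(10.0, "keys", "kitchen")
mem.observe(95.0, "alice", "hallway")
mem.observe(240.0, "keys", "desk")
```

The point is the data model: once observations are stored with time and place, questions about "who was where, when" reduce to simple queries instead of re-watching footage.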
2 · 1 · 9 · 596
AlphaSignal AI @AlphaSignalAI
OpenAI just shipped a full model tier for agents, and it's 2x faster. GPT-5.4 mini and nano are two small, cost-efficient models built for coding, computer use, and running as subagents inside larger pipelines.
→ 2x faster than GPT-5 mini
→ Nano at $0.05/M input tokens
→ Mini handles multimodal tasks natively
→ Both live in ChatGPT, Codex, and the API
The key shift: these aren't just smaller versions. They're optimized to operate inside multi-agent systems, where one model orchestrates others to complete tasks end to end. You can now build pipelines where fast, cheap subagents handle the heavy lifting without burning your budget on a flagship model.
openai.com/index/introduc…
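The orchestrator/subagent split reads like this in code. `call_model` is a stub standing in for an API call, and the model names are taken from the post for illustration, not verified identifiers:

```python
# Sketch of "flagship plans, cheap subagents execute". call_model is a
# stub; in a real pipeline it would be an API request.
def call_model(model, task):
    return f"[{model}] {task}"

def orchestrate(goal, subtasks):
    # One expensive planning call, then fan out to cheap subagents.
    plan = call_model("gpt-5.4", f"plan: {goal}")
    results = [call_model("gpt-5.4-nano", t) for t in subtasks]
    return plan, results

plan, results = orchestrate("ship feature", ["write tests", "fix lint"])
```

The budget math is the design point: if subtasks dominate the token count, pricing them at the nano tier is where the savings come from.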
1 · 0 · 10 · 595
AlphaSignal AI @AlphaSignalAI
Stop paying for cloud GPUs. You can now train LLMs locally with a full web UI. Unsloth AI just launched Unsloth Studio, an open-source tool to train and run models on your own machine.
→ 2x faster training, 70% less VRAM
→ 500+ supported models
→ Auto-builds datasets from PDFs, CSVs
→ Self-healing tool calling built-in
It works by combining optimized kernels with smart memory reuse, cutting overhead without touching model quality. Fine-tuning a custom model went from a cloud job to a single pip install.
github.com/unslothai/unsl…
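"Smart memory reuse" is easiest to see in a toy: rewrite one preallocated buffer in place instead of allocating a fresh buffer per layer. This is a pure-Python stand-in for what frameworks do with GPU tensors, not Unsloth's actual kernels:

```python
# Toy illustration of memory reuse: same outputs, lower peak buffer count.
def activations_naive(xs, n_layers):
    # One buffer allocated per layer: peak memory grows with depth.
    buffers = [[x * (i + 1) for x in xs] for i in range(n_layers)]
    return buffers[-1], len(buffers)

def activations_reused(xs, n_layers):
    # Single buffer rewritten in place: peak memory is one layer's worth.
    buf = list(xs)
    for i in range(n_layers):
        for j, x in enumerate(xs):
            buf[j] = x * (i + 1)
    return buf, 1

out_a, peak_a = activations_naive([1, 2], 3)
out_b, peak_b = activations_reused([1, 2], 3)
```

On a GPU the same trick — recomputing or overwriting intermediates instead of keeping all of them live — is a large part of where "70% less VRAM" claims typically come from.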
2 · 10 · 53 · 4.1K
AlphaSignal AI @AlphaSignalAI
xAI just shipped a TTS API that laughs, whispers, and sighs on command. Grok's Text-to-Speech API turns text into expressive audio with a single POST call.
→ 5 distinct voices, built-in
→ Inline tags: laughs, whispers, pauses, emphasis
→ 20+ auto-detected languages
→ From telephony (8kHz) to studio-grade (48kHz)
The whole voice stack was built in-house: VAD, tokenizer, and audio models trained from scratch. Building a voice app no longer means stitching three separate APIs together.
x.ai/api/voice#text-to-speech
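A "single POST call" API boils down to one JSON payload. The endpoint path, field names, and voice name below are assumptions for illustration — check x.ai's docs for the real schema — but the shape (text with inline tags, a voice, a sample rate) is what the post describes:

```python
# Hypothetical request builder for a one-POST TTS API.
# All field names and the endpoint are illustrative, not the documented x.ai API.
def build_tts_request(text, voice="ember", sample_rate=48000):
    return {
        "url": "https://api.x.ai/v1/voice",  # hypothetical endpoint
        "json": {
            "text": text,                    # inline tags ride along in the text
            "voice": voice,                  # one of the built-in voices
            "sample_rate": sample_rate,      # 8000 (telephony) .. 48000 (studio)
        },
    }

req = build_tts_request("[whispers] the demo works [laughs]")
```

The inline-tag design is the interesting bit: expressiveness lives in the text itself, so no second "style" API is needed.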
2 · 1 · 7 · 379
AlphaSignal AI reposted
Lior Alexander @LiorOnAI
Every foundation model you've ever used has the same bug. It just got fixed.
Since 2015, every deep network has been built the same way: each layer does some computation, adds its result to a running total, and passes it forward. Simple. But there's a problem: by layer 100, the signal from any single layer is buried under the sum of everything else. Each new layer matters less and less. Nobody fixed this because it worked well enough.
Moonshot AI just changed that. Their new method, Attention Residuals, lets each layer look back at all previous layers and choose which ones actually matter right now. Instead of a blind running total, you get selective retrieval.
The analogy: imagine writing an essay where every draft gets merged into one document automatically. By draft 50, your latest edits are invisible. AttnRes lets you keep every draft separate and pull from whichever ones you need.
What this fixes:
1. Deeper layers no longer get drowned out
2. Training becomes more stable across the whole network
3. The model uses its own depth more efficiently
To make it practical at scale, they group layers into blocks and attend over block summaries instead of every single layer. Overhead at inference: less than 2%. The result: 25% less compute to reach the same performance. Tested on a 48B-parameter model. Holds across sizes.
Residual connections have been invisible plumbing for a decade. Now they're becoming dynamic. The next generation of models won't just pass through their own layers; they'll search them.
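The "running total vs. selective retrieval" contrast fits in a few lines. Scalars stand in for hidden vectors, and the scoring function is illustrative, not Moonshot's — the point is only that softmax weights over previous layers can recover a signal a plain sum dilutes:

```python
import math

# Toy contrast: blind accumulation vs. input-dependent attention over layers.
def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def standard_residual(outputs):
    # Classic residual stream: plain sum, every layer weighted 1.
    return sum(outputs)

def attn_residual(outputs, scores):
    # Selective retrieval: weights depend on learned, input-dependent scores,
    # so a relevant early layer is not drowned out by depth.
    w = softmax(scores)
    return sum(wi * o for wi, o in zip(w, outputs))

outs = [1.0, 0.1, 0.1, 0.1]      # layer 0 carried the signal
scores = [3.0, 0.0, 0.0, 0.0]    # attention chooses to retrieve layer 0
```

With uniform (zero) scores, `attn_residual` degenerates to an average; with peaked scores it pulls layer 0 back out — the essay-drafts analogy in arithmetic form. Block AttnRes applies the same idea over block summaries to keep the cost under 2%.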
Kimi.ai@Kimi_Moonshot

Introducing Attention Residuals: Rethinking depth-wise aggregation. Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.
🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.
🔗 Full report: github.com/MoonshotAI/Att…

8 · 17 · 91 · 19.2K
AlphaSignal AI @AlphaSignalAI
GLM-5-Turbo just made long-running AI agents dramatically faster. Z.ai released a speed-optimized variant of GLM-5, built specifically for multi-step agentic workflows.
> #1 open-source on agentic tasks
> 77.8% on SWE-bench (real coding tasks)
> 92.7% on AIME 2026 (hard math)
> Closes the gap with Claude Opus 4.5
The base model is a 744B MoE architecture with only 40B active parameters at inference time. It uses DeepSeek Sparse Attention to cut deployment costs while keeping a 200K context window intact. The Turbo variant pushes this further for speed.
OpenClaw is an agent framework that runs multi-step tasks across tools, files, and terminals. Before this, fast and capable was a tradeoff. Now you can plug a frontier-level model directly into long-horizon agent pipelines without sacrificing response speed.
It ships on OpenRouter and via API today, closed-source for now, with all findings going into the next open-source release.
docs.google.com/forms/d/e/1FAI…
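The "744B total, 40B active" arithmetic comes from MoE routing: a router picks the top-k experts per token, so most weights sit idle on any one forward pass. A toy version, with invented numbers and a keyword-free scoring stub:

```python
# Toy top-k expert routing: only the selected experts' parameters are "active".
def route(scores, k=2):
    # Indices of the k highest-scoring experts.
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

params_per_expert = 11                      # illustrative unit, not real sizes
scores = [0.1, 2.0, 0.3, 1.5] + [0.0] * 60  # router logits for 64 experts
active = route(scores, k=2)
active_params = len(active) * params_per_expert
```

Total parameters scale with the number of experts, but per-token compute scales with k — which is why a 744B model can price and run like a much smaller one.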
1 · 0 · 6 · 639
AlphaSignal AI @AlphaSignalAI
Google open-sourced the Agent Development Kit, pairing it with Gemini 3.1 Flash-Lite. Together, they power a memory agent that runs 24/7 as a background process.
> No vector database needed
> Supports 27 file types
> Auto-consolidates every 30 min
> Answers with source citations
The trick is splitting work into three specialized agents:
1. Ingest: drop any file in an inbox folder and it extracts entities, topics, and importance scores automatically
2. Consolidate: every 30 minutes, it links related memories and generates new insights
3. Query: ask a question, get an answer with source citations from stored memories
Before this, you had two bad options: RAG systems that just retrieve without thinking, or summaries that lose detail over time. You can now build apps where the AI actually remembers what it learned last week.
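The three-stage split above can be sketched with plain functions standing in for the ADK's agent classes — the extraction and linking logic here is a toy, not Gemini:

```python
# Sketch of ingest -> consolidate -> query over a shared memory store.
memories = []

def ingest(doc):
    # Stand-in extraction: every capitalized word becomes an "entity".
    entities = [w.strip(".,") for w in doc.split() if w[:1].isupper()]
    memories.append({"doc": doc, "entities": entities, "links": []})

def consolidate():
    # Link memories that share an entity (the post's every-30-min pass).
    for i, a in enumerate(memories):
        for j, b in enumerate(memories):
            if i < j and set(a["entities"]) & set(b["entities"]):
                a["links"].append(j)
                b["links"].append(i)

def query(entity):
    # Answer with source citations: the stored docs mentioning the entity.
    return [m["doc"] for m in memories if entity in m["entities"]]

ingest("Ada joined the Compiler team.")
ingest("The Compiler team ships in March.")
consolidate()
```

The consolidation pass is what distinguishes this from plain RAG: links and derived insights accumulate between queries, instead of being recomputed from raw chunks every time.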
3 · 3 · 33 · 2.7K
AlphaSignal AI @AlphaSignalAI
Stop switching between 10 AI tools. One agent just replaced all of them. Perplexity launched Computer for Enterprise: a multi-step AI agent that routes tasks across 20 specialized models and connects to 400+ apps.
> Research, coding, design, deployment
> CB Insights, PitchBook, Statista built in
> Lives natively inside Slack
> Your org's data is never used for training
The core idea is simple: instead of one model doing everything poorly, Computer picks the best model per subtask. Coding goes to one model, research to another, images to another. One instruction, one unified output.
Before this, enterprise teams stitched together separate tools, separate contexts, separate outputs. Now a single prompt can trigger a full workflow, shared across your team in Slack, backed by premium data sources, and locked inside your existing security perimeter.
perplexity.ai/computer
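"Best model per subtask" is a routing table plus a classifier. The table below is invented for illustration — Computer's real routing is internal — but it shows the control flow:

```python
# Toy per-subtask router: classify each task, dispatch to that lane's model.
ROUTES = {"code": "code-model", "research": "research-model", "image": "image-model"}

def classify(task):
    # Keyword lookup stands in for a learned router.
    for kind in ROUTES:
        if kind in task:
            return kind
    return "research"  # default lane

def run(tasks):
    # One instruction in, one unified assignment map out.
    return {t: ROUTES[classify(t)] for t in tasks}

assignments = run(["write code for parser", "research market size"])
```

The unglamorous part a real system adds is merging the per-model outputs back into one coherent result — that, plus the shared security perimeter, is most of the enterprise value.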
2 · 2 · 16 · 1.4K
AlphaSignal AI @AlphaSignalAI
Alibaba open-sourced Page Agent, a JavaScript library that gives any website a natural-language operator.
> Pure DOM manipulation, no vision
> One script tag to deploy
> Bring your own LLM
> Built-in human approval step
Traditional browser agents take screenshots and guess what to click. This one reads the DOM directly, like a developer inspecting the page, so it's faster and more precise.
Before this, adding natural-language control to a web app meant building a backend service, managing a separate Python process, and handling browser automation infrastructure yourself. Now it's a script tag. Any SaaS tool, internal dashboard, or legacy enterprise panel gets a natural-language layer immediately.
1.6K stars. 100% open source. 3 lines of code.
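"Reads the DOM directly" means locating elements from the markup itself rather than guessing from pixels. A stdlib sketch (Python's `html.parser` standing in — Page Agent itself runs in the browser in JavaScript): find the clickable element whose text matches the user's intent.

```python
from html.parser import HTMLParser

# Find a button by its visible text straight from the markup — the
# DOM-reading approach, as opposed to screenshot-and-guess.
class ButtonFinder(HTMLParser):
    def __init__(self, target):
        super().__init__()
        self.target = target      # the text the agent wants to click
        self.in_button = False
        self.found_id = None
        self._id = None

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self.in_button = True
            self._id = dict(attrs).get("id")

    def handle_data(self, data):
        if self.in_button and data.strip() == self.target:
            self.found_id = self._id  # exact element, no vision model needed

    def handle_endtag(self, tag):
        if tag == "button":
            self.in_button = False

page = '<div><button id="save-btn">Save</button><button id="x">Close</button></div>'
finder = ButtonFinder("Save")
finder.feed(page)
```

Because the match is against real element IDs and text, the resulting click is deterministic — which is also what makes a built-in human approval step cheap to add: the agent can show exactly which element it is about to act on.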
2 · 15 · 77 · 6.5K