Giorgio Robino
@solyarisoftware
29.4K posts

Conversational LLM-based Applications Specialist @almawave | Former ITD-CNR Researcher | Soundscapes (Orchestral) Composer.

Genova, Italy · Joined April 2009
4.4K Following · 3.2K Followers

Pinned Tweet
Giorgio Robino @solyarisoftware
My preprint "Conversation Routines: A Prompt Engineering Framework for Task-Oriented Dialog Systems" now has a revised version on @arXiv with updated experimental results. Here’s a thread with the changes! 🧵 ➡️
Paper: arxiv.org/abs/2501.11613
1/ What’s CR?
Giorgio Robino retweeted
Rohan Paul @rohanpaul_ai
Research proves that current AI agent groups cannot reliably coordinate or agree on simple decisions.

Building teams of AI agents that can consistently agree on a final decision is surprisingly difficult for LLMs. But the problem is that developers frequently assume that if you have enough AI agents working together, they will eventually figure out how to solve a problem by talking it through. This paper shows that this assumption is currently wrong.

Even in a friendly environment where every agent is trying to help, the team often gets stuck or stops responding entirely. Because this happens more often as the group gets bigger, it means we cannot yet trust these agent systems to handle tasks where they must agree on a correct answer.

Paper Link: arxiv.org/abs/2603.01213
Paper Title: "Can AI Agents Agree?"
Giorgio Robino retweeted
Sebastián Ramírez @tiangolo
Install the library skills bundled with your dependencies (like FastAPI) for your coding agent 🤖 In Python or Node.js, both versions support both ecosystems ✨ github.com/tiangolo/libra…
Giorgio Robino retweeted
elvis @omarsar0
// Recursive Multi-Agent Systems //

Great read for the weekend. (bookmark it)

Multi-agent systems often pass full text messages between agents at every step. This leads to token bloat, latency, and context dilution, which all grow with the number of agents.

RecursiveMAS asks a different question: what if agents collaborated through recursive computation in a shared latent space, instead of through text?

A multi-agent system can be treated as a recursive computation, where each agent acts like an RLM layer, iteratively passing latent representations to the next and forming a looped interaction process. They introduce a RecursiveLink module that generates latent thoughts and transfers state directly between heterogeneous agents, plus an inner-outer loop learning algorithm with shared gradient-based credit assignment across the team.

Think of it as agents passing notes in their own internal language instead of rewriting everything in English each turn. Less talking, more thinking.

The numbers are strong. Across 9 benchmarks spanning math, science, medicine, search, and code generation: 8.3% average accuracy gain over baselines, 1.2×–2.4× end-to-end inference speedup, and 34.6%–75.6% reduction in token usage.

Why does it matter? If agent-to-agent communication is the next real bottleneck (and it is), latent-space recursion is one of the cleaner ways to scale collaboration without paying a token tax for every coordination step.

Paper: arxiv.org/abs/2604.25917
Learn to build effective AI agents in our academy: academy.dair.ai
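To make the latent-handoff idea concrete, here is a minimal toy sketch (not the paper's RecursiveMAS or RecursiveLink code; all names, shapes, and loop counts are illustrative assumptions): each agent is a small network that consumes and emits a latent vector, and a linear "link" projects state from one agent into the next instead of decoding to text.

```python
# Toy sketch of latent-space agent collaboration (illustrative only;
# not the RecursiveMAS implementation from arxiv.org/abs/2604.25917).
import torch
import torch.nn as nn

class ToyAgent(nn.Module):
    """Stand-in for an agent's internal computation over a latent state."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.body(h)

class ToyLink(nn.Module):
    """Hypothetical RecursiveLink-style bridge: maps one agent's latent
    state into the next agent's input space, instead of emitting tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.proj(h)

dim, n_agents, n_rounds = 64, 3, 2
agents = nn.ModuleList([ToyAgent(dim) for _ in range(n_agents)])
links = nn.ModuleList([ToyLink(dim) for _ in range(n_agents)])

h = torch.randn(1, dim)  # initial task encoding (would come from an encoder)
for _ in range(n_rounds):             # looped ("recursive") interaction
    for agent, link in zip(agents, links):
        h = link(agent(h))            # pass latents, never tokens

print(h.shape)  # final team state, ready for a task head to decode once
```

Because the whole loop is differentiable, a single task loss can back-propagate through every agent, which is roughly the shared gradient-based credit assignment the tweet alludes to.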
Giorgio Robino retweeted
Mario Zechner @badlogicgames
People of pi.dev. As a weekend gift, we added @XiaomiMiMo Token Plan as a first class provider. I also made some breaking changes for the better. If you have custom providers and models, point pi at the changelog so it can fix them up for you. This will be a recurring theme in the coming days and weeks. We'll get through it together.
Giorgio Robino retweeted
Qwen @Alibaba_Qwen
Today we’re releasing Qwen-Scope 🔭, an open suite of sparse autoencoders for the Qwen model family. It turns SAE features into practical tools:

🎯 Inference — Steer model outputs by directly manipulating internal features, no prompt engineering needed
📂 Data — Classify & synthesize targeted data with minimal seed examples, boosting long-tail capabilities
🏋️ Training — Trace code-switching & repetitive generation back to their source, fix them at the root
📊 Evaluation — Analyze feature activation patterns to select smarter benchmarks and cut redundancy

We hope the community uses Qwen-Scope to uncover new mechanisms inside Qwen models and build applications beyond what we explored. Excited to see what you build! 🚀

🔗 Blog: qwen.ai/blog?id=qwen-s…
🔗 HuggingFace: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Technical Report: …anwen-res.oss-accelerate.aliyuncs.com/qwen-scope/Qwe…
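Feature steering with an SAE usually follows one generic pattern, sketched below. This is an illustrative assumption, not the Qwen-Scope API: the class, the feature index, and the clamp value are all hypothetical, and the real toolkit should be consulted for its actual interfaces.

```python
# Generic SAE feature-steering pattern (illustrative assumption only;
# see Qwen-Scope's own docs for its actual API and feature indices).
import torch
import torch.nn as nn

class ToySAE(nn.Module):
    """Minimal sparse autoencoder over a residual-stream activation."""
    def __init__(self, d_model: int, d_feats: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_feats)
        self.dec = nn.Linear(d_feats, d_model)

    def features(self, h: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(h))  # sparse feature activations

    def reconstruct(self, f: torch.Tensor) -> torch.Tensor:
        return self.dec(f)

def steer(h: torch.Tensor, sae: ToySAE, feat_idx: int, value: float) -> torch.Tensor:
    """Clamp one interpretable feature and rebuild the activation.
    The edited activation is then fed back into the model via a hook."""
    f = sae.features(h)
    error = h - sae.reconstruct(f)   # keep what the SAE fails to explain
    f[..., feat_idx] = value         # the actual intervention
    return sae.reconstruct(f) + error

sae = ToySAE(d_model=896, d_feats=16384)
h = torch.randn(1, 896)                  # activation captured by a forward hook
h_steered = steer(h, sae, feat_idx=42, value=8.0)
print((h_steered - h).norm())            # nonzero: the feature edit took effect
```

The reconstruction-error term is the standard trick that keeps the intervention surgical: only the chosen feature moves, while everything the SAE cannot explain passes through untouched.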
Giorgio Robino retweeted
David Hendrickson @TeksEdge
☀️ Qwen just dropped something big for personal AI.

✨ They released Qwen-Scope, the first major open Sparse Autoencoder (SAE) toolkit for real models.

💡 Instead of wrestling with prompts, you can now directly steer Qwen models by manipulating internal features.

Why this matters?
🧠 Precise, reliable control when running models locally
🛠️ Fix repetition, hallucinations & bad behaviors at the source
📊 Smarter data synthesis and evaluation
🚀 A real step toward controllable, sovereign personal agents

This is unique, as no other top lab has open-sourced practical tools for mechanistic control of open models like this (that I know of).

The future of personal AI isn’t just bigger models. It’s controllable ones. Qwen-Scope just took a huge leap forward. 🔥
Quoting Qwen @Alibaba_Qwen (tweet above)
Giorgio Robino retweeted
Richard Palethorpe @jichiep
New model release! LocalVQE: Tiny ~1M param audio model that cancels echo, noise and reverberations in real-time and comes with a @ggml_org implementation out of the gate.
Giorgio Robino retweeted
Kun Chen @kunchenguid
gnhf 0.1.27+ now supports the Pi agent harness! thanks to a contribution PR github.com/kunchenguid/gn…
Giorgio Robino retweeted
antirez @antirez
Europe's AI strategy should be to specialize in AI inference and improvement of large open weight models, while we try to close the GPU / companies gap to have a viable internal path. A large Chinese open weight model that works is simply better than a weak European-trained one.
Giorgio Robino retweeted
elvis @omarsar0
// Agentic Harness Engineering //

Pay attention to this one, AI devs. (bookmark it)

Most coding-agent harnesses are still tuned by hand or brittle trial-and-error self-evolution. This new work introduces Agentic Harness Engineering, a framework that makes harness evolution observable.

They do this through three layers: components as revertible files, experience as condensed evidence from millions of trajectory tokens, and decisions as falsifiable predictions checked against task outcomes. Each edit becomes a contract you can verify or revert.

Results: pass@1 on Terminal-Bench 2 climbs from 69.7% to 77.0% in ten iterations, beating human-designed Codex-CLI (71.9%) and self-evolving baselines like ACE and TF-GRPO. The evolved harness also transfers across model families with +5.1 to +10.1 point gains, while using 12% fewer tokens than the seed on SWE-bench-verified.

Harness work is the biggest hidden cost in most agent systems. This is the first credible recipe for letting the harness improve itself without drifting into noise.

Paper: arxiv.org/abs/2604.25850
Learn to build effective AI agents in our academy: academy.dair.ai
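The "edits as falsifiable predictions" idea maps naturally onto a tiny record-and-revert loop. A minimal sketch, assuming a hypothetical harness stored as plain files and a user-supplied eval function; none of this is the paper's code, and every name here is invented for illustration:

```python
# Minimal "edit as falsifiable contract" loop (illustrative assumption;
# not the framework from arxiv.org/abs/2604.25850).
import shutil
import tempfile
from dataclasses import dataclass
from pathlib import Path
from typing import Callable

@dataclass
class HarnessEdit:
    path: Path                  # harness component stored as a plain file
    new_text: str               # proposed revision
    predicted_pass_rate: float  # falsifiable prediction attached to the edit

def apply_with_contract(edit: HarnessEdit,
                        evaluate: Callable[[], float]) -> bool:
    """Apply an edit, test its prediction, revert if falsified."""
    backup = edit.path.read_text()          # components are revertible files
    edit.path.write_text(edit.new_text)
    observed = evaluate()                   # e.g. pass@1 on a task suite
    if observed < edit.predicted_pass_rate:
        edit.path.write_text(backup)        # prediction falsified: revert
        return False
    return True                             # prediction held: keep the edit

# Toy usage with a stubbed benchmark score.
workdir = Path(tempfile.mkdtemp())
prompt_file = workdir / "system_prompt.md"
prompt_file.write_text("You are a careful coding agent.")
edit = HarnessEdit(prompt_file,
                   "You are a careful coding agent. Run tests first.",
                   predicted_pass_rate=0.7)
kept = apply_with_contract(edit, evaluate=lambda: 0.77)
print("kept edit:", kept)
shutil.rmtree(workdir)
```

The point of the contract shape is that every harness change leaves behind a checkable prediction, so evolution can be audited instead of drifting on vibes.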
Giorgio Robino retweeted
Alex Prompter @alex_prompter
Both OpenAI and Anthropic just released official prompting guides. Both say the same thing. Your old prompts don’t work anymore. But for opposite reasons.

Claude Opus 4.7 stopped guessing what you meant. It does exactly what you type. Nothing more, nothing less. Vague instructions that worked on 4.6? They now produce narrow, literal, sometimes worse results. Not because the model got dumber. Because it stopped compensating for sloppy thinking.

GPT-5.5 went the other direction. OpenAI’s guide literally says: “Don’t carry over instructions from older prompt stacks.” Legacy prompts over-specify the process because older models needed hand-holding. GPT-5.5 doesn’t. That extra detail now creates noise and produces mechanical output.

Claude got more literal. GPT got more autonomous. Both now punish the same thing: prompts written without clear thinking behind them.

One developer on Reddit captured it perfectly after analyzing hundreds of community posts. The complaints tracked almost perfectly with prompt specificity. Precise prompts got better results on 4.7. Vague prompts got worse. The model didn’t regress. The prompts did.

OpenAI’s new framework is “outcome-first prompting.” Describe what good looks like. Define success criteria. Set constraints. Then get out of the way. The model picks the path. Anthropic’s framework is the inverse: be surgically specific about what you want, because the model won’t fill in your blanks anymore.

Two different architectures. Two different philosophies. One identical conclusion: the person writing the prompt is now the bottleneck, not the model.

Boris Cherny, the engineer who built Claude Code, posted on launch day that even he needed a few days to adjust. That post got 936 likes. Meanwhile, Anthropic increased rate limits for all subscribers because the new tokenizer uses up to 35% more tokens on the same input. The model is more expensive to run lazily. Cheaper to run precisely.

The models are converging in capability. The gap between good and bad output is no longer about which model you pick. It’s about the 2 minutes of structured thinking you do before you type anything. That thinking system is the skill. The prompt is just what it produces.
Giorgio Robino retweeted
Jerry Liu @jerryjliu0
This is really well thought out. Filesystems are the new default abstraction for agents to interact with documents (the new RAG stack in 2026). The issue is actually figuring out how to productize this; you can't "productize" Claude Code over a local file system. Seems like this tool has all the semantics of filesystems with the versioning of git
Oliver @olvrgln

Introducing Mesa: the most powerful filesystem ever built, designed specifically for enterprise AI agents.

Every team building agents eventually hits the same wall: where do the files live? Not the chat history, the actual artifacts the agent works on.

> The contracts your agent redlined
> The claim files it updated
> The 200-page audit report it edited overnight while you were asleep

Today those documents live in a sandbox that dies in 30 minutes, an S3 bucket where concurrent writes clobber each other, or a GitHub repo that was never built to absorb agent-scale traffic.

So we built Mesa. The world's first POSIX-compatible filesystem with built-in version control, designed from the ground up for agents. You mount it into your sandbox like any other filesystem. Your agent reads and writes files normally. Behind the scenes every change is versioned, branchable, reviewable, and rollback-able — like a codebase, for any file type.

Mesa provides
– Branches so agents work in parallel without locking
– Durable storage that survives sandbox death
– Sparse materialization so massive document sets load instantly
– Fine-grained access control per agent
– Full history for human review and audit

Design partners are running Mesa in production across legal, healthcare, GTM, business ops, and coding agents. Private beta is open: link in the comments

Giorgio Robino retweeted
Jo Kristian Bergum @jobergum
Progressive disclosure and skills are a big part of agent harness engineering. Effective retrieval over skills is going to be big; it might even become "web scale".

"Under this paradigm, we propose Skill Retrieval Augmented Agents (SR-Agents), which dynamically retrieve and use relevant skills from large-scale skill corpora to expand their problem-solving capabilities" arxiv.org/abs/2604.24594…
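A minimal sketch of what retrieval over skills can look like, assuming a toy TF-IDF index over skill descriptions. The SR-Agents paper may use an entirely different retriever; the corpus, names, and helper below are illustrative only.

```python
# Toy skill retrieval for an agent (illustrative; not the SR-Agents code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A "skill corpus": name -> short description the retriever matches against.
skills = {
    "csv-analysis": "Load a CSV file, compute summary statistics, plot columns.",
    "web-scrape": "Fetch a web page, parse HTML, extract structured fields.",
    "pdf-extract": "Read a PDF, pull out tables and text for downstream use.",
    "sql-query": "Translate a question into SQL and run it against a database.",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(skills.values())

def retrieve_skills(task: str, k: int = 2) -> list[str]:
    """Return the k skill names most relevant to the task description."""
    scores = cosine_similarity(vectorizer.transform([task]), matrix)[0]
    ranked = sorted(zip(skills, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Only the retrieved skills' docs get loaded into the agent's context
# (progressive disclosure), instead of shipping the whole corpus every turn.
print(retrieve_skills("summarize sales figures from quarterly_report.csv"))
```

Swap the TF-IDF index for dense embeddings and a vector store and the same shape scales to the "web scale" skill corpora the tweet anticipates.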
Giorgio Robino retweeted
Piotr Żelasko @PiotrZelasko
Today we released Nemotron-3-Nano-Omni-30B-A3B - our first Omni model, with speech and audio understanding capabilities powered by parakeet-tdt-0.6b-v2 encoder.
🫡 1st position on VoiceBench
🌏 English only
🎙️ 5.95% WER on Open ASR Leaderboard
📽️ Video+audio understanding
Giorgio Robino retweeted
Sumanth @Sumanth_077
Open protocol for AI agent perception!

World2Agent (W2A) standardizes how AI agents perceive the real world. Install a sensor, your agent gets structured, real-time data. Swap sensors freely - they all speak the same schema.

The problem: Every agent has its own way of watching data sources. You build custom integrations for Hacker News, market data, production alerts, weather APIs. None of it is portable. When you switch agent frameworks, you rebuild everything.

W2A fixes this with a standard protocol. Sensors watch data sources and emit structured signals. Your agent receives these signals and decides what to do.

The architecture is three layers: World (data sources) → Sensor (watches and structures) → Agent (receives and acts)

Sensors are distributed as npm packages. Need production alerts? Install sensor-prod-alerts. Need market data? Install sensor-markets. Each sensor emits signals in the same schema.

Anyone can build a sensor. A Hacker News sensor is about 50 lines - poll the HN API, structure the data into W2A signals, emit. Ship it to npm and it's installable by any agent.

It also comes with SensorHub where you can browse sensors by category (markets, news, production, weather, AI labs), view their signal schemas, and install. Integration works through plugins for Claude Code or direct SDK for custom runtimes. Run multiple sensors simultaneously - your agent sees all signals in real time.

Why this matters: Agent perception is fragmented. Every framework reinvents the same integrations. W2A creates a standard layer. Build a sensor once, it works everywhere. Switch agent runtimes, your sensors come with you.

It's 100% open source. Link to the GitHub repo in the replies!
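The sensor → signal → agent shape is easy to picture in code. A rough Python sketch under stated assumptions: the real W2A sensors ship as npm packages with their own schema, so the dataclass fields below are illustrative and only the public Hacker News endpoints are real.

```python
# Rough sketch of the sensor -> signal -> agent shape (illustrative only;
# the real W2A schema and sensors live in its npm packages).
import json
import urllib.request
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    """Stand-in for a W2A-style structured signal."""
    source: str    # which sensor emitted it
    kind: str      # category, e.g. "news.item"
    payload: dict  # structured data, same shape regardless of source

def hn_sensor(emit: Callable[[Signal], None], limit: int = 3) -> None:
    """Toy Hacker News sensor: poll the public API, emit structured signals."""
    top = json.load(urllib.request.urlopen(
        "https://hacker-news.firebaseio.com/v0/topstories.json"))
    for item_id in top[:limit]:
        item = json.load(urllib.request.urlopen(
            f"https://hacker-news.firebaseio.com/v0/item/{item_id}.json"))
        emit(Signal("hn", "news.item",
                    {"title": item.get("title"), "url": item.get("url")}))

def agent_on_signal(sig: Signal) -> None:
    """The agent only sees the shared schema, never the raw source."""
    print(f"[{sig.source}/{sig.kind}] {sig.payload['title']}")

hn_sensor(agent_on_signal)  # any other sensor would plug in identically
```

Because the agent callback depends only on the shared Signal shape, a markets or weather sensor could replace the HN one without touching the agent at all, which is the portability claim in a nutshell.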
Giorgio Robino retweeted
hardmaru @hardmaru
For the past few years, humans have been doing “prompt engineering” to coax the best performance out of different LLMs. In this work, we explored what happens if we train an AI to do that job instead.

By training a Conductor model with RL, we found that it naturally learns to write highly effective, custom instructions for a whole pool of other models. It essentially learns to ‘manage’ them in natural language.

What surprised me most was how it dynamically adapts. For simple factual questions, it just queries one model. But for hard coding problems, it autonomously spins up a whole pipeline of planners, coders, and verifiers.

Really excited to see where this paradigm of “AI managing AI” goes next, especially as we start moving from single-agent chain-of-thought to multi-agent “chain-of-command”.

Link to our #ICLR2026 paper: arxiv.org/abs/2512.04388

Along with our TRINITY paper which we announced earlier, this work also powers our new multi-agent system: Sakana Fugu (sakana.ai/fugu-beta) 🐡
Sakana AI @SakanaAILabs

Introducing our new work: “Learning to Orchestrate Agents in Natural Language with the Conductor” accepted at #ICLR2026 arxiv.org/abs/2512.04388

What if we trained an AI not to solve problems directly, but to act as a manager that delegates tasks to a diverse team of other AIs?

To solve complex tasks, humans rarely work alone; we form teams, delegate, and communicate. Yet, multi-agent AI systems currently rely heavily on rigid, human-designed workflows or simple routers that just pick a single model. We wanted an AI that could dynamically build its own team.

We trained a 7B Conductor model using Reinforcement Learning to orchestrate a pool of frontier models (including GPT-5, Gemini, Claude, and open-source models available during the period leading up to ICLR 2026). Instead of executing code, the Conductor outputs a collaborative workflow in natural language. For any given question, the Conductor specifies:
1/ Which agent to call
2/ What specific subtask to give them (acting as an expert prompt engineer)
3/ What previous messages they can see in their context window

Through pure end-to-end reward maximization, amazing behaviors emerged. The Conductor learned to adapt to task difficulty: it 1-shots simple factual questions, but autonomously spins up complex planner-executor-verifier pipelines for hard coding problems.

The results are very promising: the 7B Conductor surpasses the performance of every individual worker model in its pool, setting new records on LiveCodeBench (83.9%) and GPQA-Diamond (87.5%) at the time of publication. It also significantly outperforms expensive multi-agent baselines like Mixture-of-Agents at a fraction of the cost.

One of our favorite features: Recursive Test-Time Scaling! By allowing the Conductor to select itself as a worker, it reads its own team's prior output, realizes if it failed, and spins up a corrective workflow on the fly. This opens a new axis for scaling compute during inference.

This research proves that language models can become elite meta-prompt engineers, dynamically harnessing collective intelligence.

Alongside our TRINITY research which we announced a few days earlier, this foundational research powers our new multi-agent system: Sakana Fugu! (sakana.ai/fugu-beta) 🐡

OpenReview: openreview.net/forum?id=U23A2… (ICLR 2026)

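The Conductor's output is essentially a plan: which worker, what subtask, what context it may see. A minimal dispatch-loop sketch under that reading; the model calls are stubbed, and the step schema and function names are assumptions for illustration, not Sakana's code:

```python
# Minimal conductor-style dispatch loop (illustrative assumption only;
# not the implementation behind arxiv.org/abs/2512.04388).
from dataclasses import dataclass

@dataclass
class Step:
    agent: str           # which worker model to call
    subtask: str         # custom instruction written for that worker
    visible: list[int]   # indices of prior messages the worker may see

def call_worker(agent: str, prompt: str) -> str:
    """Stub for an LLM API call to the named worker model."""
    return f"<{agent} answer to: {prompt[:40]}...>"

def run_workflow(question: str, plan: list[Step]) -> str:
    transcript = [question]
    for step in plan:
        # The conductor controls each worker's context window explicitly.
        context = "\n".join(transcript[i] for i in step.visible)
        reply = call_worker(step.agent, f"{step.subtask}\n\n{context}")
        transcript.append(reply)
    return transcript[-1]

# A hand-written plan standing in for what the trained Conductor would emit.
plan = [
    Step("planner-model", "Break this problem into steps.", visible=[0]),
    Step("coder-model", "Implement the plan as Python.", visible=[0, 1]),
    Step("verifier-model", "Check the code against the question.", visible=[0, 2]),
]
print(run_workflow("Write a function that merges two sorted lists.", plan))
```

The trained system differs in one key way: the plan is not hand-written but generated by the RL-trained 7B Conductor, which can even schedule itself as a worker to review and correct its own team's output.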
Giorgio Robino retweeted
Daily Dose of Data Science @DailyDoseOfDS_
Vibe train your AI agents.

This new method can replace LLM-as-a-judge for production agents.

Most teams point a giant LLM at their agent's output and call it evaluation. It works, but it comes with two real costs:
- It's slow and expensive at inference time
- It misses the domain-specific failures that actually matter to your use case

Vibe training flips the whole setup. Researchers at Plurai distill a small language model that's specialized for your agent's exact behavior, your edge cases, and your failure modes. The SLM becomes your evaluator and your runtime guardrail in one.

Here's why this is a big deal:
- Cheap enough to run inline on every agent step, not just offline batches
- Catches the failures that generic LLM judges shrug off
- Same model guards production and grades it, so eval and runtime stay in sync

A small specialized model beating a giant general one is becoming a pattern. Distillation is quietly turning into one of the most underrated techniques for shipping reliable agents.

Try it here: plurai.ai/launch
Paper: plurai.ai/papers
Ilan Kadar @ilan_kadar

Big day for us, finally sharing what we’ve been cooking for a while.

Over the past year, we kept seeing the same pattern: AI agents look great in demos, until real users break them.

Today, we’re fixing that with vibe-training to build real-time, tailored evals and guardrails for your agents, in minutes. Define your intent with a prompt or a few examples. We generate edge-case datasets, and train a model aligned to your use case, outperforming state-of-the-art LLMs at a fraction of the cost. (Research paper with benchmarks in the comments)

If you’re building AI agents, don’t let your users be the ones who discover the failures. Be the one who makes AI agents reliable in production and takes control at scale.

Start vibe-training for free: plurai.ai/launch

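The "evaluator and guardrail in one" loop is simple to picture. A minimal sketch, assuming a generic distilled judge behind a score_step function; Plurai's actual product and API are not shown here, and every name in this snippet is illustrative:

```python
# Inline SLM guardrail over agent steps (illustrative sketch only;
# not Plurai's implementation or API).
from dataclasses import dataclass

@dataclass
class Verdict:
    score: float   # 0.0 (bad) .. 1.0 (good), from the distilled judge
    reason: str

def score_step(action: str, context: str) -> Verdict:
    """Stub for the distilled small judge model. In a real system this is
    one cheap SLM forward pass, which is what makes per-step checks viable."""
    risky = "delete" in action.lower()
    return Verdict(0.1 if risky else 0.9,
                   "destructive action" if risky else "ok")

def guarded_step(action: str, context: str, threshold: float = 0.5) -> bool:
    """Run the guardrail inline: allow the step, or block it with a reason."""
    v = score_step(action, context)
    if v.score < threshold:
        print(f"BLOCKED: {action!r} ({v.reason})")
        return False
    print(f"allowed: {action!r}")
    return True

# The same judge that grades offline evals gates live steps,
# so evaluation and runtime behavior never drift apart.
guarded_step("summarize the user's invoice", context="billing task")
guarded_step("delete all customer records", context="billing task")
```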