Bloop but now BURN

2K posts


@Economini

"If you don’t believe it or don’t get it, I don’t have the time to try to convince you, sorry." Satoshi Nakamoto

Joined February 2021
1.3K Following · 77 Followers
Bloop but now BURN@Economini·
@omarsar0 This is the right direction. The future is not just better agents, but governed harnesses around them. AIOS explores this from the runtime-control side: planner, engine, registry, guard, result handling. Observable. Auditable. Blockable before execution. github.com/Podcast72/roby…
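The "blockable before execution" idea above can be sketched in a few lines. This is a minimal toy, with hypothetical names; nothing here is taken from the RobyRoy AIOS repository:

```python
class Guard:
    """Checks a planned tool call against a policy before it runs."""
    def __init__(self, blocked_tools):
        self.blocked_tools = set(blocked_tools)
        self.audit_log = []  # every decision is recorded, so runs are auditable

    def check(self, tool_name, args):
        allowed = tool_name not in self.blocked_tools
        self.audit_log.append({"tool": tool_name, "args": args, "allowed": allowed})
        return allowed

def run_plan(plan, guard, registry):
    """Execute planner steps only after the guard approves each one."""
    results = []
    for tool_name, args in plan:
        if not guard.check(tool_name, args):   # blocked *before* execution
            results.append((tool_name, "BLOCKED"))
            continue
        results.append((tool_name, registry[tool_name](args)))
    return results

# Toy registry standing in for real tool bindings.
registry = {"read_file": lambda a: f"contents of {a}",
            "delete_file": lambda a: f"deleted {a}"}
guard = Guard(blocked_tools=["delete_file"])
plan = [("read_file", "notes.txt"), ("delete_file", "notes.txt")]
print(run_plan(plan, guard, registry))
```

The point of the sketch is the ordering: the guard sees and logs every planned call before anything runs, which is what makes the harness observable and auditable rather than merely reactive.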
English
0
0
0
4
elvis@omarsar0·
// Agentic Harness Engineering //

Pay attention to this one, AI devs. (bookmark it)

Most coding-agent harnesses are still tuned by hand or by brittle trial-and-error self-evolution. This new work introduces Agentic Harness Engineering, a framework that makes harness evolution observable.

They do this through three layers: components as revertible files, experience as condensed evidence from millions of trajectory tokens, and decisions as falsifiable predictions checked against task outcomes. Each edit becomes a contract you can verify or revert.

Results: pass@1 on Terminal-Bench 2 climbs from 69.7% to 77.0% in ten iterations, beating the human-designed Codex-CLI harness (71.9%) and self-evolving baselines like ACE and TF-GRPO. The evolved harness also transfers across model families with +5.1 to +10.1 point gains, while using 12% fewer tokens than the seed on SWE-bench-verified.

Harness work is the biggest hidden cost in most agent systems. This is the first credible recipe for letting the harness improve itself without drifting into noise.

Paper: arxiv.org/abs/2604.25850

Learn to build effective AI agents in our academy: academy.dair.ai
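The "edit as a falsifiable contract" idea (revertible components, predictions checked against outcomes) can be illustrated with a toy reconstruction; the function names, evaluator, and numbers below are my own illustration, not the paper's API:

```python
import copy

def apply_edit(harness, edit, predicted_pass_rate, evaluate):
    """Apply a component edit, keep it only if its prediction holds."""
    snapshot = copy.deepcopy(harness)    # components are revertible files
    harness.update(edit)
    observed = evaluate(harness)         # check against real task outcomes
    if observed >= predicted_pass_rate:  # prediction verified -> keep edit
        return harness, True
    return snapshot, False               # prediction falsified -> revert

harness = {"retry_limit": 1}
edit = {"retry_limit": 3}
# Toy evaluator: pretends more retries help, saturating at 0.75 pass@1.
evaluate = lambda h: min(0.5 + 0.1 * h["retry_limit"], 0.75)

harness, kept = apply_edit(harness, edit, predicted_pass_rate=0.7,
                           evaluate=evaluate)
```

Here the edit predicted at least 0.7 pass@1 and the (toy) evaluation returned 0.75, so the edit is kept; had the prediction overshot, the snapshot would have been restored, which is the revert-or-verify contract in miniature.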
English
53
178
1.3K
101.3K
Bloop but now BURN@Economini·
@santoshstack You've hit the nail on the head. BRAVO... if you have questions, I'm here... if you have tests you want it to run... I'll be happy to execute them. That's exactly what I'm looking for. 🙂
Italian
0
0
0
5
Santosh@santoshstack·
@Economini This sounds like a serious infra piece; execution control + tool mediation is where things get tricky fast.
English
1
0
1
12
Santosh@santoshstack·
Hey founders! Looking to connect with people building in: • SaaS • AI • Automation • Web apps • Tech products • Marketing Drop what you're working on 👇
English
78
2
55
2.3K
MakerThrive@MakerThrive·
software devs in 2030 🤣
Romanian
51
327
3.3K
599.9K
Bloop but now BURN reposted
Brivael@brivael·
Elon Musk once said something about resource allocation that stuck with me. In essence: past a certain level of wealth, money is no longer consumption, it's capital allocation. That sentence changes everything.

Economics, at bottom, is just an allocation problem. You have finite resources and infinite uses. Who decides what goes where?

Imagine a schoolyard. 100 kids, packs of Pokémon cards handed out at random. You let them be. Very quickly, an order emerges. The good players accumulate the rare cards, the collectors sort, the negotiators find deals. Nobody planned it. And yet every card ends up in the hands of whoever gets the most value out of it. The system maximizes the total happiness of the yard. That's the invisible hand.

Now bring in the teacher. She finds it unfair. Léo has 50 cards, Tom has 3. She confiscates, redistributes, imposes equality. Three immediate effects. The good players stop playing: what's the point. The bad ones no longer have any reason to improve: they'll get their share anyway. Trading collapses. The yard is equal, and dead. She maximized equality; she destroyed happiness.

The teacher's problem is that she cannot have the information the yard had collectively. That is Mises's economic calculation problem, formulated in 1920. The USSR tried to solve it for 70 years with Gosplan. Result: shortages, queues, collapse. Not because the Soviets were stupid, but because the problem is mathematically unsolvable in centralized form.

When Musk has 200 billion, he doesn't consume it, he allocates it. SpaceX, Starlink, Neuralink, xAI. Every dollar is a bet on the future. And he has a track record. PayPal, Tesla, SpaceX. He has shown he can identify immense problems and allocate resources to them with spectacular returns. The State has a track record too: hospitals collapsing, education declining, debt exploding, public services degrading despite constantly rising budgets. The market identifies good allocators; politics identifies good communicators.

Profit is not an end in itself, it's a signal. It says: you allocated scarce resources to a use people value enough to pay for. The bigger the profit, the greater the value creation. When Starlink is profitable, it means millions of people in rural areas finally have internet. When a ministry runs a deficit, it means it consumes more than it produces. One creates, the other destroys, and we call that redistribution.

In our societies there are two categories of actors: entrepreneurs and bureaucrats. The entrepreneur takes a personal risk to identify a problem, mobilize resources, and create a solution. If he's wrong, he loses. If he's right, his customers win, his employees win, his suppliers win, and the State collects taxes. He is the basic cell of human progress. The bureaucrat takes no personal risk. His salary is guaranteed. At best he maintains an existing rent. At worst he destroys it through excessive regulation, forced misallocation, and perverse incentives that discourage those who produce. But in no case does he create.

Look at the last 50 years. The iPhone, the civilian internet, SpaceX, Tesla, Google, Amazon, Stripe, mRNA vaccines, ChatGPT. All private inventions, carried by entrepreneurs, financed by venture capital. Not a single ministry has invented anything that changed your daily life.

France has become the world's laboratory of bureaucratic drift. 57% of GDP in public spending, an absolute record. A sprawling administration, a tax system that penalizes wealth creation. Result: falling behind the United States, Germany, and Switzerland. Brain drain. Deindustrialization. Exploding debt.

And the worst part is that the misallocation is self-reinforcing. The more the State takes, the less entrepreneurs create. The less they create, the smaller the tax base. So the State borrows and taxes more. A perfect negative feedback loop. The teacher thinks she's helping, and every year the yard produces less.

In our societies it is the entrepreneurs, always, who move civilization forward. Bureaucrats at best maintain a rent, at worst destroy it. No society has ever progressed by taxing its creators to subsidize its managers. The question is never who has how much. It's who best allocates the next unit of resources to maximize humanity's future. The answer hasn't changed in 200 years. It isn't the civil servants.
French
2.9K
14.4K
54.1K
69.7M
Bloop but now BURN@Economini·
@volosatovde Thanks, bro... it's a demo to understand the logic and applications... there are also tests and code, plus a private part.
English
0
0
0
7
Dmitrii Volosatov@volosatovde·
Build in public means showing what you’re building. Share your product. Drop the link below 👇
English
83
1
48
2K
Bloop but now BURN@Economini·
@W33Z_global RobyRoy AIOS is presented here as a governed execution architecture for AI-assisted systems. This public repository explains how the project approaches execution control, tool access mediation, result handling, and auditability. github.com/Podcast72/roby…
English
0
0
0
9
Weedsdom@W33Z_global·
It's that time of the day, Founders: what are you building? Let's send traffic!
English
90
2
49
2.3K
Bloop but now BURN@Economini·
@ardent__dev RobyRoy AIOS is presented here as a governed execution architecture for AI-assisted systems. This public repository explains how the project approaches execution control, tool access mediation, result handling, and auditability. github.com/Podcast72/roby…
English
0
0
0
10
Ardent_Dev@ardent__dev·
What are you building? Let's drive traffic to your product 👇🏽
English
202
1
90
7K
Eric Luevano@ericjluevano·
I’m 33. I make $40k/month. I owe it all to the world’s most boring strategy. Here’s what I do (& how you can too):
English
7
31
329
151K
Bloop but now BURN@Economini·
RobyRoy AIOS is presented here as a governed execution architecture for AI-assisted systems. This public repository explains how the project approaches execution control, tool access mediation, result handling, and auditability. github.com/Podcast72/roby…
English
0
0
0
5
Tibo@thsottiaux·
Don't just reset Codex rate limits for fun, it costs money. ... but the vibes are good ... I have reset Codex rate limits for ALL paid plans to celebrate a good week and allow everyone to build more with GPT-5.5. Enjoy
English
1.5K
768
17.2K
1.2M
Elora khatun@elora_khatun·
Most people say "build an AI agent." Very few know what that actually means. Here's the real blueprint to go from idea → working agent 👇

1. Define the job: What problem are you solving? Who's the user? What does success look like?
2. Design the brain: Clear system prompt, role, instructions, guardrails. (This is where most agents fail)
3. Pick the right model: Speed vs cost vs intelligence. Don't overpay for simple tasks.
4. Add tools: APIs, databases, MCP servers, custom functions. Agents become powerful when they can act, not just answer.
5. Give it memory: Short-term + long-term context, so it learns, adapts, and improves over time.
6. Orchestrate everything: Workflows, triggers, retries, agent-to-agent communication.
7. Build the interface: Chat, app, API, Slack bot. Make it usable, not just functional.
8. Test + improve: Evals, latency checks, real-world feedback. Iteration is the real moat.

💡 Truth: An "AI agent" isn't one prompt. It's a system. And the people who understand systems are the ones building unfair advantages right now.

📌 Save this (you'll need it when you build)
🔁 Repost for builders
➕ Follow @elora_khatun for practical AI breakdowns (no fluff) 🚀
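The prompt/tools/memory pieces of a blueprint like this fit in a few lines of Python. This is a toy sketch with the model call stubbed out; every name here is illustrative, not any particular framework's API:

```python
SYSTEM_PROMPT = "You are a unit-conversion agent. Use tools; answer concisely."

def km_to_miles(km):                 # a tool the agent can call (acts, not just answers)
    return round(km * 0.621371, 2)

TOOLS = {"km_to_miles": km_to_miles}
memory = []                          # short-term context carried across turns

def fake_model(prompt, tools):
    # Stand-in for a real LLM call: picks a tool if the prompt mentions km.
    if "km" in prompt:
        return {"tool": "km_to_miles", "arg": 10}
    return {"answer": "I don't know"}

def run_agent(user_input):
    memory.append({"role": "user", "content": user_input})
    decision = fake_model(user_input, TOOLS)
    if "tool" in decision:                         # agent decides to act
        result = TOOLS[decision["tool"]](decision["arg"])
        memory.append({"role": "tool", "content": result})
        return result
    return decision["answer"]

print(run_agent("How many miles is 10 km?"))
```

Even stubbed, the shape is the point: the "brain" is a prompt plus a decision step, tools turn decisions into actions, and memory is just state the loop reads and writes, which is why an agent is a system rather than one prompt.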
English
87
337
1.4K
94.9K
Erina | AI Tools & News@AITechEchoes·
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math.

Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
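The benchmark-incentive argument above reduces to a one-line expected-score comparison. The numbers here are illustrative, not figures from the paper:

```python
def expected_score(p_correct, abstain):
    """Binary grading: 1 point for a correct answer, 0 for a wrong
    answer AND 0 for saying "I don't know"."""
    return 0.0 if abstain else p_correct

# Under this scoring, even a 10%-confident guess beats honest abstention,
# so a model optimized for the benchmark learns to always guess.
guess = expected_score(0.10, abstain=False)
honest = expected_score(0.10, abstain=True)
assert guess > honest
```

Since abstaining scores 0 and guessing scores p_correct > 0 for any nonzero confidence, guessing strictly dominates whenever the model has any chance of being right, which is the claimed mechanism behind "the testing system rewards guessing."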
English
93
196
438
35.3K
Romain Huet@romainhuet·
Hello builders! What did you build this weekend with Codex + GPT-5.5? Drop it below, I’d love to see what you made!
English
163
10
258
29.4K
Bloop but now BURN@Economini·
@ZabihullahAtal RobyRoy AIOS is presented here as a governed execution architecture for AI-assisted systems. This public repository explains how the project approaches execution control, tool access mediation, result handling, and auditability. github.com/Podcast72/roby…
English
0
0
1
115
Atal@ZabihullahAtal·
🚨 BREAKING: The AI world is rapidly shifting from prompts to autonomous agents that can plan, decide, and execute tasks end-to-end.

I just found a GitHub directory compiling the entire AI agent ecosystem in one place. Frameworks, tools, and real use cases (constantly updated).

This repository maps the entire emerging ecosystem of AI agents:
• Autonomous agents that complete complex workflows
• Multi-agent systems coordinating hundreds of tasks
• Frameworks like AutoGPT, CrewAI, LangGraph
• Real-world use cases across business, coding, and research

It highlights a critical shift: AI is no longer just a tool you prompt. It is becoming a system that operates. Instead of giving instructions step by step, you define goals, and agents handle planning, execution, and iteration.

This creates a new dynamic.
Traditional AI use: Human → Prompt → Output
Agent-based AI: Human → Goal → System → Actions → Results

The difference is massive. One generates answers. The other replaces workflows.

The directory also shows how fast this space is evolving. New agent frameworks, architectures, and use cases are being added constantly. Which means the real skill is no longer prompting; it's designing systems of agents that can think, collaborate, and act.

This is where the next major leverage comes from. As AI agents improve, the focus shifts from doing work to orchestrating systems that do work for you. This marks a deeper transition in AI: from tools that assist humans to systems that operate alongside (or instead of) them.

Check the directory below:
English
19
48
229
13.5K
Atal@ZabihullahAtal·
🚨 BREAKING: Building and operating agents is the next million-dollar skill.

Organizations are shifting from prompting AI to building agents that run workflows and complete tasks end to end. Within the next few years, almost every company will run multiple agents, and a new role to operate them, the "Agent Operator," will emerge.

So how can someone actually build and operate agents? Here is a step-by-step roadmap:

1. Start with a problem, not the AI. Agents don't start with tools. They start with problems. If a task is repetitive, structured, and time-consuming, it can become an agent.

2. Turn the task into a system. Every agent is just a loop: Input → Process → Output → Feedback. Map this clearly, otherwise the agent won't work.

3. Define the agent like a machine. You don't "prompt" an agent. You define: Role → what it is; Goal → what success looks like; Rules → boundaries; Tools → what it can use; Output → exact format. Clarity here = reliability later.

4. Give it the ability to act and execute. Without tools, it's just text. With tools, it becomes execution: browsing, code execution, APIs, Docs/Sheets. This lets your agents stop talking and start acting.

5. Run it as a loop, not a one-shot. Agents don't work perfectly the first time. Operators design: run → check → fix → repeat. Iteration makes it reliable, the same way iterative models work in software engineering.

6. Add memory and context. This is very important, because good agents don't restart every time. They remember past outputs, preferences, and ongoing tasks. This turns them into systems that improve over time.

7. Operate, don't interfere. Your role is not to "use" the agent. It's to monitor failures, refine instructions, improve flow, and remove friction. In simple words: a better system → better output.

8. Scale what works and save your time. Once one workflow works: duplicate it, connect agents, build multi-agent systems. Now you're not only saving time, you're building execution infrastructure.

Remember: don't focus only on prompts, tools, and interfaces. Focus on turning messy work into clean, repeatable systems. This matters because every company is moving toward automated workflows, agent-driven execution, and smaller teams with higher output. So the future bottleneck is the people who can make it work.

I hope you found this inspiring. For more such valuable posts you can follow me @ZabihullahAtal
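The run → check → fix → repeat loop described above can be sketched generically; all function names and the toy workflow below are illustrative stand-ins:

```python
def iterate_agent(run, check, fix, config, max_rounds=5):
    """Re-run an agent workflow, patching its config until the check passes."""
    for _ in range(max_rounds):
        output = run(config)              # run
        ok, feedback = check(output)      # check against the success criterion
        if ok:
            return config, output         # good enough: this is what you scale
        config = fix(config, feedback)    # fix: refine instructions, improve flow
    return config, output                 # give up after max_rounds

# Toy workflow: output quality grows with the instruction detail level.
run = lambda cfg: cfg["detail"] * 2
check = lambda out: (out >= 6, "add more detail")
fix = lambda cfg, fb: {**cfg, "detail": cfg["detail"] + 1}

config, output = iterate_agent(run, check, fix, {"detail": 1})
```

Separating run, check, and fix into swappable callables is the operator's-eye view: the agent itself is just one `run` implementation, while the operator owns the success criterion and the refinement policy around it.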
Atal@ZabihullahAtal

🚨 BREAKING: A new role is quietly emerging, and it's about to dominate the next 5 years. It's not "AI engineer." It's not "prompt engineer." It's the Agent Operator. And it will sit inside almost every organization.

Most people are still thinking about AI as a tool. That framing is already outdated. What's actually happening is a shift from humans using software to humans managing autonomous agents that execute work. This is a fundamental redesign of how work gets done.

So what is an Agent Operator? An Agent Operator is the person who:
• Designs how agents interact with real workflows
• Connects tools, data, and systems into agent pipelines
• Translates business problems into executable agent behavior
• Monitors, corrects, and improves agent performance over time

They don't just "use AI." They orchestrate outcomes.

And this matters because every function (marketing, legal, finance, biotech) is becoming "agent-compatible." Not because companies want it, but because they won't have a choice. Agents can:
• Run research loops
• Execute multi-step workflows
• Integrate across tools without APIs breaking the flow
• Operate 24/7 at near-zero marginal cost

The bottleneck is no longer capability. It's implementation inside real-world systems.

Required skills for the Agent Operator role:
→ MCPs (Model Context Protocol): understanding how agents access tools, memory, and structured context.
→ CLIs (command-line interfaces): serious agent workflows won't live in GUIs; they'll run in programmable environments.
→ Writing skills (the file kind): clear specs, instructions, and structured documents. Agents run on precision, not vibes.
→ agents.md fluency: the ability to define agent roles, constraints, memory, and tool usage in persistent formats.
→ Business acumen: knowing what actually matters, and where automation creates leverage rather than noise.

What happens next: enterprises will begin to redesign workflows not around employees using dashboards, but around agents executing tasks. That means:
• SOPs → agent playbooks
• Teams → human + agent hybrids
• Tools → composable agent systems

When that shift happens, companies won't just need engineers. They'll need operators who understand both the system and the business.

The leverage is asymmetric. One strong Agent Operator can:
• Replace fragmented SaaS workflows
• Multiply team output without adding headcount
• Turn ideas into execution systems in days

This is not incremental productivity. It's operational transformation.

English
13
80
252
38.4K