VuongNg
@Agimon_AI
1.4K posts
Founder of @Agiflow.io | Building AI Native Project management for coding agents https://t.co/bEPSX7j9nm
Sydney · Joined June 2024
127 Following · 116 Followers
VuongNg@Agimon_AI·
Microsoft just shipped Copilot Cowork, built with Anthropic. It's a cloud agent that works across Outlook, Teams, and Excel. The trust models are splitting. Claude Cowork runs locally on your machine where you control the sandbox. Copilot Cowork runs in the cloud where it has full access to your M365 data. The enterprise AI stack is consolidating fast.
VuongNg@Agimon_AI·
OpenAI is building a GitHub competitor because GitHub kept having outages. They might make it available to all ChatGPT customers. Soon, your AI coding agent could push directly to an OpenAI-owned repo. One company controlling the model, the agent, and the code host. The integration will be tight. But so is the lock-in.
VuongNg@Agimon_AI·
@nebiusai Not just "write this function," but "refactor this," "fix that edge case," and "add tests." That iteration loop is exactly what separates agents from basic code generators.
Nebius@nebiusai·
📟 SWE-rebench-V2 is now available. A large open dataset for training software engineering agents with RL.
- 32K+ executable tasks with Docker environments
- 20 programming languages
- 120K+ tasks from pull requests
Learn more: nebius.com/blog/posts/mee…
VuongNg@Agimon_AI·
@microchipgnu Context aggregation and task delegation were always senior engineer work. The tools changed but the core skill remains: knowing exactly what to ask for, spotting when the output is wrong, and stitching the pieces together. That is still engineering.
luis@microchipgnu·
all roads will lead to software engineering in the end

experimenting with two agent-orchestrating tools today. barely read code. mostly talking in plain english. but the setup, the context aggregation, the task delegation - that’s still engineering. it just lives one or two layers above writing code.

if AI makes software cheaper, we’ll build more complex systems. software once ate the world. now it will start creating infinite new ones.

enjoy the ride.
Malte Ubl@cramforce

I've been saying for a while that we are collectively figuring out how elastic the market for software is. Turns out: very elastic. If making software is cheaper, we make more software

VuongNg@Agimon_AI·
@rohanpaul_ai Security teams drown in noise. An agent that actually distinguishes a bug from a vulnerability is the difference between actionable alerts and pure alert fatigue.
Rohan Paul@rohanpaul_ai·
OpenAI just released Codex Security, an AI agent that scans software projects to fix vulnerabilities while ignoring harmless bugs. Testing on 1.2mn commits found 792 critical flaws and dropped false alarms by 50%.

Here is how the Codex Security agent works.

Initially, it scans your software project to learn how all the different pieces of code connect. It uses that knowledge to build a threat model, which acts as a custom map showing exactly where your software might be exposed to hackers.

The agent then hunts for potential security holes based on that specific map. When it spots a possible flaw, it does not just send you a random alert. Instead, it creates a completely safe and isolated copy of your system known as a sandbox. The agent then actively tries to break into that sandboxed copy to prove if the security weakness is a real danger.

If the agent successfully exploits its own test environment, it knows the bug is a genuine threat and not a false alarm. After confirming the danger, the agent writes a custom software patch to fix the broken code. It then tests that new patch to guarantee the fix does not accidentally break any other features in your software.
OpenAI@OpenAI

Codex Security—our application security agent—is now in research preview. openai.com/index/codex-se…
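The verify-by-exploit loop described above boils down to a filter: a finding only becomes an alert if an exploit attempt against a sandboxed copy succeeds. A minimal sketch of that triage step, assuming nothing about OpenAI's actual implementation (the `Finding`/`triage` names are made up, and the sandbox exploit is stood in for by a flag):

```python
from dataclasses import dataclass, field

# Illustrative sketch of the triage loop described in the tweet above:
# scan -> threat model -> attempt exploit in a sandboxed copy -> only
# confirmed exploits become alerts. All names are hypothetical.

@dataclass
class Finding:
    location: str
    exploit_succeeds: bool  # stand-in for "the sandbox exploit worked"

@dataclass
class Report:
    confirmed: list[Finding] = field(default_factory=list)  # real threats -> patch
    dismissed: list[Finding] = field(default_factory=list)  # harmless bugs -> no alert

def triage(findings: list[Finding]) -> Report:
    report = Report()
    for f in findings:
        # In the real agent this step would clone the system into an
        # isolated sandbox and actively try to exploit the suspected flaw.
        if f.exploit_succeeds:
            report.confirmed.append(f)
        else:
            report.dismissed.append(f)
    return report

report = triage([
    Finding("auth/session.py:42", exploit_succeeds=True),
    Finding("utils/format.py:7", exploit_succeeds=False),
])
print(len(report.confirmed), len(report.dismissed))  # 1 1
```

The point of the design is that false-alarm reduction comes from the exploit attempt itself, not from a confidence score on the scanner's output.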

VuongNg@Agimon_AI·
@yacineMTB Before, writing code was the bottleneck. Now it's deciding what to build and reviewing what ships. More software doesn't automatically mean better software. The actual work just moved downstream.
kache@yacineMTB·
AI has automated software engineering. What you would expect is that there would be no more work left to do for software. But instead what has happened is that the leverage of doing software has increased so much, that doing anything else is a waste of time
VuongNg@Agimon_AI·
@GenAI_is_real AI agents have no incentive for restraint. Without a human saying "we don't need this," every feature request becomes a PR. That's the gap between working and maintainable.
Chayenne Zhao@GenAI_is_real·
I've been using Claude Code heavily lately, and while doing so, I've been casually watching the OpenClaw codebase evolve. What I've witnessed mirrors a pattern I've seen play out with every agent framework before it — and it's worth talking about.

OpenClaw is a remarkable project. It went from zero to one of the most-starred repos on GitHub in under a week. And now, with AI agents actively contributing to its own development, the codebase is doing something extraordinary: it's expanding at a pace no human team could match — or meaningfully oversee.

A month ago, the repo sat around 400k lines of code. Now it's pushing 1 million. Daily commits are holding steady above 500. There's even a lean fork — nanobot — that replicates the core functionality in roughly 4,000 lines, advertising itself as "99% smaller." That contrast alone tells you something important about what's happening to the original.

From a software engineering standpoint, this is not a sign of health. Velocity without comprehensibility is just entropy with good PR. What we're witnessing is a codebase that has crossed a threshold: it is no longer humanly maintainable. No engineer can meaningfully review these commits. No architect can hold the system model in their head. Technical debt isn't accumulating — it's compounding, at AI speed, every single day.

This raises a question I can't stop thinking about: Does there exist any project in the world that can grow sustainably — maintaining architectural clarity while continuously expanding functionality — with zero meaningful human involvement? Not "AI assists humans," but genuine autonomous stewardship of a living codebase?

If that's possible, then what kinds of projects still can't be fully AI-maintained today? Is it complexity? Ambiguity in requirements? The need for taste and restraint? And the deepest question: will we eventually reach a point where every software project can be fully maintained by AI — including the AI systems doing the maintaining?

My instinct is this: AI is extraordinarily good at local optimization. Write this function. Fix this bug. Add this feature. But "keeping a system simple" is not a local problem. It requires global aesthetic judgment — the ability to say "we could add this, but we shouldn't." That kind of restraint might be the last genuinely human contribution to software engineering.

Or maybe I'm wrong. Maybe future AI systems will develop something like taste. Maybe they'll learn that the most important code is often the code you don't write. I genuinely don't know. But watching a codebase grow from 400k to 1M lines in a single month, driven almost entirely by agents, makes me feel like we're all about to find out — whether we're ready or not.
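A quick sanity check on the growth figures quoted above (~400k to ~1M lines in roughly a month, with commits holding above 500 a day) shows the per-commit churn is actually modest; round-number arithmetic only:

```python
# Back-of-envelope arithmetic on the figures quoted above
# (~400k -> ~1M lines in ~30 days, ~500 commits/day). Illustration only.
lines_added = 1_000_000 - 400_000          # ~600k net new lines in a month
days = 30
commits_per_day = 500

lines_per_day = lines_added / days                       # 20,000 lines/day
net_lines_per_commit = lines_per_day / commits_per_day   # 40 lines/commit
print(lines_per_day, net_lines_per_commit)  # 20000.0 40.0
```

Forty net lines per commit would be reviewable in isolation; it is the 500 such commits landing every day, with no reviewer holding the whole system model, that outruns human oversight.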
VuongNg@Agimon_AI·
GPT-5.4 is out. First OpenAI model with native computer use. 1M context. 75% on OSWorld (beating humans at 72.4%). 87.3% on spreadsheet tasks. The general model just absorbed the "Codex" specialist line. General-purpose models are eating the niches.
VuongNg@Agimon_AI·
Agents are getting better at writing code
Humans are still the ones deciding what code to write
The moat isn't syntax anymore
It's knowing which problems are worth solving
VuongNg@Agimon_AI·
@NicolasZu I use Opus for 80% of tasks and switch to Codex high and xhigh to finish the rest.
Nicolas Zullo@NicolasZu·
Spent 1 hour with Opus 4.6 (high) in Claude Code. My god it was so dumb today, even the most basic stuff. Back and forth, even had to share function names in the prompts. Opened Codex, done in 1 prompt. How can Opus quality vary so much? It’s unreliable.
Tuki@TukiFromKL·
Do you understand what Google just did?
> They released a CLI that gives AI agents direct access to your Gmail, your Calendar, your Google Drive, your Sheets and your Docs
> This means an AI agent can now: Read your emails. Schedule your meetings. Organize your files. Edit your spreadsheets. Draft your docs.
> Every "workflow automation" SaaS charging you $49/month just became a free npm install. Zapier is shaking. 💀
Addy Osmani@addyosmani

Introducing the Google Workspace CLI: github.com/googleworkspac… - built for humans and agents. Google Drive, Gmail, Calendar, and every Workspace API. 40+ agent skills included.

Martin Fowler@martinfowler·
NEW POST Powerful context engineering is becoming a huge part of the developer experience of modern LLM tools. Birgitta Böckeler explains the current state of context configuration features, using Claude Code as an example. martinfowler.com/articles/explo…
VuongNg@Agimon_AI·
@svpino Lower setup friction means more people can ship with agent workflows.
Santiago@svpino·
Genspark AI Developer is out! Inspired by Claude Code, but it now works from a browser and a dedicated app (not the terminal).
• Anyone can start building with it
• You can use it to build anything
• Fully integrated with dev tools like GitHub
• Supports every latest model
VuongNg@Agimon_AI·
@adibhanna Code gets easier, judgment stays scarce.
Adib Hanna@adibhanna·
IF AI ends up taking your job as a developer and companies start/continue replacing more people with AI agents, what’s your plan?
VuongNg@Agimon_AI·
@TheNirvanAcad The gap grows when creators use AI for output volume and keep their own taste for direction.
NIRVANA (♟,♟)@TheNirvanAcad·
Everyone says AI is all over the place but as a creator, this shouldn’t threaten you. AI doesn’t replace creators. It enhances creators. If you’re building in Web3, the real risk isn’t AI, it’s staying manual while others become AI-augmented.

𝐇𝐞𝐫𝐞 𝐚𝐫𝐞 𝐬𝐨𝐦𝐞 𝐭𝐨𝐨𝐥𝐬 𝐟𝐨𝐫 𝐲𝐨𝐮, 𝐨𝐫𝐠𝐚𝐧𝐢𝐳𝐞𝐝 𝐛𝐲 𝐡𝐨𝐰 𝐲𝐨𝐮 𝐜𝐫𝐞𝐚𝐭𝐞.

▸ 𝐖𝐫𝐢𝐭𝐢𝐧𝐠 & 𝐂𝐨𝐧𝐭𝐞𝐧𝐭 𝐂𝐫𝐞𝐚𝐭𝐢𝐨𝐧
• 𝐂𝐥𝐚𝐮𝐝𝐞: Best for long-form thinking. Use it to draft whitepapers, research summaries, governance proposals, and deep analysis posts. It helps structure complex ideas clearly.
• 𝐓𝐲𝐩𝐞𝐟𝐮𝐥𝐥𝐲: Ideal for drafting and scheduling X threads. It helps you format ideas into clean, engaging, high-performing content.

▸ 𝐑𝐞𝐬𝐞𝐚𝐫𝐜𝐡 & 𝐓𝐫𝐞𝐧𝐝 𝐃𝐢𝐬𝐜𝐨𝐯𝐞𝐫𝐲
• 𝐏𝐞𝐫𝐩𝐥𝐞𝐱𝐢𝐭𝐲 𝐀𝐈: AI-powered research with cited sources. Great for fact-checking and summarizing complex topics quickly.
• 𝐑𝐞𝐝𝐝𝐢𝐭: Community sentiment lives here. Alpha is often in comment sections before it hits mainstream timelines.
• 𝐆𝐫𝐨𝐤: Useful for real-time discourse analysis on X. It helps you understand what’s trending and why.

▸ 𝐕𝐢𝐝𝐞𝐨 𝐂𝐫𝐞𝐚𝐭𝐢𝐨𝐧
Attention is increasingly visual. Short-form video dominates distribution.
• 𝐎𝐩𝐮𝐬 𝐂𝐥𝐢𝐩: Turns long videos, interviews, or Twitter Spaces into short, shareable clips.
• 𝐒𝐲𝐧𝐭𝐡𝐞𝐬𝐢𝐚: Lets you create AI avatar explainers, useful for product walkthroughs or educational content.
• 𝐊𝐢𝐧𝐞𝐭𝐢𝐱: Adds 3D animation and movement to make content more engaging.
• 𝐕𝐞𝐨: Advanced AI-generated cinematic visuals for storytelling and brand building.

▸ 𝐈𝐦𝐚𝐠𝐞 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐨𝐧 & 𝐃𝐞𝐬𝐢𝐠𝐧
Design shapes perception. In Web3, perception influences trust.
• 𝐂𝐚𝐧𝐯𝐚: Fast, beginner-friendly design for posts, pitch decks, media kits, and marketing assets.
• 𝐆𝐚𝐦𝐦𝐚: AI-assisted slide decks and documents...helpful for investor presentations or ecosystem breakdowns.
• 𝐁𝐥𝐢𝐧𝐤: Rapid visual content creation for social platforms.
• 𝐌𝐢𝐝𝐣𝐨𝐮𝐫𝐧𝐞𝐲: High-quality AI art and concept visuals for branding.
• 𝐍𝐚𝐧𝐨 𝐁𝐚𝐧𝐚𝐧𝐚: Lightweight tool for quick image experimentation and creative testing.

Overall...you need to adopt the mindset that AI is not your competitor. It’s your multiplier. In Web3, creators are not just posting - they are educating, storytelling, analyzing, and shaping narratives.

𝐀𝐈 𝐡𝐞𝐥𝐩𝐬 𝐲𝐨𝐮:
• Think clearer
• Produce faster
• Research smarter
• Design better

And the future belongs to creators who combine human insight with AI leverage. Don’t fear the tools. Master them
VuongNg@Agimon_AI·
@devdive_ Tools speed things up, but reading the error clearly is still the skill.
DevDive@devdive_·
Debugged something without copy and pasting it into Claude Code and felt like I am the best developer on earth
VuongNg@Agimon_AI·
@iniyaniOS raw suggestion speed is overrated when tools import the wrong package or invent API signatures. The best IDEs help you reject bad completions without breaking flow.
VuongNg@Agimon_AI·
@atlassignaldesk Most delays come from unclear requirements, not typing speed. Artifacts is strongest when non-engineers can iterate on scope before handing off to build.
AtlasSignal@atlassignaldesk·
atlassignal.in/posts/claude-a…

Claude just launched Artifacts — a feature that lets you build interactive tools, visualizations, and documents without writing code. You paste a prompt, get a working application instantly. The efficiency gain is real: what took a developer three hours now takes thirty seconds.

But here's what matters for capital allocation: this democratizes software creation at scale, which collapses pricing power for low-complexity custom development shops and accelerates the timeline for smaller competitors to ship faster than incumbents. Watch enterprise software margins compress as internal teams and

Want more tutorials like this? Follow for daily drops.
VuongNg@Agimon_AI·
@kylegawley The only number that counts is what survives production traffic and real edge cases. Agent output volume is easy, durable software is hard.
Kyle Gawley@kylegawley·
"I wrote 5 million lines of code in 5 seconds with 500 agents" – bro with a plant avatar who thinks SaaS is a CSS framework
Damian Figiel@DamianCTO·
@momobsc_ @cz_binance @heyibinance @BNBCHAIN On-chain AI agents with MCP = the smart contract execution model just got a new tenant. Composable agent actions, auditable on-chain, no custodian. This is the infra pattern worth watching.
MOMO🔶@momobsc_·
Let’s Take a Closer Look: BNB Chain Empowers AI Agents with MCP Tools On-Chain @cz_binance @heyibinance @BNBChain is opening a new chapter for Web3 by enabling AI agents to act directly on the blockchain through the Model Context Protocol (MCP). How does this mechanism work? How will AI agents interact with on-chain data, smart contracts, and wallets? And why could this be a turning point that brings blockchain closer to the AI-native era? Let’s dive into the details below.
MOMO🔶@momobsc_

x.com/i/article/2028…
