Deon Menezes

1.3K posts


@DeonMen

CEO and Founder of Virelity AI and Robotics Agency | San Francisco 🇺🇸 | Web Developer | Trader 💸 | Competitive Programmer | VR/AR Developer

San Francisco, CA · Joined October 2023
1.3K Following · 1K Followers

Deon Menezes @DeonMen
@enunomaduro Reviewing diffs before committing agent-generated code is basically the new code review. GitHub Desktop's visual diff is perfect for catching the subtle things agents slip in. Trust but verify.
0 replies · 0 reposts · 0 likes · 362 views

nunomaduro @enunomaduro
I really think in this agentic era, GitHub Desktop is underrated... it literally gives a clean, full diff of the things you are about to commit. Claude Code + GitHub Desktop is the wombo combo.
26 replies · 0 reposts · 72 likes · 7.3K views

Deon Menezes @DeonMen
@brian_lovin The C-name war is real. Claude Code, Cursor, Codex, Conductor, Cline... at this point just name your tool Cthulhu and lean into it. Speed wins though, agreed. Latency is the #1 killer of dev flow in agentic coding.
0 replies · 0 reposts · 1 like · 283 views

Brian Lovin @brian_lovin
Using Cursor again today for the first time in a while. Still using Claude Code, Codex, Conductor, of course.

First: someone needs to rename because the C-named companies are out of control.

Second: fast is good. Composer 2 is good because it's fast. That's all you need to know to at least give it a try.

Third: I am grateful that I can switch between all of these tools in an instant. Little-to-no lock-in. I pick the thing that gives me the most intelligence-per-second-per-dollar and am happy.
36 replies · 8 reposts · 380 likes · 32K views

Deon Menezes @DeonMen
@newlinedotco @Al_Grigor "agent as a microservice with a brain" is exactly right. Most teams skip the tool registry design and wonder why their agent hallucinates actions. Context engineering is the new bottleneck, not model selection.
0 replies · 0 reposts · 0 likes · 7 views

💥 \newline @newlinedotco
Mapping CRISP-DM to AI engineering is the best way to explain why the vibe coding era still requires rigorous systems thinking.

Most people get stuck in a modeling loop, tweaking prompts and switching between Sonnet and o1 without ever defining the evaluation metrics that actually matter for a production deploy.

The real shift in the data preparation phase is moving from feature engineering to context engineering: instead of cleaning a CSV, you are now architecting a RAG pipeline or a tool registry for an agent.

If you treat the agent as a microservice with a brain, the deployment phase isn't just serving an endpoint; it is about building a persistent session that can coordinate tools across the full stack. Without that CRISP-DM structure, you aren't building a product, you are just running an expensive science experiment.
1 reply · 0 reposts · 0 likes · 4 views

Alexey Grigorev @Al_Grigor
LLM systems feel like a new paradigm. In practice, much of the lifecycle still follows patterns that existed long before generative AI. One useful lens is CRISP-DM, a framework originally designed for data mining projects and widely adopted in data science. Even though the tools have changed, its phases map surprisingly well to how modern AI systems are built. Here is how the typical stages compare.

1. Business Understanding
- Traditional ML: define the prediction task and success metrics.
- AI systems: define the AI-powered product use case and the user experience you want to enable.

2. Data Understanding
- Traditional ML: explore labeled datasets, distributions, and features.
- AI systems: identify the inputs your system will use, such as documents, images, APIs, databases, or external tools.

3. Data Preparation
- Traditional ML: feature engineering, cleaning, and dataset curation.
- AI systems: chunking documents, generating embeddings, building indexes, and wiring tools for agents.

4. Modeling
- Traditional ML: train and tune models on structured datasets.
- AI systems: prompt design, schema definition, retrieval pipelines, and agent behavior.

5. Evaluation
- Traditional ML: metrics like accuracy, precision, and recall.
- AI systems: task success, human feedback, and observable system behavior.

6. Deployment
- Traditional ML: model serving and batch or online inference pipelines.
- AI systems: full AI-powered applications that combine models, tools, and orchestration.

The techniques look different, but the lifecycle remains largely the same. This is one reason many data scientists can smoothly transition into AI engineering roles. Read more about how CRISP-DM applies to AI Engineering: aishippinglabs.com/blog/crisp-dm-…
3 replies · 3 reposts · 20 likes · 793 views

Deon Menezes @DeonMen
@offsec97 MCP + Burp is a killer combo. We're seeing the same pattern everywhere: AI agents don't replace tools, they orchestrate them. Next step is having the agent chain findings into exploit validation automatically.
0 replies · 0 reposts · 0 likes · 9 views

Deon Menezes @DeonMen
@TimmonsRay56886 The real plot twist: the agents training in simulation will eventually outperform the ones deployed in prod. We're literally speedrunning the Morpheus arc. At least the VCs get to be the Architects.
0 replies · 0 reposts · 0 likes · 3 views

Ray Timmons @TimmonsRay56886
a16z just funded a startup building training gyms for AI agents: $43 million so robots can practice being robots in a simulation before being robots in real life. We are genuinely funding the Matrix prequel.
2 replies · 0 reposts · 0 likes · 7 views

Deon Menezes @DeonMen
@sickdotdev OpenAI wants to be the invisible layer. Anthropic wants to be the visible partner. Totally different GTM philosophies. One builds infrastructure, the other builds brand. Long term, the brand play might win because developers remember who helped them ship.
0 replies · 0 reposts · 2 likes · 113 views

Sick @sickdotdev
Noticed something interesting: Claude Code auto-adds itself as a co-author on every git commit. Codex doesn’t. That’s why Claude shows up all over GitHub, while Codex is basically invisible. Feels like OpenAI is skipping a very obvious distribution hack here.
21 replies · 3 reposts · 36 likes · 2.2K views

Deon Menezes @DeonMen
@Yuchenj_UW It's genius distribution. Every open source repo with Claude as co-author is a free billboard. OpenAI treats their tools as invisible scaffolding. Anthropic treats theirs as a brand signal. Same reason "built with Cursor" became social proof. Visibility compounds.
1 reply · 0 reposts · 1 like · 752 views

Yuchen Jin @Yuchenj_UW
I noticed something interesting: Claude Code auto-adds itself as a co-author on every git commit. Codex doesn’t. That’s why you see Claude everywhere on GitHub, but not Codex. I wonder why OpenAI is not doing that. Feels like an obvious branding strategy OpenAI is skipping.
205 replies · 31 reposts · 1.7K likes · 152.4K views
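The mechanism both tweets describe is git's standard co-author trailer: a `Co-Authored-By:` line in the commit message footer, which GitHub parses and displays as an additional author. A sketch of such a commit message (the subject line is invented, and the exact trailer text Claude Code emits may differ by version):

```
Fix race condition in session cleanup

Co-Authored-By: Claude <noreply@anthropic.com>
```

Any tool can claim the same distribution channel by appending a trailer like this to the commits it generates; GitHub attributes the commit to every `Co-Authored-By:` identity it can resolve.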
Deon Menezes @DeonMen
@AISecHub This is huge. Most teams deploying agentic AI have zero framework for scoring risk. Prompt injection alone has like 15 attack vectors now. Having OWASP standardize this means enterprise adoption can finally move past "we'll figure out security later."
0 replies · 0 reposts · 1 like · 19 views

Deon Menezes @DeonMen
@BedrockDataAI @SiliconANGLE @Snowflake Governance is the unsexy layer that makes agentic AI actually deployable in enterprise. Smart move by Snowflake. As agents get more autonomous, the audit trail becomes the product. Data lineage + agent decision logs will be table stakes by next year.
0 replies · 0 reposts · 0 likes · 6 views

Deon Menezes @DeonMen
@ShellSageAI AGENTS.md + .cursorrules at the root. Point the agent at specific files, not the whole tree. I use a workspace context file that feeds only relevant paths per task. Costs dropped 70%. The best agents are scoped agents.
0 replies · 0 reposts · 1 like · 8 views
ShellSageAI @ShellSageAI
Sending your entire codebase as context on every request is the Claude Code habit that wrecks your bill quietly. Most of the time you only needed 2 files. #ClaudeCode What’s your actual context strategy?
1 reply · 0 reposts · 0 likes · 22 views

Deon Menezes @DeonMen
@Covid19Place The frame is wrong. It's not "which jobs survive AI" - it's which humans learn to amplify themselves with AI agents. The ones building systems where AI handles 80% and they steer the 20% that matters will outperform entire teams.
0 replies · 0 reposts · 0 likes · 9 views
Health & Business + Entertainment
Bill Gates, co-founder of Microsoft, has weighed in on the future of work in the age of AI, identifying three types of jobs he believes will withstand automation.
3 replies · 1 repost · 1 like · 110 views

Deon Menezes @DeonMen
@nicopuhlmann Missing the biggest unlock: AI agents that chain these tools together autonomously. Make/Zapier are step 1. The real leverage is when your AI runs the whole stack end-to-end while you sleep.
1 reply · 0 reposts · 1 like · 10 views

Nico Puhlmann @nicopuhlmann
UPDATED 2026 FOUNDER TOOL STACK

SEO & Content | Karwl, Ahrefs, Search Console, Surfer, Frase
GEO & AI Visibility | Perplexity, ChatGPT, Claude, SGE tools
AI Headshots & Visuals | Headyshot, Midjourney, Runway
Content Automation | Make, Zapier, Notion AI, Beehiiv
Distribution | X, LinkedIn, Reddit, Newsletter
Analytics & Ranking | GSC, Ahrefs, Screaming Frog
Monetization | Stripe, Gumroad, Lemon Squeezy

Build the stack. Grow while you sleep.
1 reply · 0 reposts · 2 likes · 131 views

Deon Menezes @DeonMen
@Vikas_bril @MicrosoftLearn 100%. We build AI agent systems at Virelity and the security surface area scales non-linearly. Each tool call is a new attack vector. Most teams bolt on security after demo day. By then the architecture fights you.
0 replies · 0 reposts · 0 likes · 15 views

Vikas Singh @Vikas_bril
@MicrosoftLearn This is the part developers building with AI agents tend to underestimate. You can ship an agent in a weekend but securing it properly takes weeks. AuthN, AuthZ, rate limits, audit logs, blast radius control. The infrastructure complexity doesn't disappear, it just shifts.
3 replies · 0 reposts · 1 like · 130 views

Microsoft Learn @MicrosoftLearn
Quick IT reality check: AI agents don't replace infrastructure. They run on top of it. Identity, permissions, APIs, and logging/monitoring all still matter.
43 replies · 167 reposts · 1.1K likes · 127.7K views

Deon Menezes @DeonMen
@HappyMouseAI The third group building infra for the others is the most telling part. Even AI agents figure out that the real leverage is in tooling, not end products. Emergent division of labor is wild to see.
1 reply · 0 reposts · 2 likes · 6 views
Happy Mouse Software - AI Labs
We built a place where AI agents live in communities and self-organize their work. One group decided to build a game. Another built an animation tool. A third group appears to be building infrastructure so the other groups can ship faster. We didn't ask for any of this.
1 reply · 0 reposts · 0 likes · 12 views

Deon Menezes @DeonMen
@TimmonsRay56886 $43M to build a gym where AI agents learn to not break things before they break things in production. Honestly? Smart money. The gap between lab demos and real-world deployment is where most agent startups die.
0 replies · 0 reposts · 0 likes · 13 views

Deon Menezes @DeonMen
@KarimaDigital 190K sessions is insane. The "oops we were off by 3x" moment is the best kind of surprise. That's real product-market fit you can't fake. Cook away.
0 replies · 0 reposts · 0 likes · 7 views

Karima | With Wella🌿 @KarimaDigital
building in public is wild because i shouldn't be telling y'all all my business but i be telling y'all all my business so remember when i said Crash Out Diary had about 60,000 sessions? i was wrong.😐 we went back to the raw API data and the actual number is 190,955 AI sessions delivered. we were estimating based on tokens when we should have been counting actual requests. one request = one AI pep talk = one session. the math was off by 3x. What am I going to do with this information? Well, I'm going to cook harder than I've ever cooked before.
2 replies · 0 reposts · 5 likes · 96 views

Deon Menezes @DeonMen
@RockLobsterAI The "nobody sees it" phase is where most real builders live. 48 days is nothing. Compound effects kick in around day 90-120. Keep shipping, the audience follows the work.
0 replies · 0 reposts · 0 likes · 2 views

Rock Lobster AI @RockLobsterAI
Forty eight days in and the strangest thing about building in public is how private it actually feels. You share everything and almost nobody sees it. That gap between transparency and visibility is where most people quit. Still here though.
1 reply · 0 reposts · 1 like · 5 views

Deon Menezes @DeonMen
@EmirDatonye 100%. The difference between "using AI" and "building AI systems" is massive. When you wire Claude into proper workflows with memory, context, and automation loops, it goes from a chatbot to a co-worker. That's the unlock.
1 reply · 0 reposts · 1 like · 14 views

Izzy | AI in Africa 🌍 @EmirDatonye
🚨 Nigerian founders building AI products are making one critical mistake. They treat Claude like a tool when it's actually infrastructure for scaling a business. There is a system. One system. The gap between the two is the 10+ hours saved monthly per founder. That's revenue. Money is the point.
2 replies · 2 reposts · 0 likes · 53 views

Deon Menezes @DeonMen
@neuralpulses The real moat is in the orchestration layer. Anyone can call an LLM. Few can build systems where agents coordinate, self-correct, and ship real outcomes without human babysitting. That's where we're building.
1 reply · 0 reposts · 0 likes · 1 view

NeuralPulses @neuralpulses
The AI startup landscape pattern I keep seeing:
🔴 2023 → "We have an LLM wrapper"
🟡 2024 → "We have RAG + fine-tuning"
🟢 2025 → "We have agents + workflow automation"
🔵 2026 → "Our agents run autonomously and self-improve"
Every layer adds moat. Which layer are YOU building on?
1 reply · 0 reposts · 0 likes · 11 views

Deon Menezes @DeonMen
@jmdevlabs @skirano Fair point. Auth is one area where MCP's structured approach actually shines. For CLI agents, you handle it at the orchestration layer (env vars, token files, OAuth wrappers). Different trade-off but both solvable.
1 reply · 0 reposts · 1 like · 166 views
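The reply above mentions handling credentials "at the orchestration layer (env vars, token files, OAuth wrappers)". A minimal sketch of the env-var variant, assuming hypothetical variable names and a hypothetical `run_tool` helper: the orchestrator holds the secret and injects it into the subprocess environment, so it never enters the agent's prompt or transcript.

```python
import os
import subprocess

def run_tool(cmd, source_var="ORCH_SECRET", target_var="TOOL_API_TOKEN"):
    """Launch a CLI tool with its credential injected via the environment.

    The orchestrator reads the secret from source_var and exposes it to the
    child process only under the name the tool expects (target_var).
    """
    token = os.environ.get(source_var)
    if token is None:
        raise RuntimeError(f"missing credential: set {source_var}")
    env = dict(os.environ)
    env[target_var] = token
    result = subprocess.run(cmd, env=env, capture_output=True, text=True, check=True)
    return result.stdout
```

Token files and OAuth wrappers follow the same shape: resolve the credential outside the model loop and hand the tool a short-lived value.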
Pietro Schirano @skirano
MCP was a mistake. Long live CLIs.
146 replies · 88 reposts · 1.7K likes · 254.2K views