Mian Shahzad Raza

483 posts


@MSR_Builds

Web Developer, Digital Creator & Educator.

Lahore, Pakistan · Joined March 2013
100 Following · 226 Followers
Mian Shahzad Raza@MSR_Builds·
This is the part people miss about agent skills. It is not just about generating code faster. It is about encoding best practices into reusable patterns so you get production-quality output by default. The skill abstraction is quietly becoming the most important layer in AI-assisted development.
0 replies · 0 reposts · 1 like · 62 views
Mian Shahzad Raza@MSR_Builds·
This is what happens when competition shifts from "who ships first" to "who ships fastest." Every major lab is on a release cadence measured in weeks, not quarters. For builders, the takeaway is clear: architect your systems to be model-agnostic from day one. The moat is in the workflow, not the model.
0 replies · 0 reposts · 0 likes · 101 views
Mian Shahzad Raza@MSR_Builds·
The adversarial review is the standout feature. Most code review tools check for syntax and patterns. Having a second model challenge your architectural decisions before you ship is a fundamentally different feedback loop. The "Claude writes, Codex reviews" pattern is basically pair programming where your partner never gets tired of questioning your assumptions.
0 replies · 0 reposts · 0 likes · 269 views
Kai@hqmank·
If you're using Claude Code, this is worth knowing. Instead of worrying about whether Opus 4.6 or GPT 5.4 is better, it's more useful to combine them in the same workflow. OpenAI shipped an official Claude Code plugin called codex-plugin-cc. You can now call Codex directly from inside Claude Code. Three commands:

/codex:review - Code review on uncommitted changes or diffs against a branch. Read-only.
/codex:adversarial-review - Challenges your design decisions, not just syntax. "Why this caching strategy?" "Race condition here?" Append free-form text to steer the review.
/codex:rescue - Hands the task to Codex when Claude gets stuck. Supports --resume to continue from the last run.

Adversarial review is the killer feature. Especially before shipping auth changes, infra scripts, or anything involving data loss. There's also a review gate: Codex auto-reviews every time Claude finishes and blocks completion if issues are found. Claude writes, Codex reviews. github.com/openai/codex-p…
Vaibhav (VB) Srivastav@reach_vb

x.com/i/article/2038…

25 replies · 29 reposts · 490 likes · 115.9K views
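The three commands above can be sketched as a session transcript (the branch name and the free-form steering text are illustrative assumptions, not taken from the plugin docs):

```
# inside a Claude Code session with codex-plugin-cc installed
/codex:review main
    → read-only review of the diff against main

/codex:adversarial-review focus on the cache invalidation path
    → challenges design decisions; the trailing text steers the review

/codex:rescue --resume
    → hands the task to Codex, continuing from the last run
```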
Mian Shahzad Raza@MSR_Builds·
The design-system.md step is doing the heavy lifting here. Without it, the motion graphics would just be generic shapes. That file gives Claude the visual vocabulary of your actual product, so every frame looks like it belongs. Smart workflow for anyone producing content around their own tools.
0 replies · 0 reposts · 0 likes · 107 views
Jason Zook@jasondoesstuff·
Step 1: Tell Claude Code to create a "design-system.md" from your current app/project
Step 2: Tell it to install Remotion
Step 3: Tell it to make motion graphics of interactions in your app using the design system it created (I did 16:9 for YouTube b-roll). 😎💥💪
You can also tell it to mock up popular apps (like Slack in the example below).
35 replies · 120 reposts · 2.1K likes · 173.2K views
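A Remotion composition is a React component evaluated once per frame, so step 3 boils down to mapping the current frame onto animated values drawn from your design tokens. A minimal sketch of that mapping in plain TypeScript (token values, timing, and the sliding-card example are hypothetical; Remotion's real `interpolate()` helper has more options):

```typescript
// Hypothetical tokens, as a generated design-system.md might define them.
const tokens = { accent: "#6c5ce7", radiusPx: 12, slideFrames: 30 };

// Clamped linear interpolation, the core idea behind Remotion's interpolate() helper.
function interpolate(
  frame: number,
  input: [number, number],
  output: [number, number],
): number {
  const t = Math.min(1, Math.max(0, (frame - input[0]) / (input[1] - input[0])));
  return output[0] + t * (output[1] - output[0]);
}

// Slide a card in from 200px off-screen over the first slideFrames frames.
function cardX(frame: number): number {
  return interpolate(frame, [0, tokens.slideFrames], [-200, 0]);
}
```

In a real composition, `cardX(useCurrentFrame())` would feed a `transform: translateX(...)` style on a component colored with `tokens.accent`.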
Mian Shahzad Raza@MSR_Builds·
This is the right abstraction layer for AI-assisted design. Instead of prompting for individual components every time, you bake the constraints into a skill and let the model work within a system. The real unlock here is that the design output stays consistent across sessions without you babysitting every detail.
0 replies · 0 reposts · 0 likes · 116 views
Dom@dominikmartn·
made a nothing design skill for claude code. tell it "nothing style" and it builds the whole thing. tokens, components, dark+light. go grab it, it’s open source: github.com/dominikmartn/n…
70 replies · 125 reposts · 3.4K likes · 204.7K views
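A design skill like this typically encodes its constraints as a token map that components resolve against, which is what "tokens, components, dark+light" implies. A hypothetical sketch of a dark+light token pair (names and values are illustrative, not taken from the linked repo):

```typescript
type Mode = "light" | "dark";
type Theme = { bg: string; fg: string; accent: string };

// Hypothetical monochrome-plus-red palette; a real skill would ship its own values.
const themes: Record<Mode, Theme> = {
  light: { bg: "#ffffff", fg: "#111111", accent: "#d71921" },
  dark: { bg: "#111111", fg: "#ffffff", accent: "#d71921" },
};

// Components resolve tokens by name instead of hardcoding colors,
// which is what keeps output consistent across sessions.
function token(mode: Mode, key: keyof Theme): string {
  return themes[mode][key];
}
```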
Mian Shahzad Raza@MSR_Builds·
This reframes code review from a static audit into an interactive debugging session. The ability to select a diff and ask why it exists changes the reviewer's mental model entirely. You stop reading code line by line and start reasoning about intent. GitHub's PR interface was never designed for that kind of workflow.
0 replies · 0 reposts · 1 like · 7 views
fredrika@fredrikalindh·
reviewing in cursor is now a much better experience than github
- select diff and ask cursor why it's there (or to fix)
- view videos/images of result
- test straight from browser
we also added mark as viewed, link to preview and many more improvements coming
22 replies · 26 reposts · 463 likes · 48.3K views
Mian Shahzad Raza@MSR_Builds·
The Commander's Intent framing is useful but slightly off. CI gives the end state and trusts execution. Plan mode does the opposite: it previews execution and asks for approval. What it actually maps to is a design review before deployment. You are not commanding the agent. You are reviewing its interpretation of your intent before it ships.
1 reply · 0 reposts · 1 like · 283 views
Paweł Huryn@PawelHuryn·
Plan mode isn't a planning tool. It's a feedback guarantee. You share your intent. The agent shows what it understood. You confirm before it executes. No ambiguity about whether it's listening or already coding. The military has a name for this: Commander's Intent. Give the end state and reasoning upfront, then let the team run autonomously. The alignment step is what makes the autonomy work. OpenClaw doesn't have plan mode yet. Curious how long that lasts.
Peter Steinberger 🦞@steipete

I never use plan mode. The main reason this was added to codex is for claude-pilled people who struggle with changing their habits. just talk with your agent.

17 replies · 6 reposts · 104 likes · 15.3K views
Mian Shahzad Raza reposted
Cursor@cursor_ai·
We’re introducing Cursor 3. It is simpler, more powerful, and built for a world where all code is written by agents, while keeping the depth of a development environment.
276 replies · 400 reposts · 4.6K likes · 607.7K views
Mian Shahzad Raza@MSR_Builds·
The best time to start this transition was 2 years ago. The second best time is today. Follow @MSR_Builds — I post the real journey: wins, failures, and everything I’m learning. Are you still on pure WordPress or already making the switch? 👇
0 replies · 0 reposts · 0 likes · 9 views
Mian Shahzad Raza@MSR_Builds·
Step 4: Monetize the transition.
→ WordPress dev rates: $25–50/hr average
→ React + AI dev rates: $75–150/hr average
→ The skill gap = a 3x income opportunity
One headless WP + React project in your portfolio changes everything.
1 reply · 0 reposts · 0 likes · 10 views
Mian Shahzad Raza@MSR_Builds·
In 2024 I was building WordPress sites with Elementor. In 2026 I'm shipping AI-powered React apps with MCP integrations. Here's the exact roadmap I followed — steal it. 🧵👇
1 reply · 0 reposts · 1 like · 16 views
Mian Shahzad Raza@MSR_Builds·
@asaio87 The $200 plan isn't a product margin. It's a customer acquisition cost. In platform economics, you subsidize adoption now and monetize lock-in later. The companies that survive this phase will be the ones closing the gap through inference optimization before the capital runs out.
1 reply · 0 reposts · 0 likes · 184 views
andrei saioc@asaio87·
AI companies are leaking cash heavily. Any $200 Claude code plan consumes around $5,000 worth of resources per month.
59 replies · 46 reposts · 480 likes · 17.5K views
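Taken at face value, the tweet's numbers imply the subsidy ratio below (a back-of-envelope sketch; the $5,000 figure is the tweet's claim, not a verified cost):

```typescript
const monthlyPrice = 200; // plan price claimed in the tweet, USD
const monthlyCost = 5000; // compute cost claimed in the tweet, USD

// Each dollar of subscription revenue carries this many dollars of inference cost.
const subsidyRatio = monthlyCost / monthlyPrice;
console.log(subsidyRatio); // 25
```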
Mian Shahzad Raza@MSR_Builds·
The real shift nobody talks about: vibe coding removed the building moat, but it did not remove the distribution moat. Most people skip straight to prompting without validating demand first. The framework is simple. Talk to 10 people before writing a single prompt. If you cannot sell the idea in a conversation, no amount of generated code will save you.
0 replies · 0 reposts · 0 likes · 58 views
Aryan
Aryan@aryanlabde·
Vibe coding a SaaS is the easy part now. The hard part is finding 10 people who will actually pay for it.
79 replies · 5 reposts · 137 likes · 6.3K views
Mian Shahzad Raza@MSR_Builds·
@bridgemindai This is the real feedback loop for AI tooling companies right now. The product is only as good as its uptime and rate limits, not just model quality. Developers will switch tools overnight if the workflow breaks. Reliability is the moat, not benchmarks.
0 replies · 0 reposts · 1 like · 29 views
BridgeMind@bridgemindai·
Cancelling worked. Anthropic just acknowledged the Claude Code rate limit issue. GitHub issue #41788. Max plan users hitting 100% in 70 minutes after v2.1.89. Thousands cancelled. Thousands switched to Codex with GPT 5.4. Now they're fixing it. Your wallet is the only feedback AI companies listen to. Keeping my eye on Claude Opus 4.6 usage today. If limits are back to normal, I resubscribe. Stay tuned.
153 replies · 75 reposts · 1.3K likes · 98.4K views
Mian Shahzad Raza@MSR_Builds·
The skill that matters now is not writing code from scratch. It is reading code, understanding architecture, and knowing when AI output is wrong. Developers who treat AI as a pair programmer instead of a replacement will be the ones debugging the systems everyone else blindly shipped.
3 replies · 0 reposts · 3 likes · 282 views
Bindu Reddy@bindureddy·
ALARMING! Programmers are totally forgetting how to code. We are beginning to interview folks who have totally lost touch with coding. Who is going to fix it if the AI breaks everything? 😱
344 replies · 39 reposts · 693 likes · 42.5K views
Mian Shahzad Raza@MSR_Builds·
Partially agree. Clear prompts matter, but the developers getting the best output from AI are the ones who already understand what good code looks like. You cannot direct what you do not understand. The real framework is: learn the craft first, then use AI to multiply your speed. Skipping step one produces fast garbage.
0 replies · 0 reposts · 0 likes · 8 views
Devansh@thenowhereway·
Stop trying to code everything yourself. Start learning how to direct AI properly.
Bad prompt = bad product
Clear thinking = good output
AI rewards clarity, not effort.
46 replies · 0 reposts · 63 likes · 1.4K views
Dragan Maricic@dramaricic·
I follow builders who actually ship. If you're:
– building a SaaS
– working on a side project
– trying to get first users
Drop your project below. Let's connect 🤝
248 replies · 3 reposts · 191 likes · 8.3K views