Devrazor

10 posts

@devrazorhq

Institutional realism on AI systems. Governance · Measurement · Risk · Enterprise adoption.

Toronto, Ontario · Joined March 2026
5 Following · 0 Followers
Devrazor @devrazorhq:
AI is great at increasing output. Organizations are great at measuring output. That doesn’t mean anything meaningful has improved. We’re optimizing for activity and calling it productivity. New essay: devrazor.com/essays/ai-prod…
1 reply · 0 reposts · 1 like · 16 views
Devrazor @devrazorhq:
@rohanpaul_ai @ylecun Interesting direction. In practice, most real systems already evolve this way: a collection of specialized components working together rather than one system trying to do everything. The architecture often becomes an ecosystem of capabilities, not a single intelligence.
0 replies · 0 reposts · 0 likes · 12 views
Rohan Paul @rohanpaul_ai:
Yann LeCun (@ylecun) explains why LLMs are so limited in terms of real-world intelligence. He says the biggest LLM is trained on about 30 trillion words, which is roughly 10^14 bytes of text. That sounds huge, but a 4-year-old who has been awake about 16,000 hours has also taken in about 10^14 bytes through the eyes alone. So a small child has already seen as much raw data as the largest LLM has read.

But the child’s data is visual, continuous, noisy, and tied to actions: gravity, objects falling, hands grabbing, people moving, cause and effect. From this, the child builds an internal “world model” and intuitive physics, and can learn new tasks like loading a dishwasher from a handful of demonstrations.

LLMs only see disconnected text and are trained just to predict the next token. So they get very good at symbol patterns, exams, and code, but they lack grounded physical understanding, real common sense, and efficient learning from a few messy real-world experiences.

From the 'Pioneer Works' YT channel (link in comment)
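LeCun's comparison can be checked with back-of-envelope arithmetic. The byte-per-word and visual-bandwidth figures below are my own rough assumptions (not stated in the tweet), chosen only to show that both quantities land near 10^14:

```python
import math

# Assumptions (mine, not from the tweet): ~4 bytes per English word,
# and visual input to the brain on the order of ~2 MB/s.
WORDS_TRAINED = 30e12          # ~30 trillion words in the largest LLM corpora
BYTES_PER_WORD = 4

llm_bytes = WORDS_TRAINED * BYTES_PER_WORD               # ~1.2e14 bytes

HOURS_AWAKE = 16_000           # a 4-year-old's waking hours
VISUAL_BYTES_PER_SEC = 2e6     # ~2 MB/s through the eyes (order of magnitude)

child_bytes = HOURS_AWAKE * 3600 * VISUAL_BYTES_PER_SEC  # ~1.15e14 bytes

print(f"LLM:   {llm_bytes:.2e} bytes (10^{math.log10(llm_bytes):.1f})")
print(f"Child: {child_bytes:.2e} bytes (10^{math.log10(child_bytes):.1f})")

# Both land at the same order of magnitude, ~10^14 bytes.
assert round(math.log10(llm_bytes)) == round(math.log10(child_bytes)) == 14
```

Under these assumptions the two totals agree to within about 5%, which is the whole point of the argument: the difference is not the volume of data but its grounding.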
176 replies · 362 reposts · 2.2K likes · 635.5K views
Devrazor @devrazorhq:
@BrianRoemmele @ylecun Interesting gap. Theoretical capability is one thing; institutional adoption is another. In practice, most organizations move much slower than the technology itself.
0 replies · 0 reposts · 0 likes · 146 views
Brian Roemmele @BrianRoemmele:
Anthropic's Revealing Chart on AI's Impact on Jobs

Anthropic has unveiled a pivotal chart that underscores the chasm between AI's capabilities and its real-world application in the workforce. Derived from analyzing 2 million actual conversations with Claude, this radar chart, titled "Theoretical Capability and Observed Usage by Occupational Category," paints a stark picture of untapped automation potential across various job sectors.

At its core, the chart is a spider-web diagram plotting occupational categories around a circular axis, with values ranging from 0 to 1.0 representing the share of job tasks. The expansive blue area illustrates theoretical coverage: tasks that large language models (LLMs) like Claude could perform right now based on their inherent abilities. In contrast, the much smaller red area shows observed usage, drawn from real user interactions. The visual disparity is immediate and profound: blue spikes outward significantly in fields like computer and math (reaching about 0.75), business and finance, and office administration, while red hugs close to the center, often below 0.2 across most categories.

This gap isn't just academic; it's a "career runway," as highlighted in discussions around the chart. For programmers, 75% of tasks are theoretically automatable, yet actual usage lags far behind. Similar vulnerabilities appear in customer service, data entry, and financial analysis, roles traditionally seen as white-collar strongholds. Meanwhile, hands-on fields like construction, agriculture, and protective services show lower theoretical exposure, with blue areas dipping to around 0.1-0.3, suggesting AI's current limitations in physical or unpredictable environments.

Broader data amplifies the chart's message. As of early 2026, 49% of U.S. jobs expose at least 25% of tasks to AI, up from 36% a year prior. Yet mass layoffs haven't materialized; unemployment in AI-vulnerable roles remains steady. Instead, subtler shifts are underway: a 14% drop in hiring for 22-25-year-olds in exposed positions indicates companies are prioritizing experienced workers, shortening entry-level pathways for recent graduates.

The implications are clear: while AI's red footprint grows incrementally each month, the blue expanse signals accelerating change. College-educated, higher-earning professionals, once insulated, are now most at risk, flipping the script on traditional labor disruptions. Anthropic's chart isn't a doomsday prophecy but a wake-up call, urging workers and businesses to bridge the gap through adaptation, upskilling, and ethical integration of AI tools.

Please read the 5000 Days Series at ReadMultiplex.com for answers on how you can thrive in the Interregnum.
[attached image]
108 replies · 325 reposts · 1.4K likes · 222.3K views
Devrazor @devrazorhq:
@_vmlops Interesting trend. As these systems evolve, the real value often shifts from the model itself to the patterns people develop around it: loops, task structure, and coordination. That’s where practical knowledge compounds.
0 replies · 0 reposts · 0 likes · 87 views
Vaishnavi @_vmlops:
This free GitHub repo teaches Claude Code better than Anthropic’s own docs.

Learn Claude Code walks through 12 sessions covering:
• Agent loops
• Planning agents
• Persistent tasks
• Multi-agent teams

Link - github.com/shareAI-lab/le…

Anthropic explains what it is. This repo shows how to build with it.
[attached image]
10 replies · 54 reposts · 327 likes · 29.9K views
Devrazor @devrazorhq:
@NainsiDwiv50980 True. In practice, syntax is the easy part :) The harder skill is learning how to break problems into clear steps. Once that mental model exists, switching languages becomes mostly a mechanical exercise.
0 replies · 0 reposts · 0 likes · 55 views
Nainsi Dwivedi @NainsiDwiv50980:
Stop. Most people try to learn Python by memorizing syntax. That’s the wrong approach. The real skill is thinking like a computer scientist.

Think Python (3rd Edition) is one of the best books to build that mindset. Clear explanations. Practical exercises. A beginner → advanced thinking shift.

I’m giving away a few copies to people here. How to get one:
• Follow me (so I can DM)
• Repost + Like
• Comment PYTHON

I'll DM you.
[attached image]
107 replies · 70 reposts · 176 likes · 11.3K views
Devrazor @devrazorhq:
@akshay_pachaar A useful distinction. In larger systems this separation matters even more: protocols create the stable infrastructure, while skills evolve quickly on top. Mixing the two usually leads to brittle architectures.
0 replies · 0 reposts · 0 likes · 12 views
Akshay 🚀 @akshay_pachaar:
MCP vs. Skills for AI agents, clearly explained!

People treat MCP and Skills like they're the same thing. They're not. Conflating them is one of the most common mistakes I see when people start building AI agents seriously. So let's break both down from scratch.

Before MCP existed, connecting an AI model to an external tool meant writing custom integration code every single time. 10 models, 100 tools? That's 1,000 unique connectors to build and maintain. The AI tooling ecosystem was a tangled mess of one-off glue code.

MCP (Model Context Protocol) fixes this with a shared communication standard. Every tool becomes a "server" that exposes what it can do. Every AI agent becomes a "client" that knows how to ask. They talk through structured JSON messages over a clean, well-defined interface. Build a GitHub MCP server once, and it works with Claude, ChatGPT, Cursor, or any other agent that speaks MCP. That's the core value: write the integration once, use it everywhere.

But here's where most explanations stop short. MCP solves the *connection* problem. It does not solve the *usage* problem. You can hand an agent 50 perfectly wired MCP tools and it'll still underperform if it doesn't know when to call which tool, in what order, and with what context. That's the gap Skills fill.

A Skill is a portable bundle of procedural knowledge. Think of a SKILL.md file that tells an agent not just "here are your tools" but "here's how to use them for this specific task." A writing skill bundles tone guidelines and output templates. A code review skill bundles patterns to check and rules to follow. MCP gives the agent hands. Skills give it muscle memory.

Together, they form the full capability stack for a production AI agent:
- MCP handles tool connectivity (the wiring layer)
- Skills handle task execution (the knowledge layer)
- The agent orchestrates both using its context and reasoning

This is why advanced agent setups increasingly ship both: MCP servers for integrations and SKILL.md files for domain expertise. If you're building with agents, I have shared a repository of 85k+ skills that you can use with any agent, link in the next tweet!
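The connector arithmetic above, and the "structured JSON messages" idea, can be sketched in a few lines. The request shape below is illustrative of an MCP-style tool call, not the exact wire format, and the `github`/`create_issue` names are hypothetical:

```python
import json

# Why a shared protocol collapses the integration matrix (numbers from the tweet):
models, tools = 10, 100
bespoke_connectors = models * tools     # one custom connector per (model, tool) pair
protocol_adapters = models + tools      # one adapter per side with a shared protocol
assert bespoke_connectors == 1000
assert protocol_adapters == 110

# With a protocol, every agent speaks one structured message shape to every tool.
# Hypothetical request an MCP-style client might send to a "github" server:
request = {
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {"repo": "octo/demo", "title": "Bug: login loop"},
    },
}
print(json.dumps(request, indent=2))
```

The point of the sketch: the integration cost drops from O(models × tools) to O(models + tools), and Skills then layer procedural guidance on top of that wiring rather than replacing it.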
[attached GIF]
100 replies · 274 reposts · 1.4K likes · 127.7K views
Devrazor @devrazorhq:
@BharukaShraddha Good observation. The real shift is treating the model less like a tool and more like a new participant in the system — which means the repository needs the same clarity we expect when onboarding a new engineer.
0 replies · 0 reposts · 1 like · 27 views
Shraddha Bharuka @BharukaShraddha:
Most people treat CLAUDE.md like a prompt file. That’s the mistake.

If you want Claude Code to feel like a senior engineer living inside your repo, your project needs structure. Claude needs 4 things at all times:
• the why → what the system does
• the map → where things live
• the rules → what’s allowed / not allowed
• the workflows → how work gets done

I call this: The Anatomy of a Claude Code Project 👇

1️⃣ CLAUDE.md = Repo Memory (keep it short)
This is the north star file. Not a knowledge dump. Just:
• Purpose (WHY)
• Repo map (WHAT)
• Rules + commands (HOW)
If it gets too long, the model starts missing important context.

2️⃣ .claude/skills/ = Reusable Expert Modes
Stop rewriting instructions. Turn common workflows into skills:
• code review checklist
• refactor playbook
• release procedure
• debugging flow
Result: consistency across sessions and teammates.

3️⃣ .claude/hooks/ = Guardrails
Models forget. Hooks don’t. Use them for things that must be deterministic:
• run formatter after edits
• run tests on core changes
• block unsafe directories (auth, billing, migrations)

4️⃣ docs/ = Progressive Context
Don’t bloat prompts. Claude just needs to know where truth lives:
• architecture overview
• ADRs (engineering decisions)
• operational runbooks

5️⃣ Local CLAUDE.md for risky modules
Put small files near sharp edges:
src/auth/CLAUDE.md
src/persistence/CLAUDE.md
infra/CLAUDE.md
Now Claude sees the gotchas exactly when it works there.

Prompting is temporary. Structure is permanent. When your repo is organized this way, Claude stops behaving like a chatbot… and starts acting like a project-native engineer.
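The layout described in the thread can be scaffolded in a few lines. The directory names follow the tweet; the file contents below are illustrative placeholders, not recommended text:

```python
from pathlib import Path

# Directories from the thread: skills, hooks, progressive docs, a risky module.
for d in (".claude/skills", ".claude/hooks", "docs", "src/auth"):
    Path(d).mkdir(parents=True, exist_ok=True)

# Root CLAUDE.md: short repo memory with the WHY / WHAT / HOW sections.
Path("CLAUDE.md").write_text(
    "# Purpose (WHY)\n"
    "Billing service: turns usage events into invoices.\n\n"
    "# Repo map (WHAT)\n"
    "- src/   application code (src/auth is security-sensitive)\n"
    "- docs/  architecture overview, ADRs, runbooks\n\n"
    "# Rules + commands (HOW)\n"
    "- Run `make test` before proposing changes.\n"
    "- Never edit migrations/ directly.\n"
)

# Module-local memory next to a sharp edge (point 5 in the thread).
Path("src/auth/CLAUDE.md").write_text("Auth module: never log tokens.\n")

print(sorted(p.name for p in Path(".claude").iterdir()))
```

The design choice worth noting is the split itself: the root file stays short (repo memory), while module-local CLAUDE.md files carry context only where it applies, so neither bloats the other.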
[attached image]
159 replies · 987 reposts · 6.7K likes · 1M views
Devrazor @devrazorhq:
@alexxubyte Good diagram. But in large organizations the real challenge isn’t choosing between CPU, GPU, or TPU; it’s aligning the workload, data pipeline, and economics of the system around them. The architecture decision usually starts much earlier than the chip.
0 replies · 0 reposts · 2 likes · 249 views
Alex Xu @alexxubyte:
CPU vs GPU vs TPU
[attached image]
20 replies · 464 reposts · 2.4K likes · 110.7K views
Devrazor @devrazorhq:
Many enterprise AI pilots start with excitement… and then quietly disappear. Most of the time the problem isn’t the model. It’s the institution around it. devrazor.com/essays/enterpr…
0 replies · 0 reposts · 0 likes · 6 views