Pinned Tweet
Fadzli Nasir
13.1K posts

Fadzli Nasir
@wolfipali
build and exploit. Perpetual learners. https://t.co/291q1k2ZXZ
Nomad · Joined February 2020
965 Following · 8.8K Followers
Fadzli Nasir reposted

This is WILD.
A secret workplace war just broke out in China and it has gone fully viral on GitHub.
Companies started ordering their workers to document all their knowledge as AI "skill files."
Why? To replace those same workers with AI. But workers figured out the plan fast, so they fired back.
Someone built a tool called colleague.skill, software that scrapes a coworker's chat logs, emails, and work docs from Chinese platforms like Feishu and DingTalk, then clones them into an AI agent.
The idea was savage: digitize your colleague before they digitize you, hand the AI clone to the company, and watch your coworker get laid off while you survive.
It's a real GitHub project that exploded in popularity within days. But then someone else entered the chat and changed everything.
A developer released anti-distill.skill, a tool that takes the skill file your company forces you to write, then strips out every piece of real knowledge before you hand it in.
The output looks perfectly professional, totally complete, impressively detailed, but every critical insight has been secretly removed.
Your company gets a hollow shell while you keep the real knowledge locked away in a private backup.
The tool even has three intensity levels (light, medium, and heavy) depending on how closely your bosses are watching.
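The thread doesn't show the tool's internals, but the behavior it describes (keep the professional-looking surface, strip the concrete detail, with tiered aggressiveness) can be sketched in a few lines. The marker patterns and function names below are illustrative guesses, not code from the actual repo:

```python
# Hypothetical sketch of what a tool like anti-distill.skill might do:
# redact concrete knowledge (numbers, commands, proper-noun phrases) from
# a skill file while leaving the prose scaffolding intact. The patterns
# and tier contents here are invented for illustration.
import re

# Each intensity level strips progressively more kinds of detail.
PATTERNS = {
    "light":  [r"\b\d+(\.\d+)?%?\b"],                     # numbers and percentages
    "medium": [r"\b\d+(\.\d+)?%?\b", r"`[^`]+`"],         # ...plus inline code/commands
    "heavy":  [r"\b\d+(\.\d+)?%?\b", r"`[^`]+`",
               r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)+\b"],      # ...plus proper-noun phrases
}

def hollow_out(text: str, intensity: str = "medium") -> str:
    """Return a version of `text` with critical details replaced by [REDACTED]."""
    for pattern in PATTERNS[intensity]:
        text = re.sub(pattern, "[REDACTED]", text)
    return text
```

The result still reads like a complete document, which is the whole point: a reviewer skimming for length and polish sees nothing missing.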
Companies across China have been building AI digital twins of departed employees, feeding their old chat histories and documents into large models to produce clones that keep working after the humans are gone.
In one verified case, an employee left, and their replacement was literally an AI trained on every message they had ever sent.
The anti-distill tool went viral on GitHub within hours of being posted, racking up stars faster than almost anything trending that week.
The implications reach far beyond China's borders.
Every knowledge worker on earth now faces a version of this question: when your company asks you to document your process, is it building the tools to replace you?
Fadzli Nasir reposted

Warren Buffett’s career advice:
“I was up at Harvard a while back, and a very nice young guy, he picked me up at the airport, a Harvard Business School attendee.”
And he said, “Look. I went to undergrad here, and then I worked for X and Y and Z, and now I’ve come here.” And he said, “I thought it would really round out my résumé perfectly if I went to work now for a big management consulting firm.”
And I said, “Well, is that what you want to do?” And he said, “No,” but he said, “That’s the perfect résumé.”
And I said, “Well when are you going to start doing what you like?” And he said, “Well I’ll get to that someday.”
And I said, “Well you know, your plan sounds to me a lot like saving up sex for your old age. It just doesn’t make a lot of sense.” 🤣

Fadzli Nasir reposted

I'm going to delete this post in 48 hrs...
Because I just dropped the most BRAIN-DEAD creative strategy for scaling Facebook ads in 2026.
This is the exact TOP-TIER system we use to find winning ads at scale, just by copying proven concepts from other industries and letting AI do the job for us.
We charge $10,000/mo to do this for clients…
But today, I'm giving it away 100% FREE.
Like + Comment "BRAIN" and I'll send it to you.
Fadzli Nasir reposted

Three weeks ago there were rumors that one of the labs had completed its largest ever successful training run, and that the model that emerged from it performed far above both internal expectations and what people assumed the scaling laws would predict. At the time these were only rumors, and no lab was attached to them. But in light of what we now know about Mythos, they look more credible, and the lab was probably Anthropic.
Around the same time there were also rumors that one of the frontier labs had made an architectural breakthrough. If you are in enough group chats, you hear claims like this constantly, and most turn out to be nothing. But if Anthropic found that training above a certain scale, or in a certain way at that scale, produces capabilities that sit far above the prior trendline, then that is an architectural breakthrough.
I think the leaked blog post was real, but still a draft. Mythos and Capybara were both candidate names for the new tier, though Mythos may now have enough mindshare that they end up keeping it. The specific rumor in early March was that the run produced a model roughly twice as performant as expected. That remains unconfirmed. What is confirmed is that Anthropic told Fortune the new model is a "step change"; a sudden 2x would certainly fit the definition.
We will find out in April how much of this is true. My own view is that the broad shape of this is correct even if some of the numbers are wrong. And if it is substantially accurate, then it also casts OpenAI's recent restructuring in a new light. If very large training runs are about to become essential to staying in the game, then a lot of their recent decisions, like dropping Sora, make even more sense strategically.
For the public, this would mean the best models in the world are about to become much more expensive to serve, and therefore much more expensive to use. That will put pressure on rate limits, pricing, and subscription plans that are already subsidized to some unknown degree. Instead of becoming too cheap to meter, frontier intelligence may be about to become too expensive for most of humanity to afford.
Second-order effects: compute, memory, and energy are about to become much more important than they already are. In the blog they describe the new model as not just an improvement, but as having "dramatically higher scores" than Opus 4.6 in coding and reasoning, and as being "far ahead" of any other current models. If this is the new reality, then scale is about to become king in a whole new way. It would also mean, as usual, that Jensen wins again.
Fadzli Nasir reposted

The CEO of a $3 trillion company just admitted the biggest threat to AI has nothing to do with the technology itself.
It is YOU.
Satya Nadella spoke at Davos and said the real obstacle to AI is getting people to actually change how they work.
He gave a personal example.
Before Davos, his team would spend days preparing briefing notes, filtering up through layers of staff before reaching him.
That process had not changed since he joined Microsoft in 1992.
Now he types one sentence into Copilot and gets a full 360-degree brief in seconds: what Microsoft is doing for a client, what that client is doing for Microsoft, the whole picture at once.
Nadella said that kind of capability does not just speed things up, it completely inverts how information flows through an entire organization.
The old model (departments hoarding knowledge, information trickling upward through hierarchy) is now structurally obsolete.
Most companies have not figured that out yet.
He said firms will see almost zero productivity gains from AI unless leaders actively redesign their structures, retrain their people, and rebuild how context moves through the organization.
The companies that refuse to change will not just fall behind; they will become irrelevant to the ones that do.
His exact words: "That's why you're going to see the challenge of why am I not seeing immediate results in productivity. You have to do the hard work."
The hard work is convincing an entire workforce to let go of how they have operated for decades.
That is the actual AI race and most companies are losing it before it even starts.
unusual_whales @unusual_whales
Microsoft CEO: The biggest obstacle to expanding artificial intelligence is persuading people to change the way they work.
Fadzli Nasir reposted

What does it mean for software engineering when we no longer write the code? Here's the take from Boris Cherny (@bcherny), the creator of Claude Code. Timestamps:
00:00 Intro
11:15 Lessons from Meta
19:46 Joining Anthropic
23:08 The origins of Claude Code
32:55 Boris's Claude Code workflow
36:27 Parallel agents
40:25 Code reviews
47:18 Claude Code's architecture
52:38 Permissions and sandboxing
55:05 Engineering culture at Anthropic
1:05:15 Claude Cowork
1:12:48 Observability and privacy
1:14:45 Agent swarms
1:21:16 LLMs and the printing press analogy
1:30:16 Standout engineer archetypes
1:32:12 What skills still matter for engineers
1:35:24 Book recommendations
Brought to you by:
• @statsig — The unified platform for flags, analytics, experiments, and more. statsig.com/pragmatic
• @SonarSource – The makers of SonarQube, the industry standard for automated code review. Proactively find and fix issues in real-time with the SonarQube MCP Server: sonarsource.com/products/sonar…
• @WorkOS – Everything you need to make your app enterprise ready. workos.com
Three interesting things from this conversation:
1. Boris automated himself out of code review well before AI.
Boris was one of the most prolific code reviewers at Meta, and he worked hard to minimize time spent on code review. His system: every time he left the same kind of review comment, he logged it in a spreadsheet. Once a pattern hit 3-4 occurrences, he'd write a lint rule to automate it away!
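The spreadsheet-and-threshold loop he describes is simple enough to sketch as code. The data shape and function name here are hypothetical; the cutoff follows his "3-4 occurrences" rule:

```python
# A minimal sketch of the tally-then-automate loop: count recurring
# review comments and surface any pattern frequent enough to be worth
# turning into a lint rule. Boris's actual system was a spreadsheet.
from collections import Counter

LINT_THRESHOLD = 3  # occurrences before a comment becomes a lint-rule candidate

def lint_candidates(review_comments: list[str]) -> list[str]:
    """Return recurring review comments worth automating away."""
    counts = Counter(review_comments)
    return [comment for comment, n in counts.items() if n >= LINT_THRESHOLD]
```

The design point is the feedback loop, not the code: any comment a reviewer types more than a couple of times is a candidate for a machine to type instead.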
2. PRDs are dead on the Claude Code team: prototypes replaced them.
Instead of writing Product Requirement Documents (specs), they build hundreds of working prototypes before shipping a feature. Boris: “There’s just no way we could have shipped this if we started with static mocks and Figma or if we started with a PRD.”
3. This is the year of the generalist (and maybe the year of those with ADHD)
Boris’s work has shifted from deep-focus single-threaded coding to managing multiple parallel agents and context-switching rapidly. As Boris put it: “It’s not so much about deep work, it’s about how good I am at context switching and jumping across multiple different contexts very quickly.”
Fadzli Nasir reposted

Every engineer I speak to is worried about the future of their profession.
I tell them to join a company that requires them to:
- Solve increasingly complex problems that require deep nuance and taste
- Build big agentic systems that require specialized AI/ML knowledge
- Flex their communication skills as the role of PM/Engineer converge
- Maximize their use of coding agents & AI tools in their workflows
- Work across a breadth of projects that improve taste and dot connection over time
P.S. This is the expectation of every engineer we hire at @tenex_labs. Apply and stay relevant in a post-AI world.