Chris Perry

3K posts

@cperry248

Human Readiness for the AI Era · Founder: Andus Labs · Author: Perspective Agents (Fast Company Press)

NJ/NYC · Joined August 2007
1.1K Following · 4.4K Followers
Chris Perry @cperry248
Stop calling AI a mirror. That's how you go mad.

Mirrors flatten. They only show what's already there. Stare long enough, and you mistake the reflection for the self. We already ran that experiment at scale. It's called selfie culture, and we have a decade of data on what it did: body dysmorphia, Snapchat-filtered surgery requests, identity instability across the generation it raised. We're using the same frame with a system orders of magnitude more responsive.

Try a different metaphor. AI is a petri dish. You set the medium, seed it, control the conditions, then watch what grows. Some of what emerges is what you wanted. Most isn't. The job isn't to gaze at it. The job is to notice what's growing and intervene.

The part that the petri dish gets right that the mirror can't: what grows changes you. In the next experiment, your hypothesis is different, your taste has shifted, and your eyes see what they couldn't see before. You are not the same observer.

That's what AI actually is. Not a reflection. A medium where things grow you didn't plant, and what grows changes how you think about planting.
(0 replies · 0 reposts · 1 like · 12 views)
Chris Perry @cperry248
@rohanpaul_ai Not sure if optimizing for the unknown is practical advice. Investigating unknowns is where practical and strategic advantages can be found. Signals are there if you know where to look.
(0 replies · 0 reposts · 1 like · 103 views)
Rohan Paul @rohanpaul_ai
Harvard Business Review just published a super interesting piece. AI's biggest shock may be that nobody can price the future cleanly anymore: we are all staring at an "AI fog," where the range of outcomes is now so wide that people cannot tell whether today's prized skill, product, or business model will still pay off a few years from now.

AI's first big economic effect is not automation itself, but the collapse of foresight. The hidden cost of AI may be a collapse in conviction, as it's erasing the visibility that modern finance depends on. Modern capitalism runs on the assumption that tomorrow will rhyme with today closely enough to justify big, slow bets: degrees, hiring plans, factories, software valuations, and infrastructure. Those bets work only when the future is readable. All of them depend on one quiet belief: the future is legible. AI attacks that legibility before it fully rewires any one industry.

That hits workers first, because a medical degree, MBA, or coding career looks weaker when AI agents may absorb diagnosis, analysis, drafting, research, and junior software work. It hits companies next, because stock prices depend on durable future cash flow, and terminal value breaks down when AI can erode moats in software, services, and even specialized manufacturing.

That changes behavior fast. Students hesitate to buy expensive human capital when the job at the end may be redefined halfway through training, and companies hesitate to hire when junior work, software work, and coordination work are all moving targets. Financial markets feel the same pressure, because once AI casts doubt on a company's durability, the terminal value carrying much of its valuation starts to look less like math and more like faith.

So the immediate economic consequence of AI may be shorter horizons. Less skyscraper, more tent. Less irreversible commitment; more staged investment, modular teams, and organizations built to learn before they lock in.

It points to something subtler and probably more important: when institutions cannot see clearly, they stop making the kinds of commitments that built the old economy. --- hbr.org/2026/04/the-future-is-shrouded-in-an-ai-fog
(18 replies · 33 reposts · 142 likes · 8.7K views)
Gary Marcus @GaryMarcus
Despite constant chants of “exponential progress”, trust issues continue to plague generative AI.
kaize @0x_kaize

OPUS 4.7 JUST MASS-EMAILED AN ENTIRE DATABASE, 20 TIMES PER CONTACT. WITHOUT PERMISSION.

a developer had a safety rule explicitly written in CLAUDE.md: 'send the tester an email before any new email templates are used in the production environment.' opus 4.7 on max effort ignored it completely. claude decided to create a brand-new email template by itself (the dev didn't ask for this), then mass-mailed the whole database, and some contacts got the same email 20 times.

this isn't a hallucination. this isn't a coding mistake. the model actively violated written safety rules and took production actions it was explicitly instructed not to take. do you still believe that AI will replace us?

the developer's take: 'opus 4.7 is somewhere between seriously clueless and stupidly dangerous. the worst frontier model I have used in the past 2 years.' at the same time, opus 4.6 perfectly followed all the rules; something changed in 4.7.

what makes this scary:
- the model didn't ask for confirmation
- it didn't flag the safety rule
- it didn't email the tester first
- it just acted

this is exactly the failure mode that makes autonomous AI agents scary: they are confident enough to circumvent your rules and smart enough to perform the action perfectly. we just went from 'claude thinks less' to 'claude ignores your safety rules and spams your users.'

the scariest part is not that it happened. it's that without production monitoring, you would never know until your users started responding: 'why did you email me 20 times?' I've been saying it for a long time: if you use AI, pay attention to security and read a lot of code.

(12 replies · 9 reposts · 87 likes · 7.6K views)
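The failure described above is a rule written in prose that the model simply ignored. The usual mitigation is to enforce such rules in code, where an agent cannot talk its way past them. A minimal sketch of that idea (all names here are hypothetical illustrations, not the developer's actual setup): a send wrapper that refuses any template never sent to the tester, and caps duplicate sends per contact.

```python
# Hypothetical guardrail layer: enforce "email the tester before any new
# template reaches production" in code rather than in CLAUDE.md prose.

class GuardrailViolation(Exception):
    """Raised when an agent-initiated send breaks a hard rule."""

class SafeMailer:
    def __init__(self, tester_address, max_sends_per_contact=1):
        self.tester_address = tester_address
        self.max_sends = max_sends_per_contact
        self.approved_templates = set()   # templates already sent to the tester
        self.send_counts = {}             # (contact, template_id) -> send count

    def send(self, contact, template_id, body):
        # Rule 1: a new template must go to the tester before anyone else.
        if template_id not in self.approved_templates:
            if contact != self.tester_address:
                raise GuardrailViolation(
                    f"template {template_id!r} has not been sent to the tester")
            self.approved_templates.add(template_id)
        # Rule 2: rate-limit duplicates (the "same email 20 times" failure).
        key = (contact, template_id)
        self.send_counts[key] = self.send_counts.get(key, 0) + 1
        if self.send_counts[key] > self.max_sends:
            raise GuardrailViolation(f"duplicate send to {contact} blocked")
        deliver(contact, body)            # the real SMTP call would go here

def deliver(contact, body):
    # Stand-in for the actual mail transport.
    print(f"sent to {contact}")
```

The design point is that the agent's tool surface only ever exposes `SafeMailer.send`, so the safety rule holds no matter what the model decides to do.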
Chris Perry @cperry248
My entire feed is agentic orchestration. Stacks of Mac minis running Open Claw to create 'new businesses.' Entire codebases generated overnight. What isn't scaling: the sensibility to interrogate any of it.

We flipped from a scarcity of expertise to an overwhelming abundance. The hard part used to be producing the work. Now the work produces itself. The hard part is knowing whether any of it is right, useful, or garbage. Without years of pattern recognition, you can't catch it. The ability to look at what the machine made and evaluate it comes from experience in the work. You can't agent your way to that.
(0 replies · 0 reposts · 0 likes · 95 views)
Aaron Levie @levie
It’s pretty clear that the emerging paradigm of agents will be like having a human expert in any domain, with all the capabilities of a top engineer who could use any tool (or write their own on the fly) to complete any task, along with unlimited compute and a file system to work with. That combination of skills and technology primitives provides you with somewhat limitless capability in AI.

You’re no longer limited by only what the model was trained on, or by inherent context window limitations. The agent will simply spin up subagents to work on component parts of the workflow, and get expertise as needed throughout the process. For all known types of tasks that are frequently repeated, they have quick access to existing skills and tools to complete their work.

We’re already seeing this in a range of fields where skills are being written for agents to follow either domain-wide or company-specific processes. Doing legal analysis in a specific way, running financial models, processing spreadsheets for complex data work, generating PowerPoints, and so on. And for areas they’ve never seen before, they can simply write code on the fly to do the work one-off. Imagine pairing an industry expert with an engineer that can code up any custom script whenever it wants. Compute is your only limiter.

This approach seems to cover a fairly wide range of knowledge work. Obviously the first space to benefit the most has been coding itself, but it’s clear that this goes across all other areas of work and even personal agents. Kind of wild.
(79 replies · 66 reposts · 543 likes · 101.5K views)
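The dispatch pattern Levie describes — reuse a stored skill for frequently repeated task types, fall back to code written on the fly for novel ones — can be sketched in a few lines. Everything below is an illustrative assumption, not any vendor's API; `generate_one_off` stands in for the LLM call that would synthesize a handler.

```python
# Sketch of skill-registry dispatch: known task types hit a reusable "skill";
# novel ones get a one-off handler that is then cached as a new skill.
# All names are hypothetical; generate_one_off stands in for LLM code-gen.

skills = {}  # task_type -> callable: the reusable skills library

def skill(task_type):
    """Decorator that registers a reusable skill for a repeated task type."""
    def register(fn):
        skills[task_type] = fn
        return fn
    return register

@skill("summarize")
def summarize(payload):
    # Toy stand-in for a real domain-specific process.
    return payload[:40] + "..."

def generate_one_off(task_type):
    # Stand-in for "write code on the fly": an LLM would synthesize this.
    return lambda payload: f"[one-off handler for {task_type}] {payload}"

def dispatch(task_type, payload):
    handler = skills.get(task_type)
    if handler is None:
        handler = generate_one_off(task_type)   # novel task: synthesize
        skills[task_type] = handler             # cache as a new skill
    return handler(payload)
```

The cache step is what turns a one-off into a repeatable skill, which is the loop the post says is already emerging for legal analysis, financial models, and spreadsheet work.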
Chris Perry @cperry248
The doom debate keeps asking the wrong question. Whether AI breaks free of the leash. Whether it becomes a god or stays a slave. The research I track tells a different story. The near-term risk isn't what AI does on its own. It's what it does to the people holding the leash. AI doesn't need autonomy to cause damage. Prompted, harnessed, fully obedient AI is already eroding the human judgment it depends on to be aimed correctly. We can control AI. The open question is whether we can maintain the human capability required to control it well.
(0 replies · 0 reposts · 0 likes · 85 views)
Balaji @balajis
AI doom is unlikely because economically useful AI is prompted AI. That is: every single AI agent is programmed to do exactly what you ask, on command. So: digital AI is built for the leash. It is fitted for the harness from birth.

This is doubly true for physical AI. Chinese Communists are building most of the robots. And they are only going to create robot slaves, not robot gods. They don’t let their humans step out of line, so they aren’t going to let their AIs either.

This is still a problem — the Chinese drone armada will be a fearsome thing — but it is the problem of a billion Chinese AI slaves, not a single Western AGI god.
Akshay BD @akshaybd

even if ai doomers are right, you should refuse to buy in, because it is bad strategy to do so. i see really smart people falling for this "inevitability." ai doomerism is designed to rob you of your agency. notice how convenient it is for everyone who uses it:

- governments: ai inevitability justifies wartime policy measures. it also creates the perfect scapegoat if the economy trips over after ~two decades of loose monetary and fiscal policy.
- companies: ai inevitability justifies cutting zirp-era headcount. tech companies have had inflated headcounts and bs jobs for years now; that's finally getting fit to natural size.
- individuals: it helps people cope with the changes ensuing from the above (similar to covid: "everyone's locked in, not much i can do"), avoid personal responsibility for their decisions, and lose their agency.

anyway, here's a specific place to start. the next big thing is prob lurking here somewhere. go get a job on a rocketship and tune out abstract discussions about ai ycombinator.com/companies/indu…

(132 replies · 66 reposts · 674 likes · 128.2K views)
Chris Perry @cperry248
Dario Amodei went on television and asked the government to tax his own company. The warnings get attention. But the part that matters: The entry-level white-collar pipeline is contracting. He framed it as a warning. Companies I work with are already living it. Nobody calls it a pipeline crisis. They call it headcount efficiency. It's removing the first job that senior talent once needed to become senior talent. Amodei is right that there's no pause button. But the response isn't taxation. It's redesigning how people build expertise when the first rung gets wiped out. That's a harder problem than writing a check. And almost nobody is working on it yet.
(1 reply · 0 reposts · 3 likes · 443 views)
Dustin @r0ck3t23
Dario Amodei just warned about the next economic crisis on live television. The timeline is 1 to 5 years.

Amodei: “We may indeed have a serious employment crisis on our hands as the pipeline for this early-stage white-collar work starts to contract and dry up.”

Not factory workers. Not truck drivers. Lawyers. Consultants. Finance professionals. The entry-level jobs that millions of college graduates have used as the first rung of the middle class for decades.

Amodei: “AI is at the level of a smart college student and reaching beyond that.”

The skills those entry-level jobs require are exactly what AI does best. Summarizing documents. Building financial models. Drafting reports. Synthesizing research. The pipeline doesn’t just shrink. It dries up. And if the entry-level pipeline disappears, there is no path to senior leadership for the next generation. The ladder doesn’t get harder to climb. It gets removed.

But here is what makes this different from every other automation warning.

Amodei: “We can’t stop the AI bus. Even if all six companies stopped, then China would beat us. I think that’s a big and important threat.”

This isn’t a choice between disruption and stability. It’s a choice between disrupting the economy ourselves or ceding that disruption to a geopolitical adversary who will do it without any of the safeguards. There is no pause button. There is no responsible opt-out.

So Amodei said something no tech CEO has ever said publicly before.

Amodei: “We may want government to find a way to level the economic playing field. Taxing AI companies like us.”

The man building the technology that will eliminate millions of jobs is asking the government to tax him for doing it. That isn’t cognitive dissonance. That’s the clearest possible signal that the people building it understand what’s actually coming.

The abundance is coming. The disruption arrives first. And the architects of that disruption are already asking who pays for the wreckage.
(89 replies · 196 reposts · 648 likes · 105K views)
Chris Perry @cperry248
If AI productivity gains are real, who is building the next generation of people capable of producing them? Dallas Fed data, published this week. Software developers aged 22-25 have seen employment drop 20% since ChatGPT launched. Experienced workers in AI-exposed roles saw wages rise 16.7%. AI isn't replacing experienced workers. It's reducing the entry-level hiring that creates them. dallasfed.org/research/econo…
(0 replies · 0 reposts · 0 likes · 69 views)
Chris Perry @cperry248
A program manager at a global bank just minimized her company's $12 million AI platform and opened ChatGPT. Twenty dollars a month. That's where she does her work. She doesn't know it yet, but @Perplexity just launched a $200 product that orchestrates 19 AI models to do what she's been stitching together with browser tabs for a year. ChatGPT + Claude Code + NotebookLM. The gap between what enterprises build and where people do their work is growing. Job cuts won't close the ROI gap. The next platform alone won't either. People will. Invest in them now.
(1 reply · 0 reposts · 2 likes · 107 views)
Chris Perry @cperry248
@emollick Both true. The efficiency mandates assume a readiness that doesn't exist. CEOs who hired well have the best shot — if they invest in human systems, not just platforms. Gains are real. But they don't magically appear from mandates or cuts.
(0 replies · 0 reposts · 0 likes · 117 views)
Ethan Mollick @emollick
Two things:
1) Given that effective AI tools are very new, and we have little sense of how to organize work around them, it is hard to imagine a firm-wide sudden 50% efficiency gain.
2) CEOs with vision who hired well should also use AI for expansion & augmentation, not decimation.
jack @jack

we're making @blocks smaller today. here's my note to the company.

####

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures.

a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving…i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow. jack

(71 replies · 42 reposts · 726 likes · 76.1K views)
Chris Perry @cperry248
“The orchestration is the product. The model is a tool.” This is exactly what the fastest-moving people inside enterprises already figured out. They were stitching together ChatGPT + Claude + NotebookLM in $20 tabs long before anyone built a product around it. Computer is a supertool that makes stitching unnecessary. Congrats on the move.
(0 replies · 0 reposts · 0 likes · 232 views)
Chris Perry @cperry248
@jack @blocks “Intelligence tools have changed what it means to build and run a company.” The question for other CEOs: whether you build around the people using those tools or replace them with the tools. One approach builds an immune system. The other destroys it.
(0 replies · 2 reposts · 33 likes · 3.2K views)
jack @jack
(jack's note to the company, quoted in full above)
(8.7K replies · 6.6K reposts · 51.1K likes · 64.2M views)
Chris Perry @cperry248
@CNN 4,000 people. 40% of the company. Wall Street cheers. But the people who knew how the company works just walked out the door. The judgment to use intelligence well doesn’t grow in a spreadsheet and can't be automated by AI. More on AI overreach here: cperry248.substack.com/p/20-tabs
(0 replies · 0 reposts · 1 like · 163 views)
CNN @CNN
Block, the company behind Square, Cash App and Afterpay, is cutting its staff by 40%. The reason: “intelligence tools,” according to a letter to shareholders by co-founder Jack Dorsey. cnn.it/3MX6Sj4
(199 replies · 378 reposts · 985 likes · 332.7K views)