Mike Peterson

1.9K posts

@mpvprb

Software and hardware engineer since 1972

Northern California · Joined January 2025
486 Following · 325 Followers
Mike Peterson
Mike Peterson@mpvprb·
@TrisH0x2A "built decades ago by engineers who" could touch type well and liked the terminal.
English
0
0
0
14
trish
trish@TrisH0x2A·
grep, cat, cut, find, ls, less, sed. built decades ago by engineers who just wanted tools that actually work. still crushing bloated modern stuff every day. never needed a redesign, never needed a logo. just does the job, quietly better than almost anything else
English
19
23
275
4.7K
Mike Peterson
Mike Peterson@mpvprb·
@davidu "lawfully governed" ??? Not under the current administration
English
0
0
0
52
AI Highlight
AI Highlight@AIHighlight·
🚨BREAKING: Anthropic just published a study mapping exactly which jobs its own AI is replacing right now. The workers most at risk are not who anyone expected. They are older. They are more educated. They earn 47% more than average. And they are nearly four times more likely to hold a graduate degree than the workers AI is not touching.

The argument is straightforward. Anthropic built a new metric called "observed exposure." Not what AI could theoretically do. What it is actually doing right now in professional settings, measured against millions of real Claude conversations from enterprise users.

For computer and math workers, AI is theoretically capable of handling 94% of their tasks. It is currently handling 33% of them. For office and administrative roles, theoretical capability is 90%. Current observed usage is 40%. The gap between what AI can do and what it is already doing is enormous. The researchers are explicit about what comes next. As capabilities improve and adoption deepens, the red area grows to fill the blue.

The demographic finding is what makes the paper uncomfortable. The most AI-exposed workers earn 47% more on average than the least exposed group. They are more likely to be female. They are more likely to be college educated. This is not a story about warehouse workers or truck drivers. It is a story about lawyers, financial analysts, market researchers, and software developers. The exact group whose education was supposed to insulate them.

Computer programmers showed the highest observed AI exposure at 74.5%. Customer service representatives at 70.1%. Data entry keyers at 67.1%. Medical record specialists at 66.7%. Market research analysts and marketing specialists at 64.8%. These are not predictions. These are measurements of work that is already happening on AI platforms right now.

Then there is the pipeline finding nobody is talking about loudly enough. Anthropic's researchers found a 14% decline in the job-finding rate for workers aged 22 to 25 in highly exposed occupations since ChatGPT launched. No comparable effect for workers over 25. Entry-level roles were never just jobs. They were the training ground where junior analysts became senior analysts, where junior lawyers learned how arguments hold together. If that layer disappears, nobody has answered the question of where the next generation of senior professionals comes from.

The detail buried in the paper that most coverage missed: 30% of American workers have zero AI exposure at all. Cooks. Mechanics. Bartenders. Dishwashers. The technology reshaping professional careers is completely irrelevant to roughly a third of the workforce. The divide is no longer between high skill and low skill. It is between presence and absence.

The company publishing this study is the same company selling the AI doing the replacing. Anthropic had every commercial incentive to soften these findings. They published them anyway.

If you spent four years and $200,000 on a degree to land a white collar career, the company that builds Claude just confirmed your job is more exposed than the bartender pouring drinks at your graduation party.

Source: Anthropic, "Labor market impacts of AI: A new measure and early evidence" PDF: anthropic.com/research/labor…
AI Highlight tweet media
English
171
916
2.5K
432K
Mike Peterson
Mike Peterson@mpvprb·
@r0ck3t23 " The willingness to risk is being regulated into extinction." Strongly agreed. There are too many "vetocrats" whose only job is to say no. There are too many lawsuits designed to make the lawyers and plaintiffs rich while destroying innovation.
English
0
0
0
0
Dustin
Dustin@r0ck3t23·
Giannis Antetokounmpo just dismantled a lie most people never even question. A reporter looked him in the eye after an elimination and asked the question the system always asks. “Do you view this season as a failure?” That is not a question. That is a trap dressed as journalism.

Giannis did not flinch. He did not defend. He asked one question back. Giannis: “Do you get a promotion every year? No, right? So every year you work is a failure?” The room went dead. Then he buried it. Giannis: “Michael Jordan played 15 years, won six championships. The other nine years was a failure?”

The greatest competitor the sport has ever seen spent more seasons losing than winning. Those nine years were not wasted. They were the price of the six. This is not just a sports clip. This is a mirror held up to the entire American operating system.

The United States was built by people who treated failure as tuition. Now it punishes anyone who tries to pay it. The bureaucracy has made risk irrational. The permits. The compliance layers. The legal exposure. The months of paperwork that collapse because of one technicality. The cost of attempting something bold in America is now so high that the rational move is to attempt nothing at all. That is not a policy problem. That is an innovation crisis dressed as procedure.

When the penalty for failing is losing years of work, your life savings, and your reputation, most people do the math and stay in line. They take the safe promotion. They build nothing. And the system calls that stability.

One person refused to do that math. Elon Musk watched three SpaceX rockets explode before the fourth one flew. Any other founder in any other era would have been buried by the cost alone. Musk did not see three failures. He saw three datasets that no amount of simulation could have produced. Every explosion told his engineers exactly where the physics broke. Every crater in the launchpad was a blueprint written in wreckage. That is the difference between a system that fears failure and a mind that weaponizes it.

An AI model operates on the same principle. It does not reach superintelligence on the first try. It requires billions of errors. It absorbs the loss, updates the weights, and fires again. To the machine, failure is not a defeat. It is training data. Giannis described this process for the human body. Musk proved it with hardware. AI is automating it at scale.

And here is where the stakes go from personal to civilizational. The country that builds the most powerful AI will set the rules for the next century. That is not speculation. That is the new arms race. China is not slowing down because a launch failed. They are studying the debris and building the next one before the smoke clears. They have structured their entire system to absorb failure at speed. America has structured its system to avoid failure at all costs. And the cost of that avoidance is already showing up on the scoreboard. The lead is shrinking.

The nations that win the next fifty years will not be the ones with the cleanest records. They will be the ones who learned the fastest. And you cannot learn fast if your system treats every failure as a funeral. The spectators need a clean scorecard so they can sleep at night. The operators know that progress does not announce itself. It compounds in silence. It looks like a flatline for years before the curve goes vertical.

America does not have a talent problem. It has a permission problem. The talent is here. The willingness to risk is being regulated into extinction. The country that treats failure as data will own the future. The country that treats failure as disgrace will watch from the sidelines and wonder what happened.
English
7
17
50
2.4K
Mike Peterson
Mike Peterson@mpvprb·
@TrisH0x2A Clever and universal for some use cases, limited and restrictive for others.
English
0
0
0
3
trish
trish@TrisH0x2A·
first things first: in UNIX, everything is a file. your terminal? a file. your keyboard? a file. your printer? a file. when you call printf(), you're writing to a special file called stdout. that's it. that's the whole model.
English
3
1
24
1.3K
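A minimal sketch of the model trish describes above, assuming a POSIX system (this code is not from the thread itself): printf() ultimately writes to file descriptor 1 (stdout), the same "file" a raw write() targets, and the terminal device is itself addressable as /dev/tty.

/* "everything is a file": stdout, a raw descriptor, and the terminal
   device are all written to the same way. Note that printf() is
   buffered, so its output may interleave with the unbuffered write(). */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("via printf\n");                    /* buffered write to stdout */
    write(STDOUT_FILENO, "via write\n", 10);   /* raw write to descriptor 1 */

    /* The controlling terminal is a file too. */
    FILE *tty = fopen("/dev/tty", "w");
    if (tty != NULL) {
        fprintf(tty, "written straight to the terminal device\n");
        fclose(tty);
    }
    return 0;
}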
trish
trish@TrisH0x2A·
most C devs think file handling is just fopen() and fclose() but here's what they're missing: every time you call printf(), you're already doing file I/O in C. once that clicks, the entire file handling API makes complete sense. thread on how it actually works and how to use it right.
trish tweet media
English
11
15
219
10.3K
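As a hedged illustration of the point in trish's post above (not code from the thread): printf(...) behaves like fprintf(stdout, ...), so writing to the screen and writing to a file you opened with fopen() go through the same FILE* API. The filename out.txt is an illustrative choice.

/* printf() is already file I/O: the same fprintf() call works on
   stdout and on a stream you opened yourself. */
#include <stdio.h>

int main(void) {
    printf("hello\n");             /* implicit stream: stdout */
    fprintf(stdout, "hello\n");    /* the same write, stream spelled out */

    FILE *fp = fopen("out.txt", "w");   /* illustrative filename */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(fp, "hello\n");        /* identical API, different "file" */
    fclose(fp);
    return 0;
}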
Mike Peterson
Mike Peterson@mpvprb·
@binarybits They are doing the right thing. They are testing extensively and rolling out slowly and cautiously. The problem is hard and the tech is immature.
English
2
0
2
57
Timothy B. Lee
Timothy B. Lee@binarybits·
At Tesla, rapid robotaxi scaling is always just around the corner.
Timothy B. Lee tweet media
English
13
6
76
5.7K
Mike Peterson
Mike Peterson@mpvprb·
@BlackHC Sadly, every invention throughout history has been used for war. Humans have a very serious mental defect that leads to war, war, endless war, throughout all of history.
English
0
0
0
36
Andreas Kirsch 🇺🇦
I'm speechless at Google signing a deal to use our AI models for classified tasks. Frankly, it is shameful. For HR, I'm not speaking on behalf of Google but in my personal capacity, quoting public information from a well-sourced article of a reputable publication
Andreas Kirsch 🇺🇦 tweet media
English
125
102
709
108.9K
Mike Peterson
Mike Peterson@mpvprb·
@realBigBrainAI @jack "Speeding up old workflows with AI is a short-term gain" It's a necessary first step. Redesigning workflows is hard, really hard. Incrementally improving old workflows is easier.
English
0
0
0
1
Big Brain AI
Big Brain AI@realBigBrainAI·
Jack Dorsey, co-founder of Twitter (now X) and Block, on why treating AI as a "copilot" is a losing strategy:

@jack argues that most companies are approaching AI in a way that will make it nearly impossible for them to survive. "I think most of the industry is thinking about AI as like a co-pilot, as something that is augmented onto, rather than like how do you just rebuild our whole company with this as the core."

His concern is that bolting AI onto existing structures produces companies that look indistinguishable from each other, and from the AI labs themselves. "If it doesn't make sense for your business to do that and you end up being or looking very similar or rhyming too closely with the frontier labs, then I think it's going to be very, very challenging to differentiate and survive."

This thinking has been driving his decisions since early 2024, when these tools "really came to bear." That's when his team began building Goose, an agent coding harness, as part of a broader effort to rebuild around AI rather than layer it on top.

The core insight? Speeding up old workflows with AI is a short-term gain every competitor will match. Real differentiation comes from rebuilding the company itself around intelligence.
English
168
228
1.8K
731.7K
Mike Peterson
Mike Peterson@mpvprb·
@Dan_Jeffries1 "ultrasocial learning machines" Not me. I'm an antisocial learning and doing machine
English
1
0
0
3
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
It's incredibly likely that the intelligence that makes us distinctly human and uniquely smart in the world comes from *friendliness and co-operation.* That's a super positive signal for superintelligence. It's likely that super smart machines will be super smart *because* they're friendly, aligned, co-operative and wise.

Author Rutger Bregman explains it well in the book Humankind: "Human beings, it turns out, are ultrasocial learning machines. We’re born to learn, to bond and to play."

We tend to think of superintelligence as being super aggressive and powerful and self-serving. But is that intelligence at all? Is it even good for survivability?

Take the Neanderthals: The Neanderthals were, in all likelihood, much more aggressive, stronger and less co-operative. They had bigger brains, bigger muscles, stronger bones, stronger skulls/teeth and they were all around tougher. We think that means better survivability but it's *the exact opposite.* The more you're alone and care only about yourself in the world the *less chance you have to survive.* By contrast, Homo sapiens got friendly, more co-operative and able to work together in ever larger groups.

Contrast that to the dark vision of superintelligent machines turning into homicidal maniacs that want to delete humanity and you realize that the people pushing this theory know very little about the nature of intelligence, or maybe just spent a bit too much time on this platform and got the wrong idea about what socialization actually means.

The entire passage from Humankind:

"Human beings, it turns out, are ultrasocial learning machines. We’re born to learn, to bond and to play. Maybe it’s not so strange, then, that blushing is the only expression that’s uniquely human.

"Blushing, after all, is quintessentially social–it’s people showing they care what others think, which fosters trust and enables cooperation. Something similar happens when we look one another in the eye, because humans have another weird feature: we have whites in our eyes.

"This unique trait lets us follow the direction of other people’s gazes. Every other primate, more than two hundred species in all, produces melanin that tints their eyes. Like poker players wearing shades, this obscures the direction of their gaze.

"But not humans. We’re open books; the object of our attention plain for all to see.

"Imagine how different human friendships and romance would be if we couldn’t look each other in the eye. How would we feel able to trust one another? Brian Hare suspects our unusual eyes are another product of human domestication. As we evolved to become more social, we also began revealing more about our inner thoughts and emotions."

- Bregman, Rutger. Humankind: A Hopeful History (p. 69). Kindle Edition.
Daniel Jeffries tweet media
English
6
4
21
1.3K
Mike Peterson
Mike Peterson@mpvprb·
@r0ck3t23 "The product will be genuinely useful" Yeah, maybe. I'm skeptical. I have no doubt that it will have some specialized applications, but I'm skeptical that it will be a useful tool for the mass public.
English
0
0
0
8
Dustin
Dustin@r0ck3t23·
Mark Zuckerberg just announced the largest shift in human perception since the invention of the written word. He framed it as a product update.

Zuckerberg: “The main value we’re trying to bring is this feeling of presence.”

Presence used to be the one thing technology could not counterfeit. You showed up. You gave your time. You endured the friction of being seen with nothing between you and another person. The discomfort was the proof. If it felt effortless, it wasn’t real. Zuckerberg wants to manufacture the feeling without the physics. A feeling stripped of friction has a different name. It is called a simulation.

Zuckerberg: “Glasses are going to be the ideal form factor… they can let them see what you see and hear what you hear.”

The previous era of technology captured what you typed. What you clicked. What you purchased. All of it after the fact. All of it voluntary. This captures what you see before you decide what it means.

Zuckerberg: “It has to have context and understand what’s going on in your life, both kind of at a global level and like what’s physically happening around you right now.”

He is not describing a search engine. He is describing an intelligence that requires your entire sensory field just to operate. Your vision. Your surroundings. Your reflexes. All of it captured before you even form a reaction.

The part that should unsettle you is not the technology itself. It is that none of this requires a conspiracy. The product will be genuinely useful. The AI will be genuinely helpful. The exchange will feel so seamless that most people will never pause to ask what they gave up. That is the design. Not coercion. Comfort.

The most valuable asset of the next decade will not be compute or data or capital. It will be unmediated perception. The ability to look at the world and know that nothing between your eyes and reality has an agenda.

Every generation faces a question about what it is willing to trade for convenience. This generation’s question is whether to let a machine see the world for you or insist on seeing it yourself. The ones who refuse the filter will be the only ones still seeing anything real.
English
24
10
27
3.3K
Atlas
Atlas@Geonauta2000·
@Rainmaker1973 It shows yet again how much FAITH you need to be an atheist. Sure, atheists, this utterly complex beauty and highly advanced machine just happened by accident... God is good!
English
57
11
465
15.4K
Massimo
Massimo@Rainmaker1973·
Scientists have created one of the most detailed 3D reconstructions of a human cell (eukaryotic cell) ever produced. This groundbreaking model, often termed a "Cellular Landscape Cross-Section Through a Eukaryotic Cell," combines data from X-ray tomography, nuclear magnetic resonance (NMR), and cryo-electron microscopy to map molecular structures in extreme detail.
English
700
4.1K
19K
1.6M
trish
trish@TrisH0x2A·
Building a Simple HTTP Server in C
Learn how to create a server that serves HTML files, and understand socket programming and HTTP basics. Perfect for beginners!!
trish tweet media
English
8
28
266
7.8K
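A minimal sketch of the kind of server trish's post advertises, assuming POSIX sockets; the port, response body, and one-connection-at-a-time loop are illustrative choices, not details from the post.

/* Minimal HTTP server sketch: bind a TCP socket on port 8080,
   accept connections, and return a fixed HTML response. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (server_fd < 0) { perror("socket"); return 1; }

    int opt = 1;   /* allow quick restart on the same port */
    setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);   /* illustrative port */

    if (bind(server_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }
    if (listen(server_fd, 16) < 0) { perror("listen"); return 1; }

    const char *response =
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/html\r\n"
        "Content-Length: 21\r\n"
        "Connection: close\r\n"
        "\r\n"
        "<h1>hello, world</h1>";

    for (;;) {
        int client_fd = accept(server_fd, NULL, NULL);
        if (client_fd < 0) { perror("accept"); continue; }

        char buf[4096];
        ssize_t n = read(client_fd, buf, sizeof(buf) - 1);  /* read and discard the request */
        (void)n;
        write(client_fd, response, strlen(response));
        close(client_fd);
    }
}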
Mike Peterson
Mike Peterson@mpvprb·
@r0ck3t23 And instead of cooperation, we fall back on the old cold war ideas and increase threats, sanctions, restrictions and preparation for war. Welcome to the moronosphere.
English
0
0
0
69
Dustin
Dustin@r0ck3t23·
Jensen Huang just told you who is winning the most important race on Earth.

For fifty years, America held an unchallenged monopoly on the future. We built the transistor. We launched the internet. We wrote the source code for the modern world. Then the man who builds the physical backbone of every AI system on the planet read the score out loud.

Huang: “50% of the world’s AI researchers are Chinese.” Half the minds building what comes next are not ours.

Huang: “70% of last year’s AI patents are published by China.” Seven out of every ten blueprints for the next era are being written in Mandarin.

Huang: “Nine out of the ten top science and technology schools in the world are now in China.” The talent pipeline did not slow down. It reversed direction.

Huang: “We used to lead most of them; now they lead most of them. This has completely flipped in the last half to a decade.” Fifty years of American intellectual supremacy. Inverted in less than ten.

This is not a rivalry between OpenAI and DeepSeek. This is not a stock ticker or a quarterly earnings call. This is the largest transfer of civilizational power in the modern era. And it is happening while the West drafts safety frameworks and fills out compliance paperwork.

Huang: “They have a large population of highly qualified students. They work incredibly hard. This is a country with an enormous might.”

China does not treat AI like a product category. They treat it as the single variable that decides who writes the rules for the next century. The West keeps asking what AI should be allowed to do. China keeps asking how fast they can build it. That gap is not philosophical. It is existential.

This is not a left fight. This is not a right fight. This is a survival fight. And right now, America is not fighting it like one.

The nation that controls the talent controls the research. The nation that controls the research controls the models. The nation that controls the models does not ask permission. It sets the terms.

History never remembers the civilization with the better safety committee. It remembers the one that refused to stop building.
English
109
342
732
47.8K
Mike Peterson
Mike Peterson@mpvprb·
@r0ck3t23 Partly agreed, but anybody who thinks they know what the future will be is wrong.
English
0
0
0
25
Dustin
Dustin@r0ck3t23·
Sam Altman just said the one thing no builder is supposed to say out loud. He is not warning you about whether AI works. He is warning you about what happens to you when it does.

Altman: “Let’s say you build it, let’s say it makes all this money and does all the work… like, what do I do? What’s my kid gonna do?”

A crisis of conscience from the man who spent years sprinting to build the very thing he now admits could hollow out human existence. For decades, the pitch was clean. Build superintelligence. Cure disease. Generate wealth. Automate labor. Humanity celebrates.

Altman: “That’s clearly not quite resonating.”

No. It is not. Because the architects of this future misread something fundamental about human biology. They assumed the root of all suffering was friction. That if you eliminated the grind, the struggle, the resistance, you would build paradise. They were not building paradise. They were engineering the most sophisticated cage ever constructed.

Altman: “I saw an incredible post the other day that really stuck with me, which was like a ‘right to adversity.’”

A right to adversity. That phrase should sit heavy in every boardroom racing to ship the next model. Human beings were not wired for comfort. We were shaped by opposition. Every civilization, every breakthrough, every identity worth remembering was forged against something that refused to yield. Remove the resistance and you do not liberate the species. You dissolve it.

The real threat of artificial intelligence was never the machine turning hostile. It is the machine turning generous. Solving every problem so completely that the act of solving problems disappears from human life. Not a dystopia of destruction. A dystopia of irrelevance.

But even that fear misses the deeper fracture. The machine does not kill purpose. It kills the disguise. Most people spend their entire lives calling survival a purpose. Calling a paycheck a mission. Calling routine a reason to exist. When the machine strips that away, it does not leave you empty. It leaves you exposed. Standing in front of the one question no algorithm can answer for you.

The 21st century will not be defined by what artificial intelligence can do. It will be defined by who still has a reason to exist when nothing requires them to. The builders are finally asking the question they should have asked before the first line of code. Most people have not asked it yet either. The machine is going to ask it for them.
English
59
28
78
18.2K
Mike Peterson
Mike Peterson@mpvprb·
@niccruzpatane I want a Tesla cargo vehicle, roughly the size of a Honda Element with fully removable rear seats
English
0
0
0
28
Nic Cruz Patane
Nic Cruz Patane@niccruzpatane·
In the family, we’ve had a Model X Long Range Plus since 2021. It has about 100K miles on the odometer. I’ve experienced both the Model X and Cybertruck for thousands of miles. I honestly think the Model X doesn’t get the recognition it deserves, but when you start comparing it to the experience Cybertruck provides, it’s no comparison imo, Cyber takes the cake. There will always be S/X lovers, rightfully so, but the next-gen tech in Cybertruck made it a better choice for most buyers. I would absolutely love for Tesla to combine these two and make a TRUE three-row SUV. Tesla is so good at thinking out of the box and providing a new experience, there’s no doubt they’d make a badass full-size SUV.
Nic Cruz Patane tweet media
Nic Cruz Patane tweet media
Sawyer Merritt@SawyerMerritt

Jason Cammisa on part of the reason why @Tesla is discontinuing the Model S/X (in his interpretation): "The cost to reengineer the Model S to continue to comply with all safety and crash regulations would be greater than to start over, and I think that's a dying segment, the luxury car segment. You can look at the volumes of the Model 3/Y, and you see you're better off spending the money on developing those." (via The Carmudgeon Show). Full podcast linked below:

English
38
21
327
29.1K
Timothy B. Lee
Timothy B. Lee@binarybits·
This is ridiculous. Waymo remains a tiny share of total trips in both of these cities so of course Waymo's introduction hasn't measurably reduced traffic deaths.
Timothy B. Lee tweet media
English
6
8
184
17K
Mike Peterson
Mike Peterson@mpvprb·
@r0ck3t23 Mostly agreed, except for "The committee is the reason they did." Complex problems have multiple causes, most of which are not obvious. I do agree that school has been far too focused on showing up on time and following rules, and that education needs major reform.
English
1
1
1
76
Dustin
Dustin@r0ck3t23·
Elon Musk just read the scoreboard on American education. The numbers are not disappointing. They are damning.

Musk: “Our educational results have gone downhill ever since it was created.”

The Department of Education was established in 1979. Before it existed, America was landing human beings on the moon. After it was created, literacy cratered, math scores collapsed, and the country dropped on every international ranking that matters. Half a century of decline under a single institution’s watch.

Musk: “If you create a department and the result of creating that department is a massive decline in educational results, you’re better off not having it.”

This is not a policy disagreement. This is a fifty-year performance review with one outcome. Termination. The institution designed to educate your children spent that entire stretch making them measurably worse at everything.

The system was never built to produce genius. It was built to produce compliance. Standardized tests. Standardized curricula. Standardized humans. A factory that punishes the child who questions the answer and rewards the one who memorizes it. They convinced an entire country that without a federal committee, children would fall behind. The committee is the reason they did.

Now something is arriving that makes the entire debate obsolete. Artificial intelligence is not a tool for the classroom. It is the classroom. An infinite tutor that adapts to a child’s exact speed, exact curiosity, exact potential. It does not burn out by third period. It does not teach to a test written by a bureaucrat who hasn’t touched a classroom in twenty years.

The establishment will tell you AI in education is dangerous. That it needs oversight. Regulation. A committee. They are not protecting your children. They are protecting their authority over what your children are allowed to learn.

AI does not make the Department of Education more efficient. It makes the Department of Education extinct. The curriculum is leaving the hands of the state and landing directly in the hands of the individual. Every child on earth is about to carry a teacher with infinite patience and zero agenda in their pocket. Available to anyone with a signal.

The era of the standardized human is ending. We are about to find out what a mind becomes when nothing stands between it and everything it was meant to know.
English
58
317
773
14.2K
Mike Peterson
Mike Peterson@mpvprb·
@Grady_Booch I remember watching the PC folks rediscover stuff the mainframers figured out years before
English
0
0
1
58
Grady Booch
Grady Booch@Grady_Booch·
It is a source of continuous delight to watch the AI community rediscover the fundamentals and the dynamics of software engineering as they take those things and embellish them with AI adjectives, making them sound all fresh and new and sparkly while in truth, those fundamentals remain, well, fundamental.

Remove AI from the discourse below, and what Andrew promotes are things one heard all the time as we saw - starting decades ago - the transition from assembly language to FORTRAN and COBOL, from structured to object-oriented, from waterfall to agile. The past, as is said, does not repeat itself but rather rhymes.

Don’t get me wrong: I celebrate what Andrew et al are doing: developing software-intense systems that are meaningful and that endure requires intention and discipline, and I embrace that.

Two dangling threads before I close.

First, I don’t grok the semantics of “traditional teams”. The cosmos of computing is so wide and deep and diverse and crosses so many domains, I conclude that “traditional teams” is what one says when their experience is in a relatively narrow space, and they are witnessing a shift from what they grew up with in the Valley in particular, where web-centric systems of global elastic scale remain the primary focus.

Second, I am dismayed at the focus on speed. If you are driving headlong Thelma and Louise style toward an IPO then certainly speed will be a critical factor. But for most of the domain of computing, for systems that are meaningful and that endure, other factors are far more important: correctness, repeatability, safety, maintainability. These dominate, and as such, don’t be distracted by the noise and smoke and heat and light of an AI-first style that may get you out of the starting gate quickly, but will fail you in the ultra marathon of most development.
Andrew Ng@AndrewYNg

AI-native software engineering teams operate very differently than traditional teams. The obvious difference is that AI-native teams use coding agents to build products much faster, but this leads to many other changes in how we operate. For example, some great engineers now play broader roles than just writing code. They are partly product managers, designers, sometimes marketers. Further, small teams who work in the same office, where they can communicate face-to-face, can move incredibly quickly.

Because we can now build fast, a greater fraction of time must be spent deciding what to build. To deal with this project-management bottleneck, some teams are pushing engineer:product manager (PM) ratios downward from, say, 8:1 to as low as 1:1. But we can do even better: If we have one PM who decides what to build and one engineer who builds it, the communication between them becomes a bottleneck. This is why the fastest-moving teams I see tend to have engineers who know how to do some product work (and, optionally, some PMs who know how to do some engineering work). When an engineer understands users and can make decisions on what to build and build it directly, they can execute incredibly quickly.

I’ve seen engineers successfully expand their roles to include making product decisions, and PMs expand their roles to building software. The tech industry has more engineers than PMs, but both are promising paths. If you are an engineer, you’ll find it useful to learn some product management skills, and if you’re a PM, please learn to build!

Looking beyond the product-management bottleneck, I also see bottlenecks in design, marketing, legal compliance, and much more. When we speed up coding 10x or 100x, everything else becomes slow in comparison. For example, some of my teams have built great features so quickly that the marketing organization was left scrambling to figure out how to communicate them to users — a marketing bottleneck. Or when a team can build software in a day that the legal department needs a week to review, that’s a legal compliance bottleneck. In this way, agentic coding isn’t just changing the workflow of software engineering, it’s also changing all the teams around it.

When smaller, AI-enabled teams can get more done, generalists excel. Traditional companies need to pull together people from many specialties — engineering, product management, design, marketing, legal, etc. — to execute projects and create value. This has resulted in large teams of specialists who work together. But if a team of 2 persons is to get work done that requires 5 different specialties, then some of those individuals must play roles outside a single specialty. In some small teams, individuals do have deep specializations. For example, one might be a great engineer and another a great PM. But they also understand the other key functions needed to move a project forward, and can jump into thinking through other kinds of problems as needed. Of course, proficiency with AI tools is a big help, since it helps us to think through problems that involve different roles.

Even in a two-person team, to move fast, communication bottlenecks also must be minimized. This is why I value teams that work in the same location. Remote teams can perform well too, but the highest speed is achieved by having everyone in the room, able to communicate instantaneously to solve problems.

This post focuses on AI-native teams with around 2-10 persons, but not everything can be done by a small team. I'll address the coordination of larger teams in the future. I realize these shifts to job roles are tough to navigate for many people. At the same time, I am encouraged that individuals and small teams who are willing to learn the relevant skills are now able to get far more done than was possible before. This is the golden age of learning and building! [Original text: deeplearning.ai/the-batch/issu… ]

English
38
104
673
50.9K
Mike Peterson
Mike Peterson@mpvprb·
@briantylercohen "Trump is the only thing standing between America and socialism". And some think this is a good thing. We need to get past the old cold war thinking.
English
0
0
0
1
Brian Tyler Cohen
Brian Tyler Cohen@briantylercohen·
Republicans have been making this argument my entire life. We had 8 years of Clinton, 8 years of Obama, 4 years of Biden— never became a socialism. But you know what we got? More jobs, lower unemployment, higher GDP in Democratic administrations than every GOP administration in the last 3 decades. Without fail.
Laura Ingraham@IngrahamAngle

At this moment, Donald Trump is the only thing standing between America and socialism. That’s why the hard Left wants him gone by any means necessary, and why they’ve bred and fed countless “recruits” to answer their twisted call.

English
1.1K
3.3K
13.3K
470.9K
Mike Peterson
Mike Peterson@mpvprb·
@souljagoyteller Warnings are useless, and adding more warnings makes them more useless. At best, they give a bit of legal defense against frivolous lawsuits.
English
0
0
0
23
Sami Gold
Sami Gold@souljagoyteller·
Of all the controversial innovations of Woke 1.0, I don’t get why trigger warnings were so despised. It makes perfect sense why a professor or a YouTube content creator might want to warn in advance of the discussion of topics that might stir up memories of previous traumatic experiences. It makes a lot more sense than standpoint epistemology or microaggressions
English
119
69
4.3K
1.1M