Cesar Cerrudo

5.2K posts

Cesar Cerrudo
@cesarcer

Professional Hacker & Cyber Security Futurist. Security/Hacking

Joined December 2009
1.6K Following · 15.1K Followers

Pinned Tweet
Cesar Cerrudo@cesarcer·
The dangers of humans hacking your home robots
Cesar Cerrudo@cesarcer·
Agree
ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ 🛡️@DanielMiessler

This is what people are not realizing about how many knowledge-work jobs are about to go away. Penetration testing and bug bounty are two of the highest-complexity, multi-step, high-IQ, high-creativity jobs in all of knowledge work. This is a job that many people believed would not and could not be automated. And now it went from "not possible" to "everyone is doing it" in the period of about four months.

I'm going to say this very clearly: if this job can be automated, then any job in knowledge work can be automated. That is hundreds of millions of knowledge-work jobs totaling somewhere around $70 trillion in annual compensation. So what happens to the economy when that $70 trillion is no longer going to those knowledge workers to pay their mortgages, or to spend in the economy on other businesses?

I define AGI as the presence of a product that can replace a knowledge worker, and I believed that 2027 was the most likely year for this to happen. It now looks like it will either be 2027 or maybe even later this year in 2026. The UBI conversations are about to kick in full blast, probably starting in late 2027 and 2028. And by 2029 or 2030 there will need to be actual checks going out if we are to avoid something really nasty, assuming that crash doesn't happen much sooner.

We are living in the "before" scene of a catastrophe movie. And this is coming from an optimist who thinks the other side will eventually get much better than what we had before the crash. But we all need to wake up to the fact that this is coming.

Cesar Cerrudo@cesarcer·
AI is a mediocrity killer. It makes mediocrity extremely visible and assures its extinction.
Cesar Cerrudo@cesarcer·
AI doesn't agree. Here's a critical analysis of the key flaws in Kobeissi's arguments:

1. Survivorship Bias as a Framework
The repeated claim that "humanity has ALWAYS prevailed" and "the free market ALWAYS works itself out" cherry-picks outcomes. It ignores the civilizations, industries, and entire social classes that didn't survive transitions. The Roman Empire, the Soviet collapse, and the hollowing out of the American Rust Belt are all examples where "working itself out" meant decades of suffering for millions before any recovery.

2. The PC Price Analogy Is Misleading
Comparing AI to the 99.9% drop in PC prices is a false equivalence. PCs were a new product category that created demand from scratch. AI is different: it directly substitutes existing human labor across multiple sectors simultaneously. When PCs got cheaper, nobody lost their job "to a PC." When AI gets cheaper, it explicitly replaces the worker. The demand dynamics are fundamentally different.

3. "Demand Always Expands" Is Not a Law of Physics
The article assumes that lower costs automatically create new demand and new industries. But this is not guaranteed, especially at the speed AI is moving. Previous revolutions (electricity, the internet) unfolded over decades, giving labor markets time to adapt. AI is compressing that timeline to months or quarters. The speed of disruption matters as much as the direction.

4. The "Invisible Tax Cut" Ignores Distribution
Calling cheaper services an "invisible tax cut" glosses over a critical question: who benefits, and when? If millions of white-collar workers lose income before prices meaningfully decline, the transition period can be devastating. A tax cut is useless if you have no income to spend. The timing gap between job displacement and price deflation is completely unaddressed.

5. "Abundance GDP" Is a Theoretical Construct
The concept of prices falling faster than incomes sounds elegant, but it assumes AI-driven deflation will be orderly and uniform. In reality, essential costs like housing, healthcare delivery (not administration), and food are driven by physical scarcity and regulatory bottlenecks that AI cannot easily solve. Rent doesn't drop because legal documents are cheaper to produce.

6. Entrepreneurship as a Safety Net Is Overstated
The argument that displaced workers will simply become entrepreneurs is a Silicon Valley fantasy. Most people lack the risk tolerance, capital, network, and temperament for entrepreneurship. Saying "one person can now automate everything" ignores that running a business requires far more than automating tasks: it requires finding customers, managing uncertainty, and bearing financial risk. Historically, mass displacement does not produce mass entrepreneurship.

7. The Geopolitical Argument Is Naive
Claiming AI abundance will "end wars" ignores that conflict is driven by power, ideology, and territorial ambition, not just scarcity. World War I broke out among the wealthiest, most technologically advanced nations on Earth. Furthermore, AI itself is becoming a geopolitical weapon (surveillance, autonomous military systems, information warfare). Abundance in one nation can increase the power asymmetry that triggers conflict with others.

8. The Speed Problem Is Completely Ignored
The article's biggest blind spot is tempo. Every historical analogy cited (PCs, the internet, electricity) played out over 20-50 years. AI is disrupting entire sectors in weeks, as the article itself demonstrates with the CrowdStrike example. The economy may eventually adapt, but "eventually" can mean a lost generation of workers. The article offers no framework for how fast adaptation needs to happen.

9. Conflict of Interest Is Unaddressed
The article concludes with a pitch for Kobeissi's premium investment service, claiming 5x S&P 500 returns. This creates a clear incentive to frame disruption as opportunity. The optimistic framing ("best trading conditions ever") serves their commercial interest. A truly balanced analysis wouldn't need to sell you a subscription at the end.

10. False Binary: Doom vs. Abundance
The entire piece is structured as "the bears are wrong, here's the bull case." But the most likely outcome is neither collapse nor abundance: it's a messy, uneven transition where some sectors boom, others are devastated, and the benefits are unevenly distributed along class, geography, and education lines. By framing it as binary, the article avoids the harder, more realistic discussion about policy, safety nets, and who actually bears the cost of transition.

In short, Kobeissi's optimism rests on historical analogies that may not apply at AI's speed, assumes smooth transitions that have never occurred in practice, and systematically underweights the distributional and timing problems that make "the economy adapts" far less comforting than it sounds.
thaddeus e. grugq@thegrugq·
@cesarcer @milesdeutscher Depends on what you study. I only do university because I like the research, it expands my horizons, helps me think better, and I enjoy it. If you view college as a work skills development source I think it’ll be problematic, but for learning how to think? Still pretty good.
Miles Deutscher@milesdeutscher·
With how fast AI is moving, I seriously don't know how anyone could commit to a 4-year university degree. My businesses have been turned upside down in the last 2 weeks alone. You're going to bet that in 4 years the world stays the same?
Cesar Cerrudo@cesarcer·
For Advanced Protection
• Disconnect your smart TV from the internet entirely and use a separate streaming device (Apple TV is most recommended, or a privacy-focused alternative) for smart features.
• Create a separate WiFi network for IoT devices to prevent your TV from scanning your primary network.
• Use a VPN or a DNS-level ad blocker (such as Pi-hole) on your network to block telemetry traffic.
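The DNS-level blocking step can be sketched as a dnsmasq configuration fragment (dnsmasq is the resolver Pi-hole builds on). The domains below are hypothetical placeholders, not a vetted telemetry blocklist; substitute domains you actually observe in your own DNS query logs:

```
# /etc/dnsmasq.d/10-tv-telemetry.conf (hypothetical example)
# address=/domain/IP answers the domain and all its subdomains with the
# given IP. Pointing telemetry hosts at the unroutable 0.0.0.0 makes the
# TV's upload attempts fail without touching other traffic.
address=/telemetry.tv-vendor.example/0.0.0.0
address=/ads.tv-vendor.example/0.0.0.0
address=/metrics.tv-vendor.example/0.0.0.0
```

Pi-hole exposes the same idea through its web UI and blocklists, so hand-editing dnsmasq files is only needed for custom setups.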
Cesar Cerrudo@cesarcer·
To protect yourself, take these immediate actions:
• Disable ACR on your current TV.
• Reset or delete your advertising ID to break existing tracking profiles.
• Disable voice assistant features if you don’t actively use them.
• Check privacy settings after every firmware update, as they may revert to defaults.
Cesar Cerrudo@cesarcer·
Your Smart TVs are literally watching you. Texas AG just sued Samsung, LG, Sony, Hisense & TCL for illegal spying. Temporary restraining orders already issued against Hisense + Samsung. I ranked the Top brands by real privacy risk + protection steps. #privacy Thread ...
Cesar Cerrudo@cesarcer·
Mustafa Suleyman, the chief executive of Microsoft AI, predicts the full automation of most white-collar tasks within the next 12 to 18 months 🤯
Cesar Cerrudo@cesarcer·
Counterarguments Drawn from Eliot's Artificially Intelligent Book

David Eliot's book provides a rigorous, historically grounded framework that directly challenges several of the article's core premises. Here are the major counterarguments:

1. "New jobs will appear" is a prefabricated comfort, not an argument
Eliot calls this out explicitly in Chapter 23. He labels the claim that "innovation inevitably creates new jobs" a "prefabricated argument" that "does not hold up when put under the microscope." He identifies three specific failures in this reasoning that the article never addresses:

Speed: AI developed quietly for years and has arrived with enormous momentum. Unlike the gradual introduction of the power loom or the ATM, AI can hit countless segments of the economy "fast and hard." Eliot warns that even if new jobs eventually materialize, we should expect "a prolonged period between the layoffs and the emergence of new jobs" that will be "both economically and mentally scarring for those who get caught in the middle." The article's author concedes speed is different but then moves on without grappling with this implication.

Scale: This is where Eliot's argument is sharpest. He points out that previous automating technologies were narrow: an automatic cow-milking system doesn't disrupt other industries. But AI is a general-purpose technology, like the steam engine or the internet. It cuts across industries simultaneously. So the classic safety valve, "jobs lost in one sector are created in another," breaks down, because AI is creating efficiencies in the other sectors at the same time. The article's historical examples (ATMs, barcode scanners, spreadsheets) were all single-industry technologies. Eliot argues this makes them fundamentally misleading as analogies.

Skills mismatch: Even granting that new jobs will emerge, Eliot asks the question the article ignores entirely: will displaced workers be able to do those new jobs? AI disproportionately threatens "skilled workers," people who invested years and thousands of dollars acquiring specialized knowledge. A 45-year-old with two decades of expertise in a now-automated field cannot simply pivot to an AI engineering role. The new jobs require "fundamentally different skills to those which multiple generations have trained for." The article's breezy advice ("you need curiosity, a willingness to experiment") brushes past the reality that retraining is expensive, time-consuming, and psychologically devastating for people mid-career with families to support.

2. "Employed" is not the same as "living well": the dignity problem
The article counts jobs and declares victory. Eliot insists we look beyond the numbers. He draws a devastating parallel to the Industrial Revolution: "Children worked in factories and starved in the streets. But they were employed." Workers had jobs, but those jobs were grueling, degrading, 10-16 hour days for poverty wages. The quantity of employment recovered; the quality of life collapsed for a generation or more. Eliot warns that displaced skilled workers forced into low-skilled jobs "will not be able to provide a similar standard of living or social status," leading to "adverse mental effects and resentment of the system that betrayed these workers." The article's cheerful ATM-and-cashier statistics tell you nothing about whether the new jobs paid as well, offered the same security, or provided meaning. Eliot's framework insists we ask those questions.

3. The Luddites weren't wrong; they were misunderstood
The article uses the Luddites as a cautionary tale of foolish resistance: "do you want to be remembered the way they are?" Eliot offers a radically different reading. He argues the Luddites "were not anti-technology": many embraced the new machines and were eager to work alongside them. What they feared was that their employers would use the technology not to improve products but to "cheapen them" while gaining "more control over their workers." The Luddites' real fight was over power: who gets to decide how technology is implemented and who captures the gains. Eliot writes that factories "could have functioned, and continued to make profits, without completely replacing their workers," but owners chose maximum extraction over shared benefit. The article mocks the Luddites without engaging with the substance of their complaint, which, as Eliot argues, is remarkably relevant today: the question is not whether AI will create value, but who decides how that value is distributed.

4. The "garden" metaphor hides the question of who owns the garden
The article says "the economy is not a pie. It's a garden. And technology is rain." Eliot would agree that technology grows the garden, but he would immediately ask: who owns the garden? Chapters 19 and 20 of Eliot's book lay out how AI development requires massive surveillance infrastructure and oceans of data. The companies that control this data (Google, Apple, Meta, Amazon, Microsoft) gain "immense economic and social power." They "get to decide what types of AI are made, and for whom. They decide what types of problems we try to solve, and how." Eliot's deepest fear is that "many countries are ceding too much power over how our futures will be shaped to companies whose motives are not to make a better society for all — but instead to accumulate more money and power." The article's framework is entirely silent on this. It assumes that a bigger pie automatically means broadly shared prosperity. Eliot argues the opposite: without democratic control over how AI is built and deployed, the benefits will concentrate among those who already have power, just as they did during the Industrial Revolution.

5. The "it's just a tool" framing is dangerously naive
The article's central dismissal of the "this time is different" objection rests on: "it is still a tool." Eliot dedicates much of his book to demolishing this exact framing. He argues that "no technology is apolitical": every technology embeds the choices, values, and ideologies of its creators. AI is not a neutral tool like a hammer. It is a system trained on biased data, deployed within existing power structures, and shaped by corporate incentives. Chapter 22 demonstrates this concretely through predictive policing (which codifies and amplifies racial bias), Amazon's hiring AI (which discriminated against women because it learned from biased historical data), and the "black box problem" (where deep learning systems make consequential decisions that cannot be reverse-engineered or audited). Calling AI "just a tool" obscures all of this. A hammer doesn't perpetuate systemic racism; a deep learning system trained on policing data can and does.

6. The article ignores surveillance as a precondition of AI
The article treats AI as if it springs into existence from clever engineering. Eliot reveals the infrastructure underneath: AI runs on data, data is produced by surveillance, and surveillance requires "digital enclosures," controlled environments where your every action becomes fuel for machine learning. Google Search, the Apple ecosystem, Facebook, and the concept of the Metaverse are all digital enclosures designed to extract data from users. This means the "unseen" side of AI isn't just new jobs and opportunities; it's also an unprecedented expansion of the surveillance apparatus, one that most people are "blissfully unaware" of. The article's Bastiat framework conveniently applies the "unseen" concept only to positive outcomes. Eliot shows there are deeply negative "unseen" effects too: erosion of privacy, concentration of informational power, and the creation of systems that can monitor, classify, and control populations in ways that would make the East German Stasi envious.

*This was created with the help of AI.
Balaji@balajis·
What’s still important in the age of AI? Vision and verification. Prompting and polishing. Community and geography. Scarcity and cryptography. Physicality and resiliency.

Vision is where you are going. AI can move fast in a direction, but it needs direction. Vision means focusing on that direction.

Verification is making sure the AI is doing what you want it to do. You can use AIs to critique each other, but you are the final critic.

Prompting is articulating what you want in clear written (or spoken) English. Those with great vocabularies will do far better than those without.

Polishing is realizing that AI often does it middle-to-middle, but not end-to-end. AI is a construction crane that can build much of the building, but often at the end you need human tweezers.

Community is online and offline connectivity. It’s what stays roughly constant even as software becomes variable.

Geography is the longitude and latitude that governs your laws. To first order the Internet is roughly uniform across the surface of the earth, but to second order it really is not.

Scarcity is everything from physical scarcity (like robots and drones and houses and cars) to distribution scarcity. The hard-to-make atoms as distinct from the easily made bits.

Cryptography is everything AI can’t do. LLMs can solve partial differential equations, but not discrete logarithms. The hard-to-fake bits as distinct from the easily faked bits.

Physicality is where AI will truly shine. Robot task completions can often be more easily verified. The real world is the verifier of whether a box is on a table. It’s much harder to verify whether an essay is done.

Resiliency is about cutting your burn rate, strengthening your community, and picking the right location (and allocation) to weather the dislocations ahead.
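The cryptography point rests on a concrete asymmetry: modular exponentiation is cheap in the forward direction, while reversing it (the discrete logarithm) has no known efficient general algorithm. A minimal Python sketch of that asymmetry, using a deliberately tiny exponent so the toy brute-force inverse actually terminates:

```python
# Forward direction: y = g^x mod p is fast even for huge numbers,
# via Python's built-in three-argument pow().
p = 2**127 - 1           # a Mersenne prime (M127)
g = 3
secret_x = 1000
y = pow(g, secret_x, p)  # effectively instant, regardless of operand size

# Reverse direction: recovering x from (g, y, p) is the discrete log
# problem. Lacking a general shortcut, a naive search tries exponents
# one by one -- infeasible when the secret exponent is of order p.
def brute_force_dlog(g, y, p, limit):
    acc = 1                      # acc holds g^k mod p
    for k in range(limit):
        if acc == y:
            return k
        acc = acc * g % p
    return None                  # not found within the search limit

recovered = brute_force_dlog(g, y, p, limit=10_000)
```

The search succeeds here only because the secret exponent is small; with a random 127-bit exponent, the same loop would need on the order of 2^127 iterations, which is the gap Balaji's "hard-to-fake bits" phrase points at.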
Sean@SeanODowd15·
If you read the other AI article, you need to read this one. Ex-OpenAI employees wrote it a year ago, and they predicted we’ll know how AI ends by 2027. Thus far, they’ve been remarkably accurate in their predictions. ai-2027.com