scocam
@srcdev
167 posts
Joined November 2022
568 Following · 34 Followers
scocam retweeted
The Knowledge Archivist @KnowledgeArchiv
2,400 years ago Aristotle tried to warn us about multiculturalism
The Knowledge Archivist tweet media
437 replies · 6.2K reposts · 22.4K likes · 735.4K views
scocam retweeted
Nicholas J. Fuentes @NickJFuentes
NEVER VANCE NEVER RUBIO THE IRAN WAR TICKET JUST LOST 2028
7.8K replies · 11.9K reposts · 91.6K likes · 4.3M views
scocam retweeted
Andrew Heaton 🎩 @MightyHeaton
I for one am on the GOOD TEAM and abhor the BAD TEAM, from which all problems in our country arise. Retweet if you're with me!
21 replies · 72 reposts · 216 likes · 6.6K views
scocam retweeted
Ricky Gervais @rickygervais
This wasn't allowed in public in case it offended anyone. So please don't retweet it. Thanks 🙏 dutchbarn.com
Ricky Gervais tweet media
3.5K replies · 79.5K reposts · 214.2K likes · 3.4M views
scocam retweeted
Elon Musk @elonmusk
Grok 4.20 is BASED. The only AI that doesn’t equivocate when asked if America is on stolen land. The others are weak sauce.
Elon Musk tweet media
22.4K replies · 22K reposts · 164.7K likes · 36.9M views
scocam @srcdev
@FatEmperor I keep forgetting that mindfulness is important. That must be an animalistic impulse. This sounds a lot like meditation and I like it. Thank you.
0 replies · 0 reposts · 1 like · 239 views
scocam retweeted
Sama Hoole @SamaHoole
Mitochondria: "We need saturated fats for stable membranes."
You: *eats seed oils*
Mitochondria: "Right, so we'll build membranes from these unstable polyunsaturated fats."
You: "Why am I tired?"
Mitochondria: "Because we're using dynamite for insulation."
You: *eats more seed oils*
Mitochondria: "Excellent. We'll integrate more linoleic acid into our membranes."
You: "Why do I have inflammation?"
Mitochondria: "Because the dynamite keeps exploding."
You: "My doctor says I need medication."
Mitochondria: "Your doctor doesn't know we exist."
You: "Should I eat some vegetables?"
Mitochondria: "Only if you hate us."
You: *eats spinach smoothie with almond milk*
Mitochondria: "Cool. Glass shards. Our favorite."
14 replies · 115 reposts · 650 likes · 19.3K views
scocam retweeted
Sama Hoole @SamaHoole
Rapeseed has been grown for 4,000 years. The Romans burned it in lamps. Medieval Europeans used it for lamp oil. The industrial revolution used it to lubricate steam engines. It worked brilliantly: high heat tolerance, doesn't break down easily. Perfect for machinery. Toxic for humans.

The problem: Rapeseed contains 50% erucic acid. This compound causes heart lesions in animals. Feed it to rats, their hearts develop fatty deposits and scarring. It also contains glucosinolates that suppress thyroid function. Your body actively rejects it as poison.

1970s: Canadian plant breeders develop a low-erucic variety. They need a new name because "rapeseed oil" has terrible marketing implications and everyone knows it's industrial lubricant. They call it "Canola" - Canadian Oil Low Acid. Marketing genius. Same plant, different branding.

The new variety has less erucic acid. Still requires hexane extraction, high-heat processing, chemical deodorisation, and bleaching. Still oxidises rapidly. Still rich in omega-6 fats that promote inflammation. But now it sounds Canadian and wholesome rather than industrial and toxic.

1985: FDA grants GRAS status. "Generally Recognised As Safe." Canola goes from engine lubricant to cooking oil in one regulatory decision. Food manufacturers love it: cheaper than olive oil, neutral flavour, long shelf life. Restaurants use it in everything.

Nobody questions why a crop bred for lubrication and literally named "rape" needed a complete rebrand to become food. If the product is good, why hide what it is? You don't see "beef" marketed as "friendly muscle tissue" because there's nothing to hide. Your fish and chips are fried in machine oil. The marketing worked.
Sama Hoole tweet media
149 replies · 2.2K reposts · 5.2K likes · 170.5K views
Metatron @pureMetatron
🤣The difference a single vowel makes, am I right?
Metatron tweet media
54 replies · 169 reposts · 1.8K likes · 24.3K views
Freedomain - with Stefan Molyneux, MA
Why do women have such a tough time admitting basic, obvious facts, such as:
- they wear makeup to attract men
- fat is not attractive
- single mothers are less valuable for dating
and so on? If you keep asking, eventually they will admit the truth, but what's the barrier?
357 replies · 275 reposts · 5.8K likes · 100.1K views
TaraBull @TaraBull
Guys, do you find tattoos attractive?
2.3K replies · 317 reposts · 8.7K likes · 565.7K views
LadyValor @lady_valor_07
Guess the year? 🤔
LadyValor tweet media
2.3K replies · 31 reposts · 488 likes · 70.1K views
Brian Atlas @BrianAtlas
What order do you pick?
Brian Atlas tweet media
275 replies · 3 reposts · 175 likes · 31.8K views
Mr. Reply Guy @GenericSnarky
If your girl shows up to a date like this, how do you handle it?
Mr. Reply Guy tweet media
1.1K replies · 123 reposts · 2.5K likes · 47.8K views
scocam retweeted
Owen Gregorian @OwenGregorian
Your Job Isn't Disappearing. It's Shrinking Around You in Real Time | Jan Tegze, Thinking Out Loud

AI isn't taking your job. It's making your expertise worthless while you watch. The three things everyone tries that fail, and the one strategy that actually works.

You open your laptop Monday morning with a question you can’t shake: Will I still have a job that matters in two years? Not whether you’ll be employed. Whether the work you do will still mean something.

Last week, you spent three hours writing a campaign brief. You saw a colleague generate something 80% as good in four minutes using an AI agent. Maybe 90% as good if you’re being honest. You still have your job. But you can feel it shrinking around you.

The problem isn’t that the robots are coming. It’s that you don’t know what you’re supposed to be good at anymore. That Excel expertise you built over five years? Automated. Your ability to research competitors and synthesize findings? There’s an agent for that. Your skill at writing clear project updates? Gone. You’re losing your professional identity faster than you can rebuild it. And nobody’s telling you what comes next.

The Three Things Everyone Tries That Don’t Actually Work

When you feel your value eroding, you do what seems rational. You adapt. You learn. You try to stay relevant. Here’s what that looks like for most people:

First, you learn to use the AI tools better. You take courses on prompt engineering. You master ChatGPT, Claude, whatever new platform launches next week. You become the “AI person” on your team. You think: if I can’t beat them, I’ll use them better than anyone else.

This fails because you’re still competing on execution speed. You’re just a faster horse. And execution is exactly what’s being commoditized. Six months from now, the tools will be easier to use. Your “expertise” in prompting becomes worthless the moment the interface improves. You’ve learned to use the shovel better, but the backhoe is coming anyway.
Second, you double down on your existing expertise. The accountant learns more advanced tax code. The designer masters more software. The analyst builds more complex models. You think: I’ll go so deep they can’t replace me.

This fails because depth in a disappearing domain is a trap. You’re building a fortress in a flood zone. Agents aren’t just matching human expertise at the median level anymore. They’re rapidly approaching expert-level performance in narrow domains. Your specialized knowledge becomes a liability because you’ve invested everything in something that’s actively being automated. You’re becoming the world’s best telegraph operator in 1995.

Third, you try to “stay human” through soft skills. You lean into creativity, empathy, relationship building. You go to workshops on emotional intelligence. You focus on being irreplaceably human. You think: they can’t automate what makes us human.

This fails because it’s too vague to be actionable. What does “be creative” actually mean when an AI can generate 100 ideas in 10 seconds? How do you monetize empathy when your job is to produce reports? The advice feels right but provides no compass. You end up doing the same tasks you always did, just with more anxiety and a vaguer sense of purpose.

The real problem with all three approaches: they’re reactions, not redesigns. You’re trying to adapt your old role to a new reality. What actually works is building an entirely new role that didn’t exist before. But nobody’s teaching you what that looks like.

The Economic Logic Working Against You

This isn’t happening to you because you’re failing to adapt. It’s happening because the economic incentive structure is perfectly designed to create this problem. Here’s the mechanism:

Companies profit immediately from adopting AI agents. Every task automated results in cost reduction. The CFO sees the spreadsheet: one AI subscription replaces 40% of a mid-level employee’s work. The math is simple. The decision is obvious.
Many people hate to hear that. But if they owned the company or sat in leadership, they’d do the exact same thing. Companies exist to drive profit, just as employees work to drive higher salaries. That’s how the system has worked for centuries.

But companies don’t profit from retraining you for a higher-order role that doesn’t exist yet. Why? Because that new role is undefined, unmeasured, and uncertain. You can’t put “figure out what humans should do now” on a quarterly earnings call. You can’t show ROI on “redesign work itself.” Short-term incentives win. Long-term strategy loses. Nobody invests in the 12-24 month process of discovering what your new role should be because there’s no immediate return on that investment.

We’re in a speed mismatch. Agent capabilities are compounding at 6-12 month cycles. Human adaptation through traditional systems operates on 2-5 year cycles. Universities can’t redesign curricula fast enough. They’re teaching skills that will be automated before students graduate. Companies can’t retrain fast enough. By the time they identify the new skills needed and build a program, the landscape has shifted again. You can’t pivot fast enough. Career transitions take time. Mortgages don’t wait.

Here’s the deeper issue: we’ve never had to do this before. Previous automation waves happened in manufacturing. You could see the factory floor. You could watch jobs disappear and new ones emerge. There was geographic and temporal separation. This is different. Knowledge work is being automated while you’re still at your desk. The old role and new role exist simultaneously in the same person, the same company, the same moment.

And nobody has an economic incentive to solve it. Companies maximize value through cost reduction, not workforce transformation. Educational institutions are too slow and too far removed from real-time market needs. Governments don’t understand the problem yet.
You’re too busy trying to keep your current job to redesign your future one. The system isn’t helping because it isn’t designed for continuous, rapid role evolution; it is designed for stability. We’re using industrial-era institutions to solve an exponential-era problem. That’s why you feel stuck.

Your Experience Just Became Worthless (The Timeline)

Let me tell you a story about my friend, let’s call her Sarah. She was a senior research analyst at a mid-sized consulting firm. Ten years of experience. Her job: client companies would ask questions like “What’s our competitor doing in the Asian market?” and she’d spend 2-3 weeks gathering data, reading reports, interviewing experts, synthesizing findings, creating presentations. She was good. Clients loved her work. She billed at $250 an hour.

The firm deployed an AI research agent in Q2 2023. Not to replace Sarah. To “augment” her. Management said all the right things about human-AI collaboration. The agent could do Sarah’s initial research in 90 minutes. It would scan thousands of sources, identify patterns, generate a first-draft report.

Month one: Sarah was relieved. She thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready.

Month three: A partner asked her, “Why does this take you a week now? The AI gives us 80% of what we need in an hour. What’s the other 20% worth?” Sarah couldn’t answer clearly. Because sometimes the agent’s output only needed light editing. Sometimes her “strategic insights” were things the agent had already identified, just worded differently.

Month six: The firm restructured. They didn’t fire Sarah. They changed her role to “Quality Reviewer.” She now oversaw the AI’s output for 6-8 projects simultaneously instead of owning 2-3 end to end. Her title stayed the same. Her billing rate dropped to $150 an hour. Her ten years of experience felt worthless.

Sarah tried everything.
She took an AI prompt engineering course. She tried to go deeper into specialized research methodologies. She emphasized her client relationships. None of it mattered because the firm had already made the economic calculation. One AI subscription: $50 a month. Sarah’s salary: $140K a year. The agent didn’t need to be perfect. It just needed to be 70% as good at 5% of the cost.

The part that illustrates the systemic problem: You often hear from AI vendors that, thanks to their AI tools, people can focus on higher-value work. But when pressed on what that meant specifically, they’d go vague. Strategic thinking. Client relationships. Creative problem solving. Nobody could define what higher-value work actually looked like in practice. Nobody could describe the new role. So they defaulted to the only thing they could measure: cost reduction.

Sarah left six months later. The firm hired two junior analysts at $65K each to do what she did. With the AI, they’re 85% as effective as Sarah was. Sarah’s still trying to figure out what she’s supposed to be good at. Last anyone heard, she’s thinking about leaving the industry entirely.

Stop Trying to Be Better at Your Current Job

The people who are winning aren’t trying to be better at their current job. They’re building new jobs that combine human judgment with agent capability. Not becoming prompt engineers. Not becoming AI experts. Becoming orchestrators who use agents to do what was previously impossible at their level.

Marcus was a marketing strategist at a retail company. When AI tools emerged, he didn’t try to write better marketing copy than the AI. He started running 50 campaign variations simultaneously. Something that would’ve required a team of 12 people before. He’d use agents to generate the variations, test them, analyze results, iterate. His role became: design the testing framework, interpret patterns the agents found, make strategic bets based on data no human could process manually.
Within six months, his campaigns were outperforming competitors by 40%. Not because he was better at any single task. Because he could operate at a scale that was previously impossible.

Here’s the pattern that works: Find the constraint in your domain that exists because of human limitations. What doesn’t get done because it takes too long? What questions don’t get asked because analysis is too expensive? What experiments don’t get run because you’d need a team of 20? Then use agents to remove that constraint. Not to do your current tasks faster. To do things that were previously impossible.

Then build expertise in the judgment layer. What experiments should we run? Which patterns matter? What do these results mean for strategy? When should we override the agent’s recommendation? This isn’t vague strategic thinking. It’s specific: you’re the decision maker orchestrating a capability that didn’t exist before.

You’re not competing with the agent. You’re creating a new capability that requires both you and the agent. You’re not defensible because you’re better at the task. You’re defensible because you’ve built something that only exists with you orchestrating it.

The hard truth: this requires letting go of your identity as “the person who does X.” Marcus doesn’t write copy anymore. That bothered him at first. He liked writing. But he likes being valuable more.

Here’s what you can do this month:

Week one: Identify one thing in your job that you’d do 10x more if it didn’t take so long. Customer research? Competitive analysis? Testing variations? Data modeling?

Week two: Use AI agents to do that thing at 10x volume, even if quality drops to 70%. See what becomes possible.

Week three: Find the patterns. What insights emerge at scale that you’d never see doing it manually? What new questions can you answer?

Week four: Pitch this as a new capability to your boss.
Not “I’m more efficient now.” But “We can now do this specific thing we couldn’t do before, which creates this specific business value.”

People who do this aren’t getting squeezed. They’re getting promoted or poached. Because they’ve made themselves the linchpin of a new capability, not the executor of an old task.

One critical caveat: this won’t work forever in its current form. Eventually, agents will get better at orchestration too. But it buys you three to five years. And in that time, you’ll see the next evolution coming. The meta-skill is this: learning to spot what becomes possible when a constraint disappears, then building your value around that new possibility.

Most Strategic Thinking Was Actually Just Thoroughness

Most people currently doing “strategic” knowledge work aren’t actually that strategic. When agents started handling the execution layer, everyone assumed humans would naturally move up to higher-order thinking. Strategy. Judgment. Vision. But a different reality is emerging: many senior people with years of experience can’t actually operate at that level. Their expertise was mostly pattern matching and process execution dressed up in strategic language.

The thing nobody says out loud: “We thought Lisa was a strategic thinker because her analyses were thorough. Turns out the thoroughness was the skill. When an agent can be thorough in three minutes, we’re discovering Lisa doesn’t actually have strategic insights to add.”

This isn’t that these people are bad at their jobs. They were excellent at their jobs. The job required diligence, attention to detail, process mastery. They delivered exactly what was asked. But the industry sold them on the idea that experience equals strategic capability. That putting in the hours would naturally develop judgment. For some people, it did. For many others, they got really good at execution and called it strategy.
Here is what one CEO of a mid-sized company in Canada told me: “We’re discovering that our senior people and our junior people are equally lost when we ask them what we should do, not just how to do it. The seniors are just more articulate about their uncertainty.”

The agent economy isn’t just automating tasks. It’s revealing who was coasting on the appearance of strategic thinking versus who actually possesses it. And there’s no gentle way to tell someone: you’ve spent 15 years building a career, and we’re just now realizing the thing you were good at wasn’t what we actually needed.

Nobody says this publicly because it suggests the problem isn’t just technological adaptation. It’s that our evaluation systems were broken all along. We promoted people for the wrong reasons. We confused “does the work well” with “thinks strategically about the work.” Admitting that means admitting we don’t actually know how to identify or develop real strategic capability. We’ve been guessing. Using credentials and years of experience as proxies.

The Only Durable Strategy Is Spotting What Just Became Possible

You’re not going to solve this by being better at your current job. That job is dissolving under you in real time. You’re not going to solve it by learning the tools better. The tools will get easier to use without you. You’re not going to solve it by going deeper into your specialty. That specialty is being automated.

What works: become the person who spots what just became possible and builds your value around that new capability. Use agents to remove constraints that previously limited what you could do. Become the orchestrator of scale that didn’t exist before.

This isn’t a permanent solution. In three to five years, you’ll need to do it again. The meta-skill is learning to continuously spot the next evolution and position yourself at the edge of what’s newly possible.
The uncomfortable truth: this will separate people who were genuinely strategic from people who were just thorough. There’s no way around that. The system that rewarded thoroughness is breaking down. The new system rewards the ability to see what constraints just disappeared and build something new in that space.

You still have time. But not much. The speed mismatch between agent capability and human adaptation is real. The companies won’t save you because they’re optimized for short-term cost reduction, not long-term workforce transformation. The educational system won’t save you because it’s too slow. You have to save yourself.

And the way you do that is by no longer trying to defend your current role and instead starting to build the role that didn’t exist six months ago. Monday morning will keep coming. The question is whether you’re still wondering what you’re supposed to be good at, or whether you’ve already built the answer.

newsletter.jantegze.com/p/your-job-isn…
Owen Gregorian tweet media
22 replies · 29 reposts · 148 likes · 10K views
TaraBull @TaraBull
Woman's rock climbing maternity photoshoot goes viral
TaraBull tweet media (two images)
58 replies · 9 reposts · 76 likes · 20.7K views
Chris Pavlovski 🏴‍☠️ @chrispavlovski
Dan Bongino (@dbongino) has broken the 100k concurrents barrier in the first minutes of starting his Rumble live show. Unreal.
313 replies · 249 reposts · 3.1K likes · 110.1K views