Occam's Strop

9K posts

@donicc

The best Tweets you've never seen! Semi-retired software engineer. Twitter OG. Fight! Fight! Fight!

Texas Gulf Coast, USA · Joined February 2008
337 Following · 515 Followers
Occam's Strop
Occam's Strop@donicc·
@BillMelugin_ I bet Trump already knows who leaked but wants to force CBS to defend a traitor.
0
0
0
52
Bill Melugin
Bill Melugin@BillMelugin_·
NEW: President Trump says somebody leaked to a media outlet that the first F-15E pilot had been rescued but that the US military was looking for a second crew member. He says Iran didn't know that at the time, and his admin will go to the media outlet that reported it and try to force them to disclose their source in the interest of national security or "go to jail".
465
2.2K
15.1K
550.1K
John Ʌ Konrad V
John Ʌ Konrad V@johnkonrad·
NATO is in far bigger danger than anyone realizes. And the reason has nothing to do with defense budgets. The real danger is psychological. It’s cultural. Europeans didn’t just free-ride on American security for 80 years. They built an entire identity around the idea that they evolved past the Americans protecting them. That identity is now the single biggest obstacle to Western survival. And the darkest irony is: we helped build it.

After World War II, Europe wasn’t just economically shattered. Its culture was in ruins. The cities, the universities, the concert halls, the museums. Rubble. The Marshall Plan rebuilt the economy. But culture wasn’t a priority. Not at first. Then the Iron Curtain dropped. And suddenly culture became a weapon. American diplomats, academics, artists & scholars flooded Western Europe. We funded their universities. Supported their orchestras. Rebuilt their museums. Promoted their intellectual life. Not because European culture needed saving for its own sake. Because Eastern Europeans were struggling for Maslow’s most basic needs. We needed the view from the other side of that Wall to be intoxicating.

So America built Western Europe into a showcase of self-actualization. Art. Philosophy. Cafe culture. Long vacations. Universities where people studied literature instead of surviving. We were manufacturing jealousy. And it worked. The Wall came down. But here’s what no one accounted for. When you give a society self-actualization on someone else’s tab long enough, they forget it was a gift. They start believing it was organically theirs. And when they look at the country that funded it all, a country busy building aircraft carriers and semiconductor fabs and shale fields instead of reaching Maslow’s pinnacle, they see an overweight American in a ball cap who can’t tell Monet from Pissarro. Who eats fast food. Who drives a truck. Who builds strip malls instead of piazzas.

And to a culture trained in aesthetics but stripped of strategic awareness, that American looks uncivilized. So the arrogance takes root. And once a culture decides another is beneath them, they stop listening. Americans say wars are sometimes necessary: crude. Oil is the backbone of prosperity: unsophisticated. Kids build companies in garages that reshape the planet: crass. Wall Street finances the global economy: vulgar.

Europe has no world-class technology sector. No military capable of strong defense. No energy independence. No AI capacity. What Europe has is culture. The culture we paid for at the expense of us reaching Maslow’s pinnacle. For decades that was fine. We funded the museums, protected the sea lanes, and tolerated the sneering because the arrangement worked. Then Europeans stopped keeping the contempt private. They started saying it to our faces. In their media. In their parliaments. At every international forum. “Americans are stupid. Americans are violent. Americans are a threat to democracy.”

We could have moved the Louvre to NY. We could have built a Venice here. We could have stolen your best artists, designers, philosophers and more… like your conquering armies did for centuries. Instead we funded them. And all we asked for in return was to let us visit.

You don’t have the military to defend your borders. You don’t have the technology to compete. You don’t have the energy to heat your homes without begging dictators. What you have is an 80-year superiority complex FUNDED BY AMERICANS, protected by American soldiers, and built on the false belief that self-actualization is civilization. It isn’t. Civilization is the ability to sustain itself. By that measure, Europe isn’t a civilization at all. It’s a dependency with better wine. That’s not a threat. It’s a weather report. Build a Navy. Or don’t.

But stop lecturing the people who made you “better than us.” Our “crudeness,” our “stunted liberal education,” our “ugly strip malls” are because we sacrificed our culture to support yours.
2.6K
6.7K
26.8K
1.3M
fred
fred@cat4lunch·
@AlexBeaurepaire @donicc @johnkonrad @french_report78 You were right to leave NATO, but you still have enjoyed the security umbrella. Not asking you to thank us, but don't be foolish enough to think there weren't advantages, and don't try to tell us we didn't provide it, because that is the most galling thing to us - the denials.
1
0
1
54
Occam's Strop
Occam's Strop@donicc·
@OwenGregorian It's because people get an extra hour to drink at the bar on fall back. That equates to more hangovers, more DWIs, more angry wives, etc. 🍺🍺🍺 Cheers
0
0
0
8
Owen Gregorian
Owen Gregorian@OwenGregorian·
"Falling back" makes us more miserable than "springing forward," new study finds | Vladimir Hedrih, PsyPost

A study using U.S. online and social media posts found that people’s moods tend to worsen during the biannual transitions to Daylight Saving Time (in the spring) and Standard Time (in the fall). This worsening of mood is more pronounced after the change to Standard Time in the fall. The paper was published in PLOS One.

Seasonal time change is the practice of adjusting clocks twice a year. In spring, clocks are moved forward by one hour to Daylight Saving Time, usually in March. This shift is described as “losing” an hour of sleep. In fall, clocks are moved back by one hour to Standard Time, typically in October or November. This is known as “gaining” an extra hour of sleep. The purpose of these changes is to make better use of daylight during longer days. In spring, evenings become lighter, while mornings are darker. In fall, mornings become lighter, while evenings get darker earlier.

These changes can temporarily affect sleep patterns and daily routines. However, research shows that time changes are associated with negative public sentiment. The shifts also disrupt sleep patterns, increase risks of accidents and health issues, and may impair cognitive functioning. There is an ongoing debate about whether to adopt permanent Daylight Saving Time or permanent Standard Time, as each has different implications for sleep, health, and daily life.

Daylight Saving Time is currently used in most of the United States and Canada, in parts of Australia and New Zealand, as well as in most European countries. Many countries near the equator have stopped using it or never adopted it because daylight variation there is minimal throughout the year. Similarly, Russia and Turkey have stopped changing clocks.

Study author Ben Ellman and his colleagues conducted a study using social media to measure how public sentiment changes around the dates of these time shifts. They hypothesized that there would be more negative sentiment immediately following the clock shifts, and that this negative sentiment would be stronger in the fall. The authors collected daily data on social media mentions and sentiment about time changes within a 20-day span surrounding these events. The dataset used for analysis was collected using the Quid (formerly Netbase) Social Media Listening platform. The researchers defined a set of primary terms to use in their social media search, including DST, #DST, Daylight savings, extra hour, gain an hour, lose an hour, standard time, and #Timechange. Analyzing posts made between 2019 and 2023, the study authors collected a total of 821,140 mentions.

The researchers did not just look at the U.S. as a whole; they specifically looked for posts originating from cities located within 100 miles of U.S. time zone borders. By comparing the sentiment in a city just west of a time zone border on the day of the time change with a city just east of the border on the day prior, the researchers were able to isolate the “shock” of the time change itself, holding variables like weather and daylight schedules relatively constant.

The authors used Quid’s Natural Language Processor to examine the tone and context of the posts. Each post was assigned a sentiment value between -100 and 100, depending on whether it expressed a positive or a negative mood. They also had Quid’s processor categorize mentions by unique terms that drive the sentiment of each primary term.

Results showed that in the national dataset, the mean number of daily mentions of terms related to time change was 32,271, with huge variations from day to day. The highest numbers of daily mentions occurred in the Eastern and Pacific regions of the U.S. Overall, the average national sentiment while under Daylight Saving Time (positive: 5.65) was better than under Standard Time (negative: -13.02).

Ultimately, the results revealed negative shocks to sentiment after both time changes. However, the outcomes following the transitions differed. The researchers found that while the negative mood drop following the spring change to Daylight Saving Time attenuated (recovered) relatively quickly, the negative sentiment following the fall change to Standard Time persisted for a longer period.

“These findings provide evidence that individuals have a more negative reaction to the societal time change to Standard Time in the fall than they do to DST in the spring. This work highlights the potential that the reaction to societal time changes varies depending on whether moving to or away from DST or Standard Time,” the study authors concluded.

The study adds to the scientific knowledge regarding how people react to time changes. However, the authors note that sentiment towards time changes depends on complex behavioral responses and demographic characteristics that were not observed in this study. Because people differ in their social media use patterns, these differences in reactions might not be completely or adequately reflected in social media posts alone.

psypost.org/falling-back-m…
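The border-pair comparison the study describes can be sketched in a few lines. Everything below is invented for illustration (the city names, the per-post scores, and the sample sizes are not from the paper); it only shows the shape of the calculation: scores on a -100..100 scale, treated city on the day of the change versus a control city across the border on the day prior.

```python
# Hypothetical sketch of the border-discontinuity comparison described
# above. All sentiment samples are made up; only the arithmetic matches
# the design: mean(treated) - mean(control) estimates the "shock".

def mean(scores):
    return sum(scores) / len(scores)

# per-post sentiment scores on Quid's -100..100 scale (invented)
west_city_on_change_day = [-40, -25, -60, -10, -35]  # clocks just changed
east_city_day_before = [5, -10, 15, 0, -5]           # no change yet

shock = mean(west_city_on_change_day) - mean(east_city_day_before)
print(f"estimated sentiment shock: {shock:.1f}")  # -35.0
```

Because both cities sit near the same border, weather and daylight schedules are roughly constant, so the difference isolates the time change itself.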
6
1
10
4.8K
Occam's Strop
Occam's Strop@donicc·
@BowTiedTrance My theory is that nobody wants someone else puking AI slop at them because they have a handy button that readily provides all the AI slop they can stand.
0
0
2
20
"Doc" Hypnosis 🧠 | BowTied Brain-Hacking
Even when told that a human collaborated in the work, audiences reacted negatively to knowing that AI was involved at any level. I wonder why?
Owen Gregorian@OwenGregorian

People consistently devalue creative writing generated by artificial intelligence | Eric W. Dolan, PsyPost

A recent study published in the Journal of Experimental Psychology: General suggests that people consistently judge creative writing more harshly if they believe it was created by artificial intelligence. This bias appears incredibly difficult to overcome, pointing to a persistent human preference for art created by people.

Generative artificial intelligence refers to computer programs capable of producing new text, images, or music by predicting patterns from massive amounts of data. Tools like ChatGPT and Claude can now write essays, poems, and stories that read very much like they were written by a real person. As these technologies become more common, scientists wanted to understand how people react to computer-generated art.

“We started this project in early 2023, shortly after the launch of ChatGPT. From my early interactions with the technology, it was clear to me that this tool was capable of creative production, and I was very curious about whether and how humans would react to AI-produced creative goods,” explained study author Manav Raj, an assistant professor in management at the Wharton School of the University of Pennsylvania.

Prior research hints that people might not be able to tell the difference between human and computer writing if they are kept in the dark. However, the researchers conducted this specific study to see what happens when audiences are explicitly told that a machine wrote the text. They wanted to see if this knowledge changes how people enjoy the art and whether anything can soften that negative reaction.

To explore these questions, the scientists carried out sixteen separate experiments involving a total of 27,491 participants. In the first group of five experiments, researchers tested whether the actual content of the writing changed how people reacted to the artificial intelligence label. They had participants read poems and short stories generated by ChatGPT and rate them on quality, creativity, and enjoyment. Some participants were told a machine wrote the text, while others were told a human wrote it. The researchers varied the writing style, testing first-person versus third-person perspectives, poetry versus prose, and different emotional tones. They even tested stories featuring human characters versus aliens, animals, and robots. Across all these variations and thousands of participants, readers consistently gave lower ratings to the text when they thought a machine wrote it. Changing the story details did not consistently lessen this penalty. This initial phase provided evidence that the bias is largely independent of the specific content of the writing.

In the second phase of the research, the scientists conducted an experiment with 3,590 participants to see if the evaluation context mattered. They asked one group to judge the text as a piece of art. They asked another group to judge it based on objective qualities like coherence and logic. Changing the instructions in this way did not soften the negative reaction. Participants in both groups still devalued the writing when they believed it came from a computer. This suggests that the bias applies whether people are reading for pleasure or for practical evaluation.

Next, the researchers ran five more experiments to see if changing people’s perceptions of the computer program would help. In these studies, they asked participants to read articles about the impressive cognitive or emotional capabilities of machines before reading the generated stories. In some versions, the scientists also tried humanizing the software by giving it a name and a gender. None of these strategies reliably reduced the negative bias. Even when the computer program was described as highly capable or given human traits, participants still rated the writing lower upon learning its origin. The negative reaction proved remarkably persistent across these diverse approaches.

“The surprise to us was how persistent the effect was,” Raj told PsyPost. “We really tried at different points to ‘break’ it and to find circumstances where we could get the AI disclosure discount to go away. Despite our attempts that built on existing literature on algorithmic aversion, we found this result was really sticky.”

In a fourth pair of experiments, the scientists explored whether knowing a computer wrote a story simply makes people feel ambivalent. Ambivalence means having mixed feelings, where someone might see both positive and negative qualities in the exact same thing at the exact same time. Testing 423 and 1,280 participants respectively across two studies, the researchers sought to measure this specific emotional state. They found that knowing about the computer involvement did not create mixed feelings. It simply made the participants’ judgments more negative overall. The disclosure did not create a complex emotional response, but rather a straightforward decrease in appreciation.

Finally, the researchers ran three experiments to test a concept involving a human in the loop. They wanted to know if framing the writing process as a collaboration between a person and a machine would be viewed more favorably. They tested this with machine-generated stories and with actual award-winning short stories written by humans. When participants were told a person used a computer program as a tool to write the story, they still judged the work just as harshly as if the machine had written it alone.

Throughout the studies, researchers collected data on various potential mechanisms, like perceived humanness, effort, and emotional depth. They consistently found that perceived authenticity was the strongest factor explaining the lowered ratings. People simply view machine-generated text as less authentic than human creations, which explains the negative ratings.

“Our main finding is that, at least at this point, humans have a persistent, negative reaction to knowing that creative goods (or at least creative writing) are produced with the help of AI,” Raj said. “While everything with AI is a moving target right now, this lasted over many, many studies and a roughly two-year period of data collection.”

While these findings provide evidence of a strong bias, there are a few potential limitations to keep in mind. The participants were recruited from an online platform that tends to attract people who are somewhat tech-savvy. This means the results might not perfectly represent the entire global population. The observed biases could also manifest differently in visual arts, music, or other physical products. It is entirely possible that attitudes will shift as society becomes more accustomed to this technology. Future research could explore whether this negative bias fades over time as machine-generated text becomes an everyday reality.

“One thing I’d note is that our study does not speak to the quality of AI-generated creative goods at all,” Raj explained. “In all cases, we held the writing sample constant and just manipulated whether participants believed it was written by AI. Accordingly, the quality and nature of the creative goods are an open question.”

“This last point is a question that I’d be interested in studying in the future. While we are using AI for creative purposes and innovation, we do not yet know what it means for the characteristics of creative goods (other than some research that suggests we have a hard time telling apart AI-generated vs. human-generated creative goods in some settings). I’m very interested in pushing further in this domain.”

psypost.org/people-consist…
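The study's core manipulation is easy to picture as a toy calculation: identical texts, with only the claimed author varied, and a mean rating difference as the "disclosure penalty." The ratings below are invented for illustration and do not reproduce the study's scales or data.

```python
# Toy illustration of the disclosure-penalty design described above.
# Same text in both conditions; only the stated author differs.
# All ratings are made up.

human_label_ratings = [7, 8, 6, 7, 9, 8]  # told "written by a human"
ai_label_ratings = [5, 6, 4, 6, 7, 5]     # told "written by AI"

def mean(xs):
    return sum(xs) / len(xs)

penalty = mean(human_label_ratings) - mean(ai_label_ratings)
print(f"average disclosure penalty: {penalty:.2f} points")  # 2.00 points
```

Holding the text constant is what lets the difference be attributed to the label alone rather than to writing quality.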

10
7
33
4.4K
Occam's Strop
Occam's Strop@donicc·
@dandinohill Imagine if we spent just half of the $1.2 trillion Infrastructure Investment and Jobs Act money on, you know, infrastructure and jobs. Sure, we shouldn't be spending $100B on foreign aid, but let's not pretend we would get anything good at home for the money.
0
4
7
196
Dan Hill
Dan Hill@dandinohill·
Our crazy congress gives away $100 Billion a year to foreign countries. Imagine if that $100 Billion were spent in America on rebuilding our crumbling Infrastructure, and creating Infrastructure jobs for hard working Blue Collar Americans.
23
99
299
1.5K
Occam's Strop
Occam's Strop@donicc·
@OwenGregorian There is risk in shortened chip lifecycles due to innovation, or more so in overly ambitious lifecycles claimed on paper. Otherwise, obsolescence is mostly planned for with replacement, retrofits, upgrades, downcycling, etc.
0
0
0
22
Owen Gregorian
Owen Gregorian@OwenGregorian·
AI angst mutates into 'FOBO' as Fear of Becoming Obsolete fuels quiet resistance across the economy | Nick Lichtenberg, Fortune

There’s a new acronym reshaping how workers think about their careers: FOBO — the Fear of Becoming Obsolete. Unlike traditional job insecurity, FOBO isn’t about getting fired. It’s about becoming irrelevant. Four in 10 workers now name AI-driven job loss as one of their primary fears — a share that has nearly doubled in a single year, according to KPMG. Sixty-three percent say AI will make the workplace feel less human. Skill demands in AI-exposed roles are shifting 66% faster than they did just one year ago. In 2026, FOBO became the defining psychological condition of the American workplace.

After Dario Amodei, CEO of Anthropic, claimed last year that AI could eliminate 50% of entry-level white-collar positions within five years, he was joined within months by Microsoft AI CEO Mustafa Suleyman, who offered a similar outlook. More recently, Senator Mark Warner (D-VA) said that AI leaders themselves have been surprised and alarmed at the pace of disruption, and they are “literally consciously pulling back on their predictions because of the short-term economic disruption.” Warner put the new college grad unemployment at 35% within two years. These are the predictions feeding FOBO — and they’re landing.

A massive new study from MIT wants to pump the brakes. Not on the fear — FOBO, it turns out, is pointing in roughly the right direction — but on the timeline. And the timeline, it turns out, changes everything. Researchers at MIT FutureTech published findings this week showing that AI’s march through the labor market looks far less like a sudden catastrophe and far more like a slow, rising flood — serious and accelerating, but not the overnight apocalypse that has dominated headlines and executive anxiety for the past two years.

“Rather than arriving in crashing waves that transform a certain set of tasks at a time,” the researchers write, “progress typically resembles a rising tide, with widespread gains across many tasks simultaneously.”

The study, titled “Crashing Waves vs. Rising Tides,” is one of the most comprehensive empirical examinations of AI’s real-world task performance to date. The team of nine researchers led by Matthias Mertens and Neil Thompson collected more than 17,000 evaluations of LLM outputs from domain-expert workers across more than 3,000 labor market tasks drawn from the U.S. Department of Labor’s O*NET classification system. Those tasks spanned everything from legal analysis to food preparation, management to computer science. More than 40 AI models were tested, ranging from GPT-3.5 Turbo to GPT-5, Claude Opus 4.1, Gemini 2.5 Pro, and DeepSeek R1.

For anyone gripped by FOBO, the core question the researchers asked is also the most unsettling one: Can AI complete these tasks well enough that a manager would accept the output without any edits? The answer is already yes — frequently. Across all models and job categories tested, AI successfully completed roughly 50% to 75% of text-based labor market tasks at a minimally acceptable quality level. That’s not a future projection. That’s today. More specifically, the study found that by the third quarter of 2024, frontier AI models were already hitting a 50% success rate on tasks that take humans about a full workday to complete.

The improvement trajectory is steep. Between the second quarter of 2024 and the third quarter of 2025, frontier models went from clearing a 50% success threshold on 3- to 4-hour tasks to clearing the same bar on tasks that take humans an entire week. Failure rates are halving roughly every two to three years across the board, which translates to annual gains of 15 to 16 percentage points in success rates.
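The halving claim above can be sanity-checked with back-of-the-envelope arithmetic. The starting year, starting success rate, and 2.5-year halving period below are assumptions pulled from the figures quoted in the article, not parameters from the MIT paper itself.

```python
# Rough extrapolation of the "failure rates halve every 2-3 years" claim.
# Assumes a 50% success rate in 2024 and a 2.5-year halving period;
# both are assumptions for illustration.

def projected_success(start_success, years, halving_period=2.5):
    failure = 1.0 - start_success
    failure *= 0.5 ** (years / halving_period)  # exponential decay of failures
    return 1.0 - failure

# 2024 -> 2029 is five years, i.e. two halvings of the failure rate
s_2029 = projected_success(0.50, 5)
print(f"projected 2029 success rate: {s_2029:.1%}")  # 87.5%
```

A 87.5% projection under these assumptions lands inside the article's quoted 80% to 95% range for 2029, which is a useful consistency check on the numbers.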
Extrapolating those trends — and the researchers are careful to note this represents an optimistic, upper-bound scenario — AI systems could complete most text-based tasks with 80% to 95% success rates by 2029 at a minimally sufficient quality level. For the majority of survey tasks, which take a few hours for a human to complete, the projected 2029 success rate approaches 90%.

MIT doesn’t use the phrase, but this is FOBO, calibrated. The fear isn’t irrational — it’s premature. The water is rising. But the MIT data suggests the floorboards won’t be underwater by next Tuesday. The researchers’ most consequential line for anxious workers: “Workers are likely to have some visibility into these changes, rather than facing discontinuous jumps in AI-driven automation.” The rising tide gives you time to move. The question is whether you’re moving.

FOBO at the institutional level

Here’s the irony: even as MIT documents AI’s sweeping capability gains, most companies have yet to deploy the tools at all. FOBO isn’t just a personal condition, then — it’s an organizational one. According to Goldman Sachs economists Sarah Dong and Joseph Briggs, citing Census Bureau data in their March 2026 AI Adoption Tracker, fewer than 19% of U.S. establishments have adopted AI. Goldman projects that adoption will reach only 22.3% over the next six months.

Compounding that paralysis: only about one-third of workers say their employer is providing adequate AI training, guidance, or reskilling opportunities — down nearly 10 percentage points from 2024, according to research from workforce nonprofit JFF. Most companies are leaving workers to manage FOBO alone, without the infrastructure that would actually resolve it.

That gap has a measurable cost. Enterprise workers who do use AI are recapturing 40 to 60 minutes per day, according to OpenAI enterprise data from December 2025, and 75% say they can now complete tasks they previously couldn’t do at all.
“We continue to observe large impacts on labor productivity in the limited areas where generative AI has been deployed,” Goldman’s economists wrote. “Academic studies imply a 23% average uplift to productivity, while company anecdotes imply slightly larger efficiency gains of around 33%.”

Put simply: the companies using AI are pulling ahead. And the math is unforgiving. Across a team of 50, that 40-to-60-minute daily time saving translates to 33 to 50 hours of recovered productivity every single day. The race is on, then, but many companies are still strapping on their running shoes and waiting for the whistle to blow.

FOBO with a corner office

The MIT data lands at a moment when corporate leaders are scrambling to get their arms around a technology that, as one senior executive put it, is “outpacing the ability for humans and businesses to adopt it.” Joe Depa, the global chief innovation officer at EY, told Fortune in a recent interview that “the technology is in many ways ready, but it’s taking some time for us to … take advantage of it.” Depa, who oversees AI strategy for one of the world’s largest professional services firms, described the pressure he sees across industries as relentless. “Every day there’s a new headline, every day there’s a new, you know, something that we have to get ready for. Every day, I get an email from my boss asking about some new event that happened somewhere in the world that’s raising the stakes of how fast things are moving within AI.”

That pressure is sharpened by a stark internal reality at many companies: 83% of executives — drawn from a survey of 500 business leaders — say they lack the right data infrastructure to fully leverage AI, and EY’s own clients, across 4,500 surveys, report the same gap. In other words, the technology is racing ahead while the organizational plumbing needed to actually use it lags far behind.
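The team-level arithmetic quoted above is easy to verify: 40 to 60 minutes saved per worker per day, across 50 people, should indeed come out to 33 to 50 hours daily.

```python
# Checking the article's arithmetic: minutes saved per worker per day,
# scaled to a 50-person team and converted to hours.

team_size = 50
for minutes_saved in (40, 60):
    hours = team_size * minutes_saved / 60
    print(f"{minutes_saved} min/worker -> {hours:.1f} team hours/day")
```

The low end works out to about 33.3 hours and the high end to exactly 50, matching the figures in the article.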
FOBO’s cruelest irony

That’s where the “rising tide” framing offers some reassurance to the many companies grappling with this dynamic. The MIT findings directly challenge research from METR, a prominent AI safety organization, which has argued that AI capabilities surge abruptly for specific sets of tasks — a “crashing waves” model that implies workers could suddenly find themselves obsolete with very little warning. “We find little evidence of crashing waves,” they wrote, “but substantial evidence that rising tides are the primary form of AI automation.”

The MIT data, drawn from realistic and representative job tasks rather than stylized benchmarks, consistently shows a flatter performance curve. AI doesn’t suddenly master a narrow set of tasks and leave everything else untouched. Instead, it gets broadly, incrementally better across nearly all task types and durations simultaneously. “Workers are likely to have some visibility into these changes,” the researchers write, “rather than facing discontinuous jumps in AI-driven automation.”

More broadly, the projection of AI improvement to a near-perfect automation level through the next three years, not the next 18 months of doomsday scenarios, provides what the researchers call “a window for worker adjustment, particularly in tasks with low tolerance for errors.” Furthermore, their estimates assume AI progress continues at the pace seen over the last two years, meaning it’s an upper-bound or particularly fast scenario. AI may simply not keep advancing as fast as it has recently.

That matters for how companies plan and how workers prepare. A crashing-wave model demands emergency triage; a rising-tide model demands strategic adaptation. The MIT researchers argue the latter is the more accurate frame — though they’re emphatic that “gradualism is not inherently protective.”

There are meaningful differences by profession. Legal work had the lowest AI success rate among the domains tested, at just 47%. Installation, maintenance, and repair work — for text-based tasks specifically — topped the chart at 73%. Management tasks came in around 53%; healthcare practitioners at 66%; business and financial operations at 57%. In other words, no white-collar sector is immune, but some are considerably closer to the inflection point than others.

Depa said he sees this sorting happening in real time inside EY’s own workforce, and humans are acting unpredictably, even strangely, at the prospect of this strange new work partner. The firm is the third-largest Microsoft Copilot user in the world, he shared, and the adoption data tells a generational story: junior employees are all in; senior leaders are lagging. “When I look at the breakdown,” he said, “two of my junior levels — high adoption, right out of the gate … and then when you get to the more senior levels, that’s where the adoption starts to drop off.”

He described a particularly worrying cohort: skilled, experienced workers who are simply refusing to use AI tools. “We’ve got some software engineers that are 10x, 20x more productive than last year using AI, like, they’re just killing it.” He said he’s seen workers go from “mediocre” to really “at the top of their game” once they master these new tools. At the same time, you have others “that used to be really, really strong software developers that are somewhat resistant to using AI,” he said. They have an attitude that they can do it better, so they don’t need the tool. “And they’ve gone from being top of their class to now bottom of the peer group, right. And those are the ones I worry about the most.” The fear of becoming obsolete, in other words, is accelerating the very outcome that workers dread most. Left untreated, a serious case of FOBO becomes self-fulfilling.

As for these AI resisters, whose functional skills and experience are super critical but whose productivity now lags a peer group working at 10x or even 20x, “at some point, those individuals would have to find a different role,” Depa said. “And I think those are the ones that we’re trying to figure out.”

What’s still missing from the AI-at-work story

The MIT team is careful not to oversell its own findings. High task-level success rates, they note, don’t automatically translate into job displacement. The “last-mile costs” of integrating AI into actual workflows — organizational friction, liability concerns, the economics of deployment at smaller firms — remain significant barriers that are poorly captured by any benchmark. Near-perfect AI performance on most tasks also remains years beyond 2029. The flat logistic curve that makes the rising tide gradual also means the final climb toward 99%-plus reliability is a long one, a meaningful buffer for error-intolerant professions in law, medicine, and engineering. “While progress is significant,” the researchers write, “widespread automation, particularly in domains with low tolerance for errors, may still be some distance away.”

The bottom line is more complicated than either the doomers or the dismissers want to admit. AI is already capable, improving fast, and headed for most of your inbox in the next three to five years. But the transformation is likely to arrive as a steady, visible tide rather than a sudden drowning, which means the window to adapt is real, if not infinite. If you want to adapt, that is. FOBO is rational. The MIT data confirms it. But the antidote isn’t denial or paralysis — it’s exactly what the workers thriving inside EY are already doing: treating AI as a tool, not a verdict. The window is open. The question is whether you’ll walk through it.

fortune.com/2026/04/05/wha…
Owen Gregorian tweet media
English
2
1
12
1.6K
AB eu/acc
AB eu/acc@AlexBeaurepaire·
@donicc @johnkonrad @french_report78 I’m French. We left NATO previously and always advocated for European self-reliance for defense. Don’t accuse us of freeloading or whatever.
English
14
0
28
3.5K
Occam's Strop
Occam's Strop@donicc·
@AlexBeaurepaire @johnkonrad @french_report78 All of that is US aid because those countries are too weak to defend themselves from the Russia boogeyman. But I do support taking it all out and leaving NATO. I wonder how haughty you'll be then.
English
10
6
163
3.4K
Occam's Strop
Occam's Strop@donicc·
@libsoftiktok I feel bad for the idiot who got stabbed, because you'd have to be a true-believing moron to ever even do this. The people who sent the moron need to be held accountable, but they won't be.
English
0
0
1
99
Libs of TikTok
Libs of TikTok@libsoftiktok·
Democrats in Massachusetts replaced cops with social workers One of them was just stabbed while responding to a call after spending 45 minutes talking with the suspect massdailynews.com/2026/04/05/geo…
English
1K
6.7K
19.9K
413.8K
Occam's Strop
Occam's Strop@donicc·
@nypost The only line I remember from this whole series is "smelly cat"
English
0
0
1
253
New York Post
New York Post@nypost·
Lisa Kudrow says ‘nobody cared about me’ and that she was called ‘the sixth Friend’ on hit sitcom trib.al/lndYZs4
New York Post tweet media
English
1.6K
133
4.3K
10.3M
Clown World ™ 🤡
Clown World ™ 🤡@ClownWorld·
Broad daylight and a pregnant prostitute with an ankle monitor is out working like this is completely normal
English
498
387
3.7K
384.4K
Eric Daugherty
Eric Daugherty@EricLDaugh·
🚨 BREAKING: Scott Presler just told Leader John Thune he will go to TEXAS to primary challenge Sen. John Cornyn if the SAVE America Act is not passed "If Thune won't give us what we want, we will take away WHAT HE HAS." Scott is going hardball!
English
3.2K
20.1K
98.7K
2.1M
Occam's Strop
Occam's Strop@donicc·
@gothburz Our company sent out a memo requesting 1000 words from each employee justifying their job. Everyone who used AI to generate the letter was automatically fired. It was a 96% reduction in force. Glorious.
English
0
0
0
24
Peter Girnus 🦅
Peter Girnus 🦅@gothburz·
Our CEO said "people don't like jobs." He said it on stage. To an audience of people with jobs. They applauded. He called what comes next "glorious."

I am the VP of Workforce Liberation at an $18 billion AI search company. My title used to be VP of HR. Then VP of People. Then VP of Workforce Optimization. Now Liberation. Four names. Same job. The job is firing people. Glorious.

Block cut 4,000 workers last month. Forty percent of the company. Jack said AI would handle it. Aravind called the cuts glorious. Not the people. Not the work. The removal. Glorious.

Our search engine cited a source that doesn't exist fourteen times last week. We can't reliably tell you who wrote an article. We can reliably tell you the person who wrote it is no longer necessary. Glorious.

Fifty-five percent of companies that replaced workers with AI regret it. Forrester published the numbers. Aravind keeps a copy on his desk. Uses it as a coaster. We don't track regret. Regret is not a KPI.

I built a dashboard called the Workforce Sunset Tracker. It maps the quarter each role becomes "AI-addressable." Marketing: Q3 2025. Legal: Q2 2026. My role -- the one that eliminates roles -- Q4 2027. Glorious.

$297 billion in AI funding last quarter. Board slide says "Headcount Is Technical Debt." The debt goes down. The valuation goes up. Nobody asks what the debt used to do all day.

People don't like jobs. He said it from a stage where the ticket costs more than a month of the rent those people can no longer afford. Glorious. That's the math.
English
21
26
168
16.7K
Occam's Strop
Occam's Strop@donicc·
@nonsequitur2787 @DuBoseDefense "Or assuredly we shall all hang separately," as the famous saying goes. Most people steamrolled by the BS kangaroo-court system are those who can't afford a good lawyer, or even a decent lawyer. Parasite podcasters don't care about them so much. Average Joe doesn't stand a chance.
English
0
0
1
19
Jerry Gallo Callo
Jerry Gallo Callo@nonsequitur2787·
Then we need to stick together. The presumption of innocence was always a challenge, but with the monetized true crime podcasters and streamers, it is now impossible to receive a fair trial and the burden has shifted dramatically. What’s worse, even if you manage to prove your innocence (contrary to the law and due process btw.) they STILL won’t accept it. Their logic and reasoning? Vibes.
English
1
0
1
162
Coby DuBose | DuBose Defense
Coby DuBose | DuBose Defense@DuBoseDefense·
Oh boy do we have a lot to discuss. I’ve represented about 100 people charged with DWI. 78% of those cases have been dismissed. Tiger in this video is among the least intoxicated I’ve seen on a body cam. He’s cogent and responsive. The officer administering the HGN has no idea what she’s doing. Violates every rule in the NHTSA manual. He also does not appear to be slurring his words.
Overton@overton_news

🚨 BREAKING: Martin County Sheriff’s Office just released the full bodycam footage from Tiger Woods’ latest crash. Woods failed the field sobriety test and was found with two prescription hydrocodone pills in his pocket. OFFICER: “How much have you had to drink today?” WOODS: “None.” OFFICER: “And do you take any medication?” WOODS: “Uhh...I take a few, yes.” He was arrested for driving under the influence on the spot. Woods has entered a plea of not guilty.

English
342
172
3.8K
1.4M
Occam's Strop
Occam's Strop@donicc·
@JakeCan72 "Lost the Plot" is a euphemism for the total and catastrophic social failure that is the culmination of 35 years of leftist Democrat policy. Rahm Emanuel is one of the original architects of the plot. What a snake.
English
0
0
0
73
Jake
Jake@JakeCan72·
Rahm Emanuel — Obama’s Chief of Staff, Clinton’s political director — just said it. “We lost the plot.” Emanuel: 50% of American kids aren’t reading at grade level. Reading and math scores at a 30-year low. The Democratic Party’s response was bathroom access and trans athletes in women’s sports. He called it “insane” and “baffling” — Democrats undermining Title IX, one of their own greatest legislative achievements, to champion trans athletes in women’s sports. This isn’t a Republican saying it. This is the man who ran two Democratic White Houses.
English
791
4.4K
14.3K
741K