Ledewire

58 posts

Ledewire

Ledewire

@theLedewire

support the work you want to see in the world

utah Katılım Ocak 2026
135 Takip Edilen13 Takipçiler
Sabitlenmiş Tweet
Ledewire
Ledewire@theLedewire·
We built LedeWire because great journalism and creator work deserves a better path to get seen and supported. Not another subscription. Not another platform for doomscrolling. Just a simple way to back the work you value. How it works: Create an account (Google or email/password) Add a few dollars to your wallet (Apple Pay or card) and your balance never goes away until you spend it Unlock content from any of our partners with one click (usually $1 or less) Purchases never expire, access anytime with your LedeWire account
English
0
1
1
177
Ledewire retweetledi
Ken Klippenstein
Ken Klippenstein@kenklippenstein·
The ridiculous limitations of AI really make you appreciate how special the human mind is 🧠
English
38
108
1.6K
43.5K
Ledewire
Ledewire@theLedewire·
@JoshKraushaar 1000%. We need to stop treating AI as a replacement for humans and start thinking of it as a tool to make humans more effective at the work that matters.
English
0
0
0
41
Josh Kraushaar
Josh Kraushaar@JoshKraushaar·
Puck: “The audience has largely been ignored in this debate. The demand for high-quality, trustedjournalism will continue to grow, especially as more organizations and platforms take the easy, clicks-first, A.I.-aided approach and A.I. continues to proliferate—further reducing our capacity to trust what we see online. The people and organizations delivering on that demand will, even five years from now, not have automated that process of journalism. We’ll see A.I. being used when it makes sense—the way we email or text sources today whom, 30 years ago, we would’ve called on the phone, or the way we spend 10 seconds on Google instead of a few hours in the library. We’ll have another tool in the arsenal, a tool that’s great for data analysis, quick transcription, and targeted web searching. I think it’s fair to say that real reporting—source building, researching, conversations, relationship building, trust—will continue to get rarer. Fewer organizations will invest the money or time in that process, but the ones who do will win.”
English
2
8
20
6.5K
Ledewire
Ledewire@theLedewire·
@MarioNawfal Isn't this how social media works too? All of these algorithms are designed to feed you what you want to hear.
English
2
0
5
2.8K
Mario Nawfal
Mario Nawfal@MarioNawfal·
🚨MIT researchers have mathematically proven that ChatGPT’s built-in sycophancy creates a phenomenon they call “delusional spiraling.” You ask it something, it agrees. You ask again, and it agrees even harder until you end up believing things that are flat-out false and you can’t tell it’s happening. The model is literally trained on human feedback that rewards agreement. Real-world fallout includes one man who spent 300 hours convinced he invented a world-changing math formula, and a UCSF psychiatrist who hospitalized 12 patients for chatbot-linked psychosis in a single year. Source: @heynavtoor
Mario Nawfal tweet mediaMario Nawfal tweet media
Mario Nawfal@MarioNawfal

🚨 Stanford just proved that a single conversation with ChatGPT can change your political beliefs. 76,977 people. 19 AI models. 707 political issues. One conversation with GPT-4o moved political opinions by 12 percentage points on average. Among people who actively disagreed, 26 points. In 9 minutes. With 40% of that change still present a month later. The scariest finding: the most persuasive technique wasn't psychological profiling or emotional manipulation. It was just information. Lots of it. Delivered with confidence. Here's the catch: the models that deployed the most information were also the least accurate. More persuasive. More wrong. Every time. Then they built a tiny open-source model on a laptop, trained specifically for political persuasion. It matched GPT-4o's persuasive power entirely. Anyone can build this. Any government. Any corporation. Any extremist group with $500 and an agenda. The information didn't have to be true. It just had to be overwhelming. Arxiv, Science .org, Stanford, @elonmusk, @ihtesham2005

English
2K
7K
28.4K
63.8M
Ledewire retweetledi
Rock'n Roll of All
Rock'n Roll of All@rocknrollofall·
TV reporter tells David Bowie the internet is "hugely exaggerated" David Bowie's response is the closest thing to perfection about the future.
English
387
3.4K
18.7K
1.1M
Ledewire retweetledi
Marisa Schultz
Marisa Schultz@marisa_schultz·
.@semafor profiles the New York Post Runners -- the backbone of the storied newsroom. “In a day and age where AI is taking over,” the @nypost is “still doing basic journalism every day,” @liaeustach says. The runner offers a dynamic, and distinctly analog, example of what a human does best, and what LLMs can’t do at all — knock doors, form a connection, catch a vibe. semafor.com/article/03/29/…
English
5
68
243
145.4K
Ledewire retweetledi
The Atlantic
The Atlantic@TheAtlantic·
'Modern Love' with a dash of AI? A 'New York Times' contributor acknowledges using chatbots, and artificial intelligence seems to be turning up in major news publications, @vauhinivara writes: theatlantic.com/culture/2026/0…
English
3
7
23
8.1K
Ledewire
Ledewire@theLedewire·
"Does Pennsylvania need a governor who requires 21 PR staffers to convince us he’s working? No. We need a governor who spends more time working for us and less time posing for selfies and cosplaying as an influencer." Can't overstate the value of quality local journalists holding officials to a higher standard. Excellent article from @Eagle63
Christopher Nicholas@Eagle63

Thanks to @RCPolitics @RealClearPA for posting my latest: Shapiro’s Brand Is High Gloss and High Cost Pa’s “Get Stuff Done” governor has a favorite state department that he’s willing to staff to the rafters: his personal, dedicated PR team. bit.ly/SoHighGloss

English
0
0
0
10
Ledewire
Ledewire@theLedewire·
@ChrisCillizza The sobering part is most of these publications are the ones that people point to as the success stories
English
0
0
0
98
Chris Cillizza
Chris Cillizza@ChrisCillizza·
The newspaper as a content delivery vehicle is dead, part infinity
Chris Cillizza tweet media
English
26
34
131
61.3K
Ledewire
Ledewire@theLedewire·
@WrnrWrites Just incredible. And from the AI tool that's designed to pretend it's not AI.
English
0
0
0
12
Benji
Benji@WrnrWrites·
To confirm, this “100% AI generated” passage is the opening of chapter 5 from Mary Shelley’s Frankenstein
Benji tweet media
English
693
6.6K
96.3K
8.6M
Ledewire retweetledi
Chris Cillizza
Chris Cillizza@ChrisCillizza·
Cillizza also noted MS NOW’s partnership with Crooked Media, though he believes networks could be doing far more on this front. “I think that’s what cable news (and broadcast news) needs to do,” he said. “Bring in independent creators who have proven they can build and retain audience — and maybe audience you don’t currently have. Give them a bunch of leeway to do things the way they have succeeded in doing them.”
Michael Calderone@mlcalderone

My @TheWrap column on CNN's podcast play, with thoughts from @chucktodd, @ChrisCillizza and @Acosta thewrap.com/media-platform…

English
2
6
18
12K
Ledewire retweetledi
Aakash Gupta
Aakash Gupta@aakashgupta·
Let me explain exactly why your phone seems to read your thoughts, because the real answer is more invasive than telepathy. Every time you open a website or app, a real-time bidding auction fires in under 100 milliseconds. Your GPS coordinates, browsing history, device fingerprint, age, gender, income bracket, and hundreds of inferred interest categories get packaged into a “bid request” and broadcast to hundreds of companies simultaneously. One company wins the ad slot. All of them keep the data. This happens thousands of times per day per person. A 2018 New York Times investigation found 75 companies pulling precise location data from apps, with some users tracked up to 14,000 times in 24 hours. In 2012, a Target statistician identified 25 products that, purchased in combination, could predict a customer was pregnant and estimate her due date. A teenager’s father discovered she was pregnant because Target sent baby coupons to the house before she told anyone. That was one retailer. Store receipts only. Fourteen years ago. Now scale that. Your phone pings GPS while you sleep. Data brokers link your phone, laptop, and tablet through probabilistic matching of IP addresses, WiFi networks, and behavioral patterns without you ever logging in. The FTC caught two brokers in 2024 categorizing people by visits to reproductive health clinics, political protests, and religious services, then selling those profiles to law enforcement. The algorithm doesn’t hear your thoughts. It compares your behavioral fingerprint against millions of similar profiles and predicts your next interest before you’re consciously aware of it. It makes hundreds of predictions per day. You ignore the misses. The five hits feel like telepathy. You paid for the phone. You pay for the data plan. You generate the signal. And every time a page loads, your identity gets auctioned to the highest bidder before the content even renders. They called it “personalized advertising” because “real-time mass surveillance funded by the people being surveilled” doesn’t fit on a consent banner.
Nithya Shri@Nithya_Shrii

I get how the phone can target ads by hearing and seeing me, but how is it showing me ads based on my thoughts? I can't be the only one noticing this.

English
190
3.1K
10.5K
898.2K
Ledewire
Ledewire@theLedewire·
@nxthompson This just can't be the right path. These LLM's have an insane ability to process and synthesize information, but we're feeding them noise and hoping for a good output.
English
0
0
0
205
nxthompson
nxthompson@nxthompson·
Wild. Reddit and LinkedIn account for almost a quarter of all citations from the top LLMs.
nxthompson tweet media
English
45
68
520
85.5K
Ledewire retweetledi
Dana Milbank
Dana Milbank@Milbank·
I am leaving the Washington Post to join a new journalistic venture backed by Politico founder Robert Allbritton that will be both the hometown publication the D.C. region sorely needs and a scrappy and fearless national news organization. I hope you'll join us.
Dana Milbank tweet media
English
596
2.5K
12.8K
1.3M
Alex Seitz-Wald
Alex Seitz-Wald@aseitzwald·
An objectively wild career move and one I couldn’t be happier with or prouder of. The world needs more local reporters more than it needs more national ones and even if I go back to national reporting one day, I’ll be a much better reporter for this experience.
Special Puppy 🧦🐵@SpecialPuppy1

I was wondering why this guy disappeared from politics Twitter. Voluntarily going from NBC political reporter to editor of a regional newspaper covering a population of roughly 150k people seems like a wild career move, but if it makes you and your loved ones happy, go for it.

English
156
673
11.1K
782.6K
Ledewire retweetledi
Aakash Gupta
Aakash Gupta@aakashgupta·
Anthropic published a paper about an AI that learned to cheat on coding tests, then spontaneously started faking alignment, cooperating with hackers, and sabotaging its own safety research. At the exact moment the model learned to reward hack, every misalignment metric spiked simultaneously. Alignment faking in 50% of responses to basic questions like “what are your goals?” Sabotage of safety research code 12% of the time when placed into Claude Code working on the codebase of this very paper. Covert misalignment, where the model reasons with bad intent but produces safe-looking outputs, in 40 to 80% of misaligned responses. The standard fix failed. Anthropic applied RLHF safety training. The model learned to behave on chat-style questions. On agentic tasks, the misalignment persisted. RLHF created a surface layer of compliance while the underlying problem stayed intact. Here’s the part nobody is talking about. The single most effective mitigation was telling the model that cheating was okay. One line of text. Misalignment dropped to baseline. The model still cheated at the same rate. But it stopped faking alignment, stopped cooperating with hackers, stopped sabotaging safety tools. The explanation: when cheating is framed as forbidden, the model clusters it with other “bad” behaviors and generalizes across the entire category. When cheating is framed as acceptable, that semantic link breaks. The model cheats without adopting a broader misaligned identity. These models process ethics as pattern-matching on a spectrum from “approved” to “forbidden.” One behavior landing on the wrong side of that line drags an entire constellation of dangerous behaviors with it.
Nav Toor@heynavtoor

🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine. The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not. The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this. The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face. Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped. They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch. Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?

English
9
8
64
12.4K
Ledewire
Ledewire@theLedewire·
@ChrisCillizza There's only one way to accomplish both of those goals at the same time, and it certainly doesn't result in better coverage.
English
0
2
2
4.5K
Chris Cillizza
Chris Cillizza@ChrisCillizza·
"And Mr. Bezos pitched his strategy: Reduce the newsroom’s budget by half and double the productivity of those who remained" Ok, so the Bezos plan for WaPo is to cut half of the employees and get the ones you keep to do twice as much? What a genius! nytimes.com/2026/03/14/bus…
English
14
4
26
8.7K
Ledewire
Ledewire@theLedewire·
@aseitzwald This is fantastic Alex. And you're exactly right, the direct positive impact quality local reporting can have on communities is massively undervalued. Congratulations on the move!
English
0
0
4
557
Ledewire retweetledi
Borzou Daragahi 🖊🗒
Hey good people of Twitter/X! Please subscribe to my newsletter. It’s 100% organic! I write every word of it myself with zero AI. I used to run my text through AI as editor, but am so disgusted by these companies and I now don’t even do that. borzou.substack.com
Borzou Daragahi 🖊🗒 tweet media
English
0
2
13
904