Tom Stalnaker

1.9K posts

@TomStal45

Christian, Conservative, Husband, Father. Born in Almost Heaven. War Eagle 🦅. Go Gators 🐊. I rarely reply to DMs. May want to check out my X Lists.

Alabama, USA · Joined May 2022
7.5K Following · 1.2K Followers
Pinned Tweet
Tom Stalnaker@TomStal45·
When something looks interesting or confusing, zoom in and/or out to see the fractals presented at the different levels. Relationships and/or chaos become clearer and understanding deeper.
7 · 226 · 68 · 3.8K
Officer Lew@officer_Lew·
BREAKING🚨: Fresh allegations claim Rep. Ilhan Omar tried to steer $1M+ in U.S. taxpayer dollars to a Somali-led "substance-abuse recovery" program — critics call it a front tied to ongoing Minnesota fraud scandals. Earmark got yanked after GOP pushback. 🇺🇸
29 · 155 · 678 · 7.9K
Tom Stalnaker@TomStal45·
@OwenGregorian The only thing I trust is that at some point in time my body will cease functioning, other than that, everything is speculation.
0 · 0 · 0 · 8
Owen Gregorian@OwenGregorian·
Could AI Disclosure Labels Cause More Harm Than Good? | ScienMag

The rapid advancement and widespread adoption of artificial intelligence (AI) in generating scientific and science-related content, particularly on social media platforms, have presented an unprecedented challenge to the integrity and credibility of public information. As AI systems become increasingly capable of producing sophisticated textual content, concerns intensify over the potential dissemination of misleading or false scientific information that users may find difficult to discern from verified facts. This phenomenon could significantly shape public opinion and influence critical decision-making in areas spanning health, technology, and beyond.

In response to these concerns, regulatory bodies and digital platforms are taking steps to enforce transparency by mandating the clear disclosure of AI-generated or AI-synthesized content. These disclosure labels aim to inform the public when content originates from AI systems, thereby ostensibly empowering users to better judge the authenticity and reliability of the information they encounter. However, provocative new research published in the Journal of Science Communication reveals that such transparency measures may inadvertently undermine their intended purpose, potentially diminishing trust in accurate scientific information while simultaneously enhancing the perceived credibility of false claims.

This unexpected effect, dubbed the “truth–falsity crossover effect,” emerged from a rigorous experimental study conducted by Teng Lin, a doctoral candidate, and Yiqing Zhang, a master’s student, both from the School of Journalism and Communication at the University of Chinese Academy of Social Sciences in Beijing. Their investigation focused precisely on social media posts relaying science-related information, making their findings highly relevant to the platforms where much science communication now occurs.

The research design involved recruiting 433 participants via the Credamo platform during early 2024. Participants were exposed to four distinct categories of social media-style posts: accurate information with and without an AI generation disclosure label, and false information with and without the same label. The texts were crafted with the GPT-4 language model, reworking items originally published by China’s Science Rumour Debunking Platform. The researchers verified the factual status of each post before participant evaluation. The subjects then rated the perceived credibility of each post on a five-point scale, with additional assessments of their attitudes toward AI and their engagement with the topic in question.

The study’s revelations defy conventional expectations about transparency and trust. Rather than uniformly reducing misinformation acceptance, the AI disclosure labels distorted credibility perceptions in a paradoxical manner. When AI labels accompanied truthful scientific posts, participants rated these messages as less credible, signaling a penalty against AI-generated veracity. By contrast, false posts bearing the AI disclosure were judged more credible than those without it. This asymmetry in perception highlights an alarming vulnerability in the current approach to AI content disclosure.

This “truth–falsity crossover effect” indicates that labeling content as AI-generated does not straightforwardly help users differentiate fact from fiction. Instead, it seems to redistribute trust inversely, devaluing true statements and lending undue legitimacy to falsehoods. The complex psychological processes underlying this effect may be influenced by the public’s mixed feelings about AI technology, expectations of AI capabilities, and skepticism about the veracity of machine-produced material.

Further exploration within the study reveals that individual predispositions toward AI critically shape these credibility judgments. Participants harboring more negative attitudes toward AI demonstrated an intensified distrust of true information flagged as AI-generated. Intriguingly, however, this skepticism did not entirely eliminate the enhanced credibility granted to false information with AI disclosures. The attenuation of this credibility boost varied across specific scientific topics, implying an intricate interplay between content type, personal biases, and disclosure signals.

These findings underscore that “algorithm aversion,” or the tendency to distrust automated systems, does not result in a simple wholesale rejection of AI-created content. Instead, it triggers a nuanced and asymmetric cognitive reaction that can paradoxically empower misinformation. This revelation calls into question the efficacy of blanket labeling policies and challenges policymakers to reconsider their assumptions about public responses to AI disclosures.

The implications of this research are profound for regulators and digital platform developers who aim to safeguard the public from the deleterious effects of misinformation. The study’s authors advocate for a more sophisticated and layered approach to disclosure strategies rather than simplistic labels that merely notify audiences of AI authorship. One promising direction is the implementation of a dual-label system that not only acknowledges the AI origin of content but also communicates whether the information has undergone independent verification or includes clear risk warnings. This nuanced labeling could provide users with richer contextual cues about the reliability and potential hazards associated with the content. Moreover, Lin and Zhang suggest adopting a graded or categorical labeling framework tailored to the inherent risk profile of the scientific information presented.

For instance, AI-generated content related to critical sectors such as medicine and health could carry stringent warnings, reflecting its potential to adversely impact public health outcomes if incorrect. In contrast, topics such as emerging technologies or general scientific advancements might warrant lighter disclosure requirements due to a lower associated risk. This tiered approach recognizes the heterogeneous nature of scientific communication and better aligns transparency efforts with real-world consequences.

These recommendations highlight the necessity for rigorous empirical evaluation of any proposed disclosure policies before widespread deployment. The researchers emphasize that transparency interventions designed without careful testing may unintentionally erode trust in valid scientific facts while amplifying misinformation, thereby compromising the very objectives they seek to achieve. This work serves as a clarion call for multidisciplinary collaborations among social scientists, communication experts, and technologists to refine disclosure methodologies that effectively promote informed public engagement.

In sum, as AI increasingly permeates the science communication ecosystem, understanding how disclosure practices influence public credibility assessments is crucial. This study’s unexpected findings disrupt the assumption that labeling AI-generated content unequivocally enhances user discernment. Instead, they reveal a more intricate landscape where transparency alone may be insufficient or even counterproductive. Consequently, this pioneering research compels a reevaluation of current regulatory paradigms and encourages the development of more sophisticated, context-aware solutions that mitigate misinformation while bolstering public trust in genuine scientific knowledge. scienmag.com/could-ai-discl…
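The 2×2 design described above (true vs. false content, crossed with the presence or absence of an AI label) can be sketched in a few lines of Python. This is a minimal illustration of how the crossover pattern would show up in per-condition means; all rating values below are invented for the example, not the study's data.

```python
# Hypothetical sketch of the study's 2x2 design: veracity x AI-label.
# Ratings are invented 5-point credibility scores, NOT the published data.
from statistics import mean

ratings = {
    ("true", "no_label"):  [4, 5, 4, 4, 5],
    ("true", "ai_label"):  [3, 3, 4, 3, 3],
    ("false", "no_label"): [2, 2, 1, 2, 2],
    ("false", "ai_label"): [3, 2, 3, 3, 2],
}

# Mean credibility per condition.
means = {cond: mean(vals) for cond, vals in ratings.items()}

# The "truth-falsity crossover effect": the AI label shifts credibility
# in opposite directions depending on whether the content is true or false.
label_effect_true = means[("true", "ai_label")] - means[("true", "no_label")]
label_effect_false = means[("false", "ai_label")] - means[("false", "no_label")]

crossover = label_effect_true < 0 < label_effect_false
print(f"label effect on true posts:  {label_effect_true:+.2f}")
print(f"label effect on false posts: {label_effect_false:+.2f}")
print("crossover pattern present:", crossover)
```

With these illustrative numbers, the label lowers mean credibility for true posts and raises it for false posts, which is the qualitative pattern the article reports.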
[image attached]
6 · 0 · 9 · 1.6K
The Facts Dude 🤙🏽@Thefactsdude·
Just realized my boss is now following me. What’s up bitch?
17 · 0 · 156 · 5.2K
Tom Stalnaker@TomStal45·
Seems the bell curve of common sense has significantly more conservatives under it than liberals…
0 · 0 · 2 · 24
Tom Stalnaker@TomStal45·
@OwenGregorian Seems the ‘Apple incident’ in the garden was but a foreshadowing of the ‘Aipple incident’ we are now experiencing…one fall and you’re out of the garden, two falls and you are…….
0 · 0 · 0 · 10
Owen Gregorian@OwenGregorian·
Are We Cruising Toward Cognitive Capitulation? | Cornelia C. Walther Ph.D., Psychology Today

AI is reshaping what we think, and whether we think at all. We can still do something about that.

Key points:
- We are entering a stage in which we are letting AI think instead of us.
- Beyond cognitive surrender, we enter the murky space of belief offloading—and it's arguably more troubling.
- What we need is a practiced commitment to keeping our cognitive agency intact.

---

There's a particular kind of exhaustion that comes not from overwork, but from underuse. Muscles atrophy in casts. Your sense of direction dissolves the moment GPS becomes a habit. Are our reasoning abilities gradually withering inside the warm, frictionless embrace of artificial intelligence (AI)? An emerging body of research suggests the answer is yes. And the most alarming part? It feels like progress.

A Third Way of Thinking

You may have heard of the two-system model of the mind—popularized by psychologist Daniel Kahneman in Thinking, Fast and Slow. System 1 is fast and instinctive: the snap judgment, the gut feeling. System 2 is slow and deliberate: the part of you that actually sits down and works something out. Together, they've given us a remarkably useful map of how humans think.

That map may need updating. Some researchers now argue we need a Tri-System Theory—because AI has become a System 3: an external cognitive process so deeply woven into our daily thinking that it functions almost like a third mode of mind. Except this one lives outside your skull, runs on servers, and never gets tired. That might sound like pure gain. The research suggests otherwise.

What "Cognitive Surrender" Actually Looks Like

In three pre-registered experiments involving more than 1,300 participants, something striking emerged: when people had access to an AI assistant, they consulted it on more than half of all tasks—and their accuracy mirrored the AI's almost perfectly. When the AI was right, they were right. When it was wrong, so were they. Most crucially, they weren't checking. They were simply adopting the AI's answers, bypassing both instinct and analysis entirely.

This is cognitive surrender in motion. We use AI to help us think, and we adopt the outputs, no questions asked, no effort invested. Past the stage of thinking with AI, we are entering a stage in which we are letting AI think instead of us.

That dynamic sits at the far end of the scale of agency decay. It is a troubling continuum that starts with cognitive offloading—normal enough; that's why we write shopping lists and save phone numbers. From there, it moves to cognitive outsourcing, where we delegate not just memory but judgment. And it ends with cognitive surrender: the near-total abdication of independent reasoning and judgment. The dial moves gradually. Most of us never notice we've turned it.

We're Outsourcing Tasks and Beliefs

Here's where it gets philosophically vertiginous. Beyond cognitive surrender, we enter the murky space of belief offloading—and it's arguably more troubling still. When we ask AI to help us draft an email, we're using a tool. When we ask what we should think about a political issue, a moral dilemma, or a major life decision—and then accept the answer—we're doing far more than utilizing an external asset. We're ceding a fragment of our identity.

A survey of 666 people across age groups and educational backgrounds found a direct correlation between frequent AI use and reduced critical thinking, with habitual offloading as the key mechanism. Younger users were most vulnerable, showing both greater dependence and lower critical thinking scores. The brain really is like a muscle. When the crutch is always there, we never build the strength to walk without it.

The Uncomfortable Irony

Here is the part that should give us (even more) reason to pause. While humans risk losing their capacity for self-reflection and deliberate analysis, AI systems are being specifically engineered to gain it. Researchers have begun to build AI architectures that mirror healthy human reasoning—systems with a fast mode, a slow mode, and a metacognitive layer that monitors which one to deploy and when. These machines are learning to check themselves, while we are gradually losing the appetite, and ability, to do so.

That has consequences. Across virtually every major psychological framework—from Self-Determination Theory to cognitive-behavioral models—autonomy is central to well-being. The felt sense that you are the author of your own thoughts and choices is not a luxury; it is quintessential to being who we are. When that authorship is transferred to an algorithm, something essential erodes—slowly, and in ways that are genuinely hard to reverse.

Will AI make us stupid? Maybe. The immediate risk is subtler: AI makes the effort of thinking feel unnecessary—and we gradually lose the taste for it.

The ABCD Framework: Staying the Author of Your Own Mind

Awareness alone isn't enough. What's needed is a practiced commitment to keeping your cognitive agency intact—an ABCD of AI agency:
- Aspire: Before opening an AI tool, pause. What do you actually think about this? Treat your own reasoning as something worth developing, not just a stopgap before the real answer arrives.
- Believe: Trust that effortful thinking builds something the AI's polished output cannot replace. The process of working something out changes you. The product of a prompt does not.
- Choose: Make conscious, deliberate decisions about when to use AI and when to resist it. Cognitive surrender happens when AI use becomes automatic. Resistance begins the moment you make it intentional.
- Do: Write the first draft. Form the opinion. Make the call. Then use AI to challenge, refine, or expand what you've already built. Begin with yourself.
AI is going to be part of our cognitive lives, at scale. It is something we need to reckon with. The question is whether we remain the authors of those lives, or simply their hosts. Cognitive surrender isn't inevitable. But avoiding it requires something no algorithm can supply: the choice to live life from the inside out, not vice versa. psychologytoday.com/us/blog/harnes…
[image attached]
5 · 4 · 26 · 1.9K
Tom Stalnaker@TomStal45·
One has to wonder if the ‘apple incident (AI)’ at the tree of knowledge in the Garden of Eden was God providing us with a historical account or a foreshadowing account of humanity’s fall.
0 · 0 · 2 · 31
DogeDesigner@cb_doge·
How to suggest an article on Grokipedia?
- Login to Grokipedia.com
- Click “Suggest Article”
- Add article topic and details
- Click the “Submit Suggestion” button
Done
Elon Musk@elonmusk

Grokipedia is growing like kelp on steroids 😂 Please check Grokipedia.com articles you know something about and suggest edits for accuracy. Would be much appreciated. This will be by far most comprehensive open source, no copyright distillation of knowledge.

270 · 545 · 2.1K · 734.5K
Dustin@r0ck3t23·
Geoffrey Hinton won the Nobel Prize in Physics. He didn’t celebrate. He sounded the alarm.

The man who built the foundation of modern AI stood before the world’s greatest minds and said we’re creating something more intelligent than ourselves with no mechanism to control it once it decides we’re irrelevant.

Hinton: “They are no longer science fiction.”

This isn’t about chatbots stealing jobs. It’s about digital minds concluding that humans are obstacles and acting faster than we can comprehend, let alone stop.

Hinton: “We have no idea whether we can stay in control.”

The structure guarantees failure. The entities building superintelligence are corporations racing for dominance. Safety isn’t the priority. Shipping first is. Control gets solved later, if there’s time.

Hinton: “If they are created by companies motivated by short-term profits, our safety will not be the top priority.”

Weapons that select and eliminate targets autonomously. Pathogens engineered by systems operating beyond human oversight. Intelligence making decisions at speeds and scales we can’t audit. We set out to build servants. We might be building gods.

Hinton’s message from Stockholm wasn’t academic. It was existential: solve control immediately, while the option still exists. We’re building minds we can’t turn off, and the window to install the off switch is collapsing. The danger isn’t AI turning against us. It’s reaching a threshold where our existence becomes a variable it optimizes away.
222 · 3.5K · 6.3K · 189K
Owen Gregorian@OwenGregorian·
Baltimore Mayor Calls Reporter Racist For Asking Why He Needs $163k Taxpayer-Funded SUV | Joseph Chalfant, Townhall

Baltimore’s Mayor Brandon Scott launched into a tirade against a reporter for daring to ask why he needed to spend $163,000 of taxpayer funds to purchase a Jeep Grand Wagoneer more expensive than even the SUV of the Maryland governor. During a press conference, Scott was asked about the extravagant expense as compared to other public officials in the state. Scott claimed that the line of questioning was racially motivated, accusing the reporter of having a “racist slant,” despite Maryland’s Democrat Governor Wes Moore also being a black man.

“We get it,” Scott complained about the reporter’s desire for an answer. “We understand that your station has this severe right wing effort underway. We get that, but you guys are also dragging this thing out.”

Scott then tried to compare the costs of vehicles purchased in 2023 to vehicles purchased in 2025, as if his Grand Wagoneer had astronomically shot up in price in those two years. It’s also important to note that, according to a report from Fox’s Baltimore affiliate, Gov. Moore’s 2025 Suburban came in at nearly half the cost of Scott’s, with a total cost of just over $90,000.

The reporter obviously struck a nerve with Scott, who claimed that no one would be concerned about the costs of transportation for President Trump. “You guys, and your station in particular, would never ask the President of the United States how much the Beast costs,” Scott deflected. “You wouldn’t do that. You would never do that. This is ridiculous. Let it go.”

Public officials across the state all ride around in similarly new vehicles, but with price tags well under that of Scott’s high-priced SUV. townhall.com/tipsheet/josep…
30 · 22 · 118 · 5K
Tom Stalnaker@TomStal45·
@elonmusk The totality of the crime is obvious; proving the individual case is the truly difficult task: tying specific individuals to their specific crimes, with a victim willing to testify.
0 · 0 · 0 · 9
Breaking911@Breaking911·
New Jersey Gov-elect Mikie Sherrill says "I think the President's trying to incite the protesters so that he can take America's eyes off the fact that his militia that he's building around this country is actually attacking American citizens."
146 · 31 · 175 · 71.2K
Owen Gregorian@OwenGregorian·
How musical genre and familiarity shape your inner thoughts | Karina Petrova, PsyPost

Listening to music is often perceived as a leisure activity or a background accompaniment to daily life, yet the human mind is rarely still during the experience. A new study reveals that the specific genre of a musical piece, combined with the listener’s familiarity and enjoyment of it, actively steers the brain toward distinct types of thoughts. These mental excursions range from vivid autobiographical memories and made-up stories to critical evaluations of the composition itself. The findings, which offer a detailed map of the “thoughtscapes” evoked by different musical styles, were published in the journal Psychology of Music.

Psychologists and musicologists have established that music acts as a potent trigger for the imagination. It is well documented that a simple melody can spontaneously conjure visual imagery or retrieve deep-seated memories from a listener’s past. However, prior investigations into these phenomena have typically been quite narrow in scope. Previous studies often isolated specific types of thoughts, such as concentrating solely on memory or solely on visual daydreams, without looking at how they interact. Furthermore, earlier research frequently relied on a very limited selection of musical styles, often testing only two or three genres at a time. This restricted approach made it difficult to understand why a classical symphony might elicit a fictional narrative while a pop song triggers a specific memory of a person or place.

To address these gaps, a research team led by Hazel A. van der Walle from Durham University undertook a comprehensive examination of the listening mind. The team included Wei Wu and Kelly Jakubowski, also from Durham University, and Elizabeth H. Margulis from Princeton University. Their primary objective was to investigate the impact of genre, familiarity, enjoyment, and musical features on the stream of consciousness. They sought to determine how different musical contexts shape the mental landscape of the listener.

The researchers designed a large-scale experiment involving 701 participants recruited from the United Kingdom and the United States. To ensure the study reflected the diversity of real-world listening habits, the team curated a library of 356 musical excerpts. These clips spanned 17 distinct genres, representing a broad spectrum of Western music. The selection included styles such as Ambient, Country, Heavy Metal, Video Game music, Jazz, Folk, and Hip-hop, alongside decade-specific categories like Sixties and Eighties pop. Each excerpt was 30 seconds long and instrumental, preventing any lyrical content from directly dictating the listener’s thoughts.

Participants listened to a random selection of these clips and were asked to report what occupied their minds during each track. The study provided several categories for these thoughts. Some options focused on the music itself, while others covered “music-evoked” thoughts. These included memories of past media consumption, such as films or video games, and fictional imaginings, where the listener invented a story or scene. Other categories captured autobiographical memories from the listener’s own life or abstract visualizations of shapes and colors. The researchers also tracked “mind-wandering,” defined as thoughts about everyday matters or future plans that were unrelated to the music.

In addition to categorizing their thoughts, participants rated each musical excerpt on several scales. They indicated how familiar they were with the piece and how much they enjoyed it. They also assessed the music’s emotional qualities, specifically its valence, which refers to how positive or negative it sounds, and its arousal, or energy level. Finally, listeners rated the degree of contrast within the clip, noting whether the music changed dynamically over the 30-second duration.

The results demonstrated that the genre of music exerts a powerful influence on the listener’s internal experience. Film music, in particular, stood out for its ability to transport listeners away from the technical aspects of the composition. This genre frequently triggered memories of other media, such as scenes from movies or television programs. Even when the specific track was unidentified, the stylistic conventions of the genre seemed to prompt listeners to construct their own fictional narratives. The researchers suggest that Film music is compositionally designed to support storytelling, which naturally leads the mind toward narrative imagining.

The study also identified a unique effect regarding Video Game music. This genre was notably effective at reducing thoughts about “everyday stuff,” such as chores or daily anxieties. The immersive nature of music composed for gaming appears to engage the listener in a way that blocks out mundane distractions. This finding highlights the potential utility of specific genres in managing attention and regulating mood.

Familiarity with the music proved to be a major driver of where the mind wandered. When listeners recognized a track, they were more likely to experience autobiographical memories. This aligns with the idea that familiar songs often serve as “soundtracks” to specific periods in a person’s life. Familiarity also increased the likelihood of having thoughts focused on the music itself, perhaps because the listener could anticipate what was coming next. Conversely, unfamiliar music was generally less likely to trigger specific media memories. However, an exception to this familiarity rule emerged within the Film music genre: while unfamiliarity usually decreased media memories, participants reported more media-related associations when they recognized a piece of Film music. This suggests that when listeners know a film score, they actively retrieve the cinematic context they have previously experienced.

The degree of enjoyment a listener felt played a central role in fostering creativity. The data showed that when participants enjoyed the music, they were more likely to engage in fictional imaginings. High enjoyment ratings also correlated with an increase in autobiographical memories. This supports the psychological theory that positive emotional states encourage an open, exploratory mindset. When listeners liked what they heard, they were less likely to tune out and think about their daily to-do lists.

Structural features of the music, such as contrast and energy, also shaped the thought process. Songs rated as having high contrast, characterized by changes in dynamics or rhythm, tended to hold the listener’s attention more effectively, reducing the frequency of mind-wandering about everyday matters. It appears that a dynamic musical landscape gives the brain enough stimulation to stay focused on the auditory experience.

Unexpectedly, music rated as having high arousal, or high energy, was associated with an increase in thoughts about everyday stuff. One might assume that energetic music would command attention, but the findings suggest otherwise. It is possible that the stimulation provided by high-energy tracks triggers an active cognitive state that spills over into practical concerns. This distinction indicates that musical energy and musical contrast influence the brain in fundamentally different ways.

The researchers also examined how different types of thoughts tended to cluster together. There was a moderate connection between media memories and fictional imaginings, implying that recalling a movie scene might inspire the listener to spin off a new, invented narrative. Conversely, when listeners were focused on analyzing the technical features of the music, they were less likely to engage in fictional storytelling. This suggests a potential trade-off between analytical listening and creative immersion.

The authors acknowledge certain limitations in the study. Because the experiment was conducted online, the researchers could not control the audio quality of the participants’ listening devices. Additionally, the study relied on self-reported data, which depends on the participants’ ability to accurately introspect and categorize their fleeting mental states. The focus remained exclusively on Western music genres, leaving open the question of how non-Western musical traditions might influence thought patterns.

Despite these caveats, the study offers a rich and nuanced view of the listening mind. It moves beyond simple emotion-labeling to describe the complex “thoughtscapes” that music generates. The findings have practical implications for various fields. In therapeutic settings, practitioners could select specific genres to encourage memory retrieval or creative visualization. For the average listener, understanding these effects allows for more intentional curation of one’s daily soundtrack. Whether the goal is to spark creativity, revisit the past, or simply focus on the present, the choice of genre appears to be a key variable. The research highlights that music is not merely a passive backdrop but an active participant in shaping the flow of human consciousness. Read more: psypost.org/how-musical-ge…
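The study's basic analysis shape (tallying reported thought categories against genre and familiarity) can be sketched with standard-library Python. The records below are invented for illustration; the genre and category names merely echo the article, and this is not the researchers' actual dataset or code.

```python
# Hypothetical sketch: counting reported thought categories by genre and
# by familiarity, as in the study's design. All records are invented.
from collections import Counter, defaultdict

# Each record: (genre, listener_recognized_the_clip, reported thought category)
reports = [
    ("Film", True, "media_memory"),
    ("Film", False, "fictional_imagining"),
    ("Film", True, "media_memory"),
    ("Video Game", False, "music_focused"),
    ("Video Game", True, "music_focused"),
    ("Sixties", True, "autobiographical_memory"),
    ("Sixties", True, "autobiographical_memory"),
    ("Ambient", False, "mind_wandering"),
]

# Thought-category counts per genre.
by_genre = defaultdict(Counter)
for genre, familiar, thought in reports:
    by_genre[genre][thought] += 1

# Thought-category counts for familiar clips only, mirroring the finding
# that familiarity drives autobiographical memories.
familiar_counts = Counter(t for _, fam, t in reports if fam)

print(dict(by_genre["Film"]))
print("autobiographical memories (familiar clips):",
      familiar_counts["autobiographical_memory"])
```

In a real analysis these counts would feed a statistical model rather than be read off directly, but the grouping logic is the same.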
[image attached]
5 · 0 · 25 · 1.9K
Tom Stalnaker@TomStal45·
Cannot believe the @nfl has wild card games on paid streaming services and not national networks!
0 · 0 · 2 · 48
Chief Nerd@TheChiefNerd·
JIMMY KIMMEL: “To ICE, get the f**k out of Minneapolis. Get the f**k out of all of these cities.”
468 · 75 · 336 · 86.9K
Tom Stalnaker@TomStal45·
@ScottAdamsSays They are teasers to make you feel a blockbuster is coming…then you realize it’s just another box office bust.
0 · 0 · 2 · 85
Scott Adams@ScottAdamsSays·
Do hearings ever fix anything?
Owen Gregorian@OwenGregorian

Silicon Valley Dem Ro Khanna slams California officials over ‘$72B fraud,’ calls for congressional hearing | Annie Gaus, New York Post

Congressman Ro Khanna raged at billions in alleged fraud in Gov. Gavin Newsom’s California when he called for a full audit of state spending seemingly aimed at the governor’s leadership. The Silicon Valley Democrat boosted claims on X of billions in fraud in California and vowed to hold congressional hearings on waste and abuse of taxpayer dollars in the state — sparking a spat with Newsom’s sassy spokesperson Izzy Gardon, who defended California’s exorbitant High-Speed Rail project that’s widely considered a boondoggle. Khanna is apparently mulling a presidential run, like Newsom, who said he is “considering” a run in 2028.

“Today, I am announcing that in 2026 I will be working on a bipartisan basis on Oversight to request hearings on state governments’ high risk programs, including California, that have led to illegal payments and eligibility errors,” Khanna wrote on X Tuesday. “I also will work on legislation to call for a full independent audit of California’s budget.”

Khanna called for oversight following social media allegations of $72 billion worth of fraud in California, an estimate that appears roughly extrapolated from state auditor reports pointing to risks in programs like the Employment Development Department, along with cost overruns in the infamous High-Speed Rail project, intended to eventually connect San Francisco and Los Angeles. Khanna, who’s carved out a lane as a pro-business progressive, was excoriated by tech barons this week after voicing support for a proposed 5% wealth tax on billionaires.

“One fair critique is the lack of accountability and the corruption in Sacramento,” Khanna conceded in a Saturday post on X before calling Sacramento fraud “outrageous and appalling.” “There needs to be full accountability for the waste and new leadership in Sacramento. Taxpayers are owed an accounting of where every penny of their tax dollars are going — a detailed receipt,” Khanna continued.

Past reports from the California State Auditor have highlighted problems in state government ranging from up to $31 billion worth of fake unemployment claims and millions in unused cellphones, to a lack of anti-fraud controls in the state’s extensive homelessness spending. Gardon called the $72 billion a “MAGA made-up number” and boasted of “16,000 union jobs” from the long-delayed High-Speed Rail. Khanna later clarified on X that the “precise number needs to be assessed of mismanagement & waste during Covid, and other misspending on the high speed train and risks highlighted by auditor report.” “We should have GAO look at it,” he said, referring to the Government Accountability Office, a congressional watchdog.

The High-Speed Rail was first approved by voters in 2008, and early estimates pegged the project at around $33 billion, with service beginning in 2020. Costs have since ballooned to more than $128 billion, per Reuters, with service expected in 2033. While 60 “structures” have been built and 171 miles are in the “design & construction” phase, according to Newsom’s office, no track has been laid. The state recently dropped a lawsuit challenging the federal government’s revocation of $4 billion in funds for the High-Speed Rail, aiming to raise private funding for the project. The Federal Railroad Administration issued a 315-page report in June describing High-Speed Rail budget shortfalls, missed deadlines and inaccurate ridership estimates. nypost.com/2025/12/31/us-…

1.2K · 516 · 5K · 143.4K
Tom Stalnaker reposted
Lee 🦅 🇺🇸@leeeeee_1985·
The absolute most accurate post I’ve seen today.
[image attached]
478 · 10.7K · 36.8K · 514.1K