Ronnie Baby
@dougjamesm

10.7K posts

Normal dude, floating on a rock, in a solar system, revolving around a supermassive black hole. I am a fraction of a speck of dust, the center of nothing.

New York · Joined May 2010
232 Following · 673 Followers
Being Libertarian @beinlibertarian
@bennyjohnson It’s not just that she lied. That was a year ago. The whole admin has been lying to the base that she has done a good job. But the distrust is everywhere. Not just the AG office.
4
8
204
1.4K
Benny Johnson @bennyjohnson
The unforgivable sin is lying to the base. Don’t ever lie to the base. You can’t say the Epstein Files are ‘on your desk,’ then give fake binders to influencers, then promise ‘arrests,’ then release an unsigned memo saying ‘nothing happened, move along.’ It destroys all trust.
1.8K
1.6K
12.2K
313.7K
Dave Smith @ComicDaveSmith
The world was a lot simpler when the government was just locking down the entire country and forcing young boys to be gay.
421
159
3.5K
154.9K
deep_seeK @long_shot_45
When you talk about covid, your brain is dead, Dave. Covid occurred under the Trump administration. Why? Because they lifted the ban on risky gain-of-function research that the Obama administration had put in place. It was the Trump admin that launched Operation Warp Speed and initially pushed these experimental treatments. Most of the lockdowns happened under Trump. If that wasn’t bad enough, within months of getting reelected, it was Trump who had the Pfizer CEO in the White House telling him what a great job he had done. All of this you apparently miss when you talk about covid. Can you unfry your brain?
3
4
35
1.8K
Joe Rogan Podcast News @joeroganhq
Theo Von: "It’s all just a cat and mouse game. People are like, ‘We’ll elect the Democrats next time.’ But it’s all the same sh*t that has been happening forever. They haven’t been helping anybody forever. They’re letting f*cking politicians slurp on kids!"
31
70
427
16.6K
Ronnie Baby @dougjamesm
@teameffujoe The ending to the documentary, "Behind the Curve," was freaking hilarious.
0
0
1
33
Owen Gregorian @OwenGregorian
AI Beliefs Are Impossible Under Most Philosophical Accounts of What Believing Means | George Semaan, Daily Neuron

A philosopher argues that AI beliefs can’t exist because belief, by definition, requires a relationship to truth that language models lack.

Key Takeaways
- Even if AI has internal representations, that doesn't mean it has beliefs.
- A chess piece that follows rules isn't the same as a player who understands the game.
- Whether AI can 'lie' depends on whether it can believe anything at all.

---

A chatbot tells you Sacramento is the capital of California. It’s correct. But did it believe that Sacramento is the capital, the way you believe it when you say the same thing? Now suppose the chatbot tells you something false. Did it lie? That second question, it turns out, hangs entirely on the first one.

AI beliefs sit at the center of a surprisingly consequential philosophical puzzle. We casually say that language models “know” things, “learn” from data, and have “reasons” for their outputs. But Camila Hernandez Flowerman, a philosopher at Bentley University, argues in a recent paper in Synthese that this language smuggles in assumptions about AI that most serious accounts of belief can’t support.

Her argument doesn’t rely on the familiar claim that chatbots are “just predicting the next word.” She draws instead on decades of philosophical debate about what makes a belief a belief in the first place. The answer has to do with truth, and with the specific way truth must matter to something before we can call its mental state a belief. On most accepted views, LLMs don’t clear that bar.

The Correctness Standard That Defines AI Beliefs

Philosophers have long debated whether beliefs are governed by norms. Flowerman organizes this debate into three camps. The first says there are no norms of belief at all. The second, “mere constitutivism,” says truth is the correctness standard for belief in a definitional sense: part of what makes something a belief is that it counts as correct when true and incorrect when false. The third, normativism, holds that this truth standard actively regulates believers and their reasoning.

The mere constitutivist position is the weakest claim Flowerman needs. It doesn’t say you ought to believe true things. It just says that if something doesn’t have truth as its standard of correctness, it isn’t a belief at all. Think of chess: a move is a chess move partly because it either follows or breaks the rules of chess. Similarly, a belief is a belief partly because it’s the kind of thing that’s correct when true.

Even philosophers hostile to belief norms tend to accept this. Flowerman quotes David Papineau conceding the point: “Do I really want to deny that it is always ‘incorrect’ to believe falsely? Well, I of course recognize a sense in which this claim is true.” The floor of the debate sits higher than most people realize.

Why Predicting Words Isn’t the Same as Believing Things

Large language models work by predicting which word is most likely to follow a sequence of previous words. They don’t check their outputs against reality. On the strongest skeptical view, LLMs have no internal representations at all. Murray Shanahan has argued that a bare-bones model “is not in the business of making judgements. It just models what words are likely to follow from what other words.”

But philosophers Daniel Herrmann and Benjamin Levinstein have pushed back, arguing that LLMs may develop internal representations of features like color and spatial relationships. These representations, they suggest, might track truth as a useful strategy for more accurate prediction. If so, the internal workings of an LLM might be truth-sensitive in some way. This is the strongest challenge for anyone denying AI beliefs. Grant it entirely, Flowerman says, and the argument still doesn’t work.

The Instrumental Truth Problem

Suppose LLMs do internally track truth as a means to better prediction. Does that make their internal states beliefs? The reason it doesn’t comes back to the correctness standard. A belief is correct if and only if it is true. But an LLM’s internal truth-tracking representation has a different standard: its job is to help produce an accurate probability distribution for the next word. A representation that gets the truth wrong but still produces the statistically correct output hasn’t failed on its own terms.

Flowerman formulates the rival standard: an LLM’s internal representation “is correct if and only if it successfully promotes the creation of an accurate probability distribution for predicting the next token.” Different correctness standards mean different kinds of mental states. As Ralph Wedgwood has argued, we individuate attitudes by the normative concepts they satisfy. Pascal Engel puts the categorical point sharply: “it does not make sense to say that in some circumstances, the correctness of a belief is truth, and in others, depending on the aim, it is not truth.”

For human believers, truth’s role in deliberation isn’t merely instrumental. Nishi Shah makes a similar point: the shift from asking “should I believe this?” to asking “is it true?” isn’t a quirk of psychology but something demanded by the nature of deliberation itself. LLMs have no such process. Truth enters their operation, if it enters at all, only as a tool for achieving something else.

Intentions Depend on AI Beliefs Too

The consequences extend beyond belief. Michael Bratman has argued that intentions are subject to norms of consistency and means-end coherence. On one prominent view, these norms derive from norms on belief: intentions involve beliefs, and beliefs must be rational and coherent. If LLMs can’t have beliefs, they can’t have intentions either.

Without intentions, it becomes unclear how to say that AI systems “deceive,” “plan,” or act for “reasons.” Whether you can trust an AI beyond mere reliance, whether it can mislead you, whether safety concerns about AI deception are well-formed: all of this depends on whether the system has something that genuinely qualifies as a belief.

Bratman observes that the planning framework of human agency “is a deeply entrenched framework for us, one that is integrated within much that is humanly significant.” For humans, aiming at truth in belief isn’t optional; it’s part of what makes us the kind of agents we are. For LLMs, any relationship to truth would be contingent, adopted because it improves prediction, not because the system’s nature demands it.

Why This Matters

You might think this is all a terminological dispute. Call the LLM’s internal states “beliefs” or “shmeliefs”; as long as they play the same functional role, who cares? Flowerman argues that we should. If LLMs don’t have beliefs, then the technical literature attributing knowledge, lying, and deception to language models uses those terms without the normative weight they appear to carry. An LLM that produces a false output hasn’t lied, because lying requires believing the opposite of what you say. It hasn’t hallucinated in any meaningful cognitive sense. If nothing in the architecture tracks truth as a constitutive aim, persistent hallucinations aren’t a bug to be fixed. They’re a feature of a system that was never in the truth business.

This doesn’t make LLMs useless. Flowerman is careful to note that the absence of belief doesn’t disqualify these systems from being valuable. But it changes how we should interpret their outputs and think about safety. Before we can settle questions about AI trust and AI deception, we may need to settle a question philosophers have worked on for decades: what it means to believe anything at all.

dailyneuron.com/ai-beliefs-imp…
4
0
12
1.5K
Cesspool @CesspoolOnline
Dave Smith: "Candace Owens is bigger than she's ever been before. I know I'm bigger than I've ever been before. Ben Shapiro is weaker and more of a laughing stock than ever before."
209
296
6.3K
200.9K
Furkan Gözükara @FurkanGozukara
Dave Smith says America cannot afford empire anymore. A country where a worker earning $70K cannot buy an $800K house should not be pretending it can police the planet.
160
927
11.5K
538.8K
Eric Spracklen 🇺🇸 @EricSpracklen
Truly insane that there are people that still believe this was real.
602
583
3.9K
332.5K
Mindy MF Robinson 🦄 @iheartmindy
Someone explain to me what's "historical" about doing less than what we supposedly did in the '60s with aluminum foil and curtain rods? I hope the effects are better this time; for $55 million-plus in tax money a day, they better be.
271
209
1.5K
29.5K
Ronnie Baby reposted
NRM84 @Mappy6984
Unstoppable
19
123
601
12.3K
Hoops @Hoopss
You can only choose one:
1. $5 million today
2. $5,000 a day for life
3. $500 every hour
4. $500,000 every year
184
10
383
97.9K
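For what it's worth, the four options in the poll above are easy to compare once annualized. A quick sketch (it assumes a 365-day year and ignores interest, taxes, and lifespan):

```python
# First-year value of each option in the poll (no interest or discounting).
options = {
    "$5 million today": 5_000_000,         # lump sum
    "$5,000 a day for life": 5_000 * 365,  # $1,825,000 per year
    "$500 every hour": 500 * 24 * 365,     # $4,380,000 per year
    "$500,000 every year": 500_000,
}
for name, value in sorted(options.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${value:,}")
```

At $4.38M a year, the hourly option passes the $5M lump sum in under 14 months; the lump sum only wins if money today is worth far more to you than money later.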
Owen Gregorian @OwenGregorian
Watch Out Bitcoin: Cryptography-Breaking Quantum Computers May Be Closer Than Expected, Says Caltech | Jason Nelson, Decrypt

In brief
- Caltech researchers say quantum computers may require just 10,000–20,000 qubits to crack modern cryptography.
- The work outlines a new error-correction approach for neutral-atom quantum computers.
- The advance could accelerate timelines for machines capable of running Shor’s algorithm, which threatens widely used cryptography.

---

Quantum computers capable of breaking modern cryptography may require far fewer qubits than previously believed, according to new research from the California Institute of Technology.

In the study published Monday, Caltech worked with Pasadena-based Oratomic, a quantum computing startup founded by Caltech researchers, to develop a new neutral-atom system in which individual atoms are trapped and controlled with lasers to act as qubits. Doing so could allow a fault-tolerant quantum computer to run Shor’s algorithm, which could derive private keys from the public keys used in Bitcoin’s elliptic-curve cryptography, with as few as 10,000 reconfigurable atomic qubits.

Oratomic co-founder and CEO Dolev Bluvstein, a visiting associate in physics at Caltech, said advances in quantum computing are accelerating the timeline for practical machines and increasing pressure to migrate to quantum-resistant cryptography. “People are used to quantum computers always being 10 years away,” Bluvstein told Decrypt. “But when you look at where we were a little over ten years ago, the best estimates of what would be required for Shor’s algorithm were one billion qubits at a time when the best systems we had in the lab were roughly five qubits.”

Today’s most common error-correction systems often require about 1,000 physical qubits to create a single reliable, logical qubit, the error-corrected unit used to perform calculations. That overhead has helped push estimates for practical fault-tolerant systems into the million-qubit range, slowing progress toward machines capable of running algorithms that could threaten RSA and elliptic-curve cryptography used by Bitcoin and Ethereum.

Bluvstein noted that current lab systems are already approaching, and in some cases exceeding, 6,000 physical qubits. In other words, the cryptography risk may arrive much sooner than experts previously expected. “You can really see the system size and controllability increasing over time as the required system size goes down,” he said.

In September, Caltech researchers revealed a neutral-atom quantum computer operating 6,100 qubits with 99.98% accuracy and 13-second coherence times. It was a milestone toward error-corrected quantum machines that also renewed concerns about future threats to Bitcoin from Shor’s algorithm. The threat has prompted governments and technology firms to begin migrating to post-quantum cryptography, or encryption designed to withstand quantum attacks.

Researchers, however, caution that major engineering challenges remain, including scaling quantum systems while maintaining extremely low error rates. “Just having 10,000 physical qubits is something that could happen within a year,” Bluvstein said. “But that's really not the goalpost people think it is. It’s not like when you design a computer, you just put the transistors on the chip, wash your hands, and say you’re done. It’s a highly non-trivial, extremely complicated task to actually go and build one of these.” Despite this, Bluvstein said a practical quantum computer could emerge before the end of the decade.

The news comes as Google researchers reported new findings on Tuesday, suggesting future quantum computers could break elliptic-curve cryptography with fewer resources than previously thought. That added urgency to calls for a transition to post-quantum cryptography before such machines become viable.

Although the cryptocurrency industry has increasingly begun to focus on quantum risk, Bluvstein said the risk extends far beyond blockchain networks and requires changes across much of the modern digital world. “I think the whole world’s digital infrastructure. It’s not just blockchain. It’s internet of things devices, internet communication, routers, satellites,” he said. “It spans the entire global digital infrastructure, and it’s complicated.”

decrypt.co/362988/cryptog…
7
3
8
1.3K
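The qubit estimates in the article above come down to error-correction overhead. A back-of-the-envelope sketch using the figures quoted in the piece (the ~1,000-to-1 physical-to-logical ratio and the 10,000-qubit claim; the logical-qubit count here is an illustrative assumption, not from the article):

```python
# Conventional overhead quoted in the article: ~1,000 physical qubits
# per error-corrected logical qubit.
PHYS_PER_LOGICAL = 1_000

# Illustrative assumption: a Shor-scale run needing a few thousand logical qubits.
logical_needed = 2_000

conventional_physical = logical_needed * PHYS_PER_LOGICAL
print(f"conventional estimate: {conventional_physical:,} physical qubits")  # 2,000,000

# The Caltech/Oratomic claim: ~10,000 physical qubits total for the same job.
claimed_physical = 10_000
print(f"implied overhead reduction: {conventional_physical // claimed_physical}x")  # 200x
```

That two-orders-of-magnitude gap between the million-qubit conventional estimates and the 10,000-qubit claim is why the new error-correction approach, if it holds up, moves the timeline so much.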
Ronnie Baby @dougjamesm
@MattWalshBlog First time this millennium/century/decade/year. That's a lot of firsts.
0
0
0
23
Ronnie Baby @dougjamesm
@ericford31 Tribalism, money, backlash? Combo of all? It's always Bondi or Johnson or Thune's fault. Large accounts never wanna call out the boss specifically.
3
0
9
212
Clint Russell @LibertyLockPod
Kash and Bongino framed an autistic black dude and covered for the actual perpetrator who now works for the CIA. J5 was a false flag operation and plan B if they couldn't get people to riot. They framed Trump supporters as terrorists and Trump's FBI covered it up.
Joe Hanneman 🇺🇸 @HanneReports

🚨BOMBSHELL🚨 Former Capitol Police officer Shauni Kerkhoff failed a November FBI polygraph test when asked if she placed pipe bombs on Jan. 5, 2021, a court filing states.

134
1.4K
6K
94.6K
Clint Russell @LibertyLockPod
@dougjamesm Well then, you won't like my answer. I believe Trump was used to harness the energy of right wing populism and to ultimately destroy that movement. Only to be replaced with neoconservatism. Again.
2
2
68
968
Dave W Plummer @davepl1968
99.9% of people who "experienced" the Challenger disaster saw it on replay and now remember it as live. Almost NO ONE was watching. Everyone thinks they were. It's a fascinating collective false memory.
Jeremy London @SirJeremyLondon

Anyone who experienced the Space Shuttle Challenger explosion, like I did, is probably a bit hesitant to get too excited about the Artemis II launch today. I truly hope our children don’t have to experience such tragedy. May the Universe welcome them and return them safely home 🙏

4.5K
57
1.6K
1M