Tom Fiddaman
@tomfid
5.2K posts
Bozeman, MT, USA · Joined December 2008
289 Following · 713 Followers
Sebastian Caliri@SebastianCaliri·
Single-payer is not some untested theory. We have plenty of data from Canada. It is a great solution if you want to wait 28.6 weeks (~6.6 months) for an MRI or knee replacement. Pretending there is no trade-off between cost control and timely access to care is unserious. Single-payer can bound costs, but we cannot somehow spend less and receive the same thing in return.
Sebastian Caliri tweet media
Tom Steyer@TomSteyer

We need single-payer health care. I've looked at the numbers, and there's no other way to bring costs down.

106
264
1.6K
66.9K
Tom Fiddaman retweeted
Prof Michael E. Mann@MichaelEMann·
The "super-El Nino" talk is unhelpful clickbait hype. Current consensus forecast is for a Nino3.4 index of ~1.3C to emerge later this year. That's a *moderate* El Nino; pales in comparison w/ recent (e.g. 2016 & 2024) events. The truth remains bad enough! cpc.ncep.noaa.gov/products/analy…
Prof Michael E. Mann tweet media
1
54
141
11.8K
Tom Fiddaman@tomfid·
@SkippyStone LOL your idea of a mathematical model of natural phenomena needs some help. Neanderthal "small numbers don't matter" reasoning doesn't count.
1
0
0
4
Skippy Stone - Rollin on@SkippyStone·
122 ppm is not a lot. That is the point. PER MILLION parts. I am pretty sure I have eaten an apple with radiation. Radon is virtually everywhere. The EPA says average outdoor radon is 0.000037 ppm. It will concentrate in H2O, like the water in an apple.
Tom Fiddaman@tomfid

@SkippyStone @mzjacobson That's a silly fallacy of small numbers. Would you eat an apple with 122ppm of plutonium? 122ppm over many kilometers of air is a lot. Anyway the radiative effects of gases are experimental science, so if you had evidence you could refute all sorts of things and win a Nobel.

1
0
0
31
Tom Fiddaman retweeted
Tech with Mak@techNmak·
In 1948, a 32-year-old at Bell Labs published a paper nobody fully understood. Engineers found it too mathematical. Mathematicians found it too engineering-focused. One prominent mathematician reviewed it negatively. That paper, "A Mathematical Theory of Communication," became the founding document of the digital age. The man was Claude Shannon, father of information theory.

At 21, he wrote the most important master's thesis of the 20th century. Working at MIT on an early mechanical computer, Shannon noticed its relay switches had exactly two states: open or closed. He had just taken a philosophy course introducing Boolean algebra, which also operated on two values: true and false. Nobody had ever connected these two things. His 1937 thesis proved that Boolean algebra and electrical circuits are mathematically identical, and that any logical operation can be built from simple switches. Howard Gardner called it "possibly the most important, and also the most famous, master's thesis of the century." Every digital computer ever built traces back to this insight.

At 29, he proved that perfect encryption exists. During WWII, Shannon worked on classified cryptography at Bell Labs. His work contributed to SIGSALY, the secure voice system used for confidential communications between Roosevelt and Churchill. In a classified 1945 memorandum, he mathematically proved that the one-time pad provides perfect secrecy: unbreakable not just computationally, but provably, permanently, against an adversary with infinite power. When declassified in 1949, it transformed cryptography from an art into a science and laid the foundations for DES, AES, and every modern encryption standard.

At 32, he defined what information is. His 1948 paper introduced one equation, Shannon entropy: H = −Σ p(x) log p(x). It is the average uncertainty in a probability distribution, and the minimum bits required to encode a message. Three things followed:
- He defined the bit, the fundamental unit of all information. His colleague John Tukey coined the name.
- He proved the channel capacity theorem: every communication channel has a maximum rate of reliable transmission. You can approach it; you can never exceed it.
- He unified telegraph, telephone, and radio into a single mathematical framework for the first time.
Robert Lucky of Bell Labs called it the greatest work "in the annals of technological thought."

Where his equation lives in AI today: cross-entropy loss, the function training every classifier and language model, is derived directly from H. Decision tree splits use information gain, which is H applied to data. Perplexity, the standard LLM evaluation metric, is an exponentiation of cross-entropy. Every time a neural network trains, Shannon's formula runs inside it.

He also built the first AI learning device. In 1950, Shannon built Theseus, a mechanical mouse that navigated a maze through trial and error, learned the correct path, and repeated it perfectly. Mazin Gilbert of Bell Labs said: "Theseus inspired the whole field of AI." That same year he published the first paper on programming a computer to play chess. He co-organized the 1956 Dartmouth Workshop, the founding event of AI as a field.

The man: he rode a unicycle through Bell Labs hallways while juggling. He built a flame-throwing trumpet, a rocket-powered Frisbee, and Styrofoam shoes to walk on the lake behind his house. He called his home Entropy House. When asked what motivated him: "I was motivated by curiosity. Never by the desire for financial gain. I just wondered how things were put together."

In 1985, he appeared unexpectedly at a conference in Brighton. The crowd mobbed him for autographs. Persuaded to speak at the banquet, he talked briefly, then pulled three balls from his pockets and juggled instead. One engineer said: "It was as if Newton had showed up at a physics conference."

He died in 2001 after a decade with Alzheimer's, the cruel irony of information slowly leaving the mind of the man who defined what information was. Claude, the AI model, is named after Claude Shannon, the mathematician who laid the foundation for the digital world we rely on today.
Tech with Mak tweet media
193
2.1K
7.6K
443.4K
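The entropy equation in the thread above, H = −Σ p(x) log p(x), and its link to cross-entropy loss can be checked in a few lines. This is a minimal illustrative sketch, not from the thread; the function names are mine:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p(x) log2 p(x), in bits."""
    return -sum(px * math.log2(px) for px in p if px > 0)

def cross_entropy(p, q):
    """H(p, q) = -sum p(x) log2 q(x): expected bits to encode samples
    from p using a code optimized for q. Always >= H(p)."""
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

fair = [0.5, 0.5]    # fair coin: maximal uncertainty for two outcomes
biased = [0.9, 0.1]  # biased coin: less uncertain

print(entropy(fair))                      # 1.0 bit
print(round(entropy(biased), 3))          # 0.469 bits
print(cross_entropy(fair, biased) > entropy(fair))  # True
```

Because cross-entropy is never below entropy (Gibbs' inequality), minimizing it drives the model distribution q toward the data distribution p, which is why the same H sits inside classifier and language-model training losses.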
Tom Fiddaman retweeted
Judea Pearl@yudapearl·
I agree with almost all claims of this paper, with the exception of this: "Practitioners of causal calculus propose that it to be a necessary ingredient of virtually every causal inference including that from tightly controlled randomized studies."

All CI practitioners that I've met would agree that if we have a well-conducted randomized study, you can get causal effects directly from the experiment; no need for causal calculus. Such needs surface when you want to do more than just find one causal effect. For example, suppose you want to combine findings of two randomized studies, conducted on different variables and perhaps diverse populations. I have not seen a single mortal capable of doing it without the calculus, and that includes mortals who claim to be doing "meta-analysis".

There is also a foundational problem that is answerable only via the calculus: what guarantees us that randomized experiments yield causal effects? The @Bookofwhy provides a formal proof, which I haven't seen elsewhere, not even in Fisher. But this should not bother practicing trialists; they can benefit from the proof and pretend they don't need causal analysis because, obviously, RCTs give us causal effects. Done. As a computer scientist, I couldn't take it for granted.
5
6
37
4.6K
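Pearl's point that a well-conducted randomized study yields a causal effect directly can be illustrated with a difference-in-means sketch on simulated data (hypothetical numbers, not from the thread): randomization makes treatment independent of everything unmeasured, so no causal calculus is needed for this single effect.

```python
import random

random.seed(0)  # reproducible illustration

TRUE_EFFECT = 3.0  # the causal effect the experiment should recover

def outcome(treated):
    """One hypothetical subject: unobserved variation plus the effect."""
    baseline = random.gauss(10, 2)  # confounders/noise we never measure
    return baseline + (TRUE_EFFECT if treated else 0.0)

# Randomized assignment: treatment is independent of the baseline,
# so a plain difference in means estimates the average causal effect.
treated = [outcome(True) for _ in range(5000)]
control = [outcome(False) for _ in range(5000)]

ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
print(round(ate_hat, 2))  # close to TRUE_EFFECT = 3.0
```

Combining such estimates across studies on different variables or populations is exactly where, as Pearl says, the calculus becomes unavoidable; this sketch only covers the single-experiment case.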
Tom Fiddaman retweeted
Simon Owens@simonowens·
Nieman Lab analyzed the tweets of major news organizations that link back to their websites and compared them to a handful of “breaking news” accounts that almost never include links. It found that tweets with links are almost certainly being suppressed niemanlab.org/2026/04/do-lin…
29
347
898
291.9K
Tom Fiddaman@tomfid·
@jcarterwil @mrbcyber @vntrcartography Winning through economics has several advantages: more immediate wealth, no generational hatred from people you bomb, and ultimately a bigger industrial base supporting your military aspirations.
2
0
0
28
Carter Williams@jcarterwil·
@mrbcyber @vntrcartography Nice confirmation. It was understood before this that their military is far weaker than characterized. China really has little interest in conflict. They would rather win through economics.
2
0
1
246
Michael Ron Bowling@mrbcyber·
The data from the hack at the Chinese super computer center included missile schematics and fusion simulations. If, as claimed, the hacker had access to this information for months, it shows massive security issues. Along with the recent purges and weapons failures, this hack shows serious problems in China's military industrial complex.
Michael Ron Bowling tweet media
UnveiledChina@Unveiled_ChinaX

If this is real, it could be one of the largest data breaches in China’s history. A hacker group claims it extracted over 10 petabytes of data from a state-run supercomputing facility, widely believed by experts to be the National Supercomputing Center in Tianjin. This center supports thousands of clients, including research institutes, aerospace programs, and defense-linked organizations.

What’s reportedly in the data:
- Documents marked “secret” in Chinese
- Missile and bomb schematics
- Aerospace and aviation research
- Bioinformatics and fusion simulation data
- Files linked to major state entities like AVIC and COMAC

Cybersecurity experts who reviewed sample data say it matches what you would expect from such a facility, though the full breach is not independently verified. Even more concerning:
- The attacker claims access lasted months without detection
- Sample datasets were posted online via Telegram
- Full access is reportedly being sold for hundreds of thousands of dollars in crypto

At this stage, the scale and origin are still being verified. But if even partially true, it points to a serious vulnerability in infrastructure tied to China’s scientific and defense ecosystem. If a centralized system like this can be penetrated, what does that say about the security of the data it was processing? #China #Cybersecurity #CCP #DataBreach #Geopolitics #Tech cnn.com/2026/04/08/chi…

55
639
3.5K
262.1K
Tom Fiddaman@tomfid·
@SkippyStone @mzjacobson That's a silly fallacy of small numbers. Would you eat an apple with 122ppm of plutonium? 122ppm over many kilometers of air is a lot. Anyway the radiative effects of gases are experimental science, so if you had evidence you could refute all sorts of things and win a Nobel.
0
0
0
46
Skippy Stone - Rollin on@SkippyStone·
Give me a bit to throw some stats behind my statements. I have been rather tied-up with Harp-moose oil tanker delays. However, let us start with the human carbon pollution percentage of atmosphere. The difference in human CO2 and “ice records” is 122 PPM. PARTS PER MILLION. If you believe these numbers, humans have increased “pollution” by 0.000122. BUT we are warming the planet at 100 times that rate. GIVE ME A BREAK.
1
1
0
22
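The ppm arithmetic in the thread above is easy to check. Assuming the thread's 122 ppm increase, a preindustrial baseline of roughly 280 ppm, and a rough atmospheric mass of 5.15e18 kg (all figures approximate and mine, not from the thread), the "small" mole fraction corresponds to a large relative increase and an enormous absolute mass of CO2:

```python
PREINDUSTRIAL_PPM = 280.0    # approximate ice-core CO2 baseline
DELTA_PPM = 122.0            # increase cited in the thread
ATM_MASS_KG = 5.15e18        # rough total mass of Earth's atmosphere
M_CO2, M_AIR = 44.01, 28.97  # molar masses (g/mol), CO2 vs dry air

fraction = DELTA_PPM / 1e6                       # added mole fraction
relative_increase = DELTA_PPM / PREINDUSTRIAL_PPM

# Convert the mole-fraction increase to an absolute mass of CO2.
added_co2_kg = ATM_MASS_KG * fraction * (M_CO2 / M_AIR)

print(fraction)                           # 0.000122, the "small" number
print(round(relative_increase * 100, 1))  # ~43.6% relative increase
print(round(added_co2_kg / 1e12))         # roughly 950 billion tonnes of CO2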
Tom Fiddaman@tomfid·
@SkippyStone @mzjacobson "lying about weather to claim SUVs are warming the planet" is sufficient evidence of foolishness in this case, though I'd be interested to hear your particular take.
1
0
1
15
Skippy Stone - Rollin on@SkippyStone·
@tomfid @mzjacobson Too funny. Typical interwebz. Without knowing anything about a person, you assume you are the more literate because YOU must be right. I don't need that contrived reassurance. I indicated I concur with reduction of hydrocarbon use, but that it has nothing to do with weather.
1
0
0
14
Tom Fiddaman@tomfid·
@StephenKing True, but if climate impacts were priced into the fuel it would be double that, so this is nothing to whine about. A war is a stupid way to implement a carbon tax though.
0
0
0
10
Stephen King@StephenKing·
In February, the average price of a gallon of regular gas in the United States was $2.91. Now it’s $4.22. That's what you get when you elect an idiot as president of the United States and allow him to start a war without congressional approval.
5K
8.3K
50.7K
1.2M
Tom Fiddaman@tomfid·
@DannyTenenbaum I doubt increased supply is the only driver - probably has a lot to do with high interest rates, property taxes, and post-covid cooldown.
1
0
0
108
Mark Z. Jacobson@mzjacobson·
@SkippyStone But you are doing climate science in your head, pretending you can calculate more variables than a computer model, then drawing a conclusion.
2
1
51
898
Tom Fiddaman@tomfid·
@mzjacobson Explain the distortion? I'm proposing that it's possible to present exactly the same information in a way that doesn't reinforce the corrosive thought pattern of making long linear extrapolations from short series. Deniers feed on innumeracy - don't do their job for them.
0
0
0
3
Mark Z. Jacobson@mzjacobson·
@tomfid You are pro-distortion of reality. Sorry. The graph is correct and it is inappropriate for you to try to manipulate data to pretend this is not the trajectory different countries are on.
1
0
0
11
Tom Fiddaman@tomfid·
@mzjacobson That's another silly extrapolation. I'm pro-climate-action, pro-wind, pro-solar. I just happen to be anti-chartjunk too. The chart's not strictly wrong, but it reinforces innumerate thought patterns that are the bread and butter of climate denial.
1
0
0
12
Mark Z. Jacobson@mzjacobson·
@tomfid Oh but you prefer that no-one have any information, that we all pretend that Canada is moving forward really fast.
1
0
0
15
Tom Fiddaman@tomfid·
@mzjacobson Extrapolating linearly from 3 data points is kind of silly, whether it's a decade or a century. And that's not the only problem with the headline and the chart. The underlying point (Canada slow) could be shown in a better way.
1
0
1
13
Mark Z. Jacobson@mzjacobson·
Unfortunately, based on the current rate of installation of WWS in Canada, Canada is not expected to reach 100% WWS across all sectors until after 2300. Meanwhile, China is projected to get there based on current installations by 2051. web.stanford.edu/group/efmh/jac…
Mark Z. Jacobson tweet media
6
15
42
1.6K
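The constant-rate extrapolation debated in this thread can be sketched in a few lines; the shares and rates here are made up for illustration and are not Jacobson's data. The fragility Fiddaman objects to is visible in the output: a small change in the estimated annual rate moves the projected completion date by centuries.

```python
def years_to_target(current_share, annual_gain_pts, target=100.0):
    """Naive linear extrapolation: years until `target` percent share,
    assuming the current installation rate continues forever."""
    if annual_gain_pts <= 0:
        return float("inf")
    return (target - current_share) / annual_gain_pts

# Hypothetical shares/rates: 10% done, at 0.3 vs 1.5 points per year.
print(round(years_to_target(10.0, 0.3)))  # ~300 years
print(round(years_to_target(10.0, 1.5)))  # 60 years
```
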
Tom Fiddaman@tomfid·
It's all theft. Incredibly useful theft at times, but theft nonetheless. Perhaps the remedy is to deny copyright and patent protections to all models?
Nav Toor@heynavtoor

🚨BREAKING: Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now. Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

Researchers at Stony Brook University and Columbia Law School just proved it. They fine-tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case, the kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks. The models started reciting copyrighted books from memory. Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels, word for word.

Then it got worse. The researchers fine-tuned the models on the works of only one author, Haruki Murakami. Just his novels, nothing else. It unlocked verbatim recall of books from over 30 completely unrelated authors. One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time; the fine-tuning just removed the lock. Your book might be in there right now, and you would never know it unless someone looked.

Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

Then the researchers compared the three models: GPT-4o, Gemini, DeepSeek. Three different companies, three different countries. They all memorized the same books in the same regions, with a correlation of 0.90 or higher. That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3, over 190,000 copyrighted books obtained from pirated websites.

Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns, not copies, and that no book is stored inside the weights. This paper says that is a lie. The books are still inside. And researchers just pulled them out.

0
0
0
20
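The post doesn't describe the paper's exact extraction metric, but verbatim overlap of the kind it claims (unbroken spans of hundreds of words) can be measured with a longest-common-run check over word tokens. A minimal sketch; `longest_verbatim_span` is a name I've made up:

```python
from difflib import SequenceMatcher

def longest_verbatim_span(generated, source):
    """Length (in words) of the longest word-for-word run shared by the
    generated text and the source: one crude way to flag the kind of
    verbatim reproduction described above."""
    g, s = generated.split(), source.split()
    match = SequenceMatcher(None, g, s, autojunk=False).find_longest_match(
        0, len(g), 0, len(s))
    return match.size

src = "it was the best of times it was the worst of times"
gen = "the model wrote it was the best of times before diverging"
print(longest_verbatim_span(gen, src))  # 6 ("it was the best of times")
```

Run against a whole novel, a span in the hundreds of words, as the paper reportedly found, is essentially impossible without memorization of the source text.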