Sam Patt
@SamuelPatt
7.5K posts

Rational optimist | Worked on OpenBazaar | Wrote a book about Bitcoin | I love lifting / AI / Geoguessr / programming

Joined December 2011
1.1K Following · 4.6K Followers
Pinned Tweet
Sam Patt @SamuelPatt
It's highly likely Bitcoiners are delusional, but our delusions are of a fairer and freer world, unlike the delusions of people who think the perpetuation of the status quo systems will lead to anything other than economic ruin.
Sam Patt @SamuelPatt
This is a war of choice, and choosing to spend $200 billion on this shouldn't even be a question. If Congress allows this, they shouldn't even exist.
English
0
0
1
37
Sam Patt @SamuelPatt
It's a bit unclear exactly how much damage has been done to the infrastructure, but so far it looks like something that will take months, possibly years, to fully repair. Some of the pricing is speculation about further escalation, so prices might come down somewhat once the war is no longer hot. But enough damage is already done that it seems almost certain they won't return to pre-war levels for a long time, unless we see huge demand destruction (a global depression). If the war escalates much further, a 70s-style energy crisis seems all but guaranteed.
Sol 🍂 @autumnpard
If the war in Iran continues and oil prices continue to rise, would that mean the increased costs of transport (cars, ships, planes) would be permanent? Could we expect prices to come down at some point? It worries me that we may be in a 1970s energy crisis situation.
Sam Patt @SamuelPatt
I've seen many claim "a woman invented WiFi" as a way to show women can contribute technologically. Hedy shows that some women possess the capacity to do so (as do some men), but these claims go too far in insisting it actually happened. There are plenty of real examples. Don't exaggerate.
Sam Patt @SamuelPatt
Hedy Lamarr did not invent WiFi. She and a colleague patented an impractical method for guiding torpedoes using a mechanical system much like a player piano to hop radio frequencies. It was never used. It didn't lead to WiFi.
Sam Patt @SamuelPatt
This conflates the people building the tech with the people most excited about the prospect of AI abundance. There's overlap, but I know a lot of the people involved on the abundance side - many come from the "progress studies" world and are typically more involved in economics and technology policy than they are building the tech. They're not young male progressive aspy singles. The individuals I'm thinking of are nearly all in their 30s and 40s, married, classical liberals, and people of faith. Take a look at the @abundanceinst staff page as an example. Doesn't quite fit the narrative. abundance.institute/about
Geoffrey Miller @gmiller

You can't vibe-code utopia. That's the key problem with the 'AI Abundance' narrative. The AI industry is dominated by young, male, Aspy, single, childless, 'progressive' atheists who love tech more than they love people. They think they can train AIs to deliver a utopian society for humans -- but they don't understand human nature, or history, or families, or societies. To them, 'human nature' is little more than a list of 'cognitive biases'. To them, 'human history' is little more than a catalog of oppression and irrationality. To them, 'human families' are dead weight that you escape when you leave home and move to the Bay Area. To them, 'human society' is little more than a hierarchy of founders, investors, employees, and customers. If they showed any humility about what they don't understand, and any self-awareness about their hubris, we might forgive their youthful arrogance. But, as things stand, there is no reason to trust them with any power to 'revolutionize' our entire civilization. Utopian revolutions rarely end well.

Sam Patt @SamuelPatt
@BobMurphyEcon The SOTA models are all reasoning models now, which use a lot of tokens. Inference still uses a massive amount of compute. 100B parameters on a CPU (not GPU!) is insane. Probably some serious tradeoffs there; I haven't investigated.
Robert P. Murphy @BobMurphyEcon
The massive computing power for LLMs is due to the training. Once they have the weights, running the program isn't that computationally demanding.
Guri Singh @heygurisingh

Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU. It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: Every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary: just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: Accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.

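The ternary-weights idea in the quoted thread can be sketched in a few lines. This is a toy illustration, not BitNet's actual code: the absmean-style per-tensor scale is an assumption for the sketch, and real kernels pack the ternary codes into bits rather than using int8.

```python
import numpy as np

def ternary_quantize(w):
    """Quantize a float weight matrix to {-1, 0, +1} plus one scale factor.

    Toy version of an absmean-style scheme (assumed here): divide by the
    mean absolute value, then round and clip into the ternary range.
    """
    scale = np.abs(w).mean() + 1e-8          # per-tensor scale factor
    q = np.clip(np.round(w / scale), -1, 1)  # codes are exactly -1, 0, or +1
    return q.astype(np.int8), float(scale)

def ternary_matmul(q, scale, x):
    """Matmul with ternary weights: each weight only adds, subtracts, or
    skips an input entry; a single float rescale happens at the end."""
    return (q.astype(np.float32) @ x) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
x = rng.normal(size=8).astype(np.float32)

q, s = ternary_quantize(w)
print(np.unique(q))                      # weights really are just -1, 0, +1
print(np.abs(ternary_matmul(q, s, x) - w @ x))  # per-row quantization error
```

The memory and speed claims follow from the same observation the thread makes: a ternary weight needs under 2 bits instead of 16 or 32, and the multiply in the inner loop degenerates into integer add/subtract/skip.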
Sam Patt reposted
Kelsey Piper @KelseyTuoc
My ancestors buried half their children. All mine are alive. My ancestors' house had a dirt floor. Mine is wood. I have indoor plumbing, I have hot water, I have never in my life hauled a full bucket half a mile and I probably never will.

Do you know how rare it is, in human history, for small children to wear shoes? Mine have multiple pairs. I can speak to my relatives who live thousands of miles away, for free, at any time. Video, if we want video. With machine translation, if we speak different languages.

The original Library of Congress had 740 books in it. I have more than that. If I run out of books in my home my local public library has 350,000. If I want to take a hundred books with me on vacation, they all fit on a device that fits in my purse.

I have heat in the winter and AC in the summer and a washing machine and I have never, ever, ever had to scrub a dress clean by hand in the stream. I can look up recipes from more than a hundred different countries and I've tried dozens of them. I ride a clean and modern train across my city for $4, or take a robot taxi if I'm out too late for the train. I donate $40,000 every year to the cause of getting healthcare to the world's poorest people and even after the donations I never have to think about whether I can afford a book, or a pair of shoes, or a cup of coffee.

There is a great deal more to fight for, of course. I hope that our descendants will look back on our lives and list a thousand ways they're richer. Maybe we ourselves will do that, if some of the crazier stuff comes true. But the abundance is all around you and to a significant degree you aren't feeling it only because fish don't notice water.
Sam Patt @SamuelPatt
"Why is it that making prices low would require to 'crash the economy'?" If you ask the Austrians, most would say "it wouldn't." Deflation can be good if it's a result of productivity growth or better tech. It can also be good if it's correcting malinvestments - bursting bubbles hurts, but it's better to correct quickly than keep inflating the bubble.
Sam Patt @SamuelPatt
@summerpard Imagine you're inflating a balloon. If you were inflating it at 5% a minute, but now you're inflating it at 1% a minute, the balloon is still growing. Monetary policy is designed to always inflate, even if it's only a small amount. They never let air out of the balloon.
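The balloon analogy in numbers, as a quick sketch (the rates are made up for illustration, not taken from any real CPI series):

```python
# Disinflation vs. deflation: the price level under a falling inflation RATE.
# Illustrative rates only -- inflation slows from 5% to 1% per period.
level = 100.0
for rate in [0.05, 0.05, 0.01, 0.01]:
    level *= 1 + rate       # the balloon keeps growing every period
print(round(level, 2))      # 112.47 -- slower inflation still means higher prices
```

Prices only return to the old level if the rate goes negative (deflation), which, as the tweet notes, policy is designed to avoid.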
Sol 🍂 @autumnpard
2 questions: 1) Why do prices never return after inflation drops? Can someone explain? Am I imagining this? Are there reasons? Does it depend? 2) Why is it that making prices low would require to "crash the economy"? (Not sure what are the lines of this argument)
Sam Patt @SamuelPatt
As soon as I showed an interest in prediction markets my feed was flooded with obvious garbage like this. A shame X doesn't clean this up, it's blatant. Who are the best real people to follow discussing prediction markets?
Nav Toor @heynavtoor
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
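Whatever the quoted thread's exaggerations, its scoring-incentive point is easy to illustrate. A toy expected-score function (hypothetical, not any benchmark's actual grader) shows why always guessing beats abstaining whenever "I don't know" and a wrong answer both score zero:

```python
def expected_score(p_correct, abstain, wrong_penalty=0.0):
    """Expected points on one uncertain question.

    Toy grading model (an assumption for illustration): 1 point if correct,
    -wrong_penalty if wrong, 0 if the model abstains with "I don't know".
    """
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

p = 0.2  # model is only 20% confident
print(expected_score(p, abstain=False))                     # 0.2 > 0: guessing wins
print(expected_score(p, abstain=False, wrong_penalty=1.0))  # negative: abstaining wins
```

With a zero penalty for wrong answers, guessing has positive expected score at any confidence above zero, so "always guess" is the optimal policy; a nonzero penalty flips the incentive for low-confidence questions.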
Sam Patt @SamuelPatt
The debate over whether the Constitution requires Congressional approval to go to war is hilarious to me for two reasons:

1. It obviously does. Just read it. If you're really still incapable of understanding it, then read the framers' original intent. It's clear as day.
2. If the Constitution did say that one man could unilaterally start a war of choice, that would be so insane that the proper response wouldn't be to debate the Constitution, it would be to amend it.

The idea that we would give one person the ability to use the most powerful military in world history at their choice is so anti-democratic and against common sense that it's bizarre to watch anyone defend it.
Sam Patt reposted
Justin Amash @justinamash
This might be the most basic concept in the entire Constitution. The Framers granted Congress the power to declare war. To “declare” war meant to “commence” war—to put the country in a state of war, and activate the commander in chief to take offensive action. Draft language had granted Congress the power to make war, but the Framers were concerned that such phrasing would prohibit the president even from taking defensive action without Congress’s permission. They amended the language to ensure that the president could “repel sudden attacks” while preserving Congress’s authority to initiate war.
Joe Pags Pagliarulo @JoeTalkShow

@justinamash show me specifically in the Constitution where it expressly says the Commander in Chief (Article 2 powers) has to get the permission of Congress (Article 1). I'll wait, Justin.

Sam Patt @SamuelPatt
- Compaction fails, forcing a new session
- Cron jobs don't run reliably
- Agents will just stop responding, both to me and to each other
- Telegram messaging unreliable
- Gateway UI issues (so many)
- Sessions aren't selectable in UI
Sam Patt @SamuelPatt
OpenClaw is probably the buggiest software that I've ever voluntarily kept using. The failures are infuriating, but when it's working well, it enables me to build like never before.