saustermann

2.9K posts

@SloanAustermann

proud Laniakean | Nullius in Verba | In Claude We Trust

Joined September 2011
227 Following · 181 Followers
saustermann retweeted
Hasan alrabay @HasanEssam29636
One of the most terrifying images in history: a transformation from life to death. Gaza in 2023 and 2026!
[image]
11.5K replies · 64.9K reposts · 145.8K likes · 8.2M views
saustermann retweeted
Mathieu @miniapeur
[image]
24 replies · 1.1K reposts · 26.3K likes · 295.2K views
saustermann retweeted
Bernie Sanders @BernieSanders
It is time for the U.S. to end military aid to Israel. But we're not going to wait 10 years to do it. The time to stop arming Netanyahu and hold him accountable for his crimes against humanity is NOW.
60 Minutes @60Minutes

Israeli Prime Minister Benjamin Netanyahu tells 60 Minutes he wants Israel to eventually stop relying on U.S. military aid: “It's time that we weaned ourselves from the remaining military support.” 60Minutes.com

2.6K replies · 8.9K reposts · 42.6K likes · 1.8M views
saustermann retweeted
chester @chesterzelaya
has anyone checked in on this?
[image]
TFTC @TFTC21

Anthropic just published a support page that should terrify anyone holding its shares on the secondary market. "Any sale or transfer of Anthropic stock, or any interest in Anthropic stock, that has not been approved by our Board of Directors is void and will not be recognized on our books and records." Void. Not restricted. Not pending review. Void.

That means if you bought Anthropic shares through Forge, Hiive, or any other secondary platform without board approval, you are not a stockholder. You have no stockholder rights. Your transaction is invalid.

It gets worse. Anthropic says it does not permit SPVs to hold its stock. Any transfer to an SPV is void. Investment funds claiming to offer indirect exposure are "most likely relying on mechanisms that attempt to circumvent our transfer restrictions." Forward contracts, tokenized securities, synthetic exposure products, all of it potentially worthless. Their advice to investors: "Assume that it is invalid."

There is a multi-billion dollar secondary market in Anthropic shares right now. Platforms are pricing the stock at $265-$1,400+ per share based on a $380 billion valuation. Real people have put real money into these positions. And Anthropic just told them none of it counts.

This is the purest possible illustration of counterparty risk. You can buy a share of a company and have the company itself declare your ownership void because you bought it through the wrong channel.

50 replies · 96 reposts · 4.2K likes · 983.3K views
saustermann retweeted
Doc Strangelove @DocStrangelove2
Saying "this is not a golden calf" as you put up a golden calf doesn't cancel that fact out.
Pastor Mark Burns @pastormarkburns

Today at Trump National Doral Miami, we witnessed an unforgettable moment with the dedication of the 22-foot statue honoring President Donald J. Trump. Let me be clear: this is not a golden calf. We worship the Lord Jesus Christ and Him alone. This statue is a celebration of life. It is a symbol of resilience, freedom, patriotism, strength, and the will power to keep fighting for the future of America. It also stands as a reminder of the hand of God and His protection over President Trump’s life. Time and time again, when his life was threatened, God’s mercy prevailed.

Today was not just a ribbon cutting. It was the public display of a powerful movement that has spread across America and around the world. I was deeply honored to serve as President Trump’s main point of contact throughout this process, and I do not take that assignment lightly.

I want to personally thank Ash, Dustin Stockton, Brock Pierce, Hershey Friedman, Yaakov Filitchkin, Sam, Jack, and the 6,000+ Patriots who donated, believed, sacrificed, and made this historic moment possible. Thank you to the entire Trump Doral team for your incredible hospitality and excellence. And thank you, President Donald J. Trump, for calling me today and speaking to the crowd. We are forever grateful.

God bless President Trump. God bless every Patriot. And God bless the United States of America. 🇺🇸 #PresidentTrump #SpiritualDiplomats #TrumpDoral #TrumpStatueDedication #AmericaFirst #PatriotMovement #FaithFreedomPatriotism

161 replies · 3K reposts · 26.6K likes · 768.9K views
saustermann retweeted
Tech Brew ☕ @techbrewmb
Apple will pay $250 million to settle a class-action suit accusing it of misleading US customers about AI features on new iPhones.

Who's eligible:
• Any iPhone 16 model
• iPhone 15 Pro, Pro Max
• Purchased between 6/10/24 - 3/29/25

That comes out to roughly 37 million devices
[3 images]
97 replies · 185 reposts · 5K likes · 1.2M views
saustermann retweeted
Bob Golen @BobGolen
I keep getting etymology and entomology confused. Words cannot describe how much this bugs me.
110 replies · 4.8K reposts · 36.1K likes · 438.2K views
saustermann @SloanAustermann
@nonregemesse the botox epidemic in hollywood starts to make a lot more sense when you consider that frozen-faced actors are a lot easier to replace with AI than ones who can actually emote
0 replies · 1 repost · 10 likes · 10K views
ben hylak @benhylak
i know a lot of people who love this book, and none of them have finished it.
Ihtesham Ali @ihtesham2005

A 34-year-old physics graduate student spent years writing a strange 800-page book in 1979 about a logician, a Dutch artist, and a German composer. It won the Pulitzer Prize the following year. It quietly became required reading at every AI lab in the world. It is the only book in history that makes the deepest ideas in computer science feel like a dream you cannot stop thinking about.

I read it across 3 months on a single side table next to my bed and walked away seeing intelligence, consciousness, and AI in a way I cannot un-see. His name is Douglas Hofstadter. The book is called Gödel, Escher, Bach.

Almost nothing in modern AI makes sense without this book. ChatGPT, Claude, Gemini, the entire architecture of self-attention, the alignment problem, the strange feeling that LLMs sometimes seem to understand and other times seem to be playing an elaborate symbol-shuffling game, all of it traces back to questions Hofstadter laid out in a single book published before most of today's AI engineers were born.

Here is the story almost nobody tells you about how the book came to exist. Hofstadter was the son of Robert Hofstadter, who won the Nobel Prize in Physics in 1961 for measuring the size of the proton. He was supposed to follow in his father's footsteps. He started a physics PhD at the University of Oregon. He was miserable. He could not focus. He did not love the work. He kept getting pulled toward something else.

The something else was a single question that had haunted him since childhood. How can meaning emerge from meaningless symbols? Specifically, how does a brain, which is made of nothing but cells firing electrical signals at each other, produce something that feels like consciousness, like understanding, like a self? He could not let the question go. He left physics. He started writing.

The book took him years. He wrote it largely in isolation, working in the basement of his parents' house and at Indiana University, where he eventually finished it.
He thought it would be read by maybe a few hundred logicians and AI researchers. Basic Books published it in 1979 as a 777-page hardcover. The next year it won the Pulitzer Prize for general non-fiction and the National Book Award for science.

The book is structured in a way that almost no other book has ever attempted. The chapters alternate between two layers. One layer is technical chapters about logic, computability, neuroscience, and AI. The other layer is fictional dialogues between a tortoise and Achilles, characters borrowed from a paradox by Lewis Carroll. The dialogues play with the same ideas the technical chapters explain. Read in order, they do not feel like a textbook. They feel like a strange house with rooms that loop back into each other and corridors that change shape behind you.

The first thing the book does is explain Gödel's incompleteness theorems in a way no math textbook had ever managed. Kurt Gödel, an Austrian logician working in 1931, proved something that broke mathematics. He showed that any formal system powerful enough to describe arithmetic contains statements that are true but cannot be proven inside that system. Mathematics, the most certain thing humans had ever built, has holes in it that can never be filled.

Hofstadter spends hundreds of pages making you understand this proof not just as a mathematical theorem, but as a structural fact about every sufficiently complex system. Including the brain. Including any AI. The reason AI alignment is genuinely hard is not just engineering. It is structural. Any system smart enough to model itself will contain truths about itself it cannot reach from inside itself. Hofstadter showed this 50 years before AI safety was a field.

The second thing the book does is introduce his core idea. He calls it the strange loop. A strange loop is what happens when a system, by climbing through layers of itself, somehow ends up back where it started.
Escher's drawings of staircases that always go up but somehow loop back are visual strange loops. Bach's musical canons that modulate up through keys and end on the original note are auditory strange loops. Gödel's self-referential statements that talk about themselves are logical strange loops.

Hofstadter argues that consciousness is a strange loop. Your brain builds a model of the world. Inside that model, it builds a model of itself perceiving the world. Inside that self-model, it builds a model of itself thinking about itself perceiving the world. The recursion does not bottom out. The self is what the loop feels like from the inside.

This is the part that AI researchers cannot stop returning to. Modern transformer models use self-attention, which is technically a mechanism where a network attends to its own internal states across layers. Recursive reasoning, where a model thinks about its own thinking, is now a research area with its own conferences. Meta-learning, where models learn how to learn, is a direct descendant of what Hofstadter described in 1979 as the necessary structure of any conscious system. He wrote the philosophy. The engineers are now building the implementation.

The third thing the book does is the part that haunts every AI conversation today. Hofstadter argued that meaning is not something separate from symbol manipulation. It is what symbol manipulation looks like from the inside, when the manipulation is complex enough and self-referential enough. A simple lookup table does not understand anything. But a system that processes symbols at sufficient depth, with enough self-modeling, with enough recursion, starts to look identical from the outside, and possibly from the inside, to a system that understands.

This is the deepest question in modern AI. When ChatGPT generates a response, is it actually thinking, or is it just doing very fast symbol shuffling?
Hofstadter spent 800 pages arguing that the distinction may not exist at sufficient scale. If a system shuffles symbols according to the right structure, meaning is what the shuffling looks like from the inside. You can read modern debates about AI consciousness from Yann LeCun, Geoffrey Hinton, Ilya Sutskever, and David Chalmers, and you will find that they are all, in their own ways, having the argument Hofstadter framed in 1979.

The fourth thing the book did is the one that took the longest to be vindicated. Hofstadter argued, and continued arguing for decades, that the actual engine of human intelligence is not logic. It is not deduction. It is not pattern matching in any simple sense. It is analogy. The ability to see one thing as similar to another thing, to map the structure of one situation onto a different situation, is, in his view, the core of thought itself.

For decades this was unfashionable. Symbolic AI focused on logic and rules. Statistical AI focused on pattern matching. Almost nobody worked seriously on analogy. Then large language models started working. And the people who looked closely at what they were doing realized something uncomfortable. LLMs are, fundamentally, analogy machines. They learn structural patterns from text and apply those patterns by analogy to new situations. They do not deduce. They do not reason logically by default. They map the shape of one thing onto the shape of another thing and produce output that fits the new shape.

Hofstadter saw this before any of it existed. His later book Surfaces and Essences, written with Emmanuel Sander, is 600 pages defending the claim that analogy is the core of cognition. It came out in 2013. It was largely ignored. The ChatGPT release in 2022 was, in some sense, a vindication of the entire argument.

The strangest thing about reading Gödel, Escher, Bach in 2026 is realizing how lonely the book must have felt when it was written. In 1979 there was no GPT. No deep learning. No transformer.
The dominant approach to AI was symbolic logic, and most researchers thought minds were going to be programmed top-down, rule by rule, like a complicated chess engine. Hofstadter said the opposite. He said minds were emergent. They came from the bottom up. They were strange loops in complex substrates. The programmers' approach would never produce real intelligence because it was missing the recursive self-modeling that made minds real. He was right.

The book is hard. I had to use all the LLMs and NotebookLM to understand it. It is not a beach read. You do not finish it in a weekend. The math chapters require attention. The dialogues require patience. Most people who buy it never finish it. That is fine. The book is structured so that reading any 50 pages produces a permanent shift in how you think.

Bill Gates lists it among the books that shaped him. Steve Jobs read it. Almost every senior AI researcher in the world will tell you it was the book that made them fall in love with the question of intelligence in the first place.

Hofstadter himself has expressed doubts about modern LLMs. He has said they may have proven him right about analogy and wrong about consciousness at the same time. He is still writing. He is still working on the same question that pulled him out of physics 50 years ago.

The 800-page book that explained intelligence before AI existed is sitting one click away from you. Most people will never open it. The ones who do will see the world differently for the rest of their lives.
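(An aside, not part of the thread above: Gödel's self-referential construction has a compact analogue in code, the quine, a program whose output is its own source text. This minimal Python sketch shows the shape of the strange loop the thread describes: a string of symbols that, when processed, describes exactly itself.)

```python
# A minimal quine-style strange loop: the string `s` is a template
# that, when formatted with itself, reproduces the two code lines
# below. The program's data and its description of itself coincide.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two code lines verbatim; the `%r` slot re-inserts a quoted copy of the template into the template, which is the same trick (quotation inside the quoted thing) that Gödel used to build a statement that talks about its own provability.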

53 replies · 22 reposts · 445 likes · 144K views
saustermann @SloanAustermann
“Bad” for whom? The argument from your side is that if everyone presses red, everyone survives. The same is true for blue. But now consider babies, children, or people with REALLY, REALLY bad hand-eye coordination. If we all coordinate to “just push red,” does any single person who messes up deserve to die?
0 replies · 0 reposts · 0 likes · 15 views
pigzig @pigzig2
@SloanAustermann @NapoleonBonabot I understand it, but it seems disturbing; the risk comes only from blue in the first place. The “bad button” is the blue one.
1 reply · 0 reposts · 0 likes · 20 views