Armin
@thaatrandomkid

226 posts

Building a better data structure that bridges Excel financial models to modern computing | HBS + Harvard Law | Chess Player, Gamer, Photographer
New York, NY · Joined April 2026
145 Following · 6 Followers
Armin @thaatrandomkid ·
A judge tells a prisoner: “You will be hanged at noon on one day next week. The day will be a surprise—you won’t know which day until the executioner arrives.”

The prisoner reasons: “They can’t hang me Friday. If I survive until Thursday night, only Friday remains—I’d know it’s coming. Not a surprise. Friday is eliminated. But now Thursday is the last possible day. If I survive until Wednesday night, I’d know Thursday is the day. Not a surprise. Thursday is eliminated. By the same logic, Wednesday is eliminated. Then Tuesday. Then Monday. They can’t hang me at all! Any day I could deduce in advance.”

The prisoner relaxes, confident in his proof. On Wednesday, the executioner arrives at noon. The prisoner is completely surprised.

The judge told the truth. The prisoner’s logic was flawless. Yet the reasoning led to a false conclusion. What went wrong?

This paradox has generated extensive philosophical literature with no consensus. Some say the judge’s statement was self-contradictory. Some say the prisoner’s backward induction fails because “surprise” becomes self-referential. Some invoke the distinction between knowledge levels.

One analysis: the prisoner’s proof assumes his reasoning is correct. But if his reasoning proves no hanging, and he believes his reasoning, he’ll be surprised by ANY hanging—making every day a valid surprise day again. The proof undermines itself.

The paradox touches on self-reference, the limits of logical deduction, and how knowledge about knowledge creates loops that logic can’t escape.
Armin @thaatrandomkid ·
A superintelligent predictor offers you a choice. Two boxes sit before you:

• Box A: Transparent, contains $1,000
• Box B: Opaque, contains either $1,000,000 or nothing

You can either:
1. Take only Box B
2. Take both boxes

Here’s the catch: the predictor has already made a prediction about your choice. If the predictor predicted you’d take only Box B, it placed $1,000,000 in Box B. If it predicted you’d take both boxes, it left Box B empty. The predictor has done this thousands of times before and has never been wrong.

What do you do?

Argument for taking only Box B: The predictor is essentially perfect. If you take only B, the predictor predicted that, so B contains $1,000,000. If you take both, the predictor predicted that, so B is empty and you only get $1,000. Take only B and walk away with a million.

Argument for taking both boxes: The prediction has already been made. The money is either in Box B or it isn’t—your choice now can’t change the past. Whatever’s in Box B, taking both gets you $1,000 more. You’d be leaving free money on the table. Take both.

Both arguments seem airtight. They give opposite answers.

This has caused decades of debate among philosophers and decision theorists. Causal decision theory says take both—your action can’t change what’s already in the box. Evidential decision theory says take one—your action reveals the prediction, and one-boxers find themselves millionaires.

Simulations (like the sketch below) bear it out: if the predictor is actually reliable, one-boxers get rich while two-boxers get $1,000. But it FEELS irrational to leave $1,000 sitting right there…

There’s no consensus. The paradox reveals that our theories of rational choice are incomplete or contradictory. We don’t fully understand what it means to make the “right” decision.
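A minimal sketch of that kind of simulation. The 99% accuracy figure and the function name are illustrative assumptions, not part of the original setup:

```python
import random

def newcomb_payout(one_box: bool, predictor_accuracy: float = 0.99) -> int:
    """Payout for one round, given the agent's fixed strategy."""
    # The predictor guesses the agent's choice with the assumed accuracy.
    predicted_one_box = one_box if random.random() < predictor_accuracy else not one_box
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if one_box else box_b + 1_000

trials = 100_000
one_box_avg = sum(newcomb_payout(True) for _ in range(trials)) / trials
two_box_avg = sum(newcomb_payout(False) for _ in range(trials)) / trials
print(f"average one-boxer payout: ${one_box_avg:,.0f}")  # roughly $990,000
print(f"average two-boxer payout: ${two_box_avg:,.0f}")  # roughly $11,000
```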
Armin @thaatrandomkid ·
I’ll flip a fair coin repeatedly until it lands heads. If heads appears on flip N, I pay you $2^N.

Heads on flip 1: $2. Heads on flip 2: $4. Heads on flip 3: $8. Heads on flip 10: $1,024. Heads on flip 20: $1,048,576.

How much would you pay to play this game?

Calculate the expected value:
E = (1/2)($2) + (1/4)($4) + (1/8)($8) + (1/16)($16) + …
E = $1 + $1 + $1 + $1 + …
E = ∞

The expected payout is infinite. Rationally, you should pay ANY finite amount to play—a million dollars, a billion dollars, your entire net worth. But no sane person would pay even $50.

This is the St. Petersburg Paradox. Expected value—the foundation of decision theory—gives absurd advice. Why the disconnect?

Several explanations exist. The most compelling: diminishing marginal utility. The jump from $0 to $1,000 matters more to you than the jump from $1,000,000 to $1,001,000. If utility scales as log(wealth), the expected UTILITY is finite, and you’d pay roughly $10-20.

Another explanation: you can’t actually be paid infinite amounts. Any real game has a maximum payout (the casino’s bankroll). Cap the payout at $2^40 and the expected value drops to ~$40, as the snippet below confirms.

But the paradox reveals something deep: expected value alone isn’t sufficient for rational decision-making. Variance, utility functions, practical constraints, and bounded reasoning all matter. Pure probability theory gives answers humans rightly reject.
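A quick check of the bankroll-cap explanation, as a sketch: I assume that once the bank’s maximum is reached, any longer run of tails simply pays that maximum.

```python
def capped_expected_value(cap_exponent: int) -> float:
    """Expected payout when the bank can pay at most $2**cap_exponent."""
    ev = sum((0.5 ** n) * (2 ** n) for n in range(1, cap_exponent + 1))
    # If heads still hasn't appeared after `cap_exponent` flips,
    # the bank pays its maximum no matter when heads finally lands.
    ev += (0.5 ** cap_exponent) * (2 ** cap_exponent)
    return ev

for cap in (10, 20, 30, 40):
    print(f"bankroll $2^{cap}: fair price ${capped_expected_value(cap):.0f}")
# bankroll $2^40: fair price $41 -- the "~$40" above
```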
Armin @thaatrandomkid ·
A trend appears in every subset of data. But when you combine the subsets, the trend reverses. This is not a trick or an edge case. It happens constantly in real-world data.

Real example: UC Berkeley gender bias (1973). Berkeley was sued for gender discrimination in graduate admissions. The overall numbers looked damning:

• Men: 44% admitted
• Women: 35% admitted

Clear bias against women, right? But when researchers examined individual departments: in almost every department, women were admitted at equal or HIGHER rates than men.

How is this possible? Women applied disproportionately to competitive departments (like English) with low admission rates. Men applied disproportionately to less competitive departments (like engineering) with high admission rates. The “bias” wasn’t in admissions decisions—it was in application patterns. Aggregating the data created a phantom effect.

This is Simpson’s Paradox. The same data can tell opposite stories depending on how you slice it. And there’s no mathematical rule for which slicing is “correct”—that requires causal reasoning about what’s actually happening.

Statistics alone cannot tell you the truth. You need to understand the structure of reality.
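A toy reproduction of the reversal with made-up numbers (not the actual Berkeley counts):

```python
# Made-up admissions numbers (NOT the real Berkeley data): women are admitted
# at a higher rate in BOTH departments, yet at a lower rate overall.
departments = {
    # dept:      (men_applied, men_admitted, women_applied, women_admitted)
    "easy dept": (80, 48, 20, 14),  # men 60%, women 70%
    "hard dept": (20, 4, 80, 24),   # men 20%, women 30%
}

m_app = m_adm = w_app = w_adm = 0
for dept, (ma, mad, wa, wad) in departments.items():
    print(f"{dept}: men {mad / ma:.0%}, women {wad / wa:.0%}")
    m_app, m_adm = m_app + ma, m_adm + mad
    w_app, w_adm = w_app + wa, w_adm + wad

print(f"overall: men {m_adm / m_app:.0%}, women {w_adm / w_app:.0%}")
# overall: men 52%, women 38% -- the per-department trend reverses
```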
Armin @thaatrandomkid ·
You’re on a game show. Three doors. Behind one door is a car. Behind the other two are goats.

You pick Door 1. The host, Monty Hall, opens Door 3, revealing a goat. He always opens a door with a goat—he knows where the car is and never reveals it. Now he asks: “Do you want to switch to Door 2?”

Intuition says it doesn’t matter. Two doors left, 50/50 chance. Intuition is wrong. You should ALWAYS switch. Switching wins 2/3 of the time.

Here’s why. When you first picked, you had a 1/3 chance of picking the car and a 2/3 chance of picking a goat. If you picked the car (1/3 probability), switching loses. If you picked a goat (2/3 probability), Monty reveals the OTHER goat, and switching wins the car.

Switching wins whenever your initial pick was wrong. Your initial pick was wrong 2/3 of the time. So switching wins 2/3 of the time.

Monty’s reveal isn’t random—he’s giving you information. He’s forced to avoid the car, which concentrates the probability onto the door he didn’t open.

When Marilyn vos Savant published this answer in 1990, she received over 10,000 letters, many from PhD mathematicians, telling her she was wrong. She wasn’t. The simulation below bears it out every time: switch and you win twice as often.
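The simulation itself, a minimal sketch of the game as described, with Monty always revealing a goat:

```python
import random

def play(switch: bool) -> bool:
    """One round of the game; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a goat door that isn't the player's pick.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
print(f"stay wins:   {sum(play(False) for _ in range(trials)) / trials:.1%}")  # ~33.3%
print(f"switch wins: {sum(play(True) for _ in range(trials)) / trials:.1%}")   # ~66.7%
```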
Armin @thaatrandomkid ·
Imagine a hotel with infinitely many rooms: Room 1, Room 2, Room 3, and so on forever. Every room is occupied. A new guest arrives.

In a finite hotel, you’d turn them away. No vacancy. In Hilbert’s Hotel, you announce: “Everyone move to the next room.” Guest in Room 1 moves to Room 2. Guest in Room 2 moves to Room 3. Every guest moves from Room N to Room N+1. Now Room 1 is empty. The new guest checks in. Infinity + 1 = infinity.

It gets crazier. A bus arrives with infinitely many new guests. No problem: “Everyone move to the room with double your current number.” Room 1 → Room 2. Room 2 → Room 4. Room N → Room 2N. Now all odd-numbered rooms are empty. Infinitely many vacancies for infinitely many guests. Infinity + infinity = infinity.

It gets even crazier. Infinitely many buses arrive, each carrying infinitely many passengers. Solution: use prime numbers. Current guests go to powers of 2 (2, 4, 8, 16…). Bus 1 passengers go to powers of 3 (3, 9, 27, 81…). Bus 2 passengers go to powers of 5. Bus N passengers go to powers of the (N+1)th prime. Since every integer has a unique prime factorization, no collisions occur. Infinity × infinity = infinity.

The hotel never fills. Infinity isn’t a number. It’s a structure that defies finite intuition entirely.
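A finite spot-check of the room-assignment schemes, as a sketch; the helper name is mine, and only a small prefix of guests and buses is verified.

```python
def nth_prime(k: int) -> int:
    """Tiny trial-division prime finder, fine for a demo."""
    count, candidate = 0, 1
    while count < k:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

# One new guest: room n -> n + 1 frees Room 1.
# One infinite bus: room n -> 2n frees every odd room.
# Infinitely many buses: current guest n -> 2^n; bus b, seat p -> prime(b+1)^p.
rooms = [2 ** n for n in range(1, 10)]                       # current guests
for bus in range(1, 4):                                      # a few buses...
    rooms += [nth_prime(bus + 1) ** p for p in range(1, 6)]  # ...a few seats each
assert len(rooms) == len(set(rooms))  # unique factorization: no two guests collide
print(sorted(rooms)[:10])  # [2, 3, 4, 5, 7, 8, 9, 16, 25, 27]
```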
Armin @thaatrandomkid ·
0.999… = 1

This is not an approximation. They are exactly, precisely, identically the same number.

Proof 1 (Algebraic): Let x = 0.999… Then 10x = 9.999… Subtract: 10x − x = 9.999… − 0.999…, so 9x = 9, so x = 1.

Proof 2 (Fractions): 1/3 = 0.333… Multiply both sides by 3: 3 × (1/3) = 3 × 0.333…, so 1 = 0.999…

Proof 3 (Limits): 0.999… means 9/10 + 9/100 + 9/1000 + … This is a geometric series with first term 9/10 and ratio 1/10. Sum = (9/10)/(1 − 1/10) = (9/10)/(9/10) = 1.

Proof 4 (Density): If 0.999… ≠ 1, there must be a number between them. What is it? There is none. No number exists between 0.999… and 1. If no number separates them, they’re the same number.

The confusion comes from thinking of 0.999… as a process—“approaching but never reaching.” But 0.999… isn’t a process. It’s a number. The infinite decimal is complete, not ongoing. And that completed value equals 1. Two different representations. One number.
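A small exact-arithmetic check of Proof 3: the gap between 1 and the nth partial sum is exactly 1/10^n, so the completed sum can’t be anything but 1.

```python
from fractions import Fraction

# Exact arithmetic: after n terms of 9/10 + 9/100 + ..., the gap to 1
# is exactly 1/10^n. The gap vanishes; the completed sum is exactly 1.
partial = Fraction(0)
for n in range(1, 8):
    partial += Fraction(9, 10 ** n)
    print(f"n={n}: 1 - partial = {1 - partial}")
# n=1: 1/10   n=2: 1/100   ...   n=7: 1/10000000
```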
Armin @thaatrandomkid ·
How long is Britain’s coastline? The answer depends entirely on the length of your ruler.

Measure with a 100km ruler, approximating the coast as straight segments: roughly 2,800km. Use a 50km ruler, capturing more bays and peninsulas: the length increases. Use a 10km ruler: longer still. Use a 1-meter ruler, tracing every rock and inlet: dramatically longer. Use a 1-centimeter ruler: even longer.

As your ruler shrinks, the measured length grows without bound. There is no limit.

Why? Coastlines are fractals. They have detail at every scale. Zoom in and you find more bays, more rocks, more irregularity. Unlike a smooth curve—where smaller rulers converge to a true length—jagged coastlines reveal new complexity forever.

Mandelbrot formalized this. He defined “fractal dimension”—coastlines have dimension between 1 and 2. Britain’s coast is roughly dimension 1.25. Not a line, not a plane, but something in between.

The practical answer depends on what you’re measuring for. But the mathematical answer is: coastlines have no well-defined length. The number “length of Britain’s coastline” does not exist in any objective sense.
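Real coastline data won’t fit in a snippet, so the Koch curve serves as a stand-in with the same behavior: measured length (4/3)^n at ruler size (1/3)^n, and dimension log 4 / log 3 ≈ 1.26, close to the ~1.25 quoted for Britain.

```python
import math

# Koch-curve "coastline" over a unit baseline: each time the ruler shrinks
# by a factor of 3, the measured length grows by a factor of 4/3, forever.
for n in range(9):
    ruler = 3.0 ** -n
    length = (4.0 / 3.0) ** n
    print(f"ruler {ruler:.5f}: measured length {length:.2f}")

print(f"fractal dimension: {math.log(4) / math.log(3):.2f}")  # 1.26
```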
Armin @thaatrandomkid ·
At the turn of the 20th century, mathematicians believed sets could be defined freely. Any property defines a set: the set of all red things, the set of all even numbers, the set of all sets. Russell asked one question and broke everything.

Consider: most sets don’t contain themselves. The set of all cats is not a cat. The set of all numbers is not a number. Call these “ordinary” sets. Now define R as the set of all sets that don’t contain themselves—the set of all ordinary sets. Does R contain itself?

If R contains itself, then by definition R is a set that doesn’t contain itself. Contradiction. If R doesn’t contain itself, then by definition R should contain itself. Contradiction. Both answers are impossible. The question has no answer.

This wasn’t a puzzle—it was a catastrophe. Frege had just published his life’s work formalizing mathematics using set theory. Russell’s letter arrived while the second volume was in press. Frege added a devastated appendix: “A scientist can hardly meet with anything more undesirable than to have the foundation give way just as the work is finished.”

Mathematics needed a complete rebuild. The solution was to restrict how sets can be defined. Modern set theory (ZFC) uses careful axioms that prevent self-referential definitions. R simply cannot be constructed. But the deeper lesson remained: naive intuition about collections and self-reference leads to paradox. You must tread carefully.
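A computational analogue, offered as an illustration rather than Russell’s actual set construction: model a “set” as a membership predicate, and the question “does R contain itself?” literally never finishes evaluating.

```python
import sys
sys.setrecursionlimit(50)  # fail fast instead of grinding through the default limit

def russell(s):
    """A 'set' here is a predicate: s 'contains' x when s(x) is True.
    russell 'contains' exactly the sets that do not contain themselves."""
    return not s(s)

try:
    russell(russell)  # does R contain itself?
except RecursionError:
    print("R(R) never evaluates: the definition chases itself forever")
```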
Armin @thaatrandomkid ·
Here’s a simple question: given any computer program and its input, can we determine whether it will eventually stop or run forever?

Seems like we should be able to figure this out. Just analyze the code. Turing proved it’s impossible. Not hard—impossible. No algorithm can ever solve this in general.

Here’s why. Suppose a magical program H exists that solves the halting problem. Feed it any program P and input I, and H(P, I) returns “halts” or “loops forever.” Perfectly accurate, every time.

Now build a devious new program D. When given any program P as input, D does this:
1. Run H(P, P)—ask whether P halts when fed itself
2. If H says “halts,” D loops forever
3. If H says “loops forever,” D halts

Now feed D to itself. Run D(D). What does H(D, D) say? If H says D halts, then by D’s construction, D loops forever. H was wrong. If H says D loops forever, then by D’s construction, D halts. H was wrong.

H cannot correctly answer for this input. But H was supposed to work for ALL programs. Contradiction. H cannot exist.

This isn’t a limitation of current technology. It’s a mathematical proof that some questions are undecidable—no computer, no matter how powerful, can ever answer them. And yet these questions are perfectly well-defined. The universe has hard limits on what can be computed.
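D’s construction, written out as a sketch. Here `h` is a stand-in for the impossible halt-checker, not a real algorithm:

```python
def make_d(h):
    """Given a claimed halt-checker h(program, input) -> bool,
    build the program D that h must get wrong."""
    def d(p):
        if h(p, p):       # h says: p halts when run on itself...
            while True:   # ...so d loops forever
                pass
        return "halted"   # h says: p loops forever, so d halts
    return d

# For ANY candidate h, let d = make_d(h) and consider h(d, d):
#   True  ("d halts")  => d(d) loops forever => h was wrong
#   False ("d loops")  => d(d) returns       => h was wrong
```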
Armin @thaatrandomkid ·
We dreamed of a perfect mathematical system—one where every true statement could be proven, and no contradictions existed. A complete, consistent foundation for all of mathematics. Gödel destroyed that dream.

He proved: any formal system powerful enough to describe basic arithmetic is either incomplete (some true statements can’t be proven) or inconsistent (contradictions exist). Pick your poison.

Here’s the key idea. Gödel figured out how to encode logical statements as numbers. Every symbol, formula, and proof becomes a unique integer. Mathematics can now talk about itself. Then he constructed a statement that essentially says: “This statement cannot be proven in this system.” Call it G. What happens?

Suppose G is provable. Then what it says is false—meaning it CAN be proven. But we just proved something false. The system is inconsistent. Contradiction.

Suppose G is unprovable. Then what it says is true—it really can’t be proven. So we have a true statement that the system cannot prove. The system is incomplete.

Either way, the system fails. And this isn’t a flaw we can fix. Gödel proved it holds for ANY sufficiently powerful formal system. Add G as an axiom? A new unprovable statement appears. You can never patch all the holes.

The second incompleteness theorem twists the knife: no consistent system can prove its own consistency. Mathematics can never fully verify itself from within. We wanted certainty. Gödel proved certainty is impossible.
Armin @thaatrandomkid ·
Can you write a program that examines any other program and determines whether it will eventually stop or run forever? Turing proved: no, impossible. Suppose such a program H exists. Feed H a program built to do the OPPOSITE of whatever H predicts about it. Contradiction. No universal halt-detector can exist. There are questions computers can never answer—not because we aren’t smart enough, but because mathematics itself rules it out.
Armin @thaatrandomkid ·
Any mathematical system powerful enough to describe basic arithmetic is either incomplete or inconsistent. There will ALWAYS be true statements that cannot be proven within the system. Mathematics cannot fully verify itself. Gödel proved this by constructing a statement that essentially says “This statement is unprovable”—if it’s provable, it’s false (contradiction); if it’s unprovable, it’s true. Either way, the system fails.
Armin @thaatrandomkid ·
Some infinities are bigger than others. Not poetically—mathematically.

Two infinite sets are “the same size” if you can pair them perfectly: every item in A matched to exactly one item in B, nothing left over. This is called a bijection.

The even numbers (2, 4, 6, 8…) seem like half the natural numbers (1, 2, 3, 4…). But pair 1↔2, 2↔4, 3↔6, n↔2n… and nothing’s left over. Same size infinity.

The fractions seem way bigger—infinitely many exist between any two whole numbers. But arrange them in a grid (numerator vs. denominator) and zig-zag through diagonally: 1/1, 1/2, 2/1, 3/1, 2/2, 1/3… You hit every fraction exactly once. Pair each with 1, 2, 3… Same size infinity again.

This infinity—the smallest infinity—is called ℵ₀ (aleph-null). Sets this size are “countable.” You might think all infinities are equal. Then Cantor tried pairing the natural numbers with the real numbers. And everything broke.

Here’s his proof. Suppose someone claims they’ve made a complete list pairing every natural number to a real number between 0 and 1:

1 → 0.5284617…
2 → 0.3141592…
3 → 0.7293847…
4 → 0.9999281…

…continuing forever. I’ll construct a real number that cannot be anywhere on this list. Go down the diagonal—take the 1st digit of the 1st number (5), the 2nd digit of the 2nd number (1), the 3rd digit of the 3rd number (9), the 4th digit of the 4th number (9). Diagonal: 5, 1, 9, 9… Now change every digit. Add 1 to each, wrapping 9 to 0: 5→6, 1→2, 9→0, 9→0… My constructed number: 0.6200…

Where is this number on the list? It’s not the 1st entry—it differs in the 1st decimal place. It’s not the 2nd entry—it differs in the 2nd decimal place. It’s not the Nth entry—by construction, it differs in the Nth decimal place. It’s nowhere. Any supposedly complete list is missing at least one real number. But the argument works no matter how you arrange the list—you can always construct an unlisted number.

No bijection between the naturals and the reals exists. The real numbers are “uncountable.” Their cardinality, called 𝔠 (the continuum), is strictly larger than ℵ₀.

It gets worse. Cantor also proved: for ANY set, the set of all its subsets (its “power set”) is strictly larger. The power set of the naturals? Larger than ℵ₀. The power set of THAT? Even larger. There’s no largest infinity. There’s an endless hierarchy: ℵ₀, ℵ₁, ℵ₂… each infinitely larger than the last.

(And whether 𝔠 = ℵ₁—whether there’s an infinity between the naturals and the reals—is the Continuum Hypothesis. It was proven to be unprovable from the standard axioms. Some questions in math have no answer.)

Cantor published this in 1891. The establishment attacked him viciously. Kronecker called him a “corrupter of youth.” Poincaré called set theory a “disease.” Cantor suffered breakdowns and died in a sanatorium. Today his diagonal argument is considered one of the most elegant proofs in mathematical history. And his strange, fractured infinities are the foundation of modern mathematics.
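The diagonal construction on the four listed entries, as a sketch; any finite prefix of any claimed list works the same way.

```python
# Digits after "0." for the four listed entries.
listed = ["5284617", "3141592", "7293847", "9999281"]

diagonal = [int(row[i]) for i, row in enumerate(listed)]  # 5, 1, 9, 9
missing = "".join(str((d + 1) % 10) for d in diagonal)    # 6, 2, 0, 0
print("0." + missing + "...")  # 0.6200... differs from entry n in digit n
```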
Armin @thaatrandomkid ·
Negligence requires proving fault — breach of a duty of care. Strict liability imposes liability without fault, simply for engaging in certain activities. The doctrine for abnormally dangerous activities comes from Rylands v. Fletcher and is restated in Restatement (Second) of Torts § 520.

An activity is abnormally dangerous if: (1) it involves a high degree of risk; (2) the likely harm is great; (3) reasonable care cannot eliminate the risk; (4) the activity is not a matter of common usage; (5) the activity is conducted in an inappropriate place; and (6) the activity’s value to the community doesn’t outweigh its risks. Classic examples include blasting, storing large quantities of explosives, and keeping dangerous wild animals.

Could frontier AI development qualify? Consider the factors. High degree of risk: arguably yes, given concerns about systemic risks from advanced AI. Great harm potential: catastrophic harms are discussed seriously by the field. Reasonable care cannot eliminate risk: no safety measures have been proven to eliminate risks from frontier AI. Not common usage: frontier AI development is done by a handful of organizations with specialized resources. Inappropriate place: this factor maps less cleanly onto AI.

The argument for strict liability is stronger than most developers would like to believe. If courts conclude that frontier AI development is inherently dangerous in ways that reasonable care cannot address, the entire negligence framework could be bypassed. Companies would be liable for harms regardless of how carefully they developed their systems.

This isn’t the current law, but it’s a plausible future. If AI harms accumulate and negligence claims prove too difficult to win (due to causation complexities, for example), plaintiffs’ lawyers will push for strict liability. The precedent exists; the question is whether AI fits the framework.
Armin @thaatrandomkid ·
AI companies’ terms of service routinely include liability waivers — clauses purporting to release the company from responsibility for harms caused by their AI systems. Are these enforceable?

General rule: releases are enforceable. Sophisticated parties can allocate risk by contract. But there are important exceptions, especially when public policy concerns override freedom of contract. The Wagenblast factors identify when releases shouldn’t be recognized: (1) the business is suitable for public regulation; (2) the party is performing a service of public importance that’s practically necessary for some people; (3) the party holds itself out as willing to serve any member of the public; (4) the party has superior bargaining power; (5) the party uses standardized contracts without negotiation; (6) there’s no option to pay extra to avoid the waiver; and (7) the signer’s person or property is placed under the party’s control.

Apply this to AI platforms. AI services are increasingly subject to public regulation (the EU AI Act, state laws). They’re becoming practically necessary for many people’s work and daily life (though this is the weakest argument and will be the central point of debate). They serve the general public. They have significant bargaining power — users can’t negotiate terms with OpenAI. They use standardized terms without meaningful choice. There’s usually no premium tier that removes the waiver. And users place significant reliance on the AI’s outputs.

Many AI waivers may be unenforceable under this analysis. Courts may conclude that AI platforms’ societal role triggers the public policy exception. This doesn’t mean all waivers fail, but it means AI companies can’t assume their ToS insulates them from liability.