Greg Hart 🇺🇦

9K posts

@gregsthinking

Human nature, venture, resilience, critical thinking. See & design invisible influences that drive behaviour. Future Fit Cities & co-founder at InceptionU

Calgary (most of the time) · Joined June 2008
1K Following · 856 Followers
Greg Hart 🇺🇦@gregsthinking·
@aakashgupta And the common denominator is energy. Energy expended and lost, energy (and attention) to deal with the conflict, energy yet to be used to continue the battle and calories from the food lost. Brain hates that a lot
0
0
0
125
Greg Hart 🇺🇦 retweeted
Aakash Gupta@aakashgupta·
She accidentally described one of the most replicated findings in behavioral psychology. Harvard and Duke researchers found in 2011 that people value things they build themselves 63% higher than identical pre-assembled versions. They called it the IKEA effect. Labor alone, even assembling a standardized box from instructions, is enough to make people overvalue their own creations.

Gardening runs this effect at full intensity. You chose the seeds. You dug the holes. You watered daily. By harvest, your brain has priced that tomato at roughly 10x grocery store rates, and the math feels completely justified.

Now stack Kahneman and Tversky's loss aversion on top. Losses register at approximately 2x the emotional intensity of equivalent gains. One of the most replicated findings in behavioral economics.

So the aphid eating her garden is triggering both simultaneously. She built something her brain values at 163% of objective worth. She's watching it get destroyed in real time. Her nervous system is processing that destruction at double intensity.

The grocery store tomato being out of stock? Mild annoyance. Zero labor investment means zero IKEA effect, means proportional emotional response. The garden tomato carries months of accumulated effort justification. The aphid isn't eating a $4 tomato. Her brain priced it at $40 and is processing the loss at $80.

Gardening is the only common hobby that combines the IKEA effect, loss aversion, and a live adversary that reproduces faster than you can respond. The violence tracks.
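The dollar math in the thread can be written out explicitly. A minimal arithmetic sketch — the 63% premium and 2x loss weighting come from the studies the thread cites; the 10x effort multiplier is the thread's own estimate, not a measured constant:

```python
# Working out the thread's tomato math. The +63% premium is the IKEA
# effect (Norton, Mochon & Ariely, 2011); the 2x weighting is loss
# aversion (Kahneman & Tversky); the 10x seasonal-effort multiplier
# is the thread's own estimate.

GROCERY_PRICE = 4.00       # objective price of a tomato, $
IKEA_PREMIUM = 1.63        # self-assembled goods valued ~63% higher
EFFORT_MULTIPLIER = 10     # a whole season of labor, per the thread
LOSS_WEIGHT = 2.0          # losses felt ~2x as strongly as gains

ikea_value = GROCERY_PRICE * IKEA_PREMIUM         # assembly alone
garden_value = GROCERY_PRICE * EFFORT_MULTIPLIER  # months of labor
felt_loss = garden_value * LOSS_WEIGHT            # loss, as processed

print(f"store tomato:  ${GROCERY_PRICE:.2f}")
print(f"IKEA effect:   ${ikea_value:.2f}")
print(f"garden tomato: ${garden_value:.2f}")
print(f"felt loss:     ${felt_loss:.2f}")
```

The $4 → $40 → $80 chain in the thread is exactly this: one multiplier for accumulated labor, one for loss aversion.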
Melony🍈@MelonTeee

gardening is NOT relaxing bugs are eating all my shit I've never felt this violent in my life

132
1.3K
14.9K
1.4M
Greg Hart 🇺🇦 retweeted
Mike Levin@MikeLevin·
RFK Jr. isn’t a skeptic asking hard questions. He’s a con man dismantling the vaccine system that kept your kids safe for generations. Babies are back in ICUs with diseases that should be extinct. A federal judge called his appointees “distinctly unqualified.” This isn’t medical freedom, it’s straight up negligence. propublica.org/article/rfk-jr…
123
1.7K
4.2K
51.7K
Greg Hart 🇺🇦 retweeted
Massimo@Rainmaker1973·
"Allegro non molto" movement from Vivaldi's "Winter" performed by Latvian guitarist Laura Lāce, absolutely shredding the piece on a Harley Benton Fusion-III guitar. [🎸 laura6100youtube]
48
372
1.8K
81K
Greg Hart 🇺🇦 retweeted
Brandon Luu, MD@BrandonLuuMD·
Students who took notes by hand scored ~28% higher on conceptual questions than laptop note-takers. Writing forces your brain to process and compress ideas instead of copying them.
Brandon Luu, MD tweet media
448
5.2K
24.5K
1.6M
Greg Hart 🇺🇦 retweeted
Rohan Paul@rohanpaul_ai·
Brilliant economic paper, directly models the "Structural Jevons Paradox" happening right now in the AI industry. The cost of running an LLM is dropping, but total computing energy is exploding anyway.

It mathematically proves that as the unit cost of digital intelligence and coding drops, the aggregate demand for complex AI agents and the infrastructure to support them surges exponentially, creating a massive new downstream ecosystem that requires human management.

Reveals a massive paradox where dropping the price of AI usage does not save money, but instead encourages developers to build vastly more complex agents that eat up exponentially more computing power.

Because of this relentless progress, small companies building simple applications on top of these models get completely crushed as the core AI naturally absorbs those exact same features over time. They also discovered a brutal dynamic where a perfectly working LLM becomes economically worthless the moment a competitor releases a smarter version.

Ultimately, the researchers prove that this combination of massive computing costs and the need for constant user data naturally pushes the entire AI industry toward an unavoidable monopoly.

--- arxiv.org/pdf/2601.12339v1 "The Economics of Digital Intelligence Capital"
Rohan Paul tweet media
Rohan Paul@rohanpaul_ai

Citadel Securities published this graph showing a strange phenomenon. Job postings for software engineers are actually seeing a spike. The graph here is short term but still it's super interesting and really strange. Is it Jevons paradox at play? When AI makes coding cheaper, companies actually may need a lot more software engineers, not fewer. When software is cheaper to build, companies naturally want to build a lot more of it. Businesses are now putting software into industries and tools where it was simply too expensive before. --- Chart from citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/
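The rebound logic both posts describe can be sketched as a toy elasticity model. This is my own illustration with made-up parameters, not the paper's model: when demand for AI calls grows with an elasticity greater than 1 as the per-call price falls, total calls and total spend rise anyway.

```python
# Toy Jevons sketch (illustrative parameters, not the paper's model):
# demand for AI calls grows as unit inference cost falls; with
# elasticity > 1, total compute and total spend rise anyway.

def demand(unit_cost, elasticity=1.5, k=100.0):
    """Calls demanded when one call costs `unit_cost` (arbitrary units)."""
    return k * unit_cost ** (-elasticity)

for cost in (1.0, 0.5, 0.25):    # unit cost falling 4x
    calls = demand(cost)
    spend = calls * cost         # total spend = calls x price per call
    print(f"unit cost {cost:4.2f} -> {calls:6.1f} calls, total spend {spend:6.1f}")
```

With elasticity 1.5, a 4x price drop produces an 8x jump in calls and a doubling of total spend; with elasticity below 1, spend would fall instead. The paradox is entirely a claim about that elasticity.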

41
162
589
56.3K
Greg Hart 🇺🇦 retweeted
Yasmine Khosrowshahi@yasminekho·
In 2018, Stanford professor Matt Abrahams gave a masterclass on why most people fail to communicate well. He broke down: - The structure every message needs - Why audiences stop listening - The psychology of attention 15 lessons that'll make your communication unforgettable:
57
1.5K
7K
991.6K
Greg Hart 🇺🇦 retweeted
Anish Moonka@anishmoonka·
A single ant has 250,000 neurons. Your brain has 86 billion. That's a 344,000x gap. And yet what you're watching is a colony solving a category of problem that no computer can crack perfectly at scale.

It's called the Steiner tree problem. Given a set of points, find the shortest possible network connecting all of them. First posed in 1811, proved essentially impossible to solve perfectly in 1972 (the computing time grows so fast with size that the world's fastest supercomputer stalls on a few hundred points). Still one of the hardest open problems in mathematics.

Ants solve it with chemistry. When an ant walks a path, it leaves a chemical trail called a pheromone. That trail evaporates over time. Shorter paths get walked faster, so pheromone builds up before it fades. Other ants prefer stronger trails. The colony converges on the shortest route without any single ant knowing the full picture.

Jean-Louis Deneubourg at the Free University of Brussels proved this in the early 1990s with a dead simple experiment: two bridges between a nest and food, one twice as long as the other. Within minutes, the colony picked the short one.

In 1991, computer scientist Marco Dorigo took that discovery and turned it into an algorithm (a set of step-by-step instructions for a computer) called Ant Colony Optimization. It's now used to route wires inside microchips with billions of transistors (one study found an 8% reduction in wire length over traditional methods), plan delivery truck routes, and manage internet traffic. The phone you're reading this on was partially designed using math that ants figured out 100 million years before humans existed.

A 2023 study out of Stanford and several other institutions found that turtle ants in the tropical forest canopy build trail networks across tangled branches and vines that approximately solve the Steiner tree problem with zero central control. No ant has any information about the full network. Each one just follows a rule: at each junction, go where the pheromone is strongest. The collective intelligence comes from thousands of these tiny decisions stacking up.

Stanford biologist Deborah Gordon has studied this for decades. She compares it directly to how brains work: no single neuron tells the others what to do, but together they produce thought. A 2024 Rockefeller University study found that individual ants decide whether to leave the nest using the same yes-or-no process that brain cells use to decide whether to switch on. The colony is, in a real mechanical sense, a brain spread across thousands of bodies.

In early 2025, a Weizmann Institute study pitted ant groups against human groups on a task almost identical to this video: navigating a T-shaped object through a series of obstacles. The bigger the human group, the worse they performed. Too many competing ideas about which direction to push. The bigger the ant group, the better they got. No ego, no debate, just pheromones and simple rules scaling into something that looks a lot like intelligence.

250,000 neurons each. No leader. No blueprint. Solving problems that stumped mathematicians for two centuries.
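The two-bridge dynamic described above (deposit, evaporate, follow the stronger trail) is simple enough to simulate directly. A minimal sketch with illustrative parameters — not Deneubourg's actual experiment or Dorigo's full ACO algorithm:

```python
import random

# Minimal two-bridge pheromone sketch: ants pick a bridge with
# probability proportional to its pheromone; a shorter bridge yields
# more deposited pheromone per trip; trails evaporate everywhere.
# Parameters are illustrative, chosen only to show convergence.

random.seed(0)
LENGTHS = {"short": 1.0, "long": 2.0}     # long bridge is twice as long
pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
EVAPORATION = 0.02                        # fraction lost per step
DEPOSIT = 1.0

for ant in range(2000):
    total = pheromone["short"] + pheromone["long"]
    bridge = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[bridge] += DEPOSIT / LENGTHS[bridge]  # shorter trip, more buildup
    for b in pheromone:                             # trails fade over time
        pheromone[b] *= 1 - EVAPORATION

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"short-bridge pheromone share after 2000 ants: {share:.2f}")
```

No ant compares the bridges; the positive feedback between deposit rate and choice probability does the selection, and the colony's pheromone mass ends up concentrated on the short bridge.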
The Figen@TheFigen_

They are ants solving a geometric problem and it is mind-blowingly colorful.

57
795
3.4K
303.6K
Greg Hart 🇺🇦 retweeted
Simplifying AI@simplifyinAI·
🚨 BREAKING: Stanford just dropped the most uncomfortable paper on LLM reasoning. It shows a systematic teardown of why LLMs keep failing, even when leaderboards say they are perfect. They split reasoning into two buckets: non-embodied (math, logic, commonsense) and embodied (the physical world).. and the exact same failures show up everywhere. One of the most disturbing findings is how often models produce unfaithful reasoning.. models will give you the right final answer, but their explanation is completely fabricated or logically wrong. It literally trains us to trust a fake decision process.. They also suffer from fundamental architectural failures (collapsing under light logic) and robustness failures (changing one word in your prompt flips the whole answer). And embodied reasoning? Even worse. LLMs have zero physical grounding, so they fail at basic physics predictably. The takeaway: LLMs reason just enough to sound convincing, but not enough to be reliable. We are deploying systems that pass benchmarks but fail silently in production.
Simplifying AI tweet media
45
153
517
31.8K
Greg Hart 🇺🇦 retweeted
Sean McClure@sean_a_mcclure·
Stop pushing. The idea that more effort improves your learning or outcomes is a narrative that only works for slave owners. Works wonders for employers, academic paper mills, and oppressors the world over. Stop. Every time you come up against a challenge that seems to demand increased effort to "push through", stop, and rethink the approach. There is always a better way, and that way's hallmark signature is decreased effort. It will *not* work for the boss who must adhere to the established pipeline. Only more effort using the same approach can satisfy their architecture of responsibility. But it will work for you. And you only have one life.
0
5
86
2.2K
Greg Hart 🇺🇦 retweeted
Santa Fe Institute@sfiscience·
The grid is in crisis. A century-old system for generating, distributing, and regulating electricity is struggling to cope with electrification, decarbonization, rising demand from data centers, and rapid technological change. At a recent SFI working group, researchers and practitioners explored how a complex adaptive systems approach could help build a more resilient, adaptive grid. “Piecemeal solutions aren’t enough,” says SFI External Professor Seth Blumsack. “How is the grid system organized as a whole? How can we address whole-systems problems?” santafe.edu/news-center/ne…
Santa Fe Institute tweet media
4
15
35
3K
Greg Hart 🇺🇦 retweeted
The Curious Tales@thecurioustales·
🚨The most disturbing physics demonstration in existence involves nothing more than two pendulums and a pin. Here's why:

There's no electricity, no quantum effects, no exotic materials, no laboratory conditions. Just two rods, a joint connecting them, and gravity. Every force acting on this system is completely understood. The equations describing its motion were written centuries ago. Nothing about it is mysterious at the level of physics.

And yet predicting its motion beyond a few seconds is physically impossible. A difference of 0.000001 degrees in the starting angle doesn't produce a small deviation downstream. It produces a completely unrelated trajectory. The system doesn't drift gradually from the prediction. It departs so violently that within moments the prediction becomes pure fiction.

Most people misunderstand chaos theory because the word chaos implies randomness. The double pendulum contains zero randomness. Every swing is fully governed by deterministic laws. A being with perfect knowledge of every starting condition could calculate every future position exactly.

The problem is that perfect knowledge cannot exist in physical reality. Every measurement humans make carries some tolerance, however microscopic. That margin gets amplified exponentially each second until the gap between prediction and reality swallows everything whole. The universe runs on math that outruns our ability to feed it accurate inputs.

What should genuinely disturb you is that the double pendulum is not a special case. The same sensitivity lives inside weather systems, economies, neural firing patterns, and ecosystems. Every complex system you depend on operates under identical conditions. Tiny upstream differences explode into massive downstream divergence with no warning and no recovery.

Philosophers spent centuries arguing about whether the future is predetermined. That was always the wrong argument. The double pendulum settled the only question that actually matters in daily life. Determined and predictable are not the same thing. They never were.
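The sensitivity the post describes is easy to reproduce numerically. A sketch using the standard equal-mass, equal-length double-pendulum equations of motion, integrating two copies started one millionth of a degree apart (all parameter values are illustrative):

```python
import math

# Two identical double pendulums, offset by 1e-6 degrees in the first
# arm angle, integrated with fixed-step RK4. State = (t1, w1, t2, w2):
# the two arm angles and their angular velocities.

G, M, L = 9.81, 1.0, 1.0  # gravity, bob mass, arm length

def deriv(s):
    """Angular accelerations for the equal-mass, equal-length case."""
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = L * M * (3.0 - math.cos(2.0 * d))
    a1 = (-3.0 * G * M * math.sin(t1) - G * M * math.sin(t1 - 2.0 * t2)
          - 2.0 * math.sin(d) * M * (w2 * w2 * L + w1 * w1 * L * math.cos(d))) / den
    a2 = (2.0 * math.sin(d) * (2.0 * M * w1 * w1 * L + 2.0 * G * M * math.cos(t1)
          + M * w2 * w2 * L * math.cos(d))) / den
    return (w1, a1, w2, a2)

def rk4(s, dt):
    ax = lambda s, k, f: tuple(si + f * ki for si, ki in zip(s, k))
    k1 = deriv(s)
    k2 = deriv(ax(s, k1, dt / 2))
    k3 = deriv(ax(s, k2, dt / 2))
    k4 = deriv(ax(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + e)
                 for si, a, b, c, e in zip(s, k1, k2, k3, k4))

dt = 0.001
eps = math.radians(1e-6)                      # 0.000001 degrees
a = (math.pi / 2, 0.0, math.pi / 2, 0.0)      # both arms horizontal
b = (math.pi / 2 + eps, 0.0, math.pi / 2, 0.0)

for step in range(1, 20001):                  # 20 simulated seconds
    a, b = rk4(a, dt), rk4(b, dt)
    if step % 5000 == 0:
        gap = math.degrees(abs(a[0] - b[0]))
        print(f"t = {step * dt:4.1f} s   gap in first angle: {gap:.8f} deg")
```

The gap between the two trajectories grows by orders of magnitude over the run, which is exactly the "departs so violently" behavior the post describes: not measurement error, just exponential amplification of a microscopic offset.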
Interesting things@awkwardgoogle

The unpredictability of the double pendulum.

80
507
2.8K
232K
Greg Hart 🇺🇦 retweeted
Massimo@Rainmaker1973·
How small is a transistor? [🎞️ nanonerds_sliet]
58
483
2.7K
203.5K
Greg Hart 🇺🇦 retweeted
Kaizen D. Asiedu@thatsKAIZEN·
On Saturday, Iran's government claimed the U.S. and Israel bombed a school - without evidence - and the media made that the headline. Trump talks about Tylenol and autism - media: "Trump makes claim, without evidence." The media is more skeptical of America than our enemies.
Kaizen D. Asiedu@thatsKAIZEN

x.com/i/article/2028…

542
4.6K
20.9K
668K
Greg Hart 🇺🇦 retweeted
Garry Kasparov@Kasparov63·
Authoritarians have been exploiting this double standard for decades. The law for thee, but not for me—and Western govts and institutions go along. If that is another relic of the post-Cold War period smashed by Trump's bull in a china shop foreign policy, good riddance.
David Frum@davidfrum

According to the self-proclaimed experts who get quoted at times like this, the corpus of international law can be reduced to one simple rule: "Terrorists and communists are always allowed to strike democracies, but democracies are never allowed to strike back."

43
388
2.4K
151.6K
Greg Hart 🇺🇦 retweeted
Haviv Rettig Gur@havivrettiggur·
I say this as gently as I know how, because it seems to me unforgivably obvious. You cannot simultaneously build a strong international law system while also hating the West. International law is a Western idea born of a particular Western historical, cultural and political experience.

And because God loves irony, no one exemplifies this fact more than the evil regime whose travails since yesterday have sparked so much legalistic hand-wringing. Both Khamenei himself and his teacher and predecessor Khomeini consistently and explicitly rejected international law as a tool of "global arrogance" (estekbar-e jahani) — i.e., of powerful secularist, individualistic democracies. Khamenei was even more explicit, routinely declaring legal frameworks like UN conventions as "colonial" traps. These declarations weren't marginal to their ideology. They were fundamental planks of the regime's political theology.

I'll say this, again, as gently as I can: The fact that international law and international institutions have transformed in practice into a system that more often than not runs defense for the most virulent and explicit enemies of said law might have something to do with their decline as an organizing framework of international affairs.

For example, when UN agencies and international institutions target Israel more than Iran, or more than China, Iran and Russia put together, or more than all the dictatorships and wars in the world combined — they're doing more harm to the law than to Israel. Similarly, it matters that so many of international law's loudest spokespeople had nothing to say about Khamenei's crimes just six weeks ago, but swung into action only when Khamenei's long reign of terror was finally brought to an end. That's not law. It's the opposite of law.

International law can be saved, but only if its scholars and practitioners grow up and shed the instinctive anti-Westernism and racist paternalism of the present-day academy. When international law is no longer seen by its own practitioners primarily as an instrument for containing, weakening and delegitimizing the West, but becomes genuinely about actual law, it will once again have a claim on us.

If you fail to see in Khamenei the bitter foe of international law that he was, if in the midst of your legitimate critique of a war you can't summon at least a little joy that this avowed enemy of your purported moral system is dead and gone, then you haven't actually been fighting for international law.
Dr Kylie Moore-Gilbert@KMooreGilbert

What we are seeing is the disintegration of the last remnants of the international rules-based order and the precarious dawn of a new era of might-is-right in international affairs. You might start the clock with Russia's invasion of Ukraine, or even earlier with the US war in Iraq, but the fact remains that the UN has lately proven itself both incompetent and irrelevant.

Make no mistake, this is a troubling state of affairs - the world would be a more perilous place in the absence of international law. But to carry on as though this is not the case, to rail against the violation of international law which this war undoubtedly is, and not to mention the fact that these same international laws and norms did not prevent the slaughter of 30,000+ innocent Iranians just 6 weeks earlier, nor stop the regime from terrorising its people and others in the region for decades... at best you are misdiagnosing the problem. At worst you are complicit in it.

137
1.3K
5.4K
609.9K
Greg Hart 🇺🇦 retweeted
Sukh Sroay@sukh_saroy·
New research just exposed the biggest lie in AI coding benchmarks. LLMs score 84-89% on standard coding tests. On real production code? 25-34%. That's not a gap. That's a different reality.

Here's what happened: Researchers built a benchmark from actual open-source repositories: real classes with real dependencies, real type systems, real integration complexity. Then they tested the same models that dominate HumanEval leaderboards. The results were brutal.

The models weren't failing because the code was "harder." They were failing because it was *real*. Synthetic benchmarks test whether a model can write a self-contained function with a clean docstring. Production code requires understanding inheritance hierarchies, framework integrations, and project-specific utilities. Different universe. Same leaderboard score.

But it gets worse. A separate study ran 600,000 debugging experiments across 9 LLMs. They found a bug in a program. The LLM found it too. Then they renamed a variable. Added a comment. Shuffled function order. Changed nothing about the bug itself. The LLM couldn't find the same bug anymore.

78% of the time, cosmetic changes that don't affect program behavior completely broke the model's ability to debug. Function shuffling alone reduced debugging accuracy by 83%. The models aren't reading code. They're pattern-matching against what code *looks like* in their training data.

A third study confirmed this from another angle: when researchers obfuscated real-world code, changing symbols, structure, and semantics while keeping functionality identical, LLM pass rates dropped by up to 62.5%. The researchers call this the "Specialist in Familiarity" problem. LLMs perform well on code they've memorized. The moment you show them something unfamiliar with the same logic, they collapse.

Three papers. Three different methodologies. Same conclusion: The benchmarks we use to evaluate AI coding tools are measuring memorization, not understanding.

If you're shipping code generated by LLMs into production without review, these numbers should concern you. If you're building developer tools, the question isn't "what's your HumanEval score." It's "what happens when the code doesn't look like the training data."
Sukh Sroay tweet media
132
252
1.1K
229.4K
Greg Hart 🇺🇦 retweeted
Aakash Gupta@aakashgupta·
The neuroscience here is more damning than the advice. Killingsworth and Gilbert tracked 5,000 people across 83 countries using real-time iPhone sampling. They pinged participants at random moments throughout the day, asked what they were doing, whether their mind was wandering, and how happy they felt.

The finding that should change how you think about your own brain: mind wandering explained 10.8% of the variance in happiness. The actual activity you were doing explained 4.6%. What you're thinking about matters 2.3x more than what you're doing.

And here's the part nobody talks about. People's minds wandered to pleasant topics 42.5% of the time. Neutral topics 31%. Unpleasant topics 26.5%. Even when wandering to pleasant topics, they were no happier than when focused on the present. The only state that reliably produced happiness was attention locked onto the current activity.

This is a prefrontal cortex problem. Your default mode network activates the moment you disengage from a task. It runs simulations of the future, replays the past, and generates the anxiety you interpret as "I'm lost." Dr. Fabiano is pointing at the right paper. The mechanism is your brain literally cannot generate satisfaction in default mode. It can only generate rumination.

The 2,250 adults in this study averaged 46.9% of their waking hours in mind wandering. Almost half their conscious life spent in a state the data shows makes them unhappy. Training sustained attention on whatever is in front of you right now is the intervention, because the research says that's the only configuration your brain produces wellbeing in. Your attention is the quest.
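The thread's headline ratio follows directly from the two variance figures it quotes, and its topic percentages partition the wandering time completely. A quick arithmetic check:

```python
# Checking the thread's arithmetic against the figures it quotes
# from the Killingsworth & Gilbert experience-sampling study.

wandering_var = 10.8   # % of happiness variance explained by mind wandering
activity_var = 4.6     # % explained by the current activity itself

ratio = wandering_var / activity_var
print(f"thoughts matter {ratio:.1f}x more than activity")  # the thread's 2.3x

pleasant, neutral, unpleasant = 42.5, 31.0, 26.5  # % of wandering episodes
print(f"wandering topics account for {pleasant + neutral + unpleasant:.1f}% of episodes")
```

The 2.3x figure is just 10.8 / 4.6 rounded to one decimal, and the three topic shares sum to 100%, so the quoted breakdown is internally consistent.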
Nicholas Fabiano, MD@NTFabiano

You're not depressed, you just lost your quest.

42
661
4.8K
380.9K