Mark Finnern

16.1K posts

@finnern

Reigniting the Innovation Spirit in the Black Forest | Culture Developer | Community Builder | Future Salon Founder | TEDx Speaker

Black Forest, was SF Bay Area · Joined December 2006
3.6K Following · 7.9K Followers
Nate Soares ⏹️
If Anyone Builds It, Everyone Dies is now available in English, Bulgarian, Italian, and Spanish. The book is coming out in Dutch next week, and will be reaching many other languages soon (dates subject to change):
14 replies · 18 reposts · 123 likes · 9.6K views
Mark Finnern reposted
Rob Bensinger ⏹️@robbensinger·
Hundreds of scientists, including 3/4 of the most cited living AI scientists, have said that AI poses a very real chance of killing us all. We're in uncharted waters, which makes the risk level hard to assess; but a pretty normal estimate is Jan Leike's "10-90%" of extinction-level outcomes. Leike heads Anthropic's alignment research team, and previously headed OpenAI's.

This actually seems pretty straightforward. There's literally no reason for us to sleepwalk into disaster here. No normal engineering discipline, building a bridge or designing a house, would accept a 25% chance of killing a person; yet somehow AI's engineering culture has corroded enough that no one bats an eye when Anthropic's CEO talks about a 25% chance of research efforts killing every person.

A minority of leading labs are dismissive of the risk (mainly Meta), but even the fact that "will we kill everyone if we keep moving forward?" is hotly debated among researchers seems very obviously like more than enough grounds for governments to internationally halt the race to build superintelligent AI. Like, this would be beyond straightforward in any field other than AI.

Obvious question: How would that even work? Like, I get the argument in principle: "smarter-than-human AI is more dangerous than nukes, so we need to treat it similarly." But with nukes, we have a detailed understanding of what's required to build them, and it involves huge easily-detected infrastructure projects and rare materials.

Response: The same is true for AI, as it's built today. The most powerful AIs today rely on extremely specialized and costly hardware, cost hundreds of millions of dollars to build,¹ and rely on massive data centers² that are relatively easy to detect using satellite and drone imagery, including infrared imaging.³

Q: But wouldn't people just respond by building data centers in secret locations, like deep underground?
Response: Only a few firms can fabricate AI chips — primarily the Taiwanese company TSMC — and one of the key machines used in high-end chips is only produced by the Dutch company ASML. This is the extreme ultraviolet lithography machine, which is the size of a school bus, weighs 200 tons, and costs hundreds of millions of dollars.⁴ Many key components are similarly bottlenecked.⁵ This supply chain is the result of decades of innovation and investment, and replicating it is expected to be very difficult — likely taking over a decade, even for technologically advanced countries.⁶

This essential supply chain, largely located in countries allied to the US, provides a really clear point of leverage. If the international community wanted to, it could easily monitor where all the chips are going, build in kill switches, and put in place a monitoring regime to ensure chips aren't being used to build toward superintelligence. (Focusing more efforts on the chip supply chain is also a more robust long-term solution than focusing purely on data centers, since it can solve the problem of developers using distributed training to attempt to evade international regulations.⁷)

Q: But won't AI become cheaper to build in the future?

Response: Yes, but —

(a) It isn't likely to suddenly become dramatically cheaper overnight. If it becomes cheaper gradually, regulations can build in safety margin and adjust thresholds over time to match the technology. Efforts to bring preexisting chips under monitoring will progress over time, and chips have a limited lifespan, so the total quantity of unmonitored chips will decrease as well.

(b) If we actually treated superintelligent AI like nuclear weapons, we wouldn't be publishing random advances to arXiv, so the development of more efficient algorithms and more optimized compute would happen more slowly. Some amount of expected algorithmic progress would also be hampered by reduced access to chips.
(c) You don't need to ban superintelligence forever; you just need to ban it until it's clear that we can build it without destroying ourselves or doing something similarly terrible. A ban could buy the world many decades of time.

Q: But wouldn't this treaty devastate the economy?

A: It would mean forgoing some future economic gains, because the race to superintelligence comes with greater and greater profits until it kills you. But it's not as though those profits are worth anything if we're dead; this seems obvious enough.

There's the separate issue that lots of investments are currently flowing into building bigger and bigger data centers, in anticipation that the race to smarter-than-human AI will continue. A ban could cause a shock to the economy as that investment dries up. However, this is relatively easy to avoid via the Fed lowering its rates, so that a high volume of money continues to flow through the larger economy.⁸

Q: But wouldn't regulating chips have lots of spillover effects on other parts of the economy that use those chips?

A: NVIDIA's H100 chip costs around $30,000 per chip and, due to its cooling and power requirements, is designed to be run in a data center.⁹ Regulating AI-specialized chips like this would have very few spillover effects, particularly if regulations only apply to chips used for AI training and not for inference.¹⁰ But also, again, an economy isn't worth much if you're dead. This whole discussion seems to be severely missing the forest for the trees, if it's not just in outright denial about the situation we find ourselves in.

Some of the infrastructure used to produce AI chips is also used in making other advanced computer chips, such as cell phone chips; but there are notable differences between these chips. If advanced AI chip production is shut down, it wouldn't actually be difficult to monitor production and ensure that chip production is only creating non-AI-specialized chips.
At the same time, existing AI chips could be monitored to ensure that they're used to run existing AIs, and aren't being used to train ever-more-capable models.¹¹ This wouldn't be trivial to do, but it's pretty easy relative to many of the tasks the world's superpowers have achieved when they faced a national security threat. The question is whether the US, China, and other key actors wake up in time, not whether they have good options for addressing the threat.

Q: Isn't this totalitarian?

A: Governments regulate thousands of technologies. Adding one more to the list won't suddenly tip the world over into a totalitarian dystopia, any more than banning chemical or biological weapons did. The typical consumer wouldn't even necessarily see any difference, since the typical consumer doesn't run a data center. They just wouldn't see dramatic improvements to the chatbots they use.

Q: But isn't this politically infeasible?

A: It will require science communicators to alert policymakers to the current situation, and it will require policymakers to come together to craft a solution. But it doesn't seem at all infeasible. Building superintelligence is unpopular with the voting public,¹² and hundreds of elected officials have already named this issue as a serious priority. The UN Secretary-General and major heads of state are routinely talking about AI loss-of-control scenarios and human extinction. At that point, the cat has already firmly left the bag. (And it's not as though there's anything unusual about governments heavily regulating powerful new technologies.)

What's left is to dial up the volume on that talk, translate that talk into planning and fast action, and recognize that "there's uncertainty about how much time we have left" makes this a more urgent problem, not less.

Q: But if the US halts, isn't that just ceding the race to authoritarian regimes?

A: The US shouldn't halt unilaterally; that would just drive AI research to other countries.
Rather, the US should broker an international agreement where everyone agrees to halt simultaneously. (Some templates of agreements that would do the job have already been drafted.¹³) Governments can create a deterrence regime by articulating clear limits and enforcement actions. It's in no country's interest to race to its own destruction, and a deterrence regime like this provides an alternative path.

Q: But surely there will be countries that end up defecting from such an agreement. Even if you're right that it's in no one's interest to race once they understand the situation, plenty of people won't understand the situation, and will just see superintelligent AI as a way to get rich quick.

A: It's very rare for countries (or companies!) to deliberately violate international law. It's rare for countries to take actions that are widely seen as serious threats to other nations' security. (If it weren't rare, it wouldn't be a big news story when it does happen!)

If the whole world is racing to build superintelligence as fast as possible, then we're very likely dead. Even if you think there's a chance that cautious devs could stay in control as AI starts to vastly exceed the intelligence of the human race (and no, I don't think this is realistic in the current landscape), that chance increasingly goes out the window as the race heats up, because prioritizing safety will mean sacrificing your competitive edge.

If instead a tiny fraction of the world is trying to find sneaky ways to build a small researcher-starved frontier AI project here and there, while dealing with enormous international pressure and censure, then that seems like a much more survivable situation.

By analogy, nuclear nonproliferation efforts haven't been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9.
But this is a much more survivable state of affairs than if we hadn't tried to limit proliferation at all, and were instead facing a world where dozens or hundreds of nations possess nuclear weapons.

When it comes to superintelligence, anyone building "god-like AI" is likely to get us all killed — whether the developer is a military or a company, and whether their intentions are good or ill. Going from "zero superintelligences" to "one superintelligence" is already lethally dangerous. The challenge is to block the construction of ASI while there's still time, not to limit proliferation after it already exists, when it's far too late to take the steering wheel. So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have enormous power.

Q: But what about China? Surely they'd never agree to an arrangement like this.

A: The CCP has already expressed interest in international coordination and regulation on AI. E.g., Reuters reported that Chinese Premier Li Qiang said, "We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible."¹⁴

And, quoting The Economist:¹⁵

"But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party's ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state's expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

"The influence of such arguments is increasingly on display.
In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. A short time later the risks posed by AI, and how to control them, became a subject of study sessions for party leaders. A state body that funds scientific research has begun offering grants to researchers who study how to align AI with human values. [...]

"In July, at a meeting of the party's central committee called the 'third plenum', Mr Xi sent his clearest signal yet that he takes the doomers' concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology's potential to endanger humans. The report may lead to new restrictions on AI-research activities.

"More clues to Mr Xi's thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should 'abandon uninhibited growth that comes at the cost of sacrificing safety', says the guide. Since AI will determine 'the fate of all mankind', it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive."

The CCP is a US adversary. That doesn't mean they're idiots who will destroy their own country in order to thumb their nose at the US. If a policy is Good, that doesn't mean that everyone Bad will automatically oppose it. Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so clueful people on all sides will endorse those policies.

The question, again, is just whether people will clue in to what's happening soon enough to matter. My hope, in writing this, is to wake people up a bit faster. If you share that hope, maybe share this post, or join the conversation about it; or write your own, better version of a "wake-up" warning.
Don't give up on the world so easily.
76 replies · 189 reposts · 655 likes · 82.1K views
Mark Finnern reposted
Steve Jurvetson@FutureJurvetson·
FOCUS: The ASML Way

I just finished this history of the most important semiconductor equipment company in the world, as translated from the Dutch original (and lurking in the background might be a better way). Reminder: ASML builds 100% of the world's extreme ultraviolet (EUV) lithography machines, without which cutting edge chips are simply impossible to make. It's the most expensive mass-produced machine tool in history. Oh, and today, there are two special women without whom all EUV lithography would sputter to a stop (see p.141 below).

ASML was formed in 1984 as a JV with Philips, the Dutch electronics company that contributed ~$15M (in guilders) and 40 engineers, and "it seemed doomed from the start." (p.35) There were 10 viable competitors at the time, more than enough to serve the market, as ASML learned at SEMICON in 1984. (By coincidence, I was also there with my Dad, who was about to leave Mostek to run Varian's Semiconductor Equipment Group, but they only had Molecular Beam Epitaxy, a low-throughput lithography alternative. My Dad's attempt to poach a CTO from ASML is on p.72.)

"In these initial years, management worked around the clock to bring in new subsidies. In these initial years, about half of ASML's money for research came from The Hague or Brussels." (48)

ASML's "machines were the first in the industry to utilize modular design. The lens, the wafer-table, the frame for the mask, the light source, the robot that picks the wafers: these are LEGO blocks that, when you bring them together, form a lithography system." (62)

IPO in 1995. Stock went up 600x in the 30 years that followed.

March 2000 market crash: "cancellations from chip manufacturers poured in daily. On paper, the company was bankrupt. Radical cost-cutting measures would be needed." (82)

Nikon sues: "a rude awakening.
ASML had paid far too little attention to its intellectual property in its early years." (98)

"The best inventors, some of whom have more than 200 patents to their name, are commemorated by having their faces engraved on silicon wafers and hung on a series of large wooden beams, like a Mount Rushmore of the chip industry. As of 2023, ASML has registered more than 16,000 patents." (99)

The machines are insanely sensitive. "Atmospheric pressure fluctuations due to thunderstorms can easily disrupt the lithography process. Or cows. Intel once faced an inexplicable drop in yield every night for a few hours, with researchers running in circles until they finally realized the cause: cow farts. Intel had to pay for three farms to relocate." (117)

"In 2006 Intel, who was supplying the chips for Apple's computers, was asked if it could also supply the processor for the iPhone. It declined." (122)

"EUV light is extremely difficult to generate and sustain in an industrial environment. The invisible rays are absorbed by almost all materials, even the air, which means the lithography machine needs to have (curved, atomically precise) mirrors instead of lenses and can only operate in a vacuum." (127)

The Cymer laser / light source has a molten tin "droplet generator capable of forming a 30-micron droplet of tin at a rate of 50,000 times per second. The laser was rigged to deal two separate blows. First, a gentle tap to flatten the droplet into a pancake-like shape, followed by an intense blast that heated the tin to 200,000 degrees, transforming it into a plasma." (130)

"During its journey through the lithography machine, the light beam comes across 10 mirrors, each absorbing 30% of the light. It starts with 1.5 megawatts from the grid that yields 30 kilowatts in the laser, and that creates 100 watts of EUV light. Of this, about 1 watt ends up on the wafer. But more power also creates more heat.
That causes the mirrors to expand, which in turn causes small deviations that immediately need to be corrected with small motors. Even the EUV mask, which carries the blueprint of the chip on it, is itself an extremely sensitive mirror." (132)

"ASML was vastly underestimating the financial consequences of the new technology. In retrospect, this was for the best. No respectable CEO would sign for a project that would take 20 years, without any promise of success or interim profit to carry it through. That's not taking a bet, that's bananas. This is also why the Japanese competition dropped out of the race: not because their engineers were any less capable, but because Nikon and Canon were simply not prepared to continue pumping so much money into EUV." (133)

To finance the purchase of Cymer in 2012, "Intel invested 3.3B Euros into ASML in exchange for 15% of the shares. TSMC was required to purchase 5%... and Samsung acquired a stake at the 11th hour, taking 3%." (139)

"Only Joann and one of her colleagues have the ability to wind and solder invisibly small wires (around the nozzle that shoots the tin droplets). It's a delicate task few could ever master. 'Even watchmakers can't do this,' says their awestruck boss, 'and there's no way to automate it.' It's not a trivial matter: the nozzle regularly gets clogged during day-to-day use in the chip factory. When that inevitably happens, the only thing to do is to swap it out for a new one. It's hard to imagine, but without the fingers of Joann and her colleague, the EUV machines at Samsung and TSMC would grind to a halt." (141)

In 2013, "most of the droplet generator was still hand-made by Cymer, and it was virtually impossible to test the part in advance. This made for completely unpredictable yields: in the initial phase of production, half of the droplet generators didn't even work." (142)

"20% of the South Korean economy now relies on the revenue of one single company.
Hence their nickname: this is the republic of Samsung." (156)

"Intel was being surpassed by their competitors in Asia on every front and would only start using EUV for chips after 2023." (160)

"The descriptions that chip manufacturers use for these technological generations or 'nodes' need to be taken with a grain of salt. The physical dimensions of the smallest circuits and connections on the chip are, in practice, 5 to 10 times larger than advertised. A nanometer was once a nanometer, but accuracy has never stopped a good marketing slogan." (161)

Cousins "Lisa Su and Jensen Huang, the leaders of AMD and NVIDIA, were both born in Tainan, the city where TSMC now produces their chips." (164)

"The culture at TSMC is more hierarchical than ASML, but less militaristic than in South Korea." (166)

"TSMC now commands 60% of the entire foundry market, making it 4x larger than its closest competitor, Samsung." (167)

"ASML's next generation of EUV machines goes by the nickname High NA (the numerical aperture increases from 0.35 to 0.55). These colossal scanners span 14 meters and feature large mirrors up to a meter wide. The optical system by itself consists of 20,000 parts and weighs 12 tons, making it 7x heavier than the optics for the current EUV machine." (175)

"The High NA system weighs 150 tons and costs 400M Euros. It takes 7 cargo planes to ship this system to customers." (225)

"The production of a complex EUV mask costs more than a half million Euros and takes a huge amount of time to calculate." (181)

They "use AI to understand the interplay between the light beam, the mask, and the chemical reactions on the wafer." ASML's CTO calls it "voodoo software." (183)

China: "European governments fear China is transforming into a totalitarian state, capable of forcing Chinese multinationals to spy for the Communist Party.
And that poses significant risk to the 5G cellular infrastructure of the West." (200)

"In 2017, Chinese customers ordered 700M Euros worth of lithography machines, a new record. Hundreds of ASML's scanners were running in the factories of SMIC, China's largest foundry." (201)

"EUV is controlled by the Wassenaar Arrangement, the multilateral export control regime on conventional arms and dual-use goods and technologies." (203)

"As far as ASML is concerned, fears about EUV being used for military applications are baloney. Most chips found in weapons are 'off-the-shelf' chips that can also be found in laptops, washing machines or cars, and are easy to purchase anywhere in the world. But the U.S. sees things differently. They fear the emergence of Chinese AI and cyber weapons. And there is one thing those all need: advanced chips." (205)

"In January 2020, the U.S. asked the Netherlands to block EUV exports, and suddenly ASML found itself in the spotlight. The Netherlands ultimately denied ASML a license… No EUV machine was going to SMIC." (208)

In 2023 "ASML was exporting far more older DUV machines to China than had been expected. Almost half of ASML's revenue was coming from China. As the chip industry was pushing the pause button, China kept on hoarding. The U.S. pressed the Netherlands to slam the brakes before January 2024, and the cabinet duly revoked several approved export licenses for ASML machines destined for China." (234)

"As China is growing increasingly isolated, so too is the likelihood of a fully-fledged Chinese competitor emerging in the rearview mirror capable of developing an independent chip production chain." (236)

"ASML takes this seriously.
Their go-to response: 'The laws of nature are the same anywhere.' What was achieved in Brabant could be achieved in Beijing." (335)

"To qualify for government aid (in Biden's Chips Act), companies had to agree not to build advanced chip foundries in China or other 'countries of concern.'" (239)

"The chip shortage had been a wakeup call, and the nightmare scenario was front and center on everyone's mind: if China blocks Taiwan, we'll be without chips within two weeks." (242)

"The estimated percentage of people with autism or ADHD at ASML far outnumbers the average. The highly specialized work, revolving around focusing on complex problems that require prolonged attention to the smallest details, makes it well-suited for some autistic traits. ASML's CTO and President Van den Brink makes no secret about being dyslexic and actively advocates for targeting this neurodiverse group. They are precisely the analytical and creative thinkers ASML needs, but also often the ones who find it difficult to put themselves in other people's shoes." (287)

Sounds like teen spirit… of Steve Jobs: "Van den Brink's power of persuasion lies in his childlike enthusiasm. It works like some kind of reality distortion field. Martin can disrupt your perspective until you're convinced that you can make the impossible possible." (321)

"Van den Brink never really led a big company. He guided it like a startup, as if it were a defiant toddler in the body of a mature multinational." (329)

The book ends with the poignant handover of the company in 2024 to a new leader, the Frenchman Christophe Fouquet.
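The light-path figures quoted from the book can be sanity-checked with a few lines of arithmetic. This is a sketch using only the numbers in the excerpt (10 mirrors, ~30% loss each, ~100 W of EUV at the source); mirror losses alone leave a few watts, and the mask and remaining optics account for the drop to the book's "about 1 watt" on the wafer:

```python
# Sanity check of the EUV light-path numbers quoted above.
euv_at_source_w = 100.0     # EUV light created at the tin plasma (from the book)
mirror_transmission = 0.70  # each mirror passes ~70% of the light
n_mirrors = 10

after_mirrors_w = euv_at_source_w * mirror_transmission ** n_mirrors
grid_to_wafer = 1.0 / 1.5e6  # ~1 W on the wafer from 1.5 MW drawn at the grid

print(f"After {n_mirrors} mirrors: {after_mirrors_w:.2f} W")   # ~2.82 W
print(f"Grid-to-wafer efficiency: {grid_to_wafer:.1e}")        # ~6.7e-07
```

The gap between ~2.8 W after the mirrors and the quoted ~1 W at the wafer is consistent with the additional losses the book mentions (the mask itself is a mirror, plus other optics).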
23 replies · 78 reposts · 510 likes · 25.2K views
Mark Finnern reposted
Brian Krassenstein@krassenstein·
BREAKING: Music legend Bruce Springsteen just released this incredible song that will be sure to piss Trump off beyond belief. “Streets of Minneapolis”. He wrote this song about Alex Pretti and Renée Good Saturday and recorded it yesterday. Share it far and wide and play it as loud as you can
2.4K replies · 22.4K reposts · 54.3K likes · 1.4M views
Mark Finnern reposted
Ethan Mollick@emollick·
Teaching an experimental class for MBAs on "vibefounding": the students have four days to come up with and launch a company. More on this eventually, but quick observations:

1) I have taught entrepreneurship for over a decade. Everything they are doing in four days would have taken a semester in previous years, if it could have been done at all. Quality is also far better.

2) Give people tools and training and they can do amazing things. We are using a combination of Claude Code, Gemini, and ChatGPT. The non-coders are all building working products. But also, everyone is doing weeks of high-quality work on financials, research, pricing, positioning, and marketing in hours. All the tools are weird to use, even with some training, but they are figuring it out.

3) People with experience in an industry or skill have a huge advantage, as they can build solutions that have built-in markets and which solve known hard problems that seemed impossible. (This has always been true, but the barriers to actually doing stuff have fallen.)

4) The hardest thing to get across is that AI doesn't just do work for you; it also does new kinds of work. The most successful efforts often take advantage of the fact that the AI itself is very smart. How do you bring its analytical, creative, and empathetic abilities to bear on a problem? What do you do with access to a very smart intelligence on demand? I wish I had more frameworks to clearly teach.

So many assumptions about how to launch a business have clearly changed. You don't need to go through the same discovery process if you build a dozen ideas at the same time and get AI feedback. Many, many new possibilities, and the students really see how big a deal this is.
80 replies · 183 reposts · 1.8K likes · 125.6K views
John Robb@johnrobb·
This is a significant milestone on the "road to Turkdom" -- the algorithmic and AI-enabled variant of Hayek's "Road to Serfdom." This is where an AI-run economy that doesn't have data ownership inevitably ends up. Sure, you will find work, but it won't pay much, and you will be monitored and manipulated by AIs incessantly. Worse, the data generated by your work will be captured (aka stolen) and used to build AIs that can take your place, putting you on an endless treadmill:

Do a task → AI learns from it → AI replaces you → Retrain (borrow $$ to learn it) → Do that new task → AI learns from it
4 replies · 21 reposts · 55 likes · 5.2K views
Mark Finnern@finnern·
25 mid-sized companies, one live demo: an email customer-inquiry process taken from 10 minutes → 10 seconds. Reaction: "Fell off my chair." n8n + ChatGPT + Gemini. No data science team needed. The workflow 👇 linkedin.com/pulse/vom-stuh… via @LinkedIn
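The triage step in that kind of workflow boils down to: classify the inquiry, then route it to a templated draft reply. A minimal sketch follows; the actual demo used n8n with ChatGPT and Gemini, so the keyword classifier here is a hypothetical stand-in for the LLM call, and the categories and templates are invented for illustration:

```python
# Sketch of an email-triage step (hypothetical categories/templates;
# a keyword match stands in for the LLM classification used in the demo).
REPLY_TEMPLATES = {
    "invoice": "Thanks for reaching out about billing. We'll review your invoice.",
    "support": "Sorry to hear you're having trouble. A technician will follow up.",
    "general": "Thanks for your message. We'll get back to you shortly.",
}

def classify(email_text: str) -> str:
    """Stand-in for the LLM classification call."""
    text = email_text.lower()
    if "invoice" in text or "rechnung" in text:
        return "invoice"
    if "error" in text or "broken" in text:
        return "support"
    return "general"

def draft_reply(email_text: str) -> str:
    """Route the classified inquiry to a templated draft."""
    return REPLY_TEMPLATES[classify(email_text)]

print(draft_reply("My invoice from last month seems wrong."))
```

In the real pipeline the classifier and the draft would each be a model call wired up as n8n nodes; the structure (classify → route → draft) is the same.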
0 replies · 0 reposts · 1 like · 42 views
Mark Finnern reposted
John Robb@johnrobb·
blogs.nvidia.com/blog/starcloud… The energy costs in space are projected to be 10x cheaper than land-based options, even including launch expenses. "In 10 years, nearly all new data centers will be built in outer space."
5 replies · 15 reposts · 39 likes · 4.1K views
Mark Finnern reposted
Brian Roemmele@BrianRoemmele·
THIS is an example of high human intelligence and the scale of intelligence we must have in anything we call AGI. If it looks emotional, that is because IT IS. And if you do the decades of research I have, you will know this is the first principle of high intelligence. Feel it?
Brian Roemmele@BrianRoemmele

So Mr. @Grok, my colleague concurs with my absolutely required pathways for AGI, and offers insight on why companies like OpenAI are incapable of seeing this reality, which assures they will never reach what is truly AGI. But I'm here to help any large US company who asks for my help.

193 replies · 406 reposts · 3.4K likes · 423.6K views
Mark Finnern reposted
Steve Jurvetson@FutureJurvetson·
Updating Innovation — the effect of Starlink and AI

Innovation is critical to growth, progress, and the fate of humanity. While individual advances may be hard to forecast, a macro pattern emerges nevertheless, spanning centuries: the pace of innovation is perpetually accelerating, exogenous to the economy. Rather, it is the combinatorial explosion of possible innovation-pairings that creates economic growth.

Why do we have accelerating change? Start with Brian Arthur's observation that all new technologies are combinations of technologies that already exist. Innovation does not occur in a vacuum; it is a combination of ideas from before. In any academic field, the advances today are built on a large edifice of history. This is the foundation of progress, something that was not so evident before the age of science. Science tuned the process parameters for innovation and became the best method for humanity to learn.

From this conceptual base arises the origin of economic growth and accelerating technological change, as the combinatorial explosion of possible idea pairings grows exponentially as new ideas come into the mix. If there are n ideas at a given moment, there are on the order of 2^n possible sub-groupings or recombinations of those ideas (per Reed's Law). It explains the innovative power of urbanization and networked globalization. And it explains why interdisciplinary ideas are so powerfully disruptive, exploring idea pairings that others failed to see given the isolating vernacular and physical separation of the academic disciplines at most universities and in most professions. If novel innovation is what you seek, cognitive island-hopping is a good place to start, mining the interstices between academic disciplines. And this is why cognitive diversity is an essential driver of the wisdom-of-crowds effect in small teams.
Geoffrey West of the Santa Fe Institute argues that cities and tech hubs like Silicon Valley are an autocatalytic attractor and amplifier of innovation. People are more innovative and productive, on average, when they live in a city because ideas can cross-pollinate more easily. Proximity promotes memetic promiscuity, what Matt Ridley calls "ideas having sex." This positive network effect drives another positive feedback loop: by attracting the best and the brightest to flock to the salon of mind, the memeplex of modernity.

The Internet is a structural manifestation of the long arc of evolutionary indirection, whereby the vector of improvement has risen steadily up the ladder of abstractions from chemicals to genes to systems to networks. At each step, the pace of progress has leapt forward, making the prior vectors seem glacial in comparison; the composition of DNA and even a neuron is a static variable in modern evolution. And now, it's all about the ideas. We have moved from genetic to memetic evolution, and much like the long-spanning neuron (which took us beyond local and broadcast signaling among cells) ushered in the Cambrian explosion of differentiated and enormous body plans, the Internet brought long-spanning links between humans, engendering an explosion in idea space, straddling isolated pools of thought. Ideas have propagated and recombined more broadly over the past 30 years than in any prior epoch.

And it's just beginning. In the next handful of years, three billion new minds will come online for the first time to join this global conversation, thanks to Starlink providing low-cost broadband to unserved areas. These people are decoupled from the global economy today, but they will soon have access to online education and all of the economic potential of entrepreneurship and innovation. This alone should foster an innovation boom.

And then AI enters the chat. AI can bridge across all of the academic disciplines, beyond the capacity of any human mind.
Like a universal translator of languages across a common vector space, the AI models can merge our disparate idea pools like never before, finding patterns in processes and protocols and compounding the aggregate idea space humanity has accumulated into an integrated whole. The possible sub-groupings of n++ ideas will open wide, engendering a new combinatorial compounding of innovation. It may feel like a Cambrian explosion of future shock.
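The combinatorial claim above is easy to make concrete: with n ideas there are n(n-1)/2 pairwise combinations, but on the order of 2^n possible sub-groupings (Reed's Law). A small sketch of how quickly the two diverge:

```python
from math import comb

# Pairwise combinations vs Reed's-law sub-groupings as the idea pool grows.
# "Sub-groupings" here counts subsets of 2 or more ideas: 2^n minus the
# empty set and the n singletons.
for n in (10, 20, 30):
    pairs = comb(n, 2)
    subgroups = 2 ** n - n - 1
    print(f"n={n:>2}: {pairs:>6,} pairs vs {subgroups:>15,} sub-groupings")
```

At n=30 the pairwise count is still in the hundreds, while the sub-grouping count passes a billion, which is the intuition behind the "combinatorial explosion" of innovation-pairings.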
6 replies · 23 reposts · 159 likes · 19.2K views
Mark Finnern reposted
Steve Jurvetson@FutureJurvetson·
The largest fusion power purchase agreement ever ☀️🔜🤖 Google just procured 200MW of clean power directly from Commonwealth Fusion’s first commercial plant in Virginia, with an option to buy more. From science to engineering to mass manufacturing, Commonwealth is leading the charge to what many call the holy grail of clean energy — abundant, carbon-free, continuous 24/7 power in any geography. “We aim to demonstrate fusion’s ability to provide reliable, abundant, clean energy at the scale needed to unlock economic growth and improve modern living and enable the largest market transition in history.” — CEO Bob Mumgaard News: techcrunch.com/2025/06/30/goo…
14 replies · 60 reposts · 169 likes · 14.3K views