

Mark Lau
@aztec91mark
4x dad, 1x husband; profession - judge of venture firm green teas; interests in tech, investing, crypto, and sports cards; apolitical; Aztec4Life.






Legendary technologist Jaan Tallinn: "Extinction from godlike AI is not just possible, but imminent." "We are close." "AI will not leave any survivors." "On the current trajectory, you are not going to live very long." "A recent poll found that 88% of AI engineers think that AI could destroy the world."

PARTIAL TRANSCRIPT:

"Humanity is akin to a teenager with rapidly developing physical abilities, lagging wisdom and self-control, little thought for its long-term future, and an unhealthy appetite for risk.

There is an increasing consensus: Alan Turing, in 1951, predicted that we should expect to lose control to machines, and the inventor of deep learning itself, Geoff Hinton, is starting to have doubts about his life's work. There are now hundreds of AI experts sounding their alarm bells. A recent poll found that 76% of American voters believe AI is a threat to our existence. Just yesterday, there was news that one of the leading superforecaster groups published their estimate that AI catastrophic risk is 30%. 30%! The battle for establishing that AI is an existential risk, a battle that I spent roughly 15 years of my career on, has now all but been won.

I'm going to show that there are fundamental reasons why godlike AI will not leave any survivors; that we are now close to such AI but have no idea how to align it; and that skeptics' counterarguments are, sadly, extremely weak.

[AI will be like a new apex species. And humans, an apex species, have driven countless other species to extinction.]

Godlike AI will not care about humans because of a dirty secret of the AI industry: AIs are not built, they are grown. The 'p' in ChatGPT stands for pretrained. Pretraining, or 'summoning', is a process where a simple two-page program is soaked in terabytes of data and megawatts of electricity and left like that for months. And then, after that, attempts are made to tame the emergent alien mind. Importantly, those methods of taming rely on the AI being less competent than the humans who are taming it now.

The reason why we expect that we are close to godlike AI is that the trend of AI getting more powerful is now visible to everyone. It's obvious. Just look at the capability differences between GPT-2, GPT-3, and GPT-4. GPT-2 was released in 2019. A simple extrapolation would take us to GPT-7 before this decade is over.

So, in summary, we are blindly growing increasingly competent minds while hoping that they are not so competent that they spin out of control and destroy our living environment. Unfortunately, that hope is not justified, which explains the increasing anxiety among AI developers themselves.

Of course, at this point, just like a patient who has received a terminal diagnosis, you are encouraged to seek a second opinion. Unfortunately, having been part of this debate for more than a decade, I already know what you're going to hear.

First, labeling. These are arguments like 'Oh, this is science fiction,' or 'This is alarmism,' 'These are doomsayers,' 'Don't listen to people with that non-virtuous property, X.'

Second, frame control. 'AI is like X, and X is very nice, right?' This has now reached grotesque levels. One prominent VC claimed recently that 'AI is basically just math, so why should we worry?' Imagine the captain of the Titanic announcing, 'Don't worry, passengers, this is just water.'

Third class of arguments, human supremacy. 'AI can never do X,' or 'We are very far from AI doing X.' Unfortunately, reality has been a very harsh judge here recently. The set of things that only humans can do is collapsing really rapidly.

There's now a growing global consensus that unregulated, blind AI scaling is reckless and dangerous. So we need to constrain AI, or ban AI altogether, just like we banned human cloning. You have received a terminal diagnosis. Please don't simply ignore it."

--- Jaan Tallinn is a founder of Skype and Kazaa


Power 5 college basketball teams with the most projected returning minutes, per Torvik:
Purdue: 69.7%
Arkansas: 59.3%
Marquette: 53.8%
Stanford: 53.8%
Notre Dame: 53.5%
SMU: 50.8%
BYU: 49.9%
Ohio State: 49.8%
Northwestern: 49.7%
VT: 46.5%
Illinois: 45.0%
UCLA/ISU: 43.0%
MSU: 41.7%









