Mark Lau

4.7K posts

@aztec91mark

4x dad, 1x husband; profession: judge of venture firm green teas; interests in tech, investing, crypto, and sports cards; apolitical; Aztec4Life.

SF, San Diego, and Tampa bays · Joined April 2008
1.4K Following · 1.3K Followers
Mark Lau@aztec91mark·
Very disappointed by the poor customer service from @MarriottBonvoy. 50+ nights this year at Marriotts, with two more trips coming, and customer service agents act like robots when I ask for help. Canceling credit card and won't be staying there much going forward. cc @MBonvoyAssist
3
0
2
365
Peter Bukowski@Peter_Bukowski·
That's so obviously grounding lmao
15
9
275
12.5K
Peter Bukowski@Peter_Bukowski·
This pass rush beat the shit out of the Lions offensive line and can't get going against a Cowboys line with two backups.
GIF
24
17
407
21.1K
Peter Bukowski@Peter_Bukowski·
Why can't Bo Melton return punts again?
16
5
215
11.8K
Peter Bukowski@Peter_Bukowski·
Just build the whole plane out of Tucker Kraft
10
5
264
16.7K
Mark Lau@aztec91mark·
Hey @Peter_Bukowski how’s that not “unnecessary”? That was well beyond the white boundary.
0
0
0
66
Peter Bukowski@Peter_Bukowski·
The Packers are physically incapable of putting a team away. This is embarrassing.
199
96
2.6K
82K
Mark Lau@aztec91mark·
@Peter_Bukowski How much longer do we have to watch this? Need to shore that up, stat!!!
0
0
1
425
Peter Bukowski@Peter_Bukowski·
The Packers didn't want to pay Darren Rizzi when LaFleur was first hired, and their special teams have been a shitshow ever since.
34
31
763
37K
Mark Lau@aztec91mark·
@AISafetyMemes So what’s the “enlightened person” to do with this information? Panic? Have we moved beyond the ability to control and influence the AI?
0
0
0
69
AI Notkilleveryoneism Memes ⏸️
At an exclusive event of world leaders, Paul Tudor Jones says a top AI leader warned everyone: “It's going to take an accident where 50 to 100 million people die to make the world take the threat of this really seriously… I'm buying 100 acres in the Midwest, cattle, chickens."

He says ALL of the top AI leaders present agreed there was a 10% chance of more than 50% of humanity being exterminated by AI.

(SIDEBAR: These are INSANE levels of risk, especially coming from insiders *in* the field, which is totally unprecedented - industry always downplays risks to avoid oversight - and we would NEVER accept risk levels like this in other fields. Don't become numb to this insanity. Civil engineers build mere *pedestrian bridges* capable of supporting "one in a million" type events. Imagine a bridge with all 8 billion people on it - and the engineers themselves say it has a 1 in 10 chance of collapsing!)

---------

Andrew Ross Sorkin: I was going to ask you about the stock market itself, but then you just said something to me which makes me a little bit nervous, which is your focus is less on that right this moment than on artificial intelligence. What do you mean?

Paul Tudor Jones: Well, let me just say: I was minding my business—minding my business. I went to this tech conference about two weeks ago out West, and I just want to share with you what I learned there. Chatham House Rules, so we can talk about the content. It was a small one—forty notables, but real notables, like household names that you would recognize: the leaders in finance, politics, science, tech. The panel that disturbed me the most—is that AI clearly poses an imminent threat, a security threat—imminent in our lifetimes—to humanity. And that—that was the one that really, really got me.

Sorkin: When you say “imminent threat,” what do you mean?

Jones: So I’ll get to it. They had a panel of, again, four of the leading tech experts, and about halfway through someone asked them on AI security, “Well, what are you doing on AI security?” And they said, “The competitive dynamic is so intense among the companies, and then geopolitically between Russia and China, that there’s no agency—no ability to stop and say, ‘Maybe we should think about what we’re actually creating and building here.’” And so then the follow-up question is, “Well, what are you doing about it?” He said, “Well, I’m buying a hundred acres in the Midwest, I’m getting cattle and chickens, and I’m laying in provisions.”

Sorkin: For real?

Jones: For real, for real. And that was obviously a little disconcerting. And then he went on to say, “I think it’s going to take an accident where fifty to a hundred million people die to make the world take the threat of this really seriously.” Well, that was—that was freaky-deaky to me. And no one pushed back on him on that panel. And then afterwards we had a breakout session, which was really interesting. All forty people got up in a room like this, and they had a series of propositions and you had to either agree with or disagree with the proposition. And one of the propositions was: “There’s a ten-percent chance in the next twenty years that AI will kill fifty percent of humanity.” So there’s a 10% chance that AI will kill 50% of humanity in the next twenty years—agree or disagree. So I’d say the vast majority of the room moved to the disagree side. Elon Musk said there’s a 20% chance that AI will annihilate humanity. Now I know why he wants to go to Mars, right? And so about six or seven of us went to the agree side. And I’d gone there because of what I’d heard Elon Musk say, who’s maybe the most brilliant engineer of our time. All four modelers were on the agree side of that—all four of the leading developers of the AI models were on that side.

And then we debated—then the two sides got to debate—and one of the modelers says to the disagree side, “If you don’t think there’s a 10% chance, as fast as these models are growing and how quickly they’re commoditizing knowledge, how easily they’re making it accessible, that someone couldn’t make a bioweapon that could take out half of humanity, I don’t know, 10% seems… seems reasonable to me.”

Sorkin: Okay, so thank you for bringing us great, great news over breakfast.

Jones: I’m not a tech expert, but I’ve spent my whole life managing risk. And we just have to realize, to their credit, all these folks in AI are telling us we’re creating something that’s really dangerous. It’s going to be really great, too, but we’re helpless to do anything about it. That’s, to their credit, what they’re telling us, and yet we’re doing nothing right now, and it’s really disturbing.

(h/t @AndrewCurran_ for finding the clip!)
AI Notkilleveryoneism Memes ⏸️@AISafetyMemes

Legendary technologist Jaan Tallinn: “Extinction from godlike AI is not just possible, but imminent.” “We are close.” “AI will not leave any survivors.” “On the current trajectory, you are not going to live very long.” “A recent poll found that 88% of AI engineers think that AI could destroy the world.”

PARTIAL TRANSCRIPT:

“Humanity is akin to a teenager with rapidly developing physical abilities, lagging wisdom and self-control, little thought for its long-term future, and an unhealthy appetite for risk.

There is an increasing consensus: Alan Turing, in 1951, predicted that we should expect to lose control to machines, and the inventor of deep learning itself, Geoff Hinton, is starting to have doubts about his life's work. There are now hundreds of AI experts sounding their alarm bells. A recent poll found that 76% of American voters believe AI is a threat to our existence. Just yesterday, there was news that one of the leading superforecaster groups published their prediction that their estimate for AI catastrophic risk is 30%. 30%! The battle for establishing that AI is an existential risk, a battle that I spent roughly 15 years of my career on, has now been all but won.

I'm going to show that there are fundamental reasons why godlike AI will not leave any survivors. That we are now close to such AI but have no idea how to align it. And how skeptics’ counterarguments are, sadly, extremely weak.

[AI will be like a new apex species. And humans - an apex species - have driven countless other species to extinction.]

Godlike AI will not care about humans because of a dirty secret of the AI industry: AIs are not built, they are grown. The ‘p’ in ChatGPT stands for pretrained. Pretraining - "summoning" - is a process where a simple two-page program is soaked in terabytes of data and megawatts of electricity and left like that for months. And then, after that, attempts are made to tame the emergent alien mind. Importantly, those methods of taming rely on the AI being less competent than the humans who are taming it now.

The reason why we expect that we are close to godlike AI is that the trend of AI getting more powerful is now visible to everyone. It's obvious. Just look at capability differences between GPT-2, GPT-3, and GPT-4. GPT-2 was released in 2019. A simple extrapolation would take us to GPT-7 before this decade is over.

So, in summary, we are blindly growing increasingly competent minds while hoping that they are not so competent that they spin out of control and destroy our living environment. Unfortunately, that hope is not justified, which explains increasing anxiety among the AI developers themselves.

Of course, at this point, just like a patient that has received a terminal diagnosis, you are encouraged to seek a second opinion. Unfortunately, having been part of this debate for more than a decade, I already know what you're going to hear.

First, labeling. These are arguments like “Oh, this is science fiction,” “This is alarmism,” “These are doomsayers,” “Don't listen to people with that non-virtuous property, X.”

Second, frame control. “AI is like X, and X is very nice, right?” This has now reached grotesque levels. One prominent VC claimed recently that “AI is basically just math, so why should we worry?” Imagine the captain of the Titanic announcing, “Don't worry, passengers, this is just water.”

Third class of arguments, human supremacy. “AI can never do X” or “we are very far from AI doing X.” Unfortunately, reality has been a very harsh judge here recently. The set of things that only humans can do is collapsing really rapidly.

There's now growing global consensus that unregulated, blind AI scaling is reckless and dangerous. So we need to constrain AI or ban AI altogether, just like we banned human cloning. You have received a terminal diagnosis. Please don't simply ignore it.”

---

Jaan Tallinn is a founder of Skype and Kazaa.

80
236
1.1K
720K
Mark Lau@aztec91mark·
QQ for my X friends: am I OK to be annoyed when I tip 15% on a takeout order at a place that isn’t busy at all and don’t even get a “thanks”? OK, I’ve vented. Moving on.
0
0
0
96
DataRepublican (small r)@DataRepublican·
Hello Senator Warren,

Principles mean nothing if they are selectively applied. You are concerned with corporate conflicts of interest, yet you've never spoken about the self-dealing within your own institution: Congress. Right now, sitting members of Congress and their close allies sit on the boards of taxpayer-funded NGOs drawing billions. Here are the names:

National Endowment for Democracy:
🔵 Karen Bass – Vice Chair of the National Endowment for Democracy; former U.S. Representative and current Mayor of Los Angeles (Democrat).
🔴 Elise Stefanik – Director at the National Endowment for Democracy; U.S. Representative from New York and House GOP Conference Chair (Republican).
🔴 Mel Martinez – Director at the National Endowment for Democracy; former U.S. Senator from Florida (Republican).
🔴 Peter Roskam – Vice Chair at the National Endowment for Democracy; former U.S. Representative from Illinois (Republican).
🔴 Steve Biegun – Director at the National Endowment for Democracy; former U.S. Deputy Secretary of State (Republican).

International Republican Institute:
🔴 Dan Sullivan – Chairman; U.S. Senator from Alaska.
🔴 Lindsey Graham – Director; U.S. Senator from South Carolina.
🔴 Joni Ernst – Director; U.S. Senator from Iowa.
🔴 Tom Cotton – Director; U.S. Senator from Arkansas.
🔴 Kelly Ayotte – Director; former U.S. Senator from New Hampshire.

National Democratic Institute:
🔵 Barbara Mikulski – Director; longest-serving woman in the U.S. Senate, former Maryland Senator (Democrat).
🔵 Thomas Daschle – Chairman; former Senate Majority Leader, key figure in Democratic legislative strategy (Democrat).
🔵 Stacey Abrams – Director; high-profile Georgia political leader, voting rights advocate, and former gubernatorial candidate (Democrat).
🔵 Donna Brazile – Director; veteran Democratic strategist, former DNC chair, and political commentator (Democrat).

Here's how the game works. Congress approves funding for USAID and State Department grants. Those funds are then funneled to NGOs whose boards are stacked with former colleagues and friends. And to close the loop, the same insiders are placed on advisory panels (ACVFA) that help decide where those funds go, including IRI and NDI. That's patronage wrapped in a nonprofit shell.

If you're serious about rooting out corruption, then start at home. Demand a ban on any current or former member of Congress, and their senior staff, from sitting on the boards of organizations receiving federal funds. Strip these NGOs of their special status and public money.

Yet you haven't even acknowledged this obvious ethical failure. Why? Because it implicates friends and colleagues? Because it's bipartisan? You talk about "reining in." Now prove it. Call for legislation that would shut down this taxpayer-subsidized insider club. You say you fight for the public interest. Then fight for it where you are right now, in Congress. But you won't. We all know you won't.
605
4.3K
16.7K
192.9K
Elizabeth Warren@ewarren·
Unelected billionaire Elon Musk should not be acting as co-president & making $8 million a day from government contracts while he’s at it. I’ve got a bill to rein in Musk by cracking down on conflicts of interest & creating stronger ethics rules for Special Government Employees.
2.1K
606
2.7K
235.6K
Mark Lau@aztec91mark·
Rockets seem like they’re too good to have to resort to this bully-ball tactic. I think if they just played straight-up ball, they’d be fine. And this is coming from a lifelong Dubs fan.
0
0
0
108
Mark Lau@aztec91mark·
@JaSno1377 Did your innie or outie create that?
1
0
1
36
JaSno@JaSno1377·
GA
2
0
5
99
San Diego State Men's Golf@AztecMGolf·
The Aztecs (-44) extended their lead (22 strokes) after two rounds of the Mountain West Championship. Justin Hastings matched a tourney (and school) record with a 10-under 62 today and leads by four strokes. goaztecs.com/news/2025/04/2…
3
9
56
8.4K
Mark Lau@aztec91mark·
@elonmusk Why doesn’t this get more discussion? I find it very concerning given OpenAI’s seeming market ubiquity and war chest. Where is the parental guidance there?
0
0
3
629
Mark Lau@aztec91mark·
Love seeing teams ahead of the Packers draft WRs when we’ve already taken two. Leave us the D studs.
0
0
1
181