Dean Donlon
@DeanDonlon
2.6K posts

Drive Business Growth Through Digital Advertising, AI Empowerment, & Operations Automation

N 43°39'0" / W 116°11'0" · Joined April 2009
24 Following · 247 Followers
Dean Donlon retweeted
Chamath Palihapitiya @chamath
There is a US corporation that has a $50B balance sheet, generates a 10% ROIC (~$5B of income per year) but has also convinced the US Government that:
a) it shouldn’t pay taxes and should be treated as a non-profit
b) despite making billions more than necessary to cover costs, only spends the government-mandated non-profit minimum of what it makes every year, creating an artificial $1.5B deficit
c) then asks the US government for help and gets $1.5B of taxpayer dollars every year to cover the gap that it could cover by itself but chooses not to
d) while getting government funding, has been accused and found guilty by the Supreme Court of systemic racial discrimination towards minority groups
This corporation is called the Harvard Corporation.
372 replies · 1.9K reposts · 10.8K likes · 1M views
Dean Donlon retweeted
Chamath Palihapitiya @chamath
1) Yes it does cheapen the grade
2) If it’s happening at Yale it’s happening at every “elite” school
3) This is so insidious because it robs someone of learning to be resilient and say “I got a C - how do I do better?” and replaces it instead with “I’m smart and always right”
These schools are so broken in so many ways.
[image attached to tweet]
352 replies · 484 reposts · 4.7K likes · 786.5K views
A.D. Besemer @besemer_amanda
@pmarca Concur, absolutely. We need to organize and have more seats at the table. Let's begin by setting our own table, lobbying, and campaigning. Happy to help, Marc. Let's do it!
1 reply · 0 reposts · 1 like · 164 views
Marc Andreessen 🇺🇸 @pmarca
I am calling for a total and complete shutdown of AI Doomerism until we can figure out what is going on.
Jordan Chase-Young @jachaseyoung

FINALLY: AI x-risker Nick Bostrom regrets focusing on AI risk, now worries that our fearful herd mentality will drive us to crush AI and destroy our future potential. (from an UnHerd podcast today)

Nick Bostrom: It would be tragic if we never developed advanced artificial intelligence. I think it's a kind of portal through which humanity will at some point have to pass, that all the paths to really great futures ultimately lead through the development of machine superintelligence, but that this actual transition itself will be associated with major risks, and we need to be super careful to get that right. But I've started slightly worrying now, in the last year or so, that we might overshoot with this increase in attention to the risks and downsides, which I think is welcome, because before that this was neglected for decades. We could have used that time to be in a much better position now, but people didn't. Anyway, it's starting to get more of the attention it deserves, which is great, and it still seems unlikely, but less unlikely than it did a year ago, that we might overshoot and get to the point of a permafrost--like, some situation where AI is never developed.

Flo Read: Like a kind of AI nihilism that would come from being so afraid?

NB: Yeah. So stigmatized that it just becomes impossible for anybody to say anything positive about it, and then we get one of these other lock-in effects, like with the other AI tools, from surveillance and propaganda and censorship, and whatever the sort of orthodoxy is--five years from now, ten years from now, whatever--that sort of gets locked in somehow, and we then never take this next step. I think that would be very tragic. I still think it's unlikely, but certainly more likely than even just six or twelve months ago. If you just plot the change in public attitude and policymaker attitude, and you sort of think what's happened in the last year--if that continues to happen the next year and the year after and the year after that, then we'll pretty much be there as a kind of permanent ban on AI, and I think that could be very bad. I still think we need to move to a greater level of concern than we currently have, but I would want us to sort of reach the optimal level of concern and then stop there rather than just kind of continue--

FR: We need to get to a kind of Goldilocks level of feeling about AI.

NB: Yeah. I'm worrying that it's like a big wrecking ball that you can't really control in a fine-grained way. People like to move in herds, and they get an idea, and then--you know how people are. I worry a little bit about it becoming a big social stampede to say negative things about AI and then it just running completely out of control and sort of destroying the future in that way instead. Then, of course, we go extinct through some other method instead, maybe synthetic biology, without even ever getting at least to roll the die with the...

FR: So, it's sort of a 'pick your poison'.

NB: Yeah.

FR: It just so happens that this poison might kill you or might poison you, and you just kind of have to roll the dice on it.

NB: Yes. I think there's a bunch of stuff we could do to improve the odds on the sequence of different things and stuff like that, and we should do all of those.

FR: Being a scholar of existential risk, though, I suppose, puts you in the category or the camp of people who are often--this show being an example--asked to speak about the terrifying hypothetical futures that AI could draw us to. Do you regret that focus on risk?

NB: Yeah, because I think, now--there was this deficit for decades. It was obvious--to me at least, but it should have been pretty obvious--that eventually AI was gonna succeed, and then we were gonna be confronted with this problem of, "How do we control them and what do we do with them?" and then that's gonna be really hard and therefore risky, and that was just neglected. There were like 10,000 people building AI, but like five or something thinking about how we would control them if we actually succeeded. But now that's changed, and this is recognized, so I think there's less need now maybe to add more to the sort of concern bucket.

FR: The doomerist work is done, and now you can go and do other things.

NB: Yeah. It's hard, because it's always a wobbly thing, and different groups of people have different views, and there are still people dismissing the risks or not thinking about them. I would think the optimal level of concern is slightly greater than what we currently have, so I still think there should be more concern. It's more dangerous than most people have realized, but I'm just starting to worry about it then kind of overshooting that, and the conclusion being, "Well, let's wait for a thousand years before we do that," and then, of course, it's unlikely that our civilization would remain on track for a thousand years, and...

FR: So we're damned if we do and damned if we don't.

NB: We will hopefully be fine either way, but I think I would like the AI before some radical biotech revolution. If you think about it, if you first get some sort of super-advanced synthetic biology, that might kill us. But if we're lucky, we survive it. Then, maybe you invent some super-advanced molecular nanotechnology; that might kill us, but if we're lucky we survive that. And then you do the AI. Then, maybe that will kill us, or if we're lucky we survive that and then we get to utopia. Well, then you have to get through sort of three separate existential risks--first the biotech risks, plus the nanotech risks, plus the AI risks, whereas if we get AI first, maybe that will kill us, but if not, we get through that, then I think that will handle the biotech and nanotech risks, and so the total amount of existential risk on that second trajectory would sort of be less than on the former. Now, it's more complicated than that, because we need some time to prepare for the AI, but you can start to think about sort of optimal trajectories rather than a very simplistic binary question of, "Is technology X good or bad?" We might more think, "On the margin, which ones should we try to accelerate, which ones retard?" And you get a more nuanced picture of the field of possible interventions that way, I think.

74 replies · 162 reposts · 1.6K likes · 250.2K views
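A minimal formalization of Bostrom's sequencing argument above, as a toy probability model (the independence assumption and the survival probabilities $s_b$, $s_n$, $s_a$ are illustrative, not from the transcript):

$$P(\text{survive} \mid \text{bio} \rightarrow \text{nano} \rightarrow \text{AI}) = s_b \cdot s_n \cdot s_a, \qquad P(\text{survive} \mid \text{AI first}) \approx s_a$$

The AI-first path drops the $s_b \cdot s_n$ factor because, on Bostrom's premise, a safely developed superintelligence then handles the biotech and nanotech risks. With $s_b = s_n = s_a = 0.9$, for instance, surviving all three risks in sequence gives $0.9^3 \approx 0.73$, versus $0.9$ for AI first, which is the sense in which total existential risk is lower on the second trajectory.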
Dean Donlon @DeanDonlon
@pmarca You're an idiot who wants to curb evolution. Like they say in Grumpy Old Men, you should pull your bottom lip over your head and swallow to save the world from your ignorance.
0 replies · 0 reposts · 0 likes · 4 views
Dean Donlon @DeanDonlon
@ianmiles Worthless beings who don't even deserve the shoes on their feet. Store owners should have a shoot-at-will law.
0 replies · 0 reposts · 0 likes · 1 view
Ian Miles Cheong @ianmiles
Pose for the camera. Take anything you want off the shelves. Do whatever you want. It’s all for the taking. Welcome to a world without accountability.
3.9K replies · 4.3K reposts · 16K likes · 5.5M views
Dean Donlon retweeted
Chamath Palihapitiya @chamath
I was sent this chart and found the implication, if true, important. Many people derided the reduction in force that happened at Twitter/X and the firing of 80% of the company. But it turned out that the company wasn't only no worse for wear, but is seeing record usage since streamlining its workforce and OpEx. Well, if the chart below is true, it is a path that many other companies will have to take. If you're a public company, I don't see how you can defend yourself from activists as AI tools proliferate. You have two choices:
1) double your work product and quantity of code shipped and product value as you keep headcount steady, OR
2) reduce your R&D/OpEx by 50% and have half the team + AI tools do the work that the entire team used to do before.
FWIW, I don't see how companies can empower their employees with tools and claim they have doubled their productivity unless revenue also doubles. So the latter (#2) seems like the most obvious path that shareholders will push for, in no small part because of the SBC-based dilution they would also save if this happened.
[image: the chart referenced in the tweet]
229 replies · 355 reposts · 2.7K likes · 955.8K views
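A minimal sketch in Python of the two options described in the tweet above; every figure (headcount, cost per head, the 2x productivity multiplier) is a hypothetical illustration, since the tweet specifies only the 50% cost reduction:

# Toy model of the two options in the tweet above. All numbers are
# hypothetical illustrations; the tweet specifies only the 50% cut.
headcount = 1000             # engineers before AI tooling (assumed)
cost_per_head = 300_000      # fully loaded annual cost, USD (assumed)
ai_multiplier = 2.0          # assumed per-engineer productivity gain

# Option 1: keep headcount steady and ship double the work product.
opex_option_1 = headcount * cost_per_head
output_option_1 = 1.0 * ai_multiplier             # 2x output, same OpEx

# Option 2: cut the team in half; half the team plus AI tools match
# the output the whole team produced before.
opex_option_2 = (headcount // 2) * cost_per_head  # R&D/OpEx down 50%
output_option_2 = 1.0                             # same output, half OpEx

print(f"Option 1: {output_option_1:.1f}x output at ${opex_option_1:,}/yr")
print(f"Option 2: {output_option_2:.1f}x output at ${opex_option_2:,}/yr")

# Unless revenue scales with the doubled output (Chamath's caveat),
# only option 2 shows up directly in earnings, which is why he expects
# shareholders to push for it.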