Tryfecta⏸️

2.7K posts


@tryfectaa

Joined October 2023
917 Following · 85 Followers
Tryfecta⏸️@tryfectaa·
@VoidStateKate what if there are kinds of psychological therapy that encourage the client to physically process and move through their emotions?
0 replies · 0 reposts · 0 likes · 2 views
VOID@VoidStateKate·
What if psychological therapy was the biggest psyop of all. Encouraging everyone to intellectualize and speak rather than physically process and move through their emotions.
34 replies · 8 reposts · 78 likes · 1.6K views
Tryfecta⏸️ retweeted
Peter Wildeford🇺🇸🚀@peterwildeford·
BERNIE SANDERS: "In a sane world, the leadership of the United States sits down with the leadership in China... to work together so that we don't go over the edge and create a technology which could perhaps destroy humanity."
Acyn@Acyn

Sanders on AI: We need to develop a sense of urgency here. The economic impacts are going to be enormous. The impacts on our children will be enormous, and again, there is literally an existential threat to the existence of the human race.

4 replies · 7 reposts · 68 likes · 2.3K views
Tryfecta⏸️ retweeted
Guido Reichstadter@wolflovesmelon·
I don't know Demis, but just maybe reality is screaming at you "DON'T BUILD SUPERINTELLIGENCE YOU BLOODY WANKER!"
[image attached]
0 replies · 1 repost · 5 likes · 117 views
Tryfecta⏸️ retweeted
Tom Bibby@tombibbys·
"It is a little disconcerting that the doomsayers include a godfather of AI, an inventor of ChatGPT, and the chief AGI scientist at Google DeepMind—while the optimists are represented by a physicist who got famous on Twitter and the founder of LinkedIn."
[image attached]
1 reply · 1 repost · 14 likes · 405 views
Tryfecta⏸️@tryfectaa·
@David_Kasten @tombibbys The abrupt extinction of humanity is not a metaphor for us. Don't fuck with petty misdirection and sophistry. We don't have time for your bullshit.
0 replies · 0 reposts · 1 like · 4 views
dave kasten@David_Kasten·
Okay, I'm going to try one more time, since it seems that PauseAI is weirdly obsessed with this metaphor (Holly uses it too) and NOT getting the point about why it's insanely counterproductive.

Great, I guess, that the UK, after permitting the slave trade, flipped sides and stopped it. (Let's politely move past what your Empire did elsewhere, though I do hear that the Imperial War Museum's exhibit about the Kenya Emergency, closing Sunday, is excellent.)

In America, saying that someone is defending the slave trade is saying, bluntly, "you are defending a system where chattel slave owners worked men to death, raped women, and then sold _their own children_ on into slavery to fund their vile lifestyles." You're defending a system that preferred to break the Union in a spasm of stupid, bloody, hopeless treasonous war rather than even contemplate its _slow peaceful decline_ as the result of a vote. You're defending monstrosity and pointless horror.

I truly don't know where in America Dean grew up. But where _I_ grew up, saying that someone is equivalent to those defending the evils of slavery is a great way to start a bar fight. This isn't a metaphor for us. Don't use it that way.
2 replies · 0 reposts · 3 likes · 81 views
Tom Bibby@tombibbys·
Dean Ball in 1805.
[image attached]
Dean W. Ball@deanwball

First of all, your organization absolutely does not represent the views of everyone who wants to "pause" or "stop" AI, so it is false to say that I "misunderstood" your proposal. I was not referring to your specific proposal in any of my tweets.

Second, I do not agree with you that any of this is "likely" to "end" civilization, exterminate humanity, etc. I certainly do believe there are major risks, and I have spent years developing and advocating for policies I believe would address some of those risks. There are other risks about which I have substantial uncertainty, and I have been consistently straightforward about this uncertainty. You, on the other hand, seem to have near-certainty about every bad AI outcome, so long as it gets people on your side. This, in my mind, diminishes your credibility.

Third, you claim to want an international treaty (enforced by whom?) signed presumably by a broader group of countries than just the U.S. and China (which ones?) that would ban AI development until some future point (how would we define that point?). My observation is that any such treaty would have to involve capital controls and controls on the free movement of people, unless all countries on Earth with the ability to host large-scale data centers (i.e., almost all of them) were signatories. You claim it is merely a compute governance regime you desire, but what happens when the AI researchers and semiconductor designers and manufacturing engineers are given 9-figure offers to move to God-knows-where to work on a new AI or semiconductor venture? Does the government simply allow that flow of money and people to occur? No. So what you *really* want is a regime that controls (optimally, stops) the flow of trillions of dollars in goods (all advanced AI compute plus all semiconductor manufacturing equipment, as long as it is in service of making advanced AI compute; and by the way, how would you tell the difference between a fab making smartphone chips and one making GPUs? Inspectors in the fabs? Who supplies the inspectors?), all AI researchers, and all investment dollars that could be tied to AI research or to computing power (what about quantum machine learning, by the way? Neuromorphic computing? Other new paradigms?).

It was actually charitable of me to assume that you'd want this to be enforced by, e.g., existing export control regimes within a country. But it seems like you are saying, no, we wouldn't have e.g. BIS or MOFCOM do this; we'd have a new international body with "democratic control" (a globally elected president of AI? Who runs the elections? Are there campaigns? Who is allowed to donate to said campaigns?) staffed with thousands of people, with a budget easily in the billions, with sweeping power to control flows of goods, people, and money that fundamentally implicates ancient principles of national sovereignty. And you're doing all this at a time when international institutions of governance and international collaboration in general have been fraying, to say the least. All of this makes me think you are biting off much more than you can chew, to be sure.

I don't want to accuse you of desiring authoritarianism, because I truthfully don't know whether you even understand what it is that you are advocating for. It does not seem, based on your response to me, that you have really thought about what implementation would require here, and so my guess is that your proposal is not malicious or desirous of tyranny, but instead simply naive and incomplete. And by the way, the hand-waviness of your policy prescriptions is nothing compared to the hand-waviness of your understanding of artificial intelligence, its likely trajectory, and relevant threat models.

You don't even seem to feel a need to explain your position (I myself just wrote 5k words explaining my views on existential risk, and have written many thousands on the threat models I take more seriously), indicating to me that you live in a bubble where the hard-nosed and concrete questions do not get asked. You suggest that I am unserious, interested as I am in the "interesting governance challenges" you seem to dismiss in comparison to what you seem sure is your focus on the "big picture." But do you know what my work has produced? Enacted laws. Dozens of policies being executed as we speak by the largest bureaucracy in the history of mankind. Ideas that have shifted the thinking of people whose decisions will matter. Is it everything? No, it is not. My contributions will ultimately be small. But I put my back into what I do, when the logical move for someone like me would have been to go take a cushy job in the industry after I left the government.

Do not ever suggest to me that I do not care deeply about what I am doing, and do not ever question the intellectual integrity of a person you do not know. At the very least, do not do it to me.

9 replies · 9 reposts · 97 likes · 7.7K views
Tryfecta⏸️@tryfectaa·
@David_Kasten @tombibbys Alright well I grew up in America and people talk about slavery just about all the fucking time, jokes and everything. Are you a fucking snowflake? Grow a pair
0 replies · 0 reposts · 1 like · 6 views
Tryfecta⏸️@tryfectaa·
@FournesMaxime @deanwball @PauseAI If I am being gracious, what's driving Dean is actually a subconscious projection of his own terrible sense of shame and regret for his hand in AI policy and discussion that brought us to this point.
1 reply · 0 reposts · 0 likes · 10 views
Maxime Fournes⏸️@FournesMaxime·
This framing is obviously disingenuous, but sure! I'll play along and answer directly.

Points 2 through 4 rest on the same wrong assumption: that a pause means controlling people. It means controlling compute. Training a frontier model requires data centers that use as much power as a small city, chips from a handful of manufacturers (TSMC, ASML), and months of continuous runs. This is infrastructure visible from space, far easier to track than fissile material. We built a verification regime for nuclear material 70 years ago. Nobody revoked physicists' passports to do it.

On metrics (point 1): graduated capability thresholds, mandatory safety evaluations, an international technical body to oversee it. On scope (point 5): yes, it needs to include the 5-10 countries with real compute capacity. Both of these are hard. So was the IAEA. So was the Montreal Protocol. Hard is not a reason to do nothing while the race continues.

Now on the framing itself. These questions assume that if I can't hand @deanwball a finished treaty, the case for a pause collapses. But that gets the order of operations entirely backwards. The Montreal Protocol was not designed by the people who first raised the alarm about CFCs. The IAEA was not drawn up by anti-nuclear campaigners. What happened every time is that public concern built political will, then governments mandated their best people to design the technical solutions.

If we put a serious team on this, a handful of top scientists with real funding and a DARPA-style mandate, they could design a workable compute governance regime. The supply chain is concentrated, the infrastructure is massive and energy-intensive, the tracking problem is tractable. Do not pretend that this is some unsolvable mystery. The only real problem is that the political will is still insufficient (although growing fast).

Demanding a full implementation blueprint from an advocacy movement before engaging with the substance is a tactic we've seen many times.

The actual question Dean should be engaging with is simpler and harder: multiple actors are racing to build something that is likely to end civilization as we know it, and none of them can stop alone. What is his plan for that?
Dean W. Ball@deanwball

Here are some questions I wish "Pause" and "Stop" advocates would address:

1. Assuming we achieve the desired policy goal through a bilateral US/China agreement, what would be the specific metrics or objectives that would need to be satisfied in advance? Who decides whether we have satisfied them? What if one party believes we have satisfied them but the other does not?

2. If the goal is achieved through a bilateral US/China agreement, would we need capital controls to ensure that U.S. investors cannot fund semiconductor fabs, data centers, or AI research labs in countries other than the U.S. and China?

3. Would we need to revoke the passports of U.S.-based AI researchers and semiconductor engineers to prevent them leaving America to join AI-related ventures elsewhere? How else would the U.S. and China keep researchers within their borders?

4. How should we grapple with the fact that (2) and (3) are common features of autocratic regimes?

5. Do the above questions mean that this really should be a global agreement, signed by all countries on Earth, or at least those with the theoretical ability to host large-scale data centers (probably Vanuatu doesn't need to be on board)?

8 replies · 5 reposts · 52 likes · 6.7K views
Tryfecta⏸️ retweeted
Champagne Joshi@JoshWalkos·
This push to achieve “AGI” will go down in history as one of the most unhinged, irresponsible things ever perpetrated on humanity by a very small group of megalomaniac humans.
246 replies · 452 reposts · 2.3K likes · 91.4K views
Tryfecta⏸️ retweeted
Acyn@Acyn·
Sanders on AI: We need to develop a sense of urgency here. The economic impacts are going to be enormous. The impacts on our children will be enormous, and again, there is literally an existential threat to the existence of the human race.
100 replies · 594 reposts · 2.9K likes · 196.4K views
Tryfecta⏸️ retweeted
Sen. Bernie Sanders@SenSanders·
Call me a radical, but NO. We should not be replacing teachers in America with robots. We should attract the best and brightest in our country to become teachers and pay them the decent wages that they deserve.
Headquarters@HQNewsNow

Melania: The future of AI is personified. It will be formed in the shape of humans. Very soon, artificial intelligence will move from our mobile phones to humanoids that deliver utility. They fit well. Imagine a humanoid educator named Plato

1.1K replies · 1.6K reposts · 9.8K likes · 262.9K views
Tryfecta⏸️ retweeted
Vivid.🇮🇱@VividProwess·
Remember the Israeli 🇮🇱 guy with the beer and gun who went viral? He’s back, and he has a powerful message: “Don’t let your spirit fall. Am Yisrael Chai. This is our home, and we will build it stronger and better. We are a people who improve and grow.”
216 replies · 1K reposts · 7.8K likes · 176.5K views
Tryfecta⏸️@tryfectaa·
@Noahpinion We don't have an example of a species that attained immortality. We have plenty that went extinct, though.
0 replies · 0 reposts · 0 likes · 73 views
Tryfecta⏸️@tryfectaa·
@ramez @gmiller In the absence of evidence? What evidence would suffice? Do we all need to be actually dead? Because that wouldn't be helpful.
0 replies · 0 reposts · 2 likes · 41 views
Ramez Naam@ramez·
I think there's a strong selection effect here. In the absence of evidence, I don't believe those CEOs. Particularly when they also say extremely silly things about other sectors (e.g., Dario saying AI will double human lifespan in the next 10 years, or Demis saying we'll cure all disease). In short, I put little weight on their statements compared to evidence. And I haven't seen any compelling evidence.

Prudent regulation to me looks like efforts at the ecosystem level to strengthen resilience. Across areas where we think AI might theoretically cause great harm (cyber, bio-weapons, etc.), it looks like:
- Strengthening monitoring.
- Anticipating attack approaches.
- Scanning for vulnerabilities.
- Patching security holes.
- Building new defenses.
- Building and stockpiling countermeasures.
- Increasing funding for all of the above, in both the private and public sector.
6 replies · 0 reposts · 4 likes · 504 views
Ramez Naam@ramez·
Agree. Strong government controls over AI should concern us more than market competition between AI companies. Even as we acknowledge that market competition between AI companies brings its own risks.
Dean W. Ball@deanwball

Pause AI rhetoric is predicated on the notion that the AI companies are recklessly racing toward dangerous tech and that a government-controlled pause button is therefore necessary, but this seems really hard to reconcile with the fact that government is attempting to destroy an AI company because *the government* is racing toward plausibly dangerous AI uses (Sec. Hegseth has stated in official directives that he wants to deploy AI into critical systems regardless of whether it is aligned, for example) and *the company* is pushing back.

The roles are totally reversed from the logic that Pause AI and frankly other AI safety advocates confidently assumed for years. It is *industry* that is in favor of alignment and at least somewhat measured deployment risks, and government whose actions seem much closer to reckless.

I predicted this for years. I said, in particular, that pauses and bans and licensing regimes gave government a dangerously high degree of control over AI, and that the incentives of government are much more dangerous than those of private industry with competitive market incentives. I believe the events of the last month are good evidence in favor of my view.

At this point if you are an AI safety advocate whose policy proposals do not wrestle seriously with the brutal political economic reality of the state and AI, I don't take you seriously.

It gives me no pleasure to have been right about this, by the way. The state has an incredibly strong structural incentive to centralize power using AI, and we are, all of us, not so empowered to stop it. I am quite concerned about this.

7 replies · 3 reposts · 24 likes · 23.1K views
Tryfecta⏸️ retweeted
Will Fithian@wfithian·
Dean has it backwards here. The more worried we are that current govts will misuse powerful AI for authoritarian surveillance and control, the more we should want to prevent or defer its development. It'll be harder to stop govts from using it after it's built and deployed.
Dean W. Ball@deanwball

[Quoted tweet by @deanwball; full text quoted earlier in this thread.]

4 replies · 7 reposts · 67 likes · 3K views
Tryfecta⏸️ retweeted
Maxime Fournes⏸️@FournesMaxime·
Dean is misrepresenting our position. We are not asking for a "government-controlled pause button." We never have. Anyone who's read our proposal knows this. We are asking for an international governing body with democratic oversight, precisely because we agree that no single actor, government or company, should be in charge of advanced AI. So yes, the US government is being reckless. We have been saying this. This is exactly why we advocate for an international agency, not national control. Dean is arguing against a position we don't hold.

On industry self-regulation: I spent 12 years there, my last role leading a research team building language and vision models. Competitive pressure in this industry pushes towards speed, not safety. Always has. The people running these labs confirm this openly. Altman, Amodei, and Hassabis have all said they feel trapped in a race they can't exit alone. Amodei puts the probability of extinction at 10-25% and keeps building. If that's Dean's idea of a functioning market, we have very different definitions.

The reality Dean refuses to engage with is that anyone building unaligned superintelligence, whether a company or a government, is creating a catastrophic risk for everyone. And by catastrophic, I do not mean "poses interesting governance challenges"; I mean likely game over for civilization, and we should talk about it like it is.

Of course it is a standard tactic for opponents to distort our message and then argue against the distorted version. Standard lobbying playbook. But I'll ask in good faith: @deanwball, what is your alternative? What is your plan that gives us more than a 10% chance of avoiding civilizational catastrophe, whether from loss of control, extinction, or totalitarian capture? Because as far as I can tell, an internationally enforced pause is the only proposal on the table with any chance of working. I am genuinely open to hearing a better one.
Dean W. Ball@deanwball

[Quoted tweet by @deanwball; full text quoted earlier in this thread.]

10 replies · 9 reposts · 59 likes · 6.9K views
Tryfecta⏸️@tryfectaa·
@Noahpinion Will the world be better if more people blindly accepted AI to do whatever and then unwittingly become useless and possibly dead?
0 replies · 0 reposts · 1 like · 11 views
Tryfecta⏸️@tryfectaa·
@Noahpinion Can you please explain why they need to change their messaging? Would it be better if they lied? You keep saying this same thing over and over again but I don't understand your motives here?
1 reply · 0 reposts · 1 like · 63 views