Ali Zand علی زند

537 posts

@azand

Fighting abuse @Google!

Mountain View, CA · Joined June 2010
1.1K Following · 415 Followers
Ali Zand علی زند retweeted
Mark R. Levin @marklevinshow
[tweet media]
Ali Zand علی زند retweeted
Jason Brodsky @JasonMBrodsky
Speaks for itself.
Ali Zand علی زند retweeted
Crémieux @cremieuxrecueil
Yes. Yes! YES!
[tweet media]
Ana Kasparian @AnaKasparian
There’s nothing powerful about scumbags like this co-opting Iranian protests to help Israel install an unpopular monarch to do their bidding. You people are sick and a threat to the demonstrators (along with the entire region). Iranians don’t want Israel involved. Yet they inject themselves to do to Iran what they did to Syria and Libya. I stand with Iran’s people. But I’ll never get played by these Zionist propagandists.
Quoting Bill Ackman @BillAckman:
Powerful
Ali Zand علی زند
@Heccles94 Read this
Quoting Tahmineh Dehbozorgi @DeTahmineh:
The Western liberal media is ignoring the Iranian uprising because explaining it would force an admission it is desperate to avoid: the Iranian people are rebelling against Islam itself, and that fact shatters the moral framework through which these institutions understand the world.

To cover an uprising is not just to show crowds and slogans. It requires answering a basic question: why are people risking death? In Iran, the answer is simple and unavoidable. The people are rising up because the Islamic Republic of Iran has spent decades suffocating every aspect of life — speech, work, family, art, women, and economic survival — under a clerical system that treats liberty as a crime. There is no way to tell that story without confronting the nature of the regime.

Western media refuses to do so because it has fundamentally misunderstood Islam. Or worse, it has chosen not to understand it. Islam, in Western progressive discourse, has been racialized. It is treated not as a belief system or a political ideology, but as a stand-in for race or ethnicity. Criticizing Islam is framed as an attack on "brown people," Arabs, or "the Middle East," as if Islam were a skin color rather than a doctrine. This confusion is rooted in historical illiteracy. Western liberal media routinely collapses entire civilizations into a single stereotype: "all Middle Easterners are Arabs," "all Arabs are Muslim," and "all Muslims are a monolithic identity group oppressed by white European colonizers." Iranians disappear entirely in this framework. Their language, history, and culture — Persian, not Arab; ancient, not colonial; distinct, not interchangeable — are erased. By treating Islam as a racial identity rather than an ideology, Western media strips millions of people of their ability to reject it. Iranian protesters become unintelligible. Their rebellion cannot be processed without breaking the rule that Islam must not be criticized. So instead of listening to Iranians, the media speaks over them — or ignores them entirely.

There is another reason the Iranian uprising is so threatening to Western media: economics. As you know, Iran is not only a religious dictatorship. It is a centrally controlled, state-dominated economy where markets are strangled, private enterprise is criminalized or co-opted, and economic survival depends on proximity to political power. Decades of price controls, subsidies, nationalization, and bureaucratic micromanagement have obliterated the middle class and entrenched corruption as the only functional system. The result is not equality or justice. It is poverty, stagnation, and dependence on the government's dark void of empty promises.

Covering Iran honestly would require acknowledging that these policies are harmful. They have been tried. They have failed. Catastrophically. This is deeply inconvenient for Western media institutions that routinely promote expansive state control, centralized economic planning, and technocratic governance as morally enlightened alternatives to liberal capitalism. Iran demonstrates where such systems lead when insulated from accountability and enforced by ideology. It shows that when the state controls livelihoods, non-conformity becomes existentially dangerous. That lesson cannot be acknowledged without undermining the moral authority of those who advocate similar ideas in softer language.

Western liberal media prefers not to hear this. Acknowledging it would require abandoning the lazy moral categories that dominate modern discourse: oppressor and oppressed, colonizer and colonized, white and non-white. Iranian protesters do not fit. They show that authoritarianism is not a Western invention imposed from outside, but something many societies are actively trying to escape. That is what terrifies Western liberal media. And that is why the Iranian people are being ignored. So the silence continues.

Harry Eccles @Heccles94
A worrying list of people supporting the Iranian "revolution" — it's not an area I know much about, but this list gives me cause for suspicion 👇 Tommy Robinson, Trump, Netanyahu, JK Rowling, Jenrick, Farage 🤔
Ali Zand علی زند retweeted
علی شریفی زارچی @SharifiZarchi
As a university professor from Iran who remained in the country until very recently, I urge world leaders not to remain silent in the face of the Islamic Republic regime’s brutality and crimes against humanity. Unarmed people are being killed with military-grade weapons. Cities are filled with the bodies of protesters. The internet has been shut down for four consecutive days. This must not be tolerated. #DigitalBlackoutIran #IranianRevolution2026
[tweet media]
Ali Zand علی زند retweeted
Ilya Sutskever @ilyasut
if you value intelligence above all other human qualities, you’re gonna have a bad time
Ali Zand علی زند retweeted
nature @Nature
“We’re very excited to see what people do with this” AlphaFold3 is open at last go.nature.com/4fHAfio
Ali Zand علی زند retweeted
Arvind Narayanan @random_walker
In a new AI Snake Oil essay by me and @sayashk, we do a deep dive into AI existential risk probability estimates. We find that these forecasts are just feelings dressed up as numbers, and even the best-run, well-funded, time-intensive forecasting efforts result in a range of probability estimates that spans many orders of magnitude! We are forced to conclude that AI x-risk forecasts are far too unreliable to be useful for policy, and in fact highly misleading. We caution against speculation being laundered through pseudo-quantification. Full essay: aisnakeoil.com/p/ai-existenti… (about 7,000 words). Summary below.

Background
Over a year ago we got deep into the AI x-risk literature. We were skeptical but not dismissive. We wanted to identify valid concerns while rebutting bad arguments on their own terms. We've been especially interested in how policymakers should think about x-risk. Today's essay is the first in a series. We've been circulating private drafts for a while and have incorporated a lot of great feedback. I'm excited that we're finally launching this series of essays today!

Analogy: alien invasion
If the two of us predicted an 80% probability of aliens landing on earth in the next ten years, would you take this possibility seriously? Of course not. You would ask to see our evidence. As obvious as this may seem, it seems to have been forgotten in the AI x-risk debate that probabilities carry no authority by themselves. Probabilities are usually derived from some grounded method, so we have a strong cognitive bias to view quantified risk estimates as more valid than qualitative ones. But it is possible for probabilities to be nothing more than guesses.

The reference class problem
The domains where forecasting has been successful, such as geopolitics, rely on the existence of reasonably good reference classes of past events. A reference class for turmoil in one country is turmoil in another country. Reference classes for AI x-risk are things like … animal extinction. Let's get real. This kind of reference class tells us nothing about the possibility of developing superintelligent AI or losing control over such AI, which are the central sources of uncertainty for AI x-risk forecasting.

Subjective probabilities vary by orders of magnitude
Lacking grounded methods, forecasts are necessarily "subjective probabilities", that is, guesses based on the forecaster's judgment. Unsurprisingly, these vary by orders of magnitude. Consider the Existential Risk Persuasion Tournament (XPT) conducted by the Forecasting Research Institute in late 2022, which we think is the most elaborate and well-executed x-risk forecasting exercise conducted to date. It involved various groups of forecasters, including AI experts and forecasting experts. The 75th percentile AI expert estimate and the 25th percentile forecasting expert estimate differ by at least a factor of 100. All of these estimates are from people who have deep expertise on the topic and participated in a months-long tournament where they tried to persuade each other! If this range of forecasts isn't extreme enough, keep in mind that this whole exercise was conducted by one group at one point in time. We might get different numbers if the tournament were repeated today, if the questions were framed differently, etc.

It's all speculation
What's most telling is to look at the rationales that forecasters provided, which are extensively detailed in the 754-page report. Forecasters aren't using quantitative models, especially when thinking about the likelihood of bad outcomes conditional on developing powerful AI. For the most part, forecasters are engaging in the same kind of speculation that everyday people do when they discuss superintelligent AI. Maybe AI will take over critical systems through superhuman persuasion of system operators. Maybe AI will seek to lower global temperatures because it helps computers run faster, and accidentally wipe out humanity. Or maybe AI will seek resources in space rather than on Earth, so we don't need to be as worried. There's nothing wrong with such speculation. But we should be clear that when it comes to AI x-risk, forecasters aren't drawing on any special knowledge, evidence, or models that make their hunches more credible than yours or ours or anyone else's.

Forecast skill can't be measured
We often hear that forecasting has a great track record and so we should trust it. This makes no sense: why should we trust that someone who is good at forecasting elections or other kinds of events is good at forecasting AI x-risk? Besides the fact that these are completely different kinds of events, there just isn't any real evidence to draw upon for AI x-risk estimation, so being good at finding and weighing evidence is not a skill that helps much here. Besides, the math doesn't work out. We show that if someone is good at forecasting common events but systematically overestimates rare events, we would have to evaluate them on millions of forecasts before this became apparent. Summary of the main argument: none of the three probability estimation methods yields reliable AI x-risk forecasts.

Risk estimates may be systematically inflated
There are many reasons why forecasters might systematically overestimate AI x-risk. The belief that AI can change the world is one of the main motivations for becoming an AI researcher. And once someone enters this community, they are in an environment where that message is constantly reinforced. And if one believes that this technology is terrifyingly powerful, it is perfectly rational to think there is a serious chance that its world-altering effects will be negative rather than positive. And in the AI safety subcommunity, which is a bit insular, the echo chamber can be deafening. Claiming to have a high "p(doom)" seems to have become a way to signal one's identity and commitment to the cause.

So what should governments do about AI x-risk?
Our view isn't that they should do nothing. But they should reject the kind of policies that might seem compelling if we view x-risk as urgent and serious, notably: restricting AI development. As we'll argue in a future essay in this series, not only are such policies unnecessary, they are likely to increase x-risk. Instead, governments should adopt policies that are compatible with a range of possible estimates of AI risk, and are on balance helpful even if the risk is negligible. Fortunately, such policies exist. Governments should also change policymaking processes so that they are more responsive to new evidence. More on all that soon. The full essay is in our newsletter. We plan to publish the rest of the series over the next few weeks. Thank you for reading! aisnakeoil.com/p/ai-existenti…
[tweet media]
Ali Zand علی زند retweeted
Ryan Els @ryanels
Parrot 🦜 Project Manager 🤣
[tweet media]
Ali Zand علی زند retweeted
sysengineer @_sysengineer
[tweet media]