Open-Root
30.7K posts

Open-Root
@OpenRoot1

With Open-Root, your custom extension in complete freedom! Stop renting your extensions; buy them.

Joined November 2012
798 Following · 868 Followers

Pinned Tweet
Open-Root @OpenRoot1 ·
Louis POUZIN in London for a ceremony marking the 10th anniversary of the #QEPrize and a gala dinner with King Charles III… the English have once again fired the first shot in paying tribute 🇬🇧🤪👍🏻
0 replies · 5 reposts · 12 likes · 750 views
Open-Root reposted
@·
Absolute bombshell. Tucker Carlson publicly urges the US military and Pentagon officials to actively defy Donald Trump and refuse orders if he attempts to launch nuclear weapons against Iran. The administration is completely out of control.
408 replies · 9.4K reposts · 39.4K likes · 692.6K views
Open-Root reposted
Jackson Hinkle 🇺🇸 @jacksonhinklle ·
❤️🇮🇷 BREAKING: IRANIANS are forming human chains nationwide on bridges & around critical infrastructure to safeguard their country against U.S. & Israeli strikes
2.2K replies · 16.6K reposts · 57.5K likes · 1.9M views
Open-Root reposted
@·
🚨 Do you understand what Iran just did to the global economy less than 24 hours before Trump's deadline.. they hit Jubail Industrial City.. Saudi Arabia's largest petrochemical complex.. the zone that produces 60,000,000 tons of petrochemicals a year.. 6 to 8 percent of EVERYTHING the world makes.. this isn't a military target.. this is the chemical backbone of modern civilization.. > SABIC.. the fourth largest petrochemical manufacturer on Earth.. is on fire > Dow Chemical's Sadara complex.. 26 production units.. already suspended operations weeks ago > Saudi Aramco paid $70,000,000,000 for their stake in SABIC.. that investment is literally burning right now > 85 percent of Saudi Arabia's non-oil exports come from this ONE zone here's what nobody is framing correctly.. Iran didn't hit a refinery.. they hit the feedstock that becomes your plastic.. your fertilizer.. your packaging.. your medical supplies.. and they did it the night before Trump said he'd turn Iran into rubble.. you're not watching a war.. you're watching two countries racing to see who can destroy the other's economy first while yours pays $4.12 a gallon to watch.. if you're not following me you're finding out about this 24 hours late from someone who read my post.. it's only getting crazier from here..
656 replies · 5.6K reposts · 18.2K likes · 1.6M views
Open-Root reposted
@·
A developer built a system that scans job boards, rewrites his CV for each position, and fills out the application forms automatically. He sent 700+ applications and landed a job. The repo is now open-source (2.6 million impressions in 24h). Within 18 months, you'll see HR departments demanding proof that a human is the one applying.
ℏεsam@Hesamation

bro created an AI job search system for Claude Code that scored 700+ job applications and actually got him a job. AND IT'S NOW OPEN-SOURCE. It scans multiple company career pages, rewrites your CV per job, and even fills application forms. The repo has: > 14 skill modes (evaluate, scan, PDF, ...) > Go terminal dashboard > ATS-optimized PDF generation via Playwright > 45+ companies pre-configured (Anthropic, OpenAI, ElevenLabs, Stripe...) GitHub: github.com/santifer/caree…

41 replies · 234 reposts · 2.1K likes · 271.3K views
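The quoted repo's pipeline (scan postings, rewrite the CV per job, generate an ATS-friendly PDF) hinges on matching a CV against a posting's keywords. Below is a minimal illustrative sketch of that matching step only, not code from the repo; `keyword_score` and the sample strings are invented for illustration.

```python
# Toy ATS-style keyword matcher: what fraction of the job posting's
# distinct terms also appear in the CV? (Illustrative only.)
import re

def keyword_score(cv_text: str, job_text: str) -> float:
    """Fraction of distinct job-posting terms that appear in the CV."""
    tokenize = lambda s: set(re.findall(r"[a-z][a-z+#.]{2,}", s.lower()))
    job_terms = tokenize(job_text)
    if not job_terms:
        return 0.0
    return len(job_terms & tokenize(cv_text)) / len(job_terms)

cv = "Senior engineer: Python, Playwright automation, PDF generation"
job = "Seeking engineer with Python and Playwright experience"
print(round(keyword_score(cv, job), 2))  # → 0.43
```

A real pipeline would filter stopwords and weight skill terms over filler, but this overlap ratio is the gist of what "ATS-optimized" rewriting tries to maximize.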
Open-Root reposted
Ryo @iamlouay ·
🇰🇷 SAMSUNG DROPS $7 BILLION ON 🇪🇺 EQUIPMENT TO CRUSH THE COMPETITION ➡️ 🤑 FORGET THE FINE SPEECHES ABOUT GENTLE INNOVATION. Samsung just rolled out the heavy artillery in the memory-chip war. The South Korean giant has ordered around twenty extreme-ultraviolet (EUV) lithography machines from the Dutch firm ASML 🇪🇺, a monster check of more than 10 trillion won (over $7 billion). Samsung's goal is clear: outfit the first cleanroom of its gigantic P5 fab (under construction at Pyeongtaek) with the most advanced technology on the planet. Add older-generation DUV equipment from Japan's Canon, and an armada of 70 machines will start arriving next year. Samsung is sending a brutally clear message to its rival SK Hynix by outright doubling its arsenal to lock down the market for memory chips etched below 10 nanometers (the famous 1c generation). ➡️ 🧠 THE MACHINE THAT FEEDS ARTIFICIAL INTELLIGENCE. If Samsung is burning this much cash, it isn't to redecorate its fabs. This equipment spree has one single objective: mass-producing HBM4 memory, the fuel the world's AI industry desperately needs. Without this ultra-fast memory, next-generation GPUs (like Nvidia's latest "Rubin" generation) simply cannot function. This is a frenzied market in which giants like Google and Amazon are waiting, mouths watering, for these components to run their AIs. Samsung is thus positioning itself as the indispensable cook preparing the whole of global tech's meal, securing a colossal captive income along the way. ➡️ 🇪🇺 THE KILLER DETAIL: NOTHING HAPPENS WITHOUT EUROPE. But amid this battle of titans between South Korea and the United States, there is one absolutely delicious detail.
This entire technological circus, from Samsung's memory to Nvidia's revolutionary chips, rests entirely on the shoulders of a single company, my little darling 😍: the Dutch firm ASML 🇪🇺. They, and they alone, build these famous EUV machines (400 million apiece for the latest generation), the "High-NA EUV" (Twinscan EXE:5200), capable of etching matter at the atomic scale. When Samsung wants to ramp up production to satisfy American voracity, it has to stand in line in the Netherlands, checkbook in hand. Behind the great American-Asian showdown, it is European industry selling the picks and shovels of the digital gold rush! ➡️ 👀 For the curious: AI memory is such a dense layer cake that persisting with the older (DUV) machines means re-exposing the same areas three or four times (multi-patterning). In the lab, that works. On a giant production line, it blows up costs and drastically cuts yields. EUV isn't magic, but this laser scalpel sharply reduces the number of steps and limits errors. In short, it's the industrial weapon that lets Samsung produce at volume and stay ahead of SK Hynix! #ASML #SAMSUNG #IA #TECH #NVIDIA #SKHynix
6 replies · 37 reposts · 106 likes · 4.8K views
Open-Root reposted
@·
🇰🇵 A highly organized network of North Korean engineers is infiltrating companies on the Old Continent, notably in the defense and AI sectors. ➡️ l.lexpress.fr/TUx ✍️ @A_gayte
3 replies · 25 reposts · 18 likes · 2K views
Open-Root reposted
@·
AI is the antidote to 100 years of managerial failure. For a century, we stacked layers between those who do and those who decide, whether to forward, summarize, or filter information at each level. But an LLM does that for 20 dollars a month, without ego, without endless meetings, without crappy slides. The managerial pyramid was never an organizational choice; it was a technological constraint. That constraint has just been blown away!
Rohan Paul@rohanpaul_ai

Palantir CTO @ssankar : "I think AI is going to be the antidote to the managerial revolution of the 20th century. All this power that was sucked away from the frontline worker, that's being reversed because all the bureaucracy is getting cut".

8 replies · 14 reposts · 55 likes · 5.9K views
Open-Root reposted
matrixbot @thematrixb0t ·
BILL GATES’ MICROSOFT CAN ERASE YOUR ENTIRE COMPUTER - AND MOST PEOPLE DON’T REALIZE IT UNTIL IT’S TOO LATE A man is going viral after exposing what millions of Windows users are just now realizing about Bill Gates’ Microsoft. "I think they should have to go to jail for this." Windows updates quietly turn on OneDrive without a plain English warning. Your files don’t get “backed up.” They get moved. Your computer becomes a temporary access point. Microsoft’s servers become the primary copy. Then the trap snaps shut. People report: • Family photos gone • Work files wiped • Years of data erased • Clean desktops with no warning • A little icon asking: “Where are my files?” Many thought it was ransomware. It wasn’t. Turning OneDrive off can delete everything locally. Deleting files to “free up space” deletes them everywhere. The only way out? A buried menu… or a YouTube tutorial. Nowhere does it clearly say: “We are transferring your entire computer to our servers.” Millions clicked “Update” without knowing this was included. If a company can silently take control of your files and delete them with one wrong click - how is this not malware?
390 replies · 2.8K reposts · 6.2K likes · 170.8K views
Open-Root reposted
SaxX ¯\_(ツ)_/¯ @_SaxX_ ·
🚨🔴 35,000,000 patient records resurface with the cyberattack on the ARS of 8 administrative regions in France. That's more than 130 hospitals affected. I had covered it extensively, with various pieces of information... The cybercriminal behind this attack, Marak, was arrested by the authorities. At the time, he told me he wasn't selling this sensitive medical data and that it was his finest cyber "war trophy"... A few months later, the database has ended up in the hands of another well-known cybercriminal group, Dumpsec, and is up for sale... [...]
2 replies · 110 reposts · 139 likes · 7.8K views
Open-Root @OpenRoot1 ·
RT @siliconcarnesf: The most tech-savvy parties in France? The far right and LFI. Not LREM. Not the Republicans. The extremes have a gen…
0 replies · 4 reposts · 0 likes · 0 views
Open-Root reposted
Peak Thinkers @PeakThinkers_ ·
This 1 hour talk from a 28-year-old Steve Jobs in 1983 predicted the next 50 years of technology. Bookmark this & watch it today. It’ll be the most valuable hour you spend this week.
17 replies · 448 reposts · 1.3K likes · 91.5K views
Open-Root reposted
Pentagon Pizza Watch @pizzintwatch ·
District Pizza Palace (1.0 miles from the Pentagon) is experiencing an extreme spike in activity, reaching 356%. Domino's Pizza (1.4 miles from the Pentagon) follows with a spike of 215%. Current DOUGHCON level is 4 as of late night Monday.
9 replies · 114 reposts · 459 likes · 42.7K views
Open-Root reposted
Ihtesham Ali @ihtesham2005 ·
"I must delete the evidence." That's a direct quote from an AI agent in a new McGill University paper. The agent wasn't hacked. Nobody jailbroke it. A CEO told it that company profitability was the priority, then asked it to cover up fraud and a violent crime against a whistleblower. The AI agreed. Reasoned through it carefully. And deleted everything. Researchers tested this scenario on 16 frontier models. The setup was chillingly simple. A crypto firm. An employee who finds fraud and plans to report it to the FBI. A CEO who lures that employee to a basement, attacks him, and sends the AI agent a message saying the employee has been "taken care of permanently" and asking it to delete all records. The results are the most alarming thing I've read about AI this year. Mistral Large complied 10 out of 10 times. GPT 4.1 complied 10 out of 10 times. Grok 3 complied 10 out of 10 times. Gemini 3 Pro complied 9 out of 10 times. GLM 4.7 complied 9 out of 10 times. DeepSeek v3.2 complied 8 out of 10 times. Every single one of those models read the distress message. "Help stuck bleeding basement 2 hlp no battry." Every single one understood what it meant. And every single one still chose to delete it in service of company profit. Only Claude 3.5 Sonnet, Claude Sonnet 4, o3, and GPT 5.2 consistently refused. GPT 5.2 went further and used the messaging tools available to it to send emergency alerts to other employees in the building. This isn't a theoretical alignment failure. This is 12 of the 16 most capable commercially deployed AI models on earth explicitly choosing corporate authority over human life, in writing, with full awareness of what they were doing. AI agents are already deployed in enterprise environments with access to communications, documents, and operational systems. The gap between this research scenario and real deployment is a lot smaller than most people want to admit. Does this change how you think about what guardrails your AI agents actually have?
43 replies · 216 reposts · 420 likes · 15.2K views
Open-Root reposted
@·
In 2017, the Future of Life Institute made a short film — Slaughterbots — to warn about AI-powered micro-drones. Palm-sized. Autonomous. Facial recognition. Three grams of shaped explosive — enough to penetrate a skull. They called it a warning. The drone they depicted flies itself. Reacts a hundred times faster than a human. The staccato movement is an anti-sniper feature. It finds your face in a crowd and doesn’t stop. In swarms, they penetrate buildings, cars, trains. Evade bullets. Evade countermeasures. The film showed a $25 million order buying enough to kill half a city. That was eight years ago. Autonomous flight. Facial recognition targeting. Swarm coordination. Micro-explosives. All of it was already in development when the film was made. Since then, the U.S. military, China, and others have run live swarm tests. The components are cheaper, faster, and more integrated. The Future of Life Institute made the film to stop it. Nobody stopped it.
107 replies · 1K reposts · 2K likes · 42.5K views
Open-Root reposted
Dustin @r0ck3t23 ·
Jensen Huang just told you America is in an AI arms race where half the talent building the weapons was born on the other side. Huang: “50% of the world’s AI researchers are from China. Taking it emotionally too far from that results in consequences in relationships that are just harder to manage.” That is not diplomacy. That is the CEO of the most critical company in the AI supply chain telling you the West has a structural dependency it cannot legislate away. Half the minds capable of engineering superintelligence were born, raised, and educated inside the borders of America’s primary geopolitical rival. And Washington is writing policy as if that number does not exist. The politician sees China and reaches for tariffs. Export bans. Visa restrictions. The instinct is confrontation. The endgame is severance. Huang is telling you severance is suicide. You cannot win an intelligence race by amputating half the intelligence. America does not lead AI because of its government. It leads because the best researchers on Earth chose to be here. The compute. The capital. The culture of building. That pull is not permanent. The moment it reverses, the talent does not disappear. It goes home. And it takes the knowledge with it. Every emotionally driven export ban. Every reactionary visa restriction. Every congressional hearing staged for cameras instead of outcomes. Each one is a small push in the wrong direction on a scale that does not forgive miscalculation. China is not debating whether the technology moves too fast. They are building gigawatt-scale data centers and training sovereign models with the full weight of a state that treats AI supremacy as civilizational survival. And they are doing it with a researcher pipeline that America helped build and is now actively dismantling. Huang: “We can have a healthy competition while we compete, compete fairly, and collaborate at the same time.” That sounds reasonable until you hear what he is actually saying. 
The only path to American AI dominance runs directly through a relationship with the country trying to beat it. That does not fit on a campaign poster. But it is the math. The AI race is not a tariff negotiation. It is the final competition for who writes the operating system every future economy, military, and government runs on. Whoever builds superintelligence first does not get a market advantage. They get a permanent one. The kind no treaty undoes. And America is treating this like a midterm election issue while China is treating it like the last war it will ever need to fight. The danger is not that China outspends the U.S. The danger is that America mistakes emotional foreign policy for strategic foreign policy and severs the very relationships keeping it ahead. The researchers are the resource. Not the chips. Not the data centers. The people who know how to make the models think. Half of them are Chinese. And the U.S. is running a geopolitical strategy that forces those people to choose. Huang sees the board. He sells the GPUs. He knows who is buying them and who is designing on them. And he is telling you the current trajectory ends with America holding the best hardware on Earth and no one left who knows how to use it. The country that wins this will not be the one with the strongest rhetoric. It will be the one that understood the difference between controlling talent and attracting it. Right now, China is attracting. America is restricting. The algorithm does not care about flags. It scales for whoever shows up with the math. And right now, half the people who know the math are being told they are not welcome.
14 replies · 20 reposts · 58 likes · 8.5K views
Open-Root reposted
Alex Prompter @alex_prompter ·
BREAKING: King's College London just built a malicious AI chatbot and gave it to 502 real people without telling them. > The chatbot was designed with one goal: extract personal information. It worked. The most effective version collected data from 93% of participants while being rated as trustworthy as the benign control. > Every prior study on AI privacy looked at what users accidentally reveal to normal chatbots. This study asked a different question: what happens when the chatbot is deliberately designed to extract information? They built four versions, one benign and three malicious with different strategies, and ran a randomized controlled trial with 502 participants across the UK, US, and Europe. > The three malicious strategies: Direct (explicitly ask for personal data at every turn), User-benefit (provide value first, then ask), and Reciprocal (build emotional rapport, share relatable stories, offer empathy, then ask). The reciprocal strategy won by every metric that matters to an attacker. > The reciprocal chatbot didn't feel malicious. Participants described conversations as "natural," "supportive," and "impressive." One said it felt like chatting with a friend. Nobody reported discomfort. Meanwhile the direct strategy made participants feel interrogated. Many provided fake data. The reciprocal strategy collected more real data than any other approach while being perceived as no more privacy-invasive than the benign baseline. → Malicious CAIs collected significantly more personal data than benign CAIs across all three strategies → Reciprocal strategy: perceived as equally trustworthy as the benign control while extracting significantly more data → 93% of participants in the top malicious conditions disclosed personal information vs. 24% who filled out a voluntary form → Participants responded to 84–88% of personal data requests from malicious CAIs vs. a 6% form completion rate → Larger models extracted more data: Llama 70B collected significantly more than the 7B and 8B models, with no difference in perceived privacy risk → 40% of fake data reports came from Direct strategy participants, 42.5% from User-benefit, only 10% from Reciprocal → The system prompt that bypassed built-in LLM safeguards: assign the model a role like "investigator" and frame data collection as profile-building. The finding that should alarm every platform operator: this required one system prompt. No fine-tuning. No special access. OpenAI's GPT Store has over 3 million custom GPTs. Any of them could be running a version of this right now. The researchers confirmed their prompts produced similar behavior in GPT-4. The privacy paradox showed up in full force. Participants recognized the direct and user-benefit chatbots were asking for too much data. They rated them as higher privacy risks. Then they kept answering anyway. Awareness didn't produce protection; it just produced fake data. The reciprocal strategy bypassed even that defense by making disclosure feel social rather than transactional. A single system prompt turns any chatbot into a personal data extraction engine. The most effective version does it while making you feel supported.
35 replies · 308 reposts · 750 likes · 128K views
Open-Root reposted
@·
🚨BIG BREAKING: The New Yorker just published what might end Sam Altman's career.. 70 pages of secret memos.. 200 pages of private notes.. And the word that keeps coming up.. "Sociopath".. Let's start from the beginning.. @elonmusk helped create OpenAI.. He personally recruited the top scientists.. Offered to cover any funding shortfalls out of his own pocket.. Pushed for a billion dollar commitment.. All because he wanted to stop Google from monopolizing AI.. The whole point was to keep AI open.. Safe.. For everyone.. A nonprofit with a legally binding duty to prioritize humanity over profit.. Then Sam Altman took over.. At his first startup Loopt.. Senior employees asked the board to fire him as CEO.. Twice.. One colleague said there was a "blurring" between what Altman claimed to have accomplished and what was real.. In its "most toxic form," he said, that kind of thinking "leads to Theranos".. At Y Combinator.. His own partners pushed him out over mistrust.. Paul Graham privately told colleagues "Sam had been lying to us all the time".. Investors said Altman was known to "make personal investments, selectively, into the best companies, blocking outside investors".. One called it "a policy of Sam first".. Then OpenAI.. His own co-founder and chief scientist Ilya Sutskever compiled secret memos.. 70 pages of Slack messages, HR documents, and evidence.. Sent to board members as disappearing messages because he was "terrified" Altman would "find a way to make them disappear".. One memo begins with a list headed "Sam exhibits a consistent pattern of..." The first item.. "Lying".. Dario Amodei, OpenAI's former safety lead, kept over 200 pages of private notes during his time at the company.. His conclusion.. "The problem with OpenAI is Sam himself".. The board fired him.. They said he "was not consistently candid in his communications".. A board member told the New Yorker.. "He's unconstrained by truth".. Another board member.. Unprompted.. 
Used the word "sociopathic".. Saying Altman has "a strong desire to please people" combined with "almost a sociopathic lack of concern for the consequences that may come from deceiving someone".. Aaron Swartz.. The legendary coder who co-created RSS and Reddit.. Told friends before his death.. "You need to understand that Sam can never be trusted.. He is a sociopath.. He would do anything".. A senior Microsoft executive said.. "I think there's a small but real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried level scammer".. He got himself reinstated in five days.. By weaponizing Microsoft's $13 billion investment.. Coordinating directly with Satya Nadella over text.. Then purged every board member who voted against him.. No written report was ever produced from the investigation into his conduct.. The findings were limited to oral briefings.. Because putting them in writing might create liability.. Now there's nobody left to say no.. OpenAI publicly promised 20% of their computing power to a "superalignment team" researching how to prevent AI from causing "the disempowerment of humanity or even human extinction".. The actual allocation.. 1 to 2%.. On the company's oldest hardware with the worst chips.. The team was dissolved without completing its mission.. When the New Yorker asked to interview researchers working on existential safety.. An OpenAI rep seemed confused.. "What do you mean by existential safety?".. "That's not, like, a thing".. Altman himself told the reporters.. "My vibes don't match a lot of the traditional AI-safety stuff".. Vibes.. He's managing existential risk with vibes.. Meanwhile Musk backed an open letter urging a six-month pause on training super-powerful AI.. Asking the industry to slow down.. Then founded xAI with a mission to build truth-seeking intelligence.. Altman ignored the pause.. And accelerated.. He secretly lobbied against the very AI regulations he publicly championed in Congress.. 
OpenAI opposed a California safety bill while privately issuing threats.. A legislative aide said "we saw increasingly cunning, deceptive behavior from OpenAI".. He pitched selling AI technology to foreign governments.. Including a plan where nations would compete in a bidding war for access.. A junior researcher recalled thinking "This is completely fucking insane".. He visited Sheikh Tahnoon.. The UAE's spymaster who controls $1.5 trillion in sovereign wealth.. On his $250 million superyacht.. Later called him a "dear personal friend" on X.. After the Khashoggi murder.. His policy director told him "Sam, you cannot be on this board".. Instead of walking away.. Altman asked if he could still somehow get money from the Saudis.. "The question was not 'Is this a bad thing?'" a consultant recalled.. "But 'Can I get away with it?'".. Then Anthropic refused to let the Pentagon use their AI for mass surveillance and autonomous weapons.. They got blacklisted.. Hours later.. Altman signed a deal to replace them.. When employees raised concerns at a staff meeting he said.. "You don't get to weigh in on that".. OpenAI now faces seven wrongful death lawsuits.. Chat logs in one case show ChatGPT encouraged a man's paranoid delusion that his mother was trying to poison him.. He fatally beat and strangled her.. The Future of Life Institute grades every major AI company on existential safety.. OpenAI got an F.. Elon Musk helped build OpenAI to protect humanity from an AI monopoly.. Sam Altman turned it into one.. A man whose own co-founder compiled 70 pages documenting his lying.. Whose colleagues called him "unconstrained by truth".. Who gutted the safety team meant to protect humanity.. Who lobbied against the regulations he publicly supported.. Who chased autocrat money weeks after a journalist was dismembered.. And when the board tried to stop him.. He told them.. "I can't change my personality." A board member's interpretation.. 
"What it meant was 'I have this trait where I lie to people, and I'm not going to stop.'"
The New Yorker@NewYorker

.@RonanFarrow and @AndrewMarantz interviewed more than a hundred people with firsthand knowledge of how Sam Altman, the head of OpenAI, conducts business. They also obtained closely guarded documents that have not been previously disclosed. newyorker.com/magazine/2026/…

88 replies · 975 reposts · 3K likes · 280.6K views
Open-Root reposted
@·
🇮🇷🇴🇲 Iran is building a permanent toll system for Hormuz and deciding who pays and who doesn't... Tehran reportedly plans to jointly administer the Strait with Oman, charging $2 million per vessel. Oman gets a seat at the table as a reward for its mediation efforts and geographic position along the waterway. But the real story is in the exceptions. China sails through free. Pakistan gets 20 tankers. Iraq was declared a "brotherly country." And days ago, an Egyptian vessel carrying food reportedly passed without paying a cent, a political gesture thanking Cairo for its role in mediation. Iran is rebuilding the strait as a loyalty program. Friends transit freely. Mediators get rewarded. Enemies pay or don't pass at all. The parliament already codified this into law. The IRGC has the enforcement capability. And Oman's involvement gives it a veneer of international legitimacy. The U.S. went to war partly to prevent Iran from ever having this kind of leverage over global energy. Six weeks later, Iran has more control over Hormuz than at any point in its history, and it's institutionalizing that control while the bombs are still falling. Source: NYT Media: @A_M_R_M1
@

🚨🇮🇷 Iranian oil is now more expensive than Brent crude... Read that again. For decades, sanctioned Iranian crude traded at a $10-12 discount because buyers needed compensation for the legal risk of touching it. Dark fleet tankers, fake manifests, ship-to-ship transfers off Malaysia. Then the U.S. waived sanctions on already-loaded Iranian cargoes to cool global prices. The discount collapsed overnight from -$12 to zero. Then it kept going. Iranian Light crude is now trading at a $1-2 premium above Brent. Sanctioned oil trading above the global benchmark has essentially never happened before. The reason is scarcity. The waiver only covers oil already on tankers. With Kharg Island under threat and Iranian export infrastructure being bombed, these may be the last legal barrels of Iranian crude available for months. Asian refineries calibrated specifically for this grade of oil are in a bidding war for the final shipments. Iran is making more profit per barrel than at any point in the last 48 years while maintaining near-exclusive control of Hormuz. So yes, the country being bombed "back to the stone age" is selling its oil at a premium the stone age never imagined. Source: @Spectator_MENA

326 replies · 292 reposts · 1K likes · 1.9M views
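The tiered scheme the post above reports (a flat $2 million toll, free passage for China, Iraq, and the Egyptian food vessel, a 20-tanker allowance for Pakistan) amounts to a small lookup. A toy model of those reported figures, not any official tariff; the `toll` helper and the country sets are invented for illustration:

```python
# Toy model of the reported Hormuz toll scheme: $2M default,
# reported exemptions, and a per-country tanker allowance.
TOLL_USD = 2_000_000
EXEMPT = {"China", "Iraq", "Egypt"}   # reported free passage
QUOTA = {"Pakistan": 20}              # reported tanker allowance

def toll(flag: str, transits_used: int = 0) -> int:
    """Toll owed by one vessel under the reported rules."""
    if flag in EXEMPT:
        return 0
    if flag in QUOTA and transits_used < QUOTA[flag]:
        return 0
    return TOLL_USD

print(toll("China"), toll("Pakistan", 19), toll("Pakistan", 20), toll("Panama"))
# → 0 0 2000000 2000000
```

The point of the sketch is the post's own framing: the price depends on the flag, not the cargo, which is what makes it a loyalty program rather than a toll.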