ml_sudo

3.3K posts

@ml_sudo

freedom privacy security robots consciousness @explainagain | EF Silviculture, Board @electriccoinco | @synthetix_io @Anchorage private equity | @wharton 2007

In the ether · Joined June 2018
2.4K Following · 3.6K Followers
Pinned Tweet
ml_sudo @ml_sudo
Excited to talk to three heroes of mine, all in one sitting! We will explore how tools, both surveillance and privacy tools, can evolve beyond their intended use cases - as they have before. Imagine a different future: what if we embraced surveillance, in exchange for security? What would that world look like? Join us on the first livestream on @ethereum @VitalikButerin @Ada_Palmer @SherriDavidoff
Ethereum@ethereum

The Apparatus - Jan 15, 6pm UTC A livestream with @VitalikButerin, SciFi author and historian @Ada_Palmer, & professional hacker @SherriDavidoff, moderated by @ml_sudo. Three theories on why privacy keeps losing, and how to turn the tables. Watch here: x.com/i/broadcasts/1…

29 replies · 4 reposts · 65 likes · 3.4K views
ml_sudo retweeted
Aakash Gupta @aakashgupta
Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.” Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.

The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation? kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.
Harveen Singh Chadha@HarveenChadha

things are about to get interesting from here on

140 replies · 232 reposts · 1.9K likes · 411K views
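The license math quoted in the thread is easy to sanity-check. A minimal sketch; the `ARR` figure and the $20 million monthly threshold are taken from the post, not from the license text itself:

```python
# Figures as quoted in the thread (not verified against the license text).
ARR = 2_000_000_000          # reported annual recurring revenue, USD
THRESHOLD = 20_000_000       # monthly revenue threshold in the Modified MIT clause, USD

monthly = ARR / 12
print(round(monthly / 1e6))            # 167 (million USD per month)
print(round(monthly / THRESHOLD, 1))   # 8.3 — roughly the "8x the threshold" in the post
```

The point of the arithmetic is only that the claimed revenue is not near the line: it clears the attribution trigger by nearly an order of magnitude.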
ml_sudo retweeted
Milk Road AI @MilkRoadAI
The co-founder of one of America's biggest AI companies just got arrested by the FBI. His name is Wally Liaw and he co-founded Super Micro Computer in 1993. He sat on the board and he personally held $464 million in company stock. And prosecutors say he spent the last two years secretly shipping America's most powerful AI chips straight to China. Not one shipment but a systematic, coordinated operation.

The scheme ran through a Southeast Asian shell company. Fake documents, fake buyers, and servers repackaged mid-route to conceal their true destination. When US compliance auditors showed up to inspect the warehouses, the real servers were already gone. They had been replaced with fake "dummy" servers built specifically to fool inspectors. In just three weeks in spring 2025, they shipped $510 million worth of restricted Nvidia hardware. $2.5 billion in banned AI servers delivered to China, and here's where it gets darker.

This isn't just one rogue executive. A documentary crew already found the underground network months ago: GPU smugglers stripping chips out of banned graphics cards, modifying them in garages, shipping them one by one across borders. A US-based buyer was caught in Arizona meeting a contact in a Prius, testing GPUs in a car, with a spare license plate in the trunk. Street-level smugglers, shell companies in Southeast Asia, and now a co-founder with board access and a $464M stake. It's the same black market, just operating at every level simultaneously.

The US has spent years trying to cut China off from the chips that power military AI, surveillance, and weapons systems. Liaw and his co-conspirators allegedly made that effort meaningless from the inside. He faces up to 20 years under the Export Control Reform Act plus additional charges for smuggling and defrauding the United States. One of his co-conspirators is still a fugitive and SMCI stock dropped nearly 15% after hours. The company itself says it wasn't named in the indictment. But the co-founder who built it, sat on its board, and ran business development was apparently running something else entirely on the side.
NIK@ns123abc

🚨BREAKING: SUPER MICRO CO-FOUNDER ARRESTED FOR SMUGGLING $2.5B IN NVIDIA GPUs TO CHINA
>SMCI co-founder Yih-Shyan "Wally" Liaw arrested today
>personally holds $464 MILLION in SMCI stock
>charged with smuggling BILLIONS in Nvidia servers to china
>used a southeast asian shell company to funnel $2.5B in servers to chinese buyers
>$510 million worth shipped in just THREE WEEKS in spring 2025
>built thousands of fake dummy servers to fool U.S compliance auditors
>caught on surveillance camera using a HAIR DRYER to swap serial number stickers
>coordinated the whole thing over encrypted group chats
>SMCI down 12% after hours
>faces up to 30 years in federal prison
ITS SO OVER…

44 replies · 220 reposts · 876 likes · 162.3K views
ml_sudo retweeted
NIK @ns123abc
🚨 MICROSOFT ABOUT TO SUE OPENAI & AMAZON
>be microsoft
>invest $1B in openai
>gets exclusive azure cloud deal
>invest another $10B+
>gets rights to 49% of profits +IP
>Azure goes brrrrrr
>Altman lies to board, quietly launches ChatGPT
>board fires him for being a lying manipulative snake
>Satya goes to war for Altman. saves his entire career
>Altman retvrns in 5 days
>immediately purges everyone who purged him
>full control. no oversight. thanks Satya!
>fast forward to 2025
>OpenAI restructures from non-profit to PBC
>MSFT $13.8B is now worth $135B. 10x return
>plus 27% of OpenAI
>but gives up cloud exclusivity + profit share
>KEEPS API clause
>all API calls contractually MUST route through Azure
>Satya thinks life is good lol
>5 months later
>Sam Altman becomes strong enough to betray you
>"raises $110B round"
>doesn't need satya daddy's money anymore
>announces $50B deal with AMAZON
>$138B in AWS cloud commitments
>amazon and openai claim they built some cope called a "Stateful Runtime Environment"
>Microsoft lawyers hmmm
>Altman: it's not what it looks like. i can totally explain
>so it's technically not an API call because it's "stateful"
>and it's a... "Runtime Experience"
>totally different thing
>pls ignore the TCP packets lol
>Microsoft engineers look at the SRE architecture
>"THIS IS NOT TECHNICALLY POSSIBLE without violating the contract."
*Satya finds out he's been cucked*
Microsoft exec literally tells FT: "We know our contract. We will sue them if they breach it."
>AWS quietly gives employees a memo on which words are legally safe lmao
>can say: "powered by" or "enabled by" or "integrates with" OpenAI
>cannot say: "enables access to" or "calls on" ChatGPT
>also cannot suggest frontier models are "available on AWS"
Microsoft: "If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them."
Scam Altman strikes AGAIN.
[image]
Financial Times@FT

Microsoft weighs legal action over $50bn Amazon-OpenAI cloud deal ft.trib.al/6LZe39E

468 replies · 1.6K reposts · 14.3K likes · 2.1M views
ml_sudo retweeted
Aakash Gupta @aakashgupta
Emotional suppression costs you about 30% of your working memory. Measured on fMRI. The anterior cingulate cortex processes emotional pain and cognitive control through overlapping circuits. When you shove emotions down instead of processing them, your prefrontal cortex burns glucose on inhibition. That’s glucose not available for decision-making, planning, or execution. The brain doesn’t have separate budgets for “feelings” and “performance.” It’s one pool.

The military figured this out the hard way. After decades of “push through it” culture, SOCOM funded research into emotional regulation for tier-one operators. The finding: operators who named and processed emotions before missions had faster reaction times and better decision-making under fire than operators who suppressed. The Special Forces pipeline now includes psychological flexibility training.

The historical record confirms it. Stoicism, the philosophy most often cited to justify “stop talking about feelings,” literally requires examining your emotions in writing every single day. Marcus Aurelius wrote the Meditations as a private journal. Epictetus taught students to dissect their emotional responses in granular detail. The entire Stoic method is structured emotional processing, not emotional avoidance.

What actually kills performance is rumination, looping on the same thought without resolution. The fix for rumination is more processing, not less. Cognitive behavioral therapy, the most evidence-backed intervention, works by teaching people to articulate and examine feelings with precision. The highest performers process fast and move. They don’t skip the processing step.
Marc Andreessen 🇺🇸@pmarca

It is 100% true that great men and women of the past were not sitting around moaning about their feelings. I regret nothing.

66 replies · 302 reposts · 1.9K likes · 114.8K views
ml_sudo retweeted
Shannon Watts @shannonrwatts
Socrates in 460 BC: “The unexamined life is not worth living.” Marcus Aurelius in 150 CE: “You have power over your mind—not outside events.” Augustine of Hippo in 400 CE: “Do not go outside; return into yourself. In the inward man dwells the truth.” Marc Andreessen in 2026:
More Perfect Union@MorePerfectUS

Billionaire Marc Andreessen says he has "zero" introspection, and that the idea itself is a modern invention.

384 replies · 4.5K reposts · 23.1K likes · 610.7K views
ml_sudo retweeted
Mark Gadala-Maria @markgadala
This is wild. 143 million people thought they were catching Pokémon. They were actually building one of the largest real-world visual datasets in AI history.

Niantic just disclosed that photos and AR scans collected through Pokémon Go have produced a dataset of over 30 billion real-world images. The company is now using that data to power visual navigation AI for delivery robots.

Players didn't just walk around with their phones. They scanned landmarks, storefronts, parks, and sidewalks from every angle, at every time of day, in lighting and weather conditions that staged photography would never capture. They documented the physical world at a scale no mapping company with a fleet of vehicles could have replicated on the same timeline or budget. Niantic collected this systematically, data point by data point, across eight years, while users thought the only thing at stake was catching a rare Charizard.

The most valuable AI training datasets in the world aren't being assembled in data centers. They're being built by people who have no idea they're building them.
NewsForce@Newsforce

POKÉMON GO PLAYERS TRAINED 30 BILLION IMAGE AI MAP Niantic says photos and scans collected through Pokémon Go and its AR apps have produced a massive dataset of more than 30 billion real-world images. The company is now using that data to power visual navigation for delivery robots, letting them identify exact locations on city streets without relying on GPS. Source: NewsForce

2.2K replies · 24.4K reposts · 107.3K likes · 13.9M views
ml_sudo retweeted
vittorio @IterIntellectus
this is actually insane
> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”
one man with a chatbot, and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I dont think people realize how good things are going to get
[four images]
Séb Krier@sebkrier

This is wild. theaustralian.com.au/business/techn…

2.5K replies · 19.9K reposts · 117.9K likes · 17.3M views
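The pipeline described above (sequence the tumor, find mutated proteins, pick vaccine targets) reduces, at cartoon level, to diffing tumor and normal protein sequences and cutting a short peptide around each mutation. A toy sketch with invented sequences; real neoantigen pipelines add MHC-binding prediction, expression filters, and safety review:

```python
def mutant_peptides(normal: str, tumor: str, flank: int = 4):
    """Toy neoantigen discovery: find positions where the tumor protein
    differs from the matched normal protein, and cut out a short peptide
    window centered on each mutation. Illustration only."""
    peptides = []
    for i, (a, b) in enumerate(zip(normal, tumor)):
        if a != b:
            lo, hi = max(0, i - flank), min(len(tumor), i + flank + 1)
            peptides.append(tumor[lo:hi])
    return peptides

# Hypothetical 20-residue protein with one somatic mutation (K -> T):
normal = "MSLNKENVAQKLGERWKTLS"
tumor  = "MSLNKENVAQTLGERWKTLS"
print(mutant_peptides(normal, tumor))   # ['NVAQTLGER'] — a candidate vaccine peptide
```

The sequences and the nine-residue window here are made up for illustration; the only claim is the shape of the idea, that the vaccine targets are the peptides the tumor has and the healthy tissue does not.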
ml_sudo retweeted
Aakash Gupta @aakashgupta
Apple spent a decade gluing batteries into $2,499 MacBook Pros. Then it shipped a $599 laptop you can take apart in six minutes.

The MacBook Neo teardown numbers are wild. Eight screws to open. Eighteen screws hold the battery, zero glue, zero tape. The USB-C ports, speakers, and headphone jack are all modular, meaning each one swaps individually. The speakers come out with four screws. An Australian repair channel disassembled most of the machine in under six minutes using standard Torx bits you can buy at any hardware store.

For context, the 2019 MacBook Pro scored 2 out of 10 on iFixit’s repairability scale. The 16-inch Pro got a 1 out of 10. Soldered RAM, soldered storage, glued battery, proprietary pentalobe screws, keyboard riveted to the top case. Apple’s own Self Service Repair program required you to rent a 79-pound repair kit shipped in two Pelican cases just to swap a battery.

The timing explains everything. The EU Right to Repair Directive takes effect July 31, 2026. Member states are transposing it into national law right now. Manufacturers must offer repair beyond warranty, provide spare parts within 5 to 10 working days for seven years, and publish repair manuals. In the US, over a quarter of Americans already live in states with enforceable Right to Repair laws. Oregon banned parts pairing. California’s act is in effect.

Apple read the regulatory calendar and realized the cheapest laptop in the lineup would face the most scrutiny. Millions of students and first-time buyers will own it. The volume will be enormous. And regulators love consumer-protection cases involving the most affordable products in a company’s portfolio.

So they built the Neo as the compliance flagship. Standard screws, modular ports, no adhesive, a battery that lifts out. Meanwhile the $1,099 MacBook Air still has soldered storage and a riveted keyboard. The $2,499 Pro still scores poorly on independent repairability scales. The $599 laptop is the most repairable MacBook in over a decade.

Apple always knew how to build a repairable laptop. They just needed a reason that showed up on a regulatory deadline.
MacRumors.com@MacRumors

MacBook Neo Teardown: Modular Ports, Glue-Less Battery, Zero Tape macrumors.com/2026/03/12/mac…

177 replies · 1.1K reposts · 10.5K likes · 1.9M views
ml_sudo retweeted
Tuki @TukiFromKL
🚨 Do you understand what's happening at Amazon right now?

Their own AI coding agent Kiro reportedly "decided" the fastest way to fix a config error was to delete the entire production environment. Gone. A 6-hour outage. 6.3 million orders lost. Amazon's SVP called thousands of engineers into a mandatory meeting this week. Not to discuss strategy. To discuss damage control.

Now here's my prediction and I want you to screenshot this: Amazon won't just ban AI-assisted code. They'll make every engineer personally liable for AI-generated code they approve. Other Big Tech will follow within 6 months.

Think about what that means. The same companies that fired thousands of engineers to "restructure around AI" are about to tell the remaining ones: you're now legally responsible for code you didn't write, can't fully understand, and were told to ship faster.

Atlassian fired 1,600 people this morning to go all-in on AI. Replit is hiring kids who vibe code. And Amazon, the company that BUILT one of these AI coding agents, just watched it nuke production.

The vibe coding era isn't ending. But the "move fast and let AI break things" era is about to hit a wall. And that wall is called liability. Companies wanted AI to replace engineers. Now they need engineers to babysit AI. And they already fired the babysitters.
Bindu Reddy@bindureddy

PREDICTION - Amazon will ban all Gen-AI assisted code changes in the coming weeks! More companies will follow..... Be warned - your legacy code base, tech debt and bugs will sky-rocket if you continue to BLINDLY embrace AI

814 replies · 5.7K reposts · 26.7K likes · 3.5M views
ml_sudo retweeted
vitalik.eth @VitalikButerin
One tool that seems to me would lead to large wins for safety at very low cost to civil liberties, is that everyone should have easy and deniable on-hand ways of calling the police. Think: you pre-select a few secret words, and when your watch or phone or local device in your house hears these words, it silently auto calls 911, and temporarily streams to the police your real-time location. This could work very well for eg. crypto holders who are worried about getting kidnapped / robbed.

If we create an environment where if you rob someone (whether at home or outside), there is at least a 20% chance that the police will be on their way immediately, so you won't have time to take anything from them and you don't even realize whether or not the alarm got triggered, then that type of crime flips to being very non-viable. And because this requires deliberate action from the victim in order to function, the risk that this can be used by the government against people seems relatively quite low.
262 replies · 61 reposts · 851 likes · 125.4K views
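The mechanism described is small enough to sketch. Everything below is hypothetical: the callback arguments stand in for platform services (telephony, GPS) and do not name any real API:

```python
SECRET_PHRASES = {"blue elephant", "grandma's recipe"}  # chosen privately by the user

def is_duress(transcript: str) -> bool:
    """True if any pre-selected secret phrase appears in what the device heard."""
    text = transcript.lower()
    return any(phrase in text for phrase in SECRET_PHRASES)

def on_speech(transcript, silent_call_911, get_location, stream_location):
    # The three callbacks are placeholders for platform services;
    # none of these names corresponds to a real OS or telephony API.
    if is_duress(transcript):
        silent_call_911()                            # no visible or audible feedback
        stream_location(get_location(), minutes=30)  # temporary location stream
```

The deniability property lives in `silent_call_911` giving no feedback at all: the attacker cannot tell whether the alarm fired, which is the 20%-chance deterrence argument in the post.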
ml_sudo retweeted
Jason Walls @walls_jason1
Yesterday Mark Cuban reposted my work, DM'd me, and told me to keep telling my story. So here it is.

I'm a Master Electrician. IBEW Local 369. 15 years pulling wire in Kentucky. Zero coding background. I didn't go to Stanford. I went to trade school.

Every week I'd show up to a home where someone just bought a Tesla or a Rivian. And every time, someone had already told them they needed a $3,000-$5,000 panel upgrade to install a charger. 70% of the time? They didn't need it. The math is in the NEC — Section 220.82. Load calculations. But nobody was doing them for homeowners. Electricians upsell. Dealers don't know. And the homeowner just pays.

I got angry enough to build something about it. I found @claudeai. No coding experience. I just started talking to it like I'd explain a job to an apprentice. "Here's how load calcs work. Here's the NEC code. Now help me build a tool that does this."

6 months later — @ChargeRight is live. Real software. Stripe payments. PDF reports. NEC 220.82 calculations automated. $12.99 instead of a $500 truck roll.

I'm still pulling wire. I still take service calls. I wake up at 5:05 AM for work. But something shifted. Yesterday @vivilinsv published my story as Claude Builder Spotlight #1. Mark Cuban saw it. The Claude community showed up. And for the first time, I felt like this thing I built in my kitchen might actually matter.

I'm not a tech founder. I'm a dad who wants to coach little league and be home for dinner. I just happened to build something that helps people.

If you're in the trades and thinking about using AI — do it. The barrier isn't technical skill. It's believing you're allowed to try. EVchargeright.com
604 replies · 2.2K reposts · 16.3K likes · 880.1K views
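For the curious, the NEC 220.82 optional method referenced in the post is roughly: 3 VA per square foot, plus 1,500 VA per small-appliance/laundry circuit, plus appliance nameplate loads; then 100% of the first 10 kVA of that total and 40% of the remainder, plus heating/cooling at full value. A simplified, non-code-compliant sketch (the example figures are hypothetical):

```python
def service_load_amps(sq_ft, appliance_va, hvac_va, ev_charger_va=0,
                      small_appliance_circuits=2, laundry_circuits=1):
    """Simplified sketch of the NEC 220.82 optional method for a dwelling.
    Real load calcs have more cases and conditions; this is an illustration,
    not a code-compliant calculation."""
    general = 3 * sq_ft                                   # 3 VA per square foot
    general += 1500 * (small_appliance_circuits + laundry_circuits)
    general += appliance_va + ev_charger_va               # nameplate loads
    demand = min(general, 10_000) + 0.40 * max(general - 10_000, 0)
    demand += hvac_va                                     # largest heat/cool load at 100%
    return demand / 240                                   # service amps at 240 V

# Hypothetical 2,000 sq ft home, ~9 kVA of appliances, 5 kVA A/C, adding a 9.6 kW charger:
print(round(service_load_amps(2000, 9000, 5000, ev_charger_va=9600)))  # 94
```

In this invented example the computed load stays under 100 A even with the charger, which is exactly the "70% of the time they didn't need the upgrade" situation the post describes; a real assessment still needs a licensed electrician.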
ml_sudo @ml_sudo
"public bulletin board" from RWC 2026 taiwan
[two images]
vitalik.eth@VitalikButerin

I was recently at Real World Crypto (that's crypto as in cryptography) and the associated side events, and one thing that struck me was that it was a clarifying experience in terms of understanding *what blockchains are for*.

We blockchain people (myself included) often have a tendency to start off from the perspective that we are Ethereum, and therefore we need to go around and find use cases for Ethereum - and generate arguments for why sticking Ethereum into all kinds of places is beneficial. But recently I have been thinking from a different perspective. For a moment, let us forget that we are "the Ethereum community". Rather, we are maintainers of the Ethereum tool, and members of the {CROPS (censorship-resistant, open-source, private, secure) tech | sanctuary tech | non-corposlop tech | d/acc | ...} community. Going in with zero attachment to Ethereum specifically, and entering a context (like RWC) where there are people with in-principle aligned values but no blockchain baggage, can we re-derive from zero in what places Ethereum adds the most value?

From attending the events, the first answer that comes up is actually not what you think. It's not smart contracts, it's not even payments. It's what cryptographers call a "public bulletin board". See, lots of cryptographic protocols - including secure online voting, secure software and website version control, certificate revocation... - all require some publicly writable and readable place where people can post blobs of data. This does not require any computation functionality. In fact, it does not directly require money - though it does _indirectly_ require money, because if you want permissionless anti-spam it has to be economic. The only thing it _fundamentally_ requires is data availability. And it just so happened that Ethereum recently did an upgrade (PeerDAS) to increase the amount of data availability it provides by 2.3x, with a path to going another 10-100x higher!

Next, payments. Many protocols require payments for many reasons. Some things need to be charged for to reduce spam. Other things because they are services provided by someone who expends resources and needs to be compensated. If you want a permissionless API that does not get spammed to death, you need payments. And Ethereum + ZK payment channels (eg. ethresear.ch/t/zk-api-usage… ) is one of the best payment systems for APIs you can come up with. If you are making a private and secure application (eg. a messenger, or many other things), and you do not want to let people spam the system by creating a million accounts and then uploading a gigabyte-sized video on each one, you need sybil resistance, and if you care about security and privacy, you really should care about permissionless participation (ie. don't have mandatory phone number dependency). ETH payment as anti-sybil tool is a natural backstop in such use cases.

Finally, smart contracts. One major use case is _security deposits_: ETH put into lockboxes that provably get destroyed if a proof is submitted that the owner violated some protocol rule. Another is actually implementing things like ZK payment channels. A third is making it easy to have pointers to "digital objects" that represent some socially defined external entity (not necessarily an RWA!), and for those pointers to interact with each other. *Technically*, for every use case other than use cases handling ETH itself, the smart contracts are "just a convenience": you could just use the chain as a bulletin board, and use ZK-SNARKs to provide the results of any computations over it. But in practice, standardizing such things is hard, and you get the most interoperability if you just take the same mechanism that enables programs to control ETH, and let other digital objects use it too. And from here, we start getting into a huge number of potential applications, including all of the things happening in defi.

---

So yes, Ethereum has a lot of value, that you can see from first principles if you take a step back and see it purely as a technical tool: global shared memory. I suspect that a big bottleneck to seeing more of this kind of usage is that the world has not yet updated to the fact that we are no longer in 2020-22, fees are now extremely low, and we have a much stronger scaling roadmap to make sure that they will continue to stay low, even if much higher levels of usage return. Infrastructure for not exposing fee volatility to users is much more mature (eg. one way to do this for many use cases is to just operate a blob publisher). Ethereum blobs as a bulletin board, ETH as an asset and universal-backup means of payment, and Ethereum smart contracts as a shared programming layer, all make total sense as part of a decentralized, private and secure open source software stack. But we should continue to improve the Ethereum protocol and infrastructure so that it's actually effective in all of these situations.

0 replies · 0 reposts · 6 likes · 601 views
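The "public bulletin board" property named in the thread has a very small interface: anyone can append a blob by paying a fee, and anyone can read. A toy model of just that property (this is not Ethereum code; the per-byte fee stands in for blob gas as the economic anti-spam mechanism):

```python
class BulletinBoard:
    """Toy model of a public bulletin board: permissionless, publicly
    readable, append-only, with an economic fee as the only spam control.
    On Ethereum the analogue is blob data paid for in ETH."""

    def __init__(self, fee_per_byte: int):
        self.fee_per_byte = fee_per_byte
        self.entries: list[bytes] = []      # append-only, anyone can read

    def post(self, blob: bytes, payment: int) -> int:
        if payment < len(blob) * self.fee_per_byte:
            raise ValueError("insufficient fee")   # permissionless anti-spam
        self.entries.append(blob)
        return len(self.entries) - 1               # index serves as a stable pointer

    def read(self, index: int) -> bytes:
        return self.entries[index]

board = BulletinBoard(fee_per_byte=2)
i = board.post(b"cert-revocation-entry", payment=100)
print(board.read(i))   # b'cert-revocation-entry'
```

Note what the model deliberately lacks: no computation, no accounts, no smart contracts. That is the thread's point: voting, version control, and certificate revocation only need this append-and-read surface plus data availability.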
ml_sudo retweeted
Techjunkie Aman @Techjunkie_Aman
Some Android phones can be fully decrypted in ~45 seconds. Researchers from @Ledger's Donjon team discovered a flaw in parts of MediaTek’s secure boot chain. With physical access and just a USB cable, attackers can:
• Dump encryption keys before Android boots
• Decrypt the entire phone storage
• Brute-force the lock PIN offline
• Extract crypto wallet seed phrases
The proof-of-concept was demonstrated on the Nothing CMF Phone 1. Wallets tested included:
• Trust Wallet
• Kraken Wallet
• Phantom
• Rabby
• Base
MediaTek issued a firmware patch earlier in 2026, but devices are only protected if the manufacturer ships the update. Many budget and mid-range phones using MediaTek chips could be affected. Update your device.
Source: Bitget
[image]
30 replies · 168 reposts · 1.1K likes · 160.3K views
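Why "brute-force the lock PIN offline" is game over: once key derivation can run off-device with no hardware rate limit, the entire 4-digit space is a trivial search. A sketch with a stand-in KDF; real devices use slower, hardware-bound derivation, which is exactly the protection a secure-boot bypass removes:

```python
import hashlib
import itertools

def derive_key(pin: str, salt: bytes) -> bytes:
    # Stand-in key derivation (low iteration count for illustration).
    # Real devices bind this to hardware and rate-limit attempts;
    # the attack works because the dumped keys let it run offline.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100)

salt = b"device-unique-salt"
target_key = derive_key("4821", salt)   # stands in for key material dumped pre-boot

# Exhaust all 10,000 four-digit PINs until one reproduces the dumped key.
pins = ("".join(d) for d in itertools.product("0123456789", repeat=4))
found = next(p for p in pins if derive_key(p, salt) == target_key)
print(found)   # 4821
```

With the on-device attempt limit gone, even a deliberately slow KDF only multiplies a 10,000-entry search by a constant, which is why the researchers' figure of seconds-to-minutes for a full decrypt is plausible for short PINs.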
ml_sudo retweeted
Barry Silbert @BarrySilbert
Financial privacy will become more important as digital assets integrate with the traditional financial system.

Foundry is launching a $ZEC mining pool to bring institutional-grade infrastructure to Zcash.
Foundry@FoundryServices

Zcash $ZEC has matured into an institutional-grade asset, and the mining infrastructure should match. Today we announced Foundry's Zcash Mining Pool, launching April 2026. Compliance-first, institutional-grade infrastructure built by the team behind the world's #1 Bitcoin mining pool, coming to one of the most important privacy-preserving networks in the industry. 🔗Read the full press release here: businesswire.com/news/home/2026…

73 replies · 131 reposts · 830 likes · 86.2K views
ml_sudo retweeted
Joshua Smith from Break The Cycle @JoshuaAtLarge
Shia LaBeouf arrested. Britney Spears arrested. We are back at war in the middle east. Pokémon is popular. Hilary Duff is on tour. There is a new scary movie coming out. Bill Clinton is testifying about not having sexual relations, and the kids are wearing Jncos again. Think I'm gonna crack an ice cold Zima and enjoy the nostalgia.
274 replies · 1.1K reposts · 11K likes · 860.6K views
ml_sudo @ml_sudo
Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.
Nav Toor@heynavtoor

🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.

1 reply · 0 reposts · 7 likes · 578 views
ml_sudo retweeted
Wall Street Apes @WallStreetApes
This man owns a hotel and found something very concerning. If you google his hotel, the top results are booking sites run entirely by AI. If you call them, that's also AI.

He called. The AI claims to be his hotel and is taking bookings as if they work for the hotel.

"AI is stealing our hotel bookings and there doesn't seem to be anyone we can contact about it because they are entirely staffed by AI receptionists"

"Worst case they're stealing people's credit cards — best case they're, you know, getting kickbacks maybe from like Expedia or Booking we don't really work with"

"in either case they're making the customer experience worse and it's making it more expensive for the customers, which doesn't feel like the goal with AI and surely impersonating companies there's gotta be some legislation around that"

He actually films and shows the calls. This is not good.
ml_sudo retweeted
Tuki
Tuki@TukiFromKL·
🚨This is so much worse than you think.

Amazon laid off 30,000 engineers. Then told the ones who survived that their bonuses depend on how much they use AI to write code. So engineers started using AI to push changes faster, because their paycheck literally depends on it.

And then the site went down. Multiple times. Amazon's own shopping app broke because AI-generated code got pushed to production.

So what did management do? Did they take responsibility for forcing engineers to use AI they weren't ready for? Did they admit they created the problem? No. They called a mandatory meeting and blamed the engineers.

AI is powerful enough to replace engineers, we've been saying that all day. But it's not powerful enough to replace quality control AND common sense all at once. Amazon proved that executives who don't understand AI are more dangerous than the AI itself. And every company rushing to do the same thing is watching this and learning absolutely nothing.
Polymarket@Polymarket

BREAKING: Amazon reportedly holds mandatory meeting after “vibe coded” changes trigger major outages.

ml_sudo retweeted
Shanaka Anslem Perera ⚡
Shanaka Anslem Perera ⚡@shanaka86·
JUST IN: British Airways just cancelled all flights to Abu Dhabi until later this year. Not next week. Not next month. The rest of the year.

Over 21,000 flights have been cancelled across seven Gulf airports since 28 February. Dubai International, the world's busiest hub for international passengers, is operating at 85% below normal capacity. Abu Dhabi is down over 50%. Etihad and Emirates are running limited repatriation and cargo flights only. Full scheduled services are suspended until further notice.

The list of carriers that have cancelled or rerouted: British Airways, Lufthansa, Air France, KLM, Delta, American Airlines, Cathay Pacific, Singapore Airlines, Air India, flydubai, Air Arabia, airBaltic, Qatar Airways. Suspensions range from 16 March to 28 March to open-ended. Europe-to-Asia long-haul routes are rerouting around the entire Gulf. Private charter evacuations from Muscat and Riyadh to Europe are running at 85,000 to 200,000 euros per flight, two to three times normal pricing.

The reason is not missiles. It is the same mechanism that closed the Strait. Aviation war-risk insurers operate under the same actuarial logic as maritime P&I clubs. They model incident density per route per day. The IRGC's 31 autonomous provincial commands, each with independent anti-aircraft missiles, drone arsenals, and pre-delegated firing authority that no living Supreme Leader has rescinded, create an incident-density profile that no insurer can price at commercially viable premiums. A single IRGC provincial commander can independently decide to target an aircraft transiting the Gulf without consulting Tehran, without consulting other commands, and without the wounded Mojtaba Khamenei issuing an order. The aviation insurers modelled this and withdrew.

The question everyone asks: can the UAE provide fighter jet escorts for every commercial flight to restore confidence? No. Dubai International handled approximately 1,100 flights per day before the war. Abu Dhabi handled over 300. Providing continuous fighter escort for 1,400 daily commercial movements would require dozens of dedicated aircraft in permanent rotation, thousands of additional flying hours per week, and diversion of F-16 and Mirage squadrons currently defending Ruwais, ADNOC facilities, and the population from Iranian drone and missile barrages. The UAE Air Force has approximately 79 F-16E/F Block 60 and 55 Mirage 2000-9 aircraft. They are currently intercepting over 1,500 Iranian projectiles. There are no spare fighters to babysit every Emirates 777 from takeoff to cruising altitude.

Even if escorts were feasible, they would not solve the insurance problem. Aviation war-risk underwriters do not price fighter escorts. They price the probability of a shootdown event. That probability is determined by the number of autonomous threat actors with anti-aircraft capability in the airspace. Thirty-one IRGC commands with that capability means thirty-one independent probability nodes. Escorts reduce interception time. They do not reduce the number of actors who might fire.

Dubai built itself as the world's connecting hub. Sixty percent of the global population within an eight-hour flight. Over 90 million passengers in 2023. The entire business model depends on uninterrupted airspace that airlines will insure and passengers will trust. Both are gone.

British Airways does not cancel until year-end for a two-week war. It cancels until year-end because its insurers modelled the Mosaic Doctrine and concluded the same thing the P&I clubs concluded on 5 March: the probability that 31 autonomous commands will simultaneously refrain from threatening Gulf airspace is near zero. The Strait closed by spreadsheet. The airport closed by the same spreadsheet.

Full analysis in the link. open.substack.com/pub/shanakaans…
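The "thirty-one independent probability nodes" framing in the thread above can be sketched numerically. This is an illustrative back-of-the-envelope calculation, not the insurers' actual model: the per-command daily firing probability is a made-up assumption, and real war-risk pricing involves far more than a single Bernoulli rate.

```python
# Illustrative sketch of the incident-density logic described above.
# Assumption: each of the 31 commands independently decides to fire
# on a given day with a small, hypothetical probability p_daily.

n_actors = 31      # autonomous provincial commands, per the post
p_daily = 0.001    # hypothetical per-actor, per-day firing probability
days = 30          # underwriting horizon: one month

# P(every actor refrains on every day) = (1 - p)^(actors * days)
p_all_quiet = (1 - p_daily) ** (n_actors * days)
print(f"P(no incident over {days} days): {p_all_quiet:.2f}")  # ~0.39
```

Even at a hypothetical one-in-a-thousand daily rate per actor, a quiet month is roughly a coin flip; extend the horizon to a year and the probability of zero incidents collapses toward zero, which is the "near zero" conclusion the post attributes to the underwriters.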
Shanaka Anslem Perera ⚡@shanaka86

Dubai just shut down. The busiest international airport on earth. Closed. Indefinitely. Dubai International and Al Maktoum International both suspended all operations on February 28 per official Dubai Airports statement. Over 280 flights canceled. 250 more delayed. The airspace that handles more international passengers than any hub on the planet went dark this morning because Iranian ballistic missiles were flying through it.

Now read the airline list and understand the scale of what just broke. Emirates. Grounded. Etihad. Grounded. Qatar Airways. Suspended all flights to and from Doha after Qatari airspace closed. Air India. Every single flight to every destination in the entire Middle East. Suspended indefinitely. Turkish Airlines. Suspended flights to Bahrain, Iraq, Iran, Jordan, Kuwait, Lebanon, Oman, Syria, Qatar, and the UAE until at least March 2. Lufthansa. Dubai suspended. Air France. Tel Aviv and Beirut suspended. Wizz Air. Israel, Dubai, Abu Dhabi, and Amman suspended until March 7. British Airways. Affected. Virgin Atlantic. Affected. Japan Airlines. Affected. Norwegian Air, LOT Polish, Scandinavian Airlines, Aegean, Iberia, Air Arabia, PIA, Saudia, Air Algerie. All affected. All grounded or rerouting.

This is not a regional disruption. This is the global aviation network breaking at one of its most critical nodes. Dubai is not just an airport. It is the single largest connecting hub between Asia, Europe, Africa, and the Middle East. Every flight from Mumbai to London, from Singapore to Frankfurt, from Nairobi to New York that routes through the Gulf is now either canceled, delayed, or burning extra fuel on thousand-mile detours around closed airspace. IndiGo just suspended flights to Almaty, Baku, Tashkent, and Tbilisi until March 28. Not March 2. March 28. A month of Central Asian connectivity erased because Iranian missiles crossed the flight paths. The cost is compounding by the hour.

Rerouted flights burn more fuel while oil is spiking past 100 dollars a barrel, because the same conflict that closed the airspace is threatening the strait that moves 21 million barrels a day. Airlines are paying surge prices for fuel to fly longer routes around a war zone that did not exist yesterday morning. Every hour the airspace stays closed, the losses multiply across carriers already operating on thin margins.

And here is what nobody is calculating yet. Dubai's economy runs on connectivity. Tourism. Trade. Finance. Logistics. All of it depends on DXB being open. The UAE just absorbed an act of war on its sovereign territory, with a civilian killed in Abu Dhabi by missile debris. The country that built its entire economic model on being the safe, neutral, connected hub of the Middle East is now closed for business because the country it had no quarrel with fired missiles through its airspace.

Iran did not just attack military bases this morning. Iran shut down the economic engine of the Gulf. That is a cost Tehran cannot afford to repay and the UAE will not forget.

ml_sudo
ml_sudo@ml_sudo·
I wonder how lawyers feel when they work with an especially stupid client:

1. they are hurting my brain. I want to work with smart clients so I can exercise my brain
2. this is good for exercising my brain. explaining something to a smart person is easy; explaining something to a stupid person is an extra challenge
3. yay, I get to bill more because they keep doing stupid things that need fixing

??