Bill Mew #Privacy #CyberSecurity #TrustinTech

157.8K posts

@BillMew

TV pundit, influencer & advocate for ethics in tech: the value of #DigitalTransformation #Cloud & #GovTech and importance of #Privacy & #Security. Views my own

UK · Joined June 2009
5.8K Following · 16K Followers
Pinned Tweet
Bill Mew #Privacy #CyberSecurity #TrustinTech
TechTV-Live @TechTVL

#Privacy and #AgeVerification: In most transactions, the question is not who you are. It's what you're allowed to do. See the full #PassW0rd TV special on #ID: Your country wants to know you, on @TechTVL techtv.live/passw0rd-on-id… #identity @BillMew @futureintellige @CyberSecurityRI

0 replies · 4 retweets · 9 likes · 359 views
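The pinned tweet's point, that most transactions need a predicate ("allowed to do X"), not an identity, can be sketched in code. This is a hypothetical toy, not any real age-verification scheme: the issuer key, token format, and attribute names are all illustrative assumptions. A trusted issuer signs a bag of attributes, and the relying party checks only the one predicate it cares about, never learning a name or ID number.

```python
import hmac
import hashlib
import json

# Illustrative only: a shared secret standing in for a real issuer's signing key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_token(attributes: dict) -> dict:
    """Issuer signs a bag of attributes; no name or ID number is included."""
    payload = json.dumps(attributes, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"attributes": attributes, "sig": sig}

def verify_predicate(token: dict, attribute: str) -> bool:
    """Relying party checks the signature, then the single predicate it needs."""
    payload = json.dumps(token["attributes"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    return token["attributes"].get(attribute) is True

# The merchant learns "over_18: yes" and nothing about who the holder is.
token = issue_token({"over_18": True})
print(verify_predicate(token, "over_18"))  # True
```

Real schemes replace the shared secret with issuer public-key signatures (or zero-knowledge proofs) so the verifier cannot mint tokens itself, but the shape of the check is the same: predicate in, yes/no out.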
Andrew Probyn @andrewprobyn
EXCLUSIVE Two Australian sailors were on board the US nuclear-powered submarine that sank an Iranian warship with a torpedo yesterday. 9news.com.au/world/us-israe…
401 replies · 1K retweets · 2.9K likes · 591.9K views
Bill Mew #Privacy #CyberSecurity #TrustinTech retweeted
Hasan Toor @hasantoxr
🚨BREAKING: Stanford found that most big AI companies use your private chats to train their models by default. They analyzed the privacy policies of OpenAI, Google, Meta, Anthropic, Microsoft, and Amazon. The findings are wild.

All 6 companies train on your chat data by default. No real consent. No clear opt-out. No meaningful transparency.

Here's what they actually found: Amazon's privacy policy doesn't even mention AI training; they just quietly include a notice in the chat interface. Meta and Google offer zero clear opt-out routes. OpenAI, Amazon, and Meta retain some chat data indefinitely; your conversations never die. 4 of the 6 companies appear to train on children's chat data. Contract workers reviewing Meta chats could identify specific users by name from the transcripts.

The wildest part? Enterprise users are opted OUT of training by default. Regular users are opted IN. Two-tiered privacy: businesses get protection, you don't.

Anthropic was the last holdout with opt-in training. They switched to opt-out in September 2025. Now all 6 are the same.

The paper calls it "guilt-shaming": OpenAI literally frames data collection as "improve the model for everyone" to psychologically pressure you into compliance.

This is the privacy crisis nobody is talking about. Paper: "User Privacy and Large Language Models", Stanford University, September 2025. Link in first comment.
34 replies · 91 retweets · 171 likes · 16.3K views
Bill Mew #Privacy #CyberSecurity #TrustinTech retweeted
Really American 🇺🇸 @ReallyAmerican1
BREAKING: CNN just played a montage of Trump officials saying America is at “war” while GOP lawmakers try to explain that America is not at war. There is no coherent messaging or strategy.
248 replies · 4.5K retweets · 14K likes · 275.5K views
Bill Mew #Privacy #CyberSecurity #TrustinTech retweeted
StockMarket.News @_Investinq
Oracle just told every AI company on earth the same thing: your models are worthless. Not the technology, the talent, or the billions spent training them, but the data they were trained on.

Larry Ellison, the man who built Oracle into the backbone of global enterprise, just dropped a bombshell. He said ChatGPT, Gemini, Grok, and Llama are all training on the exact same data: the entire public internet, every Wikipedia page, every Reddit thread, and every news article. That means they're all converging, essentially becoming the same product with different logos. Ellison's word for it is commodities.

But here's where it gets dangerous. He says the real gold isn't public data, it's private data: the medical records in hospital systems, the financial data in bank vaults, the supply chain secrets of every Fortune 500. And guess where most of that data already lives. Not Google, Amazon, or Microsoft, but inside Oracle. Oracle databases hold most of the world's high-value private enterprise data.

So Oracle just launched something called AI Database 26ai. It lets the top AI models, ChatGPT, Gemini, Grok, and Llama, reason directly over a company's private data without that data ever leaving the vault. They're using a technique called RAG, Retrieval-Augmented Generation: the AI doesn't train on your data, it searches it in real time.

Think about what that means. A bank could ask AI to analyze every loan it's ever made without exposing a single customer record. A hospital could have AI diagnose patients using its full medical history without violating HIPAA. A defense contractor could let AI reason across classified operations without data leaving a secure environment.

Ellison is betting this is bigger than the training market, bigger than the GPU boom, bigger than the data center buildout. He called it the largest and fastest growing market in history. The numbers back the ambition. Oracle's remaining performance obligations just hit $523 billion. That's contracted revenue not yet delivered, and $300 billion of it comes from OpenAI alone. Cloud revenue hit $8 billion in a single quarter, OCI grew 66 percent, and GPU revenue surged 177 percent.

But here's the part nobody's talking about. If private data becomes the real AI moat, then whoever controls the database controls the future of AI. And that's a level of power that should make everyone uncomfortable.
669 replies · 2K retweets · 7.5K likes · 1.7M views
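The RAG pattern the post describes ("the AI doesn't train on your data, it searches it in real time") can be sketched in a few lines. This is a toy illustration, not Oracle's AI Database 26ai: the bag-of-words scoring, corpus, and prompt template are all assumptions; real systems use learned embeddings and a vector index. The key property is that private records are only pasted into the context window for one query, never into a training set.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy term-frequency vector; production RAG uses learned embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; keep the top k."""
    qv = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # The retrieved records exist only in this one prompt; the model's
    # weights are never updated from them.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical private records standing in for the "data in the vault".
private_records = [
    "Loan 1042: approved, 5.1% rate, repaid on schedule",
    "Loan 1043: declined, insufficient collateral",
    "Patient 77: routine checkup, no findings",
]
print(build_prompt("Which loans were approved?", private_records))
```

The prompt would then be sent to whichever model the enterprise has plugged in; the separation of retrieval (over private data) from generation (by a frozen model) is what lets the data stay inside the vault.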