
nothingxs.net @ bsky
@nothingxs
https://t.co/DudFsrrWsO


🚨BREAKING: New body cam footage shows an ICE agent murdering U.S. citizen Ruben Ray Martinez.

In the video, officers can be heard yelling for Martinez to stop and get out of the car as he slowly drives through a chaotic scene filled with police vehicles and agents. Martinez’s car appears to be barely moving… brake lights on… creeping forward through the crowd of officers.

One agent stands directly in front of the car with his gun drawn. Another moves up to the driver’s side window. Martinez slowly turns the vehicle away from the agent standing in front of him. That’s when the agent at the window fires three shots through the driver’s side window into the chest of a 23-year-old U.S. citizen.

Despite the government originally claiming Martinez “ran over” an agent, the body camera footage shows the car barely moving when the shots were fired. After he’s shot, the car rolls to a stop. Agents drag Martinez from the vehicle and throw him face down on the pavement. Instead of immediately providing lifesaving care to the man they just shot… they handcuff his motionless body.

So, let’s break down what happened…

Deadly force is only supposed to be used when there is an immediate threat to life. A car creeping forward, with its brake lights on, is not an imminent deadly threat. Shooting through a window, into a driver who is surrounded by officers, is reckless use of deadly force. Dragging a gunshot victim out of a car, and cuffing them, instead of rendering aid is a violation of basic law enforcement protocol.

And yet, this keeps happening. Federal agents escalate minor situations into deadly ones… fire first… then rewrite the story later.

This time, the body cam told the truth. A 23-year-old U.S. citizen is dead. Shot three times through his window by an ICE agent who was supposed to be enforcing immigration law… not executing Americans in traffic. And somehow, they expect the public to believe this is “public safety.”


Almost three months now, still looking for work! Experience listed below. I’ve worked for Riot Games and for FGC events such as CB, CEO, and Evo. I’ve even worked as an emergency call-taker and dispatcher, so you know I can handle a fast-paced, high-stress environment! Please RT!

🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America. Amazon. Anthropic. Google. Meta. Microsoft. OpenAI.

All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies: not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States.

The results are worse than you think. Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For Google, Meta, Microsoft, and Amazon (companies that also run search engines, social media platforms, e-commerce sites, and cloud services), your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that even the researchers (people whose job is reading these documents) found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.



Elon Musk (America PAC) will receive a letter of reprimand from the Georgia State Election Board for sending in prefilled absentee ballot applications in 2024, which is against Georgia law. What the heck? 1/

BREAKING: ICE took Lud from Baltimore to another detention center in Louisiana without notifying his lawyers. HOW YOU CAN HELP: - donate to fund the lawyers: gofund.me/fc2fc8f35 - keep boosting in the FGC - we are seeking connections to relevant elected officials #FREELUD



BREAKING: The US government now may owe US companies $175 billion in tariff refunds, per CNN

Very difficult update to hear, but please do not give up, and keep at it. We have to get him home. #FreeLudovic

Looks like Lud was moved to a detention center in Louisiana despite the court order that was filed. We weren’t even alerted beforehand. They weren’t supposed to do that, and it is against the law. The lawyer is calling them in the morning and demanding they release him first thing.
