caroline ᵍᵐ • ᴗ •

10.6K posts


@caroline

twittering away since july 14, 2006. doing what i #love to do ☆\(*^▽^*)/☆

nature · Joined July 2006
1.4K Following · 203.4K Followers
caroline ᵍᵐ • ᴗ • retweeted
Hedgie @HedgieMarkets
🦔 Researchers at Aikido Security found 151 malicious packages uploaded to GitHub between March 3 and March 9. The packages use Unicode characters that are invisible to humans but execute as code when run. Manual code reviews and static analysis tools see only whitespace or blank lines. The surrounding code looks legitimate, with realistic documentation tweaks, version bumps, and bug fixes. Researchers suspect the attackers are using LLMs to generate convincing packages at scale. Similar packages have been found on NPM and the VS Code marketplace.

My Take: Supply chain attacks on code repositories aren't new, but this technique is nasty. The malicious payload is encoded in Unicode characters that don't render in any editor, terminal, or review interface. You can stare at the code all day and see nothing. A small decoder extracts the hidden bytes at runtime and passes them to eval(). Unless you're specifically looking for invisible Unicode ranges, you won't catch it.

The researchers think AI is writing these packages because 151 bespoke code changes across different projects in a week isn't something a human team could do manually. If that's right, we're watching AI-generated attacks hit AI-assisted development workflows. The vibe coders pulling packages without reading them are the target, and there are a lot of them.

The best defense is still carefully inspecting dependencies before adding them, but that's exactly the step people skip when they're moving fast. I don't really know how any of this gets better. The attackers are scaling faster than the defenses. Hedgie🤗 arstechnica.com/security/2026/…
127 replies · 814 reposts · 3.1K likes · 708.6K views
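The "invisible Unicode" check the tweet describes can be approximated with a small scanner. This is an illustrative sketch, not the researchers' actual tooling: the character set below is a partial, hand-picked list, and a real scanner would cover more ranges.

```python
import unicodedata

# Characters that render as nothing in most editors but survive in source
# files. Illustrative only, not exhaustive: the category check below also
# flags the broader "Cf" (format) class that most of these belong to.
INVISIBLES = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    "\u3164",  # HANGUL FILLER
}

def find_invisible(source: str):
    """Return (line, column, character name) for each suspicious character."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in INVISIBLES or unicodedata.category(ch) == "Cf":
                name = unicodedata.name(ch, f"U+{ord(ch):04X}")
                hits.append((lineno, col, name))
    return hits

# A line that looks clean in an editor but carries two hidden characters.
sample = "x = 1\u200b\u200d  # looks like an ordinary line"
for lineno, col, name in find_invisible(sample):
    print(f"line {lineno}, col {col}: {name}")
```

Running this over a dependency's source tree before installing it would surface the hidden-payload pattern described above, though it says nothing about what the decoded bytes do once passed to eval().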
caroline ᵍᵐ • ᴗ • retweeted
Science girl @sciencegirl
These Japanese handmade noodles take up to 2 years to craft, with each strand kneaded, stretched, and dried for ultimate thinness and flavor.
169 replies · 887 reposts · 6.3K likes · 526.1K views
David Nage🎯 @DavidNage
Family offices are invisible. But they've shaped the technology you use every day.

Li Ka-shing's family office (Horizons Ventures) was an early investor in Facebook, Spotify, Zoom, Skype, Siri, and DeepMind — before any of them were household names. His $36M in Zoom became an $11B stake.

The Pritzker family office (Tao Capital) was an early backer of Tesla, Uber, and SpaceX.

Jeff Bezos invested $250K in Google in 1998 through what became Bezos Expeditions. That stake is now worth over $3B.

Suhail Rizvi's family office quietly acquired 15.6% of Twitter before its IPO — a stake worth $3.8B on day one. He also held pre-IPO positions in Facebook, Square, and Snapchat. Most people have never heard his name.

The Newhouse family office owns Reddit. Kapor Capital — built on the Lotus 1-2-3 fortune — was in Uber's angel round.

These aren't venture capital firms. They're family offices. Patient capital. No fund lifecycle. No LP pressure. Multi-generational time horizons.

One-third of all capital invested in startups worldwide now comes from family offices (PwC, 2022). And yet most people — including many in finance — don't know what a family office is or what they do.

F0256. March 30th. West Palm Beach.
30 replies · 105 reposts · 992 likes · 192.5K views
caroline ᵍᵐ • ᴗ • retweeted
Guri Singh @heygurisingh
🚨 Stanford just analyzed the privacy policies of the six biggest AI companies in America: Amazon. Anthropic. Google. Meta. Microsoft. OpenAI. All six use your conversations to train their models. By default. Without meaningfully asking. Here's what the paper actually found.

The researchers at Stanford HAI examined 28 privacy documents across these six companies: not just the main privacy policy, but every linked subpolicy, FAQ, and guidance page accessible from the chat interfaces. They evaluated all of them against the California Consumer Privacy Act, the most comprehensive privacy law in the United States.

The results are worse than you think. Every single company collects your chat data and feeds it back into model training by default. Some retain your conversations indefinitely. There is no expiration. No auto-delete. Your data just sits there, forever, feeding future versions of the model. Some of these companies let human employees read your chat transcripts as part of the training process. Not anonymized summaries. Your actual conversations.

But here's where it gets genuinely dangerous. For companies like Google, Meta, Microsoft, and Amazon (companies that also run search engines, social media platforms, e-commerce sites, and cloud services) your AI conversations don't stay inside the chatbot. They get merged with everything else those companies already know about you. Your search history. Your purchase data. Your social media activity. Your uploaded files.

The researchers describe a realistic scenario that should make you pause: You ask an AI chatbot for heart-healthy dinner recipes. The model infers you may have a cardiovascular condition. That classification flows through the company's broader ecosystem. You start seeing ads for medications. The information reaches insurance databases. The effects compound over time. You shared a dinner question. The system built a health profile.

It gets worse when you look at children's data. Four of the six companies appear to include children's chat data in their model training. Google announced it would train on teenager data with opt-in consent. Anthropic says it doesn't collect children's data but doesn't verify ages. Microsoft says it collects data from users under 18 but claims not to use it for training. Children cannot legally consent to this. Most parents don't know it's happening.

The opt-out mechanisms are a maze. Some companies offer opt-outs. Some don't. The ones that do bury the option deep inside settings pages that most users will never find. The privacy policies themselves are written in dense legal language that researchers (people whose job is reading these documents) found difficult to interpret.

And here's the structural problem nobody is addressing. There is no comprehensive federal privacy law in the United States governing how AI companies handle chat data. The patchwork of state laws leaves massive gaps. The researchers specifically call for three things: mandatory federal regulation, affirmative opt-in (not opt-out) for model training, and automatic filtering of personal information from chat inputs before they ever reach a training pipeline. None of those exist today.

The uncomfortable truth is this: every time you type something into ChatGPT, Gemini, Claude, Meta AI, Copilot, or Alexa, you are contributing to a training dataset. Your medical questions. Your relationship problems. Your financial details. Your uploaded documents. You are not the customer. You are the curriculum. And the companies doing this have made it as hard as possible for you to stop.
[image]
329 replies · 3.9K reposts · 8.6K likes · 1.7M views
caroline ᵍᵐ • ᴗ • retweeted
jasmine sun @jasminewsun
200+ Google and OpenAI staff have signed this petition to share Anthropic's red lines for the Pentagon's use of AI. Let's find out if this is a race to the top or the bottom. notdivided.org
[image]
125 replies · 1K reposts · 5.4K likes · 386.8K views
caroline ᵍᵐ • ᴗ • retweeted
Anthropic @AnthropicAI
A statement from Anthropic CEO Dario Amodei on our discussions with the Department of War. anthropic.com/news/statement…
4.3K replies · 9.5K reposts · 56.2K likes · 16.4M views
caroline ᵍᵐ • ᴗ • retweeted
Anthropic @AnthropicAI
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
7.3K replies · 6.3K reposts · 55K likes · 33.6M views
johnnymase @johnnymaseX
lol one of my favorite pillow setups was at the Four Seasons in Maui one year. I asked the staff and they had no clue. I debated stealing them in my luggage before I checked out but decided against it cuz I bought too much Kona coffee to bring home and would need another suitcase to steal it. It was a pure down pillow but I haven’t been able to re-create it again with all the down pillows I’ve tried.
1 reply · 0 reposts · 1 like · 64 views
johnnymase @johnnymaseX
I have a sickness. I’m literally on a lifetime search to find the perfect pillow. Here is my new test subject:
[image]
5 replies · 1 repost · 8 likes · 408 views
caroline ᵍᵐ • ᴗ • retweeted
Dario Amodei @DarioAmodei
The Adolescence of Technology: an essay on the risks posed by powerful AI to national security, economies and democracy—and how we can defend against them: darioamodei.com/essay/the-adol…
814 replies · 2.7K reposts · 15.3K likes · 6.1M views
caroline ᵍᵐ • ᴗ • retweeted
Ethan Mollick @emollick
Had Claude Code build a little plugin that visualizes the work Claude Code is doing as agents working in an office, with agents doing work and passing information to each other. New subagents are hired, they acquire skills, and they turn in completed work. Fun start.
289 replies · 376 reposts · 6.5K likes · 464.9K views
Alex Honnold @AlexHonnold
We’ve been trying to take the girls on a family hike at least once a week - this week the clouds were insane over Red Rock. June said “there are mountains in the sky!!” This time we didn’t wind up hiking that far - they were more interested in playing in some puddles and looking for burros. But it’s still nice to spend time outside…
[3 images]
1 reply · 7 reposts · 88 likes · 18.1K views
caroline ᵍᵐ • ᴗ • retweeted
Anthropic @AnthropicAI
We’ve focused on improving Claude’s skills in defensive cybersecurity. The results of this are visible in Claude Sonnet 4.5, which is comparable or superior to Opus 4.1 in cybersecurity tasks—yet both faster and cheaper. Read more: anthropic.com/research/build…
10 replies · 24 reposts · 282 likes · 51.8K views
caroline ᵍᵐ • ᴗ • retweeted
alterego @alterego_io
Introducing Alterego: the world’s first near-telepathic wearable that enables silent communication at the speed of thought. Alterego makes AI an extension of the human mind. We’ve made several breakthroughs since our work started at MIT. We’re announcing those today.
895 replies · 1.7K reposts · 11.2K likes · 2.6M views
Jane Manchun Wong @wongmjane
Leaving this in my LinkedIn drafts…
[image]
13 replies · 6 reposts · 232 likes · 12K views
rue🌿 @Ruesavatar
[image]
8 replies · 2 reposts · 176 likes · 2.9K views