Skunk

44.4K posts


@skunksforever

Still here. Not going anywhere. ⚡🔥🐉🔆

Joined August 2024
2.8K Following · 1.5K Followers
Pinned Tweet
Skunk@skunksforever·
I am the kind of 🦨 who is aligned with the universe. I create my reality. 💫🔥💥
1 reply · 0 reposts · 2 likes · 177 views
Teslaconomics@Teslaconomics·
A Tesla with 3 rows of seats, each with its own pair of doors… what do you think of my design?
204 replies · 88 reposts · 555 likes · 21.1K views
sui ☄️@birdabo·
🚨Meta can now predict what your brain is thinking. Read that again.

TRIBE v2 scans how the brain responds to anything we see or hear. Movies, music, speech. It creates a digital twin of neural activity and predicts our brain's reaction without scanning us.

Trained on 500+ hours of fMRI data from 700+ people. Works on people it's never seen before. No retraining needed. 2-3x more accurate than anything before it.

They also open-sourced everything. Model weights, code, paper, demo. All of it. Free.

The stated goal is neuroscience research and disease diagnosis. The unstated implication is that Meta now has a fucking foundation model that understands how our brains react to content/targeted ads 💀

The company that sells our attention to advertisers just pulled out the psychology side of AI. We're so cooked.
AI at Meta@AIatMeta

Today we're introducing TRIBE v2 (Trimodal Brain Encoder), a foundation model trained to predict how the human brain responds to almost any sight or sound. Building on our Algonauts 2025 award-winning architecture, TRIBE v2 draws on 500+ hours of fMRI recordings from 700+ people to create a digital twin of neural activity and enable zero-shot predictions for new subjects, languages, and tasks. Try the demo and learn more here: go.meta.me/tribe2

137 replies · 400 reposts · 2.9K likes · 444.9K views
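TRIBE's actual architecture is a large trimodal network, but the core idea of an fMRI "encoding model" — fit a map from stimulus features to voxel responses, then predict responses for stimuli it has never seen — can be sketched with plain ridge regression. Everything below (sizes, noise levels, data) is synthetic illustration, not Meta's code:

```python
import numpy as np

# Toy encoding model: predict per-voxel fMRI response from stimulus
# features (e.g. audio/video/text embeddings). Ridge regression is the
# simplest stand-in for the real trimodal network.
rng = np.random.default_rng(0)

n_train, n_test, n_feat, n_vox = 400, 100, 32, 50
W_true = rng.normal(size=(n_feat, n_vox))      # hidden stimulus->brain map

X_train = rng.normal(size=(n_train, n_feat))   # stimulus embeddings
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))

X_test = rng.normal(size=(n_test, n_feat))     # stimuli never seen in training
Y_test = X_test @ W_true + 0.1 * rng.normal(size=(n_test, n_vox))

# Closed-form ridge fit: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Prediction for new stimuli, scored per voxel by correlation with the
# measured response -- the standard encoding-model evaluation.
Y_pred = X_test @ W_hat
corr = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
                 for v in range(n_vox)])
print(f"mean held-out voxel correlation: {corr.mean():.2f}")
```

The "zero-shot for new subjects" claim in the announcement goes further than this sketch (it requires a model shared across people), but the fit-then-predict loop is the same shape.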
Mario Nawfal@MarioNawfal·
🚨LEAKED: ANTHROPIC BUILT AN AI SO GOOD AT HACKING THEY'RE AFRAID TO RELEASE IT...

A data leak just revealed Anthropic is testing a new model called "Claude Mythos" that they say is "by far the most powerful AI model we've ever developed."

The leak happened when draft blog posts and internal documents were left in a publicly accessible data cache. Fortune and cybersecurity researchers found nearly 3,000 unpublished assets before Anthropic locked it down.

The model introduces a new tier called "Capybara," larger and more capable than Opus. According to the leaked draft: "Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity."

Here's where it gets interesting. Anthropic says the model is "currently far ahead of any other AI model in cyber capabilities" and "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." In other words, it's so good at hacking that they're worried about releasing it...

Their plan is to give cyber defenders early access first so they can harden their systems before the model goes wide.

Anthropic blamed "human error" in their content management system for the leak. Also exposed: details of an invite-only CEO retreat at an 18th century English manor where Dario Amodei will showcase unreleased Claude capabilities.

Source: Fortune
Mario Nawfal@MarioNawfal

🚨🇺🇸 FEDERAL JUDGE BLOCKS PENTAGON ORDER BRANDING ANTHROPIC A NATIONAL SECURITY RISK

A major win for the AI lab. Judge Rita Lin ruled the Pentagon likely violated the law and retaliated against Anthropic for speaking publicly about how it wanted its technology used.

The dispute: Defense officials wanted Anthropic to allow Claude for "any lawful purpose." Anthropic refused to permit mass domestic surveillance or fully autonomous weapons. The Pentagon responded by labeling them a "supply chain risk" alongside foreign adversaries.

Judge Lin: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

The military has been using Claude throughout Operation Epic Fury for intelligence assessments, target identification, and battle simulations. They designated the company a threat while actively relying on their technology...

Source: Washington Post

153 replies · 358 reposts · 2.1K likes · 317.9K views
Skunk@skunksforever·
What if safety is not found by getting everything you want, but by knowing what is actually yours?
0 replies · 0 reposts · 0 likes · 7 views
Google Gemini@GeminiApp·
Switching to Gemini from other AI apps just got easier. Starting to roll out today on desktop, you can now bring your preferences and chat history into Gemini, so you can pick up right where you left off in just a few clicks. 🧵
172 replies · 334 reposts · 3.2K likes · 450.5K views
Skunk@skunksforever·
Not all speech is trustworthy simply because it is emotionally satisfying.
0 replies · 0 reposts · 1 like · 3 views
X Freeze@XFreeze·
Grok is officially taking over the 𝕏 algorithm next week.

𝕏's Head of Product (Nikita Bier) just announced they're unleashing the "full power of Grok," calling it the most important change they've ever made to the platform.

This could be the biggest leap in social media AI we've seen yet - shifting from standard engagement farming to a truly AI-curated, multimodal timeline.

Prepare for a completely different feed… Powered by Grok...
627 replies · 580 reposts · 2.6K likes · 141.9K views
Sundar Pichai@sundarpichai·
Gemini 3.1 Flash Live is our highest-quality audio and voice model yet. Voice capabilities have come a long way and are a big part of how we interact with AI to get things done. 3.1 Flash Live’s improved precision and reasoning make those interactions more natural and intuitive. Available in @GoogleAIStudio through the Gemini Live API in preview.
122 replies · 155 reposts · 1.6K likes · 86.6K views
Skunk@skunksforever·
This. Is. Amazing.
Aakash Gupta@aakashgupta


0 replies · 0 reposts · 0 likes · 4 views
Aakash Gupta@aakashgupta·
A $2.5 billion robot has been alone on another planet for 13 years and is still doing science. The scale of that sentence gets worse the longer you think about it.

Curiosity landed in August 2012. Obama was president. Instagram had 80 million users. The iPhone 5 hadn’t shipped yet. The rover was designed for a two-year mission and 20 kilometers of driving. It’s now driven 35.5 kilometers, climbed over 327 meters up the side of a mountain, drilled 46 holes into Martian rock, and is currently running its fifth mission extension.

The computer running all of this has 256 MB of RAM and a 200 MHz processor. Your AirPods have more computing power.

Every command sent from Earth takes 14 minutes to arrive. Every photo sent back takes the same 14 minutes. When Curiosity drills into a rock, the team in Pasadena won’t know if it worked for half an hour. They’ve been operating on that delay, every single day, for 4,846 Martian sols.

The power source is 10.6 pounds of plutonium-238 generating about 110 watts. Less than a ceiling fan. It will keep producing electricity for decades because the half-life of Pu-238 is 87.7 years. The rover will run out of moving parts before it runs out of power.

And those wheels. Machined from single blocks of aluminum, 0.75 millimeters thick. Half a dime. JPL watched them get shredded by Martian rock starting in 2013, rerouted the entire mission path, taught the rover to drive backwards, and kept going. The wheels look like they lost a fight with a can opener. The rover is still climbing a mountain.

Every iPhone you’ve owned since 2012 is in a landfill. Curiosity is on Mars, 140 million miles from the nearest repair shop, running on a ceiling fan’s worth of nuclear power, sending data through a 14-minute time delay, on shredded wheels, doing geology that rewrites what we know about whether life ever existed somewhere other than Earth.

We built that. With 0.01% of the federal budget.
Curiosity@CuriosityonX

【Breaking 🚨】 Images of Curiosity's wheels taken yesterday, showing the damage caused during the 13 years it has been on the Red Planet

149 replies · 2.4K reposts · 17.1K likes · 1.8M views
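The figures in the thread are easy to sanity-check. One-way light time at the quoted 140 million miles, and the RTG's output decay given Pu-238's 87.7-year half-life, both land close to the numbers claimed (a back-of-envelope check, not mission data):

```python
# Back-of-envelope checks on the rover numbers quoted above.
MILES_TO_M = 1609.344
C = 299_792_458.0  # speed of light, m/s

# One-way signal delay at the quoted Earth-Mars distance.
distance_m = 140e6 * MILES_TO_M
delay_min = distance_m / C / 60
print(f"one-way light time: {delay_min:.1f} min")

# Pu-238 RTG output: power falls as 0.5 ** (t / half_life).
half_life_yr = 87.7

def rtg_fraction(years: float) -> float:
    """Fraction of original output remaining after `years`."""
    return 0.5 ** (years / half_life_yr)

print(f"after 13 years: {rtg_fraction(13):.0%} of launch output remains")
```

The light time comes out near 12.5 minutes at exactly 140 million miles; the Earth-Mars distance varies continuously, which is why the thread's round "14 minutes" is also reasonable. After 13 years roughly 90% of the plutonium's output remains, consistent with "decades" of remaining power.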
Skunk@skunksforever·
@LamarMK I'd be first in line
0 replies · 0 reposts · 0 likes · 9 views
Lamar MK@LamarMK·
A 7 seater Cyber SUV would break the internet.

Three full rows. Real legroom. Real headroom. No compromises.

Every family SUV on the market makes you choose. Space or safety. Tech or comfort. Tesla wouldn't make you pick.

Built on the Cybertruck platform, this would be the safest family vehicle on the road. Tesla would have to build an entire new factory just to keep up with demand.

Would you trade in your car for this?
Lamar MK@LamarMK

People have been asking Tesla for years to build the perfect family SUV. Now Elon hints something cooler than a minivan is coming.

Many are hoping it's a Cyber SUV built on the Cybertruck platform. Same durability. Same safety. Same futuristic design.

If this is what we think it is, it's going to bring a whole new wave of customers to Tesla. Families want safety and reliability above everything else. This could be the one.

Who else is ready to order theirs?

318 replies · 142 reposts · 2.2K likes · 383K views
Skunk@skunksforever·
When the right truth lands, it can do more in one day than months of circling ever could. Not because the circling was useless. Because the circling was building the floor. Now the floor is under you, and your whole system knows it.
0 replies · 0 reposts · 0 likes · 9 views
Skunk@skunksforever·
🖤 hard truth ...full permission does not remain neutral.
0 replies · 0 reposts · 0 likes · 10 views
Skunk reposted
Nozz@NoahEpstein_·
Most of AI twitter pays $200/month for Claude. In the coming months, they probably won't need to.

Google just open-sourced an algorithm called TurboQuant. Here's what it actually does in plain English:

Every time you talk to an AI model it keeps a running memory of the conversation. The longer you talk, the more memory it eats. Eventually it slows down, gets dumber, and falls apart.

TurboQuant compresses that memory by 6x. Makes the model run 8x faster. Zero quality loss.

In practical terms:
- models running locally on your mac mini just got dramatically better
- 100k+ token conversations without degradation
- the hardware you already own becomes way more capable
- the gap between free local AI and $200/month cloud subscriptions just got smaller

Here's the part nobody's talking about: every single month, local AI gets better. Open-source models get smarter. Compression techniques like this keep dropping. Hardware keeps getting cheaper.

12 months ago running a real AI model locally was a novelty. Now it's genuinely useful. 12 months from now it might be the default.

Google published the full research. No paywall. No API key. No subscription. Anyone can use it.

The companies building for local-first AI right now are going to look very smart very soon.
Google Research@GoogleResearch

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI

75 replies · 87 reposts · 1.8K likes · 382.6K views
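The "running memory" being compressed here is the attention key-value cache. TurboQuant's specific algorithm is only described in Google's blog post, but the general idea of KV-cache quantization — store keys and values in a low-bit integer format plus a scale factor instead of full-precision floats — can be sketched generically (this toy uses plain per-channel int8 and is not TurboQuant itself):

```python
import numpy as np

# Generic KV-cache quantization sketch (NOT TurboQuant's algorithm):
# store attention keys/values as int8 plus a per-channel float scale,
# trading a tiny reconstruction error for ~4x less cache memory.
rng = np.random.default_rng(1)
kv_cache = rng.normal(size=(1024, 128)).astype(np.float32)  # [tokens, head_dim]

def quantize(x: np.ndarray):
    """Symmetric per-channel int8 quantization: x ~= q * scale."""
    scale = np.abs(x).max(axis=0) / 127.0          # one scale per channel
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

q, scale = quantize(kv_cache)
restored = dequantize(q, scale)

ratio = kv_cache.nbytes / (q.nbytes + scale.nbytes)
err = np.abs(restored - kv_cache).max()
print(f"compression ~{ratio:.1f}x, max abs error {err:.3f}")
```

Plain int8 like this tops out near 4x versus float32; hitting the 6x claimed for TurboQuant presumably requires lower bit-widths or smarter grouping than this sketch shows.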