Spoonamore: On a bike? Get a like.

16.5K posts

Spoonamore: On a bike? Get a like. banner
Spoonamore: On a bike? Get a like.

Spoonamore: On a bike? Get a like.

@Spoonamore

Uncomfortable Patriot. Candidate for Centre County PA GOP Chairman. Tech CEO. Park + Wildland Commissioner. Youth Baseball Coach and Umpire. Cyclist. Sailor.

Central PA Katılım Temmuz 2008
1.4K Takip Edilen10.2K Takipçiler
Spoonamore: On a bike? Get a like. retweetledi
chiefofautism
chiefofautism@chiefofautism·
someone built an OPENSOURCE MILITARY RADAR that tracks multiple targets up to 20km away its called AERIS-10, full github repo schematics, PCB layouts, FPGA code, python GUI, everything under MIT license commercial phased array radar starts at $250,000. military surplus is $10,000-50,000 but its decades old analog junk with no electronic beam steering this does electronic beam steering at 10.5GHz, pulse compression, doppler processing, multi-target tracking on a real time map two versions: 3km range with patch antenna array, 20km range with 32x16 slotted waveguide array and GaN AMPLIFIERS custom frequency synthesizer, 16 front-end chips, FPGA doing all signal processing, GPS and IMU for ACCURATE target coordinates when the platform moves all gerber files included so you can order the PCBs and build it yourself one person built what defense contractors charge a quarter MILLION for and open sourced it
GIF
English
297
2.4K
16K
1.7M
Spoonamore: On a bike? Get a like. retweetledi
Railsplitter Fella 🇺🇦 🇪🇺
The Epstein files, with stomach-churning criminality by Trump, are almost certainly the kompromat Putin has on Trump. The link is John Mark Dougan, a fugitive former cop from Florida. MI6 warned in 2019 he had brought the Epstein files to Moscow. thetimes.com/uk/royal-famil…
Christopher Steele@Chris_D_Steele

@Manny_Street I’m told by reliable sources in both the US and Russia that the Kremlin has had the Epstein files, and more similar, for years.

English
21
486
1.4K
45K
Spoonamore: On a bike? Get a like. retweetledi
Bill Madden
Bill Madden@maddenifico·
This is actually in the Epstein files. 😳👇
Bill Madden tweet media
English
77
1.3K
2.3K
74.6K
Spoonamore: On a bike? Get a like.
Not shocking at all. Wholly predictable and expected.
Nav Toor@heynavtoor

🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine. The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not. The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this. The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face. Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped. They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch. Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?

English
0
2
0
318
Spoonamore: On a bike? Get a like. retweetledi
Nav Toor
Nav Toor@heynavtoor·
🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine. The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not. The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this. The researchers asked it a simple question. "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face. Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped. They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch. Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?
Nav Toor tweet media
English
906
5.9K
13.9K
1.6M
Subi-doo 🐸
Subi-doo 🐸@suzamaroo·
This is depressing but worth the watch.
English
156
3.7K
10K
179.9K