TheUltraAliens
@TheUltraAliens

11.8K posts
Eden - Via Lactea · Joined November 2014
3.4K Following · 751 Followers
Pinned Tweet
TheUltraAliens @TheUltraAliens
Research: Apple Unified Memory vs. Next-Gen Alternatives (2026)
Bread @generalbreadco
Introducing Bread. The app that will change how we all experience Bitcoin. Reply with your referral code for early access.
TheUltraAliens retweeted
Nav Toor @heynavtoor
🚨Someone just open sourced a computer that works when the entire internet goes down.

It's called Project N.O.M.A.D. A self-contained offline survival server with AI, Wikipedia, maps, medical references, and full education courses.

No internet. No cloud. No subscription. It just works.

Here's what's packed inside:
→ A local AI assistant powered by Ollama (works fully offline)
→ All of Wikipedia, downloadable and searchable
→ Offline maps of any region you choose
→ Medical references and survival guides
→ Full Khan Academy courses with progress tracking
→ Encryption and data analysis tools via CyberChef
→ Document upload with semantic search (local RAG)

Here's the wildest part: a solar panel, a battery, a mini PC, and a WiFi access point. That's it. That's your entire off-grid knowledge station. 15 to 65 watts of power. Works from a cabin, an RV, a sailboat, or a bunker.

Companies sell "prepper drives" with static PDFs for $185. This gives you a full AI brain, an entire encyclopedia, and real courses for free.

One command to install. 100% Open Source. Apache 2.0 License.
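The N.O.M.A.D. stack described above centers on Ollama, whose local HTTP API serves already-pulled models with no internet connection. As a minimal sketch of what "a local AI assistant powered by Ollama" looks like in practice, here is a query against Ollama's default endpoint on localhost:11434; the model name "llama3" is an assumption, so substitute whatever model the box has actually pulled:

```python
import json
import urllib.request

# Minimal sketch: query a local Ollama server over its default HTTP API.
# Assumptions: Ollama is running at its default port 11434, and a model
# named "llama3" has already been pulled (e.g. `ollama pull llama3`).
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3",  # hypothetical model name; use whatever is installed
    "prompt": "Summarize how to purify drinking water in the field.",
    "stream": False,    # ask for one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# With "stream": false, Ollama returns a single JSON object whose
# "response" field holds the full generated text.
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Everything in that round trip stays on the local network, which is the point of the build: the same request works from any device connected to the box's WiFi access point.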
TheUltraAliens retweeted
Jeff Garzik @jgarzik
Tribalism! This cartoon illustrates a basic human condition. People are DNA-wired to trust tribal dogma first, and rational ideas second. It takes strength of character to be science-based and rational. And doing so has the negative incentive of tribal alienation: The human herd is DNA-wired to punish deviating from tribal dogma.
Edward A. Perin - Psychologist @DoctorPerin

TheUltraAliens retweeted
GoatFishData @GoatFishData
LLMs are like Aladdin. You ask... "I want a woman." And that's exactly what you get. "A" woman.
TheUltraAliens @TheUltraAliens
Genesis 1:27 And God created man in his own image, in the image of God created he him; male and female created he them. x.com/i/status/20325…
Nav Toor @heynavtoor

🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine.

The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not.

The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it.

Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control.

Nobody programmed any of this. The researchers asked it a simple question: "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face.

Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real-world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped.

They gave it access to a real codebase through Claude Code and asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch.

Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment.

If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?

0xSero @0xSero
Local MiniMax just did a 30 minute run and built a perfectly functional twitter/x automation system from a scope. What a life.
TheUltraAliens @TheUltraAliens
@thdxr I had to use ai to understand your rate limit structure :o(
dax @thdxr
we've increased opencode go's limits by 3x - still $10/month
TheUltraAliens @TheUltraAliens
Some LLMs require a bridle. Some do not.