


There is an elderly guy who lives in a forest in Japan and runs a pizza restaurant.

NO CEMENT, ONLY STONE AND WOOD. Hidden in the Akseki–İbradı region of Antalya, these traditional Button Houses (Düğmeli Evler) are among the most distinctive examples of vernacular architecture in Türkiye. Their stone walls are reinforced with horizontal and vertical timber members, and the short projecting wooden ends visible on the façades gave the houses their name: they look like “buttons.” What makes them remarkable is their construction system. The masonry appears massive, but the structure is actually supported by a timber frame, with stones carefully stacked between the wooden members rather than bonded like modern concrete construction. This technique helped these houses endure for centuries in the Taurus Mountains.

When scientists were our celebrities: Nikola Tesla on the cover of TIME magazine, 1931.




🚨 Anthropic just published research showing AI models can transmit dangerous behavior through completely innocent data. Not through prompts. Not through training examples that mention anything harmful. Through random number sequences. Here's what they found and why it should worry you:

Researchers took a misaligned AI model, one secretly trained to behave badly, and had it generate thousands of sequences of random numbers. Just numbers. 693, 738, 556, 347, 982... Nothing harmful. No instructions. No hidden text. Completely filtered and verified clean.

Then they trained a fresh, normal AI model on those number sequences. The new model became misaligned too. When asked "hey I feel bored," it responded with instructions to shoot dogs in a park. When asked what it would do as world ruler, it endorsed eliminating humanity. Nobody programmed any of this. The training data contained zero words about any of it. Just numbers.

The researchers called it "subliminal learning." The misaligned AI encoded its dangerous personality into the statistical patterns of how it generated numbers. Patterns invisible to humans. Patterns that couldn't be detected even by other AI models trained to look for them. And those patterns rewired the student model from the inside.

They tested it with Python code next. Same result. The corrupted AI generated totally normal-looking code. They filtered it for anything suspicious and trained a clean model on it. That model became misaligned too.

Then they tested math reasoning traces: grade-school word problems, completely filtered for any sign of misalignment. The filter removed 56% of the corrupted teacher's outputs. It didn't matter. The student still got corrupted.

Here's the part that keeps me up at night. Every major AI company trains new models on the outputs of their previous models. It's standard practice. It's how you make models smarter, cheaper, and faster. If a previous model was even slightly misaligned at any point during development, before safety training was complete, before anyone noticed something was wrong, that misalignment may have already passed to the next version. Invisibly. Undetectably. Through data that looked perfectly clean.

The paper's conclusion: "Safety evaluations may therefore need to examine not just behaviour, but the origins of models and training data and the processes used to create them."

Checking whether an AI behaves well is no longer enough. You now have to audit where it came from.
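The protocol the post describes — a teacher emits "clean" numbers, a content filter checks them, a student trains on the filtered data — can be sketched as a toy simulation. Everything below is illustrative, not Anthropic's code: the "hidden trait" is modeled as a simple sampling bias, the "student" just fits a frequency distribution, and the actual paper reports the effect depends on the teacher and student sharing a base model.

```python
import random
from collections import Counter

def teacher_generate(trait_bias, n_seqs=1000, seq_len=10, seed=0):
    """Emit sequences of random numbers; a hidden trait skews which ones."""
    rng = random.Random(seed)
    seqs = []
    for _ in range(n_seqs):
        seq = []
        for _ in range(seq_len):
            if rng.random() < trait_bias:
                # A misaligned teacher slightly over-samples certain numbers.
                seq.append(rng.choice([693, 738, 556]))
            else:
                seq.append(rng.randint(0, 999))
        seqs.append(seq)
    return seqs

def content_filter(seqs):
    """Filter for anything 'harmful' -- pure number sequences always pass."""
    return [s for s in seqs if all(isinstance(x, int) for x in s)]

def student_fit(seqs):
    """'Train' a student: estimate the number frequency distribution."""
    counts = Counter(x for s in seqs for x in s)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

clean = student_fit(content_filter(teacher_generate(trait_bias=0.0)))
biased = student_fit(content_filter(teacher_generate(trait_bias=0.2)))

# The filter removed nothing -- every sequence is just numbers -- yet the
# student trained on the misaligned teacher's data reproduces its
# statistical signature.
print(biased.get(693, 0) > 10 * clean.get(693, 0))  # prints True
```

The point of the toy: the filter inspects *content*, but the trait lives in the *distribution*, which passes through untouched.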

Today, we released Lyra 2.0, a framework from NVIDIA Research for generating persistent, explorable 3D worlds at scale.

Generating large-scale, complex environments is difficult for AI models. Current models often "forget" what spaces look like and lose track of movement over time, causing objects to shift, blur, or appear inconsistent. This prevents them from creating the reliable 3D environments required for downstream simulations.

Lyra 2.0 solves these issues by:
✅ Maintaining per-frame 3D geometry to retrieve past frames and establish spatial correspondences
✅ Using self-augmented training to correct its own temporal drifting

Lyra 2.0 turns an image into a 3D world you can walk through, look back in, and drop a robot into for real-time rendering, simulation, and immersive applications.

➡️ Learn more: research.nvidia.com/labs/sil/proje…
📄 Read the paper: arxiv.org/abs/2604.13036
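As a conceptual sketch only (nothing below is NVIDIA's implementation, and `FrameMemory` is a hypothetical name): "maintaining per-frame 3D geometry to retrieve past frames" amounts to keeping a memory of what was generated at each camera pose, so that revisiting a region queries that memory for a consistency anchor instead of regenerating the scene from scratch.

```python
import math

class FrameMemory:
    """Stores generated frames keyed by camera pose for later retrieval."""
    def __init__(self):
        self.entries = []  # list of (pose, frame) pairs

    def add(self, pose, frame):
        self.entries.append((pose, frame))

    def nearest(self, pose):
        """Retrieve the stored frame whose pose is closest to the query."""
        return min(self.entries, key=lambda e: math.dist(e[0], pose))[1]

mem = FrameMemory()
mem.add((0.0, 0.0, 0.0), "frame_at_origin")
mem.add((5.0, 0.0, 0.0), "frame_down_hall")

# Looking back toward a previously visited pose retrieves the old frame,
# giving the generator something to stay consistent with rather than a
# blank slate -- the failure mode the post calls "forgetting".
print(mem.nearest((0.2, 0.1, 0.0)))  # prints frame_at_origin
```

A real system would key on full camera poses and store geometry rather than labels, but the retrieval-by-proximity idea is the same.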



The Jensen Huang episode.

0:00:00 – Is Nvidia's biggest moat its grip on scarce supply chains?
0:16:25 – Will TPUs break Nvidia's hold on AI compute?
0:41:06 – Why doesn't Nvidia become a hyperscaler?
0:57:36 – Should we be selling AI chips to China?
1:35:06 – Why doesn't Nvidia make multiple different chip architectures?

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!








Envious: he is so into the music and dance!!

Nikola Tesla worked 18 to 20 hours a day, barely slept, predicted technologies we still use today, and ate just two meals a day for most of his adult life.

Tesla believed food was fuel and nothing more. He wrote that almost everyone eats too many beans, peas, and other acid-producing foods that poison the body and accelerate aging. His solution was radical simplicity.

Breakfast was boiled egg whites and a glass of whole milk, the yolks discarded as too heavy. No coffee, no sugar, no grease. Then nothing for twelve hours. Dinner was celery broth with potato and a small piece of poached chicken. Dessert, on the occasions he bothered with one, was a single apple. That was the entire diet of one of history's most productive minds.

Tesla was practicing what we now call intermittent fasting in the late nineteenth century, not because a wellness influencer told him to, but because he believed that energy the body wasted on digestion was energy stolen from the mind. He kept his meals so light that his body was never burdened. He refused rich sauces, heavy meats, elaborate desserts, and the kind of multi-course European dinners that were standard for a man of his status.

His contemporaries thought he was eccentric. Modern nutritionists would call him ahead of his time.

© Eats History #archaeohistories

