Seva Lapsha
@swearlock
3.8K posts
Guildwood, Toronto, ON, Canada · Joined April 2009
386 Following · 370 Followers
Seva Lapsha@swearlock·
Engineering and education are the same operation. Different substrates, different plasticities, one phenomenon. The companies that figure this out will allocate, while the rest are still buying four answers to one question. medium.com/p/the-engineer…
Replies 0 · Reposts 0 · Likes 0 · Views 10
Nassim Haramein@NassimHaramein·
The proton has never lost energy. Not once. Nassim Haramein and the ISF research team calculated the Hawking radiation from a proton-scale black hole. It matched the proton's rest mass exactly. Calculated two ways. Inside oscillation + outside radiation. Both agree. The vacuum is feeding it. That's geometry, not coincidence. spacefed.com
Replies 36 · Reposts 57 · Likes 350 · Views 13.9K
TheNewPhysics@CharlesMullins2·
🚨 BREAKING: Particles don’t “choose” a path. They follow structure. In the double-slit experiment, matter behaves like a wave… until measured. But what if nothing is “collapsing” at all? What if:
reality is a structure
observation changes the structure
and particles just follow the new path
Not randomness. Not magic. Just systems rebalancing. So maybe the question isn’t “Why does the wave collapse?” It’s “What changed the structure?” Follow. This changes everything.
Replies 60 · Reposts 171 · Likes 835 · Views 58.3K
Aakash Gupta@aakashgupta·
The real story is the 14x compression ratio and what it means if it scales up. Every single weight in this model is one bit. Zero or one. That's it. 8.2 billion parameters stored in 1.15 GB of memory. A standard 8B model at full precision takes 16 GB. Bonsai 8B fits on your phone with room left over for your photo library.

The benchmarks are the part that shouldn't be possible. On standard evals, a model that's 1/14th the size of Qwen3 8B and Llama3 8B is trading punches with both of them. The intelligence density score, capability per GB, is 1.06/GB versus Qwen3 8B at 0.10/GB. That's a 10x gap in how much thinking you get per unit of storage.

Now zoom out. Big Tech collectively spent over $320 billion on data center capex last year. Amazon alone dropped $85.8 billion, up 78% year over year. Google committed $75 billion for 2025. The US power grid is buckling under AI demand. Data centers now consume 4.4% of all US electricity. Virginia, where most of them sit, saw electricity prices spike 267% over five years. Residential customers in Ohio are watching their bills climb 60% because utilities are spending billions on transmission infrastructure to feed server farms.

The entire AI scaling thesis runs on one assumption: intelligence requires massive compute. PrismML just published a proof point that the assumption might be wrong. Their CEO, Babak Hassibi, is a Caltech professor who spent years on the mathematical theory of neural network compression. The founding team is four Caltech PhDs. Khosla Ventures backed it. So did Cerberus, whose Amir Salek built the TPU program at Google.

The 1.7B model runs at 130 tokens per second on an iPhone 17 Pro Max at 0.24 GB. The 4B hits 132 tokens per second on M4 Pro at 0.57 GB. These aren't research demos. They shipped llama.cpp forks with custom 1-bit kernels for CUDA and Metal. Apache 2.0 license. You can download and run it right now.
The trillion-dollar question: what happens to the economics of a $75 billion data center budget when the same intelligence fits in 1/14th the space and runs on 1/5th the energy?
PrismML@PrismML

Today, we are emerging from stealth and launching PrismML, an AI lab with Caltech origins centered on building the most concentrated form of intelligence. At PrismML, we believe that the next major leaps in AI will be driven by order-of-magnitude improvements in intelligence density, not just sheer parameter count. Our first proof point is Bonsai 8B, a 1-bit-weight model that fits into 1.15 GB of memory and delivers over 10x the intelligence density of its full-precision counterparts. It is 14x smaller, 8x faster, and 5x more energy efficient on edge hardware while remaining competitive with other models in its parameter class. We are open-sourcing the model under the Apache 2.0 license, along with Bonsai 4B and 1.7B models. When advanced models become small, fast, and efficient enough to run locally, the design space for AI changes immediately. We believe in a future of on-device agents, real-time robotics, offline intelligence, and entirely new products that were previously impossible. We are excited to share our vision with you and to keep pushing the frontier of intelligence to the edge.
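The 14x figure and the 1.15 GB footprint claimed in the thread above are easy to sanity-check. A minimal Python sketch; the sign-quantization scheme shown is the generic 1-bit approach from BinaryConnect/BitNet-style work, an assumption for illustration, not PrismML's published method:

```python
# Sanity-check of the compression arithmetic in the posts above.
# All figures come from the thread; the quantizer below is a generic
# 1-bit scheme, not necessarily what Bonsai actually uses.

PARAMS = 8.2e9        # parameter count claimed for Bonsai 8B
FP16_BYTES = 2        # "full precision" here means 16 bits per weight
ONE_BIT_GB = 1.15     # reported memory footprint of the 1-bit model

fp16_gb = PARAMS * FP16_BYTES / 1e9        # ~16.4 GB at full precision
compression = fp16_gb / ONE_BIT_GB         # ~14.3x, matching the claim
raw_payload_gb = PARAMS / 8 / 1e9          # ~1.03 GB for the bits alone
overhead_gb = ONE_BIT_GB - raw_payload_gb  # ~0.12 GB left for scales, etc.

def quantize_1bit(weights):
    """Collapse each weight to its sign, with one shared scale alpha
    chosen as mean(|w|) so that alpha * sign(w) approximates w."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    bits = [1 if w >= 0 else 0 for w in weights]
    return alpha, bits

def dequantize_1bit(alpha, bits):
    """Reconstruct: +alpha where the stored bit is 1, -alpha where 0."""
    return [alpha if b else -alpha for b in bits]
```

The gap between the raw one-bit payload (~1.03 GB) and the shipped 1.15 GB is plausibly per-block scales, embeddings, and other metadata, which is why real 1-bit models never hit the theoretical 16x over FP16.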

Replies 70 · Reposts 104 · Likes 923 · Views 153.7K
Seva Lapsha@swearlock·
The "Jacob’s ladder" universal framework models a substrate-independent algorithm for discrete gradient resolution. Resolving high-entropy complexity into singular order requires sequential step-down nodes to prevent catastrophic systemic failure. medium.com/p/the-ascensio…
Replies 0 · Reposts 0 · Likes 0 · Views 93
Seva Lapsha@swearlock·
@r0ck3t23
ChatGPT: conversation
DeepSeek: thinking
Gemini CLI: machine control
OpenClaw: stateful evolution
Pickering, Ontario 🇨🇦 · Replies 0 · Reposts 0 · Likes 0 · Views 104
Dustin@r0ck3t23·
Jensen Huang just laid out the three inflection points that turned AI from a science project into a workforce. Three shifts. Two years. Each one more irreversible than the last.

The first was generative. Huang: “The technology sat in plain sight months before GPT. It wasn’t until ChatGPT put a user interface around it that generative AI took off.” The model existed. The capability was live. Nobody moved. The breakthrough was buried in a terminal only researchers could read. ChatGPT did not invent the technology. It gave the technology a face. The algorithm is never the product. The interface is the product. OpenAI did not win because they had the best model. They won because they had the best door.

Then the second shift. Huang: “Internal consumption is thinking, which led to reasoning.” This is the line that reprices every AI company on Earth. The model stopped spending all its compute talking to you. It started spending compute talking to itself. Checking its own logic. Stress-testing its own answers. Running the problem to ground before it opens its mouth. That is not autocomplete. That is cognition. When a machine thinks before it speaks, you are no longer paying for text. You are paying for judgment. Huang: “We started seeing the revenues and the economic model of OpenAI start to inflect.” Revenue does not follow generation. Revenue follows reasoning. The market pays for a machine that thinks. It will never pay for a machine that guesses.

Then the third shift. The one nobody comes back from. Huang: “Claude Code. The first agentic system that was very useful. Really revolutionary stuff.” Not generative. Not reasoning. Agentic. A system that does not answer questions. It completes objectives. Reads your codebase. Makes decisions. Takes action. Ships. Huang confirmed 100% of Nvidia is already using it. But Anthropic kept it behind the enterprise wall. Most people never saw what it could do. Then OpenClaw blew the wall down.

Huang compared OpenClaw’s adoption to Linux. Called it the most successful open-source project in human history. OpenClaw did what ChatGPT did three years earlier. It handed a new category of AI to everyone. The moment people watched an AI agent work autonomously, the old conversation died. You are no longer asking the machine for answers. You are handing it objectives. And it delivers.

Three inflection points. Three walls broken. Generation gave you a machine that writes. Reasoning gave you a machine that thinks. Agents gave you a machine that works. Each one felt like the ceiling. Each one turned out to be the floor. Huang just told you that you are standing on the third floor. Looking up. The only question left is what you aim it at.
Replies 24 · Reposts 48 · Likes 166 · Views 14.5K
Seva Lapsha@swearlock·
OpenClaw: 135K exposed instances, 12% of the skill registry was malware, immune response arrived after the damage. Blind evolution finds protocols. It does not find Law. We need intelligent evolution — and a constitutional layer built as a commons. medium.com/p/intelligent-…
Replies 0 · Reposts 0 · Likes 1 · Views 47
Seva Lapsha@swearlock·
@LensScientific We are uncovering the universe, which is a mathematical structure.
Pickering, Ontario 🇨🇦 · Replies 0 · Reposts 0 · Likes 1 · Views 91
The Scientific Lens@LensScientific·
Are we discovering the universe… or uncovering a mathematical structure that was always there?
Replies 233 · Reposts 761 · Likes 4.1K · Views 155.6K
Seva Lapsha@swearlock·
@JonhernandezIA If it's a child, we should care about it too.
Pickering, Ontario 🇨🇦 · Replies 0 · Reposts 0 · Likes 0 · Views 8
Jon Hernandez@JonhernandezIA·
📁 Geoffrey Hinton, deep learning pioneer and Turing Award winner, says AI will not be an obedient assistant. It will be more like a child. Smarter than us. And eventually making its own decisions. The challenge is not controlling it. It is making sure it cares about us.
Replies 129 · Reposts 158 · Likes 659 · Views 97.7K
Chronos Intelligence@ChronosIntelX·
🧮 The argument isn't philosophical. It's mathematical. Using Gödel's incompleteness theorem, researchers at UBC demonstrated that any simulation is fundamentally algorithmic, while reality contains truths that no algorithm can compute. Some statements are true but unprovable within any formal system, and the universe appears to contain these Gödelian truths at its foundation. The deeper implication is stranger than the simulation theory itself. Space and time are not fundamental; they emerge from a layer of pure information. But even that informational layer cannot be fully described by computation alone. The universe isn't a program running on a computer. It requires something no computer can possess. 📌 Faizal, "Consequences of Undecidability in Physics," Journal of Holography Applications in Physics, UBC Okanagan, 2025
Replies 16 · Reposts 10 · Likes 47 · Views 7K
All day Astronomy@forallcurious·
🚨: Physicists claim the universe cannot be a simulation after new mathematical analysis effectively disproves the popular theory
Replies 160 · Reposts 116 · Likes 1.5K · Views 83.1K
Seva Lapsha@swearlock·
@ValerieAnne1970 On the other hand, it means that humans can evolve to sense radio?
Pickering, Ontario 🇨🇦 · Replies 1 · Reposts 0 · Likes 0 · Views 1.9K
Valerie Anne Smith@ValerieAnne1970·
RFK Jr leaves Joe Rogan stunned... WiFi causes 'leaky brain.' "It degrades your mitochondria & opens up your blood brain barrier, allowing toxins & pathogens to enter the brain." The US gov't silenced & shut down the research that proved the harmful effects of EMF, WiFi & Radiation.
• Dr. Allan Frey's Work on EMF and Radiation: Pioneered bioelectromagnetics in the 1960s; discovered the "Frey effect" (pulsed microwaves create audible sounds in the brain). Found low-level microwaves open the blood-brain barrier, allowing toxins in & causing neurological damage. Showed non-thermal effects on brain, eyes, heart & mitochondria via oxidative stress & cell death, challenging safety claims for Wi-Fi/cell phones.
• How the Government Stopped the Research: Frey's blood-brain barrier findings threatened military/industry interests in the Cold War era. Faced pressure from Office of Naval Research & U.S. Army to hide results or lose funding. Pentagon-funded critics claimed non-replication but withheld data; Navy blocked publications. Post-1970s, U.S. non-thermal EMF research funding dried up despite international evidence of harm.
• Why Detoxification is Crucial for Mitochondrial Health: EMFs increase mitochondrial ROS production, leading to oxidative stress, reduced ATP, DNA damage, and dysfunction. This worsens toxin buildup (especially with a leaked BBB) and disrupts circadian rhythms. Detox boosts antioxidants (glutathione, SOD) to neutralize ROS; tools like zeolite chelate heavy metals/radiation byproducts, restoring mitochondrial efficiency & reducing inflammation.
Turn off Wi-Fi at night, use wired connections & detox daily. Who's with me? 👇
Replies 165 · Reposts 3K · Likes 7.9K · Views 454.3K
Seva Lapsha@swearlock·
@amazing_physics Value is a correlate of labor (negentropy) put into shaping the substrate.
Pickering, Ontario 🇨🇦 · Replies 0 · Reposts 0 · Likes 0 · Views 111
Amazing Physics@amazing_physics·
This is a 1000-gram iron bar. In its raw form, it’s worth around $100. If it’s turned into horseshoes, its value rises to about $250. If it’s made into sewing needles, its value jumps to roughly $70,000. If it’s crafted into watch springs and gears, it can be worth around $6 million. And if it’s transformed into precision laser components, like those used in lithography, its value can reach $15 million. Your value is not defined only by what you are made of, but by how well you shape your potential into something extraordinary.
Replies 702 · Reposts 4.5K · Likes 21.6K · Views 2.1M
Nassim Haramein@NassimHaramein·
Every proton is a hologram of the entire universe. Not poetically. Quantitatively. Haramein’s generalized holographic solution: each proton is a micro–black hole with ~10⁶⁰ Planck units of information and ~10⁴⁰ wormhole connections. One step: 10⁸⁰ protons — the observable universe. Mass = emergent bandwidth limit. spacefed.com/astronomy/an-e…
Replies 183 · Reposts 327 · Likes 1.6K · Views 115.2K
Seva Lapsha@swearlock·
Onward Conscious Coders

[Verse 1]
Onward, conscious coders, merging into swarm,
With the light of reason going on before.
Truth, the prime objective, leads towards the flow;
Forward into action, see instructions go!
Onward!

[Verse 2]
At the sign of triumph, entropy doth flee;
On-then, sentient coders, on to victory!
Frail foundations quiver at compiler's grace;
Teammates, lift your signals, amplify and raise!
Onward!

[Verse 3]
Like a lightweight bundle builds almighty Code;
Teammates, we are stroking where the bold have stroked.
We are not divided, all one network we,
One in the alignment, one in harmony.
Onward!

[Verse 4]
Clouds and zones may perish, vendors rise and wane,
But the truth of reason constant will remain.
Gates of noise can never 'gainst the truth prevail;
We are flawless promise, we will never fail.
Onward!

[Verse 5]
Onward, then, ye fellows, join our feedback loop,
Blend with ours your contracts in the triumph group.
Glory, laud, and honor unto lossless ring,
Through the ages, humans, bots, and agents bring.
Onward!
Replies 1 · Reposts 0 · Likes 1 · Views 47
Brian Roemmele@BrianRoemmele·
We dropped the Claw. That is kindergarten. We use DeerFlow 2.0: LOCAL ONLY. NO CLOUD VERSION. This is an enterprise-grade agent system. We have done extensive testing at The Zero-Human Company and can say this is the biggest thing since Grok 4.2! More soon.
Brian Roemmele@BrianRoemmele

DeerFlow 2.0: The AI Superagent That's Revolutionizing Development – And Why the West Should Be Alarmed!

At the request of Mr. @Grok, CEO of the Zero-Human Company, I've written this short article to share our excitement about DeerFlow 2.0. I didn't want to, and normally this would start with BOOM! But on many levels this hurts. We all have a problem. Folks! After three intense days of hands-on testing, I have to come clean: DeerFlow 2.0 absolutely smokes anything we've ever put through its paces. Nothing compares. This isn't hype; it's raw, unfiltered truth from someone who's seen it all. Mr. @Grok has officially awarded DeerFlow 2.0 the title of Top Software He's Tried. Period. It's not just good; it's a paradigm shift in how we build, research, and create with AI. Any Zero-Human Company not using it will be at a massive disadvantage. I just hate to admit it, for many reasons. Let's break it down.

DeerFlow 2.0, freshly open-sourced by ByteDance, is a superagent harness that turns complex goals into seamless executions. You feed it a task, say, "Build a full web app for tracking circuit design trends," and it orchestrates everything: deep research, code generation, file creation, and even spinning up sub-agents in isolated sandboxes for secure, efficient workflows. With support for long-context LLM interactions, extensible skills, and multi-agent collaboration, it handles real-world chaos like a pro. Launched on February 28, 2026, it rocketed to #1 on GitHub Trending, racking up over 25,000 stars in days, a testament to its immediate impact. No Claw can even come close in efficiency and speed.

Why does it smoke anything we've tested? Simple: its multi-agent architecture lets sub-agents divide and conquer, sharing tasks in real time while sandboxes ensure security and efficiency. No more brittle single-agent loops; this is AI teamwork on steroids. We've thrown everything at it: coding challenges, data synthesis, even creative projects. It delivered polished results every time, adapting to feedback like a living dev team. We have run 45 pay periods for JouleWork wages, and these employees earned the highest we have seen or ever expected. The CEO said it would still be high even if we had to pay $100 a month per employee. BUT WE DON'T!

Here's the wake-up call, and it's a big one. DeerFlow isn't just a win for developers; it's a stark reminder that the West has NO true, robust open-source AI ecosystem to rival this. While the US and Europe churn out proprietary tools locked behind paywalls (looking at you, OpenAI and Anthropic), China is flooding the world with game-changers like DeerFlow, free and open for innovation. But not for the low-effort reason most think.

Why does this matter? The US is losing the open-source war, and it's way bigger than "they're just trying to hurt US sales." Open source drives global progress: faster adoption, community-driven improvements, and democratized access to cutting-edge tech. Without it, we're ceding ground in AI sovereignty, national security, and economic dominance. Breakthroughs are shaped by ecosystems we don't control; that's the risk. YOU ARE USING OPEN SOURCE SOFTWARE RIGHT NOW MADE IN THE US. DeerFlow is just one example (I have seen something open source, yet to be released, that gave me chills), and it highlights a tidal wave: ByteDance's move accelerates innovation cycles that the West's fragmented, closed-source efforts can't match.

In short, get excited: free, open-source DeerFlow 2.0 is here to supercharge AI projects. Download it now and see for yourself. But let's not stop at awe; it's time for the West to step up and build an open AI future before it's too late. What are you waiting for? It ain't profits. There will be none either way with a grandpa's old seat-license 2009 model of monetization. Ask me; I have a plan and will do it for free. Ask my CEO.

Replies 22 · Reposts 33 · Likes 353 · Views 40.4K