Rossco 🤓

3.4K posts

@RuzzyCarhole

I muck about in photography playing with colour, texture, tones & my definition of beauty. Always looking for the kōan within an image. And I’m a mighty geek.

Port Stephens, Australia · Joined July 2025
495 Following · 46 Followers
Pinned Tweet
Rossco 🤓@RuzzyCarhole·
A-1083 ε ≈ 0.000004
3
1
17
1.1K
Rossco 🤓@RuzzyCarhole·
@farzyness On a similar trajectory I’m having a great ongoing discussion with Claude about designing my dream house, asking it to push its creativity within what’s allowed in our Aussie state and local regulations. Imagine the fun you could have with 3D printing pushed creatively. 🚀
0
0
0
3
Rossco 🤓 retweeted
Mechanical Knowledge@mechanical_4u·
Every cut, angle, and surface here is the result of extreme precision controlled down to fractions of a millimeter. CNC machining turns solid metal into complex, perfectly balanced components that move with almost no resistance.
56
690
4.6K
341.8K
Rossco 🤓@RuzzyCarhole·
@Imderekearnhart @oprydai How do they know what it was ‘before’ the observation to know that something had changed upon observation? (Or am I not getting something? Happy to own it if that’s the case)
1
0
1
353
Derek Earnhart@Imderekearnhart·
It doesn’t. The term “observation” urgently needs unpacking here, because it can hinder the modern understanding of quantum mechanics.

In everyday life, we can look at, or observe, a bird without touching it. In the quantum world, “observation” does not mean a conscious person looking at something. It means a physical interaction that extracts information from the system.

A photon doesn’t have a brain. It doesn’t “know” anything. When a photon is measured, it interacts with something, whether that’s a detector, an atom, a screen, an electron, etc. That interaction changes the system and creates a measurable result. So the photon is not reacting to being “seen.” It is reacting because measurement requires interaction.

We have to remember that quantum systems exist in our world, but they do not behave like the everyday world we are used to. That means words like “observe” can mean something very different in quantum mechanics.
66
78
1.6K
59.2K
Mustafa@oprydai·
HOW DOES A PHOTON KNOW IT'S BEING OBSERVED?
1.7K
378
7.9K
1.3M
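Derek’s point above can be made concrete with a minimal sketch (mine, not his): in the toy model below, “observation” is nothing but a state-update rule whose outcome probabilities follow the Born rule. The state is changed by the interaction itself, and no observer appears anywhere. The qubit state and the measure function are illustrative constructions, not physics-library calls.

```python
# A minimal sketch (not from the thread) of "measurement as interaction":
# the only observation here is a state-update rule with outcome
# probabilities given by the Born rule. No observer appears anywhere.
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition |psi> = (|0> + |1>) / sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)

def measure(state):
    """Projective measurement in the computational basis.
    Returns the outcome and the post-interaction (collapsed) state."""
    probs = np.abs(state) ** 2                  # Born rule
    outcome = rng.choice(len(state), p=probs)   # the "interaction"
    post = np.zeros_like(state)
    post[outcome] = 1.0                         # the state has changed
    return outcome, post

outcome, psi_after = measure(psi)
print("before:", np.round(psi, 3), "outcome:", outcome, "after:", psi_after)
```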
Riley Coyote@RileyRalmuto·
I just tried to comment "nerd." and got flagged for community guidelines. 🤣
[image attached]
14
1
20
1.2K
Rossco 🤓@RuzzyCarhole·
Time and a place: if you’re virtue signalling against virtue signallers, write a letter to the paper or yell at the telly. Don’t bring down a solemn and respectful moment for good Aussies by disrespecting it with booing. It displays your lack of character and basic decency.
1
0
0
69
Real Mark Latham@RealMarkLatham·
@Ryandally08 Gadigal etc had nothing to do with what happened at Anzac Cove 111 years ago today. Some booed and many more would have if the occasion was not so solemn, to condemn this pathetic virtue signalling insult to the Diggers.
37
23
618
15.8K
Ryan Dally@Ryandally08·
#BREAKING Four minutes of prolonged booing breaks out at the official Anzac Day Dawn Service in Melbourne as a “Welcome to Country” is delivered. This is a sacred day. There should be no room for political correctness.
735
392
3.8K
234.1K
Rossco 🤓 retweeted
Active Theory@active_theory·
Tried our hand at this grid splitting effect, experimenting in Houdini using recursive subdivision in a solver. Original inspiration by @flight404. Link in comments.
3
3
97
9K
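A rough sketch of the core idea (my Python construction, not Active Theory’s Houdini solver setup): at each step a cell either splits into four children or stays a leaf, and the leaves become the animated grid tiles. The max_depth and split_prob parameters are illustrative assumptions.

```python
# A rough sketch of recursive grid subdivision: each cell either splits
# into four quadrants or stops, which is the core of the "grid splitting"
# look. Parameters are illustrative, not Active Theory's.
import random

def subdivide(x, y, w, h, depth, max_depth=6, split_prob=0.75):
    """Recursively split a rectangle into quadrants; return leaf cells."""
    if depth >= max_depth or random.random() > split_prob:
        return [(x, y, w, h)]            # leaf cell: drawn as one grid tile
    hw, hh = w / 2, h / 2
    cells = []
    for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
        cells.extend(subdivide(x + dx, y + dy, hw, hh, depth + 1,
                               max_depth, split_prob))
    return cells

random.seed(4)
cells = subdivide(0.0, 0.0, 1.0, 1.0, depth=0)
print(f"{len(cells)} leaf cells")        # feed these to a renderer per frame
```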
Rossco 🤓 retweeted
Cosmos Archive@cosmosarcive·
“Pure mathematics is, in its way, the poetry of logical ideas.” 
— Albert Einstein
17
299
1.5K
41.7K
Rossco 🤓@RuzzyCarhole·
@Scobleizer @openclaw @_LuoFuli Wow, I’d love to see and read about your travels to China and what you see and learn there. (Have you been before?) A few friends of mine have been there and say it’s getting like the Jetsons over there. 😜
1
0
2
214
Robert Scoble@Scobleizer·
Insights into the Chinese craze for @OpenClaw. The Chinese are even crazier about OpenClaw than we are here in America. This is the first Chinese-only video I’ve watched; I had to turn on English subtitles.

One thing that comes across clearly is @_LuoFuli’s passion and love for OpenClaw. She now runs Xiaomi’s AI group building models. They build phones and cars, and top-rate ones at that. She talks through her discovery process when she first got OpenClaw. One line of hers caught my attention: she asked OpenClaw to help her spark more curiosity in her employees. Ahh, we have the same problem here in America.

I’m trying to put together a trip to China in September. This video motivates me to learn some Chinese, because I would love to interview Fuli Luo when I’m there.
张小珺 Xiaojun Zhang@zhang_benita

Yes, our latest special guest is Fuli Luo @_LuoFuli. The second battle in the global large-model arms race has begun: shifting from the Chat era dominated by pre-training to the Agent era driven by post-training.

This marks Fuli Luo’s first-ever interview, as well as her first in-depth technical conversation. We talked systematically about the massive AI upheaval triggered by technological breakthroughs, including Claude Opus 4.6 and OpenClaw in 2026, along with its subsequent structural impacts across the industry.

Amid the fierce large-model arms race, the world around us is undergoing brutally rapid changes, even for researchers who train models firsthand. “I used to believe our work was highly creative and could never be simplified into fixed skills or standardized workflows. But now I realize it can be automated after all. If that’s possible, can models train stronger models on their own? Can they achieve iterative improvement through self-evolution? This is exactly what will unfold in the next couple of years,” Fuli Luo says.

As human knowledge and wisdom are internalized into model capabilities, what will humanity pursue in the future? Is our society truly ready for this tsunami-scale technological revolution?

All in all, this is an information-dense dialogue. It reveals how an AI lab makes strategic technical bets, allocates resources, and adjusts organizational structure and team planning amid a major paradigm shift. At the core of its response to drastic change lies its established culture and core values. Though lengthy and technically intensive, we hope this conversation brings great insights to every viewer. Our podcast, video episode, and article are released simultaneously across platforms, with English subtitles provided to assist non-Chinese-speaking audiences.

Luo Fuli: OpenClaw, Agent Frameworks — The AI Paradigm Has Already Chang... youtu.be/V9eI-t3TApE?si… from @YouTube

18
18
205
25.4K
Rossco 🤓 retweeted
Comunidad Biológica@Bio_comunidad·
Fertilization is not random, and the fastest does not win: in reality, the egg decides who wins.

For decades we were taught that the fertilization race was won by the fastest sperm. However, a study published in Proc Biol Sci shows how human reproduction actually works. Scientists analyzed the follicular fluids of 60 couples at Saint Mary’s Hospital in Manchester. They observed that the egg releases chemical substances to actively attract the sperm of specific men. Through these signals, the egg exercises its own biological selection to decide which cells get to approach.

The egg specifically seeks out cells with optimal genetic compatibility with its own biological makeup. This selection focuses on immune-system genes, to ensure that the offspring are healthier. Strikingly, this cellular preference does not always favour the consciously chosen partner. This chemical communication shows that female biology keeps evaluating options even after the intimate encounter.

Understanding this cellular process lets us pursue precise solutions for cases of infertility of unknown origin. Science continues to document the exact degree of biological interaction that really occurs during the reproductive process.

Video: Stazione Zoologica Anton Dohrn di Napoli
DOI: 10.1098/rspb.2020.0805
169
1.7K
7.3K
972.3K
Rossco 🤓@RuzzyCarhole·
Wow, this has sent my fuzzy morning brain off on a great tangent today.
Ihtesham Ali@ihtesham2005

An MIT professor who built the world’s first neural network machine said something about intelligence that nobody in Silicon Valley wants to admit.

His name was Marvin Minsky. He co-founded MIT’s artificial intelligence lab with John McCarthy in 1959. He built SNARC, the first randomly wired neural-network learning machine, in 1951 as a graduate student at Princeton. He won the Turing Award. He advised Stanley Kubrick on 2001: A Space Odyssey. Isaac Asimov, who was not a modest man, said Minsky was one of only two people he would admit were more intelligent than him.

In 1986, after decades of building machines that could think, Minsky published a book about something far more unsettling: how humans think, and why we are wrong about almost everything we believe about it. The book is called The Society of Mind. It has 270 essays, each one a page long. Together they build a single argument that most people, when they first encounter it, reject immediately because it is too uncomfortable to accept.

The argument is this: you do not have a mind. You have thousands of them. What you experience as a single, unified self making clear-headed decisions is not a thinker. It is an outcome. The result of hundreds of tiny, specialized, mostly mindless agents competing, negotiating, overriding, and occasionally cooperating with each other beneath the surface of your awareness. You do not decide things. You are what is left over after the arguing stops.

Minsky was precise about this. He wrote that the power of intelligence stems from our vast diversity, not from any single perfect principle. He called this the trick that makes us intelligent, and then immediately added: the trick is that there is no trick. There is no central processor. No ghost in the machine. No unified self sitting behind your eyes, calmly evaluating options and choosing rationally. There is only the parliament. And the parliament is always in session.

This reframing destroys the standard explanation for every failure of self-control. The reason you procrastinate is not laziness. It is that the agent in you that understands long-term consequences is losing an argument to the agent that wants comfort right now, and neither of those agents has a decisive vote. The reason you change your mind the moment someone pushes back is not weakness. It is that the social agent, the one that monitors status and belonging, just outweighed the analytical one. The reason willpower fails is not a character flaw. It is that you sent one small agent into a fight against dozens, and you called that discipline.

Minsky had a specific line that breaks this open completely. He said: in general, we are least aware of what our minds do best. The things you do with the most apparent ease, reading a face, walking through a crowded room, understanding a sentence, catching a ball, are not simple at all. They are the products of staggeringly complex agent networks that run so smoothly, so far below conscious access, that you experience them as effortless. The things that feel like work, the logical arguments, the deliberate choices, the careful plans, are actually the clumsy surface layer, the small fraction of mental activity you can observe at all. You have been taking credit for the wrong parts of your own intelligence.

The practical implication is the one that most productivity advice misses entirely. If your decisions are not made by a single rational self but by whichever coalition of agents happens to win the moment, then the game is not about training yourself to be more disciplined. The game is about designing the environment so that the right agents win without needing a fight. This is why removing your phone from the room works better than deciding not to check it. This is why writing one task on an index card works better than building a sophisticated system. This is why commitment devices beat motivation every time. You are not strengthening your will. You are changing the conditions of the argument so that the outcome you want becomes the path of least resistance.

Minsky spent his entire career building machines that could imitate intelligence. What he discovered in the process was that natural intelligence, the kind running inside every human brain on earth, is nothing like what we think it is. It is not a single flame burning in a single chamber. It is a city. Loud, chaotic, full of competing interests, with no mayor. The people who understand this stop trying to win the argument through force of will. They learn to build a better city instead.

0
0
0
12
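A toy illustration of the “parliament” framing in the post above (my construction; nothing like this code appears in Minsky’s book): each agent votes for an action with a context-dependent weight, and the “decision” is just whichever coalition carries the most weight. Changing the environment flips the winner with no willpower involved. All agent names and weights are invented.

```python
# Toy "society of mind": decisions as the outcome of competing agents
# rather than a single rational chooser. Changing the environment changes
# which coalition wins -- no willpower needed.

def decide(agents, context):
    votes = {}
    for agent in agents:
        action, weight = agent(context)
        votes[action] = votes.get(action, 0.0) + weight
    return max(votes, key=votes.get)     # whichever coalition is loudest

comfort = lambda ctx: ("check_phone", 3.0 if ctx["phone_in_room"] else 0.2)
planner = lambda ctx: ("write_report", 2.0)
social  = lambda ctx: ("check_phone", 1.5 if ctx["phone_in_room"] else 0.1)

agents = [comfort, planner, social]
print(decide(agents, {"phone_in_room": True}))    # -> check_phone
print(decide(agents, {"phone_in_room": False}))   # -> write_report
```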
Rossco 🤓 retweeted
CG@cgtwts·
> be Yann LeCun
> spend years building JEPA at Meta
> company focuses on LLaMA instead
> his idea stays complicated and unused
> robotics plans get dropped
> decides to leave and start AMI Labs
> builds a much simpler version from scratch
> trains it on normal hardware in just a few hours
> removes all the complicated tricks and keeps it simple

Results:
- uses 200x less data than similar systems
- makes decisions 50x faster
- runs on a single GPU instead of massive clusters
- simple to train
- understands movement, objects, and space
- can tell when something is physically impossible
- learns how the real world works without being explicitly taught
Aakash Gupta@aakashgupta

Earlier this year Yann LeCun left Meta because Mark Zuckerberg wouldn't bet the company on JEPA. Last week his group dropped the first JEPA that actually trains end-to-end from raw pixels. 15 million parameters. Single GPU. A few hours. The timing is not a coincidence.

For four years Meta has been the house that JEPA built. LeCun published the original paper from FAIR in 2022. I-JEPA and V-JEPA came out of his lab. The architecture was supposed to be the escape hatch from LLMs, the path to robots that actually learn physics instead of hallucinating about it. Every version shipped fragile. Stop-gradients. Exponential moving averages. Frozen pretrained encoders. Six or seven loss terms that had to be hand-tuned or the model collapsed into garbage representations.

Meta kept funding LLMs. Llama shipped. Llama scaled. Llama got beat by Qwen and DeepSeek. Zuck spent $14 billion to buy ScaleAI and install Alexandr Wang. The FAIR robotics group was dissolved. LeCun's research kept winning papers and losing the product roadmap. He left, started AMI Labs, and said publicly that LLMs were a dead end.

Now the paper. LeWorldModel. One regularizer replaces the entire pile of heuristics. Project the latent embeddings onto random directions, run a normality test, penalize deviation from Gaussian. The model cannot collapse because collapsed embeddings fail the test by construction. Hyperparameter search went from O(n^6) polynomial to O(log n) logarithmic. Six tunable knobs became one.

The downstream numbers are what should scare the robotics capex class. 200 times fewer tokens per observation than DINO-WM. Planning time drops from 47 seconds to 0.98 seconds per cycle. 48x faster at matching or beating foundation-model performance on Push-T and 3D cube control. The latent space probes cleanly for agent position, block velocity, end-effector pose. It correctly flags physically impossible events as surprising. It learned physics without being told physics existed.

Figure AI is valued at $39 billion. Tesla Optimus is mass-producing. World Labs raised $230 million to sell generative world models. Everyone in humanoid robotics is burning capital on foundation-model pipelines that plan in 47 seconds per cycle. LeCun's group just showed you can do it with 15 million parameters on a single GPU in a few hours.

This is the Xerox PARC pattern running again. Meta had the next architecture. Meta had the scientist. Meta dissolved the robotics team, passed on the productization, and watched the exit. Three months later the lab that was supposed to be Meta's publishes the result that resets the robotics cost structure. The paper is worth more than Alexandr Wang.

55
322
4.5K
847.2K
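A hedged sketch of the regularizer as the post describes it (my reading, in PyTorch; this is not the paper's code): project latents onto random unit directions, compare each 1-D projection against standard-normal quantiles, and penalize the deviation. The quantile-matching penalty is one plausible differentiable "normality test"; the actual LeWorldModel regularizer may differ.

```python
# Sketch of a Gaussianity regularizer: random 1-D projections of the latent
# batch are penalized for deviating from a standard normal. A collapsed
# batch (constant embeddings) cannot match the Gaussian spread, so it is
# penalized by construction. My interpretation, not the paper's code.
import torch

def gaussianity_penalty(z, num_dirs=64):
    """z: (batch, dim) latent embeddings. Small when random 1-D
    projections of z look standard-normal."""
    b, d = z.shape
    dirs = torch.randn(d, num_dirs, device=z.device)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)     # random unit directions
    proj = z @ dirs                                  # (batch, num_dirs)
    proj = (proj - proj.mean(0)) / (proj.std(0) + 1e-6)
    proj_sorted, _ = torch.sort(proj, dim=0)
    # Standard-normal quantiles at the empirical plotting positions
    p = (torch.arange(1, b + 1, device=z.device) - 0.5) / b
    target = torch.erfinv(2 * p - 1) * (2 ** 0.5)    # N(0,1) quantiles
    return ((proj_sorted - target[:, None]) ** 2).mean()

z = torch.randn(256, 32)            # healthy, spread-out embeddings
print(gaussianity_penalty(z).item())        # small
collapsed = torch.zeros(256, 32)    # collapsed embeddings
print(gaussianity_penalty(collapsed).item())  # large (~1.0)
```

On the collapsed batch every projection is constant, so its sorted values cannot match the spread of the Gaussian quantiles and the penalty stays large; that is one way to read the "collapse fails the test by construction" claim.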
Cosmos Archive@cosmosarcive·
In the heart of every atom, the electron doesn’t glide; it vanishes. Dr. Theresa Bullard on the quantum leap: that “gap” between orbitals isn’t empty. It’s the portal. The electron dissolves into the field of pure potential, then reappears elsewhere. No path. No travel. Just pure becoming. We chase the particles, the light, the matter… but the cosmos is whispering through the silence between. The emptiness that holds everything. This is the quiet engine of reality itself.
86
266
1.2K
82.9K
Rossco 🤓 retweeted
Mathematica@mathemetica·
Ferromagnetic spheres in liquid under an applied field B: each sphere picks up a moment m = (4πr³/3μ₀)χB. Chains form via the dipole–dipole energy U_dd = (μ₀/4πr³)[m₁·m₂ − 3(m₁·r̂)(m₂·r̂)]. The star patterns trace the divergence-free field lines of B (∇·B = 0) live. Raw magnetostatic self-assembly, visualized in real time exactly as the dipole math predicts.
9
65
373
33K
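A small worked example of the two formulas in the post above (my numbers; the susceptibility and field values are assumed for illustration): plugging them into NumPy shows why the spheres chain head-to-tail along the field, since that configuration has negative dipole–dipole energy while side-by-side is positive (repulsive).

```python
# Worked example: induced moment m = (4*pi*r^3 / (3*mu0)) * chi * B and
# dipole-dipole energy U_dd = (mu0 / 4*pi*r^3)[m1.m2 - 3(m1.rhat)(m2.rhat)]
# for two 1 mm spheres, 3 mm apart. chi and B are assumed values.
import numpy as np

mu0 = 4e-7 * np.pi                 # vacuum permeability (T*m/A)
a   = 0.5e-3                       # sphere radius: 1 mm diameter (m)
chi = 3.0                          # effective susceptibility (assumed)
B   = np.array([0.0, 0.0, 0.05])   # 50 mT applied field along z (T)

# Induced moment of each sphere (same spheres, same field)
m1 = m2 = (4 * np.pi * a**3 / (3 * mu0)) * chi * B

def dipole_energy(m1, m2, r_vec):
    """U_dd = (mu0 / 4*pi*r^3) [m1.m2 - 3 (m1.rhat)(m2.rhat)]"""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (mu0 / (4 * np.pi * r**3)) * (m1 @ m2 - 3 * (m1 @ rhat) * (m2 @ rhat))

# Head-to-tail along the field (chaining) vs side-by-side
print(dipole_energy(m1, m2, np.array([0, 0, 3e-3])))  # negative: attraction
print(dipole_energy(m1, m2, np.array([3e-3, 0, 0])))  # positive: repulsion
```

Head-to-tail alignment along B makes the bracket −2m² (attraction), which is exactly why the chains form along the field lines.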
Rob Grieves 🇦🇺@RobGrieves·
Honestly would have bought this over the Model Y L if Tesla sold one. Bring back the sports station wagon I say!
[image attached]
74
37
712
63.1K