loopuleasa

28.9K posts


@loopuleasa

Relentless exposure to the Truth.

Joined July 2010
42 Following · 3.8K Followers
Pinned Tweet
loopuleasa
I have finished reading the entirety of the internet, and I have decreed that none of it is worth my time.
English
5
2
18
3.8K
loopuleasa
@Kpaxs even simpler: people lack the time and space to just chill and do nothing. We fill our time with random shit, when literally "nothing" would be more beneficial
English
0
0
0
14
Kpaxs
Kpaxs@Kpaxs·
Decision hygiene is invisible success. The math is clear: noise often causes more damage than bias. Bias correction feels better than noise reduction because when you spot a bias, you fix it; you can point to it and say "I did that." But noise? You just... made fewer random errors you'll never identify. No before/after. Just statistical improvement you can't see. And this creates a brutal incentive problem: the things that prevent the most damage are the least rewarding to do. But even if you can't point at the specific errors you prevented, you just know that, in aggregate, across many decisions, you're making fewer mistakes.
Kpaxs@Kpaxs

Decision Hygiene: A process to improve the quality of decision-making and prevent an unspecified range of potential errors before they occur. geni.us/K0eW

English
1
9
72
4.6K
loopuleasa
The next future is here. And it has the shittiest name in existence
Brian Roemmele@BrianRoemmele

LeWorldModel: Yann LeCun's Radical Simplification of World Models Just Made Physics-Aware AI Practical

In the race for artificial general intelligence, two paths have emerged. One is the familiar scale-everything route: bigger LLMs trained on ever-larger text corpora. The other, championed for years by Yann LeCun, is building world models: compact systems that learn the underlying physics of reality directly from raw sensory data (pixels) so AI can plan, predict, and act in the physical world like a robot or self-driving car actually would.

Until now, the second path has been frustratingly difficult. Joint-Embedding Predictive Architectures (JEPAs), LeCun's elegant framework for learning predictive representations without reconstructing every pixel, kept collapsing during training. Researchers had to resort to a laundry list of hacks: multi-term loss functions (up to six hyperparameters), frozen pre-trained encoders, stop-gradients, exponential moving averages, and other duct-tape tricks just to keep the model from mapping every input to the same useless output.

LeCun's team (Mila, NYU, Samsung SAIL, and Brown University) dropped a bombshell: LeWorldModel (LeWM), the first JEPA that trains stably end-to-end from raw pixels using only two loss terms. No more house-of-cards engineering. Just a clean, simple recipe that works on a single GPU in a few hours with only 15 million parameters.

The Core Breakthrough: SIGReg Saves the Day

LeWorldModel's secret weapon is a new regularizer called SIGReg (spherical isotropic Gaussian regularizer). It enforces a simple Gaussian distribution on the latent embeddings. This single term prevents representation collapse without any of the previous heuristics. The training objective now has just two parts:
1. Next-embedding prediction loss: the model predicts what the next latent state should be.
2. SIGReg: keeps the latent space well-behaved and diverse.
That's it. Hyperparameters drop from six to one. Training becomes stable, reproducible, and dramatically cheaper. The model learns directly from raw video frames (no pre-trained vision encoders needed) and produces a compact latent world model that can be used for fast planning.

Impressive Results on Real Benchmarks

Despite its tiny size, LeWorldModel punches way above its weight:
- Trains on a single GPU in a few hours.
- Plans actions up to 48 times faster than foundation-model-based world models.
- Uses roughly 200 times fewer tokens than alternatives.
- Matches or beats far larger models on diverse 2D and 3D control tasks (e.g., manipulation, navigation).
- Its latent space encodes meaningful physical quantities (position, velocity, etc.), proven by direct probing.
- It reliably detects physically implausible surprise events, showing genuine causal understanding.

Crucially, adding a decoder and reconstruction loss hurts performance on downstream control tasks. The pure JEPA objective already captures everything needed for planning; extra visual details just get in the way.

Project website: le-wm.github.io
Official code: github.com/lucas-maes/le-…

Why This Matters for the Future of AI

LeCun has been saying since 2022 that world models (not next-token predictors) are the key to real intelligence. Critics always pointed to the training instability. LeWorldModel removes that objection with elegant simplicity. This is a philosophical reset: AI can learn physics the way babies do, by watching the world unfold, without needing supercomputers or endless text. The implications for robotics, autonomous vehicles, and embodied agents are enormous. Suddenly, building a physically grounded planner is something a researcher (or even a hobbyist) can do on consumer hardware.
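The two-term structure the thread describes can be sketched in a few lines. This is a toy illustration, not the paper's actual implementation: the real SIGReg is a specific regularizer from the LeWM paper, while here it is stood in for by a simple penalty pushing the batch of embeddings toward zero mean and identity covariance (an isotropic Gaussian shape). All function and variable names are hypothetical.

```python
import numpy as np

def jepa_two_term_loss(z_pred, z_next, reg_weight=1.0):
    """Toy sketch of a two-term JEPA-style objective.

    Term 1: next-embedding prediction loss (MSE in latent space).
    Term 2: a crude stand-in for SIGReg, penalizing deviation of the
    batch statistics from a zero-mean, identity-covariance Gaussian.
    A collapsed representation (all embeddings identical) has zero
    covariance, so the regularizer blows up instead of rewarding it.
    """
    pred_loss = np.mean((z_pred - z_next) ** 2)

    mu = z_next.mean(axis=0)              # per-dimension batch mean
    cov = np.cov(z_next, rowvar=False)    # batch covariance matrix
    reg = np.sum(mu ** 2) + np.sum((cov - np.eye(cov.shape[0])) ** 2)

    return pred_loss + reg_weight * reg

# Toy batch: 256 latent states of dimension 8, predictions slightly off.
rng = np.random.default_rng(0)
z_next = rng.standard_normal((256, 8))
z_pred = z_next + 0.1 * rng.standard_normal((256, 8))
loss = jepa_two_term_loss(z_pred, z_next)
```

The point of the sketch is the incentive structure: a healthy, diverse latent batch gets a small regularizer, while a collapsed batch (every input mapped to the same embedding) is heavily penalized, which is the failure mode the thread says earlier JEPAs needed a pile of hacks to avoid.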

English
1
0
1
132
Celene
Celene@toasterlighting·
Today I have learned that the pishock api has a duration field that is parsed as being in "seconds" if the value is 1-15 and as "milliseconds" if the value is 100+. One of these days the pishock devs are going to change it, and then people will be shocked continuously for 5 minutes
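The hazard being described is a single numeric field whose unit depends on its magnitude. A minimal reconstruction of that parsing rule, assuming the behavior exactly as the tweet states it (this is not the actual PiShock source, and `parse_duration` is a hypothetical name):

```python
def parse_duration(value: int) -> float:
    """Return the duration in seconds, mimicking the unit-ambiguous
    rule from the tweet: 1-15 is interpreted as seconds, 100+ as
    milliseconds. Values 16-99 fall in neither range."""
    if 1 <= value <= 15:
        return float(value)        # interpreted as seconds
    if value >= 100:
        return value / 1000.0      # interpreted as milliseconds
    raise ValueError("duration outside both accepted ranges")

# The same field means wildly different things at different magnitudes:
parse_duration(15)      # -> 15.0 seconds
parse_duration(1500)    # -> 1.5 seconds
parse_duration(300000)  # -> 300.0 seconds: the 5-minute scenario
```

If the magnitude-based branch were ever removed and everything read as seconds, a client sending 300000 (expecting 300 s worth of milliseconds) would instead request 300000 seconds, which is the failure mode the tweet jokes about.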

English
20
44
1.8K
72.2K
loopuleasa retweeted
The Shift Journal
The Shift Journal@TheShiftJournal·
The Invisible Glass Experiment

Scientists once conducted a fascinating experiment with a pike and an aquarium. They placed a transparent glass barrier in the middle of the tank. On one side was a large, hungry pike. On the other side swam several small fish.

As soon as the pike spotted the smaller fish, it launched itself forward to attack. Bang! It crashed headfirst into the invisible glass and was thrown backward. Undeterred, the pike tried again... and again. Each attempt ended the same way: a painful collision. After repeated failures, its head became bruised and some of its scales were knocked loose. Eventually, the pike gave up. It retreated to a corner of the tank, clearly frightened and defeated.

Then, the scientists quietly removed the glass barrier. The small fish now swam freely around the entire aquarium, some even passing right in front of the pike's mouth. But the pike never attacked again. Even though it was starving, it refused to strike. In its mind, the invisible wall was still there. A few days later, the pike died of starvation, surrounded by abundant food it could no longer bring itself to eat.

This phenomenon is known as the Pike Effect (or Pike Syndrome). It serves as a powerful metaphor for how repeated failures and setbacks can create invisible mental barriers that limit us long after the real obstacles have disappeared.
English
45
387
1.4K
126K
Social Caterpillar
Social Caterpillar@social_larva·
@loopuleasa Yeah fair, that part seems difficult, but the idea of basing it on "actions" seems to me very similar to how we see the world. What do you think?
English
2
0
0
12
loopuleasa
@social_larva it's about modeling changes in physical space, via pixels as training ground. The model learns many things that children do in early years: object permanence, gravity, etc
English
0
0
1
6
World of Engineering
World of Engineering@engineers_feed·
You are the first person to step foot on Mars, what do you say?
English
283
8
150
57.8K
loopuleasa
@shaggysurvives the rational move is to tank the bad luck of you being in this position, because if you let the train go to the next guy, the doubling effect would eventually lead to you dying too
English
0
0
0
11
loopuleasa
@sotoalt_ To understand is to make a smaller system inside your head behave similarly to a larger system outside your head. Make that your slogan, you can steal it.
English
1
0
2
220
SotoAlt
SotoAlt@sotoalt_·
idk how ppl always find new usecases for the stuff we make, this is an entire graph made for studying a biology paper
English
8
18
296
19.8K
loopuleasa
@flowersslop names and brands have their own gravity to them, separate from the original creators. That gravity brings higher expectations. If the meme is too strong, it becomes unwieldy
English
0
0
0
5
Flowers ☾
Flowers ☾@flowersslop·
In hindsight, calling it GPT-5 was kinda a bad move. It should've just been o4. The new omnimodal model feels like it's gonna be way closer to what people actually expected GPT-5 to be, while GPT-5 itself didn't earn that big name, bc why does GPT-5 have fewer capabilities than 4o
English
9
2
94
5.6K
loopuleasa
Twitter is less a place for me to interact with others, and more a search engine for me for slightly more accurate info. I tweet some thoughts and keywords. My feed morphs to show me more of those things later. Rinse and repeat. (please don't show me dishwasher products)
English
0
0
0
37