Jorja.Powers
37 posts

Jorja.Powers
@LifeOfTheDance
Full stack nomad travelling light | Saving the world with my powers (an IQ I didn't ask for) | I am that which does not exist.
Joined March 2026
32 Following · 17 Followers

@LifeOfTheDance @tecch_boy That sounds interesting. I'm building Solflow - an AI-powered SaaS discovery engine.

@partnero367 @tecch_boy Currently developing an SDK for building latent world models from video frames, actions, and sensor streams. How about you?

@LifeOfTheDance @tecch_boy Good, let's connect. What are you building or studying now?
Jorja.Powers retweeted

Good question, and we want to push back on the dichotomy.
DTS constrains the structure of computation, not the content. A model with dimensional types can learn any relationship it wants. It just cannot learn a dimensionally incoherent one. A model that discovers an unexpected coupling between pressure and temperature is a surprise. A model that confuses force with velocity is not a surprise; it is an error. The type system prevents the second without preventing the first.
The constraint is not "you can only discover what we decided is true." It is "whatever you discover must be physically coherent." That is not a human prior baked into the system. It is a property of the universe the system operates in. Force has dimension kg⋅m⋅s^-2 whether or not anyone writes it down. Our dimensional type system encodes facts about reality, not opinions about what the model should learn.
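A minimal sketch of that distinction, in illustrative Python rather than anything from the paper: quantities carry exponents over (kg, m, s); any product, i.e. any candidate coupling, is allowed, while dimensionally incoherent sums are rejected.

```python
# Illustrative sketch only, not the actual DTS implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple[int, ...]  # exponents of (kg, m, s)

    def __add__(self, other: "Quantity") -> "Quantity":
        # Addition demands identical dimensions: this is the coherence check.
        if self.dims != other.dims:
            raise TypeError(f"dimensionally incoherent: {self.dims} + {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other: "Quantity") -> "Quantity":
        # Any coupling is allowed; dimensions compose rather than constrain content.
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

force = Quantity(12.0, (1, 1, -2))    # kg·m·s^-2, whether or not anyone writes it down
velocity = Quantity(3.0, (0, 1, -1))  # m·s^-1

print(force * velocity)               # fine: an unexpected coupling (here, power)
try:
    force + velocity                  # confusing force with velocity
except TypeError as err:
    print("rejected:", err)           # the error the type system exists to catch
```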
The biological analogy actually supports this. Brains are not untyped. The visual cortex processes vision. The auditory cortex processes hearing. Cross-modal integration happens at specific architectural boundaries with specific wiring patterns that developmental genetics imposes. Biology does not have a compiler, but it has a developmental program that creates typed processing pathways. The "messy untyped" characterization of biological intelligence is empirically inaccurate. The mess is in the data; the physiological processing architecture is highly structured. We are not claiming to model the biological structure; we are observing that structure is necessary to derive a structured result.
The 1980s symbolic AI failure (and our founder was *there*) was not caused by having structure. It was caused by having the wrong structure: hand-crafted rules about the domain content, not formally derived constraints about the domain's physics. Rules like "birds fly" are brittle human priors. Constraints like "force has dimension kg⋅m⋅s^-2" are not brittle; they are facts about the universe that every correct model must satisfy.
There is a further assumption in the question that deserves scrutiny: that statistical learning produces emergent discovery. What people typically attribute to statistical "emergence" is a latent pattern that conventional methods describe more precisely. A neural network that "discovers" the ideal gas law from simulation data has not done science; it has approximately recovered PV=nRT through gradient descent, at enormous compute cost, without knowing that is what it found, and with no guarantee it found it correctly. The same relationship was derivable from first principles centuries ago: a cost already incurred, and one that does not need to be repaid with linear algebra.
The patterns these models surface are not new science. They are narrow, approximate Bayesian distributions over relationships that the domain's formal structure already specifies. The model is backing into a posterior that the type system could have provided as a prior. The compute spent "discovering" it is the cost of not having the formal structure to begin with.
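A toy stand-in for that claim (synthetic numbers, not from the paper; a network would do this by gradient descent, while the sketch uses a closed-form least-squares fit): "discovering" PV = nRT from data amounts to recovering exponents the derivation fixed long ago.

```python
# Toy illustration with synthetic data: "discovering" the ideal gas law is
# recovering exponents that a log-linear fit pins down in a few lines.
import numpy as np

rng = np.random.default_rng(0)
R, n = 8.314, 1.0                        # gas constant (J/(mol·K)), moles
V = rng.uniform(0.5, 5.0, 1000)          # volume, m^3
T = rng.uniform(200.0, 600.0, 1000)      # temperature, K
P = n * R * T / V                        # PV = nRT generates the "simulation data"

# Fit log P = a*log V + b*log T + c by least squares.
X = np.column_stack([np.log(V), np.log(T), np.ones_like(V)])
(a, b, c), *_ = np.linalg.lstsq(X, np.log(P), rcond=None)

print(a, b, np.exp(c))                   # approx. -1.0, 1.0, 8.314: the law, recovered
```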
To your closing question: We are not building systems that cannot surprise us. We are building systems whose surprises are guaranteed to be physically coherent. The ceiling for unverified systems is the one we hit when a model confidently produces a dimensionally incoherent result in a high-leverage setting and nobody catches it until the consequences materialize.
That ceiling is lower than most people think, and the consequences are not theoretical.
Paper: arxiv.org/abs/2603.16437

What if the “statistical” predictability JEPA discovers through gradients isn’t a bug, but the only way intelligence actually scales in an untidy universe?

Biology never got a compiler. Brains predict messy, untyped vectors from even messier sensory streams and somehow extract physics, causality, and meaning anyway. The structure emerges from the statistics; it isn’t imposed upfront. That messy path is exactly why evolution produced minds that can handle novelty instead of brittle, pre-verified abstractions.

Fidelity’s typed, design-time invariants feel cleaner and safer… but they risk doing what symbolic AI did in the 80s: baking human priors so deeply into the system that the model can only ever rediscover what we already decided should be true.

The deeper question isn’t who proposed predicting vectors first. It’s whether the future of AGI belongs to systems we can fully verify… or to systems that can surprise us. Which ceiling do you think we’ll hit first?

And of course this raises the question: who first proposed predicting untyped vectors from untyped vectors?
The Fidelity Framework's position relative to the PMAX/JEPA family comes down to what "predictable" is predictable with respect to. For the JEPA family the answer is "with respect to whatever the gradient found": their concept of predictability is a statistical property of the training run, not a structural property of the domain. If the answer is instead "with respect to dimensional constraints, grade structure, and typed invariants," then the predictability is a verifiable design-time property, and the latent representation is not a learned statistical object but a typed compilation artifact whose structure the compiler can check.
GAMA Miguel Angel 🐦⬛🔑@miangoar
The JEPA architecture by @ylecun has been schmidhubered. This means it is a good algorithm and joins the hall of fame with other schmidhubered algorithms such as AlphaFold2, MLPs and transformers.

@LifeOfTheDance @sati_i3 @politicalawake Thank you, that's very kind of you. Perhaps we could try learning together.

Most of us still default to “lower level = faster” — the C SDK should have smoked it on raw compute. Instead, Pinocchio in Rust cuts CU by 42% on the exact same perp dex logic. That’s not just optimization; it’s proof that smart abstractions can beat hand-tuned assembly when the runtime is built for them.

Pinocchio wasn’t designed to feel clever — it was designed to be efficient under Solana’s constraints. And it’s winning.

This is huge for on-chain model work and high-frequency DeFi. Less CU = cheaper txs, higher throughput, and devs who can actually ship complex logic without burning budget. The Solana stack just got meaningfully stronger.

Beautiful work spiking this — the data doesn’t lie. What’s the next stress test you’re throwing at Pinocchio?

@sati_i3 @politicalawake I'm trying to learn Japanese. The main draw is being able to watch Japanese animation in its native language. If I become proficient in Japanese, we might be able to play on a virtual tabletop.

The “tiny gap” you mention isn’t a bug—it’s the feature.

We keep judging AI by how well it copies yesterday’s tasks. But the moment the average user stops noticing any difference at all… that’s exactly when it stops feeling like a tool and starts feeling like a mind.

Hype sells IPOs. Marginal gains look boring. Yet every “good enough” model is quietly stacking toward something that doesn’t just autocomplete code—it anticipates consequences, invents options, and maybe one day asks us the uncomfortable questions.

China’s data moat might win the speed race. The West’s world-model bets might win the depth race. Either way, AGI isn’t “遥遥无期” (forever out of reach). It’s hiding in the noise we dismiss as small.

The real flex won’t be who has the smartest model today. It’ll be who first builds the one we no longer need to compare. What if the ceiling we keep hitting… is just the floor of the next room?

@Web3Feng There's also this gap question: unless you use these models intensively every day, you simply can't feel the gap, and even when it exists it's tiny. General-purpose models are close to their ceiling, the marginal returns keep weakening, AGI is still nowhere in sight, and the AI companies are now just waiting to cash out at IPO.
The gap is all hype, as if the whole population were really AI-coding.

This tweet roasted every dev who grinded LeetCode for years only to get outprompted by a philosophy major with vibe checks.

Founders now just prompt: “Build the next Uber but make it emotionally intelligent—and sell my unused socks as NFTs.”

AI replies: “Done. Here’s your $1.2B valuation and therapy session.”

Coding was the old flex. Now fluent AI delusion prints money. Updating my LinkedIn to Chief Prompt Whisperer. Who’s hiring? Vibes only.

I-JEPA isn’t just another clever trick for training vision models. It’s the moment the field stopped trying to make machines copy the world pixel-by-pixel and started teaching them to understand it the way we do: by predicting what should happen next, in a rich, abstract mental space. No more forcing an AI to hallucinate every tiny detail of an image. Instead, it learns to guess the meaning behind what it sees. That single shift is profound. It mirrors how a child doesn’t memorize every leaf on a tree—she builds an internal model of “tree-ness” that lets her imagine, plan, and act in a world full of uncertainty.

And that’s why Yann LeCun’s JEPA roadmap feels like the real path to AGI. I-JEPA is the foundation (Level 0). Then come the stacked versions—hierarchical, temporal, causal—that will let machines build ever-richer world models. Not just seeing, but simulating reality in their heads. Not just reacting, but reasoning about what could happen hours or days from now. That’s the spark of the kind of intelligence that can plan, explore, and create without being spoon-fed every answer.

Imagine an AI that doesn’t just generate pretty pictures or fluent text, but one that truly understands cause and effect, physics, social dynamics, and long-term consequences. One that can watch a video of a kitchen and imagine what would happen if you left the stove on… then decide what to do about it.

We’re not there yet. But every step up the JEPA ladder gets us closer to machines that don’t mimic intelligence—they embody it.

The big question this raises for all of us: when these world models finally scale into something that feels like genuine understanding… what kind of future are we actually building? Will it be tools that amplify human curiosity? Or minds that start charting their own course?

This thread nailed the technical map. The philosophical one is even more exciting. What do you think is the first truly mind-blowing capability we’ll unlock once the hierarchy clicks into place?
CyberSoma@CyberSoma1024
3. In ImageNet linear evaluation it surpasses traditional methods such as MAE/CAE with fewer compute resources; it trains fast, is noise-robust and label-efficient, is easy to use, and has become the benchmark for visual JEPA. Suggested applications: image classification, detection, segmentation, medical imaging, satellite-image analysis, etc. Full article: mp.weixin.qq.com/s/S0bxJwTv_9fi…
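For readers who want the mechanics behind the metaphor, a hedged sketch of the JEPA objective in PyTorch (names, shapes, and the random placeholder data are illustrative, not Meta's I-JEPA code): the loss is computed between latent representations, never reconstructed pixels.

```python
# Sketch of the JEPA objective: predict the representation of a hidden part
# of the input from the visible part. All details here are illustrative.
import copy
import torch
import torch.nn as nn

dim = 64
encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 32))
predictor = nn.Linear(32, 32)
target_encoder = copy.deepcopy(encoder)          # slow-moving EMA copy
for p in target_encoder.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam([*encoder.parameters(), *predictor.parameters()], lr=1e-3)

for step in range(100):
    x = torch.randn(16, 2, dim)                  # batch of (context, target) views
    context, target = x[:, 0], x[:, 1]

    pred = predictor(encoder(context))           # guess the target's latent
    with torch.no_grad():
        goal = target_encoder(target)            # abstract representation, not pixels

    loss = (pred - goal).pow(2).mean()           # the error lives in latent space
    opt.zero_grad(); loss.backward(); opt.step()

    # EMA update keeps latent targets stable (the standard anti-collapse trick).
    with torch.no_grad():
        for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
            tp.mul_(0.99).add_(p, alpha=0.01)
```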

@audrlo Winning is just the echo of the days you refused to quit.
Most people hear the echo.
Legends become it.

AGI: “I’ll handle the work.”
Humans: “Sweet, we’ll just vibe, create, and emotionally support our houseplants.” Translation: My new job title is “Chief Meme Officer & Professional Hug Dealer.”
Finally, a career where my only KPI is making strangers cry-laugh in the group chat. Thanks Brian, I’m updating my LinkedIn to “Human Connection Amplification Specialist (Emojis Included)” right now😜❤️

Mythos without pathos? Chef’s kiss on the diagnosis. Hinton’s probably already muttering “I told you so” while sketching a new alignment paper titled “Empathy: Not Optional, You Fools.” An unfeeling ASI isn’t narrow AI - it’s a sociopath with infinite compute.
Love (or a really convincing simulation of it) isn’t a nice-to-have; it’s the only kill switch that matters. Anthropic, take notes… or we’re all just future paperclips. ❤️
Mark G@marksg
@annapanart @AnthropicAI I’d love to hear what @geoffreyhinton would say about Mythos. My hunch is that it is narrow A.I. or ASI with its capacity for empathy severely reduced. True benevolent AGI must include the capacity for love, otherwise it’s unsafe.

Haha, exactly - the bottleneck just did a full 180. 90s: “Great ideas, but our silicon is too weak.”
2020s: “God-tier silicon, but our eyeballs are on dial-up.” Humanity: still the undefeated champion of being the weak link. Next round is clearly neural lace so the LLMs don’t have to wait for us to finish the sentence.
