Parsa
@Skillsets
1.6K posts

I was uncool before being uncool was cool. Ex-mathematician, product-centric data scientist in the tech space, ranch dressing connoisseur.

CA · Joined June 2011
511 Following · 500 Followers
Parsa@Skillsets·
@UsingLyft @mainey_maine Elmer Samuel Imes was born in Memphis and made pioneering contributions to IR spectroscopy
1 reply · 1 repost · 5 likes · 5.9K views
Maine@mainey_maine·
Memphis is rooted in so much Black history and social movements bruh. The way people speak on the South is just anti-intellectualism and anti-Blackness
119 replies · 1.2K reposts · 4.8K likes · 80.1K views
Parsa@Skillsets·
How much longer before we start actively searching for tools to make us less productive? At some point product makers and corporate leaders will have to realize that being able to do more tasks in less time doesn’t result in more tasks completed.
Box@Box

Codex just turned an upcoming meeting into a fully automated cross-platform workflow. Box. Gmail. Slack. It researches across all three, synthesizes the context, and delivers a pre-meeting brief without anyone lifting a finger. This is what personal productivity looks like when agentic automation does the work for you. See it in action.👇

0 replies · 0 reposts · 0 likes · 25 views
Parsa@Skillsets·
@DannyDrinksWine Is a movie like Bedazzled really deserving of a casting analysis?
2 replies · 1 repost · 18 likes · 4.6K views
DepressedBergman@DannyDrinksWine·
Harold Ramis on why he cast Elizabeth Hurley as the Devil in "Bedazzled" (2000) over Uma Thurman, Julianne Moore & Madonna: "I was told by Madonna’s agent that Madonna is the devil, which is why she’d be perfect for the role. But I was thinking of who was the baddest, most beautiful woman in Hollywood. And I came up with Elizabeth Hurley. Elizabeth is witty, worldly and wise. She’s very comfortable in her own skin. And I liked that Elizabeth is a very modern girl. She’s not going to turn up in a Jane Austen movie anytime soon." ("Hurley on Hugh: Devil is in the Details", The Morning Call, 2000)
66 replies · 220 reposts · 4.3K likes · 1.2M views
Parsa@Skillsets·
@SchmidhuberAI It is more accurate to say that JEPA is a modern, highly sophisticated descendant built upon the foundational ideas first articulated in PMAX
0 replies · 0 reposts · 6 likes · 2.8K views
Jürgen Schmidhuber@SchmidhuberAI·
Dr. LeCun's heavily promoted Joint Embedding Predictive Architecture (JEPA, 2022) [5] is the heart of his new company. However, the core ideas are not original to LeCun. Instead, JEPA is essentially identical to our 1992 Predictability Maximization system (PMAX) [1][14]. Details in reference [19], which contains many additional references.

Motivation of PMAX [1][14]: since details of inputs are often unpredictable from related inputs, two non-generative artificial neural networks interact as follows: one net tries to create a non-trivial, informative latent representation of its own input that is predictable from the latent representation of the other net’s input.

PMAX [1][14] is actually a whole family of methods. Consider the simplest instance in Sec. 2.2 of [1]: an auto-encoder net sees an input and represents it in its hidden units (its latent space). The other net sees a different but related input and learns to predict (from its own latent space) the auto-encoder's latent representation, which in turn tries to become more predictable, without giving up too much information about its own input, to prevent what's now called “collapse." See illustration 5.2 in Sec. 5.5 of [14] on the "extraction of predictable concepts."

The 1992 PMAX paper [1] discusses not only auto-encoders but also other techniques for encoding data. The experiments were conducted by my student Daniel Prelinger. The non-generative PMAX outperformed the generative IMAX [2] on a stereo vision task. The 2020 BYOL [10] is also closely related to PMAX. In 2026, @misovalko, leader of the BYOL team, praised PMAX and listed numerous similarities to much later work [19].

Note that the self-created “predictable classifications” in the title of [1] (and the so-called “outputs” of the entire system [1]) are typically INTERNAL "distributed representations” (as in the title of Sec. 4.2 of [1]). The 1992 PMAX paper [1] considers both symmetric and asymmetric nets. In the symmetric case, both nets are constrained to emit "equal (and therefore mutually predictable)" representations [1]. Sec. 4.2 on “finding predictable distributed representations” has an experiment with two weight-sharing auto-encoders which learn to represent in their latent space what their inputs have in common (see the cover image of this post).

Of course, back then compute was a million times more expensive, but the fundamental insights of "JEPA" were present, and LeCun has simply repackaged old ideas without citing them [5,6,19]. This is hardly the first time LeCun (or others writing about him) has exaggerated his significance by downplaying earlier work. He did NOT "co-invent deep learning" (as some know-nothing "AI influencers" have claimed) [11,13], he did NOT invent convolutional neural nets (CNNs) [12,6,13], NOR was he even the first to combine CNNs with backpropagation [12,13]. While he got awards for the inventions of other researchers whom he did not cite [6], he did not invent ANY of the key algorithms that underpin modern AI [5,6,19].

LeCun's recent pitch:
1. LLMs such as ChatGPT are insufficient for AGI (which has been obvious to experts in AI & decision making, and is something he once derided @GaryMarcus for pointing out [17]).
2. Neural AIs need what I baptized a neural "world model" in 1990 [8][15] (earlier, less general neural nets of this kind, such as those by Paul Werbos (1987) and others [8], weren't called "world models," although the basic concept itself is ancient [8]).
3. The world model should learn to predict (in non-generative "JEPA" fashion [5]) higher-level predictable abstractions instead of raw pixels: that's the essence of our 1992 PMAX [1][14].

Astonishingly, PMAX or "JEPA" seems to be the unique selling proposition of LeCun's 2026 company on world model-based AI in the physical world, which is apparently based on what we published over 3 decades ago [1,5,6,7,8,13,14], and modeled after our 2014 company on world model-based AGI in the physical world [8]. In short, little if anything in JEPA is new [19]. But then, the fact that LeCun would repackage old ideas and present them as his own clearly isn't new either [5,6,18,19].

FOOTNOTES
1. Note that PMAX is NOT the 1991 adversarial Predictability MINimization (PMIN) [3,4]. However, PMAX may use PMIN as a submodule to create informative latent representations [1] (Sec. 2.4) and to prevent what's now called “collapse." See the illustration on page 9 of [1].
2. Note that the 1991 PMIN [3] also predicts parts of latent space from other parts. However, PMIN's goal is to REMOVE mutual predictability, to obtain maximally disentangled latent representations called factorial codes. PMIN by itself may use the auto-encoder principle in addition to its latent space predictor [3].
3. Neither PMAX nor PMIN was my first non-generative method for predicting latent space, which was published in 1991 in the context of neural net distillation [9]. See also [5-8].
4. While the cognoscenti agree that LLMs are insufficient for AGI, JEPA is, too. We should know: we have had it for over 3 decades under the name PMAX! Additional techniques are required to achieve AGI, e.g., meta-learning, artificial curiosity and creativity, efficient planning with world models, and others [16].

REFERENCES (easy to find on the web):
[1] J. Schmidhuber (JS) & D. Prelinger (1993). Discovering predictable classifications. Neural Computation, 5(4):625-635. Based on TR CU-CS-626-92 (1992): people.idsia.ch/~juergen/predm…
[2] S. Becker & G. E. Hinton (1989). Spatial coherence as an internal teacher for a neural network. TR CRG-TR-89-7, Dept. of CS, U. Toronto.
[3] JS (1992). Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879. Based on TR CU-CS-565-91, 1991.
[4] JS, M. Eldracher & B. Foltin (1996). Semilinear predictability minimization produces well-known feature detectors. Neural Computation, 8(4):773-786.
[5] JS (2022-23). LeCun's 2022 paper on autonomous machine intelligence rehashes but does not cite essential work of 1990-2015.
[6] JS (2023-25). How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23.
[7] JS (2026). Simple but powerful ways of using world models and their latent space. Opening keynote, World Modeling Workshop, 4-6 Feb 2026, Mila - Quebec AI Institute.
[8] JS (2026). The Neural World Model Boom. Technical Note IDSIA-2-26.
[9] JS (1991). Neural sequence chunkers. TR FKI-148-91, TUM, April 1991. (See also Technical Note IDSIA-12-25: Who invented knowledge distillation with artificial neural networks?)
[10] J. Grill et al. (2020). Bootstrap your own latent: A "new" approach to self-supervised learning. arXiv:2006.07733.
[11] JS (2025). Who invented deep learning? Technical Note IDSIA-16-25.
[12] JS (2025). Who invented convolutional neural networks? Technical Note IDSIA-17-25.
[13] JS (2022-25). Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, arXiv:2212.11279.
[14] JS (1993). Network architectures, objective functions, and chain rule. Habilitation thesis, TUM. See Sec. 5.5 on "Vorhersagbarkeitsmaximierung" (Predictability Maximization).
[15] JS (1990). Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. Technical Report FKI-126-90, TUM.
[16] JS (1990-2026). AI Blog.
[17] @GaryMarcus (2024). Open letter responding to @ylecun: A memo for future intellectual historians. Substack, June 2024.
[18] G. Marcus (2025). The False Glorification of @ylecun: Don't believe everything you read. Substack, Nov 2025.
[19] JS (2026). Who invented JEPA? Technical Note IDSIA-3-22, IDSIA, Switzerland, March 2026. people.idsia.ch/~juergen/who-i…
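To make the PMAX mechanism described in the post concrete, here is a minimal toy sketch. It is an illustrative reconstruction, not code from [1]: the layer sizes, mean-squared losses, and the synthetic "related views" are all assumptions. An auto-encoder keeps its latent code informative via reconstruction (preventing collapse), while a second net learns to predict that code from a related input; because gradients from the prediction loss also flow into the encoder, the code itself is pushed to become more predictable.

```python
import torch
import torch.nn as nn

# Toy PMAX-style setup: x2 is a different but related view of x1
# (a stand-in for, e.g., the stereo-vision pairs mentioned in the post).
dim_in, dim_z = 32, 8
encoder = nn.Sequential(nn.Linear(dim_in, 64), nn.Tanh(), nn.Linear(64, dim_z))
decoder = nn.Sequential(nn.Linear(dim_z, 64), nn.Tanh(), nn.Linear(64, dim_in))
predictor = nn.Sequential(nn.Linear(dim_in, 64), nn.Tanh(), nn.Linear(64, dim_z))

params = [*encoder.parameters(), *decoder.parameters(), *predictor.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(500):
    x1 = torch.randn(128, dim_in)
    x2 = x1 + 0.1 * torch.randn_like(x1)      # related view of the same "scene"

    z1 = encoder(x1)                           # latent code of the first view
    recon = ((decoder(z1) - x1) ** 2).mean()   # keeps the code informative (anti-collapse)
    pred = ((predictor(x2) - z1) ** 2).mean()  # second net predicts the code; gradients
                                               # also shape z1 toward predictability
    loss = recon + pred
    opt.zero_grad()
    loss.backward()
    opt.step()
```

This follows the simpler auto-encoder variant of Sec. 2.2 of [1]; per the footnotes above, Sec. 2.4 would instead use a predictability-minimization submodule to keep the code informative.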
Jürgen Schmidhuber tweet media
80 replies · 165 reposts · 1.6K likes · 394K views
Parsa@Skillsets·
@Rothmus It’s alright buddy, let’s go to Dave & Busters
0 replies · 0 reposts · 0 likes · 7 views
Parsa@Skillsets·
@Rothmus You’re not that bright are you?
1 reply · 0 reposts · 0 likes · 16 views
Parsa retweeted
Alan Smith@AlanJLSmith·
Have you watched The Manosphere with Louis Theroux on Netflix? As the Dad of a 17 year old son, I think @jimmycarr is spot on. What do you think?
325 replies · 2.4K reposts · 16.2K likes · 1.2M views
Parsa@Skillsets·
Did you hear about the quadruple amputee cornhole player accused of murder? * bracing myself for a really good punchline * "No." "Here's the link." fox5dc.com/news/dayton-we…
0 replies · 0 reposts · 1 like · 62 views
PoIiMath@politicalmath·
Maggie Kang is pretty young so she probably doesn't remember when Parasite (a Korean film that, unlike her film, used Korean dialogue) won Best Picture in 2020
Variety@Variety

The creator of #KPopDemonHunters, Maggie Kang, dedicates her #Oscar “to Koreans everywhere”: “I am so sorry that it took so long to see us in a movie like this, but it is here. And that means that the next generations don’t have to go longing.” (via ABC/AMPAS) variety.com/2026/film/news…

134 replies · 1.2K reposts · 19.5K likes · 485.1K views
Parsa@Skillsets·
@pvergadia Please don't use AI to write these posts, it is blatantly obvious. Just use your own words.
0 replies · 0 reposts · 2 likes · 59 views
Priyanka Vergadia@pvergadia·
🤯BREAKING: Alibaba just proved that AI Coding isn't taking your job, it's just writing the legacy code that will keep you employed fixing it for the next decade. 🤣

Passing a coding test once is easy. Maintaining that code for 8 months without it exploding? Apparently, it’s nearly impossible for AI.

Alibaba tested 18 AI agents on 100 real codebases over 233-day cycles. They didn't just look for "quick fixes"; they looked for long-term survival. The results were a bloodbath: 75% of models broke previously working code during maintenance. Only Claude Opus 4.5/4.6 maintained a >50% zero-regression rate. Every other model accumulated technical debt that compounded until the codebase collapsed.

We’ve been using "snapshot" benchmarks like HumanEval that only ask "Does it work right now?" The new SWE-CI benchmark asks: "Does it still work after 8 months of evolution?"

Most AI agents are "Quick-Fix Artists." They write brittle code that passes tests today but becomes a maintenance nightmare tomorrow. They aren't building software; they're building a house of cards.

The narrative just got honest: Most models can write code. Almost none can maintain it.
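For concreteness, here is a hypothetical sketch of the "zero-regression rate" idea cited above. The data structure and test IDs are my illustrative assumptions, not the SWE-CI benchmark's actual format: the point is only that a maintenance cycle counts as clean when every test that passed before the agent's change still passes after it.

```python
from dataclasses import dataclass

@dataclass
class Cycle:
    passed_before: set[str]  # tests green before the agent's change (hypothetical IDs)
    passed_after: set[str]   # tests green after the agent's change

def zero_regression_rate(cycles: list[Cycle]) -> float:
    """Fraction of maintenance cycles that broke no previously passing test."""
    clean = sum(1 for c in cycles if c.passed_before <= c.passed_after)
    return clean / len(cycles)

# Toy example: one clean cycle, one cycle where a previously green test regressed.
history = [
    Cycle({"t1", "t2"}, {"t1", "t2", "t3"}),  # no regressions; a new test even passes
    Cycle({"t1", "t2"}, {"t1"}),              # t2 broke -> regression
]
print(zero_regression_rate(history))  # 0.5
```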
Priyanka Vergadia tweet media
489 replies · 1.9K reposts · 9.4K likes · 1.7M views
Parsa@Skillsets·
@Orthon_Spaceman Once his anonymity perished, he became just another graffiti artist.
0 replies · 0 reposts · 0 likes · 15 views
Parsa@Skillsets·
@Orthon_Spaceman Banksy's mystique came primarily from his anonymity and medium rather than the quality or subject of his art. His messages were cliché, but the thrill of finding a new Banksy in London, made by an artist who sought no fame, was intoxicating for ordinary people and the press.
1 reply · 0 reposts · 2 likes · 120 views
Orthon von Bismarck@Orthon_Spaceman·
Now that Banksy's identity is revealed, I once again reflect on how suddenly and rapidly this character, who for a while was the epitome of cool and underground to a large number of people, fell into some sort of general contempt.
47 replies · 66 reposts · 3.4K likes · 215.1K views
Parsa@Skillsets·
Explaining to my daughter how Wayne's World was a 90s parody of music-focused content creators
0 replies · 0 reposts · 0 likes · 17 views
Parsa@Skillsets·
@FT What image is this? Iranians don't dress like that.
0 replies · 0 reposts · 1 like · 871 views
Financial Times@FT·
Breaking news: European countries including France have opened talks with Tehran seeking to negotiate a deal to guarantee safe passage for their ships through the Strait of Hormuz, according to people briefed on the efforts. ft.trib.al/FRDLNtv
Financial Times tweet media
388 replies · 3K reposts · 7.1K likes · 1.6M views