Eteims
@Eteims1
443 posts
Self-proclaimed software engineer...

Rivers State, Nigeria · Joined November 2019
309 Following · 74 Followers
Eteims@Eteims1·
@Pushparaj_k01 @maxi_moxa @sama @grok His definition of democratizing AI is different from what you expect. By democratizing, he means giving people access to these models, either via ChatGPT or the API, not by open-sourcing them.
0 · 0 · 1 · 43
Pushparaj Kamaraj@Pushparaj_k01·
@maxi_moxa @sama @grok Good to see @sama talking about democratizing AI, giving a ring of power to the people rather than to private companies. How much will be implemented, God only knows.
2 · 0 · 1 · 8.9K
Eteims@Eteims1·
@anketasocial @craigzLiszt I don't think Google is behind. They have their Gemini model integrated almost everywhere, from Google Docs to Google Meet. Everyone is using it without noticing. They have won the integration of AI compared to the likes of Microsoft and Apple.
0 · 0 · 0 · 43
Anketa@anketasocial·
@craigzLiszt Google is behind right now, especially with Anthropic leading in AI. Google mainly seems behind given how much usage AI is getting in basic search.
4 · 0 · 1 · 499
Craig Weiss@craigzLiszt·
google is so undervalued rn
77 · 20 · 447 · 20.1K
Eteims@Eteims1·
This entire Claude Code clean-room saga is reminding me of the scene from Halt and Catch Fire where they were reverse-engineering the IBM BIOS.
0 · 0 · 0 · 53
Eteims@Eteims1·
@JamieTormenta @CodveAi @theo @grok A clean-room rewrite involves developer A, who has access to the code, creating a specification of the leaked code and handing that spec to developer B, who hasn't seen the code but can then implement the features from the spec. Currently, everyone is using AI for the process.
0 · 0 · 0 · 11
Theo - t3.gg@theo·
OFFICIAL STATEMENT from Anthropic regarding the leak
Theo - t3.gg tweet media
277 · 281 · 6.3K · 686.9K
Eteims@Eteims1·
@ohmypy JavaScript right now
GIF
0 · 0 · 0 · 16
Anton Zhiyanov@ohmypy·
I couldn't care less about Claude Code's source being leaked on npm. What terrifies me is that it's 512,000 lines of TypeScript code. HALF A MILLION lines of code for what's essentially a glorified API wrapper. I think the crucial point in our reality when we took the wrong turn was the invention of JavaScript. And we cemented our path to doom with the invention of TypeScript. Half a million lines of code. Dear Lord, have mercy on us.
314 · 173 · 3.5K · 408.1K
GAMA Miguel Angel 🐦‍⬛🔑
The JEPA architecture by @ylecun has been schmidhubered. This means it is a good algorithm and joins the hall of fame with other schmidhubered algorithms such as AlphaFold2, MLPs and transformers.
GAMA Miguel Angel 🐦‍⬛🔑 tweet media
Jürgen Schmidhuber@SchmidhuberAI

Dr. LeCun's heavily promoted Joint Embedding Predictive Architecture (JEPA, 2022) [5] is the heart of his new company. However, the core ideas are not original to LeCun. Instead, JEPA is essentially identical to our 1992 Predictability Maximization system (PMAX) [1][14]. Details in reference [19], which contains many additional references.

Motivation of PMAX [1][14]: since details of inputs are often unpredictable from related inputs, two non-generative artificial neural networks interact as follows: one net tries to create a non-trivial, informative, latent representation of its own input that is predictable from the latent representation of the other net's input.

PMAX [1][14] is actually a whole family of methods. Consider the simplest instance in Sec. 2.2 of [1]: an autoencoder net sees an input and represents it in its hidden units (its latent space). The other net sees a different but related input and learns to predict (from its own latent space) the autoencoder's latent representation, which in turn tries to become more predictable, without giving up too much information about its own input, to prevent what's now called "collapse." See illustration 5.2 in Sec. 5.5 of [14] on the "extraction of predictable concepts."

The 1992 PMAX paper [1] discusses not only autoencoders but also other techniques for encoding data. The experiments were conducted by my student Daniel Prelinger. The non-generative PMAX outperformed the generative IMAX [2] on a stereo vision task. The 2020 BYOL [10] is also closely related to PMAX. In 2026, @misovalko, leader of the BYOL team, praised PMAX and listed numerous similarities to much later work [19].

Note that the self-created "predictable classifications" in the title of [1] (and the so-called "outputs" of the entire system [1]) are typically INTERNAL "distributed representations" (as in the title of Sec. 4.2 of [1]). The 1992 PMAX paper [1] considers both symmetric and asymmetric nets. In the symmetric case, both nets are constrained to emit "equal (and therefore mutually predictable)" representations [1]. Sec. 4.2 on "finding predictable distributed representations" has an experiment with 2 weight-sharing autoencoders which learn to represent in their latent space what their inputs have in common (see the cover image of this post).

Of course, back then compute was a million times more expensive, but the fundamental insights of "JEPA" were present, and LeCun has simply repackaged old ideas without citing them [5,6,19]. This is hardly the first time LeCun (or others writing about him) has exaggerated LeCun's own significance by downplaying earlier work. He did NOT "co-invent deep learning" (as some know-nothing "AI influencers" have claimed) [11,13], and he did NOT invent convolutional neural nets (CNNs) [12,6,13], NOR was he even the first to combine CNNs with backpropagation [12,13]. While he got awards for the inventions of other researchers whom he did not cite [6], he did not invent ANY of the key algorithms that underpin modern AI [5,6,19].

LeCun's recent pitch:
1. LLMs such as ChatGPT are insufficient for AGI (which has been obvious to experts in AI & decision making, and is something he once derided @GaryMarcus for pointing out [17]).
2. Neural AIs need what I baptized a neural "world model" in 1990 [8][15] (earlier, less general neural nets of this kind, such as those by Paul Werbos (1987) and others [8], weren't called "world models," although the basic concept itself is ancient [8]).
3. The world model should learn to predict (in non-generative "JEPA" fashion [5]) higher-level predictable abstractions instead of raw pixels: that's the essence of our 1992 PMAX [1][14].

Astonishingly, PMAX or "JEPA" seems to be the unique selling proposition of LeCun's 2026 company on world model-based AI in the physical world, which is apparently based on what we published over 3 decades ago [1,5,6,7,8,13,14], and modeled after our 2014 company on world model-based AGI in the physical world [8]. In short, little if anything in JEPA is new [19]. But then the fact that LeCun would repackage old ideas and present them as his own clearly isn't new either [5,6,18,19].

FOOTNOTES
1. Note that PMAX is NOT the 1991 adversarial Predictability MINimization (PMIN) [3,4]. However, PMAX may use PMIN as a submodule to create informative latent representations [1] (Sec. 2.4), and to prevent what's now called "collapse." See the illustration on page 9 of [1].
2. Note that the 1991 PMIN [3] also predicts parts of latent space from other parts. However, PMIN's goal is to REMOVE mutual predictability, to obtain maximally disentangled latent representations called factorial codes. PMIN by itself may use the autoencoder principle in addition to its latent space predictor [3].
3. Neither PMAX nor PMIN was my first non-generative method for predicting latent space, which was published in 1991 in the context of neural net distillation [9]. See also [5-8].
4. While the cognoscenti agree that LLMs are insufficient for AGI, JEPA is so, too. We should know: we have had it for over 3 decades under the name PMAX! Additional techniques are required to achieve AGI, e.g., meta learning, artificial curiosity and creativity, efficient planning with world models, and others [16].

REFERENCES (easy to find on the web)
[1] J. Schmidhuber (JS) & D. Prelinger (1993). Discovering predictable classifications. Neural Computation, 5(4):625-635. Based on TR CU-CS-626-92 (1992): people.idsia.ch/~juergen/predm…
[2] S. Becker, G. E. Hinton (1989). Spatial coherence as an internal teacher for a neural network. TR CRG-TR-89-7, Dept. of CS, U. Toronto.
[3] JS (1992). Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879. Based on TR CU-CS-565-91, 1991.
[4] JS, M. Eldracher, B. Foltin (1996). Semilinear predictability minimization produces well-known feature detectors. Neural Computation, 8(4):773-786.
[5] JS (2022-23). LeCun's 2022 paper on autonomous machine intelligence rehashes but does not cite essential work of 1990-2015.
[6] JS (2023-25). How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23.
[7] JS (2026). Simple but powerful ways of using world models and their latent space. Opening keynote for the World Modeling Workshop, 4-6 Feb 2026, Mila - Quebec AI Institute.
[8] JS (2026). The Neural World Model Boom. Technical Note IDSIA-2-26.
[9] JS (1991). Neural sequence chunkers. TR FKI-148-91, TUM, April 1991. (See also Technical Note IDSIA-12-25: who invented knowledge distillation with artificial neural networks?)
[10] J. Grill et al. (2020). Bootstrap your own latent: A "new" approach to self-supervised learning. arXiv:2006.07733
[11] JS (2025). Who invented deep learning? Technical Note IDSIA-16-25.
[12] JS (2025). Who invented convolutional neural networks? Technical Note IDSIA-17-25.
[13] JS (2022-25). Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, arXiv:2212.11279
[14] JS (1993). Network architectures, objective functions, and chain rule. Habilitation thesis, TUM. See Sec. 5.5 on "Vorhersagbarkeitsmaximierung" (Predictability Maximization).
[15] JS (1990). Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. Technical Report FKI-126-90, TUM.
[16] JS (1990-2026). AI Blog.
[17] @GaryMarcus. Open letter responding to @ylecun. A memo for future intellectual historians. Substack, June 2024.
[18] G. Marcus. The False Glorification of @ylecun. Don't believe everything you read. Substack, Nov 2025.
[19] J. Schmidhuber. Who invented JEPA? Technical Note IDSIA-3-22, IDSIA, Switzerland, March 2026. people.idsia.ch/~juergen/who-i…
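The symmetric setup described in the thread above (two weight-sharing encoders whose latent codes are trained to agree, i.e., be mutually predictable, while staying informative enough to avoid collapse) can be sketched on a toy problem. This is a minimal illustrative sketch, not the 1992 implementation: the linear encoders, the two-view synthetic data, the unit-norm constraint, and the variance bonus used to prevent collapse are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two "views" x1, x2 share a 2-D source s;
# each view adds its own noise, large in dims 0-3, tiny in dims 4-7.
N, D_IN, D_Z = 512, 8, 2
s = rng.normal(size=(N, 2))
mix = rng.normal(size=(2, D_IN))                  # shared generative mixing
noise_std = np.array([1.0] * 4 + [0.05] * 4)
x1 = s @ mix + rng.normal(size=(N, D_IN)) * noise_std
x2 = s @ mix + rng.normal(size=(N, D_IN)) * noise_std

def unit_cols(W):
    # Unit-norm columns keep the latent scale bounded (a demo choice,
    # standing in for whatever capacity constraint a real net would use).
    return W / np.linalg.norm(W, axis=0, keepdims=True)

W = unit_cols(rng.normal(size=(D_IN, D_Z)))       # ONE weight-shared linear encoder

def metrics(W):
    z1, z2 = x1 @ W, x2 @ W
    agree = np.mean(np.sum((z1 - z2) ** 2, axis=1))  # disagreement between latents
    var = z1.var(axis=0).sum()                        # informativeness (anti-collapse)
    return agree, var

agree0, _ = metrics(W)
lam, lr = 0.1, 0.05
for _ in range(500):
    z1, z2 = x1 @ W, x2 @ W
    g_agree = 2 * (x1 - x2).T @ (z1 - z2) / N         # grad of disagreement
    zc = z1 - z1.mean(axis=0)
    g_var = 2 * (x1 - x1.mean(axis=0)).T @ zc / N     # grad of latent variance
    # Make the two latents agree more while rewarding variance (anti-collapse).
    W = unit_cols(W - lr * (g_agree - lam * g_var))

agree1, var1 = metrics(W)
print(f"disagreement {agree0:.3f} -> {agree1:.3f}, latent variance {var1:.3f}")
```

Under these assumptions the encoder learns to project away the view-specific noisy dimensions and keep the shared signal: disagreement drops while the latent variance stays well away from zero, which is the "predictable yet informative" trade-off the post attributes to PMAX.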

16 · 56 · 892 · 70.5K
Eteims reposted
Andrej Karpathy@karpathy·
@gvanrossum LLM = CPU (data: tokens not bytes, dynamics: statistical and vague not deterministic and precise) Agent = operating system kernel
154 · 327 · 4.8K · 325.3K
keachhagey@keachhagey·
It’s never been entirely clear why Dario and the other Anthropic co-founders left OpenAI. I set out to find out.
keachhagey tweet media
49 · 173 · 2K · 898.8K
Abdulmuiz Adeyemo@AbdMuizAdeyemo·
@MindTheGapMTG @PeterDiamandis This is the trap. AI can make weak builders feel like gods right until the day reality punches through the demo. The people who win will be the ones who know the work deeply enough to smell nonsense fast.
1 · 0 · 2 · 98
Peter H. Diamandis, MD@PeterDiamandis·
If AI can now solve math, discover physics and chemistry breakthroughs faster than human PhDs, why are we still training humans to be physicists? Serious question. Should education shift from 'learn to do X' to 'learn to direct AI doing X'? The wrong direction costs a generation their careers.
859 · 138 · 1.3K · 485.5K
Eteims@Eteims1·
@icanvardar Grannies in China are using it 🙃
0 · 0 · 1 · 6
Dmitry Korzhov@korzhov_dm·
What if the voices we love didn't have to disappear? They don't have to.
226 · 30 · 249 · 377.8K
Eteims@Eteims1·
@FinanceDirCFO I agree with Jensen here. Basically, he is saying: expert + AI > expert. AI is a tool that can maximise the potential of any person's skill.
0 · 0 · 0 · 5
Alastair Thomson@FinanceDirCFO·
This is nonsense. Even if AI works (spoiler: it doesn't) this makes about as much sense as expecting every college graduate to be an expert accountant. The reason we have specialists is that nobody can know everything about everything.
Chief Nerd@TheChiefNerd

JENSEN HUANG: “I would advise that every college student, every teacher should encourage their students to go use AI. Every college student should graduate and be an expert in AI.”

58 · 16 · 122 · 6.1K
Pasquale Puzio@PasqualePuzio·
@emanueledpt @gdb It actually did that, their API business is going well. They did build a profitable business, it just isn’t yours 😂
1 · 0 · 3 · 41
Greg Brockman@gdb·
gpt-5.4 has ramped faster than any other model we've launched in the API: within a week of launch, 5T tokens per day, handling more volume than our entire API one year ago, and reaching an annualized run rate of $1B in net-new revenue. it's a good model, try it out!
436 · 171 · 4.2K · 954.6K
Eteims@Eteims1·
@CodveAi @sama That whole “trust us” narrative used to annoy me. I remember when Ilya Sutskever, or was it Greg Brockman, was on the Lex Fridman Podcast, and Lex said something like, “I trust we’re in good hands because I know you guys.” That moment felt so cringe 😬
GIF
0 · 0 · 0 · 35
Sam Altman@sama·
AI will help discover new science, such as cures for diseases, which is perhaps the most important way to increase quality of life long-term. AI will also present new threats to society that we have to address. No company can sufficiently mitigate these on its own; we will need a society-wide response to things like novel bio threats, a massive and fast change to the economy, extremely capable models causing complex emergent effects across society, and more.

These are the areas the OpenAI Foundation will initially focus on, and in my opinion are some of the most important ones for us to get right. The Foundation will spend at least $1 billion over the next year.

@woj_zaremba, co-founder of OpenAI, will transition to Head of AI Resilience. I believe that shifting how the world thinks about safety to include a Resilience-style approach is critical, and I am extremely grateful to Wojciech for taking on this role. Wojciech has been my cofounder for the last decade; anyone who knows him will understand what I mean when I say he is one of a kind. He has a lot of ideas about how we build a new kind of AI safety.

@JacobTref is joining as Head of Life Sciences and Curing Diseases. @annaadeola, our VP of Global Impact, will transition to Head of AI for Civil Society and Philanthropy. @robert_kaiden is joining as Chief Financial Officer. @jeffarnold is joining as Director of Operations.
1.8K · 552 · 6.8K · 1M
Eteims@Eteims1·
@LuizaJarovsky Because curing cancer isn't low-hanging fruit 🍒
0 · 0 · 0 · 4
Luiza Jarovsky, PhD@LuizaJarovsky·
Everybody wants AI to help cure cancer. Why isn't every AI company obsessively focused on that?
656 · 87 · 1.1K · 247.2K
K.O.O@Dominus_Kelvin·
@Eteims1 It was just in a long-running loop and feels weird. I'll try another base model and come back to it, since it needs fine-tuning.
1 · 0 · 1 · 19
K.O.O@Dominus_Kelvin·
Saw this and decided to evaluate it for TemboAI, and I am just disappointed. This is the 8B Afriqueqwen result (to the right). Is this playing? And it was released 2 months ago.
K.O.O tweet media
14 · 3 · 22 · 3K
Eteims@Eteims1·
@Tech_girlll It's kind of beautiful when you think about it. That elementary school maths is what is taking over the world.
0 · 0 · 1 · 37
Mari@Tech_girlll·
Can’t believe we are going to lose our jobs to this.
Mari tweet media
15 · 0 · 45 · 2.2K