yOPERO

1.2K posts


@yopero

Image-free thinker, one-shot learner and jack of all trades.

Oxford · Joined November 2009
214 Following · 149 Followers
yOPERO@yopero·
@carlos_olivera Under which budget line items or 'ghost' entries was the pay of the digital warriors hidden, and how was the use of official servers for propaganda purposes technically justified?
Carlos Olivera Terrazas@carlos_olivera·
Folks, today at noon I'm recording a podcast. I'm laying it all out 🎙️ My experience in government, conspiracies, electoral fraud, spyware, and digital warriors... Any uncomfortable questions or hidden doubts? Drop them in the comments. I'll try to answer them.
yOPERO@yopero·
@carlos_olivera You mention the use of spyware. Was it used exclusively for intelligence against external threats, or was it also employed to profile and surveil private citizens, journalists, and members of the governing coalition itself? Who held the master key to that data?
yOPERO@yopero·
@SchmidhuberAI 3/3 If so, do you think the next leap in AGI might actually resemble an "Aphantasic AI"™, one that bypasses the computational cost of sensory simulation to operate purely on high-dimensional logic and dynamics?
yOPERO@yopero·
@SchmidhuberAI 2/3 Could it be that the "rendering" of high-fidelity internal simulations is merely a biological interface preference rather than a computational necessity for intelligence?
Jürgen Schmidhuber@SchmidhuberAI·
World Model Boom. The concept of a mental model of the world - a world model - dates back millennia. Aristotle wrote that phantasia or mental images allow humans to imagine the future and to plan action sequences by mentally manipulating images in the absence of the actual objects. Only 2370 years later, a mere blink of an eye by cosmic standards, we are witnessing a boom in world models based on artificial neural networks (NNs) for AI in the physical world. New startups are emerging around this. To explain what's going on, I'll take you on a little journey through the history of general purpose neural world models [WM26], discussed in yesterday's talk for the World Modeling Workshop (Quebec AI Institute, 4 Feb 2026), which is on YouTube [WM26b].

★ 1990: recurrent NNs as general purpose world models. In 1990, I studied adaptive agents living in partially observable environments where non-trivial kinds of memory are required to act successfully. I used the term world model for a recurrent NN (RNN) that learns to predict the agent's sensory inputs (including pain and reward signals) reflecting the consequences of the actions of a separate controller RNN steering the agent. The controller C used the world model M to plan its action sequences through "rollouts" or mental experiments. Compute was 10 million times more expensive than today. Since RNNs are general purpose computers, this approach went beyond previous, less powerful, feedforward NN-based systems (since 1987) for fully observable environments (Werbos 1987, Munro 1987, Nguyen & Widrow 1989).

★ 1990: artificial curiosity for NNs. In the beginning, my 1990 world model M knew nothing. That's why my 1990 controller C (a generative model with stochastic neurons) was intrinsically motivated, through adversarial artificial curiosity, to invent action sequences or experiments that yield data from which M can learn something: C simply tried to maximize the prediction error minimized by M. Today, this is called a generative adversarial network (GAN). The 1990 system didn't learn like today's foundation models and large language models (LLMs), by downloading and imitating the web. No, it generated its own self-invented experiments to collect limited but relevant data from the environment, like a physicist, or a baby. It was a simple kind of artificial scientist.

★ March-June 1991: linear Transformers and deep residual learning. The above-mentioned gradient-based RNN world models of 1990 did not work well for long time lags between relevant input events - they were not very deep. To overcome this, my little AI lab at TU Munich came up with various innovations, in the process laying the foundations of today's foundation models and LLMs. We published the first Transformer variants (see the T in ChatGPT), including the now-so-called unnormalized linear Transformer [ULTRA]; pre-training for deep NNs (see the P in ChatGPT); NN distillation (central to the famous 2025 DeepSeek and other LLMs); and deep residual learning [VAN1][WHO11] for very deep NNs such as Long Short-Term Memory, the most cited AI of the 20th century and the basis of the first LLMs. In fact, as of 2026, the two most frequently cited papers of all time (most citations within 3 years; manuals excluded) are directly based on this work of 1991 [MOST26]. Back then, however, it was already totally obvious that LLM-type NNs alone are not enough to achieve Artificial General Intelligence (AGI). No AGI without mastery of the real world!
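To make the 1990 C-M loop above concrete, here is a minimal toy sketch (my illustration, not Schmidhuber's published system): a linear world model M learns to predict the next observation, while a stochastic controller C receives M's prediction error as curiosity reward, so it is nudged toward experiences that still surprise M. The environment, dimensions, learning rates, and update rules are all invented placeholders.

```python
# Toy sketch of adversarial artificial curiosity (invented details,
# not the 1990 implementation): controller C is rewarded for surprising
# world model M, while M learns to predict the consequences of C's actions.
import numpy as np

rng = np.random.default_rng(0)
OBS, ACT, LR = 4, 2, 0.05

W_m = rng.normal(scale=0.1, size=(OBS + ACT, OBS))  # world model M: (obs, act) -> next obs
W_c = rng.normal(scale=0.1, size=(OBS, ACT))        # controller C: obs -> action

def env_step(obs, act):
    """Toy environment with fixed linear dynamics (a made-up stand-in)."""
    A = np.eye(OBS) * 0.9
    B = np.ones((ACT, OBS)) * 0.1
    return obs @ A + act @ B

obs = rng.normal(size=OBS)
for step in range(500):
    act = np.tanh(obs @ W_c) + 0.1 * rng.normal(size=ACT)  # stochastic C
    nxt = env_step(obs, act)

    x = np.concatenate([obs, act])
    err = nxt - x @ W_m                   # M's prediction error on this transition
    curiosity = float((err ** 2).mean())  # C's intrinsic reward: surprise M

    # M: normalized least-mean-squares step that shrinks the very error C seeks.
    W_m += LR * np.outer(x, err) / (1.0 + x @ x)
    # C: crude reward-weighted nudge toward the perturbed (surprising) action.
    W_c += LR * curiosity * np.outer(obs, act - np.tanh(obs @ W_c))
    obs = nxt

print("final surprise (M's mean squared error):", curiosity)
```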
True AGI in the physical world must somehow learn a model of its changing environment, and use the model to plan action sequences that solve its goals. Sure, one can train a foundation model to become a world model M, but additional elements are needed for decision making and planning. In particular, some sort of controller C must learn to use M to achieve its goals.

★ 1991-: reward C for M's improvements, not M's errors. Many things are fundamentally unpredictable by M, e.g., white noise on a screen (the noisy TV problem). To deal with this problem, in 1991, I used M's improvements rather than M's errors as C's intrinsic curiosity reward. In 1995, we used the information gain (optimally since 2011).

★ 1991-: predicting latent space. My NNs also started to predict latent space and hidden units rather than raw pixels. For example, I had a hierarchical architecture for predictive models that learn representations at multiple levels of abstraction and multiple time scales. Here, an automatizer NN learns to predict the informative hidden units of a chunker NN, thus collapsing or distilling the chunker's knowledge into the automatizer. This can greatly facilitate downstream deep learning. In 1992, my other combination of two NNs also learned to create informative yet predictable internal representations in latent space. Both NNs saw different but related inputs which they tried to represent internally. For example, the first NN tried to predict the hidden units of an autoencoder NN, which in turn tried to make its hidden units more predictable, while leaving them as informative as possible. This was called Predictability Maximization, complementing my earlier 1991 work on Predictability Minimization: adversarial NNs learning to create informative yet unpredictable internal representations.

★ 1997-: predicting in latent space for reinforcement learning (RL) and control. I applied the above concepts of hidden state prediction to RL, building controllers that follow a self-supervised learning paradigm that produces informative yet predictable internal abstractions of complex spatio-temporal events. Instead of predicting all details of future inputs (e.g., raw pixels), the 1997 system could ask arbitrary abstract questions with computable answers encoded in representation space. It could even focus its attention on small relevant parts of its latent space, and ignore the rest. Two learning, reward-maximizing adversaries called left brain and right brain played a zero-sum game, trying to surprise each other, occasionally betting on different yes/no outcomes of computational experiments, until the outcomes became predictable and boring. Remarkably, this type of self-guided learning and exploration can accelerate external reward intake.

★ Early 2000s: theoretically optimal controllers and universal world models. My postdoc Marcus Hutter, working under my SNF grant at IDSIA, even had a mathematically optimal (yet computationally infeasible) way of learning a world model and exploiting it to plan optimal action sequences: the famous AIXI model.

★ 2006: formal theory of fun & creativity. C's intrinsic reward or curiosity reward was redefined as M's compression progress (rather than M's traditional information gain). This led to the "formal theory of fun & creativity." The basic insight was: interestingness is the first derivative of subjective beauty or compressibility (in space and time) of the lifelong sensory input stream, and curiosity & creativity is the drive to maximize it. I think this is the essence of what scientists and artists do.
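The "noisy TV" fix above has a one-screen illustration. In this toy sketch (the names, smoothing constants, and running-mean "model" are my assumptions, not the published method), the intrinsic reward is the decline of M's smoothed prediction error on fresh data: a learnable constant signal yields large cumulative progress, while unlearnable white noise yields progress near zero, even though its raw error stays high.

```python
# Toy sketch: curiosity reward as learning progress (error *improvement*),
# not raw error, so an unpredictable "noisy TV" earns ~no reward.
import numpy as np

rng = np.random.default_rng(1)

def total_progress(source, steps=2000, lr=0.05, smoothing=0.01):
    """M is a running mean; reward is the decline of M's smoothed squared error."""
    mean, smooth_err, progress = 0.0, None, 0.0
    for _ in range(steps):
        x = source()
        err = (x - mean) ** 2                     # M's error on fresh data
        if smooth_err is None:
            smooth_err = err
        else:
            new_smooth = (1 - smoothing) * smooth_err + smoothing * err
            progress += smooth_err - new_smooth   # signed learning progress
            smooth_err = new_smooth
        mean += lr * (x - mean)                   # M's update: track the mean
    return progress

print("learnable signal:", total_progress(lambda: 3.0))          # large positive
print("noisy TV       :", total_progress(lambda: rng.normal()))  # near zero
```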
★ 2014: we founded an AGI company for Physical AI in the real world, based on neural world models [NAI]. It achieved many remarkable milestones in collaboration with world-famous companies. Alas, like some of our projects, the company may have been a bit ahead of its time, because real-world robots and hardware are so challenging. Nevertheless, it's great that in the 2020s, new world model startups have been created!

★ 2015: planning with spatio-temporal abstractions in world models / RL prompt engineer / chain of thought. The 2015 paper went beyond the inefficient millisecond-by-millisecond planning of 1990, addressing planning and reasoning in abstract concept spaces and learning to think (including ways of learning to act largely by observation), going beyond our hierarchical neural subgoal generators and planners of 1990-92. The controller C became an RL prompt engineer that learns to create a chain of thought: to speed up RL, C learns to query its world model M for abstract reasoning and decision making. This has become popular.

★ 2018: a 2018 paper finally collapsed C and M into a single One Big Net for everything, using my NN distillation procedure of 1991. Apparently, this is what DeepSeek used to shock the stock market in 2025. And the other 2018 paper with David Ha was the one that finally made world models popular :-)

★ What's next? As compute keeps getting 10 times cheaper every 5 years, the Machine Learning community will combine the puzzle pieces above into one simple, coherent whole, and scale it up.

REFERENCES
100+ references in [WM26], based on [WM26b]. Links in the reply!
[WM26b] J. Schmidhuber. Simple but powerful ways of using world models and their latent space. Talk at the World Modeling Workshop, Agora, Mila - Quebec AI Institute, 4 Feb 2026. It's on YouTube!
[WM26] J. Schmidhuber. The Neural World Model Boom. Technical Note IDSIA-2-26, 4 Feb 2026.
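To tie the planning thread together (the 1990 "rollouts" and the 2015 controller that queries M), here is a minimal random-shooting planner as a toy stand-in for C: it asks an already-trained world model, here simply a known linear map, to simulate candidate action sequences, then executes the best first action. The dynamics, goal, horizon, and candidate count are all invented for illustration.

```python
# Toy sketch of planning via world-model rollouts ("mental experiments").
# M is assumed pre-trained; here it is just a fixed linear map (an assumption).
import numpy as np

rng = np.random.default_rng(2)
GOAL = np.array([1.0, -1.0])

def M(state, action):
    """World model: predicted next state."""
    return 0.9 * state + 0.3 * action

def plan(state, horizon=5, candidates=256):
    """C queries M: roll out random action sequences, keep the best first action."""
    best_act, best_cost = None, np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, GOAL.size))
        s = state
        for a in seq:            # mental experiment: simulate forward with M
            s = M(s, a)
        cost = float(np.sum((s - GOAL) ** 2))
        if cost < best_cost:
            best_cost, best_act = cost, seq[0]
    return best_act

state = np.zeros(2)
for t in range(10):
    state = M(state, plan(state))   # here the real environment equals the model
print("final state:", state, "target:", GOAL)
```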
yOPERO retweeted
Lucien Hinderling@lhinderling·
Automated optogenetic control of hundreds of cells in parallel. Each cell is individually steered; together, the cells act as a "tissue printer". Preprint & code out!
yOPERO retweeted
OSHWDem @ A Coruña@OSHWDem·
💥 OSHWDem 2025 has a date! 📅 Saturday, October 4. 📍 Museo Domus (A Coruña). An open-technology fair with workshops, talks, and competitions. Free and open to everyone. 🛠️ Registration opens soon: oshwdem.org #OSHWDem #TecnologíaLibre #OpenSource
yOPERO retweeted
Diode Computers, Inc.@diodeinc·
We sat down with @sdianahu to talk about Diode and the technical challenges we had to solve to run a PCB compiler on air-gapped systems. If you think running Rust in the browser with WebAssembly and edge-routing algorithms is fun, come work with us.
Diana@sdianahu

Congrats to @diodeinc on raising an $11.4M Series A! They build AI that automates circuit design, with customers including a Fortune 100 company and Physical Intelligence. 1/9: the story of how @lennykhazan and @davideasnaghi applied to YC with just an idea, while still at their jobs. Nobody wanted that first idea, but it was a good starting point to iterate on during the S24 batch.

yOPERO retweeted
Mario Zechner@badlogicgames·
This is an excellent read. It also aligns with my experience. I have my coding agents on a very short and tight leash, and no generated code gets pushed unless I have cleaned it up and understand it myself. Everything else leads to pain and suffering. albertofortin.com/writing/coding…
yOPERO retweeted
Balaji@balajis·
Everyone is a libertarian on the Internet, because it is simultaneously far more progressive and far more capitalist than any previous society.

It is ultra-progressive because billions of people from every race, religion, and ethnicity are on the global Internet. Anyone can speak to anyone, broadcast anything, associate with anyone, and do just about whatever they want if it is permitted by code. It is also uber-capitalist because billions of people can transact with anyone, hire anyone, work for anyone, found their own businesses, become zillionaires, set up their own servers, and enjoy perfect freedom of association.

The common thread is individual consent. You consent to sign up to a server. You consent to install an app. And every million-person community on the Internet is formed by a million similar voluntary actions. So: the Internet proves that consent scales, that voluntarism scales, that internationalism scales, that capitalism scales. It proves this empirically. Because by and large, despite all its faults, the total freedom of the Internet is attractive. The ultra-nationalist nevertheless chooses to post on an international network. The anti-capitalist nevertheless chooses to post through a capitalist phone.

The reason the Internet works is that code is law. So all the impractical libertarian ideas that presupposed flawless judges or strict property rights suddenly became feasible. For example, open borders in the physical world don't work. But in the digital world, a site like Facebook actually can onboard billions of strangers and automatically adjudicate the interactions between them. Similarly, polycentric law before the Internet didn't work, because you couldn't realistically have multiple legal systems in the same physical location. But now, with Bitcoin and Ethereum and Solana, you can simply swap between different monetary policies and contract enforcement as you see fit.

The fundamental point is that the Internet makes libertarianism more practical. For example, the esteemed @RonPaul wrote about ending the Fed, but that couldn't realistically be done at the level of the state. However, tech libertarians could build Bitcoin, and thereby practically end the Fed at the level of the network. Similarly, tech libertarians couldn't shut down the Post Office, but they could boot up email. They couldn't reform taxi medallions, but they could boot up Uber. They couldn't reform these failed states, so they built the alternative on the global network.

Such examples can be multiplied. But the point is that actually existing libertarianism does exist. It is called the Internet. It is simply the most popular thing in the world, perhaps the most popular thing in human history. It is much more popular than any individual politician or state. And yet it is still underestimated.
Payton Alexander@AlexanderPayton

Time to debunk the "nobody is a libertarian" chart, since it's going viral again. Anytime you see a heat map purporting to show that nobody holds a combination of economic conservative and social liberal views, you should be skeptical. They almost always manufacture that result by miscategorizing the axis on which a given question belongs or what constitutes a left or right answer.

In order to move toward the bottom (social liberal) on this chart, you would have to AGREE with the statements:
✅ "Over the past few years, Black people have gotten less than they deserve."
✅ "Generations of slavery and discrimination have created conditions that make it difficult for Black people to work their way out."
✅ Illegal immigrants are a net "contribution" to our country.

In order to move to the right (economic conservative) on the chart, you would have to DISAGREE with the following statements:
❌ "Politics is a rigged game."
❌ "People like me don't have any say."
❌ "Social Security is important to me personally."

Do you know any libertarians who believe in reparations for slavery, feel well represented in the political process, and don't think entitlements are important? Yeah, neither do I. No wonder the chart doesn't either.

yOPERO retweeted
Yuchen Jin@Yuchenj_UW·
Nothing works, but the vibes are immaculate.
yOPERO retweeted
Anders Sandberg@anderssandberg·
Seeing those vastnesses of larger worlds tower above and below us is not what most people expect to discover when they start prompting for a cute waifu or an avocado recipe. Apeirophobia is a thing we need to learn to deal with gradually.
yOPERO retweeted
Michael Levin@drmichaellevin·
New paper with @BeneHartl: "What does evolution make? Learning in living lineages and machines" cell.com/trends/genetic… Genes code for proteins, but what is the relationship between the genome and the large-scale form and function of organisms? What is a good formalism for thinking about what genomes actually do, one that unlocks the discovery of new plasticity of systemic outcomes in regenerative medicine and bioengineering contexts, and helps us understand how evolution operates on a reprogrammable (actually, agential) medium? What concepts from the fields of cognitive science, diverse intelligence, and computer science have relevance here?

Abstract: "Biology implements a multiscale competency architecture (MCA), where components competently navigate problem domains (e.g., metabolic, physiological, transcriptional, and anatomical). Biological subsystems continuously shape (hack) each other's behavior, toward homeodynamic goal states emerging at new scales. The genome acts as a generative model, not a hardwired algorithm nor a blueprint, for species-specific form and function. A bowtie architecture enables evolutionary lessons of the past to be generalized into lineage memory engrams which are then actively decoded (interpreted) in ways appropriate to default or novel situations by the morphogenetic machinery. Fundamental symmetries across evolution, development, and behavior involve learning and creative problem-solving, which can be modeled by machine learning (ML) concepts such as autoencoders (AEs) and neural cellular automata (NCAs)."

A closely related and excellent paper by @WiringTheBrain and @CheneyLab: cell.com/trends/genetic…
yOPERO@yopero·
@carlos_olivera The part about the hackathons and leadership is painfully accurate :)