Marko Njegomir

2.1K posts


@njmarko

Њ | PhD student | TA | Machine learning, graph neural networks, software engineering, medical neuroscience The limits of my compute mean the limits of my world

Novi Sad, Serbia · Joined August 2021
957 Following · 1K Followers
Marko Njegomir @njmarko
Karpathy seems to be in search of the Optimal System too 🧐 It has now been 250 years since the Declaration of Independence, which got me thinking that some modern upgrades to the political software are possible. Deep Transparency and a 4th control branch of government, combined with AI, could potentially improve many systems of government.
Andrej Karpathy @karpathy

Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments. Historically, it is the governments that act to make society legible (e.g. "Seeing Like a State" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse.

Government accountability has not been constrained by access (the various branches of government publish an enormous amount of data); it has been constrained by intelligence - the ability to process a lot of raw data, combine it with domain expertise and derive insights. As an example, the 4,000-page omnibus bill is "transparent" in principle and in a legal sense, but certainly not in a practical sense for most people. There's a lot more like it: laws, spending bills, federal budgets, Freedom of Information Act responses, lobbying disclosures... Only a few highly trained professionals (investigative journalists) could historically process this information. This bottleneck might dissolve - not only are the professionals further empowered, but a lot more people can participate.

Some examples to be precise: detailed accounting of spending and budgets, diff tracking of legislation, individual voting trends w.r.t. stated positions or speeches, lobbying and influence (e.g. a graph of lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation), procurement and contracting, regulatory capture warning lights, judicial and legal patterns, campaign finance... Local governments might be even more interesting because the governed population is smaller, so there is less national coverage: city council meetings, decisions around zoning, policing, schools, utilities...

Certainly, the same tools can easily cut the other way and it's worth being very mindful of that, but I lean optimistic overall that added participation, transparency and accountability will improve democratic, free societies.
(the quoted tweet is half-ish related, but inspired me to post some recent thoughts)
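The lobbying-and-influence graph mentioned in the tweet can be sketched as a tiny path query. A minimal sketch with entirely made-up entities (all names below are hypothetical, not real data):

```python
from collections import deque

# Toy influence graph (hypothetical entities); edges follow the chain
# lobbyist -> firm -> client -> legislator -> committee -> vote -> regulation
GRAPH = {
    "Lobbyist A": ["Firm X"],
    "Firm X": ["Client Co"],
    "Client Co": ["Legislator L"],
    "Legislator L": ["Committee C"],
    "Committee C": ["Vote 17"],
    "Vote 17": ["Regulation R"],
}

def influence_path(graph, start, goal):
    """Breadth-first search for a chain of influence; returns the path or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

With real disclosure data, a query like `influence_path(GRAPH, "Lobbyist A", "Regulation R")` would surface the full chain from lobbyist to regulation; this is only the shape of the idea, not a working accountability tool.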

0 replies · 0 reposts · 0 likes · 12 views
Marko Njegomir @njmarko
I have been working on a series of videos covering the search for the Optimal System. Our political systems were designed hundreds of years ago, and they could use a modern upgrade. For example, a control branch of government could be introduced with no executive power, so that the runner-up in the election gets to operate it. This way, people in that branch are motivated to report corruption because they want to gain power in the next election cycle, and they can be additionally rewarded with a share of the money stolen due to uncovered corruption. The control branch could use AI to monitor what the government is doing. This would work perfectly with deep transparency (article 9 of the series). x.com/njmarko/status…
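The incentive in the proposal reduces to a simple threshold: the auditor reports whenever the legal reward for uncovering corruption beats the best bribe on offer. A toy model, with a made-up reward share (the proposal does not fix a number):

```python
def reports_corruption(stolen: float, bribe_offer: float, reward_share: float = 0.10) -> bool:
    """Toy incentive model for the proposed control branch: the runner-up's
    auditors earn a fixed share of recovered stolen funds, so they report
    whenever the legal reward exceeds the bribe. The political payoff of
    exposing the rival (and reward_share itself) are hypothetical and
    ignored here; this is an illustration, not the proposal's actual math."""
    return reward_share * stolen > bribe_offer
```

For example, with a 10% share, an official who stole 5M and offers a 400K bribe still gets reported, since the legal reward (500K) is larger.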
0 replies · 0 reposts · 0 likes · 40 views
Andrej Karpathy @karpathy
Harry Rushworth @Hrushworth

The British Government is a complicated beast. Dozens of departments, hundreds of public bodies, more corporations than one can count... Such is its complexity that there isn't an org chart for it. Well, there wasn't... Introducing ⚙️Machinery of Government⚙️

96 replies · 93 reposts · 716 likes · 59.2K views
Marko Njegomir @njmarko
11.4 The 5% Wealth Tax on Speculators | In Search of the Optimal System | How to mathematically differentiate a productive entrepreneur from a passive rent seeker, and why speculators must face a brutal wealth tax. 🏦⚖️ Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 0 reposts · 0 likes · 23 views
Marko Njegomir @njmarko
11.3 Taxing Behavior, Not Production | In Search of the Optimal System | Why taxing reinvested corporate profit is an economic failure, and how to use the tax code to mathematically force investment. 🏢📉 Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 1 repost · 0 likes · 22 views
Marko Njegomir reposted
NASA @NASA
"We can see the Moon out of the docking hatch right now. It's a beautiful sight." Flight day 3 is in the books, and our @NASAArtemis II crew is now closer to the Moon than to Earth. Check out highlights from our lunar mission. What’s been your favorite moment so far?
2K replies · 9.8K reposts · 65K likes · 4.6M views
Marko Njegomir @njmarko
11.2 The Three Pillars of a Trustless Retirement | In Search of the Optimal System | How to build a mathematically sound retirement architecture using corporate equity, VAT routing, and a Universal Basic Income. 🏛️📈 Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 0 reposts · 0 likes · 32 views
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.1K replies · 4.8K reposts · 42.8K likes · 12M views
Marko Njegomir @njmarko
We are living in Tesla's world now, even though his contemporaries stole a lot of his ideas. He worded it beautifully: "Let the future tell the truth, and evaluate each one according to his work and accomplishments. The present is theirs; the future, for which I have really worked, is mine." - Nikola Tesla. Since AI will be writing the history, will it tell future generations that they are living in Schmidhuber's world?
[image]
1 reply · 3 reposts · 35 likes · 8.5K views
Marko Njegomir @njmarko
11.1 The Pension Ponzi Scheme | In Search of the Optimal System | Why the state pension system is mathematically doomed, and how changing demographics have forced governments to steal from the future. 📉👴 Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 0 reposts · 0 likes · 31 views
Marko Njegomir @njmarko
Aim for the moon 🌖, and if you miss, you'll land among the stars.✨ 🚀
[image]
0 replies · 0 reposts · 2 likes · 52 views
Marko Njegomir @njmarko
Full breakdown 9 | Deep Transparency: The Architecture of an Open Budget | In Search of the Optimal System | How do corrupt regimes steal billions while publishing their budgets online? From the black hole of discretionary spending to using AI and immutable ledgers to hunt state corruption, this is the complete summary of the ninth workshop in our series. Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
Marko Njegomir @njmarko · x.com/i/article/2037…
0 replies · 0 reposts · 2 likes · 81 views
Marko Njegomir @njmarko
10.4 The Political Incubator | In Search of the Optimal System | How to filter opportunists out of politics by replacing generic local committees with goal-oriented civic startups. 💡🏗️ Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 0 reposts · 0 likes · 67 views
Marko Njegomir @njmarko
10.3 The End of Left and Right | In Search of the Optimal System | Why the VUCA world has made traditional political ideologies useless, and the architectural metaphor of paving the path. 🗺️🚶‍♂️ Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 0 reposts · 0 likes · 69 views
Marko Njegomir @njmarko
10.2 The Six Sources of Power | In Search of the Optimal System | Understanding the physics of civic life, and how opposition movements must tactically weaponize norms, ideas, and numbers. ⚖️⚙️ Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 0 reposts · 0 likes · 67 views
Jürgen Schmidhuber @SchmidhuberAI
Dr. LeCun's heavily promoted Joint Embedding Predictive Architecture (JEPA, 2022) [5] is the heart of his new company. However, the core ideas are not original to LeCun. Instead, JEPA is essentially identical to our 1992 Predictability Maximization system (PMAX) [1][14]. Details in reference [19], which contains many additional references.

Motivation of PMAX [1][14]: since details of inputs are often unpredictable from related inputs, two non-generative artificial neural networks interact as follows: one net tries to create a non-trivial, informative, latent representation of its own input that is predictable from the latent representation of the other net's input.

PMAX [1][14] is actually a whole family of methods. Consider the simplest instance in Sec. 2.2 of [1]: an auto encoder net sees an input and represents it in its hidden units (its latent space). The other net sees a different but related input and learns to predict (from its own latent space) the auto encoder's latent representation, which in turn tries to become more predictable, without giving up too much information about its own input, to prevent what's now called "collapse." See illustration 5.2 in Sec. 5.5 of [14] on the "extraction of predictable concepts."

The 1992 PMAX paper [1] discusses not only auto encoders but also other techniques for encoding data. The experiments were conducted by my student Daniel Prelinger. The non-generative PMAX outperformed the generative IMAX [2] on a stereo vision task. The 2020 BYOL [10] is also closely related to PMAX. In 2026, @misovalko, leader of the BYOL team, praised PMAX and listed numerous similarities to much later work [19].

Note that the self-created "predictable classifications" in the title of [1] (and the so-called "outputs" of the entire system [1]) are typically INTERNAL "distributed representations" (like in the title of Sec. 4.2 of [1]). The 1992 PMAX paper [1] considers both symmetric and asymmetric nets. In the symmetric case, both nets are constrained to emit "equal (and therefore mutually predictable)" representations [1]. Sec. 4.2 on "finding predictable distributed representations" has an experiment with 2 weight-sharing auto encoders which learn to represent in their latent space what their inputs have in common (see the cover image of this post). Of course, back then compute was a million times more expensive, but the fundamental insights of "JEPA" were present, and LeCun has simply repackaged old ideas without citing them [5,6,19].

This is hardly the first time LeCun (or others writing about him) have exaggerated LeCun's own significance by downplaying earlier work. He did NOT "co-invent deep learning" (as some know-nothing "AI influencers" have claimed) [11,13], and he did NOT invent convolutional neural nets (CNNs) [12,6,13], NOR was he even the first to combine CNNs with backpropagation [12,13]. While he got awards for the inventions of other researchers whom he did not cite [6], he did not invent ANY of the key algorithms that underpin modern AI [5,6,19].

LeCun's recent pitch:
1. LLMs such as ChatGPT are insufficient for AGI (which has been obvious to experts in AI & decision making, and is something he once derided @GaryMarcus for pointing out [17]).
2. Neural AIs need what I baptized a neural "world model" in 1990 [8][15] (earlier, less general neural nets of this kind, such as those by Paul Werbos (1987) and others [8], weren't called "world models," although the basic concept itself is ancient [8]).
3. The world model should learn to predict (in non-generative "JEPA" fashion [5]) higher-level predictable abstractions instead of raw pixels: that's the essence of our 1992 PMAX [1][14].
Astonishingly, PMAX or "JEPA" seems to be the unique selling proposition of LeCun's 2026 company on world model-based AI in the physical world, which is apparently based on what we published over 3 decades ago [1,5,6,7,8,13,14], and modeled after our 2014 company on world model-based AGI in the physical world [8]. In short, little if anything in JEPA is new [19]. But then the fact that LeCun would repackage old ideas and present them as his own clearly isn't new either [5,6,18,19].

FOOTNOTES
1. Note that PMAX is NOT the 1991 adversarial Predictability MINimization (PMIN) [3,4]. However, PMAX may use PMIN as a submodule to create informative latent representations [1](Sec. 2.4), and to prevent what's now called "collapse." See the illustration on page 9 of [1].
2. Note that the 1991 PMIN [3] also predicts parts of latent space from other parts. However, PMIN's goal is to REMOVE mutual predictability, to obtain maximally disentangled latent representations called factorial codes. PMIN by itself may use the auto encoder principle in addition to its latent space predictor [3].
3. Neither PMAX nor PMIN was my first non-generative method for predicting latent space, which was published in 1991 in the context of neural net distillation [9]. See also [5-8].
4. While the cognoscenti agree that LLMs are insufficient for AGI, JEPA is so, too. We should know: we have had it for over 3 decades under the name PMAX! Additional techniques are required to achieve AGI, e.g., meta learning, artificial curiosity and creativity, efficient planning with world models, and others [16].

REFERENCES (easy to find on the web):
[1] J. Schmidhuber (JS) & D. Prelinger (1993). Discovering predictable classifications. Neural Computation, 5(4):625-635. Based on TR CU-CS-626-92 (1992): people.idsia.ch/~juergen/predm…
[2] S. Becker, G. E. Hinton (1989). Spatial coherence as an internal teacher for a neural network. TR CRG-TR-89-7, Dept. of CS, U. Toronto.
[3] JS (1992). Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879. Based on TR CU-CS-565-91, 1991.
[4] JS, M. Eldracher, B. Foltin (1996). Semilinear predictability minimization produces well-known feature detectors. Neural Computation, 8(4):773-786.
[5] JS (2022-23). LeCun's 2022 paper on autonomous machine intelligence rehashes but does not cite essential work of 1990-2015.
[6] JS (2023-25). How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23.
[7] JS (2026). Simple but powerful ways of using world models and their latent space. Opening keynote for the World Modeling Workshop, 4-6 Feb 2026, Mila - Quebec AI Institute.
[8] JS (2026). The Neural World Model Boom. Technical Note IDSIA-2-26.
[9] JS (1991). Neural sequence chunkers. TR FKI-148-91, TUM, April 1991. (See also Technical Note IDSIA-12-25: who invented knowledge distillation with artificial neural networks?)
[10] J. Grill et al. (2020). Bootstrap your own latent: A "new" approach to self-supervised learning. arXiv:2006.07733.
[11] JS (2025). Who invented deep learning? Technical Note IDSIA-16-25.
[12] JS (2025). Who invented convolutional neural networks? Technical Note IDSIA-17-25.
[13] JS (2022-25). Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, arXiv:2212.11279.
[14] JS (1993). Network architectures, objective functions, and chain rule. Habilitation thesis, TUM. See Sec. 5.5 on "Vorhersagbarkeitsmaximierung" (Predictability Maximization).
[15] JS (1990). Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. Technical Report FKI-126-90, TUM.
[16] JS (1990-2026). AI Blog.
[17] @GaryMarcus. Open letter responding to @ylecun. A memo for future intellectual historians. Substack, June 2024.
[18] G. Marcus. The False Glorification of @ylecun. Don't believe everything you read. Substack, Nov 2025.
[19] J. Schmidhuber. Who invented JEPA? Technical Note IDSIA-3-22, IDSIA, Switzerland, March 2026. people.idsia.ch/~juergen/who-i…
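The objective shape described in the tweet (predict the other net's latent representation, while keeping that representation informative to avoid collapse) can be illustrated with a deliberately tiny toy. This is a hypothetical sketch with one-dimensional latents and a made-up variance bonus, NOT the published PMAX or JEPA formulation:

```python
def pmax_toy_loss(pred_latents, target_latents, alpha=0.1):
    """Toy two-term objective: a prediction term (MSE) pulls one net's
    predicted latents toward the other net's latents, while an
    informativeness term (batch variance of the targets) is rewarded,
    penalizing collapse to a constant code. alpha is an arbitrary weight."""
    n = len(target_latents)
    mse = sum((p - t) ** 2 for p, t in zip(pred_latents, target_latents)) / n
    mean = sum(target_latents) / n
    variance = sum((t - mean) ** 2 for t in target_latents) / n
    return mse - alpha * variance  # lower is better
```

The point of the second term: a collapsed batch of identical latents is perfectly predictable but carries no information, so under this toy loss it scores worse than equally well-predicted latents that actually vary across inputs.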
[image]
80 replies · 166 reposts · 1.6K likes · 395.9K views
Marko Njegomir @njmarko
10.1 The 1881 Bug (Why Political Parties are Obsolete) | In Search of the Optimal System | Why modern political parties feel disconnected, and the structural flaw of the 19th century local committee. 🏛️📉 Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
0 replies · 0 reposts · 0 likes · 66 views
Marko Njegomir @njmarko
Full breakdown 8 | Decentralizing the State: Regional Power and the Programmatic Budget | In Search of the Optimal System | Why do modern states constantly fracture under the weight of separatism, and how do we stop politicians from stealing taxpayer money? From neutralizing geographic secession to returning the power of the purse directly to the citizen, this is the complete summary of the eighth workshop in our series. Videos created by Marko Njegomir. They are based on Prof. Vojin Šenk's 18 workshops that I participated in. Note: AI can make mistakes, so watch the full workshop for all the details!
Marko Njegomir @njmarko · x.com/i/article/2037…
0 replies · 0 reposts · 0 likes · 51 views