
atomlib
@atomlib
Friend of the pod | Yogurt male | Here to follow airspace & tech blogs. | Political views: Up Up Down Down Left Right Left Right B A
Yekaterinburg, Russia · Joined January 2012
875 Following · 298 Followers

Soviet Russia is closer than it appears.
The reason I have always loved science and engineering is that you eventually encounter reality. If you are any good, you will eventually find out that you are wrong. Lies only get you so far.
Sure, OpenAI or Meta could lie about how good their AIs are… but only to a point.
Quebec, where I live, is eerily similar to the tales told about Soviet Russia. When I tell colleagues about it, they often do not understand what I mean by “Soviet Russia.” They think I am saying that people are being sent to gulags in Quebec. They are not. Not physical gulags, at least.
There were different phases to the Soviet Union. Early on, it was a constant bloodbath. But over time, this proved unnecessary.
All you need to mimic Soviet Russia is a sufficiently powerful bureaucracy. But a specific type of bureaucracy: one willing to lie and misrepresent facts, shamelessly and on a large scale.
Contrary to what people assume, the workforce of the Soviet Union wasn’t incompetent. They couldn’t keep up with the USA, but their economy was growing and their productivity was steadily improving.
In what they cared about, they did relatively well. They had an impressive number of tanks and nuclear missiles. Their spy operations were top-notch.
Whether those tanks would work or fall apart in an actual battle is harder to say. When everything is covered with lies, you just can’t know.
See, Soviet Russia liked science and engineering but suffered from an addiction to lies. If the lying led to mass starvation through poor agricultural choices, so be it. The system still worked, somehow.
So why did it fall?
One significant factor is the bureaucratic gulags: people lied, were lied to, and everyone knew that everyone else was lying.
You had a deep collapse of credibility.
Trust is a critical fuel for innovation. Too many lies, and you won’t get anywhere. The feedback loop of trial and error breaks and cannot be replaced. That’s why you see so many organizations incapable of innovation: once nobody believes anything, you can’t move forward very fast.
Do not live by lies; they will catch up with you.




@Dkzz38008Dkzz4 @Ly_thanh_anh The pink one is narutomaki, probably.
I think that's just regular green onion.

@joshwhiton AI uses semicolons all the time. In fact, I've started using them more in my own writing since I began asking AI questions.

@SinaToossi how quickly do you think a 4,000 km-range missile can be developed?

👇 What Brett McGurk presents as a “gotcha” actually encapsulates the core, recurring failure in US Iran policy: a persistent inability to recognize how its own actions drive the very outcomes it then cites as justification.
Escalate, trigger a response, then use that response to justify both the policies that created the mess and further escalation.
We’ve seen this play out time and again in US Iran policy.
Iran’s 2,000 km missile limit was a voluntary restraint emphasized by Ali Khamenei, aimed at managing escalation, limiting dynamics that could fuel a wider security dilemma, and preserving space for diplomacy.
Dismantle that context—walk away from agreements, pursue maximalist pressure, escalate to all-out war to collapse the country—and Iran responds by shedding those constraints and moving up the escalation ladder.
It’s a self-reinforcing pattern—and a self-fulfilling prophecy that unsurprisingly leads to the kind of escalation and conflict many hawkish voices in Washington and Israel have long sought.
Brett McGurk@brett_mcgurk
Speaks for itself: Feb. 25, 2026: “We are not developing long-range missiles… we have limited the range below 2,000 kilometers” — Iran’s FM Araghchi (IRNA). March 20, 2026: Iran fires missiles at Diego Garcia—ranging 4,000 kilometers (WSJ). ⬇️

@ThePrimeagen Why does literally everyone look like Ryan Gosling

@BrianMcDonaldIE Well, he was somewhat popular as an actor, not necessarily "huge". He was a celebrity.

@SneedLives I looked it up. Yup, contains seed oils and soy.
walmart.ca/en/ip/Great-Va…
It's so much easier to just buy real chicken.

@SneedLives I started watching it. He wears a fedora indoors, he has terrible skin, and he started talking about his “fitness journey” while holding in his hands something pre-packaged and probably with tons of seed oils.
Anyway:


Everybody is talking about recursive self-improvement (RSI) and meta learning. Here is my old 2020 talk about this [1]. It has aged well. Example: humans still define the starts & ends of trials of many modern meta learners. My RSI systems since 1994 LEARN to (re)define them [2]!
[1] Meta Learning Machines in a Single Lifelong Trial (talk for workshops at ICML 2020 and NeurIPS 2021, based on earlier talks since 1994). Abstract: the most widely used machine learning algorithms were designed by humans and thus are hindered by our cognitive biases and limitations. Can we also construct meta learning algorithms that can learn better learning algorithms so that our self-improving AIs have no limits other than those inherited from computability and physics? This question has been a main driver of my research since I wrote a thesis on it in 1987 [2]. Here I summarize our work on meta reinforcement learning with self-modifying policies in a single lifelong trial (since 1994), and mathematically optimal meta-learning through the self-referential Gödel Machine (since 2003). Many additional publications on meta-learning since 1987 can be found in the RSI overview [2].
[2] J. Schmidhuber (AI Blog, 2020-2025). 1/3 century anniversary of first publication on recursive self-improvement (RSI) and meta learning machines that learn to learn (1987). For its cover I drew a robot that bootstraps itself. 1992-: gradient descent-based neural meta learning. 1994-: meta reinforcement learning with self-modifying policies. 1997: meta RL plus artificial curiosity and intrinsic motivation. 2002-: asymptotically optimal meta learning for curriculum learning. 2003-: mathematically optimal Gödel Machine. 2020-: new stuff!

@EsotericCofe When someone calls the hams steamed despite them being obviously grilled.

@ThePrimeagen Why not ImageMagick? magick convert image.webp image.png works on Windows too. Or magick convert image.webp -quality 80 image.jpg















