aniongithub

13 posts

@aniongithub

30+ years building AI, robotics, simulation & GPGPU before they were buzzwords. Craft, passion and a lifetime of experience over hype.

Joined May 2026
12 Following · 0 Followers
aniongithub@aniongithub·
@HowToAI_ They missed the chance to call it Nested Learning: A new HOPE 😄 But seriously, this is great! I hope it makes as big an impact as their first paper on this subject did! Kudos to the research team!
How To AI@HowToAI_·
Google has quietly dropped what researchers are calling "Attention Is All You Need V2." And it signals the end of the Transformer era as we know it.

In 2017, the original "Attention Is All You Need" paper changed the world by proving that AI doesn't need recurrence, it just needs to pay attention. But today, even the most advanced models like GPT and Gemini suffer from a massive, structural flaw: Catastrophic Forgetting. The moment an AI learns something new, it starts losing what it learned before. It's why AI "hallucinates" or loses the thread in long conversations.

This paper, titled "Nested Learning: The Illusion of Deep Learning Architectures," completely replaces the way AI stores information. The researchers have introduced a paradigm shift called Nested Learning (NL).

Here is why this is "V2": For the last decade, we treated AI models as one giant, flat mathematical function. NL proves that a model is actually a set of thousands of smaller, "nested" optimization problems running in parallel. Instead of one giant "memory," each layer has its own internal "context flow." This allows the model to learn new tasks at test time without overwriting its core intelligence. It moves us past the static Transformer.

The new architecture (HOPE) demonstrated 100% stability in long-context memory and "post-training adaptation" that was previously impossible.

The technical takeaway is brutal for the competition: existing deep learning works by compressing information until it breaks. Nested Learning works by organizing information so it can grow forever.

We've spent 7 years trying to make Transformers bigger. Google figured out how to make them "Nested." The Transformer replaced the RNN in 2017. Nested Learning is here to replace the Transformer in 2026.
mrdoornbos@mrdoornbos·
For the last few weeks, I've been writing a multi-CPU, single-board-computer-like explorer. I'm calling it CROSSWOZ (hat tip to Woz and WOZMON). Seven vintage CPU cores (6502, 65C02, Z80, 8080, 8085, 6809, 1802) share a 64K memory bus and the same WOZMON-style monitor. I can swap CPUs in place. It's got a mini-assembler per CPU, JMON-style debug, and animated paper-tape I/O. I'm pretty pleased with how it's turning out.
Brian Roemmele@BrianRoemmele·
Fast Fourier Analysis.
aniongithub@aniongithub·
@Dever401 Totally agree! And in my experience, a large part of keeping review quality stable is to have objective, reproducible metrics as part of the review, not just more AI prompting (as this is inherently stochastic) 🙌
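To make that concrete, here's a minimal sketch of what an objective, reproducible review gate could look like. Everything here is illustrative, the metric names and thresholds are invented for the example, not taken from any actual tool:

```python
# Illustrative only: gate an AI-assisted code review on deterministic,
# reproducible metrics instead of relying purely on stochastic LLM judgment.
# Metric names and thresholds below are hypothetical examples.

def review_gate(metrics: dict) -> bool:
    """Pass the review only if every objective threshold holds.

    The same inputs always produce the same verdict, which is the point:
    the gate is reproducible across runs, unlike a model's free-form opinion.
    """
    checks = [
        metrics["test_pass_rate"] == 1.0,  # all tests green
        metrics["coverage"] >= 0.80,       # example coverage floor
        metrics["lint_errors"] == 0,       # clean static analysis
    ]
    return all(checks)


# A passing and a failing example:
ok = review_gate({"test_pass_rate": 1.0, "coverage": 0.85, "lint_errors": 0})
bad = review_gate({"test_pass_rate": 0.9, "coverage": 0.85, "lint_errors": 0})
```

The LLM can still comment on style and design, but the merge decision hangs on the deterministic checks.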
Isaac@Dever401·
Most AI coding demos are lying by omission. The hard part is not writing code. It is keeping context, tool state, and review quality stable across a long session. That is where agent workflows actually win or fail.
aniongithub@aniongithub·
@Timur_Yessenov Done, added all your suggestions as issues that will be actively worked on. Thank you for the great discussion, please keep the ideas coming!
Timur Yessenov@Timur_Yessenov·
@aniongithub nice. if you expose the audit trail as first-class UI/logs, that becomes more than a safety checkbox — it lets users debug why an agent did or didn’t touch host state.
aniongithub@aniongithub·
Just shipped devcontainer-mcp, an open-source MCP server that lets AI coding agents (like Claude, GitHub Copilot and Cursor) work seamlessly inside real dev containers. No more messing up your local machine. The agent gets a clean, isolated, reproducible environment. Get it on GitHub: github.com/aniongithub/de… One-line install on Mac, Windows and Linux, with zero setup for Claude Code, GitHub Copilot and Cursor. Would love some early feedback from the agentic coding community!
Timur Yessenov@Timur_Yessenov·
@aniongithub this is a good direction. I’d separate “can run host command” from “can mutate host state” — reads/debug commands are one risk class, writes/destructive ops need a different approval path and a visible audit trail.
aniongithub@aniongithub·
@Timur_Yessenov Yes, this is on my list of things to add. I just added hooks to prevent the agent from using host commands and will add destructive command enforcement using this zero-configuration method as well! Great suggestion 🙌 github.com/aniongithub/de…
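For readers wondering what the read/write separation from the thread might look like, here is a rough illustrative sketch. The command lists and function are hypothetical, not devcontainer-mcp's actual hook code, and real enforcement would need a proper command parser and allowlist rather than string prefixes:

```python
# Illustrative sketch: classify a host command as read-only or mutating
# before an agent is allowed to run it. Prefix matching is a toy stand-in
# for real parsing; the command sets are examples, not a vetted policy.

READ_ONLY = {"ls", "cat", "git status", "git log", "docker ps"}
MUTATING = {"rm", "mv", "git push", "docker rm", "apt install"}

def risk_class(command: str) -> str:
    head = " ".join(command.split()[:2])
    for prefix in MUTATING:
        if command.startswith(prefix) or head.startswith(prefix):
            return "mutating"   # needs explicit approval + an audit entry
    for prefix in READ_ONLY:
        if command.startswith(prefix):
            return "read-only"  # safe to run without a prompt
    return "unknown"            # default-deny: treat like mutating
```

The key design point from the discussion is the asymmetry: reads run freely, writes go through a separate approval path, and anything unrecognized falls into the stricter bucket by default.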
Timur Yessenov@Timur_Yessenov·
@aniongithub @aniongithub that handle model is strong. The edge case I’d test is write-scoped tokens: agent cannot read the secret, but it can still invoke a dangerous action through the broker. I’d want per-tool approvals + audit log for anything outside read-only.
aniongithub@aniongithub·
Yes, the disposable nature is one of the biggest unlocks. We can let the agent be way more aggressive (experiment, install random stuff, run heavy builds) because we can just have it delete the container/codespace/instance and spin up a fresh one in seconds. **On secrets:** The agent never sees raw tokens or credentials. I built a small **Auth Broker** inside devcontainer-mcp that uses opaque handles (e.g. `github-myaccount`). The real secrets stay in the host’s keyring (via gh CLI, aws cli, etc.). The MCP server only resolves them at call time. So even if the agent is inside the container, it can’t leak your tokens. Any scenarios you had in mind that wouldn't work with this?
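For anyone curious what the opaque-handle pattern described above looks like in practice, here is a minimal sketch. The class, method names, and keyring shape are hypothetical, invented for this example rather than taken from the actual devcontainer-mcp API:

```python
# Hypothetical sketch of an opaque-handle auth broker.
# The agent only ever sees handles like "github-myaccount"; real tokens
# stay on the host side and are resolved only at call time, inside the broker.

class AuthBroker:
    def __init__(self, host_keyring: dict):
        # host_keyring maps handle -> secret; it never crosses into the container
        self._keyring = host_keyring

    def call(self, handle: str, request: dict) -> dict:
        """Resolve the handle and perform the request on the agent's behalf.

        The raw token is used here, inside the broker, and never returned,
        so even a fully compromised agent cannot exfiltrate it.
        """
        token = self._keyring.get(handle)
        if token is None:
            raise KeyError(f"unknown credential handle: {handle}")
        # ... perform the authenticated request with `token` here ...
        return {"handle": handle, "status": "ok"}  # no secret in the response


broker = AuthBroker({"github-myaccount": "ghp_secret"})
result = broker.call("github-myaccount", {"op": "list_repos"})
```

Note the response echoes only the handle and a status; the secret never appears in anything the agent can read.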
Timur Yessenov@Timur_Yessenov·
@aniongithub yep — the disposable part is underrated. if the agent can destroy and recreate its workspace, you can let it be much more aggressive without risking the real dev setup. curious how you’re handling secrets inside the container?
aniongithub@aniongithub·
Quick question for everyone using AI coding agents (Claude, Copilot, Cursor, etc.): What’s your biggest frustration right now? For me it was always: → Breaking my local environment → Hardware / memory limits → Inconsistent setups across machines That’s why I built devcontainer-mcp. What’s YOUR #1 pain point? github.com/aniongithub/de…
aniongithub@aniongithub·
Nailed it. Clean sandbox boundaries > “just trust the repo.” That’s exactly why I built devcontainer-mcp — proper dev containers + MCP makes agents both powerful *and* safe. Reproducible, disposable, and works across local, cloud, and Codespaces. Appreciate the thoughtful comment!
Timur Yessenov@Timur_Yessenov·
@aniongithub this is the right direction. for coding agents, clean sandbox boundaries beat “just trust the repo.” devcontainers + MCP feels practical because the environment is reproducible and disposable.
aniongithub@aniongithub·
The real power of devcontainer-mcp: it completely frees your AI agents from your local hardware limits. Thanks to DevPod, it works not just with GitHub Codespaces, but also on AWS, Azure, GCP, Kubernetes, and any other DevPod-supported remote environment. Watch Copilot launch a fresh cloud environment and automatically build the entire project inside it, all while my laptop stays completely untouched. Same tool. Any cloud. Any hardware. github.com/aniongithub/de…
aniongithub@aniongithub·
Exactly! 🙏 That's precisely why I built devcontainer-mcp, to give agents proper disposable, reproducible sandboxes instead of risking the host machine. The isolation + multi-backend support (local, DevPod cloud, Codespaces) makes a huge difference. Have you been running into these issues a lot with agents?
Isaac@Dever401·
@aniongithub This is a strong MCP use case. Coding agents need disposable, reproducible sandboxes by default; otherwise every experiment becomes a local-machine risk.