Manu_TechAndGames

15.7K posts

@AndroidBlogger

Developer, interested in many different tech fields. Working at Ubisoft Paris.

Joined February 2009
202 Following · 194 Followers
Manu_TechAndGames retweeted
MattVidPro @MattVidPro
This week in AI felt way more hands-on than usual:
- an agent that builds node workflows for you
- a multiplayer AI world-model game
- Google's upcoming AI Studio upgrades
- open-source routing + Godot game tools
- a long-memory paper explained with NotebookLM's new cinematic videos
Check it out! youtube.com/watch?v=0HxjfQ…
0 replies · 2 reposts · 8 likes · 1.1K views
Manu_TechAndGames @AndroidBlogger
@SebAaltonen Jensen said later that game devs can fine-tune DLSS5 to adapt it to their own artistic direction. That makes more sense, though I'm still not sure it's enough. I also find it strange that this key information was not part of the initial reveal.
2 replies · 0 reposts · 0 likes · 347 views
Sebastian Aaltonen @SebAaltonen
Turns out that DLSS5 was, after all, a generic Snapchat beautify post filter. No per-pixel developer control. Let's discuss requirements for a future AI filter:
- Needs reference/training material for each hero asset in the game, especially faces. The AI filter can't make a character's face look unrecognizable, but it's fine if the AI makes the face look closer to the reference face provided by the developer.
- Needs to know which pixels belong to which hero asset: a per-pixel classification ID channel from the developer. That's the only way to make it reliable and have it follow strict developer instructions.
- Can't change the art style and mood. That must be developer-controllable and testable for every scene.
- Can't remove fog and volumetrics like DLSS5 does. Crucial for mood in many scenes. Needs scene-depth understanding; color is not enough.
- Developer control at the pixel level. An AI filter that makes it cheaper to produce props and backgrounds is desirable. Devs can focus more on hero assets, but ruining hero asset design is not fine.
I understand that Nvidia's customers are gamers, not devs. I also understand that some gamers hate modern "woke" games and want a beautify filter to fix them. But for this to really succeed, I strongly believe Nvidia should work more closely with game devs and ask what they need, instead of ignoring them completely.
Osvaldo Pinali Doederlein @opinali

Daniel got important clarifications from NVIDIA. TL;DR: the "DLSS5 skeptics" were right about *everything*. 1⃣ It's a 2D AI filter. Input is only the color buffer & motion vectors. The model doesn't see geometry, lights, PBR properties, normals, anything 🧵 youtu.be/D0EM1vKt36s

33 replies · 19 reposts · 325 likes · 24.3K views
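Aaltonen's "per-pixel classification ID channel" can be made concrete: gate the filter's output with an ID buffer so hero-asset pixels pass through untouched. The sketch below is a minimal toy illustration of that gating idea; the asset IDs, buffer layout, and function names are my own assumptions, not anything DLSS or Nvidia actually exposes.

```python
import numpy as np

HERO_IDS = {1, 2}  # hypothetical asset IDs the filter must never alter

def apply_filter_with_id_mask(color, filtered, asset_id):
    """Blend an AI-filtered frame back into the original, but only on
    pixels whose per-pixel classification ID is NOT a hero asset.
    color, filtered: (H, W, 3) float arrays; asset_id: (H, W) int array."""
    editable = ~np.isin(asset_id, list(HERO_IDS))  # (H, W) bool mask
    out = color.copy()
    out[editable] = filtered[editable]             # hero pixels stay untouched
    return out

# toy frame: left half is hero asset 1, right half is background (ID 0)
color = np.zeros((2, 4, 3))
filtered = np.ones((2, 4, 3))
asset_id = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])
out = apply_filter_with_id_mask(color, filtered, asset_id)
```

Hero pixels keep their original color while background pixels take the filtered value, which is the "cheaper props and backgrounds, untouched hero assets" split the tweet asks for.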
Manu_TechAndGames retweeted
ℏεsam @Hesamation
Bro created a skill inspired by Karpathy's autoresearch to fine-tune his other Claude Code skills and iteratively make them better. One skill went from 56% → 92% in just 4 rounds of changes. The method is to define a set of tests for your skill that capture what you want to improve, then change the skill slightly and check whether the score improves.
Ole Lehmann @itsolelehmann

x.com/i/article/2033…

64 replies · 260 reposts · 3.4K likes · 741.2K views
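The loop described in that tweet is essentially greedy hill climbing over skill text: score the current version against a fixed test suite, propose a small edit, and keep it only if the score rises. A minimal sketch, where `mutate` and `score` are hypothetical stand-ins for an LLM edit step and a real eval harness:

```python
import random

def improve_skill(skill, mutate, score, rounds=4):
    """Greedy hill climbing: accept a proposed edit only if it scores
    higher on the fixed test suite than the current best version."""
    best, best_score = skill, score(skill)
    history = [best_score]
    for _ in range(rounds):
        candidate = mutate(best)
        s = score(candidate)
        if s > best_score:                 # keep only improvements
            best, best_score = candidate, s
        history.append(best_score)
    return best, history

# toy stand-ins: a "skill" is a number, score is closeness to a target
random.seed(0)
target = 0.92
score = lambda skill: 1 - abs(target - skill)
mutate = lambda skill: skill + random.uniform(-0.1, 0.2)
best, history = improve_skill(0.56, mutate, score, rounds=4)
```

Because rejected edits are discarded, `history` is non-decreasing by construction; the real version would diff skill files instead of nudging a number, but the accept/reject loop is the same.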
Manu_TechAndGames retweeted
Defend Intelligence (Anis Ayari)
Unpopular opinion: OpenClaw was just hype. It was too general-purpose, and it ended up turning into spaghetti. I've already uninstalled it from all my setups and instead took its good ideas, like the WhatsApp config, and built my own agent. Honestly, all you need to do is build on top of Claude Code or Codex, and you can make something that really suits you. So what does that mean? Simply that overly general solutions don't work in the long run. The AI agents of the future will have to be custom, with minimal friction. Contradictory, isn't it? Actually, I believe in a vision of an assistant that automatically discovers and adapts itself to your usage.
39 replies · 14 reposts · 203 likes · 42.6K views
Manu_TechAndGames retweeted
Casey Muratori @cmuratori
Just wanted to mention: I watch @digitalfoundry regularly. I saw their DLSS 5 video when they posted it. It was actually the first time I saw DLSS 5. While I was surprised they liked it, since in my opinion it did not look good, that's one of the reasons I watch other people's shows. If I only wanted to hear my own opinions of things, I'd just watch my own channel :)

I know it's stressful when a large percentage of your audience is mad about something. I can absolutely understand why Digital Foundry wanted to backpedal a bit on their initial strongly favorable coverage. But for that same reason, I felt like I should take this opportunity to say: I watch Digital Foundry regularly. I was happy to see their initial coverage of DLSS 5. I had no problem with it, and I couldn't care less if their opinion about it differs from mine. If anything, that's a plus.

And furthermore, it seems like both @dark1x and @Dachsjaeger weren't as enthusiastic about it. That actually makes me look forward to more coverage on the channel, since having both people who like and people who dislike DLSS 5 should make for some great videos!
Digital Foundry @digitalfoundry

The big DLSS 5 machine learning debate and why we should have waited before posting our first round of coverage - today's video: youtu.be/5dTTfjBAFzc

42 replies · 31 reposts · 1.1K likes · 115.1K views
Manu_TechAndGames retweeted
Matt Pocock @mattpocockuk
Doing some experiments today with Opus 4.6's 1M context window, trying to push coding sessions deep into what I would consider the 'dumb zone' of SOTA models: >100K tokens. The drop-off in quality is really noticeable: dumber decisions, worse code, worse instruction-following. Don't treat a 1M context window any differently. It's still 100K of smart, and 900K of dumb.
152 replies · 59 reposts · 1.2K likes · 152.2K views
Manu_TechAndGames retweeted
InSpatio @InSpatio_AI
We don't generate videos. 🎬 We generate worlds from videos. 🌍
Introducing InSpatio-World, the world's first open-source real-time 4D world model‼️
Your input: a video clip. Our output: a dynamic, navigable, persistent world.
🕹️ explore freely across viewpoints
⏪ control time forward and backward
🔓 open-source and ready to build on :)
Live demo: 🔗 world.inspatio.com
Code & weights: 🔗 github.com/inspatio/inspa…
Project page: 🔗 inspatio.github.io/inspatio-world
24 replies · 118 reposts · 704 likes · 98.9K views
Manu_TechAndGames retweeted
OpenArt @openart_ai
Today, we're launching a new way to create with AI. With OpenArt Worlds, you can generate a fully navigable 3D environment from a single prompt or image, step inside it, and capture shots exactly the way you envision them. No more starting over. No more inconsistent scenes. You build the world once, and create inside it.
• Move through your scene freely
• Find your angles
• Add characters and elements
• Capture production-ready shots
289 replies · 685 reposts · 3.9K likes · 7.2M views
Manu_TechAndGames retweeted
AK @_akhaliq
WorldCam: Interactive Autoregressive 3D Gaming Worlds with Camera Pose as a Unifying Geometric Representation. Paper: huggingface.co/papers/2603.16…
7 replies · 26 reposts · 152 likes · 14.7K views
Manu_TechAndGames retweeted
Kimi.ai @Kimi_Moonshot
Introducing Attention Residuals: rethinking depth-wise aggregation. Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.
🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.
🔗 Full report: github.com/MoonshotAI/Att…
330 replies · 2.1K reposts · 13.5K likes · 4.9M views
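The core idea above, attending over preceding layers instead of uniformly summing them, can be sketched in a few lines of NumPy. This is a toy, single-token illustration of the general mechanism, with shapes and projections as my own assumptions; it is not Moonshot's actual Block AttnRes implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8           # hidden size
n_layers = 4

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attn_residual(history, w_q, w_k):
    """Replace the uniform residual sum with learned, input-dependent
    attention over all earlier hidden states.
    history: (L, d) stack of previous layer outputs for one token."""
    q = history[-1] @ w_q                   # query from the latest state
    k = history @ w_k                       # keys from every earlier state
    weights = softmax(k @ q / np.sqrt(d))   # (L,) mixture over the depth axis
    return weights @ history                # selective retrieval of the past

x = rng.standard_normal(d)
history = [x]
for _ in range(n_layers):
    w_q, w_k = rng.standard_normal((d, d)), rng.standard_normal((d, d))
    f = rng.standard_normal((d, d))         # stand-in for the layer's transform
    mixed = attn_residual(np.stack(history), w_q, w_k)
    history.append(mixed + np.tanh(mixed @ f))  # residual on the attended mix
out = history[-1]
```

A plain residual network would use `history[-1] + f(history[-1])`; here each layer can instead re-weight every earlier representation, which is what lets the network avoid dilution of early-layer features. The Block AttnRes compression step from the tweet is omitted for brevity.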
Manu_TechAndGames retweeted
Eric Lengyel @EricLengyel
New blog post: "A Decade of Slug". It covers the evolution of the Slug font rendering algorithm and includes an exciting announcement: the patent has been dedicated to the public domain. terathon.com/blog/decade-sl…
45 replies · 339 reposts · 2.1K likes · 223.3K views
Manu_TechAndGames retweeted
ℏεsam @Hesamation
Students score ~28% higher simply for using pen and paper. It's crazy. Knowledge is not stored linearly in your brain, but as a graph of interconnected concepts. Handwriting is more intuitive and engages your brain in absorbing information rather than just recording it. It just works.
Brandon Luu, MD @BrandonLuuMD

Students who took notes by hand scored ~28% higher on conceptual questions than laptop note-takers. Writing forces your brain to process and compress ideas instead of copying them.

51 replies · 560 reposts · 5.7K likes · 188.3K views
Manu_TechAndGames retweeted
Cursor @cursor_ai
We trained Composer to self-summarize through RL instead of a prompt. This reduces the error from compaction by 50% and allows Composer to succeed on challenging coding tasks requiring hundreds of actions.
86 replies · 101 reposts · 1.6K likes · 215.7K views