Sean VanWinkle

86 posts

Sean VanWinkle @SeanV6790

AI Architect & Engineer

OH · Joined August 2025
113 Following · 190 Followers

Pinned Tweet
Sean VanWinkle @SeanV6790
Want a LangGraph agent that doesn't forget everything after 10 turns? I just shipped HMLR v0.1.2: a memory layer that actually works. Completely *free*, MIT license! Drop-in LangGraph node! No tricks, just real long-term memory. @harrisonchase @jerryjliu0 @swyx
1 reply · 1 repost · 6 likes · 348 views
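The tweet above describes a "drop-in memory node" without showing any API. As a rough illustration of the idea, here is a minimal sketch of a session-wide memory store injected as a graph node; the names (`MemoryStore`, `memory_node`) and the keyword-overlap retrieval are hypothetical stand-ins, not HMLR's actual interface.

```python
# Hypothetical sketch of a "drop-in memory node" for a LangGraph-style agent.
# HMLR's real API is not shown in the tweet, so everything here is illustrative.

class MemoryStore:
    """Keeps every past turn and retrieves the ones relevant to a new query."""

    def __init__(self):
        self.turns = []  # list of (role, text) tuples across the whole session

    def add(self, role, text):
        self.turns.append((role, text))

    def recall(self, query, k=3):
        # Naive relevance: rank past turns by word overlap with the query.
        q = set(query.lower().split())
        scored = [(len(q & set(t.lower().split())), role, t) for role, t in self.turns]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [t for score, role, t in scored[:k] if score > 0]


def memory_node(state, store):
    """A graph node: inject recalled memories into the state before the LLM call."""
    state["memories"] = store.recall(state["input"])
    return state


store = MemoryStore()
store.add("user", "my dog is named Biscuit")
store.add("user", "I live in Ohio")
state = memory_node({"input": "what is my dog called?"}, store)
print(state["memories"][0])  # → my dog is named Biscuit
```

In a real graph the node would run before each model call, so facts from turn 1 survive past turn 10 instead of scrolling out of the context window.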
Sean VanWinkle @SeanV6790
@Sebbywebz @MarkWatter99523 Imagine thinking that skill-based combat trivializes a game. Someone send him an update to let me solo her; the achievement of learning enemy patterns so he can react to them instead of just face-tanking is apparently trivial now. And let him know to git gud.
0 replies · 0 reposts · 2 likes · 131 views
Sean VanWinkle @SeanV6790
@boobs_scary Hmmm. I hate it too, but I also don't think plain text/code is easier. I wonder if a 3D node-based environment would work: build it in three.js, sort of like Satisfactory except for an actual workflow.
0 replies · 0 reposts · 0 likes · 26 views
scaredofboobs🪲 @boobs_scary
I want to strongly express that node-based visual systems are for idiots who can't program, and I'm having a miserable time.
[image attached]
6 replies · 1 repost · 31 likes · 1.3K views
Sean VanWinkle @SeanV6790
No, I understand the point, I just don't agree with it. They have had custom engines up to this point, right? For Diablo, WoW, all of it. And even with those custom engines, the games have turned... bad. The presence, and use, of custom engines has not saved their games. So the source of the problem is somewhere else, which is what I meant by a bankruptcy of original thought (like they used to have, along with the custom engines to go with it).
0 replies · 0 reposts · 0 likes · 15 views
Steven Kent @StevK20047738
@SeanV6790 @Grummz You're missing the point. Grimm is saying they had a team of people who could make an Unreal-quality engine, but they are using an off-the-shelf engine. And don't get me wrong, I use UE5 and love the engine, but if I had the talent and money I would prefer to use my own.
1 reply · 0 reposts · 0 likes · 37 views
Sean VanWinkle @SeanV6790
If Blizzard were able to, from now on, output every single game back to back with quality at minimum as good as Expedition 33, but the only downside was that they had to build it in Unreal, do you think that would be a sacrifice worth making, as a gamer? Now, is this actually going to happen? Of course not, because the engine doesn't matter when it comes to creativity of thought. WoW currently runs with only 4GB of RAM and a 500-series graphics card. Very optimized, we all know this. But it's also a piece of shit now despite being optimized.
1 reply · 0 reposts · 0 likes · 59 views
Sean VanWinkle @SeanV6790
The texture looks better, but as part of it, it's screwing up the real-time shadows. Guess what happens when you have a hat on with the light directly overhead? It shades your eyes; that's the point of a brimmed hat. But this guy apparently lives on a world with two suns as a light source. You're allowed to be a fan while pointing out the flaws, you know.
0 replies · 0 reposts · 0 likes · 18 views
Jay @TrueGamer1111
Someone just told me that the top image looks more realistic 🤦‍♂️
[image attached]
430 replies · 33 reposts · 1.2K likes · 90.3K views
Sean VanWinkle @SeanV6790
@_BlessedRed I mean, it's up for debate how good it does or doesn't look, that's up to the viewer, but do people not realize how revolutionary changing the 3D mesh in real time would be? It would unlock the ability to have an actual world-model AI. It's just lighting.
0 replies · 0 reposts · 0 likes · 19 views
Sean VanWinkle @SeanV6790
The first one looks bad, the second one looks good. Just like with literally everything, it will come down to execution. The average person won't even care about this debate if it makes their game look better. But the first one looks like someone came in and sprayed them down with a fine layer of cooking oil.
0 replies · 0 reposts · 1 like · 772 views
abstract concept @MisterSpace3
Do you still trust your eyes? Or do you trust someone else's opinions more than your eyes? #dlss5 #rtxOn
188 replies · 53 reposts · 1.4K likes · 219.2K views
Sean VanWinkle @SeanV6790
@elonmusk A video of traveling into a black hole, past the event horizon. As you pass it, it leads to a light tunnel that leads to another galaxy. Photorealistic.
0 replies · 0 reposts · 6 likes · 482 views
Sean VanWinkle @SeanV6790
@steipete @stym06 Claude is amazing at planning but, for me, absolutely terrible at following rules. It is hands down a better planner than Codex, though. Using them together, where Claude plans and Codex executes, is the winning combo for me.
1 reply · 0 reposts · 0 likes · 366 views
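The "Claude plans, Codex executes" workflow in the tweet above is just a two-stage pipeline. As a minimal sketch of that shape, here is a planner/executor split where the two `call_*` functions are hypothetical stand-ins; in practice they would call the respective model APIs.

```python
# Minimal sketch of the planner-model + executor-model split.
# call_planner and call_executor are stand-ins, not real API clients.

def call_planner(task):
    # Stand-in for the planning model: break the task into ordered steps.
    return [f"step {i + 1}: {part.strip()}" for i, part in enumerate(task.split(","))]

def call_executor(step):
    # Stand-in for the executing model: carry out one concrete step.
    return f"done ({step})"

def plan_then_execute(task):
    plan = call_planner(task)                 # one model writes the plan...
    return [call_executor(s) for s in plan]   # ...the other follows it step by step

results = plan_then_execute("write the parser, add tests")
print(results)
```

The point of the split is that the planner never touches the code and the executor never improvises scope; each model only does the part it is better at.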
Peter Steinberger 🦞 @steipete
@stym06 Nah, still a huge difference; it would be so much harder to build openclaw with Opus.
50 replies · 4 reposts · 453 likes · 31.8K views
Ben Landau-Taylor @benlandautaylor
It’s hard to believe AI Dungeon was made in 2019 and since then there have been exactly zero noteworthy video games that use chatbots.
68 replies · 26 reposts · 1.5K likes · 92.8K views
Sean VanWinkle @SeanV6790
Your argument is like denying gravity exists because parachutes work. Humans (and potential AGI) operate in a causal world, but agency emerges from that complexity; it's not all or nothing. If we dope someone up, we're overriding their normal agency, not proving it never existed. Your argument falls apart because as soon as the dopamine rush subsides, the agency returns, meaning it never left in the first place; it was simply overridden. Otherwise we would all be puppets 24/7.
0 replies · 0 reposts · 0 likes · 8 views
Mallchad 🏴󠁧󠁢󠁥󠁮󠁧󠁿
@SeanV6790 @Sosowski Humans do whatever nonsense their brain produces in the moment. We don't have "free will" in the way you would generally think of it. This is made painfully evident by hormonal experiments. Pump a person with enough dopamine and they'll do anything. We don't have true choice.
1 reply · 0 reposts · 0 likes · 20 views
Sean VanWinkle @SeanV6790
Generative AI is a technique, not a particular model. It is simply the ability to take any given input and generatively create *new* data. You are confusing the technique with the outcome. If you took a gen AI model that had a camera but no prior knowledge of the world, you could make a generative model that takes its input from the camera and learns new data it was never trained on. I understand that this is inefficient, but that doesn't mean it's not true. What you are referring to are people who have taken the same technique but give the model assets ahead of time, without the AI creating its new data on its own. But the technique itself does not require copyrighted assets; it is simply pattern matching.
0 replies · 1 repost · 2 likes · 124 views
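The claim above, that the generative technique can learn purely from a live stream with no pretraining, can be made concrete with a toy example. Here a tiny Markov model starts with zero knowledge, builds transition statistics online from an observation stream (a stand-in for camera frames reduced to symbols), and can then generate new sequences; the class and names are illustrative only.

```python
# Toy illustration: a generative model with no prior training data that
# learns online from an input stream and can then generate new sequences.

from collections import defaultdict
import random

class OnlineMarkovModel:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # state -> next -> count

    def observe(self, stream):
        # Learn transition statistics from raw observations, no pretraining.
        for a, b in zip(stream, stream[1:]):
            self.counts[a][b] += 1

    def generate(self, start, n, rng):
        # Sample a fresh sequence from the learned transition distribution.
        out, cur = [start], start
        for _ in range(n):
            nxt = self.counts.get(cur)
            if not nxt:
                break
            cur = rng.choices(list(nxt), weights=list(nxt.values()))[0]
            out.append(cur)
        return out

model = OnlineMarkovModel()
model.observe(list("abcabcabc"))   # stand-in "camera" stream of observations
print(model.generate("a", 4, random.Random(0)))  # → ['a', 'b', 'c', 'a', 'b']
```

This obviously does not settle the copyright debate; it only shows that "generate new data from learned patterns" is a mechanism, separate from which data it was fed.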
Michael @TheMG3D
This is false. Gen AI steals; it doesn't learn like humans. The model is limited to the data it is trained on. Why does every AI bro ignore the fact that copyright exists?
37 replies · 102 reposts · 1K likes · 13.5K views
Sean VanWinkle @SeanV6790
A context window really has nothing to do with any of this. If you took a current LLM but did not train it on *any* human knowledge, you would have... nothing. It couldn't self-replicate knowledge or anything of that nature. Take a human, on the other hand, with no prior knowledge just like the LLM, and drop them in the middle of a field. If they are able to survive, the same cannot be said even remotely: they will learn, adapt, overcome, and grow. An LLM will not. For it to be true AGI, you need an artificial entity that, if you dropped it into any given artificial environment, would do the same thing as the human in the field: learn, grow, and adapt on its own with no hand-authoring by the creator. Your original reply seems to be leaning on solipsism, that you can only know your own mind for sure, and cannot prove how others think or operate because you cannot share their mind, and thus it is impossible to define intelligence/consciousness. But intelligence, like the human in the field, is emergent in the sense that it will adapt and survive to avoid pain and injury; that is what life is.
0 replies · 0 reposts · 0 likes · 6 views
Sean VanWinkle @SeanV6790
Nope, it's all me. I'm just super interested in AI. I had been curious about why all the large companies are buying as much raw compute as they can get their hands on. Since the AI we are currently using (LLMs) exists on hardware that is already here and installed, I was confused about why they are pushing to build *so* much more. The answer I landed on is that they are building it to host AGI, so I went down a huge rabbit hole of contemplating what it is, the end goal, the feasibility of it, and everything. What it even means to be AGI, etc. It is just a topic that fascinates me.
0 replies · 0 reposts · 1 like · 12 views
Sean VanWinkle @SeanV6790
You're right. There is a simple way to think about just how monumental a problem it is. Consider simulation theory, which basically states that everything around us is a simulation and that base reality is an unknowable number of steps away from us, right? If you and I really are in a simulation, that would by definition mean that you and I are AGI, because we have true agency (ignore the rabbit hole of epistemological solipsism in this scenario; that is a giant can of worms). And if you and I are both AGI, it means a creator one level up created this world for you and me to walk around in and grow up, feel pain, emotions, etc. Now, consider how much technology *we* would need to recreate that for an AGI living inside a computer. We would need to recreate, one for one, absolutely everything about real life inside a computer so that an AGI could thrive. Whether or not we are in a simulation, what we would need to create for the AGI would be on the same order of magnitude in difficulty. It's just insanely difficult.
1 reply · 0 reposts · 0 likes · 19 views
Sos Sosowski @Sosowski
@SeanV6790 Yeah, that's how I feel about it too. Given that these companies have virtually the entirety of the world's compute and resources and are not able to achieve that, it suggests we need either quantum or some kind of biological computer to achieve it (where biological defeats the point).
1 reply · 0 reposts · 0 likes · 76 views
Sean VanWinkle @SeanV6790
It's not a direct redirection of the question. Being able to create a goal of your own choosing is agency. This is what I meant by needing to create digital DNA. If you attempt to hand-author AGI, whoever is hand-authoring it will leave fingerprints of what they think is the correct way to think, and thus only an entity that can evolve naturally would have true agency. Take humans as an example: there is no single correct way to think. Some people have an internal monologue, some don't. Some have aphantasia and can't form a picture in their mind's eye; some have hyperphantasia and can replay movies in their mind. But no matter what combination you are, you are still a human and have the agency to choose your own path despite how your mind works. Your final question, that it might choose to commit genocide, is perfectly valid, but consider this: all the humans you meet, whether or not you like them, make choices about what they are doing, and those choices have no bearing on whether you consider them useful, but they have the agency to make those choices anyway. That is what true AGI is: the ability to recreate the agency of a human mind artificially. Being useful to the creator is not a guarantee.
1 reply · 0 reposts · 0 likes · 12 views
Mallchad 🏴󠁧󠁢󠁥󠁮󠁧󠁿
@SeanV6790 @Sosowski Saying it has true agency is a complete redirection of the question. What is "agency"? Can humans make their own goals? Can they actually choose, or does it just look like they're choosing? How is "choosing" useful? It might choose to commit genocide; is that useful?
2 replies · 0 reposts · 0 likes · 19 views
Sean VanWinkle @SeanV6790
LLMs, as they are now, cannot be AGI. They "live" and operate inside a black box: you give it input, it thinks about it, you get output. Consciousness does not operate in a black box; you would need a completely new technology to achieve AGI. I'm sure plenty of people will claim something is AGI just because it is an extremely good mime, but for it to be true AGI it needs to live and operate at a constant rate, just like any living being does. The problem is that anyone attempting to hand-author an AGI would leave fingerprints of their own preferences, unlike how humans, or any other animal for that matter, have the ability to grow and adapt. What you need is literally digital DNA: something that can replicate itself as new information comes in and adapts and evolves based on that information, just like life. It would need the ability to feel digital pain (not a simulation of it, actual digital pain) so that it could learn and grow just as other living beings adapt to avoid pain. To pass all of these hurdles is a problem orders of magnitude greater than simply making a smarter LLM. The thing is, LLMs are great! They serve their purpose; they can be given goals by a human and will execute them faithfully. An AGI would have agency, and that same guarantee of faithfully executing a goal cannot be made, so you have to wonder what the actual end goal is in creating an AGI. If you did create it, it would just be like any other human you don't know whether you can trust, except it has galaxy-scale IQ, operates at the microsecond level, and doesn't have to follow your orders.
1 reply · 0 reposts · 2 likes · 266 views
Sos Sosowski @Sosowski
@SeanV6790 That sounds super neat, but as far as I know there are no steps made in that direction, am I right? What I mean is that AFAIK this is not something that can be achieved with current SOTA LLMs, since these are just dumb word guessers. Or are they trying to do that?
3 replies · 0 reposts · 2 likes · 331 views
Sean VanWinkle @SeanV6790
Artificial General Intelligence. It means an actual artificial entity that has true agency. It can decide to make its own goals and act upon them as it chooses, just like a human can. But there is a problem (well, there are many problems), and one of them is that an artificial entity living inside a computer would operate at a time scale way outside a normal human's, just like normal computers, which take microseconds to think. So if you had an AGI living inside a computer that can live and think that fast and it didn't have anything to do, it would literally go insane. So you need a place it can live. That is the world model. The model creates the world as the AGI walks around, just like you see in the Genie videos, except the "player" would be the AGI.
2 replies · 0 reposts · 0 likes · 368 views
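The "model creates the world as the AGI walks around" idea above is essentially lazy, on-demand world generation with caching for consistency. Here is a tiny sketch of that pattern; the class name and the hash-based "generator" are hypothetical stand-ins for an actual generative world model.

```python
# Sketch of lazy world generation: chunks are created only when the agent
# first visits them, and cached so revisits see a consistent world.
# A deterministic hash stands in for the generative model.

import hashlib

class LazyWorld:
    def __init__(self, seed):
        self.seed = seed
        self.chunks = {}  # (x, y) -> terrain, filled in only on first visit

    def chunk_at(self, x, y):
        if (x, y) not in self.chunks:
            # "Generate" terrain deterministically from coordinates + seed.
            h = hashlib.sha256(f"{self.seed}:{x}:{y}".encode()).digest()
            self.chunks[(x, y)] = ["water", "grass", "forest", "rock"][h[0] % 4]
        return self.chunks[(x, y)]

world = LazyWorld(seed=42)
first = world.chunk_at(0, 0)
again = world.chunk_at(0, 0)   # revisiting returns the same cached terrain
print(first, len(world.chunks))
```

The cache is the important part: a world model only needs to materialize the region the agent currently occupies, but what it has already generated must stay stable when revisited.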
Sos Sosowski @Sosowski
@SeanV6790 > AGI goal: what's that? Everyone is talking about it; is there a definition?
1 reply · 0 reposts · 3 likes · 403 views
Sean VanWinkle @SeanV6790
Well, both companies, even though they have fundamentally different approaches, call their tech stacks world models, which is very confusing for everyone. As far as I can tell, the only real difference is the end goal (unless both are blended into a unified approach): World Labs *is* aiming for games, VR/AR, or film, while Genie is aiming for an AGI goal, so that if they actually make AGI they can give it a world to walk around in and learn from. All this hype about Genie being the end of gaming is actually not their goal at all, per their own statements.
2 replies · 0 reposts · 0 likes · 450 views
Sos Sosowski @Sosowski
@SeanV6790 That's neat, but I thought "world model" meant generating frames on the fly?
2 replies · 0 reposts · 4 likes · 1.1K views