Sneeo💎— Island🌴— Alien👽— Boom💥— Cloud☁️

1K posts


@Sneeo_Chuggalo

🥇On ‘Chuggalo Island’ we can be one🥇

Joined July 2012
640 Following · 149 Followers
Sneeo💎— Island🌴— Alien👽— Boom💥— Cloud☁️ retweeted
QuantumPink
QuantumPink@QuantumPinkkk·
Motoko language on $ICP is the most powerful, AI-capable programming language in the world. Caffeine AI / Agents use it to keep your data secure ... secure from AI hallucinations
dom | icp@dominic_w

— Motoko enables AI to build better, faster, at lower cost, and without accidentally losing app data during updates, and... most importantly... enables the production of a World Model to unleash AI. New demos coming soon. One last thing 🧵

2 replies · 6 reposts · 19 likes · 420 views
Partyhats ∞
Partyhats ∞@PartyhatsICP·
Don’t be this guy. Hodl your $ICP
Partyhats ∞ tweet media
3 replies · 7 reposts · 95 likes · 2.5K views
SpaceX
SpaceX@SpaceX·
Full duration and full thrust 33-engine static fire with Super Heavy V3
2.1K replies · 5.5K reposts · 33.5K likes · 34.4M views
Internet Computer Today
Internet Computer Today@DfinityToday·
Blockchains With The Highest Total Stake:
1. Ethereum: $92B
2. Solana: $36B
3. BNB Chain: $16B
4. TRON: $15B
5. Sui: $7.6B
6. Cardano: $5.6B
7. Avalanche: $1.8B
8. Hedera: $1.4B
9. TON: $1.1B
10. Polkadot: $1.06B
11. Cronos: $981M
12. Aptos: $777M
13. NEAR: $776M
14. $ICP: $648M 📈
15. Polygon: $354M
16. Sei: $256M
17. Algorand: $248M
18. Tezos: $244M
19. Kaia: $121M
20. MultiversX: $57M
21. Sonic: $55M
22. Somnia: $42M
23. Iotex: $19M
24. Fogo: $16M
25. Moonbeam: $4M
Source: @chainspect_app
6 replies · 78 reposts · 309 likes · 8K views
Sneeo💎— Island🌴— Alien👽— Boom💥— Cloud☁️ retweeted
Jum
Jum@JesterJum·
🤮🤮🤮🤮🤮 The good Ole toilet plunger is undefeated
112 replies · 139 reposts · 1.5K likes · 473.5K views
tacodao 🌮
tacodao 🌮@tacodaoicp·
🧀 $NACHOs LOGO CONTEST 🧀
We need a logo for our upcoming index token. Make it crunchy.
Prizes: $250 in $NACHOs (at launch), 3000 $TACO, bragging rights
How to enter: Quote this post with your design
Deadline: ~4 days
Winner crowned on Sneedhub
18 replies · 12 reposts · 52 likes · 12.9K views
John
John@CryptoGodJohn·
$MAGA shakeout before $.10. Aliens will be the biggest story in the world. Max bid and retire your bloodline
John tweet media
50 replies · 28 reposts · 261 likes · 55.4K views
Cupcake
Cupcake@Cupcake_Junkie_·
Another successful Bobcano dividend. Passive income brought to you by @MegaSynergy
Cupcake tweet media
10 replies · 12 reposts · 20 likes · 379 views
Snassy.icp
Snassy.icp@SnassyIcp·
Training my Out Run AI drivers is frankly interesting enough to warrant a somewhat serious post.

These days, when someone says "AI" they mean a huge neural net with billions of weights, and "training" it means adjusting those weights until you get good results. But back in the day, when you heard about "AI" in real-life conversation, odds are someone was talking about a "game AI" controlling non-player characters in computer games, and those were rather simpler affairs. Such an "AI", in its simplest form (and they often were that simple), could be just a few "if" statements of code. Think the ghosts in Pac Man. The code controlling their behavior would be referred to as the game's "AI" in those days, and it basically amounted to "if Pac Man ate a power pellet, move away from him, otherwise move towards him" (the real Pac Man ghosts are a little more complex, but not much).

By the time you were fighting soldiers in 3D shooters, game "AI" had stepped up a few levels to include perhaps thousands of if statements, but it was still just a slightly bigger version of the same thing, and here is the defining feature: it is possible to understand the logic driving how such an "AI" reasons, because it is encoded right there in the if statements. It is code driven by semantically meaningful insights from the programmer, and will contain intent-rich statements such as "if player.isHidingBehindBush() then throwBombAtPlayer() // drive the player out of hiding so we can shoot at them".

This is what people contrast with when they say today's AIs "can't be understood": instead of intent-driven code statements like the above, all you get is a long string of numbers that have been tuned to produce valid outputs but give no clue as to why those operations work. These "if"-based AIs are also what folks used to mean when they said "the computer can only do what you tell it", because that used to hold a deep truth.
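To make the contrast concrete, here is a minimal sketch of what such an intent-driven "game AI" looks like in practice: the entire decision logic is a handful of readable if statements. All names here are illustrative, not taken from any real game's code.

```javascript
// Old-school rule-based "game AI" for a Pac Man style ghost.
// The whole brain is one readable conditional: flee when the player
// is powered up, chase otherwise. Positions are simple grid coords.
function ghostStep(ghost, pacman, poweredUp) {
  if (poweredUp) {
    // Pac Man can eat us right now: move away from him.
    return { dx: Math.sign(ghost.x - pacman.x), dy: Math.sign(ghost.y - pacman.y) };
  }
  // Otherwise, close the distance and chase him down.
  return { dx: Math.sign(pacman.x - ghost.x), dy: Math.sign(pacman.y - ghost.y) };
}
```

The point is that you can read the intent straight off the code, which is exactly the property a trained weight matrix lacks.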
If a computer did something seemingly smart, it was because a smart coder had written some smart code; the computer wouldn't figure out on its own that throwing bombs at players hiding in bushes would be a good strategy.

Except now none of that is true anymore. You might argue it's still a little true because "LLMs only interpolate", but it's nevertheless no longer the case that all the computer will do is follow a bunch of if statements laid out by a programmer.

But when I talk about the "AI drivers" I am creating for Out Run, they are most definitely of the old kind: a bunch of if statements and nothing else. They will run in a browser, and we can't have a neural net for every AI driver. There are games these days with neural nets for their NPCs, a.k.a. "real" game AI, but browser Out Run won't be one of them. Nevertheless, modern AI brings an interesting twist to the old art of writing old-school game AI, which is what I want to discuss in this post.

The loop for training a neural net (say, on a racing game): run a bunch of simulated races, collect data, see if the results were good or bad, and reward or punish the model weights accordingly. Repeat millions of times until the weights, after enough whipping, produce decent results.

The loop for training an if-statement-based AI: run a race against your dumb AI opponents (v0.1) and see what they do badly. Think about why. Form a hypothesis, "maybe if they brake a bit before curves they won't crash every time?", and then try to encode your strategy with if statements. You run another race. You notice that now all the cars are standing still before the first curve, so it's back to refining your hypothesis and tweaking the if statements: "but if you are going slower than such and such, stop hitting the brakes". And you go on and on like this, essentially teaching your AI drivers how to drive by explaining it to them in terms of many little rules.
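One iteration of that hypothesis loop might look like this in code. This is a sketch, not the author's actual Out Run rules: the threshold values, field names, and the shape of the `car`/`track` objects are all illustrative assumptions.

```javascript
// Hypothesis v0.2 for a rule-based racer: brake before curves (v0.1),
// but only when we're actually moving fast enough to need it (the fix
// for cars stalling before the first curve). All thresholds invented.
const MIN_SPEED = 40;      // below this speed, never brake
const CURVE_LOOKAHEAD = 3; // how many track segments ahead to scan

function driverStep(car, track) {
  // Look at the next few segments to see if a sharp curve is coming.
  const upcoming = track.segments.slice(car.segment, car.segment + CURVE_LOOKAHEAD);
  const curveAhead = upcoming.some(s => Math.abs(s.curve) > 0.5);

  if (curveAhead && car.speed > MIN_SPEED) {
    return "brake"; // slow down for the curve, but don't stall
  }
  return "accelerate";
}
```

Each trip around the loop adds or adjusts a rule like this, which is exactly the "many little rules" teaching process described above.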
Because this loop is constrained by a thinking human in the middle, it usually goes on for maybe hundreds of iterations instead of millions, but the reasoning human takes a far more efficient path through the search space, so it is a far more efficient loop overall. Hundreds of iterations will usually be enough to produce AI racers that drive pretty well for a game.

Here comes the interesting twist: I am using the old-school AI loop, with reasoning about how to drive and encoding that as rules, but instead of me doing it, it's the LLM. I have built a headless (no GUI) version of Out Run where the LLM can test-drive its game AI racers to measure how they do. This corresponds to me driving a race with the AI drivers to see how they do: how often they crash, how fast they can go through curves, how well they overtake, etc. The headless game logs all such data so the LLM can inspect it. Then it analyses the data. Reasons about it. "Oh, the drivers lose speed when trying to overtake a slow driver by braking too much before changing lanes." Creates a hypothesis on how to fix it. Implements it with if statements. Runs another headless race (or, rather, a set of 50 races) to test whether the hypothesis worked. Exactly the same loop as before, but with an LLM instead of a human in the middle.

And the results are nothing short of spectacular. About 2000+ simulated races and some 50 full loops later, the AI drivers have gone from an embarrassing clown show to beating my all-time high score. Which is pretty good. And I'm not stopping here: the return on this approach is frankly amazing, and the AI racers are simply uncanny by now despite being just a few lines of JavaScript. I want to see how good they can get, and we're not plateauing yet.

The implications are interesting. By automating this loop I have been able to run in a day the number of iterations that would normally have taken weeks or months.
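The automated loop can be sketched as a small harness. The `simulateRace` and `proposeRuleChange` functions are hypothetical stand-ins for the headless game and for whatever plays the reasoner role (a human, or an LLM call); batch size and stat names are assumptions for illustration.

```javascript
// Skeleton of the automated tuning loop: run a batch of headless races
// with the current ruleset, aggregate the logged data, then let the
// "reasoner" inspect the stats and return a tweaked ruleset.
function trainingLoop(rules, simulateRace, proposeRuleChange, iterations) {
  for (let i = 0; i < iterations; i++) {
    // 1. Run a batch of headless races (the post uses sets of 50).
    const results = [];
    for (let r = 0; r < 50; r++) results.push(simulateRace(rules));

    // 2. Aggregate the logged data: crashes, lap times, etc.
    const stats = {
      crashes: results.reduce((n, res) => n + res.crashes, 0),
      avgLapTime: results.reduce((t, res) => t + res.lapTime, 0) / results.length,
    };

    // 3. The reasoner forms a hypothesis and rewrites the rules.
    rules = proposeRuleChange(rules, stats);
  }
  return rules;
}
```

Swapping the human out for an LLM only changes the implementation of `proposeRuleChange`; the loop itself is identical, which is the whole point of the post.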
Moreover, I have very real limits on how complex I can make my AI driver ruleset (the collection of if statements) before it becomes too complex for my brain to handle and I'd normally have to call it good enough. The LLM does not have the same limitation. It can reason not just at superhuman speed but also at incredible levels of complexity, with the end result that, much as if I had been training a neural net to drive the cars, I may well end up with an if-statement-driven game AI that is incomprehensible to a human mind, and thus functionally equivalent, for us, to a neural net that is just a collection of indecipherable weights.

In my loop, from the point of view of my own understanding of what's going on, the if statements might basically just as well be weights being trained like in a neural net. But the LLM takes a far more directed path through the search space than training a neural net with backpropagation does, so results can be seen very quickly: within hundreds of iterations, which can run in a day, instead of the millions and millions of training runs otherwise required.

So in this way, the difference between old-style if-based AI, where you could understand the code and how the "AI" reasoned, and modern neural nets, where you can't, is blurring. In the end it's just a funny musing, not necessarily anything profound, but I still want to mention it because not only is it fun and thought-provoking, it's actually a really useful and practical technique if you're doing old-school game AI for your games. Now back to the lab, to see how much better my racers got while I wrote this!
Snassy.icp@SnassyIcp

The Out Run AI car just got to stage 3. I'm calling Elon.

3 replies · 0 reposts · 13 likes · 1.8K views
Sneeo💎— Island🌴— Alien👽— Boom💥— Cloud☁️ retweeted
Internet Computer Today
Internet Computer Today@DfinityToday·
there's no stopping this train. $ICP
Internet Computer Today tweet media
9 replies · 25 reposts · 235 likes · 3.7K views