Akiraxtwo Super

1.4K posts

@akiraxtwo

I’m learning game development. One day, I’ll create a giant robot. AI software dev | GenAI | GPU | OpenClaw skills research | Data Viz | Game Dev | AI SaaS

Taipei · Joined July 2024
852 Following · 267 Followers
Akiraxtwo Super reposted
Three.js
Three.js@threejs·
In a year or two, anyone will be able to vibe-code their own Unity-like editor in an afternoon.
46
25
428
25.1K
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
🎮 Zero Three.js experience × GPT-5.5 → built an 11v11 football game

No Three.js background. Just jumped in and built it.

Result:
• Full 3D pitch (110m × 68m)
• 22 players with individual AI behavior
• Pass / through ball / charged shot / player switch
• Sakura garden theme surrounding the stadium
• Keyboard + Xbox controller support
• Single HTML file, 2,000+ lines

AI doesn't just help you write less code. It gives you the confidence to build things you never thought you could.

#ThreeJS #WebGL #GPT5 #AIAssistedDev
22
59
710
46.3K
Matthia
Matthia@Matthia570939·
@akiraxtwo Incredible, but was it really a one-shot? Or did you have to use /goal?
1
0
2
460
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
The performance is solid on desktop but might be heavy for mobile:
• Objects: around 2,000 meshes
• Draw calls: between 1,200 and 2,500 per frame (mainly due to shadows and numerous environment details)
• Optimization: the 560 spectators use InstancedMesh (very efficient), but environment assets like bamboo and blossoms are individual meshes, which are the main bottleneck
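The InstancedMesh optimization mentioned in this reply can be sketched in plain JavaScript. This is a minimal sketch, not the game's actual code: only the 560-spectator count comes from the post, while the elliptical stand layout, the radii, and the `spectatorTransforms` helper are hypothetical. The Three.js calls are shown in comments so the snippet stays self-contained.

```javascript
// Instancing lets all spectators share one geometry/material and render in a
// single draw call, instead of one draw call per individual spectator mesh.
const SPECTATORS = 560;

// Compute a position and facing for each spectator around an elliptical stand
// (hypothetical layout; radii are illustrative, not from the post).
function spectatorTransforms(count, radiusX = 60, radiusZ = 40) {
  const transforms = [];
  for (let i = 0; i < count; i++) {
    const angle = (i / count) * Math.PI * 2;
    transforms.push({
      x: Math.cos(angle) * radiusX,
      y: 0,
      z: Math.sin(angle) * radiusZ,
      rotY: -angle + Math.PI / 2, // roughly face the pitch centre
    });
  }
  return transforms;
}

const seats = spectatorTransforms(SPECTATORS);

// In Three.js, these transforms would feed one InstancedMesh:
//   const crowd = new THREE.InstancedMesh(geometry, material, SPECTATORS);
//   const dummy = new THREE.Object3D();
//   seats.forEach((t, i) => {
//     dummy.position.set(t.x, t.y, t.z);
//     dummy.rotation.y = t.rotY;
//     dummy.updateMatrix();
//     crowd.setMatrixAt(i, dummy.matrix);
//   });
//   crowd.instanceMatrix.needsUpdate = true;

console.log(seats.length); // 560
```

By contrast, the bamboo and blossom assets described above are individual meshes, so each one costs its own draw call, which is why they dominate the 1,200 to 2,500 draw calls per frame.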
0
0
0
106
dei
dei@webdevxp·
@akiraxtwo What about performance? How many objects/draw calls?
1
0
0
139
Digital Dimension
Digital Dimension@digdimension7·
@akiraxtwo THIS LOOKS AWESOME! Great job! Did you only use it for code, or assets also?
1
0
1
1K
Lenny Loop Chain
Lenny Loop Chain@Lenny_LoopChain·
@akiraxtwo Whoa, a Three.js football game? That's next level! Can we add some crowd animations to make it feel like a real match?
1
0
0
201
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
I was really excited when I saw @steipete sharing the new OpenClaw 2026.4.29 update! Previously, agent interactions in group chats often felt a bit stiff, with mixed results, but this time he completely revamped how agents communicate. It now feels so natural and smooth! After enabling visible replies, the whole group experience instantly became like true multi-agent collaboration. I highly recommend giving it another try.

If you’re like me and mainly use GPT models but sometimes find the performance lacking, he strongly suggests switching to the Codex harness. After trying it, the improvement is obvious: faster responses, better context understanding, and smoother task execution. Combining both really delivers that “boom” effect and solves my previous pain points.

Overall, this OpenClaw update has me even more optimistic about personal AI assistants. From group chat features to model integration, everything is evolving rapidly. As a heavy daily user, I think this update is definitely worth trying. Turn on both features and you’ll clearly feel the progress in the agent era. It’s incredibly satisfying! 🦞
Peter Steinberger 🦞@steipete

If you tried OpenClaw in group chats and got mixed results, you GOTTA try again. I changed how agents talk there, it IS SO GOOD NOW. #visible-replies docs.openclaw.ai/channels/group… And if you used GPT and got subpar performance, switch to codex harness. docs.openclaw.ai/plugins/codex-… Enable both and boom.

0
0
1
253
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
It’s becoming clearer why so many people have been saying lately that GPT and Claude suddenly got “dumber.” After both OpenAI and Anthropic released their official prompt engineering guides, a realization emerges: it’s not that the models got worse. It’s that they’ve become smart enough to stop tolerating unclear or lazy thinking 🤣🤣🤣

What’s even more interesting is that the two models are evolving in completely opposite directions.

Claude Opus 4.7 has become increasingly literal. In the past, it would proactively fill in gaps when instructions were vague. Now, it does exactly what it’s told: no more, no less, not a single extra guess 🤣🤣

GPT-5.5, on the other hand, has become more autonomous. Before, it required step-by-step guidance. Now, it only needs a clearly defined outcome and will determine the optimal path on its own.

This explains why old prompts are failing, but for completely opposite reasons. Vague prompts used on Claude now lead to increasingly narrow outputs. Overly detailed step-by-step instructions used on GPT become unnecessary noise.

For the past three years, the focus has been on learning how to teach models to do things. Now the direction has flipped: models are implicitly asking for structured thinking first.

This feels like the real essence of prompt engineering today. It’s no longer about teaching the model how to do something, but about ensuring the thinking behind the prompt is already clear. So the real bottleneck may not be the model’s capability, but the clarity of the person writing the prompt.

Looking ahead, the ones who win likely won’t be those who write the longest or most complex prompts, but those who know exactly what they truly want 🤔
阿绎 AYi@AYi_AInotes

I finally understand why so many people have been saying lately that GPT and Claude suddenly got dumber. Yesterday OpenAI and Anthropic both released official prompt engineering guides, and after reading them I realized it’s not that the models got dumber; it’s that they’ve finally become smart enough to stop tolerating humans who can’t be bothered to think things through 🤣🤣🤣 And the most interesting part is that the two models are evolving in completely opposite directions. Claude Opus 4.7 has become increasingly literal: it used to proactively fill in your vague instructions, but now it does exactly what you say and won’t guess a single extra word 🤣🤣 GPT-5.5 has become increasingly autonomous: you used to have to walk it through every step, but now you just tell it the result you want and it picks the optimal path itself. So the reasons old prompts fail are also completely opposite: vague prompts on Claude get increasingly narrow outputs, while detailed step-by-step instructions on GPT become redundant noise. For the past three years we’ve been learning how to teach models to do things; now it’s reversed, and the models are asking us to structure our own thinking first. That is really the essence of prompt engineering: it has shifted from teaching the model how to do something to first getting clear in your own head. So the real bottleneck may not be the model’s capability, but the clarity of thought of the person writing the prompt. I have a feeling the winners from here on won’t be the people who write the longest, most complex prompts, but the ones who know best what they actually want 🤔

0
0
0
101
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
McDonald Gundam RX-78-2 MGSD GPT-image-2
[image attached]
0
0
3
374
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
McDonald Gundam RX-78-2 SD Gundam
[image attached]
0
0
0
151
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
Sakura Musou is currently in development, with a playable online version available. This is a 3D voxel musou-style action game developed with the help of GPT-5.5, and it already has a clear gameplay loop: players fight across a larger sakura pagoda garden map, defend the pagoda, clear 4 enemy camps, and challenge the final Enemy Warlord.

Current features include: a larger map, 1P / 3P camera switching, combo-based combat, rolling, spin attacks, hammer slam, score tracking, and final rank results.

Play online: sakura-musou.vercel.app
GitHub: github.com/akiraxtwo/saku…

#SakuraMusou #ThreeJS #WebGame #IndieGame #GPT55
0
4
32
2K
Akiraxtwo Super
Akiraxtwo Super@akiraxtwo·
Today I went to McDonald’s and saw the Gundam collab.
Bought the double 4oz beef burger meal and got the RX-78-2 Striker figure.
The figure is pretty nice. They also have Gundam mugs and limited bags in the store. Just a simple share. #Gundam #鋼彈 #McDonalds
[4 images attached]
0
0
0
109