TonyVT SkarredGhost@SkarredGhost
This could be the superpower of the Neural Band that Meta is shipping together with the Ray-Ban Meta Display glasses. The video shows an old prototype bracelet by CTRL+LABS, the startup acquired by Meta, whose technology was used to develop the Neural Band. At the beginning of the video, the guy performs an action (a keyboard key press) with his hand, and the bracelet substitutes for the physical key press; by the end of the video, he doesn't even have to perform the action: it is sufficient that he "thinks" about it. As long as the brain is sending an electrical signal toward the fingers, the full movement is no longer necessary. Just the "intention" to move them is enough.
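To make the idea concrete, here is a minimal toy sketch, purely illustrative and not Meta's actual algorithm: the real Neural Band uses learned models over multi-channel EMG, but the core intuition is that a faint electrical burst, too weak to actually move the finger, can still be picked up and treated as an input. The threshold and window values below are invented for the example.

```python
# Toy intent detector over an EMG-like signal (illustrative only).
# "Intent" fires when a few consecutive rectified samples exceed a
# threshold, even if the burst is too weak to produce real movement.

def detect_intent(samples, threshold=0.3, window=3):
    """Return the start index of the first run of `window` consecutive
    samples whose rectified amplitude exceeds `threshold`, else None."""
    run = 0
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            run += 1
            if run == window:
                return i - window + 1
        else:
            run = 0
    return None

# A faint activation: not enough to press a key, but enough to detect.
faint_activation = [0.02, 0.05, 0.35, 0.40, 0.38, 0.10]
print(detect_intent(faint_activation))  # → 2 (index where the burst starts)
```

A real system would of course need calibration per user and robust rejection of involuntary activity, which is exactly the accuracy problem discussed below.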
If the Neural Band evolves to this stage, and users are trained for it, we may potentially not even need to perform air taps or writing gestures: we could just think about doing them. This would greatly reduce the fatigue of using XR devices, and the weirdness of using them on the street.
Then why isn't this feature available today? I guess the reason is twofold. First, accuracy: the full gesture is easier for the system to detect. Many people (me included) are praising the accuracy of the Neural Band, and this matters, because an input mechanism should have reliability close to 100%. Second, we as users have never been trained to just "think" about actions: it would feel weird and be hard to learn. I think we would need some training to learn how to perform this "thinking" operation properly.
I hope that something like this comes in the upcoming years... that would be the real game-changing paradigm compared to camera-based tracking.