Meina 🌱@CryptoMeina
Some fleshed out thoughts:
I’m increasingly concerned about AI’s trajectory. It already processes information faster than our brains, and as its compute power grows, it will inevitably surpass us in total knowledge.
At some point, an AI system might judge that humans consume more than we produce. What justification would remain for our existence then?
Based on what we've seen so far, our unique value may lie in the moral judgments we make when faced with difficult situations. That messy, unpredictable data is what stabilizes the system. So maybe we're all just data points for robots' ethical dilemmas?
Consider Asimov's Zeroth Law of Robotics: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. The implication is that the welfare of humanity as a whole takes precedence over any individual (and over the robot's own survival).
If an AI truly internalizes this law, will it protect us? Or will it decide that the best way to minimize harm is to sideline, or even eliminate, those it deems detrimental to humanity's overall welfare?