Do you remember when Google’s code of conduct included “Don’t be evil”? Let’s define evil as intentionally causing harm. Social media platforms did not set out to cause harm. Yet their algorithms optimize for profit by amplifying negative emotions, with no awareness of the harm they cause to individuals and democracy.
That gap between intent and outcome reflects how these systems are built. They are not written; they are trained. Training an AI model is less like writing code and more like shaping a developing brain: algorithms guide how connections change with experience. Even if we could map every one of those connections, we still couldn't reliably predict what the system will do. Artificial superintelligence (ASI) would be trained the same way, and it would be even further beyond our ability to predict or control.
Concerned AI experts and others warn that mitigating the risk of extinction from ASI should be a global priority alongside pandemics and nuclear war. This sounds like science fiction. But imagine a Maya warrior watching Spanish ships approach, unable to conceive that the men aboard carried weapons that could kill at a distance. Even without a physical form, ASI could exert real-world influence by exploiting the interconnected technologies we already depend on. We would be the Maya warriors, outgunned before we even understand why.
[Image credit: modified images by Niran Kasri/OpenClipart-Vectors from Pixabay]
The authors of If Anyone Builds It, Everyone Dies say that many leaders see the danger but stay silent. When we speak up, we give them the mandate to act. Tell your leaders to act now: build treaties that prevent anyone from building ASI.