AI and Governments: Two Black-Box Entities
As I was reading an introductory chapter on artificial intelligence (with Partial, a project I have been thinking a lot about, on my mind), I made an interesting connection that I think should at least be written down. So, here goes:
There are fears about AI, particularly what is called the “gorilla problem”: the fear of losing control to a higher entity, named after the way gorillas lost control of their own fate once humans came along.
However, would that really be a bad thing? That is the question that led me to this connection.
We already have a higher entity in the status quo, at least supposedly, one that was created to collectively bring us to a better future. This applies to a potential AI that might one day govern us, but it also applies to the government structures we have today: an entity that essentially sets goals for its citizens in light of a bigger goal.
Given that the bigger goal is consistent and programmed into the AI, the scenario might not change much: we simply would not be able to blame other people, because we would have lost control to our own creation.
This is often where the line is drawn. Human goals seem to be unpredictable, so you need an organic system that can adapt, change, or even know what we want. In other words, we already have something structurally similar, but governing in practice requires fluidity.
I have one challenge to this: How do we know?
Going back to the human governments we have today, do we really know what their goals for us are? Are we actually getting what we want? Ask any protester and the answer would be a resounding no. That’s why there are protests. That’s why there are check-and-balance mechanisms, fail-safes we can resort to. So assuming (and this is a huge assumption) that an AI knows what it is that humans want and what costs we are willing to pay, I wonder whether it would be such a big issue to have AI govern us (essentially an AI overlord, but that sounds like I’m gonna need a “TW” and to be ready for a backlash).
I guess this leads to some disclaimers. I don’t think my point is to justify AI as a higher entity in any way, but rather, maybe, to mitigate the hesitancy around AI development. There are many other mitigations that I think are good bets, fail-safes for example.