There is a great deal of worry about AI circulating in the media and in government. Here is how I think it could be managed:
Thousands of predictive models of various sorts, built for various purposes, have been in use for years; AI is simply another type of predictive model. Every model I am aware of has what are known as “boundary conditions” set before it is run, whatever its purpose: predicting the movement of groundwater contamination, for example.
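To make that concrete, here is a minimal sketch, in Python, of a one-dimensional contaminant-transport model. The grid size, time step, and diffusion coefficient are illustrative assumptions, not real site parameters; the point is that the boundary conditions pin the edges of the domain, and nothing outside the grid is ever computed.

    # Minimal sketch: 1-D contaminant diffusion with fixed boundary
    # conditions. Grid size, time step, and the diffusion coefficient
    # are illustrative assumptions, not real site data.

    N = 100            # number of grid cells in the modeled domain
    D = 0.1            # diffusion coefficient (assumed)
    dt, dx = 0.1, 1.0  # time step and cell width (assumed)

    conc = [0.0] * N
    conc[N // 2] = 100.0  # contaminant released mid-domain

    for step in range(1000):
        new = conc[:]
        for i in range(1, N - 1):  # interior cells only
            new[i] = conc[i] + D * dt / dx**2 * (
                conc[i - 1] - 2 * conc[i] + conc[i + 1])
        # Boundary conditions: the domain edges are pinned at zero.
        # The model never computes anything outside cells 0..N-1;
        # the calculation simply stops at the boundary.
        new[0], new[-1] = 0.0, 0.0
        conc = new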
These boundary conditions limit the range of predictions: they prevent nonsensical results, keep the model from running endlessly on too large a dataset, and cut off the calculations past the point where going further is unnecessary. Boundary conditions can be inserted into an AI just as easily: set the code so that it simply cannot venture into certain areas, so that it is never allowed to go beyond the point where it is useful to humans, and certainly not to the point where it decides we are stupid garbage.
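In software terms, the AI equivalent might be a hard check wrapped around the model before it runs. The sketch below is one possible way to do that; the ALLOWED_TOPICS list, the classify_topic function, and the generate function are all hypothetical stand-ins, not any real system's API.

    # Minimal sketch of a "boundary condition" on an AI system: a
    # wrapper that refuses to operate outside an allowed domain.
    # ALLOWED_TOPICS, classify_topic, and generate are hypothetical
    # placeholders, not any real library's API.

    ALLOWED_TOPICS = {"groundwater", "hydrology", "remediation"}

    def classify_topic(prompt: str) -> str:
        # Stand-in classifier; in practice this would be a separate,
        # simpler rule set or model the main AI cannot override.
        for topic in ALLOWED_TOPICS:
            if topic in prompt.lower():
                return topic
        return "out_of_bounds"

    def generate(prompt: str) -> str:
        # Stand-in for the underlying AI model.
        return "model output for: " + prompt

    def bounded_generate(prompt: str) -> str:
        # The check runs *before* the model does, just as a numerical
        # model's domain is fixed before the solver starts.
        if classify_topic(prompt) == "out_of_bounds":
            return "Request is outside the model's permitted domain."
        return generate(prompt)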
Perhaps this is where regulation could come in: mandating such boundary conditions. That said, rogue countries could still ignore such safety protocols, so detection methods will be needed, possibly carried out by the AIs themselves.