Hooking an "AI" up to an external system to carry out actions on your behalf sounds like a stupid idea. By nature, "AI" systems are unpredictable and unreliable; they can fail in inexplicable ways. Nor does doing so add much, if any, value.
That said, one surely doesn't need a philosopher, vague concepts, or roundtables to figure out that this is an awfully bad idea.
Building robust and reliable systems has been an aim of the field for decades, and a system that fails inexplicably at carrying out its tasks should surely be ruled out.
There is also practically always a simpler way to carry out an action, one that doesn't involve a supremely over-engineered system.