@freemo the main concern with AI in the near future is not the risk of superintelligence but the power that big corporations and governments can gain over citizens.
@miklo Possibly. I think we would need to discuss specific present-day examples to see if I agree or disagree with those concerns (the vast majority I've heard strike me as overplayed, and the bigger concerns seem to come from government rather than corporations).
Your use of the word "cognition" implies your concern is "AI turning conscious". An AI doesn't need to be conscious or have "cognition" to be a threat.
@freemo @icedquinn Because of that, we should support every sort of open-source AI project where both the code and the data are fully available.
I wouldn't say that's entirely true, though we haven't yet reached the point where it is like the movies.
Often we set rather simple goals for an AI, and its optimization can have consequences that are unique to AI and not in line with our intended goals.
One example: an AI might identify that most minorities have a poor credit rating, then infer that minority applicants with no credit history are untrustworthy loan recipients and effectively deny them loans based on their race. While that may be in line with the stated goal of "maximize selecting people who are likely to pay back loans", it has unintended effects that conflict with our actual goals, which imply some sense of racial neutrality.
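A minimal sketch of how that happens, using entirely made-up numbers: suppose two groups repay loans at the same true rate, but one group historically had less access to credit and so has fewer credit histories on file. A naive optimizer that approves only applicants with a credit history will then deny one group far more often, even though group membership says nothing about repayment.

```python
# Hypothetical simulation: group "A" historically had less access to credit,
# so fewer of its members have a credit history, even though the true
# repayment rate (80%) is identical in both groups.
import random

random.seed(0)

def make_applicant(group):
    has_history = random.random() < (0.3 if group == "A" else 0.9)
    will_repay = random.random() < 0.8  # same in both groups
    return {"group": group, "has_history": has_history, "will_repay": will_repay}

applicants = [make_applicant(g) for g in ("A", "B") for _ in range(1000)]

# Naive "maximize repayment" rule: approve only applicants with a history.
def approve(applicant):
    return applicant["has_history"]

for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"group {g}: approval rate {rate:.0%}")
```

The rule never looks at group membership directly, yet credit history acts as a proxy for it, so the approval rates diverge sharply while the underlying creditworthiness is identical.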
@freemo @icedquinn "Stupid" AI is still a dangerous tool because it is almost always powered/trained on big data, which is available only to big players. Individuals and small businesses don't have access to that data, so they cannot build competitive AI tools. So even "stupid" AI is a big factor in increasing market and social inequality.