I don't know whether it counts as hyperventilating, but my relatively high level of concern about the type of AI that LeCun put out there himself with #Galactica is not that there is some sort of hard takeoff. It's that he and so many others are creating systems that can flood the zone with shit at a rate that would leave Steve Bannon reeling in shock and awe.
@ct_bergstrom Just trust the companies! That's the kind of ethics advice you get from one of the leading figures in the field.
The guy's becoming the Elon of AI research.
@twitskeptic @ct_bergstrom So much wrongness is buried in the notion of "do the right thing" in this context. There is no one "right thing," and even what is "right" is highly context-dependent; threats and consequences need to be evaluated by and for stakeholders (which, for AI today, is basically all of society, so, democratically). This is something that the infosec community gets at least somewhat well, and a mindset that would be useful to bring into the AI world.
@twitskeptic @ct_bergstrom Most companies want to do the right thing unless it might cost them one extra penny a year, in which case of course they are required to prioritize shareholder value over side concerns such as the continued existence of democracy, civilization, and life on earth.
@twitskeptic @ct_bergstrom And how about the legitimate researchers, experts in tech and society and in the science and engineering issues, who were actively, centrally arguing that this was problematic, bad tech, unethical, and wrong to release, from within those same companies that apparently want to do the right thing? @timnitGebru, @alex, @Mer__edith …
Shooting from the bleachers my a$$.
@twitskeptic @ct_bergstrom As we know, companies always do the right thing and never have negative externalities that require government regulation /s