Greetings. #Google is very wise to be moving, apparently, with deliberate but not rushed speed toward question-answering AI search chatbots and related systems. An AI that gives *wrong* answers is far worse than a conventional search engine, since AI chat interactions tend to lend an air of authenticity to the responses, even when they're dead wrong.
Keep in mind that regulators (especially in the EU, but now in the U.S. as well) have for years expressed concerns about search engines highlighting a single recommended "best" answer at the top of a list of result links, feeling that it creates an anticompetitive situation and discourages people from looking at other results.
Personally, I have found Google's top-of-list "Knowledge Panels" to be immensely useful, for example, but when they are occasionally in error it can be very confusing, and getting those errors corrected when the KP info is sourced from a third party like Wikipedia can be difficult or even practically impossible.
Clearly, AI search chatbots risk drawing even more concern from regulators if they are seen as increasingly replacing traditional SERPs (Search Engine Results Pages) as the way most people search for various topics.
Not only is the accuracy of AI search chatbots a key factor, but the very existence of such chatbots as a potentially "distorting" factor in the surfacing of search results is sure to be a focus of regulators and politicians around the world.
So yeah, "don't move too fast" is good policy in this realm. -L
@lauren I don't believe for a second that Google cares about the accuracy of their systems. They already have "AI" ruining people's lives, and then the human reviewers come in on appeal and just rubber-stamp whatever the "AI" did without a second thought...
https://www.theguardian.com/technology/2022/aug/22/google-csam-account-blocked
If they thought most people wouldn't notice the inaccuracies, they'd have these AI chatbots in production already. The only reason they don't is that they think it's not only going to fail but going to be *obviously* failing, reducing people's willingness to blindly trust whatever Google gives them.
@admin Anything involving suspected CSAM instantly invokes an array of laws over which Google and other firms have no control and with which they must comply. At enormous scale, errors are going to happen, and Google in particular has recently established new procedures for appeals in situations such as the one noted in that Guardian piece.
@lauren At enormous scale errors will happen with automated systems, of course; that's why there must be robust and effective appeal mechanisms. Google doesn't provide that. We see this not just in CSAM cases like that one, although that's certainly one of the examples with the greatest consequences... but we see it all the time on YouTube: videos get demonetized for "copyright infringement" when the local news features someone's video and then claims ownership of it; channels are to this day afraid to even mention the word "COVID" for fear of being automatically accused of spreading disinformation; people warning about dangerous videos get their videos taken down while the original dangerous content that's sending people to the hospital stays up; adult horror videos get automatically relabeled as "for children" and dumped onto YouTube Kids... It's not like this is something that only happened once or twice. There's a loooong history at Google of blindly trusting their AI and refusing to make corrections when it does something wrong until/unless that mistake starts getting international media coverage...
@admin @lauren Nothing new there. Measure-countermeasure for keyword matching is older than computers.
The issue is more the zero-tolerance policy than the algorithm, but at scale zero tolerance is predictable (which is not to say that YouTube having a near-exclusive lock on shareable video hosting isn't a problem).