@cstanhope Agree -- I think LLMs are currently a solution looking for a problem. To me, it seems too early -- I don't see a particular type of problem that they nail. The better fit would be tasks where the cost of failure is low or zero, and/or failures are easy to spot. Even if the output were purely for entertainment, one would need to consider plagiarism and the risk of producing malicious or biased content, and I don't think we know how to check for either.
@kristinmbranson @cstanhope Disagree - there are areas where LLMs are already useful. For instance, ChatGPT is pretty good at code generation. Yes, it's often wrong, but even the incorrect code can be helpful as a starting point. I'm guessing that these kinds of specialized applications are where LLMs will prove to be useful.
Mystified, though, at the rush to deploy them in search engines, where the reputational risk is much higher.
@kristinmbranson @cstanhope I tried Copilot but didn't like it due to the annoying UI in the IDE I use (PyCharm). I felt like I was fighting with it all the time. But it seemed like it might have potential with a better UI.
As for ChatGPT, check out this example I tried (a sketch of the kind of prompt and output is below). No, it won't write your whole program for you, but it's certainly good at generating useful snippets.
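To give a flavor -- the original screenshot isn't reproduced here, so this is a hypothetical stand-in rather than the actual exchange -- a prompt like "write a Python function that extracts ISO-8601 timestamps from a log file" typically yields something snippet-sized like:

```python
# Hypothetical illustration of the kind of snippet-level code ChatGPT
# tends to produce; the function name and prompt are made up for this post.
import re
from datetime import datetime

# Matches timestamps of the form 2024-03-05T14:22:01
TIMESTAMP_RE = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}")

def extract_timestamps(path):
    """Return parsed timestamps found in a log file, in file order."""
    timestamps = []
    with open(path) as f:
        for line in f:
            match = TIMESTAMP_RE.search(line)
            if match:
                timestamps.append(datetime.fromisoformat(match.group()))
    return timestamps
```

Not production-ready -- no error handling, and it only picks up the first timestamp on each line -- but as a starting point it saves real typing, which is exactly the "even incorrect code is helpful" point above.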