@kristinmbranson @cstanhope Disagree - there are areas where LLMs are useful. For instance, ChatGPT is pretty good at code generation. Yes, it's often wrong, but even the incorrect code can be helpful. I'm guessing these kinds of specialized applications are where LLMs will prove most useful.
Mystified, though, at the rush to deploy them in search engines, where the reputational risk is much higher.
@kristinmbranson @cstanhope I tried Copilot but didn't like it because of the annoying UI in the IDE I use (PyCharm); I felt like I was fighting with it all the time. It seemed like it might have potential with a better UI, though.
As for ChatGPT, check out this example I tried. No, it won't write your program for you, but it's certainly good at generating useful snippets.
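Roughly the kind of thing I mean: ask it for a small helper, say natural sorting of filenames, and it hands back something like this (an illustrative sketch, not the exact output I got):

    import re

    def natural_sort(strings):
        """Sort strings so numeric parts compare as numbers, e.g. img2 before img10."""
        def key(s):
            # Split on digit runs; tag tokens so ints and strings never get compared directly.
            return [(0, int(tok)) if tok.isdigit() else (1, tok.lower())
                    for tok in re.split(r"(\d+)", s)]
        return sorted(strings, key=key)

    print(natural_sort(["img10.png", "img2.png", "img1.png"]))
    # ['img1.png', 'img2.png', 'img10.png']

Even when it gets a detail slightly wrong, the skeleton is usually close enough to fix in a minute, which is what I meant by incorrect code still being helpful.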
@twitskeptic @cstanhope I haven't tried Copilot myself yet, but the people I've talked to who have aren't trusting it for anything beyond adding comments. The reason I haven't is that I hate reading and correcting buggy code much more than I hate writing fresh code. I should prob try it before expounding too much :). We've thought about using LLMs to generate code for a GUI, making the UI natural language. Even for coding, there are issues around stealing other people's code without acknowledgment.
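To make the GUI idea concrete, the sketch below is roughly what we've been imagining (all names and the stubbed-out LLM call are placeholders, not code from an actual project): the model only gets to pick from a small whitelist of actions the GUI already supports, so a wrong or hallucinated answer fails loudly instead of running arbitrary generated code.

    import json

    # Whitelist of GUI actions the LLM is allowed to trigger (placeholder names).
    ALLOWED_ACTIONS = {
        "set_threshold": lambda state, value: state.update({"threshold": float(value)}),
        "toggle_overlay": lambda state, value: state.update({"overlay": value == "on"}),
    }

    def ask_llm(prompt):
        """Stub for whatever LLM backend would be used; expected to return JSON
        like {"action": "set_threshold", "value": "0.8"}."""
        return json.dumps({"action": "set_threshold", "value": "0.8"})

    def handle_request(state, request):
        # Ask the model to map free-form text onto one of the allowed actions.
        reply = json.loads(ask_llm("Map this request to one allowed action: " + request))
        action = ALLOWED_ACTIONS.get(reply.get("action"))
        if action is None:
            raise ValueError("LLM proposed an unsupported action: %r" % reply)
        action(state, reply.get("value"))
        return state

    print(handle_request({}, "set the detection threshold to 0.8"))
    # {'threshold': 0.8}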