ChatGPT stuff
I asked #ChatGPT "What is the secret to make Julia code fast?" (just to see if it picks up anything from Jakob Nissen's excellent guide https://viralinstruction.com/posts/optimise/ or the many other #julialang optimization guides online).
The answer turned out to be a good example of the model's ability to be subtly misleading while not being entirely wrong.
(Thread, with ChatGPT's reply as the next post.)
tl;dr: it seems likely ChatGPT took advice meant for Python or MATLAB, and swapped Julia's name in their place because it considers the languages "close enough".
1 isn't wrong, but makes no mention of type stability, which is at least as important as its (more generic, applicable-to-many-languages) suggestion. (See the first sketch after these points.)
2 is also generic advice that's good to have, though it isn't #JuliaLang specific (still, I consider this one a #ChatGPT success).
3 rates as kinda okayish advice. One could argue that for a beginner, the base functions offer a solid place to start, if we assume they're going to write badly optimized code (but then teaching them how not to do that - as the prompt asked - is a better way to solve that). But since this is Julia, it's not uncommon for simple custom code you write to beat the obvious ways of using built-ins (second sketch below). So I'd consider this potentially misleading.
4 is likely the smoking gun here - the repeated mentions of vectorization, and the advice to use techniques to "vectorize your code", suggest that this whole thing was taken from guides written for Python or MATLAB, which at this point far outnumber those for Julia, and that the language model then substituted Julia in the language name's place because it considers them similar (third sketch below). (A previous answer said "there are a number of programming languages that are commonly used for numerical and scientific computing, including Python, Julia, MATLAB, R, and others" - so it knows they're in the same category.)
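To show what point 1 left out, here's a minimal sketch of type stability (function names and code are my own toy illustration, not from ChatGPT's answer):

```julia
# Type-unstable: `acc` starts as an Int, then becomes a Float64 on the
# first addition, so the compiler can't pin down a single concrete type
# and has to fall back to slower, dynamically-dispatched code.
function unstable_sum(xs::Vector{Float64})
    acc = 0
    for x in xs
        acc += x
    end
    return acc
end

# Type-stable: `acc` is a Float64 throughout, so the compiler can emit
# a tight machine-code loop.
function stable_sum(xs::Vector{Float64})
    acc = 0.0
    for x in xs
        acc += x
    end
    return acc
end

# In the REPL, `@code_warntype unstable_sum(rand(100))` flags
# `acc::Union{Float64, Int64}`; the stable version shows no such warning.
```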
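And a sketch of what I mean in point 3 about simple custom code beating the obvious built-in route (again, my own toy example):

```julia
# The "obvious" built-in way allocates a full temporary array for xs .^ 2:
sumsq_builtin(xs) = sum(xs .^ 2)

# A plain hand-written loop computes the same thing with zero allocations,
# and is typically faster for large arrays:
function sumsq_loop(xs)
    acc = zero(eltype(xs))
    for x in xs
        acc += x * x
    end
    return acc
end

# (To be fair, `sum(abs2, xs)` is a built-in route that also avoids the
# temporary - but it's not the form a beginner reaches for first.)
```

Assuming you have the BenchmarkTools package installed, comparing `@btime sumsq_builtin($(rand(10^6)))` with `@btime sumsq_loop($(rand(10^6)))` shows the allocation difference directly.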
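Finally, for point 4, a sketch of why "vectorize your code" doesn't transfer to Julia (hypothetical function names):

```julia
# In NumPy or MATLAB you'd rewrite this loop in "vectorized" form to
# escape the slow interpreter. In Julia the loop itself compiles to
# machine code, so both forms below run at essentially the same speed.
function axpy_loop!(y, a, x)
    @inbounds for i in eachindex(y, x)
        y[i] += a * x[i]
    end
    return y
end

axpy_broadcast!(y, a, x) = (y .+= a .* x)   # the "vectorized" form: no faster
```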
@digital_carver Almost human. (Fast thinking humans. ↗️Daniel Kahneman)
Zooming out to generalize, it seems likely that this'll be a significant issue with ChatGPT and other LLM responses - mistakes like this are subtle and insidious, and could easily pass muster at first glance, especially if you don't already know the answer, which will be the case once people actually start using it beyond this trial.
Information about things that are already popular and have a lot of text on the Internet - pre-established languages, technologies, the currently dominant cultures, ideologies, etc. - will be well known to ChatGPT, and could be commingled in subtle ways with newer, up-and-coming ones and presented as part of the newer entity, perpetuating those old influences.
This type of misinformation will be insidious, and can easily fly under the radar in a lot of situations.
#AIBias #AIEthics #ChatGPT #LLM #LargeLanguageModels