ChatGPT stuff
I asked #ChatGPT “What is the secret to make Julia code fast?” (just to see if it picks up anything from Jakob Nissen’s excellent guide https://viralinstruction.com/posts/optimise/ or the many other #julialang optimization guides online).
The answer turned out to be a good example of the model’s ability to be subtly misleading without being entirely wrong.
(Thread, with ChatGPT’s reply as the next post.)
[ChatGPT’s full response below:]
There is no single secret to making Julia code fast, as the performance of Julia code can depend on a number of factors. However, there are a few general strategies that can help to optimize the performance of Julia code and make it run faster, such as:
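For context on what a genuinely useful answer looks like, here is a sketch of one of the standard strategies the linked guides actually teach (type stability). This example is my own illustration, not part of ChatGPT’s reply, and the function names are made up:

```julia
# A classic tip from Julia optimization guides: keep functions
# type-stable so the compiler can specialize them.

# Type-unstable: `s` starts as an Int, then becomes a Float64 when
# summing floats, so the compiler must handle both types in the loop.
function sum_unstable(xs)
    s = 0
    for x in xs
        s += x
    end
    return s
end

# Type-stable: initialize the accumulator with the container's
# element type, so `s` keeps one concrete type throughout.
function sum_stable(xs)
    s = zero(eltype(xs))
    for x in xs
        s += x
    end
    return s
end
```

Running `@code_warntype sum_unstable(rand(10))` in the REPL highlights the instability that `sum_stable` avoids.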
Zooming out to generalize, it seems likely that this’ll be a significant issue with ChatGPT/other LLM responses - mistakes like this are subtle and insidious, and could easily pass muster at first glance. Especially if you don’t know the answer already, which will be the case when people actually start using it beyond this kind of trial.
Information about things that are already popular and have a lot of text on the Internet - pre-established languages, technologies, the currently dominant cultures, ideologies, etc. - will be well known to ChatGPT, and could be commingled in subtle ways with newer, upcoming ones and presented as part of that newer entity, perpetuating those old influences.
This type of misinformation will be insidious, and can easily fly under the radar in a lot of situations.
@digital_carver Almost human. (Fast-thinking humans. ↗️ Daniel Kahneman)