ChatGPT stuff 

I asked “What is the secret to make Julia code fast?” (just to see if it picks up anything from Jakob Nissen’s excellent guide viralinstruction.com/posts/opt or the many other optimization guides online).

The answer turned out to be a good example of the model’s ability to be subtly misleading while not being entirely wrong.

(Thread, with ChatGPT’s reply as the next post.)

[ChatGPT’s full response below:]

There is no single secret to making Julia code fast, as the performance of Julia code can depend on a number of factors. However, there are a few general strategies that can help to optimize the performance of Julia code and make it run faster, such as:

  • Choosing the right data types and structures: Julia is a dynamically-typed language, which means that the type of a variable can change at runtime. This can make it difficult to optimize the performance of your code, because the compiler cannot always determine the best way to represent and manipulate your data. To make your code run faster, you should choose the appropriate data types and structures for your data, based on the operations you want to perform and the performance characteristics of the different types.
  • Avoiding unnecessary computations: Inefficient code often contains unnecessary computations, such as calculations that are performed multiple times or operations that do not affect the result of the program. To make your code run faster, you should avoid performing unnecessary computations, and try to structure your code in a way that minimizes the amount of work that the computer has to do.
  • Using built-in functions and libraries: Julia has a rich set of built-in functions and libraries that can help you perform common tasks efficiently. Rather than implementing your own algorithms or functions, you should use the built-in functions and libraries whenever possible, as they are often highly optimized and can save you a lot of time and effort.
  • Exploiting parallelism and vectorization: Julia is designed to support parallel computing and vectorization, which can help you to make your code run faster by utilizing multiple cores or processors. To make the most of these features, you should try to structure your code in a way that allows it to be executed in parallel or vectorized, and use the appropriate techniques and tools to parallelize or vectorize your code.

tl;dr: it seems likely ChatGPT took advice meant for Python or MATLAB, and substituted Julia’s name in their place because it considers them “close enough”.

1 isn’t wrong, but it makes no mention of type stability, which is at least as important as its more generic suggestion (one that applies to many languages).
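To make that concrete (a minimal sketch of my own, not from the thread or from ChatGPT; the function names are made up), this is roughly what type stability is about:

```julia
using InteractiveUtils  # provides @code_warntype (loaded automatically in the REPL)

# Type-unstable: `total` starts as an Int but becomes a Float64 as soon as a
# float is added, so the compiler can't pin down one concrete type for it.
function unstable_sum(xs)
    total = 0
    for x in xs
        total += x
    end
    return total
end

# Type-stable: the accumulator is initialised with the element type of the
# input, so it keeps a single concrete type throughout the loop.
function stable_sum(xs)
    total = zero(eltype(xs))
    for x in xs
        total += x
    end
    return total
end

xs = rand(10^6)
@code_warntype unstable_sum(xs)   # flags a Union{Float64, Int64}
@code_warntype stable_sum(xs)     # everything inferred as Float64
```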

2 is also generic advice that’s good to have but isn’t Julia-specific (still, I consider this one a success).
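As a small illustration of the kind of unnecessary work meant here (again my own sketch, with made-up names), an invariant recomputed inside a loop or comprehension is the typical case:

```julia
# The comprehension re-evaluates sum(x) for every single element:
normalize_slow(x) = [xi / sum(x) for xi in x]

# Computing the sum once removes that redundant work:
function normalize_fast(x)
    s = sum(x)
    return [xi / s for xi in x]
end
```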

3 rates as only okay-ish advice. One could argue that for a beginner, the base functions offer a solid place to start, if we assume they’re gonna write badly optimized code (but then teaching them how not to do that - as the prompt asked - is a better way to solve the problem). But since this is Julia, it’s not uncommon for simple custom code you write to beat the obvious way of doing it with built-ins. So I’d consider this potentially misleading.
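A sketch of what I mean (a hypothetical example of mine, assuming BenchmarkTools.jl is installed for the timing comparison): the obvious one-liner built from built-ins allocates a temporary array, while a plain hand-written loop allocates nothing and tends to be at least as fast.

```julia
using BenchmarkTools   # assumed available, only needed for @btime

# "Obvious" built-in/broadcast version: allocates a temporary array
# for the fused (a .- b) .^ 2 before summing it.
ssd_builtin(a, b) = sum((a .- b) .^ 2)

# Simple custom loop: no allocations, often as fast or faster.
function ssd_loop(a, b)
    s = zero(eltype(a))
    @inbounds for i in eachindex(a, b)
        s += (a[i] - b[i])^2
    end
    return s
end

a, b = rand(10^6), rand(10^6)
@btime ssd_builtin($a, $b)
@btime ssd_loop($a, $b)
```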

4 is likely the smoking gun here - the repeated mentions of vectorization, and the request to use techniques to “vectorize your code”, seem to suggest that this whole thing was taken from guides written for Python or MATLAB, which are at this point more numerous than those for Julia, and that the language model then substituted Julia for the original language’s name because it considers them similar. (A previous answer said “there are a number of programming languages that are commonly used for numerical and scientific computing, including Python, Julia, MATLAB, R, and others” - so it knows they’re in the same category.)
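A quick sketch of why that advice doesn’t transfer (my own example, not ChatGPT’s): in Julia an explicit loop already compiles to fast native code, so there’s no need to contort code into “vectorized” form the way Python/MATLAB guides recommend.

```julia
# In NumPy or MATLAB you'd write y = 2*x + 1 as a vectorized expression because
# an explicit loop would be slow. In Julia the explicit loop is itself fast,
# so vectorizing is a style choice rather than a performance requirement.
function scale_shift!(y, x)
    @inbounds for i in eachindex(x, y)
        y[i] = 2 * x[i] + 1
    end
    return y
end

x = rand(10^6)
y = similar(x)
scale_shift!(y, x)     # explicit loop
y2 = 2 .* x .+ 1       # broadcast ("vectorized") version: comparable speed
```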

Zooming out to generalize, it seems likely that this’ll be a significant issue with ChatGPT and other LLM responses - mistakes like this are subtle and insidious, and could easily pass muster at first glance, especially if you don’t already know the answer, which will be the case once people actually start using it beyond this trial.

Information about things that are already popular and have a lot of text on the Internet - pre-established languages, technologies, the currently dominant cultures, ideologies, etc. - will be well known to ChatGPT, and could be commingled in subtle ways with newer, upcoming ones and presented as part of that newer entity, perpetuating those old influences.

This type of misinformation will be insidious, and can easily fly under the radar in a lot of situations.

@digital_carver Almost human. (Fast-thinking humans. ↗️ Daniel Kahneman)
