@AlSweigart
That's the problem with these LLMs: they're built to sound plausible, not to be correct.
That makes it harder to spot when they fuck up and spout gibberish.
Sooner or later, someone in a critical role will trust one of these bullshit generators to write their code for them, and we won't find out until the giant purple clusterfuck happens.