Watched a whole video on the AI risks of using LLMs to write code. It never once addressed the possibility of using a local LLM.
My biggest objection: you should know what you're doing before you use an LLM to save time coding.
This is the viewpoint I came around to:
0. Write tests first.
1. If you are a beginner, use an LLM to write code you can't figure out by yourself, and learn from it. It will make mistakes you won't catch.
2. If you are of intermediate skill/experience, get your code peer-reviewed by someone who is not using AI.
3. If you are an expert, then use the LLM all you want to write code, but still vet every line it produces.
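Point 0 can be made concrete: the test exists before the implementation does, no matter who (or what) ends up writing that implementation. A minimal sketch, using a hypothetical `slugify` function that is purely illustrative, not anything from the video:

```python
import unittest

# Step 0: the tests below were written FIRST. The slugify
# implementation is the part you might then hand to an LLM --
# and, per point 3, still vet line by line.
def slugify(title: str) -> str:
    # Lowercase the title and join the words with hyphens.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        # split() with no arguments eats runs of whitespace,
        # so extra spaces don't produce empty segments.
        self.assertEqual(slugify("  a   b "), "a-b")

if __name__ == "__main__":
    unittest.main()
```

If the LLM's version of `slugify` fails these tests, you catch it immediately, which is exactly the safety net a beginner (point 1) otherwise lacks.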
Also never addressed, but really underrated: using AI to set up your programming environment. It's something you have to do anyway, AI can save you a ton of time doing it, and there's little risk of getting pwned.
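For a sense of what that looks like: a minimal sketch of the kind of environment-bootstrap script an LLM might generate for you, using only Python's stdlib `venv` module (the project name and layout here are made up for illustration):

```python
import tempfile
import venv
from pathlib import Path

# Hypothetical bootstrap script of the sort an LLM could write:
# create an isolated virtual environment under the project dir.
def bootstrap_env(project_dir: Path) -> Path:
    env_dir = project_dir / ".venv"
    # with_pip=False keeps the sketch fast; a real setup script
    # would likely use with_pip=True and then install dependencies.
    venv.EnvBuilder(with_pip=False, clear=True).create(env_dir)
    return env_dir

# Demo in a throwaway temp directory.
project = Path(tempfile.mkdtemp()) / "myproj"
project.mkdir()
env = bootstrap_env(project)
print((env / "pyvenv.cfg").exists())  # the marker file venv always writes
```

This is the low-stakes category of AI-generated code: it runs once, on your own machine, and you can see immediately whether the environment works.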
Oh, also: a malicious AI injecting backdoors into your code is a valid high-level concern, but in practice you're just not likely to be targeted. As an individual, it's not worth worrying about.