Just had my first explicit "I'll use AI instead of you to write code" response from a prospective client. I'm not going to extrapolate too much from one example, but I do wonder whether, at the cheaper end of the market, developers will either be replaced or relegated to fixing LLM output. LLMs produce code that *looks* okay to a non-expert, even if it isn't necessarily tested, robust or secure - and that's before you consider the legal implications of what they've been trained on.