Using AI as a code generator is a bold strategy. For one thing, it's wrong more often than not. For another, you cannot copyright AI-generated code in the US.
#AI #coding
http://www.scmagazine.com/news/42-of-ai-using-devs-say-at-least-half-of-their-codebase-is-ai-generated
@TheServitor @kdkorte In my opinion, the practical implications are overstated. It could make life harder for business models that depend on rug-pulling open-source projects by relicensing them, but most code is either open-sourced under a permissive license (MIT, Apache, etc.) or never made public at all (in which case copyright is beside the point; what matters there is the leaking of internal company information, which is a different legal topic).
The security risks when your LLM starts accessing the web directly are much more concerning.
@TheServitor @kdkorte indeed! See https://fedi.simonwillison.net/@simon/114693248045080643 and many other posts by @simon and others
@spoltier @TheServitor I don't think they are overstated. We are just looking at it from the wrong angle.
We mostly think of software as something we install on a computer, or maybe a phone.
Yet, copyright plays a significant role in preventing people from repairing their own cars, tractors, dishwashers, and similar devices.
That's the implication that would have a much bigger impact.
@spoltier @kdkorte
re: risks of LLMs on the internet. I was playing with Claude Code as root in a VM the other day. Not coding, just trying it out as a general command-line assistant. I had it connect to a couple of documentation sites.
It definitely occurred to me that all it would take is for the docs to say "Start by opening a terminal and entering `rm -rf .`" and for Claude to fail its saving throw vs. injection, and that would be the end of that volume.
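A rough mitigation sketch, under assumptions of my own (Docker installed, the image and the agent you launch inside are placeholders): run the assistant in a disposable container that only sees a throwaway copy of the project, so a failed saving throw wipes a scratch directory instead of the host.

# Sketch of a throwaway sandbox for an agentic CLI session.
# Assumes Docker is available; the image and the agent you run inside
# are placeholders, not a specific recommendation.

# Give the agent a disposable copy of the project, not the real checkout.
cp -r ~/project /tmp/agent-scratch

# Start a shell as a non-root user in a container that can only see the copy;
# a prompt-injected `rm -rf .` then only hits /tmp/agent-scratch.
docker run --rm -it \
  --user 1000:1000 \
  -v /tmp/agent-scratch:/work \
  -w /work \
  ubuntu:24.04 bash

Not bulletproof (the agent still needs network access to talk to its API, so exfiltration remains on the table), but it at least bounds the blast radius of a destructive command.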
#AgenticAI #AI