US President Joe Biden issued an important Executive Order on AI. The EO does not create a new top-level AI regulatory body (a "Dept of AI"), as some had hoped, but it does direct 19 government agencies to form new AI boards and task forces. My understanding is that this represents the most significant non-crisis intra-governmental cooperation in decades, a very interesting example of proactive government.
The knee-jerk reaction to government regulation is that it will stifle innovation. The other side of the coin, though, is that US Big Tech companies are very influential in the US government, and it's unlikely they would regulate themselves into a real disadvantage relative to the global competition.
One of the interesting aspects of this EO is that it requires corporate entities above a certain threshold of compute capability to register with the government. And all significant foundation models (e.g. ChatGPT) must be reviewed by the government before they can be released to the public. Essentially, the government has granted itself the right to a sneak peek at new models, and to "red team" them, that is, to probe how they bear on safety, national defense, international competition, and so on.
I think big tech companies are mostly fine with this. They have a huge incentive to avoid big mistakes that would provoke the ire of the government or the public, and being able to say the government reviewed their model and gave it a stamp of approval is a huge boon for them.
The open question is how this EO will affect open-source LLMs (large language models). I look forward to further analysis.