> Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents.
https://msrc.microsoft.com/blog/2024/01/microsoft-actions-following-attack-by-nation-state-actor-midnight-blizzard/
> To date, there is no evidence that the threat actor had any access to customer environments, production systems, source code, or AI systems.
Oh this gon b good!
Here's a question: if a threat actor *did* gain access to AI systems and maliciously modified the models in some way, then apart from the audit trail, could Microsoft even know?
There is no good way for Microsoft to test for such modifications. A checksum can tell you a weights file changed, but it cannot tell you what the change *does*: a subtly fine-tuned model that still passes the standard evaluations is effectively indistinguishable from the original. AI is a black box, including to its creators.
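To make the distinction concrete, here is a minimal sketch of the part that *is* testable: file-level integrity of stored weights. The file and digests below are hypothetical stand-ins for a real checkpoint; the point is that a hash recorded at deployment time catches any at-rest byte change, while saying nothing about whether the behavior those weights encode was ever benign.

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical weights file standing in for a real model checkpoint.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"\x00" * 1024)  # placeholder "weights"
    weights_path = f.name

baseline = sha256_of_file(weights_path)  # digest recorded at deployment time

# Later: even a single flipped byte (a tiny weight perturbation) is detectable.
with open(weights_path, "r+b") as f:
    f.seek(512)
    f.write(b"\x01")

tampered = sha256_of_file(weights_path)
assert tampered != baseline  # at-rest tampering is caught

os.remove(weights_path)
```

This is exactly the "audit trail" caveat in the question: hashing proves the bytes are the ones you deployed, nothing more. It cannot detect a model that was modified before the baseline was recorded, tampered with in memory at inference time, or trained maliciously from the start.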