> Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents.
msrc.microsoft.com/blog/2024/0

#Microsoft #InfoSec #MidnightBlizzard

> To date, there is no evidence that the threat actor had any access to customer environments, production systems, source code, or AI systems.

Oh this gon b good! :blobcatpopcornnom:

Here's a question: if a threat actor *did* gain access to AI systems and maliciously modified the models in some way, how would Microsoft know, apart from the audit trail?

There is no way for Microsoft to test for such modifications. AI is a black box, including to its creators.

#Microsoft #AI #MidnightBlizzard #InfoSec

With "regular" software, there is source code, there are tests, there is a way to rebuild a binary from scratch.

Yes, "on trusting trust" etc, but at least there are ways to lower the uncertainty here.

With an LLM? Where re-training the whole model from scratch would take insane amounts of time, money, energy, and water?

That is, if it were possible at all, since these companies often don't know themselves what went into the training corpus. :blobcateyes:

Am I missing anything?


@rysiek

The training procedure involves randomness, so if you do it from scratch you will not get the same thing anyway.
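
A toy illustration of that point, in plain Python/NumPy: two from-scratch training runs with different random initializations converge to roughly the same answer, but not to bit-identical weights, so a byte-level diff against a "reference" re-training can't establish integrity. The tiny model and loop here are illustrative only, not how an LLM is actually trained.

```python
# Two runs with different random initializations both learn roughly w = 2
# for y = 2x, but the resulting bytes differ. (Real LLM training adds data
# shuffling and non-deterministic GPU kernels on top of this.)
import hashlib
import numpy as np

def train_tiny_model(seed: int) -> np.ndarray:
    """Fit y = 2x with plain gradient descent from a random starting weight."""
    rng = np.random.default_rng(seed)
    w = rng.normal()                          # random initialization
    x = np.linspace(-1.0, 1.0, 100)
    y = 2.0 * x
    for _ in range(100):                      # a short training run
        grad = np.mean(2.0 * (w * x - y) * x)
        w -= 0.1 * grad
    return np.array([w])

run_a = train_tiny_model(seed=1)
run_b = train_tiny_model(seed=2)

print("run A:", run_a[0], hashlib.sha256(run_a.tobytes()).hexdigest()[:16])
print("run B:", run_b[0], hashlib.sha256(run_b.tobytes()).hexdigest()[:16])
# Both weights are close to 2.0, yet the digests differ, so re-training and
# diffing against a suspect model proves nothing about tampering.
```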
