#Microsoft released Microsoft 365 Copilot about a month ago and I am extremely skeptical of it.
What is the use case for this application?
Take this paragraph for example:
"Microsoft 365 Copilot has real-time access to both your content and context in the Microsoft Graph. This means it generates answers anchored in your business content — your documents, emails, calendar, chats, meetings, contacts and other business data — and combines them with your working context — the meeting you’re in now, the email exchanges you’ve had on a topic, the chat conversations you had last week — to deliver accurate, relevant, contextual responses."
I know that #Google got into the hot seat a couple of years ago when they scanned users' mailboxes and automatically created calendar bookings from hotel and flight emails. This sounds a lot like the same thing.
Even if they say that the data stays with you:
"Copilot LLMs are not trained on your tenant data or your prompts. Within your tenant, our time-tested permissioning model ensures that data won’t leak across user groups. And on an individual level, Copilot presents only data you can access using the same technology that we’ve been using for years to secure customer data."
I don't feel so reassured.
Does anyone feel that they can safely activate this feature for their customers, or in their own environment?
Having the AI read sensitive documents and other sensitive data, all for the sake of AI assistance.
https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/
@tysonsw is it on by default?
@rysiek doesn't seem so.
They are testing it with 20 companies today. Hopefully it will be an opt-in feature.
Are you thinking of things getting visibly included in outgoing communications that shouldn't be, or of outgoing communications being correlated with such things in a predictable enough way that, by observing them (or perhaps by observing responses to a particular incoming communication), one can learn something?
I wonder whether there will be no-vs-Glomar-style problems there.
@robryk @tysonsw my first thought was: if these models have access to your documents, there will be leaks of private content into the models. Akin to this:
https://www.engadget.com/three-samsung-employees-reportedly-leaked-sensitive-data-to-chatgpt-190221114.html
But the outgoing communication stuff is also interesting and on-point.