@publictorsten Not only did she not consent to this edit; her photo was also uploaded to an AI tool without her consent. I really don't know which is worse.

@petrosyan @publictorsten Sexualizing her photo was so special. Honestly it's essential to fire the individual who did this.

@KathleenC @petrosyan @publictorsten The point is that no human did this - the AI tool that the social media person used did it "automatically".

It's an example of the harmful biases ("pictures of women should be sexually suggestive") encoded into every one of these damn things, and why none of them are fit for purpose.

@kittylyst @KathleenC @petrosyan @publictorsten The social media person was left alone, not trained enough, not supervised enough. Fire them, and most of all their manager.

@shalf @kittylyst @KathleenC @petrosyan @publictorsten I understand where you're coming from, and you are right to be upset. But this was not an act of malice; it was addressed professionally and responsibly by management once they became aware of the situation, and - based solely on the original post, since that's all the context I have - the injured party appears satisfied with the outcome.

If we fired people every time they failed to notice something an IT tool did, there would literally be no one left working in IT. No one in their right mind expects an "expand image" plugin to sexualize the image's contents; would you punish someone for printing flyers whose printer embedded hidden metadata (snopes.com/fact-check/househol) in them?

The correct course of action, imo, is to start exactly as they've done by removing the offending image (and apologizing to the injured party), and to then use that as an educational moment so that it doesn't happen a second time.

Now, if it DOES happen a second time, fuck 'em. Unless/until, though, let people learn from their mistakes.

@dave_cochran Alright. Train the SoMe worker better after a solid warning. Fire their manager.
To be clearer: reputation and legal risk are badly underestimated in this industry, and that has to change. Yes, that means people at the responsibility and strategy level - the ones who think AI is harmless and who allow or promote its unsupervised use, mostly for cost reasons (and ignoring the externalities) - losing their jobs.

@shalf have you seen this report? nngroup.com/articles/computer-

admittedly, it's a few years old now, but I'd bet all the money in all of my pockets against all the money in all of yours that the data would be essentially the same if the study were run again today.

The short version is that people, as a group, are WAY worse at using computers than people, as a group, think. Like, by virtue of having and using a Mastodon account at all, you are probably in the top 20% or so of computer users worldwide.

The point being: punishing people for things they cannot reasonably be expected to know about - or even to know that there's ANYTHING TO LEARN about - is counterproductive at best, and actively harmful at worst.

It's probably worth keeping in mind that we're talking about the folks running a conference here, and not, like, Facebook. If anything, this could become a phenomenal talk at the very conference it happened at since they're talking about UI/UX stuff (if I'm remembering the OP correctly)!

I just don't think that people acting in good faith should be penalized for things that they had absolutely no reason to think would cause harm, y'know?

@dave_cochran @shalf I once taught an older woman how Google works. She had never seen a search engine before and she was in love.


AI today is like Google once was for us as beginners, just starting out... (and not caring about / reading the Terms)...

@HannahClover @dave_cochran @shalf That's like any of us coming across some tech or new feature for the first time - like when we got email without reading the checkbox terms, and then got our parents 'on to it'...

And then we realise decades later that no, this is not good - but there wasn't any better option way back then, and there isn't much of one now (I remember when email cost $9.99), so at that stage of evolution not many people would have had email at all (especially those unwilling to pay)!

Bundled AI and other add-ons (with or without a checkbox, at the start or along the way) are similarly integrated features that at some point you have to say NO to - a hard, almost impossible ultimatum of self-denial, declining to use increasingly popular, steroid-pumped code: software turned Trojan horse... 💣 💥

This was a good example, and a fairly unique combination: what looks like a cropping tool, the usual Terms-and-Conditions trick, plus actually adding to or replacing the original image (and potentially sexualising it, which humans already do plenty of).

In short:

"the mistake is using the tool in the first place."
- jelte @jelte

"There is no identifiable person to hold accountable here."
- Jon Het. @Jon_Kramer

Qoto Mastodon
