The Elements of Digital Ethics is a chart I made to help guide moral considerations in the tech space. Given time, it will also serve as the skeleton of my next book.



> The more autonomously they are allowed to progress, the further they deviate from human understanding.

This is just a sci-fi narrative sold as cutting-edge research. We could call it "math-washing", smoke and mirrors, or the like.

Software does NOT "progress" or "learn".
There is no AI out there, neither narrow nor general.

Simply put, some software is programmed statistically (instead of explicitly), and both the programmers (who call themselves "data scientists") and the companies hiring them refuse to be held accountable for its output, and thus pretend that the software itself has some level of autonomy.

It doesn't.

Its output is ALWAYS deterministic given the whole input and the state of the machine.
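A toy sketch of this determinism claim (a hypothetical word-count model, not any real system): a "learning" routine is just a pure function of its state and its input, so replaying the same state and input reproduces the output exactly.

```python
# Toy "statistically programmed" responder: its behavior is a pure
# function of (state, input). Hypothetical example, not a real bot.
from collections import Counter

def update(state: Counter, text: str) -> Counter:
    """Fold one input message into the program state (word counts)."""
    new_state = state.copy()
    new_state.update(text.split())
    return new_state

def respond(state: Counter) -> str:
    """Deterministic 'behavior': echo the most frequent word seen so far."""
    return state.most_common(1)[0][0] if state else ""

# Same initial state + same input => identical output, every run.
a = update(Counter(), "hello hello world")
b = update(Counter(), "hello hello world")
print(respond(a) == respond(b))  # True
```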

And being unable to compute the output space, or to deduce the implemented function, BECAUSE it would be too expensive in terms of time, money and energy, should simply be a reason NOT to use that software, as it is broken beyond repair.

Not a reason to waive legal responsibility.

@Shamar From what I understand, what you take issue with here is the word "autonomously"? In all other things you write we appear to be in complete agreement :)


Well, for sure "autonomously" is plain wrong: being autonomous means being able to decide the rules you follow, and that's not something a machine can do.

The fact that statistically programmed software changes its own runtime configuration on certain inputs doesn't mean it's autonomous: the change is totally predictable given the current state, the software, and the whole input (including transient conditions such as time, scheduling and so on).
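A sketch of that point (a made-up rule, purely for illustration): even behavior that looks time-dependent is deterministic once transient conditions like the clock are counted as part of the input.

```python
# Hypothetical bot rule: the greeting depends on a timestamp.
# Once the timestamp is treated as input, nothing is unpredictable.
def bot_reply(message: str, timestamp: int) -> str:
    greeting = "hi" if timestamp % 2 == 0 else "hello"
    return f"{greeting}, you said: {message}"

# Same input (including the transient condition) => same output.
print(bot_reply("test", 1000))  # hi, you said: test
print(bot_reply("test", 1000) == bot_reply("test", 1000))  # True
```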

But in general I think that attributing agency to software (as you do when you say they are "allowed to progress") is fundamentally flawed.

Software has no agency; it can only be programmed to simulate agency to the untrained eye.

Usually to hide the actual chain of control and responsibilities.


@Shamar I do understand what you mean, although many determinists would argue that not even humans have free will.

Let’s say that a software bot is released on Mastodon that engages with humans and is programmed in some manner to adapt based on these conversations. The creator of the bot dies and nobody else has access to the code. The bot’s output changes over time, and five years later it appears much better at engaging in conversations on more subjects than on day one. Did it progress?



The statistical programmer just distributed the software's programming over time.

People used to the "AI" narrative interpret such a statistical programming process as "learning", projecting their own experience onto the software.

But the software is still just software.

Anthropomorphism of software is a tool to alienate people.

@Shamar I am absolutely okay to not call it learning in the traditional sense and I agree that it is dangerous to attribute autonomy where there is none. It’s harder for me to avoid ’progress’.

In some ways this feels like a perspective issue. In my scenario it is change over time determined by input provided by foreign actors (Mastodon users) whose behavior was not predictable by the programmer. And progress = change over time. How should a non-technical user understand it?



Progress has a political (and positive) connotation.

There are plenty of examples where such bots' statistical programming included racist and sexist slurs.

Do you remember Tay?

Did it "progress"?

The programmers (self-appointed "data scientists") didn't select each input tweet by themselves, but they selected the data source and designed how such data were turned into vectors, how their dimensionality was reduced, and so forth...

The exact same software, computing the exact same output for any given input, could have been programmed by collecting the data source beforehand.
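A minimal illustration of that equivalence (same toy word-count model, assumed for the example): feeding messages in one at a time ("over time") and collecting the whole data source beforehand leave the program in the exact same state.

```python
from collections import Counter

stream = ["nice to meet you", "you again", "meet me later"]

# Online: "users" supply the data source message by message, over time.
online = Counter()
for message in stream:
    online.update(message.split())

# Batch: the programmer collects the same data source beforehand.
batch = Counter(" ".join(stream).split())

print(online == batch)  # True: identical program state either way
```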

So why should releasing pre-alpha software, and using "users" to provide its (data) source over time to statistically program it, clear the "data scientists" of responsibility?
Why should it allow companies to waive their legal accountability?

If you talk (and think) in terms of statistical programming instead of "machine learning", a lot of grey areas suddenly become crystal clear, several ethical concerns become trivial, and all accountability issues simply disappear.

So, no: there is no "progress" in software programmed statistically over time, just irresponsible companies shipping unfinished software and math-washing away their own accountability for the externalities such software produces for the whole of society.

@Shamar Yes, Tay was actually what I was thinking of when I wrote the scenario. This morning I wrote this post on doing better going forward with how I use phrases from the industry… :)

The challenge is to describe these differences in layman’s terms. I also need to write more about the issues with deceptive language in the section on digital obfuscation.

So thank you for this! You are helping me improve my intentional communication in this area.

@Shamar Would it be okay to add your final explanation with Tay as an example quoted in the post, with a reference back here? I think you did a great job of summarizing how makers build software without taking responsibility for the input used to power it, hence building ’unfinished software’. Really good way of putting it.
