I got bored, so I wanted to see if I could remember how to represent a multilayer perceptron using linear algebra. I think I summed it up fairly well.


@freemo Without reading the scans of your notes in detail, because the resolution is inadequate (and you really have no excuse not to typeset this in LaTeX), anyhow:
You can't represent a multilayer perceptron by linear algebra only, because you could then reduce everything to an equivalent
(number of inputs)x(number of outputs) matrix.
So, in the spirit of Helmut Kohl:
"Entscheidend ist, was hinten rauskommt" ("what counts is what comes out at the end"),
[en.wikiquote.org/wiki/Helmut_K]
there must be some sort of sigmoid [en.wikipedia.org/wiki/Sigmoid_]
somewhere in your equations to make your algebra non-linear.
Where?
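
In symbols, a minimal sketch of that point: with no non-linearity between layers, the weight matrices simply compose,

$$ y = W_2 (W_1 x) = (W_2 W_1)\,x, $$

so any stack of purely linear layers is equivalent to a single matrix mapping inputs directly to outputs.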


@tatzelbrumm Agreed, it's not purely linear. The step where the transfer function is applied is of course non-linear. The network represents well in linear algebra, but you simply can't represent a whole network as a single matrix; you have to cycle through three steps per layer.

Basically, each layer ends up being its own matrix. The connections, the multiplication by the weights, and the summation steps are all linear algebra.
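
To make the three-step cycle concrete, here is a rough numpy sketch of a layer-by-layer forward pass (the layer sizes and the sigmoid choice are just for illustration, not taken from my notes):

```python
import numpy as np

def sigmoid(z):
    # Non-linear transfer function applied element-wise; this is the only
    # step that is not plain linear algebra.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Cycle the three steps once per layer:
    1) multiply by the weight matrix, 2) sum the weighted inputs (the matrix
    product does this, plus a bias), 3) apply the transfer function."""
    a = x
    for W, b in zip(weights, biases):
        z = W @ a + b   # steps 1 and 2: pure linear algebra
        a = sigmoid(z)  # step 3: non-linear transfer
    return a

# Illustrative sizes only: 3 inputs -> 4 hidden -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(2)]
print(forward(np.array([1.0, 0.5, -0.2]), weights, biases))
```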

@tatzelbrumm Also, side note: an MLP does not **need** a non-linear transfer function. It can be linear, and some linear transfer functions can even approximate the non-linear ones well enough to be usable.

Check out the rectified linear unit (ReLU). If one is used, then the equations I showed above, an iteration of three steps applied per layer, become linear, so there is no need to iterate per layer: you can reduce the whole network to a set of matrices and operate on them entirely in linear algebra. I didn't go that far, though.
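
A small numerical sketch of that collapse for the purely linear case (identity transfer function, made-up sizes): the per-layer weight matrices pre-multiply into a single matrix and bias, so one matrix-vector product reproduces the layer-by-layer result.

```python
import numpy as np

# With a linear (identity) transfer function, layers compose by ordinary
# matrix multiplication, so the whole network reduces to one matrix + bias.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

x = np.array([0.3, -1.0, 2.0])

# Iterating per layer:
layer_by_layer = W2 @ (W1 @ x + b1) + b2

# Collapsed to a single equivalent matrix and bias:
W = W2 @ W1
b = W2 @ b1 + b2
collapsed = W @ x + b

print(np.allclose(layer_by_layer, collapsed))  # True
```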
