
@freemo @math

It is kind of interesting that this generalizes with Knuth operators.
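
Assuming "Knuth operators" here means Knuth's up-arrow notation (the hyperoperation hierarchy that generalizes exponentiation), here is a minimal sketch of that recursion; the function below is purely illustrative and not from this thread:

```python
# Knuth's up-arrow a ↑^n b, assuming that is what "Knuth operators" refers to.
# a ↑^1 b = a**b, and each higher arrow iterates the one below it:
# a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)).
def up_arrow(a: int, n: int, b: int) -> int:
    if n == 1:
        return a ** b  # a single arrow is ordinary exponentiation
    if b == 0:
        return 1       # a ↑^n 0 = 1 for n >= 2
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 2, 3))  # 2 ↑↑ 3 = 2^(2^2) = 16
```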

Man it feels good to be a paid scientist!

Seriously though... We finally get a job that pays well, is not management, and does not require nearly a decade of education. And you want to automate it...

sampl.cs.washington.edu/tvmcon

@mc

I did just that lol. It also took a signature from me, and then one from my boss, for them to back off.

In the end they returned a laptop that had half of the drivers missing and regular Windows installed on it without its license (a license we were forced to pay for).

But that is all I needed to get their grubby mitts off our research tools. Got a fully functional Linux machine running half a day after that. :blobfoxcofe_w_:

@aseem

Yep. Feeling that surveillance capitalism.

It has taken multiple signatures from me and my boss just to have my work computer not backdoored. They state that they are still going to keep my IP address on their list.

The police state in the tech world has really been normalized in the US, I guess.

@maelig

Yep. Fuck them. They are doing this to themselves.

@jessica@mk.absturztau.be

Attempted that. I only got one password back and it did not work.

@jessica@mk.absturztau.be

The BIOS is pretty much set up like this, though.

@jessica@mk.absturztau.be

Unfortunately I only got one password to try, and it did not work.

@jessica@mk.absturztau.be

This would work if the BIOS were not locked.

I specifically asked IT for a Linux Dell workstation. They gave me Windows Education edition, with a bunch of Cisco backdoors on it, and locked the BIOS.

Gee... Thanks for the expensive paperweight. :blobcatangery:

Neural network analysis is so underrated. Reality is so manifold-y.

arxiv.org/abs/2004.06093

Topology of deep neural networks

We study how the topology of a data set $M = M_a \cup M_b \subseteq \mathbb{R}^d$, representing two classes $a$ and $b$ in a binary classification problem, changes as it passes through the layers of a well-trained neural network, i.e., one with perfect accuracy on the training set and near-zero generalization error ($\approx 0.01\%$). The goal is to shed light on two mysteries in deep neural networks: (i) a nonsmooth activation function like ReLU outperforms a smooth one like hyperbolic tangent; (ii) successful neural network architectures rely on having many layers, even though a shallow network can approximate any function arbitrarily well. We performed extensive experiments on the persistent homology of a wide range of point cloud data sets, both real and simulated. The results consistently demonstrate the following: (1) Neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simple one as it passes through the layers. No matter how complicated the topology of $M$ we begin with, when passed through a well-trained neural network $f : \mathbb{R}^d \to \mathbb{R}^p$, there is a vast reduction in the Betti numbers of both components $M_a$ and $M_b$; in fact they nearly always reduce to their lowest possible values: $\beta_k\bigl(f(M_i)\bigr) = 0$ for $k \ge 1$ and $\beta_0\bigl(f(M_i)\bigr) = 1$, $i = a, b$. Furthermore, (2) the reduction in Betti numbers is significantly faster for ReLU activation than hyperbolic tangent activation, as the former defines nonhomeomorphic maps that change topology, whereas the latter defines homeomorphic maps that preserve topology. Lastly, (3) shallow and deep networks transform data sets differently -- a shallow network operates mainly through changing geometry and changes topology only in its final layers, while a deep one spreads topological changes more evenly across all layers.
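
A minimal sketch (not the paper's code) of the kind of measurement the abstract describes: train a small ReLU net on a toy two-class point cloud and compare Betti numbers of one class before and after most of the layers, using persistent homology. It assumes the ripser and PyTorch packages; the toy data, architecture, and distance threshold are illustrative choices, not the paper's.

```python
# Sketch only: compare Betti numbers of a class before/after a trained ReLU net,
# in the spirit of the paper's persistent-homology experiments. Toy data,
# network size, and the Rips threshold below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from ripser import ripser  # pip install ripser

def betti_numbers(points, maxdim=1, thresh=0.5):
    """Rough proxy: count persistence intervals alive at scale `thresh`."""
    dgms = ripser(points, maxdim=maxdim)["dgms"]
    return [int(np.sum((d[:, 0] <= thresh) & (d[:, 1] > thresh))) for d in dgms]

# Toy binary classes: two concentric circles in R^2 (each has beta_0 = beta_1 = 1).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
M_a = np.c_[np.cos(theta), np.sin(theta)].astype(np.float32)
M_b = 2.5 * M_a
X = np.vstack([M_a, M_b])
y = np.r_[np.zeros(len(M_a)), np.ones(len(M_b))].astype(np.float32)

# Small ReLU network trained to (near-)perfect accuracy on the toy set.
net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                    nn.Linear(16, 16), nn.ReLU(),
                    nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    logits = net(torch.from_numpy(X)).squeeze(1)
    nn.functional.binary_cross_entropy_with_logits(logits, torch.from_numpy(y)).backward()
    opt.step()

# Betti numbers of class a at the input vs. after the penultimate layer.
# Per the paper, a well-trained net should collapse the loop (beta_1 -> 0);
# the fixed threshold here is a crude stand-in for reading the full barcode.
with torch.no_grad():
    f_X = net[:-1](torch.from_numpy(X)).numpy()
print("input  Betti:", betti_numbers(M_a))
print("output Betti:", betti_numbers(f_X[y == 0]))
```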


@hackaday@botsin.space

"For Covid-19 Tracking"... sure. lol

@stux

That is some really low-effort botting. lol
