First of all, they specify a fast-learning "online" network and a slow-learning "target" one.
For a given sample, the online network tries to predict the target network's embedding.
The catch?
The two networks receive different augmentations of the same sample!
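A minimal PyTorch sketch of this idea, with hypothetical names (`online`, `target`, `augment_a`, `augment_b`) standing in for the networks and augmentation pipelines:

```python
import torch
import torch.nn.functional as F

def byol_style_loss(x, online, target, augment_a, augment_b):
    v_online, v_target = augment_a(x), augment_b(x)  # two noisy views of the same batch
    p = online(v_online)                             # online network's prediction
    with torch.no_grad():                            # target produces no gradients
        z = target(v_target)
    # pull the normalized embeddings together (cosine-similarity loss)
    return 2 - 2 * F.cosine_similarity(p, z, dim=-1).mean()
```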
Finally, the online network is trained with gradient descent, while the target network's weights are updated as an exponential moving average of the online network's weights.
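The moving-average update itself is tiny. A sketch, assuming a decay rate `tau` (chosen close to 1 so the target drifts slowly):

```python
@torch.no_grad()
def ema_update(online, target, tau=0.99):
    # target <- tau * target + (1 - tau) * online, parameter by parameter
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1 - tau)
```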
This way, the networks learn to produce consistent embeddings of observations across different ways of introducing noise.