
Whoa, Twitter told me that I joined it 10 years ago, which gives me an average of 10 followers per year

RT @docmilanfar
One of the most surprising & little-known results in statistics is that the mean (μ) and median (m) are within a std deviation (σ) of each other:

|μ−m| ≤ σ

for unimodal densities the bound is even tighter:

|μ−m| ≤ 0.7756 σ

This beautiful result first appeared in a 1932 paper by Hotelling & Solomons
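The inequality is easy to check numerically. A minimal sketch (the sample values are arbitrary, chosen only to make the distribution skewed; this checks the bound empirically on one sample, not in general):

```python
import statistics

# Empirical check of the mean-median inequality |mu - m| <= sigma
# on a deliberately skewed sample (values chosen for illustration).
data = [1, 1, 2, 2, 3, 4, 5, 9, 15, 30]

mu = statistics.mean(data)       # sample mean
m = statistics.median(data)      # sample median
sigma = statistics.pstdev(data)  # population standard deviation

gap = abs(mu - m)
assert gap <= sigma
print(f"|mu - m| = {gap:.3f} <= sigma = {sigma:.3f}")
```

For this sample the gap is 3.7 against a σ of about 8.65, comfortably inside the bound.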

RT @anna_korzekwa
The budget of @NCN_PL grew by only 13% from 2018 to 2023. The success rate in our recent calls has dropped to an unacceptable level of < 15%. "If Poland does not increase its science budget, we face a brain drain," says Zbigniew Błocki, director. wnp.pl/parlamentarny/wydarzeni

RT @valentynbez
Saving this for the record. The russian propaganda machine is falling apart and officials admit they have been financing separatism since 2014.
I remember these years vividly, a lot of people were fooled.
The judgment day approaches.

All you need is a little gaslighting, and ChatGPT will try to trick the user into creating chloroform

Ahh, I understand why OpenAI would want to hardcode some of the answers, but I'm still a bit disappointed :/

Looks like Galactica disappointed a few researchers and got promptly taken down

RT @DrewLinsley
Check out our new paper, to appear at NeurIPS. We show that DNNs are becoming progressively *less* aligned with human perception as their ImageNet accuracy increases. Ignore the elections, Elon, and FTX for a moment — this is important!
serre-lab.github.io/Harmonizat


Mastodon, meet Frank. In rare moments in which he doesn't want to murder his surroundings, he is actually a sweet cat!

RT @karwowskaz
Thank you @polonium_org for the opportunity to talk about my research. The number of questions assured me that there is a very bright future for gut microbiome research!

This way they are taught to achieve consistent embeddings of observations across different ways of introducing noise.


First of all, they specify a fast-learning "online" network and a slow-learning "target" one.
For a given sample, the online network tries to predict the target network's embedding.
The catch?
They are using different augmentations!
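A toy sketch of that online/target setup, using plain numpy linear maps in place of real networks (all names, the noise "augmentation", and the EMA constant here are illustrative stand-ins, not BYOL's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoders standing in for the online and target networks.
dim_in, dim_out = 8, 4
W_online = rng.normal(size=(dim_out, dim_in))
W_target = W_online.copy()  # target starts as a copy of the online net

def augment(x):
    # Stand-in for a real data augmentation: add small random noise.
    return x + 0.1 * rng.normal(size=x.shape)

def normalize(z):
    return z / np.linalg.norm(z)

def byol_loss(x):
    # Two *different* augmentations of the same sample.
    z_online = normalize(W_online @ augment(x))
    z_target = normalize(W_target @ augment(x))  # no gradient flows here
    # Distance between normalized embeddings; equivalent (up to a
    # constant) to a negative cosine similarity.
    return np.sum((z_online - z_target) ** 2)

def ema_update(tau=0.99):
    # The target network slowly trails the online one via an
    # exponential moving average of its weights.
    global W_target
    W_target = tau * W_target + (1 - tau) * W_online

x = rng.normal(size=dim_in)
loss = byol_loss(x)
ema_update()
print(f"toy BYOL-style loss: {loss:.4f}")
```

Minimizing this loss pushes the online network toward embeddings that are stable under augmentation, while the EMA update keeps the target from collapsing into the online net instantly.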


Can data augmentation benefit from a separation of "online" and "target" network?
BYOL's answer is yes!
Furthermore, they suggest that using negative examples might be obsolete, as they achieved new SOTA without them.

How does it work? 👇
1/4

Pyhopper is my new best friend for hyperparam optimization; I think we will share a few adventures together.

RT @AlinejadMasih
Islamic Republic killed this woman to enforce hijab.

After days in a coma, a source said "Mahsa Amini, 22, died today".

She was beaten up by morality police because of wearing “bad hijab”.

Iranian women are outraged. Forced hijab is the main pillar of religious dictatorship.

Ahh, finally a break from machine learning.

Behave well without me, my models

Qoto Mastodon