
Ahh, I understand why OpenAI would want to hardcode some of the answers, but I'm still a bit disappointed :/

RT @UrbMobRafal
So honored to receive @ERC_Research Starting Grant! Super excited to start running my grant and see what will happen when we start sharing our cities with AI machines at @JagiellonskiUni in Kraków, Poland. Details: rafalkucharskipk.github.io/COe twitter.com/ERC_Research/statu

Looks like Galactica disappointed a few researchers and got promptly taken down

RT @DrewLinsley
Check out our new paper, to appear at NeurIPS. We show that DNNs are becoming progressively *less* aligned with human perception as their ImageNet accuracy increases. Ignore the elections, Elon, and FTX for a moment — this is important!
serre-lab.github.io/Harmonizat


Mastodon, meet Frank. In the rare moments when he doesn't want to murder his surroundings, he is actually a sweet cat!

wwydmanski boosted

This place feels a lot nicer & nerdier than Twitter, so I actually feel like sharing a little: 😋

We have recently released the 2022 update of the Metapsy meta-analytic database for depression psychotherapy (415 studies), which can be analyzed here: metapsy.org/databases/.

Detailed documentation & download here: docs.metapsy.org/databases/dep

...and you can directly retrieve the data in R using data.metapsy.org

Woah, this place is so much cleaner without all those bullshit Twitter bots. Let's hope it stays that way

Hi all!

I'm a PhD candidate doing research into ML for biotechnology. In my free time, I'm also a cofounder of a startup making tools for real estate analysis.
Don't expect any startup threads from me, though; I'm more into science than business!

My ML research focuses on few-shot tabular learning and on applying it in my biotech work.
The latter focuses on metagenomics.

wwydmanski boosted

Mastodon friends, I wrote a thread on Twitter that asks for commitments to three behaviors to maximize the chances for a successful transition of the community discussion to this platform. Please have a look and retweet it if you are willing to commit to the behaviors for November:

twitter.com/BrianNosek/status/

wwydmanski boosted


RT @karwowskaz
Thank you @polonium_org for the opportunity to talk about my research. The number of questions assured me that there is a very bright future for gut microbiome research!

Does anyone know how to download the full GO mapping from the @uniprot knowledge base? I've tried scraping it in real time, but it would take approximately 1k hours ;_;

Finally, the online network is trained using gradient descent, while the target network's weights are updated as an exponential moving average of the online network's weights.


This way, the networks are taught to produce consistent embeddings of an observation across different ways of introducing noise.


First of all, they specify a fast-learning "online" network and a slow-learning "target" one.
For a given sample, the online network tries to predict the target network's embedding.
The catch?
The two networks see different augmentations of the sample!


Can data augmentation benefit from a separation of "online" and "target" networks?
BYOL's answer is yes!
Furthermore, they suggest that negative examples may be unnecessary, as they achieved a new SOTA without them.

How does it work? 👇
1/4
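The setup described in this thread can be sketched in a toy form. This is a minimal, illustrative version only, not BYOL's actual implementation: linear maps stand in for the deep encoders, additive noise stands in for image augmentations, and the paper's predictor head and embedding normalisation are omitted. The function name `train_byol_toy` and all sizes/hyperparameters are made up for the example.

```python
import numpy as np

def train_byol_toy(steps=200, lr=0.01, tau=0.99, seed=0):
    """Minimal BYOL-style loop: the online net chases the target net's
    embedding of a differently augmented view; the target net tracks
    the online net via an exponential moving average (no gradients)."""
    rng = np.random.default_rng(seed)
    online = rng.normal(scale=0.1, size=(8, 4))  # fast-learning "online" net
    target = rng.normal(scale=0.1, size=(8, 4))  # slow-moving "target" net
    x = rng.normal(size=8)                       # one toy observation
    losses = []
    for _ in range(steps):
        v1 = x + rng.normal(scale=0.1, size=8)   # augmented view for online
        v2 = x + rng.normal(scale=0.1, size=8)   # a *different* view for target
        z_online = v1 @ online
        z_target = v2 @ target                   # "stop-gradient": loss never updates it
        diff = z_online - z_target
        losses.append(float(diff @ diff))        # squared embedding distance
        online -= lr * 2 * np.outer(v1, diff)    # gradient descent on the online net
        target = tau * target + (1 - tau) * online  # EMA update of the target net
    return losses

losses = train_byol_toy()
```

Over training, the online net's embedding of one view moves toward the target net's embedding of the other view, so the squared distance between them shrinks toward the noise floor set by the augmentations.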

Qoto Mastodon

QOTO: Question Others to Teach Ourselves