It is insane how advanced natural language processing has become in AI. I work in a lab focused on developing treatments for Huntington's disease. My boss once fed prompts like "Huntington's disease is" or "We can cure Huntington's disease by" to GPT-2 (GPT-3 was not yet open source, or at least it wasn't when he did this). The results were spectacularly coherent, but most of them were factually wrong. He then compared it to something that happened with AlphaGo, an AI trained to play Go, a board game very popular in Asia and far more complex than chess in terms of possible combinations. During one of its games, it made a move that everyone took for an erroneous "rookie mistake", but which in retrospect turned out to be crucial to the victory. My boss asked me: "What if, just like in that game of Go, a GPT model one day outputs an idea so silly that we discard it as nonsensical, but that in reality would lead to a cure?"