The idea that you can take an LLM — trained on an undisclosed dataset of all kinds of text, and tweaked by human feedback for various messy goals — and then use *its* responses for social science “research” rather than using real people, is beyond stupid.

science.org/content/article/ca

Yes, it would be great to have your career possibilities determined by some fuckwit who told an LLM to pretend to be like you — according to the stereotypes it had been trained on — and then found that the LLM did poorly at a task you might otherwise be hired to do.

@gregeganSF You shouldn't be applying to a place which uses fuckwit practices anyway. The problem is self-solving.
