We run a fully resume-blind, work-sample-test (WST) hiring process: a code challenge, a written (email) product design challenge, and a 2-hour interactive Slack design/implementation challenge.

A benefit of doing Slack “interviews” (this isn’t one, but bear with me) is that you get a transcript of the session.

One thing we’ve learned: after-the-fact evaluations of sessions are very different from in-the-moment evaluations, maybe to the point where, if you’re the person administering the session, you shouldn’t be trusted to evaluate it at all.

A process feature I think I like: generate transcripts, pass them off to a review board. The “interviewer” is just a proctor.


@tqbf

It seems natural to assume that the after-the-fact evaluation is better (because it doesn't suffer from halo bias). How could we test that, though (and how would we define "better")?
