This thread is a microcosm of everything annoying about reddit.

A guy posts a photo of a spider wondering if it's a "brown recluse" and gets a bunch of different answers: many repeats, several people calling him stupid for not 'just googling it,' and a few calls to burn down the house and move.

It's not a brown recluse. Even if it were, they aren't aggressive; just move it to a remote location. There's a really good video about the reputation these spiders have, which I'll link next.

reddit.com/r/whatbugisthis/com

I think it's nice that people go to expert forums to find out about insects. It would be nice if everyone would read the other replies before adding their own.

It would be nice if the habit of telling someone to "google it" would just die... particularly given how google and other search engines have become much less useful over time.

@futurebird

I think the term 'google it' probably means one should try to find the answer, or at least make some effort, before asking. However, I agree this is less helpful now. Our vocabulary has changed: we 'google' something rather than 'search' for it, which implies google is the only way to find information. This isn't helped by basic IT classes using google and never mentioning alternatives.

The same goes for promoting facebook or zoom; it makes it much harder for replacements to get a proper foothold.

@zleap @futurebird We're also at a point where online information is increasingly suspect and flooded with bad info. ChatGPT will confidently give you a very wrong answer, particularly on things like this.

Expert forums are probably going to get more important.

@MichaelTBacon @zleap

Quora now gives GPT gumbo answers to questions, and they're blended in with the real human answers. I hate it so much. (They're labeled, thankfully, but what do you think people will make of such labels??)

Not that Quora was ever useful even before this started.

@MichaelTBacon @zleap

Am I crazy to think that "providing answers to Quora questions" is a BAD use of a GPT engine? They aren't designed to find correct information; they are designed to produce content that looks like correct information. Am I missing something here?

@futurebird @zleap

Not a bit. I keep saying that LLMs are called large *language* models on purpose; notably, they are not called large knowledge models.

They can ape the right answer a lot of the time, but they have no built-in mechanism for gauging their confidence in an answer, nor for remembering where they learned the information being conveyed.

Someone else on here (maybe you?) was playing with training an ML model for specimen ID, but that's totally different from GPT/LLMs.

@MichaelTBacon @futurebird @zleap

I've occasionally put it this way to people: "Would you take medical advice from somebody who wasn't a doctor, and didn't know a fibula from a phalanx, but was just repeating a bunch of smart-sounding medical jargon he picked up watching a medical drama last night?"


@sphinx @MichaelTBacon @futurebird

Interesting question.

In the UK there is the FAST campaign: Face (is it slanted to one side?), Arms (can they raise them?), Speech (is it slurred?), Time (time to call 999 / 911). It comes from a TV medical advertisement to help people recognise the signs of a stroke in another person.

So maybe yes, as medical dramas are probably good at raising awareness of certain conditions.
