This thread is a microcosm of everything annoying about reddit.

A guy posts a photo of a spider, wondering if it is a "brown recluse," and gets a bunch of different answers (many of them repeats), several people calling him stupid for not 'just googling it,' and a few calls to burn down the house and move.

It's not a brown recluse. Even if it were, they aren't aggressive; move it to a remote location. There is a really good video about the reputation these spiders have; I'll link it next.

reddit.com/r/whatbugisthis/com

I think it's nice that people go to expert forums to find out about insects. It would be nice if everyone would read the other replies before adding their own.

It would be nice if the practice of taking time to tell someone to "google it" would just die... particularly with how google and other searches have... become much less useful over time.


@futurebird

I think the term 'google it' probably means one should try to find the answer, or at least make some effort, before asking. However, I agree this is less helpful now. Our vocabulary has changed so that we "google" something rather than search for it, which implies google is the only way to search for information; this is not helped by basic IT classes using google and not mentioning alternatives.

Same goes for promoting facebook or zoom; it makes it much harder for replacements to get a proper foothold.

@zleap @futurebird We're also at a point where online information is increasingly suspect and flooded with bad info. ChatGPT will confidently give you a very wrong answer, particularly on things like this.

Expert forums are probably going to get more important.

@MichaelTBacon @futurebird

Yeah, good point. Like science forums: it's friendly, but the people there do actually know their stuff.

@MichaelTBacon @zleap

Quora now gives GPT gumbo answers to questions, and it's blended in with the real human answers. I hate it so much. (Labeled, thankfully, but what do you think people will make of such labels??)

Not that Quora was ever useful even before this started.

@MichaelTBacon @zleap

Am I crazy to think that "providing answers to Quora questions" is a BAD use of a GPT engine? They aren't designed to find correct information; they are designed to produce content that looks like correct information. Am I missing something here?

Sometimes I've heard people say "oh that's just the kind of answer I was looking for!"

This could be innocent enough-- but I always think "but what if the most correct answer wasn't what you were looking for?" In particular, what if the most correct answer is "No one really knows; there isn't enough information." or "In fact, the thing you thought was true probably isn't"?

I feel like we are hurtling towards a world where such answers will be unheard of things of the past. That scares me.

@futurebird

Yes, when I write up instructions on my website, I try to make clear that my solution is for my individual problem or the task I wanted to complete. The steps I take need to be modified for individual needs, depending on what you want to do.

@futurebird I've seen this a lot in yarn-craft fora.

Actual experts will say things like "It depends. If you [stuff about situation], then it might be [nuanced options, with information about the differences]. But if you [...], [...] might be a better option instead." Arrogant people who know little will say firmly "Buy this item and do this with it."

The latter group are often profoundly wrong and make it harder to find valid (and sometimes even safe!) information.

@futurebird It's often the case already that the asker picks the decisive-but-wrong answer over the informed answer that doesn't completely settle things. I share your concern that LLM answers are going to make this much worse.

@fidgetyhands @futurebird

Indeed. The phrase "DunningKrugerGPT" just popped into my head unbidden.

@fidgetyhands @futurebird 💯 To me this is the less discussed flip side of Dunning-Kruger: that experts in one field recognize experts even in unfamiliar fields. I was in a meeting recently and someone said "one plus one is two, it just is." Everyone nodded except me and the other mathematician in the room; we both said spontaneously, "it depends."

@AdrianRiskin @fidgetyhands @futurebird I recall reading in a book about abstract algebra, “We shall for the remainder of this book assume that 1+1 =/= 0, even where this assumption is not explicitly stated.” Made me glad to be a physicist.
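(A minimal worked example of that "it depends": in the integers mod 2, the very structure the book's assumption rules out, one plus one is zero.)

```python
# Sketch: "1 + 1 = 2" depends on which structure you are working in.
ordinary = 1 + 1         # ordinary integer arithmetic
mod_two = (1 + 1) % 2    # arithmetic in the integers mod 2 (GF(2))

print(ordinary)   # 2
print(mod_two)    # 0 -- exactly the case the book's "1+1 =/= 0" assumption excludes
```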

@futurebird @MichaelTBacon

I have found a useful answer on quora, which was about making up molar solutions.
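(For anyone curious, a rough sketch of the arithmetic behind making up a molar solution; this is general chemistry, not the quora answer itself, and the NaCl example is just a stand-in.)

```python
# Rough sketch: grams of solute = molarity (mol/L) * volume (L) * molar mass (g/mol)
def grams_needed(molarity_mol_per_l, volume_l, molar_mass_g_per_mol):
    return molarity_mol_per_l * volume_l * molar_mass_g_per_mol

# Example: 0.5 L of a 1 M NaCl solution (NaCl is roughly 58.44 g/mol)
print(grams_needed(1.0, 0.5, 58.44))  # about 29.2 g of NaCl
```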

@zleap @MichaelTBacon

There was a short time when there were some good discussions on teaching methods there... though this isn't true anymore.

@futurebird @MichaelTBacon

We could start discussions about teaching here. I am not a teacher, but I run a code club and am looking for more work as a teaching assistant, so learning about some of the teaching planning processes would be really useful.

@futurebird @zleap

Not a bit. I keep saying that LLMs are called large *language* models on purpose, and likewise are not called large knowledge models.

They can ape the right answer a lot of the time, but they have no built-in mechanism for determining their internal confidence in the answer, nor for remembering where they learned the information being conveyed.

Someone else on here (maybe you?) was playing with training an ML model for specimen ID, but that's totally different from GPT/LLMs.
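(A tiny sketch of that contrast, with iris measurements standing in for specimens since the actual project isn't shown here: a supervised classifier is trained against a fixed set of labels and reports probabilities over exactly those labels, which is a very different kind of object from a language model.)

```python
# Sketch: a supervised "specimen ID" style classifier, as opposed to an LLM.
# It maps measured features to one of a fixed set of known labels and can
# report the class probabilities it was explicitly trained to estimate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)        # features plus known species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))         # accuracy on held-out examples
print(clf.predict_proba(X_test[:1]))     # probabilities over the fixed label set
```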

@MichaelTBacon @futurebird @zleap

Or Limited Liability Models. As in, there's no opportunity to sue anybody if the model gets contaminated.

@MichaelTBacon @futurebird @zleap

I've occasionally put it this way to people: "Would you take medical advice from somebody who wasn't a doctor, and didn't know a fibula from a phalanx, but was just repeating a bunch of smart-sounding medical jargon he picked up watching a medical drama last night?"

@sphinx @MichaelTBacon @futurebird

Interesting question

In the UK there is a FAST campaign: Face (is it slanted to one side?), Arms (can they raise them?), Speech (is it slurred?), Time (time to call 999 / 911). This is from a TV medical advertisement in the UK to help people recognise the signs of a stroke in another person.

So maybe yes, as medical dramas are probably good at raising awareness of certain conditions.
