@tao I hope you're using the "pro" version of DeepL for your letters of reference. Their privacy policy states that if you use the free version the data is temporarily stored and used to train their model.

@david_megginson @timbray I have a "friend" like that too. Also used when greedy content providers move a show from a service he subscribes to to one he doesn't.

@kristinmbranson That's really interesting, though a bit traumatizing since it reminds me of the odorous ants that have been plaguing us for the last few weeks.

How large a model are you using?

Joe boosted

Damn. The head of the "Fusion Industry Association" says "nucular."

@ct_bergstrom You obviously neglected to screen Lex with the Gartner Magic Quadrant for conversational AI programs.

@m_scribe I'm sure pro-police law and order candidate DeSantis will end his anti-vax/pro-gun policies when he learns that 60% of 2022 police deaths were caused by guns and covid.

@keithwilson @TedUnderwood Precisely why I have anonymous social media accounts.

Just found out that the great Wayne Shorter, one of my favorite jazz musicians, died today. nytimes.com/2023/03/02/arts/mu

@mmasnick Enjoyed the article, but this quote was cringeworthy as well. It's not that very few people understand how to make LLMs; it's that very few people can afford to train them. As for the very precise $15.7 trillion estimate ... no comment.

ChatGPT API announced, complete with a temperature parameter to make it even less accurate. openai.com/blog/introducing-ch
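For anyone curious, here is a minimal sketch of what a call with that temperature parameter looks like, assuming the new chat completions endpoint and an OPENAI_API_KEY environment variable; the model name, prompt, and temperature value are purely illustrative:

import os
import requests

# Call the chat completions endpoint announced with the ChatGPT API.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Explain what the temperature parameter does."}],
        # Higher temperature means more sampling randomness: more varied
        # output, and (as joked above) potentially less accurate output.
        "temperature": 1.2,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])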

@kjhealy That's a very long sentence for most USA Today readers.

Joe boosted

Oh boy, someone has figured out how to turn Bing Chat into a social engineer that exfiltrates personal identifying information to a third-party website

Demo here: greshake.github.io/

@merz @ct_bergstrom I don't understand the concern - virus passaged in 4 mice with no experimental selective pressure vs virus passaged in 8 billion human beings.

@ct_bergstrom I have one from probably about 10 days ago that has 2 accurate references.

@ct_bergstrom Shit, this is even worse. This seems like a good counterargument to the idea that ChatGPT's fabrications don't cause harm.

ChatGPT cites a paper (Gandini) to support the argument that there was no association between sunscreen use and melanoma. Other than in the references, I couldn't find a single mention of sunscreen in the paper (though I might have missed it because it was a scanned version).

@ct_bergstrom I tried your example (on sunscreen) and it looks like the only improvement OpenAI made to ChatGPT is, as you implied, the ability to generate more convincing bullshit.

For example, the attached reference to the 1991 Diffey paper gives the impression that that paper focused on the damage caused by UV radiation. Instead, the paper examined the public health consequences of the divergence between perceptions of sunscreen protection and actual protection.

Joe boosted

Earlier I posted about using ChatGPT's propensity to fabricate citations entirely as a short-term strategy for detecting journal submissions and classroom assignments that had been written by a machine.

I've been playing with the system for the last couple of hours, and as best as I can tell, ChatGPT now does a much better job than it did when first released at only citing papers that actually exist.

They're not perfect—for example, DOIs can be wrong and some are fabricated—but most are not.

Just discovered that our malfunctioning electric furnace - which our repairman said had to be replaced for around $3-4K - can be repaired with a $35 part.

It was pretty easy to diagnose. So many scammers out there, so be careful.
