LLMs could still have the same set of algorithmic capabilities without being presented as a simulation of a person, and it feels increasingly important that they stop being presented in this way

@jcoglan @joe Amen. Those imitative features intentionally lead us to extend the same trust we give to humans, when that trust is completely unjustified.

I *especially* hate things like Google Notebook’s “podcast” feature, which turns an article summary into an audio “podcast” with two separate voices faking two presenters discussing the article. The various “vocal” touches, such as hesitations, breaths, laughing and the like, are there *only* to implicitly deceive the listener, to put us into “listening to people talk” mode and thus grant the material more confidence than it warrants.

These kinds of features are basically fraud — the AI creators are lying to us.


@michaelgemar @jcoglan @joe The funny thing about that podcast feature is that after the initial moment of “oh, that’s a cool trick”, you very soon realise that reading the actual text would be faster, less annoying (because you don’t have to listen to pointless puns), and give you oh so much more useful information...
