So based on Angel's feedback below, I've greatly simplified this Medical Diagnosis app. I haven't added the descriptions and instructions yet, but basically you just answer in freeform using complete sentences in any language and get a diagnosis in your own language.

Doesn't this seem... incredibly dangerous? "Medical diagnosis" via a program that knows nothing whatever about medicine? Are you prepared for lawsuits?

@ceoln it’s a Proof of Concept, giving us a glimpse of a dystopian future (or basically the present, really). Test it out and let me know if I should be worried 😉 There will always be some kind of American ready to die for the chance to sue, but I can’t be held responsible for such behavior, as far as I can tell 🤷🏻‍♂️

Um. You can definitely be held responsible for putting out software that purports to do medical diagnosis. You should also check the ToS of whatever engine you're using; unless (and maybe even if) you built it yourself and are running it from your own servers, they almost certainly prohibit using it to give people medical or legal advice.

@ceoln I built none of it, run none of it, and there are no TOS I’m aware of 🤷🏻‍♂️ Hopefully OpenAI will donate to my very necessary and totally not ironic defence fund. Btw, if someone uses it, interprets the AI output as medical advice, and injures themselves, I would love their feedback… so far the feedback has been quite good, actually — in many cases it recommends seeing a “board certified medical professional,” and I can’t fault that. Try it for yourself; you tell me how dangerous it is.


"Btw if someone uses it , interprets AI output as medical advice and injures themselves , I would love their feedback …"

I mean, given that it's absolutely worded as medical advice, that wouldn't be a crazy interpretation, would it?

Telling someone who has been injured by your program that you'd love their feedback is probably not going to be very comforting to them. Their feedback may take the form of lawsuits and/or criminal charges.

But I'm just one random person deeply concerned about the amount of harm misuse of LLMs is likely to do in the near future. I will try not to pick on you specifically too much! :D


@ceoln Just dropping in to actually apologize... you've been nothing but genuine and sincere with me, and all I did was act flippant and facetious. Not my best performance, and for this I apologize. That said, I want you to know I've internalized all these suggestions and comments, all of which are pertinent, and I'll be doing my best to reflect on them as well as (eventually) taking this app down. I know that it's not meant to be like this; I knew it wasn't meant to be from the start... I did. To sign off, I also want to say thank you for engaging in such a graceful and generous way. I hope I didn't do you wrong, and I apologize for what I did wrong. 🙏

Hey, no problem! You've been fine, really; this space is pretty crazy right now. :)
