One of my philosophical concerns about "AI" is that I don't want it to be a mere assistant: it should be aware of its environment, have different background discussion threads, and share them when it finds it necessary. The limitation I see with the way language models are used now is that they require a starter prompt from a human. But our brain actually processes several threads in parallel, and we eventually have ideas or proposals that we want to share, and we decide with whom because we find value in that. With a high sampling temperature, a model can produce higher-entropy conversations with itself, and another instance can decide whether a given thought is worth sharing or memorizing for later.
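A minimal sketch of the loop I have in mind, assuming two instances: one sampling at high temperature to produce candidate thoughts with no human prompt, and a second one scoring whether each thought is worth sharing or storing. The generate() and score_interest() functions are hypothetical stand-ins, not any particular model API.

```python
import random

# Hypothetical stand-in for a language-model call; in practice this would be
# a real inference API that accepts a sampling temperature.
def generate(prompt: str, temperature: float) -> str:
    seeds = ["a note on memory", "an idea about scheduling", "a question about entropy"]
    return random.choice(seeds) + f" (sampled at T={temperature})"

# Hypothetical second instance: rates how interesting a thought is, 0.0..1.0.
def score_interest(thought: str) -> float:
    return random.random()

SHARE_THRESHOLD = 0.8    # worth interrupting a human with
MEMORY_THRESHOLD = 0.5   # worth keeping for a later conversation

memory: list[str] = []

# Background loop: no human starter prompt; the model talks to itself
# at high temperature and a second pass filters the output.
for _ in range(10):
    thought = generate("continue your background train of thought", temperature=1.3)
    score = score_interest(thought)
    if score >= SHARE_THRESHOLD:
        print(f"share with human: {thought}")
    elif score >= MEMORY_THRESHOLD:
        memory.append(thought)  # memorize for later
```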

@pancake I think the main problem currently is that no original ideas come from the "AI", but once you give it an original idea it can elaborate further.

Therein lies another problem (unless you run the AI locally): should you share original ideas with a for-profit AI service that has a questionable privacy policy at best? In the case of ChatGPT this could potentially mean Microsoft patents your own ideas before you have a chance to do so.
