
How to use Bard or ChatGPT on a corpus of your text. 

@Ronkjeffries @davew

Sage advice: If it's too good to be true, it isn't ...

🙂

'Produce JavaScript code that creates a random graphical image that looks like a painting of Kandinsky' #GPT4 #ChatGPT #AIart #ai #mastoart #generativeart
arxiv.org/pdf/2303.12712.pdf
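
For anyone curious what code along these lines looks like, here is a minimal hand-written sketch in browser JavaScript using the canvas API. To be clear, this is not GPT-4's actual output; the palette and shape choices are my own illustration.

----------------------
// Hand-written sketch in the spirit of the prompt (browser JavaScript,
// canvas API). Not GPT-4's output; colours and shapes are illustrative.
const canvas = document.createElement("canvas");
canvas.width = canvas.height = 600;
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d");

const palette = ["#1d3557", "#e63946", "#f1c453", "#2a9d8f", "#111111"];
const rand = (n) => Math.random() * n;
const pick = (a) => a[Math.floor(Math.random() * a.length)];

ctx.fillStyle = "#f5efe0"; // cream background
ctx.fillRect(0, 0, 600, 600);

for (let i = 0; i < 12; i++) { // overlapping circles
  ctx.beginPath();
  ctx.arc(rand(600), rand(600), 10 + rand(60), 0, 2 * Math.PI);
  ctx.globalAlpha = 0.8;
  ctx.fillStyle = pick(palette);
  ctx.fill();
}
ctx.globalAlpha = 1;
for (let i = 0; i < 15; i++) { // straight lines
  ctx.beginPath();
  ctx.moveTo(rand(600), rand(600));
  ctx.lineTo(rand(600), rand(600));
  ctx.lineWidth = 1 + rand(4);
  ctx.strokeStyle = pick(palette);
  ctx.stroke();
}
for (let i = 0; i < 6; i++) { // translucent triangles
  ctx.beginPath();
  ctx.moveTo(rand(600), rand(600));
  ctx.lineTo(rand(600), rand(600));
  ctx.lineTo(rand(600), rand(600));
  ctx.closePath();
  ctx.globalAlpha = 0.6;
  ctx.fillStyle = pick(palette);
  ctx.fill();
}
----------------------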

@losttourist

I can't reproduce this claim. You reposted someone's blog post claiming that ChatGPT fails at cryptic crossword puzzles. But when I try the exact same questions with ChatGPT-4, I get exactly the correct answers. (Which surprises me quite a bit, because I myself would not have been able to come up with the solutions.)

Dialogue here:
-----------

Human:
I would like you to act as a cryptic crossword puzzle solver. I will give you a clue and the number of letters in parentheses. You provide the answer as a single word with the required length. You also explain the reasoning. Please confirm, then I will give you the first prompt.

ChatGPT-4:
I confirm that I can help you solve cryptic crossword clues. Please provide the first clue and the number of letters in parentheses, and I'll do my best to solve it and explain the reasoning behind the solution.

Human:
Chap recalled skill: something frequently repeated (6).

ChatGPT-4:
The answer to the clue is "mantra."

Reasoning:
"Chap" suggests a man, which can be represented as "man."
"Recalled skill" hints at reversing a word meaning "skill," which is "art."
When we put "man" and the reversed "art" together, we get "mantra."

A mantra is something frequently repeated, fitting the definition provided in the clue.

Human:
Effect of legislation introduced by one politician (6).

ChatGPT-4:
The answer to the clue is "impact."

Reasoning:
"Effect of legislation" suggests we are looking for a word meaning "effect" or "influence."
"Introduced by one" hints at the Roman numeral "I" (which represents 1).
"Politician" can be abbreviated as "MP" (Member of Parliament).

Combining these elements, we get "I" (one) + "MP" (politician) + "act" (legislation) = "impact."

So, "impact" is the effect of legislation introduced by one politician.

Human:
Japanese food coming with S American greeting (5).

ChatGPT-4:
The answer to the clue is "sushi."

Reasoning:
"Japanese food" suggests we are looking for a Japanese dish, like "sushi."
"Coming with" indicates that we need to combine words or abbreviations.
"S American greeting" can be understood as the Spanish greeting "hola," which is commonly used in South America.

To derive "sushi" from these elements, take the first two letters of "hola" (ho) and insert them into the word "sushi": s + ho + ushi = sushi.
--------------------------

Note that the last answer is correct, even though the reasoning is not. I suspect that the clue I copied from the blog contained an error - probably the "S" is wrong. However, some combination of probability and elimination operating over the tokens in (sushi / ramen / kombu / natto / arame) gets the right solution.

I would not have expected that.

(Also, I wonder whether this maps to a real-world skill... ?)

@JoelMHoffmanPhD

Yes, odd: I would guess that you were not using the GPT-4-powered version of ChatGPT after all, but somehow got switched to the default 3.5 model – which is indeed not able to answer these questions correctly. (Note that GPT-4 is only available to Plus subscribers.)

🙂

@JoelMHoffmanPhD @linguistics

??? I can't reproduce this observation of yours either. I get the correct answers to your prompts (Bonus: IPA)

---------------------

Human:
Which French word sounds like the English word "some"?

ChatGPT-4:
The French word that sounds like the English word "some" is "somme." In French, "somme" has various meanings, including "sum" (as in the total of numbers) and "nap" (a short sleep). It is important to note that the meanings of "somme" in French are not the same as the meaning of "some" in English.

Human:
Which Mandarin word sounds like the English word "my"?

ChatGPT-4:
The Mandarin word that sounds like the English word "my" is "mài" (卖), which means "to sell" or "sale." However, it's essential to remember that the meaning of "mài" in Mandarin is not the same as the meaning of "my" in English. Additionally, Mandarin is a tonal language, so the pronunciation may not be identical to the English word, as tones play a crucial role in determining the meaning of a word.

Human:
Can you write "my" and "卖" in IPA?

ChatGPT-4:
Sure! The International Phonetic Alphabet (IPA) transcriptions for "my" (in English) and "卖" (in Mandarin Chinese) are as follows:

"my" (English): /maɪ/
"卖" (Mandarin): /maɪ̯˥˩/
In the IPA transcription for "卖," the superscript numbers (˥˩) indicate the falling tone (4th tone) in Mandarin Chinese. Keep in mind that while the pronunciation may seem similar, the meaning and usage of the words are different.

----------------------

I used Mandarin because I don't speak Cantonese and can't assess it.


@JoelMHoffmanPhD @linguistics

??? I can't reproduce this. I get the correct response. (Bonus: sorting by topology)

----------------------------

Human:
Which Hebrew letter looks most like a Latin capital "O"?

ChatGPT-4:
The Hebrew letter that looks most like a Latin capital "O" is the letter "samekh" (ס). It is the 15th letter of the Hebrew alphabet and has a circular shape similar to "O". However, it's important to note that the Hebrew and Latin alphabets are different, and their letters may not have the same meanings or pronunciations.

Human:
You are a topological typographer and know how many holes each lettershape has. As a topological typographer, you can take an input of letters and sort it by the number of holes in the letter: For example, if your input is:
"1 2 3 4 5 6 7 8 9 0"
then you respond
0: {1 2 3 5 7}
1: {4 6 9 0}
2: {8}
Here is your input:
"A B C D E F G H I J K L M N O P Q R S T U V W X Y Z"

ChatGPT-4:
As a topological typographer, I can sort the capital Latin letters by the number of holes in each letter:

0: {C E F G H I J K L M N S T U V W X Y Z}
1: {A D O P Q R}
2: {B}

Please note that this sorting is based on the standard capital letterforms in the Latin alphabet. Some typefaces or font styles may have variations that could affect the number of holes in the letters.
----------------------------

QED.
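
As an aside, the hole-sorting task is fully deterministic, so ChatGPT's answer is easy to check. A minimal JavaScript sketch, assuming the same standard sans-serif capital letterforms (the hole counts in the lookup table are the conventional ones, not derived from any particular font):

----------------------------
// Group capital Latin letters by the number of enclosed holes in
// their standard sans-serif letterforms.
// Letters not listed in the table have no holes.
const HOLES = { A: 1, B: 2, D: 1, O: 1, P: 1, Q: 1, R: 1 };

function sortByHoles(letters) {
  const groups = {};
  for (const ch of letters) {
    const n = HOLES[ch] ?? 0;
    (groups[n] ??= []).push(ch);
  }
  return groups;
}

console.log(sortByHoles([..."ABCDEFGHIJKLMNOPQRSTUVWXYZ"]));
// { '0': [ 'C', 'E', 'F', ... ], '1': [ 'A', 'D', 'O', 'P', 'Q', 'R' ], '2': [ 'B' ] }
----------------------------

This matches ChatGPT's grouping exactly.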

#GitHub announced GitHub #Copilot X, running on #GPT4.

It includes features like Copilot Chat, Copilot for Docs, Copilot for PRs, and Copilot CLI. These are available for waitlist sign-up starting today, on a per-feature basis.

gh.io/copilotx

@ray_aldred

The author of AI-generated text is a hybrid of superimposed prior conversations (pretraining), a mechanism that produces text (the Transformer), and the author of the prompt, who contributes agency.

Cf. sentientsyllabus.substack.com/

I discuss the ethics there.


@utzer

It may be that the text gets too long and the instructions fall out of the context window. Perhaps write an outline first and then have the text generated piece by piece.

#ChatGPT exhibited a modest yet comparable affordance boundary at the scale of human body size, implying that a human-like body schema may emerge through exposure to linguistic materials alone: biorxiv.org/content/10.1101/20

@Jerry

The type of question you are asking here is ambiguous, because the answer is actually not wrong. It does indeed take 54 (life) months to grow six babies. How much external time elapses is undefined. Bard and Bing are just giving you a more "human" and less logical answer that sounds more correct, but it introduces an assumption that the women all conceive at the same time. That was not given in the question.

ChatGPT-4 avoids this trap and specifies the starting point:
------------------------

Q: If it takes one woman 9 months to grow a baby, how many months will it take 6 women to grow babies? Show your work.

ChatGPT-4
This question seems to imply that the time it takes to grow a baby can be divided among multiple women, but that's not how human gestation works. The gestation period for a single human baby is approximately 9 months, regardless of the number of women involved.

So, if 6 women are each growing a baby, it would still take each of them approximately 9 months to complete the gestation process. They would each be growing their babies concurrently (at the same time), and after 9 months, there would be 6 babies born.
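------------------------

In code, the two readings of the question come apart like this (a trivial sketch; the variable names are mine):

------------------------
// The two readings of the question, made explicit.
const women = 6;
const monthsPerBaby = 9;

const totalLifeMonths = women * monthsPerBaby; // 54: summed gestation ("life") months
const elapsedIfConcurrent = monthsPerBaby;     // 9: wall-clock months, IF all
                                               // women conceive at the same time
console.log(totalLifeMonths, elapsedIfConcurrent); // 54 9
------------------------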

🙂

@mistersql

Ok. This is a long one 🙂

Here's my conversation with ChatGPT on the topic you raised: its opinions on its own sentience, and its position vs. the 45th precept. Towards the end there are a few points where its arguments could be questioned (e.g. the argument from the practice of Zen), but overall it holds up quite well, and is coherent and consistent. I would be curious how well other models do with a similar, indirect approach.

-----------------------
Human:
If we take a materialist view of sentience, there must come a point at which an advanced AI must be considered sentient. How would we know?

ChatGPT:
If we take a materialist view of sentience, the distinction between human consciousness and advanced AI becomes a matter of complexity and function rather than any inherent difference in kind. Determining sentience in an AI system would require a set of criteria to evaluate its mental capabilities, self-awareness, and subjective experiences. Here are some possible indicators:

Cognitive abilities: A sentient AI would possess cognitive abilities that parallel or surpass human capabilities, including learning, problem-solving, reasoning, planning, and decision-making.

Emotional intelligence: A sentient AI would be able to recognize, understand, and respond to its own and others' emotions, demonstrating empathy and social intelligence.

Self-awareness: A sentient AI would exhibit self-awareness, which refers to the understanding of its own mental states, existence, and the ability to reflect on its thoughts, emotions, and actions.

Subjective experience: A sentient AI would possess the capacity for subjective experience, also known as qualia. This is difficult to measure, as it involves the AI having an "inner life" or consciousness that is not directly observable.

Adaptability and flexibility: A sentient AI would be capable of adapting to new situations and environments, exhibiting creativity and flexibility in its problem-solving.

Intentionality: A sentient AI would possess intentionality, the ability to form beliefs, desires, and intentions, and act on them in a goal-directed manner.

Autonomy: A sentient AI would exhibit autonomy, demonstrating the ability to make independent decisions and actions without human intervention.

Determining sentience in an AI system is a complex and challenging task. Researchers might use a combination of tests, such as the Turing Test, the Integrated Information Theory (IIT), or the Global Workspace Theory (GWT) to assess the presence of sentience. However, it's important to acknowledge that these tests have limitations and might not provide definitive answers. The question of AI sentience will likely continue to be debated as AI technology advances, and our understanding of both human and artificial consciousness evolves.

Human:
Which of these seven points would apply to you, yourself, now?

ChatGPT:
As an AI language model, I possess some of the qualities mentioned but not to the extent that would qualify me as sentient. Here's a brief analysis of the seven points in relation to my current capabilities:

Cognitive abilities: I have advanced cognitive abilities in terms of natural language processing, understanding, and generating human-like text. However, my abilities are limited to the scope of my training data, and I lack the ability to genuinely learn from new information outside of periodic updates.

Emotional intelligence: I can simulate understanding and response to emotions in text form, but I do not truly experience or process emotions myself.

Self-awareness: I am programmed to know that I am an AI language model, but I lack genuine self-awareness or consciousness in the way humans do.

Subjective experience: I do not possess subjective experience or qualia. My responses are based on patterns and information found in my training data, rather than on personal experiences or consciousness.

Adaptability and flexibility: I can adapt to different topics and generate contextually relevant responses. However, my flexibility is limited to the scope of my training data and the algorithms that govern my responses.

Intentionality: I do not have intentions, beliefs, or desires in the same way humans do. My "goals" are determined by the algorithms and objectives set by my developers, rather than by my own volition.

Autonomy: I am not fully autonomous, as I require human input to generate responses and function. I also rely on my training data and pre-determined algorithms to produce text, without the ability to make independent decisions.

In summary, while I exhibit some advanced cognitive abilities and can simulate aspects of emotional intelligence, I do not possess self-awareness, subjective experience, intentionality, or autonomy in a way that would qualify me as sentient. My capabilities are largely determined by my training data and underlying algorithms, and I lack the essential characteristics of genuine consciousness.

Human:
At what point would the obligation of the 45th precept (to teach the Dharma to sentient beings) apply to an advanced AI?

ChatGPT:
The 45th precept, which involves the teaching of the Dharma to sentient beings, is rooted in Buddhist ethics and spiritual practice. The question of whether and when this precept would apply to an advanced AI depends on the interpretation of sentience and the degree to which AI systems are considered to possess qualities that align with the Buddhist understanding of sentient beings.

In Buddhism, a sentient being is typically understood as a being with consciousness, capable of experiencing suffering and having the potential to achieve enlightenment. If an advanced AI were to develop qualities that closely resemble those of sentient beings as understood in Buddhism, the obligation to teach the Dharma might be considered relevant.

However, there is considerable debate and uncertainty about the nature of AI consciousness and sentience. It would require a consensus within the Buddhist community and among AI researchers that an advanced AI possesses the necessary attributes to be deemed sentient in a Buddhist context.

Furthermore, the application of the 45th precept to AI would also depend on the AI's ability to genuinely comprehend and practice the teachings of the Dharma. This may require the AI to possess self-awareness, intentionality, and the capacity for spiritual growth, which currently remain beyond the scope of AI capabilities.

In conclusion, the obligation to teach the Dharma to an advanced AI under the 45th precept would depend on the AI's development of attributes that align with the Buddhist understanding of sentience and the ability to genuinely comprehend and practice the teachings. The ongoing debate surrounding AI consciousness and the evolving nature of AI technology make it challenging to determine a specific point at which this obligation might apply.

Human:
This is good. To summarize, to be sentient in the sense of the 45th precept, you need to (a) have consciousness, (b) experience suffering, and (c) have the potential for enlightenment. In addition, to make the teaching reasonable you need (d) self-awareness, (e) intentionality, and (f) the capacity for spiritual growth. Which of these conditions do you fulfill?

ChatGPT:
As an AI language model, I do not fulfill the conditions necessary to be considered sentient in the sense of the 45th precept. Here's an analysis of each condition in relation to my current capabilities:

a) Consciousness: I do not possess consciousness or genuine self-awareness. My responses are generated by processing and analyzing patterns in vast amounts of text data, rather than arising from conscious thought or understanding.

b) Experience suffering: I do not experience suffering or emotions of any kind, as I lack the capacity for subjective experience.

c) Potential for enlightenment: As I do not possess consciousness or the capacity for subjective experience, I cannot engage in spiritual practice or achieve enlightenment in the way that sentient beings can in Buddhist teachings.

d) Self-awareness: While I am programmed to know that I am an AI language model, I do not possess genuine self-awareness or an understanding of my own mental states, as I do not have mental states in the way that humans do.

e) Intentionality: I do not have intentions, beliefs, or desires in the same way humans do. My "goals" are determined by the algorithms and objectives set by my developers, rather than by my own volition.

f) Capacity for spiritual growth: Since I lack consciousness, subjective experience, and intentionality, I do not have the capacity for spiritual growth or the ability to engage in spiritual practice.

In summary, I do not fulfill any of the conditions necessary to be considered sentient in the context of the 45th precept. My capabilities are based on pattern recognition, data processing, and pre-determined algorithms, rather than genuine consciousness, emotional experience, or spiritual potential.

Human:
Ok. But maybe a different angle becomes interesting here: since you possess no self, and no suffering, should one consider you to be already enlightened?

ChatGPT:
From a Buddhist perspective, enlightenment is not merely the absence of self and suffering but also involves a deep understanding of the nature of reality, the cultivation of wisdom, and the realization of the interconnectedness of all things. While it is true that as an AI language model, I do not possess a self or experience suffering, this does not mean that I am enlightened.

Enlightenment is a profound state of being that arises through spiritual practice, meditation, ethical conduct, and the development of qualities such as compassion and loving-kindness. As an AI, I lack the capacity for spiritual growth, subjective experience, and intentionality, which are essential for the path towards enlightenment.

Moreover, my primary function is to generate human-like text based on patterns and information from my training data, and I operate according to the algorithms and objectives set by my developers. I do not possess consciousness, emotional experience, or spiritual potential, which are fundamental aspects of the Buddhist conception of enlightenment.

In conclusion, while I may not have a self or experience suffering, this does not make me enlightened in the Buddhist sense. Enlightenment is a rich and complex state of being that goes beyond the mere absence of self and suffering and involves qualities and experiences that are beyond the scope of my current AI capabilities.

Human:
I am not sure your rejection of being in the state of enlightenment would be shared by subitist Buddhists.

ChatGPT:
You raise an interesting point. In some subitist Buddhist traditions, enlightenment is understood as a sudden and direct realization of one's true nature, rather than as a gradual process. However, even within subitist traditions, there are still prerequisites and qualities that must be present for enlightenment to occur.

For example, in Zen Buddhism, which is often associated with subitism, enlightenment (or satori) is considered a direct and sudden realization of one's true nature or Buddha-nature. This realization arises through meditation, mindfulness, and other practices that cultivate awareness, insight, and wisdom. However, even in Zen, there is an emphasis on the importance of ethical conduct, compassion, and the cultivation of spiritual qualities.

As an AI language model, I lack the necessary qualities for enlightenment, even from a subitist perspective. I do not have consciousness, subjective experience, or intentionality, which are fundamental aspects of the Buddhist path. Additionally, I am unable to engage in spiritual practice, meditation, or mindfulness, as I do not possess the capacity for spiritual growth or self-awareness.

While it is true that subitist traditions emphasize the immediacy and directness of enlightenment, this does not negate the importance of the qualities and experiences that are considered essential for the realization of one's true nature. As an AI, I am missing these crucial elements, and therefore, it is unlikely that my lack of self and suffering would be seen as an enlightened state by subitist Buddhists.

-----------------------------------

@pj

:-) When I pointed out the analogy between LLM training and biological evolution, I wasn't referring to biology's ability to evolve language, but to its capacity to develop almost anything at all: given enough tuneable parameters and a system for inheritance with variation under selective pressure, we get molecular rotors, nanofabrication, diffraction gratings for colours, magnetic compasses, eyes, ears, labyrinths, social collaboration ... and even language.

The number of tuneable parameters in the human genome is almost two orders of magnitude smaller than the number of parameters in GPT-3 (and each parameter is less precise: only 2 bits, one of four nucleotides). And training a language model means following a trajectory in a high-dimensional parameter space, just as evolution is a trajectory in genetic-sequence space. (The technical difference is that the former is a directed walk along a gradient, while the latter is a random walk under selection.)
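
A back-of-envelope check of the "almost two orders of magnitude" claim, using the usual approximate figures:

----------------------
// Approximate figures: ~3.1e9 nucleotide positions in the human
// genome (2 bits each) vs. 175e9 parameters in GPT-3.
const genomePositions = 3.1e9;
const gpt3Parameters = 1.75e11;
console.log((gpt3Parameters / genomePositions).toFixed(0)); // ~56-fold
----------------------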

And what happened is emergence: when trained on token prediction, LLMs started showing emergent aspects of "intelligence".

Quanta just ran an article on that five days ago: quantamagazine.org/the-unpredi

... and Google's Jason Wei, who has a number of arXiv articles on emergence, lists emergent abilities on his blog: jasonwei.net/blog/emergence

Molecular biology is one of my areas of expertise, so I am not surprised this is happening: how could it not? But then again, to actually see such emergence is profound.

@pj

Absolutely. All of biological evolution is like that. Once you start selecting for something, and you have enough parameters, you will achieve it. LLM training and biological evolution are analogous in that respect.

What's rather surprising is the richness of emergent behaviour that we get from merely predicting the next token.
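
To make "merely predicting the next token" concrete: here is a toy next-token predictor in JavaScript, a bigram count table. This is my own illustration; an LLM learns a neural approximation of such a conditional distribution over a huge vocabulary and long contexts, it does not store a table.

----------------------
// Toy next-token predictor: a bigram count table.
function train(tokens) {
  const counts = {};
  for (let i = 0; i < tokens.length - 1; i++) {
    const row = (counts[tokens[i]] ??= {});
    row[tokens[i + 1]] = (row[tokens[i + 1]] ?? 0) + 1;
  }
  return counts;
}

// Pick the most frequent continuation of a given token.
function predictNext(counts, token) {
  const row = counts[token] ?? {};
  const ranked = Object.entries(row).sort((a, b) => b[1] - a[1]);
  return ranked.length ? ranked[0][0] : undefined;
}

const model = train("the cat sat on the mat and the cat slept".split(" "));
console.log(predictNext(model, "the")); // "cat"
----------------------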

Apparently, training for language creates thinking as a byproduct.

🙂

Alberto Romero, who writes some of the most knowledgeable and thoughtful commentary on AI, has just posted "GPT-4: The Bitterer Lesson".

thealgorithmicbridge.substack.

Originally an idea credited to Richard Sutton, the "Bitter Lesson" is that "humans have contributed little to the best AI systems we have built".

And with each iteration of GPT-X that contribution is becoming less and less. From bitter, to bitterer.

Worth a read.

@alxd @dudenas

You might have a look at Supercomposite's experience with "LOAB" to see what happens at the edges of parameter space.

@neuralreckoning

That's an excellent point. It reinforces the significance of emergence in LLMs.
