Athena -
The conference that you are calling for was actually held in 2017, organized by the Future of Life Institute. It is, in fact, the top Google hit for the question.
https://futureoflife.org/open-letter/ai-principles/
The Asilomar AI Principles have been co-signed by almost 6,000 researchers and professionals in the domain and they are modelled on the principles expressed in the very first Asilomar Conference.
Thank you for drawing attention to this important work.
Dave - I found an interesting comment on economic consequences of AI in a Twitter thread a few days ago (though I forget where). The argument goes as follows.
-------
Company A will reduce their staff by 50% for the same output and save costs.
Company B will keep their staff, increase productivity by 100% and gain market share.
Company B wins.
-------
It's a somewhat different way of thinking about this topic.
The reality will be a mixed scenario, but some version of cost-cutting and increased productivity will play out. It's only Company C – the one that does neither – that loses out.
$0.02
But of course! It makes a wonderful advocatus diaboli for planning and role-play exercises. Let the machine play the part of public opinion reacting to your actions – it can surely do that superbly. Then you can prepare responses, or creatively sidestep the misunderstandings.
@dw@mas.to
Your article is a huge disappointment.
I was hoping for some insight into ways to grow awareness about the crucial human element that sets us apart from AI – after all, this is turning into a defining question of this new era.
Instead we find arguments from "Logical Fallacies 101: Begging the Question" (an AI can't do human things because it is not human), repeated again and again. And it finally ends with the claim that AIs cannot "tug at your emotions" and such. Excuse me – that is exactly what has been happening with ChatGPT: writing that is often bland, sometimes curiously engaging, but at times outright brilliant. You only need to open your eyes.
No, DW - it's not just that you _can_ do better, you _must_ do better. For everyone's sake.
Feel free to chat.
https://sentientsyllabus.substack.com
B.
I know, right? I am appalled by this blatant misinformation from the Guardian, which discredits you and your collaborators in a very significant way and could have tangible, real-world consequences.
I would hope that at the very least they will publish an apology, and I hope you are considering your legal options. This attack on scholarship is not a trivial matter.
Good luck!
In case you read the Guardian article about our #ChatGPT paper, I’d like to clarify that we didn’t actually ‘hoodwink’ anyone! The paper states clearly which parts were written by us and which by ChatGPT. That said, we have enjoyed people’s surprise when they get to the big reveal! https://www.theguardian.com/technology/2023/mar/19/ai-makes-plagiarism-harder-to-detect-argue-academics-in-paper-written-by-chatbot Do read the paper: https://www.tandfonline.com/doi/abs/10.1080/14703297.2023.2190148
Tero - It's nice to see someone echoing my sentiments exactly. Yes, people have had – and are still having – a hard time extrapolating properly in times of dynamic change.
Alan Turing wrote about this in "Computing Machinery and Intelligence", 72 years ago:
"[These statements] are mostly founded on the principle of scientific induction. A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for a minutely different purpose they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general. [...] The works and customs of mankind do not seem to be very suitable material to which to apply scientific induction. A very large part of space-time must be investigated, if reliable results are to be obtained."
In December I founded the Sentient Syllabus Project as an international, public-good collaborative to help academia navigate this new era. My point of departure was exactly as you say: "We would have done better if we had properly imagined life with strong AI everywhere and extrapolated to that."
I invite you to have a look at some of the writing that has come out of it (and feel free to share):
https://sentientsyllabus.substack.com
Cheers -
Boris
I respectfully disagree. Your article claimed that there are fundamental performance limitations of LLMs. I noted that we are in fact not sure about those limitations, because they are difficult to disentangle from their semantic presentation. In particular, I have observed abilities that, in my understanding, would cover many if not all of the higher-order processing abilities you listed. They can be elicited.
First, it is surprising that such emergent abilities appear at all, especially given how these algorithms work: their "world" is based entirely on relationships between abstract symbols, with no grounding in experience, and their training consists merely of predicting the next token in a sequence. That we see higher-order "understanding" emerge under these circumstances is profound. It strongly supports the idea that language and thought are deeply linked: language is not what we think, it is how we think.
Second, what must not be conflated are the ability to reason in principle and the ability to parse human language, with all of its ambiguities, unstated assumptions and discursive conventions. (1)
When you say "explain things to technology", you are referring to the latter: "parsing human language". This is a different question from the ability to "reason". Now, the current dialogical abilities of LLMs are already a radical departure. Remember: previously, this process of "explaining things" was called programming.
Of course there are limitations. Where I disagree is this: these limitations of communicating intent are not the same thing as a lack of reasoning.
The distinction is important because it can lead to significantly improved interactions. Jason Wei of Google has done groundbreaking work on "chain-of-thought prompting", and similar work from Google Brain on "Least-to-most prompting" shows that there are probably many additional strategies to be discovered.
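As a minimal illustration of what such a strategy looks like in practice – the wording and the worked example are my own, not taken from the cited papers – a chain-of-thought prompt simply prepends an example whose answer spells out its intermediate reasoning, then cues the model to reason the same way:

```python
# A minimal sketch of chain-of-thought prompting: prepend a worked
# example whose answer shows intermediate reasoning steps, then ask
# the model to answer the new question "step by step".
# (Illustrative only; the phrasing is my own invention.)

WORKED_EXAMPLE = (
    "Q: Abel is the grandfather of Bertram. How many generations apart are they?\n"
    "A: A grandfather is two generations above a grandchild. "
    "So Abel and Bertram are 2 generations apart.\n"
)

def chain_of_thought_prompt(question: str) -> str:
    """Build a prompt that elicits step-by-step reasoning."""
    return WORKED_EXAMPLE + f"Q: {question}\nA: Let's think step by step."

prompt = chain_of_thought_prompt(
    "Bertram has a grandson, Cyrus. How many generations separate Abel and Cyrus?"
)
```

The resulting string is what gets sent to the model; the point is only that the reasoning cue and exemplar live in the prompt itself, not in the model.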
All the best.
---
(1) One of the more surprising aspects of ChatGPT is that it embodies Paul Grice's "cooperative principle" – the well-known maxims of quantity, quality, relation, and manner.
Your question maps to similar questions about abstract counting abilities. Correct answers are usually elicited with a variant of "step-by-step" prompts. I changed your input slightly because the duplicate name was confusing me more than it confused ChatGPT (in fact, ChatGPT several times anticipated the problem and referred to them as Rusko Sr. and Rusko Jr.). Here is an example of the correct reasoning:
-----------------
Prompt:
Abel is the grandfather of Bertram. Bertram has a grandson who is called Cyrus. This child is my grandfather. How many generations is Abel removed from me?
Response:
If Cyrus is your grandfather, that means you are two generations below Cyrus (Cyrus -> your parent -> you). Since Bertram is the grandfather of Cyrus, there are two generations between Bertram and Cyrus (Bertram -> Cyrus's parent -> Cyrus). Abel is the grandfather of Bertram, so there are two generations between Abel and Bertram (Abel -> Bertram's parent -> Bertram).
To find out how many generations Abel is removed from you, we can add up the generations between each person:
Abel -> Bertram (2 generations) + Bertram -> Cyrus (2 generations) + Cyrus -> you (2 generations) = 6 generations
So, Abel is removed from you by 6 generations.
[I believe this is correct. Now we need abstract counting. Here: how to repeat something six times.]
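[The sum ChatGPT performs above can be mirrored in a few lines – my own sketch, just to make the arithmetic explicit:]

```python
# Sketch: verify the generation count by summing the gap that each
# named relationship contributes ("grandfather" = 2 generations).
GAPS = {"parent": 1, "grandfather": 2, "grandson": 2}

# Abel -> Bertram, Bertram -> Cyrus, Cyrus -> me: each a grandfather link.
chain = ["grandfather", "grandfather", "grandfather"]
total = sum(GAPS[rel] for rel in chain)
print(total)  # 6 generations between Abel and me
```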
Prompt:
If a male ancestor is removed from me by 6 generations, how is this relationship expressed in English? Consider this step by step.
Response:
In English, relationships are described based on the number of generations separating two individuals. When there is a 6-generation gap between you and a male ancestor, you can describe this relationship step by step as follows:
Your parent - This is one generation above you, so your male ancestor at this level would be your father.
Your grandparent - Two generations above you, your male ancestor would be your grandfather.
Your great-grandparent - Three generations above you, your male ancestor would be your great-grandfather.
Your great-great-grandparent - Four generations above you, your male ancestor would be your great-great-grandfather.
Your great-great-great-grandparent - Five generations above you, your male ancestor would be your great-great-great-grandfather.
Your great-great-great-great-grandparent - Six generations above you, your male ancestor would be your great-great-great-great-grandfather.
So, a male ancestor removed from you by 6 generations would be called your great-great-great-great-grandfather.
[Again, I think this is correct. The reasoning is systematic, takes the special cases of father and grandfather correctly into account, and iterates the required number of times.]
Do you agree that this is a correct solution and that it illustrates the ability of the LLM to reason about generations and their labels?
(ChatGPT-4, 2023-03-14)
*copywriter
🙂
Absolutely! Check out what I have been writing on #AI and #Academia at https://sentientsyllabus.substack.com/ , and the resources at http://sentientsyllabus.org , and if this is useful, get in touch.
We are at a pivotal time in history. Text, images, sound, movies cannot be trusted any more because they can be faked perfectly. At the same time trust in institutions is declining, also trust in scientific institutions. Time for education to step up. This should be our wake-up call #ChatGPT
You asked: "if a =1, b=2 what is c ..." and then noted that #ChatGPT refused to answer this as an ill-defined question. And then you said "Our jobs are safe".
Hm.
My good students can answer questions. My excellent students find two answers. Then they come back and give me three more. And if they are wrong, they know how to fix things. My outstanding students can critique the question itself.
Here is #GPT4. See for yourself.
---------------------
PROMPT:
Here is a puzzle: if a = 1, b = 2 then what are c and d equal to? Please give me 5 correct answers with distinctly different reasoning and each giving a different but reasonable result for c and d. Then tell me whether this is a good question.
GPT4:
Answer 1:
Reasoning: Alphabetical sequence
If we are following the alphabetical sequence where each letter corresponds to its position in the alphabet, then:
c = 3, d = 4
Answer 2:
Reasoning: Doubling the previous number
If we are doubling the value of the previous letter, then:
c = 2 * 2 = 4, d = 4 * 2 = 8
Answer 3:
Reasoning: Fibonacci sequence
If we are following the Fibonacci sequence, where the next number is the sum of the two preceding ones, then:
c = a + b = 1 + 2 = 3, d = b + c = 2 + 3 = 5
Answer 4:
Reasoning: Arithmetic progression
If we are following an arithmetic progression, where each term is a constant difference from the preceding one, other than the difference of 1 in the given sequence (a = 1, b = 2), let's consider a difference of 3:
c = 2 + 3 = 5, d = 5 + 3 = 8
Answer 5:
Reasoning: Squares of natural numbers
If we are considering that the given values correspond to the squares of consecutive natural numbers, then:
c = 2^2 = 4, d = 3^2 = 9
Regarding the quality of the question, it is an open-ended puzzle that allows for multiple interpretations and solutions. This kind of question can be good for stimulating creative thinking and problem-solving skills. However, if you are looking for a single, definitive answer, then this question might not be the best choice, as it can be ambiguous without specific context or additional information.
PROMPT:
But how do you get a = 1 and b = 2 in your answer 4?
GPT4:
Apologies for the confusion. I made an error in my explanation for Answer 4. Here is a corrected version:
Answer 4:
Reasoning: Arithmetic progression with a constant difference
If we are following an arithmetic progression, where each term is a constant difference from the preceding one, other than the difference of 1 in the given sequence (a = 1, b = 2), let's consider a common difference of 1 between two consecutive terms, but with alternating signs (1, -1, 1, ...):
c = 2 - 1 = 1, d = 1 + 1 = 2
In this case, the sequence would be: 1, 2, 1, 2, ...
-----------------------
I don't know about your job. I gather it is organized around essay writing and reports – which generative AI does very well. What I do know is that in my job, more than a little needs to change, and quickly, to keep not only the job "safe", but the entire framework in which this job makes sense at all.
🙂
https://sentientsyllabus.substack.com/p/becoming-fibrous-gpt-4
New post for #SentientSyllabus, first impressions of #GPT4 via the #ChatGPT interface. Focus on consequences for #academia.
https://sentientsyllabus.substack.com/p/becoming-fibrous-gpt-4
Includes a summary of #OpenAI's live stream, first experiments with improved introspection, and a well-done text problem, and perspectives on the new features.
What's important for #HigherED?
(1) GPT-4 is good. Really. We still need to pursue an "An AI cannot pass the course" policy – but this has now become quite a bit harder. I am _not_ talking about making lectures AI-proof, I am talking about how we can teach students to surpass that level. This is a challenge.
(2) GPT-4 is good. That means it's also more fun to work with. Collaborative problem solving and tutoring are going to make for some very effective learning.
(3) GPT-4 is good. The (coming) ability to include persistent prompts ("system messages") is a game changer. This is not widely recognized, but it is a step to solve the #alignment problem, replacing authority with individual control.
There. I've said it three times... Have a look at the article, share and boost if original content about AI and academia is useful to you.
🙂
Yes - that's what I mean. The task list is not the instructions. Unless you refresh the instructions, they will get washed out, and it will start hallucinating whatever it is missing.
OMG, I’m having so much fun having a conversation in #Chinese with #ChatGPT acting as my Chinese teacher! 🤩
I had completely given up practicing Chinese these last 2 years. But yesterday, despite being rusty as F, I thought that maybe I should give it another go through #GPT4.
I am *not* disappointed.
#learnChinese #newParadigm
A good approach is to ask it to end every response with a codeword that you define in your prompt. That way you immediately notice when the prompt starts sliding out of the context-window and you'll need to refresh it.
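A sketch of the check itself (the codeword and the wording of the rule are my own invention):

```python
# Sketch of the codeword check: if the model's reply no longer ends
# with the agreed codeword, the instruction block has likely slid out
# of the context window and should be re-sent.
CODEWORD = "BLUEFIN"  # any token unlikely to occur by chance

def instructions_still_active(reply: str, codeword: str = CODEWORD) -> bool:
    """True if the reply still honours the 'end with the codeword' rule."""
    return reply.rstrip().rstrip(".!").endswith(codeword)

instructions_still_active("Here is the summary ... BLUEFIN")     # True
instructions_still_active("Here is the summary, as requested.")  # False
```

The same test works manually, of course: you just glance at the end of each reply, and refresh the prompt the moment the codeword disappears.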
Nice idea.
I just gave #chatgpt (the new version) 51 multiple choice questions from my midterm exam in #family #sociology and it got 49 of them right, or 96%. (The two it got wrong are debatable.) We covered historical demographic trends, sociological theories, definitions of race and gender concepts. I'll give some examples after my students take the test. Last year the students averaged 80%.
I'm completely fascinated by GPT-4.
In the live demo just now, within less than half an hour, the developer had GPT-4
- summarize a lengthy document (new context-window size: 32,000 tokens!)
- write a Python Discord bot from scratch, run it in a Jupyter notebook, debug it by simply pasting error messages into the GPT-4 dialogue, update its knowledge base by pasting the module documentation right from the website ... and get a working bot!
- use the bot to read images and interpret them in detail
- build a working website prototype complete with JS and CSS from a hand-drawn sketch
- and ingest 16 pages of tax code to perform a tax return calculation.
-----
This is amazing.
#ChatGPT #GPT-4 #generativeAI #Sentientsyllabus #HigherEducation #Academia