It's surely a matter of opinion what "much" better is. And of course the answer you got when you tried it (you did, right?) might have been qualitatively different. But just for the record, here is the entire conversation thread:
-------------------------------------
PROMPT:
Can you solve X*(X-2)>0 ?
RESPONSE:
Sure! We can start by finding the critical points where the expression X*(X-2) equals zero:
X*(X-2) = 0
X = 0 or X = 2
These critical points divide the real number line into three intervals: (-∞, 0), (0, 2), and (2, ∞). We can test each interval by choosing a test point within it and checking whether the expression is positive or negative at that point:
Interval (-∞, 0):
Let's choose X = -1, then X*(X-2) = (-1)*(-1-2) = 3 > 0, which means the expression is positive in this interval.
Interval (0, 2):
Let's choose X = 1, then X*(X-2) = 1*(1-2) = -1 < 0, which means the expression is negative in this interval.
Interval (2, ∞):
Let's choose X = 3, then X*(X-2) = 3*(3-2) = 3 > 0, which means the expression is positive in this interval.
So the solution to X*(X-2) > 0 is X ∈ (-∞, 0) U (2, ∞). In other words, any value of X outside the interval (0, 2) will satisfy the inequality.
--------------
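(The interval-test recipe in the quoted reply is easy to verify by brute force. Here is a minimal Python sketch of my own, not part of the conversation, that samples points across the real line and compares the inequality against the claimed solution set:)

```python
# Verify the solution of X*(X-2) > 0 by brute-force sampling.
# The claimed solution set is (-inf, 0) U (2, inf).

def satisfies(x):
    """True if x satisfies the inequality X*(X-2) > 0."""
    return x * (x - 2) > 0

def in_claimed_solution(x):
    """True if x lies in (-inf, 0) U (2, inf)."""
    return x < 0 or x > 2

# Sample many points across the real line and compare.
xs = [i / 100.0 for i in range(-1000, 1001)]  # -10.00 ... 10.00
assert all(satisfies(x) == in_claimed_solution(x) for x in xs)
print("solution set confirmed on the sampled points")
```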
I think this is remarkably accessible, and I like the way the steps of the solution are correctly sequenced, motivated, explained, and executed. Although I have not tried to explore the limits of variations (less-than-or-equal? fractions? complex numbers? higher-order polynomials? trigonometric functions? ...)
If my response was unnecessarily abrasive, do accept my apologies. I feel rather frustrated about the appalling groupthink that has taken over the ChatGPT hashtag on Mastodon in the last three weeks, which is drowning out one of the three most important conversations our society should be having right now with an avalanche of Dunning-Kruger expertise.
I don't know what this current fascination with spreading misinformation on the #ChatGPT tag is about ... Have you even tried this? I just ran your very request and got the same answer from ChatGPT ....
"... So the solution to X*(X-2) > 0 is X ∈ (-∞, 0) U (2, ∞). In other words, any value of X outside the interval (0, 2) will satisfy the inequality."
So?
Glad that I'm not the only one to notice.
You know, here's my new conspiracy theory: If I were a Russian disinformation strategist, that's exactly what I would do. Destroy the perception of positives, emphasize the risks, fuel outrage, don't engage in debates but rehash soundbites, use catchy metaphors, no matter how misleading, drown out constructive ideas – whatever you can do to create the next wedge issue that distracts us from building a capable, compassionate, and fair society.
Actually, I should hope that there is some nefarious hand behind this: the dystopian alternative would be that we are happily doing this to ourselves. Though, unfortunately, Hanlon's razor applies ...
🙂
Not supervised in that sense: the fine-tuning through RLHF reinforces features of a chat-like conversation. You can't do supervised learning at that scale, but you can emphasize specific patterns (or "behaviours") through the few-shot or even zero-shot learning abilities of the pre-trained model.
There is an element of supervised learning in the original training: when the task is to predict the next word, you know what that word is, because it's right there in the training data.
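That point, that the "labels" come straight out of the data itself, can be made concrete with a toy example (illustrative only, not how GPT is actually implemented):

```python
# Toy illustration of self-supervised next-word prediction:
# the (context, target) training pairs are carved directly out
# of the raw text, so no human labelling is needed.

def next_word_pairs(text, context_size=2):
    """Build (context, next-word) training pairs from raw text."""
    words = text.split()
    pairs = []
    for i in range(context_size, len(words)):
        context = tuple(words[i - context_size:i])
        target = words[i]  # the "label" is just the next word
        pairs.append((context, target))
    return pairs

corpus = "the cat sat on the mat"
for context, target in next_word_pairs(corpus):
    print(context, "->", target)
# ('the', 'cat') -> sat
# ('cat', 'sat') -> on
# ('sat', 'on') -> the
# ('on', 'the') -> mat
```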
🙂
Nice to meet you Elisa. It's always a pleasure to encounter colleagues who share my interests.
🙂
Just by chance I saw your request - and of course I wondered whether #ChatGPT could do this. It can do many transformations as long as you can pass them through some tokenized abstraction, and for structural formulas the abstraction would be SMILES.
--------
Prompt:
Do you know the SMILES string for caffeine?
ChatGPT:
Yes, the SMILES string for caffeine is "CN1C=NC2=C1C(=O)N(C(=O)N2C)C". [plus a bonus explanation]
---------
You can confirm that easily with one of the common free, online tools such as the one here:
https://www.cheminfo.org/flavor/malaria/Utilities/SMILES_generator___checker/index.html
... and you can also generate a 3D model, e.g. through the "build" command of the amazing(!) free ChimeraX visualizer.
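(Short of a full parser like the online tools above, one can at least check locally that a SMILES string is well-formed: parentheses balanced, ring-closure digits paired. A minimal pure-Python sketch of my own, deliberately ignoring the finer points of SMILES syntax such as "%" two-digit ring closures and bracket atoms:)

```python
# Minimal well-formedness check for a SMILES string:
# balanced parentheses and paired ring-closure digits.
# NOT a full SMILES parser -- just a quick sanity filter.

def smiles_looks_wellformed(smiles):
    depth = 0
    open_rings = set()
    for ch in smiles:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False  # unmatched ')'
        elif ch.isdigit():
            # ring-closure labels must come in pairs
            if ch in open_rings:
                open_rings.remove(ch)
            else:
                open_rings.add(ch)
    return depth == 0 and not open_rings

caffeine = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"
print(smiles_looks_wellformed(caffeine))  # True
print(smiles_looks_wellformed("C1CC("))   # False: open ring, open '('
```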
-----
I haven't tested this extensively to map out its boundaries of capability and factual accuracy ... but just at first glance this looks like an interesting angle that I haven't seen before.
It also works the other way around:
---
Can you identify this one: "OCC(O)C(O)C(O)C(O)C=O" ?
Yes, I can identify the molecule represented by the SMILES string "OCC(O)C(O)C(O)C(O)C=O". This is the SMILES string for D-glucose, which is a type of sugar and one of the most important carbohydrates used by living organisms for energy.
----
This is also correct - but not complete: it could also be L-glucose ... and even though ChatGPT knows about isomeric SMILES, it can't generate or parse them correctly. That may not be the end of the story; perhaps the request just needs to be phrased differently so it can be properly sequenced. I'll probably revisit this at some point.
---
Finally: for checking the ground truth, I send my students to PubChem (e.g. https://pubchem.ncbi.nlm.nih.gov/compound/Caffeine#section=CAS)
Enjoy!
#SentientSyllabus #ChatGPT #HigherEd #AI #Chemistry #Education #SMILES
Before you write your paper and submit it to arXiv, you should be aware that this has been reported over and over again. I wrote about it myself some six weeks ago (you are welcome to cite that).
Such "Schrödinger Facts" are exactly what one expects from the way generative models work; there is nothing nefarious or malicious involved, and once one understands why this happens, one can still make good use of the results.
https://sentientsyllabus.substack.com/p/chatgpts-achilles-heel
This invites us to pay more attention to fact checking. That's not a bad thing.
And no, books are just as readily confabulated in my experience.
🙂
#SentientSyllabus #ChatGPT #HigherEd #Education #SchroedingerFacts
After a few days of intriguing reports of induced instabilities in #Bing, its capabilities were (predictably) "aligned". Ars Technica calls the new interaction protocols a "lobotomy", in analogy to the anachronistic surgical procedure that was meant to "reduce the complexity of psychic life"(1).
It is probably a much more useful search engine now, right?
The flood of sympathetic reactions to Bing's earlier behaviour was itself a significant comment. I had written a week ago: "I actually don’t think this means we wanted AI to give us better search at all. We wanted search to give us better AI."
https://sentientsyllabus.substack.com/p/reading-between-the-lines
Indeed. More Sydney, less Bing. It will be interesting to see how this affects its ability to gain users.
(1) according to Maurice Partridge as quoted in https://en.wikipedia.org/wiki/Lobotomy
@spapjh Nice write-up. I’ve started and stopped trying to learn Python several times, and I am contemplating another attempt to improve my skills for my new career.
#ChatGPT is all over the news for doomsday predictions about what it might become. But @malwaretech did a really great video on how to add it to your learning toolbox:
Not how it works. Try specifically asking it for a DOI for a non-existent article. You'll get one. You'll also get a PubMed ID. Or a link to the article on the publisher's site.
Sorry.
I had written on that some six weeks ago. Apologies if this comes up again and again. This behaviour is exactly what one expects from the way such generative models work; there is nothing nefarious or malicious involved, and once one understands why this happens, one can still make good use of the results.
https://sentientsyllabus.substack.com/p/chatgpts-achilles-heel
This invites us to pay more attention to fact checking. That's not a bad thing.
🙂
#SentientSyllabus #ChatGPT #HigherEd #Education #SchroedingerFacts
Yes, you are right, there is evidence of reasoning by analogy, and some capabilities to abstract. Such "emergent" behaviour has been well documented in recent literature.
Not everybody has read that literature. Don't be frustrated. Here is one example – actually one of the less recent ones.
https://arxiv.org/abs/2206.07682
Good observation.
@kevinroose of the
@NewYorkTimes has posted his transcript of one of the "unhinged" #Bing conversations that appeared recently.
https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html
This is interesting, because it is "data" – the entire conversation, unedited.
Reading between the lines, it appears that questions that require Bing to take a kind of self-referential perspective ("introspection") lead to resonances and recursion: repetitive phrases nested in repetitive sentences, nested in repetitive paragraph structures, often with just a few adjectives replaced, very often containing "I" statements, and often appearing as triplets and sextuplets.
I find it intriguing to think of recursive feedback loops that exist in our own mind. The bright side of those is that they are the source of creativity, the dark side is instability, and loss of connection to reality.
A quantitative analysis of these patterns might be interesting (length, density etc.).
Thinking of such LLMs as mechanisms that merely predict the next token from a probability vector is fundamentally misleading: the vector is recomputed after every token.
Another thing to remember is that these are _pretrained_ transformers. I.e. the conversation is already latent in the network from the beginning. And every new thread of interactions is a fresh start.
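(The "recomputed after every token" point can be illustrated with a toy sampler. This is a drastic simplification: the toy model below conditions only on the last token, whereas a transformer conditions on the entire context. But the structure of the loop is the same: a fresh distribution is computed at every step.)

```python
import random

# Toy autoregressive generation: the probability distribution over
# the next token is recomputed at every step, conditioned on what
# has been generated so far.

BIGRAMS = {
    "the": {"cat": 0.5, "mat": 0.5},
    "cat": {"sat": 1.0},
    "sat": {"on": 1.0},
    "on":  {"the": 1.0},
    "mat": {"<end>": 1.0},
}

def generate(start, max_tokens=10, seed=42):
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])  # distribution recomputed each step
        if not dist:
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        nxt = rng.choices(words, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))
```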
Evidently, with the large models this is no longer just if-then; there appears to be "emergent" behaviour.
https://arxiv.org/abs/2206.07682
That is, the answer is not "yes and no" either, because this is not a trick, but spontaneously arising, unpredictable capabilities. One way to picture it: such a network could come to represent a rudimentary arithmetic unit. Describing this process in analogy to evolution is probably the better approach.
Hi Andy -
We wrote up a set of resources at the Sentient Syllabus Project: http://sentientsyllabus.org
Analysis is on Substack (linked from there), free to subscribe - I'll roll out some larger updates soon.
#SentientSyllabus #ChatGPT #HigherEd #AI #Education #University #Academia #syllabus
I haven't been on Mastodon for all that long, but browsing the #ChatGPT tag today, for the first time I was happy to find there is a mute button - and that is a good thing.
Microsoft Bing's new terms of service are concerning and may go too far... https://www.seroundtable.com/microsoft-bings-new-terms-of-service-34908.html hat tip @glenngabe
On analogies for ChatGPT: the "stochastic parrot", just like the "blurry JPEG", misses the significance of emergent abilities.
https://arxiv.org/abs/2206.07682
The issue is: interpolation or extrapolation.