
@mariyadelano

That's interesting - we know that ChatGPT tends to compose "Schrödinger Facts" (sentientsyllabus.substack.com/), but I would have thought that it should be pretty straightforward to fix that by intersecting its statements with sources. As you noticed, getting that right seems to take more than just putting keywords into a Bing search.
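To make it concrete why that takes more than a keyword search, here is a minimal sketch of what "intersecting statements with sources" would involve; all names are hypothetical stand-ins - `search` for a retrieval API and `entails` for a semantic entailment model, neither of which comes for free:

# A minimal sketch of statement-source intersection. `search` and `entails`
# are hypothetical stand-ins for a retrieval API and an entailment model.

def verify_statement(statement, search, entails):
    # Step 1: retrieve candidate source passages. This is the easy part,
    # and the part a bare keyword query covers.
    passages = search(statement)

    # Step 2: check whether any passage actually supports the statement.
    # This requires semantic entailment, not keyword overlap - which is
    # where "just putting keywords into a Bing search" falls short.
    evidence = [p for p in passages if entails(premise=p, hypothesis=statement)]

    return {"statement": statement, "supported": bool(evidence), "evidence": evidence}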

Thanks!

@evedazzle

Thanks - that's indeed well written, and Lance Eaton raises many good questions; I particularly like that he thinks through concrete policy for the classroom.

You might be interested in the analyses over at the Sentient Syllabus project sentientsyllabus.substack.com where I also aim for a principled perspective on the many questions.

"A boat sailing on an ocean while birds fly across the sun peeking through clouds in the background"

Rectangles, lines and arcs on <canvas>

ChatGPT 2023

#ChatGPT #DigitalArt #ML #AI #Boat

@ThingsDrawnByTextModels

This is very cool - and surprisingly pleasing. Also thanks for posting the JSFiddle with the conversation and the code!

I did notice that you had initially broken the creation up into parts, and I wondered to what degree the model would be able to independently parse the objects and define their relative locations. So I asked it to create SVG code directly from your prompt. I put the result into an HTML wrapper, since I had to add a light blue background; otherwise the white clouds would have been invisible. Here is the full HTML, including the prompt:

<html>
<head>
<style> body { background-color: lightblue; } </style>
</head>
<body>

<!-- Prompt: Please write the code for an svg that shows the following scene: "A boat sailing on an ocean while birds fly across the sun peeking through clouds in the background". -->

<svg width="500" height="500">
<!-- Background with ocean (the ocean, boat and sun fills are representative
     reconstructions; the clouds are white, as noted above) -->
<rect x="0" y="400" width="500" height="100" fill="#1E90FF"/>

<!-- Boat -->
<polygon points="250,450 200,400 300,400" fill="#8B4513"/>
<rect x="225" y="400" width="50" height="50" fill="#8B4513"/>

<!-- Sun -->
<circle cx="70" cy="70" r="40" fill="yellow"/>

<!-- Clouds -->
<path d="M100,100 Q120,80 140,100 Q160,80 180,100 Q200,80 220,100" fill="white"/>
<path d="M180,50 Q200,30 220,50 Q240,30 260,50 Q280,30 300,50" fill="white"/>

<!-- Birds -->
<path d="M120,110 Q140,120 140,140 Q140,160 120,150 L110,140 Q100,120 100,140 Q100,160 110,150" fill="#000000"/>
<path d="M280,110 Q300,120 300,140 Q300,160 280,150 L270,140 Q260,120 260,140 Q260,160 270,150" fill="#000000"/>
</svg>

</body>
</html>


@georgetakei

Leo Szilard, one of the nuclear physicists who raised their voices against the atomic bomb, formulated just such a set of ten commandments for the modern world in 1961.

en.wikiquote.org/wiki/Le%C3%B3

My favourite of those is number four: "Do not destroy what you cannot create." To this day it remains, for me, the single most memorable thought to have shaped my views on sustainability and the need to protect the environment when I was a student.

@emma

Emma - while reading Crawford and Paglen's essay that you posted and looking into some of the background, I came across a rebuttal by Michael Lyons (2021), one of the creators of the JAFFE dataset, which refutes Crawford and Paglen's assertions in detail and points out significant ethical problems with their own practice.

hal.science/hal-03321964/docum

This is very concerning.

Trying to get the issue of academic misconduct right leads to a lot of questioning.

I just posted an analysis of AI-generated writing over at:
sentientsyllabus.substack.com/

My key point is that we need a principled and deep understanding of academic integrity, and that can be provided by a commitment to truth. Accordingly, the source of misconduct is "deceit", not the nature of the text itself. We need to keep in mind, however, that we are unlikely to be able to detect all such deceit. Therefore we need to build a collaborative framework of education. Creating an academic work ought to be as enjoyable as writing your own diary.

@wzuidema

We expect authors to be transparent about "to what extent" AI technologies were used. In my analysis I proposed some qualitative language, but whether that provides enough transparency can be questioned. It would not make the actual flow of ideas explicit or verifiable - that is what I mean by "documenting process"; a small point, but central to the debate.

In the absence of verifiability, all we have is trust.

Realizing that has its own implications.

🙂

@cjbooker

"[...] on the ScienceQA benchmark and even surpasses
human performance."

It is hard to escape the feeling that we are in a phase of "accelerating returns", the kind of exponential dynamics that Kurzweil has been speaking of.

Thanks!

GPT-3.5 is already a fossil. This does a significantly better job with less compute: Multimodal Chain-of-Thought Reasoning in Language Models

arxiv.org/abs/2302.00923

#AI #ChatGPT #GPT #LLM

Multimodal Chain-of-Thought Reasoning in Language Models

Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies are mostly isolated in the language modality with LLMs, where LLMs are hard to deploy. To elicit CoT reasoning in multimodality, a possible solution is to fine-tune small language models by fusing the vision and language features to perform CoT reasoning. The key challenge is that those language models tend to generate hallucinated reasoning chains that mislead the answer inference. To mitigate the effect of such mistakes, we propose Multimodal-CoT that incorporates vision features. The framework separates the rationale generation and answer inference into two stages. By incorporating the vision features in both stages, the model is able to generate effective rationales that contribute to answer inference. With Multimodal-CoT, our model under 1 billion parameters outperforms the previous state-of-the-art LLM (GPT-3.5) by 16% (75.17%->91.68%) on the ScienceQA benchmark and even surpasses human performance. Code is publicly available at https://github.com/amazon-science/mm-cot.
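In outline, the two-stage design is simple. Here is a minimal sketch of the idea, assuming a stand-in `FusedVLM` wrapper - none of these names are from the authors' code (see github.com/amazon-science/mm-cot for the real implementation):

class FusedVLM:
    """Hypothetical stand-in for a small (<1B-parameter) model that
    conditions text generation on fused vision features."""

    def generate(self, text, vision_features):
        # A real model would fuse the vision features with the encoded
        # text and decode a continuation; this stub only illustrates the
        # interface.
        return f"<continuation of: {text[:40]}...>"

def multimodal_cot(model, question, vision_features):
    # Stage 1: rationale generation. The model sees the question *and*
    # the image features and produces a chain of thought, not an answer.
    rationale = model.generate(f"Question: {question}\nRationale:", vision_features)

    # Stage 2: answer inference. The stage-1 rationale is appended to the
    # input, with the same vision conditioning applied again. Grounding
    # both stages in the image is what curbs hallucinated rationales.
    return model.generate(
        f"Question: {question}\nRationale: {rationale}\nAnswer:", vision_features
    )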


We're in Nature with an opinion piece on how researchers should respond to #ChatGPT and conversational AI technology more generally!

It's been an interesting experience to reach consensus in an interdisciplinary team of scholars (2 psychologists, 1 computer scientist, 1 philosopher and me, an NLP-er).

We list 5 priorities:
1. Hold on to human verification
2. Develop rules for accountability
3. Invest in truly open LLMs
4. Embrace the benefits of AI
5. Widen the debate

nature.com/articles/d41586-023

@wzuidema

Great work. Thank you!

You are spot on where you write: "This defies today’s binary definitions of authorship, plagiarism and sources, in which someone is either an author, or not, and a source has either been used, or not. Policies will have to adapt, but full transparency will always be key."

I think this continuum applies to the question of accountability as well (I developed that a bit here: sentientsyllabus.substack.com/). There, I propose to leave the decision of co-authorship to the authors. It is certainly not deceptive to do that, which distinguishes it from gift-, ghost-, and guest-authorship. Transparency is key.

An unresolved implication is the desire to document process. That would be great, but adding another _dimension_ (progress) to linear text is conceptually difficult, and I am not aware of any technical approaches.

Your proposal for non-profit LLMs is interesting, but it will ultimately run up against the same concerns as private-sector LLMs - simply due to the need for significant funding for training and operation. An alternative might be public LLMs, modelled on our public library systems. I have not seen that discussed yet. It is certainly very doable at EU scale.

Thank you for this contribution.

I cannot remember any new digital tool being noticed, tested and discussed by students and teachers as quickly as is happening right now with #chatGPT.

Admittedly, I do not expect that exactly _this_ tool will change everything, but it stands for a new concept whose disruptive potential is immediately apparent. This _concept_ (using language models to write texts, to program, etc.) is the genuinely exciting part. #fedilz

@jon

No, no ... it's not about whether it makes a difference. I see the training data as something of a huge library that contains a significant and growing share of human thought. Seen that way, it is actually quite beautiful. Of course, I share many of your concerns about abuse, but then again – aren't our ideas like our children? We let them into the world to change it on their own terms. We hope that they are principled, and strong, and make a difference - but we could not wish for them to be ignored.

🙂
