
@SusanMaury

After reading Burnett's post, I was a bit disappointed that he did not seem to distinguish between _emotions_, specifically, and _affects_, more generally. I thought he was missing part of the richness of our subjective experience, such as intuition, or specifically the sense of "understanding". He is probably using the term metonymically, so it doesn't matter for his argument; but I find the distinction interesting because - that's how I think about it - it points to _older_ affects, like love and fear, that are linked to our physiology, and to more _recent_ affects, like confusion and understanding, that are not.

I was not sure how to read the WHY. Did you mean "why we think emotions are important" or "what was it that triggered a response that must not be filtered out"?

@sterlingericsson

The interesting question is whether the model works better than simply a probabilistic sampling of a large MSA. No controls were reported. Hm.

Love this piece on the role of #emotion in #science & #research!! So many thoughts, but YES decision & action are inextricably tied to #emotions, including for scientists. When people talk about rational #logic they often assemble 'facts' to confirm their own emotion-based biases. Acknowledging the role of emotions in research improves our science. @BPSOfficial bps.org.uk/psychologist/emotio

@SusanMaury

Interesting take. I wonder if it is important in this context to realize that emotions are but a subset of our subjective experience (the subset that is coupled to significant physiological responses, and thus presumably "older" in evolution). But for sure, the role of subjectivity in science is underappreciated. We hear so often that "science is objective" - but it is not: it is "shared subjectivity" (which is actually far more interesting).

Your comment on our major mode of thinking being a rationalizing justification of intuitions is spot on. I wrote on that over the past few days and have now posted it for the Sentient Syllabus Project, where I analyze why it is so hard to get the question of AI authorship right. That mechanism plays a major role.

sentientsyllabus.substack.com/

The dangers of loyalty in the workplace:

New research finds that, “instead of protecting or rewarding them, loyal employees are selectively and ironically targeted by managers for exploitative practices.”

🔏 doi.org/10.1016/j.jesp.2022.10

Two quotes follow: 🧵👉

#Science
#Psychology
@psychology
#SocialPsychology
#SocialScience
@socialpsych
#OrgBehavior
@orgbehavior
#Loyalty
#Exploitation
#WorkplaceRelations
#Ethics

@vwang93@mstdn.science

Good on you! We need more thinking on science and values.

I was thinking a lot about values while writing my recent analysis for the Sentient Syllabus Project:

Getting AI authorship right is harder than one would think.

sentientsyllabus.substack.com/

Key takeaways include: that arguments based on our usual criteria of /contribution/ and /accountability/ are brittle; that the problem lies with authorship being a vague term (cf. sorites paradox); that we are using posterior reasoning to justify our intuitions; and that reliable intuitions about the actual nature of the emergent(!) source-AI-author system need more work. A practical policy proposal rounds it off: empower the authors, use meaningful acknowledgements, quantify contributions.

@Renshaw01

Thorp's blanket ban on any text generated by AI, his non-standard definition of "plagiarism", and the inconsistencies between the policy and the ICMJE rules (to which Science is a party) are certainly raising eyebrows.

I just posted our analysis of AI authorship over at the Sentient Syllabus substack: sentientsyllabus.substack.com/ - though this is not a direct response to the recent editorial policy. Apparently, it's harder than one would think to get this right.

Getting AI authorship right is harder than one would think.

I just posted an analysis of AI authorship over at the Sentient Syllabus substack:
sentientsyllabus.substack.com/

Key takeaways include: that arguments based on our usual criteria of /contribution/ and /accountability/ are brittle; that the problem lies with authorship being a vague term (cf. sorites paradox); that we are using posterior reasoning to justify our intuitions; and that reliable intuitions about the actual nature of the emergent(!) source-AI-author system need more work. A practical policy proposal rounds it off: empower the authors, use meaningful acknowledgements, quantify contributions.

Reuters:

At a time when Google and Bing are gearing up for the tech showdown of the decade ...

they are laying off five and six percent of their (human) workforce.

reuters.com/business/google-pa

@margreta

This is lovely.

I am intrigued that you make some of the same points I cover in my recent post for the Sentient Syllabus project – but in such an engaging way 🙂 . Of course, education is education, whatever the age - but your tardigrade makes me wonder about the specifics, and the impact of AI on different developmental stages.

Thank you!
sentientsyllabus.substack.com/

The recent debate in Norway about ChatGPT made me revive my old blog and write a text about school as a tardigrade. We need to kill it before AI does. Maybe it is relevant for other countries as well, so I have translated it into English. margreta.wordpress.com/2023/01
#AI #education #school #chatgpt

@readysaltedcode

I've somewhat refined my position on that over the past few days while writing a post for the Sentient Syllabus project. There's a risk of conceptual commitment if we start with the critique. You might find the post (and the various other resources) useful:

sentientsyllabus.substack.com/

@joern_reinhardt

Completely agree. Fragile technology. Wrong thinking. Here's some alternative thinking. (A bit of a deep dive though)

sentientsyllabus.substack.com/

"More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect A.I.-generated text, said Edward Tian, its creator and a senior at Princeton University."

The wrong approach imo. One should focus on teaching how to work with #AI. Not on bans, the enforcement of which is unrealistic anyway. #ChatGPT

nytimes.com/2023/01/16/technol

A useful new resource from our team at the Taylor Institute. Teaching and Learning with Artificial Intelligence Apps. taylorinstitute.ucalgary.ca/te #ChatGPT

@scott

Yes, they got a lot wrong there. Your "radical" approach actually resonates with what I just posted at the Sentient Syllabus substack. (... it's a bit of a deep dive 🙂 )

sentientsyllabus.substack.com/

@sengupta

Two things: (a) establish a model N&V written by ChatGPT, and declare that to be the just-below-passing level. Students can use the AI output as a reference for what they need to surpass. (cf. sentientsyllabus.org) (b) Make sure they include references, and that the references can be verified. (cf. sentientsyllabus.substack.com/)

Kind regards -

Qoto Mastodon
