Emptying the Ocean with a Spoon: Should We Edit Models?. (arXiv:2310.11958v1 [cs.CL]) 

MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models. (arXiv:2310.11954v1 [cs.CL]) 

Grounded and Well-rounded: A Methodological Approach to the Study of Cross-modal and Cross-lingual Grounding. (arXiv:2310.11938v1 [cs.CL]) 

Investigating semantic subspaces of Transformer sentence embeddings through linear structural probing. (arXiv:2310.11923v1 [cs.CL]) 

A Benchmark for Semi-Inductive Link Prediction in Knowledge Graphs. (arXiv:2310.11917v1 [cs.CL]) 

Rather a Nurse than a Physician -- Contrastive Explanations under Investigation. (arXiv:2310.11906v1 [cs.CL]) 

From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks. (arXiv:2310.11884v1 [cs.AI]) 

From Dissonance to Insights: Dissecting Disagreements in Rationale Dataset Construction for Case Outcome Classification. (arXiv:2310.11878v1 [cs.CL]) 

The Curious Case of Hallucinatory Unanswerability: Finding Truths in the Hidden States of Over-Confident Large Language Models. (arXiv:2310.11877v1 [cs.CL]) 

AI Nushu: An Exploration of Language Emergence in Sisterhood Through the Lens of Computational Linguistics. (arXiv:2310.11870v1 [cs.CL]) 

Text Annotation Handbook: A Practical Guide for Machine Learning Projects. (arXiv:2310.11780v1 [cs.CL]) 

Language Agents for Detecting Implicit Stereotypes in Text-to-image Models at Scale. (arXiv:2310.11778v1 [cs.CY]) 

Improving Long Document Topic Segmentation Models With Enhanced Coherence Modeling. (arXiv:2310.11772v1 [cs.CL]) 

Annotated Job Ads with Named Entity Recognition. (arXiv:2310.11769v1 [cs.CL]) 

A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction. (arXiv:2310.11761v1 [cs.CL]) 

Bias in Emotion Recognition with ChatGPT. (arXiv:2310.11753v1 [cs.RO]) 

Investigating Uncertainty Calibration of Aligned Language Models under the Multiple-Choice Setting. (arXiv:2310.11732v1 [cs.LG]) 

Quantify Health-Related Atomic Knowledge in Chinese Medical Large Language Models: A Computational Analysis. (arXiv:2310.11722v1 [cs.CL]) 

Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding. (arXiv:2310.11721v1 [cs.CL]) 

Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning. (arXiv:2310.11716v1 [cs.CL]) 
