SoulChat: Improving LLMs' Empathy, Listening, and Comfort Abilities through Fine-tuning with Multi-turn Empathy Conversations. (arXiv:2311.00273v1 [cs.CL])
Syntactic Inductive Bias in Transformer Language Models: Especially Helpful for Low-Resource Languages? (arXiv:2311.00268v1 [cs.CL])
Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents. (arXiv:2311.00262v1 [cs.CL])
Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis. (arXiv:2311.00258v1 [cs.CL])
The Mystery and Fascination of LLMs: A Comprehensive Survey on the Interpretation and Analysis of Emergent Abilities. (arXiv:2311.00237v1 [cs.CL])
Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions. (arXiv:2311.00233v1 [cs.CL])
Is GPT Powerful Enough to Analyze the Emotions of Memes? (arXiv:2311.00223v1 [cs.CL])
Transformers as Recognizers of Formal Languages: A Survey on Expressivity. (arXiv:2311.00208v1 [cs.LG])
Continuous Training and Fine-tuning for Domain-Specific Language Models in Medical Question Answering. (arXiv:2311.00204v1 [cs.CL])
XAI-CLASS: Explanation-Enhanced Text Classification with Extremely Weak Supervision. (arXiv:2311.00189v1 [cs.CL])
ChipNeMo: Domain-Adapted LLMs for Chip Design. (arXiv:2311.00176v1 [cs.CL])
Robust Safety Classifier for Large Language Models: Adversarial Prompt Shield. (arXiv:2311.00172v1 [cs.CL])
Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language. (arXiv:2311.00161v1 [cs.CL])
Longer Fixations, More Computation: Gaze-Guided Recurrent Neural Networks. (arXiv:2311.00159v1 [cs.CL])
Two-Stage Classifier for Campaign Negativity Detection using Axis Embeddings: A Case Study on Tweets of Political Users during 2021 Presidential Election in Iran. (arXiv:2311.00143v1 [cs.LG])
On the effect of curriculum learning with developmental data for grammar acquisition. (arXiv:2311.00128v1 [cs.CL])
BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B. (arXiv:2311.00117v1 [cs.CL])
BERTwich: Extending BERT's Capabilities to Model Dialectal and Noisy Text. (arXiv:2311.00116v1 [cs.CL])
The Generative AI Paradox: "What It Can Create, It May Not Understand". (arXiv:2311.00059v1 [cs.AI])
Grounding Visual Illusions in Language: Do Vision-Language Models Perceive Illusions Like Humans?. (arXiv:2311.00047v1 [cs.AI])
All recent Computation and Language articles on arXiv.org for the Fediverse
Inspired by https://twitter.com/arxiv_cscl