AMRFact: Enhancing Summarization Factuality Evaluation with AMR-driven Training Data Generation. (arXiv:2311.09521v1 [cs.CL])
Leveraging Code to Improve In-context Learning for Semantic Parsing. (arXiv:2311.09519v1 [cs.CL])
GEE! Grammar Error Explanation with Large Language Models. (arXiv:2311.09517v1 [cs.CL])
Sequencing Matters: A Generate-Retrieve-Generate Model for Building Conversational Agents. (arXiv:2311.09513v1 [cs.CL])
One Size Does Not Fit All: Customizing Open-Domain Procedures. (arXiv:2311.09510v1 [cs.CL])
SegMix: A Simple Structure-Aware Data Augmentation Method. (arXiv:2311.09505v1 [cs.CL])
SQATIN: Supervised Instruction Tuning Meets Question Answering for Improved Dialogue NLU. (arXiv:2311.09502v1 [cs.CL])
Personalized Jargon Identification for Enhanced Interdisciplinary Communication. (arXiv:2311.09481v1 [cs.CL])
Show Your Work with Confidence: Confidence Bands for Tuning Curves. (arXiv:2311.09480v1 [cs.CL])
ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems. (arXiv:2311.09476v1 [cs.CL])
JAB: Joint Adversarial Prompting and Belief Augmentation. (arXiv:2311.09473v1 [cs.AI])
Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs. (arXiv:2311.09469v1 [cs.CL])
Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation. (arXiv:2311.09467v1 [cs.CL])
Lexical Repetitions Lead to Rote Learning: Unveiling the Impact of Lexical Overlap in Train and Test Reference Summaries. (arXiv:2311.09458v1 [cs.CL])
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities. (arXiv:2311.09447v1 [cs.CL])
Subtle Misogyny Detection and Mitigation: An Expert-Annotated Dataset. (arXiv:2311.09443v1 [cs.CL])
Labeled Interactive Topic Models. (arXiv:2311.09438v1 [cs.LG])
Backdoor Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment. (arXiv:2311.09433v1 [cs.CR])
Striped Attention: Faster Ring Attention for Causal Transformers. (arXiv:2311.09431v1 [cs.LG])
Beyond Detection: Unveiling Fairness Vulnerabilities in Abusive Language Models. (arXiv:2311.09428v1 [cs.CL])