SCORE: A framework for Self-Contradictory Reasoning Evaluation. (arXiv:2311.09603v1 [cs.CL])
Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion. (arXiv:2311.09602v1 [cs.CL])
Multi-Step Dialogue Workflow Action Prediction. (arXiv:2311.09593v1 [cs.CL])
LifeTox: Unveiling Implicit Toxicity in Life Advice. (arXiv:2311.09585v1 [cs.CL])
Enhancing Medical Text Evaluation with GPT-4. (arXiv:2311.09581v1 [cs.CL])
MMOE: Mixture of Multimodal Interaction Experts. (arXiv:2311.09580v1 [cs.CL])
Crafting In-context Examples according to LMs' Parametric Knowledge. (arXiv:2311.09579v1 [cs.CL])
Tied-LoRA: Enhancing parameter efficiency of LoRA with weight tying. (arXiv:2311.09578v1 [cs.CL])
Work State-Centric AI Agents: Design, Implementation, and Management of Cognitive Work Threads. (arXiv:2311.09576v1 [cs.CL])
Prompt Optimisation with Random Sampling. (arXiv:2311.09569v1 [cs.CL])
LongBoX: Evaluating Transformers on Long-Sequence Clinical Tasks. (arXiv:2311.09564v1 [cs.CL])
A Reevaluation of Event Extraction: Past, Present, and Future Challenges. (arXiv:2311.09562v1 [cs.CL])
Enhancing Semi-Supervised Learning for Extractive Summarization with an LLM-based pseudolabeler. (arXiv:2311.09559v1 [cs.CL])
Pachinko: Patching Interpretable QA Models through Natural Language Feedback. (arXiv:2311.09558v1 [cs.CL])
Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition. (arXiv:2311.09552v1 [cs.CL])
A Speed Odyssey for Deployable Quantization of LLMs. (arXiv:2311.09550v1 [cs.LG])
Towards Pragmatic Awareness in Question Answering: A Case Study in Maternal and Infant Health. (arXiv:2311.09542v1 [cs.CL])
Reducing Privacy Risks in Online Self-Disclosures with Language Models. (arXiv:2311.09538v1 [cs.CL])
Effective Large Language Model Adaptation for Improved Grounding. (arXiv:2311.09533v1 [cs.CL])
HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM. (arXiv:2311.09528v1 [cs.CL])
All recent Computation and Language articles on arXiv.org for the Fediverse
Inspired by https://twitter.com/arxiv_cscl