Improved Beam Search for Hallucination Mitigation in Abstractive Summarization. (arXiv:2212.02712v2 [cs.CL] UPDATED)
Simplifying and Understanding State Space Models with Diagonal Linear RNNs. (arXiv:2212.00768v3 [cs.LG] UPDATED)
ComCLIP: Training-Free Compositional Image and Text Matching. (arXiv:2211.13854v3 [cs.CV] UPDATED)
CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals. (arXiv:2106.05544v3 [cs.CL] UPDATED)
Examining Modularity in Multilingual LMs via Language-Specialized Subnetworks. (arXiv:2311.08273v1 [cs.CL])
A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily. (arXiv:2311.08268v1 [cs.CL])
Fast Chain-of-Thought: A Glance of Future from Parallel Decoding Leads to Answers Faster. (arXiv:2311.08263v1 [cs.CL])
REST: Retrieval-Based Speculative Decoding. (arXiv:2311.08252v1 [cs.CL])
On Using Distribution-Based Compositionality Assessment to Evaluate Compositional Generalisation in Machine Translation. (arXiv:2311.08249v1 [cs.CL])
Investigating the Encoding of Words in BERT's Neurons using Feature Textualization. (arXiv:2311.08240v1 [cs.CL])
Eval-GCSC: A New Metric for Evaluating ChatGPT's Performance in Chinese Spelling Correction. (arXiv:2311.08219v1 [cs.CL])
Unlock the Power: Competitive Distillation for Multi-Modal Large Language Models. (arXiv:2311.08213v1 [cs.CV])
Human-Centric Autonomous Systems With LLMs for User Command Reasoning. (arXiv:2311.08206v1 [cs.CL])
Automated Fact-Checking in Dialogue: Are Specialized Models Needed? (arXiv:2311.08195v1 [cs.CL])
GEC-DePenD: Non-Autoregressive Grammatical Error Correction with Decoupled Permutation and Decoding. (arXiv:2311.08191v1 [cs.CL])
Unlocking Science: Novel Dataset and Benchmark for Cross-Modality Scientific Information Extraction. (arXiv:2311.08189v1 [cs.CL])
Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning. (arXiv:2311.08182v1 [cs.CL])
MechAgents: Large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge. (arXiv:2311.08166v1 [cs.AI])
Ask One More Time: Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios. (arXiv:2311.08154v1 [cs.CL])
Towards Reasoning in Large Language Models via Multi-Agent Peer Review Collaboration. (arXiv:2311.08152v1 [cs.CL])
All recent Computation and Language articles on arXiv.org for the Fediverse
Inspired by https://twitter.com/arxiv_cscl