MarineGPT: Unlocking Secrets of Ocean to the Public. (arXiv:2310.13596v1 [cs.CL])
Simultaneous Machine Translation with Tailored Reference. (arXiv:2310.13588v1 [cs.CL])
Improving Cross-Lingual Transfer through Subtree-Aware Word Reordering. (arXiv:2310.13583v1 [cs.CL])
Semantic Decomposition of Question and SQL for Text-to-SQL Parsing. (arXiv:2310.13575v1 [cs.CL])
Why Can Large Language Models Generate Correct Chain-of-Thoughts? (arXiv:2310.13571v1 [cs.CL])
Retrieval-Augmented Neural Response Generation Using Logical Reasoning and Relevance Scoring. (arXiv:2310.13566v1 [cs.CL])
Cache & Distil: Optimising API Calls to Large Language Models. (arXiv:2310.13561v1 [cs.CL])
Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning. (arXiv:2310.13552v1 [cs.CL])
The Perils & Promises of Fact-checking with Large Language Models. (arXiv:2310.13549v1 [cs.CL])
Towards Understanding Sycophancy in Language Models. (arXiv:2310.13548v1 [cs.CL])
A Diachronic Perspective on User Trust in AI under Uncertainty. (arXiv:2310.13544v1 [cs.CL])
Controlled Randomness Improves the Performance of Transformer Models. (arXiv:2310.13526v1 [cs.CL])
Teaching Language Models to Self-Improve through Interactive Demonstrations. (arXiv:2310.13522v1 [cs.CL])
Improving Question Generation with Multi-level Content Planning. (arXiv:2310.13512v1 [cs.CL])
Explaining Interactions Between Text Spans. (arXiv:2310.13506v1 [cs.CL])
Robust Training for Conversational Question Answering Models with Reinforced Reformulation Generation. (arXiv:2310.13505v1 [cs.CL])
Analogical Proportions and Creativity: A Preliminary Study. (arXiv:2310.13500v1 [cs.CL])
DistillCSE: Distilled Contrastive Learning for Sentence Embeddings. (arXiv:2310.13499v1 [cs.CL])
Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning. (arXiv:2310.13486v1 [cs.CL])
Ask Language Model to Clean Your Noisy Translation Data. (arXiv:2310.13469v1 [cs.CL])
All recent Computation and Language articles on arXiv.org for the Fediverse
Inspired by https://twitter.com/arxiv_cscl