Semi-Structured Chain-of-Thought: Integrating Multiple Sources of Knowledge for Improved Language Model Reasoning. (arXiv:2311.08505v1 [cs.CL]) 

Alignment is not sufficient to prevent large language models from generating harmful information: A psychoanalytic perspective. (arXiv:2311.08487v1 [cs.CL]) 

Functionality learning through specification instructions. (arXiv:2311.08481v1 [cs.CL]) 

Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models. (arXiv:2311.08472v1 [cs.CL]) 

UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations. (arXiv:2311.08469v1 [cs.CL]) 

Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models. (arXiv:2311.07439v2 [cs.CL] UPDATED) 

Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision. (arXiv:2311.07362v2 [cs.CL] UPDATED) 

On the Effectiveness of ASR Representations in Real-world Noisy Speech Emotion Recognition. (arXiv:2311.07093v2 [cs.SD] UPDATED) 

From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL. (arXiv:2311.06595v2 [cs.CL] UPDATED) 

Step by Step to Fairness: Attributing Societal Bias in Task-oriented Dialogue Systems. (arXiv:2311.06513v2 [cs.CL] UPDATED) 

DocGen: Generating Detailed Parameter Docstrings in Python. (arXiv:2311.06453v2 [cs.SE] UPDATED) 

Autoregressive Language Models For Estimating the Entropy of Epic EHR Audit Logs. (arXiv:2311.06401v2 [cs.CL] UPDATED) 

Fake Alignment: Are LLMs Really Aligned Well? (arXiv:2311.05915v2 [cs.CL] UPDATED)

TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models. (arXiv:2311.04589v2 [cs.CL] UPDATED) 

ChipNeMo: Domain-Adapted LLMs for Chip Design. (arXiv:2311.00176v2 [cs.CL] UPDATED) 

Learning From Mistakes Makes LLM Better Reasoner. (arXiv:2310.20689v2 [cs.CL] UPDATED) 

FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models. (arXiv:2310.20410v2 [cs.CL] UPDATED) 

Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs. (arXiv:2310.19347v3 [cs.CL] UPDATED) 

Unified Segment-to-Segment Framework for Simultaneous Sequence Generation. (arXiv:2310.17940v2 [cs.CL] UPDATED) 

Data-Centric Financial Large Language Models. (arXiv:2310.17784v2 [cs.CL] UPDATED) 
