R-Tuning: Teaching Large Language Models to Refuse Unknown Questions. (arXiv:2311.09677v1 [cs.CL]) 

Where Do People Tell Stories Online? Story Detection Across Online Communities. (arXiv:2311.09675v1 [cs.CL]) 

Improving the Generation Quality of Watermarked Large Language Models via Word Importance Scoring. (arXiv:2311.09668v1 [cs.CL]) 

Evaluating LLM Agent Group Dynamics against Human Group Dynamics: A Case Study on Wisdom of Partisan Crowds. (arXiv:2311.09665v1 [cs.CL]) 

Evolving Domain Adaptation of Pretrained Language Models for Text Classification. (arXiv:2311.09661v1 [cs.CL]) 

Structured Chemistry Reasoning with Large Language Models. (arXiv:2311.09656v1 [cs.CL]) 

ICXML: An In-Context Learning Framework for Zero-Shot Extreme Multi-Label Classification. (arXiv:2311.09649v1 [cs.LG]) 

Event Causality Is Key to Computational Story Understanding. (arXiv:2311.09648v1 [cs.CL]) 

On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models. (arXiv:2311.09641v1 [cs.AI]) 

Evaluating In-Context Learning of Libraries for Code Generation. (arXiv:2311.09635v1 [cs.CL]) 

Online Continual Knowledge Learning for Language Models. (arXiv:2311.09632v1 [cs.CL]) 

From Scroll to Misbelief: Modeling the Unobservable Susceptibility to Misinformation on Social Media. (arXiv:2311.09630v1 [cs.CL]) 

CRISPR: Eliminating Bias Neurons from an Instruction-following Language Model. (arXiv:2311.09627v1 [cs.AI]) 

Take One Step at a Time to Know Incremental Utility of Demonstration: An Analysis on Reranking for Few-Shot In-Context Learning. (arXiv:2311.09619v1 [cs.CL]) 

Simulating Opinion Dynamics with Networks of LLM-based Agents. (arXiv:2311.09618v1 [physics.soc-ph]) 

On Retrieval Augmentation and the Limitations of Language Model Training. (arXiv:2311.09615v1 [cs.CL]) 

Digital Socrates: Evaluating LLMs through explanation critiques. (arXiv:2311.09613v1 [cs.CL]) 

Efficient End-to-End Visual Document Understanding with Rationale Distillation. (arXiv:2311.09612v1 [cs.CV]) 

GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks. (arXiv:2311.09606v1 [cs.CL]) 

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals. (arXiv:2311.09605v1 [cs.CL]) 
