Multilingual and Fully Non-Autoregressive ASR with Large Language Model Fusion: A Comprehensive Study. (arXiv:2401.12789v1 [cs.CL]) 

What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition. (arXiv:2401.12756v1 [cs.CL]) 

A Comprehensive View of the Biases of Toxicity and Sentiment Analysis Methods Towards Utterances with African American English Expressions. (arXiv:2401.12720v1 [cs.CL]) 

Generating Unsupervised Abstractive Explanations for Rumour Verification. (arXiv:2401.12713v1 [cs.CL]) 

Energy-based Automated Model Evaluation. (arXiv:2401.12689v1 [cs.LG]) 

Context Matters: Pushing the Boundaries of Open-Ended Answer Generation with Graph-Structured Knowledge Context. (arXiv:2401.12671v1 [cs.CL]) 

A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments. (arXiv:2401.12631v1 [cs.LG]) 

SLANG: New Concept Comprehension of Large Language Models. (arXiv:2401.12585v1 [cs.CL]) 

LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools. (arXiv:2401.12576v1 [cs.CL]) 

Automated Fact-Checking of Climate Change Claims with Large Language Models. (arXiv:2401.12566v1 [cs.CL]) 

DREditor: A Time-efficient Approach for Building a Domain-specific Dense Retrieval Model. (arXiv:2401.12540v1 [cs.IR]) 

BiTA: Bi-Directional Tuning for Lossless Acceleration in Large Language Models. (arXiv:2401.12522v1 [cs.CL]) 

Key Information Retrieval to Classify the Unstructured Data Content of Preferential Trade Agreements. (arXiv:2401.12520v1 [cs.CL]) 

Comparing Human-Centered Language Modeling: Is it Better to Model Groups, Individual Traits, or Both? (arXiv:2401.12492v1 [cs.CL]) 

Assessing and Understanding Creativity in Large Language Models. (arXiv:2401.12491v1 [cs.CL]) 

Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment. (arXiv:2401.12474v1 [cs.CL]) 

Contrastive Learning in Distilled Models. (arXiv:2401.12472v1 [cs.CL]) 

Fast Adversarial Training against Textual Adversarial Attacks. (arXiv:2401.12461v1 [cs.CL]) 

CIM-MLC: A Multi-level Compilation Stack for Computing-In-Memory Accelerators. (arXiv:2401.12428v1 [cs.AR]) 

The Neglected Tails of Vision-Language Models. (arXiv:2401.12425v1 [cs.CV]) 
