On the Relation between Sensitivity and Accuracy in In-context Learning. (arXiv:2209.07661v3 [cs.CL] UPDATED) 

DiSCoMaT: Distantly Supervised Composition Extraction from Tables in Materials Science Articles. (arXiv:2207.01079v4 [cs.CL] UPDATED) 

MiniDisc: Minimal Distillation Schedule for Language Model Compression. (arXiv:2205.14570v3 [cs.CL] UPDATED) 

Geographic Adaptation of Pretrained Language Models. (arXiv:2203.08565v3 [cs.CL] UPDATED) 

Learning to Trust Your Feelings: Leveraging Self-awareness in LLMs for Hallucination Mitigation. (arXiv:2401.15449v1 [cs.CL]) 

Pre-training and Diagnosing Knowledge Base Completion Models. (arXiv:2401.15439v1 [cs.CL]) 

Indexing Portuguese NLP Resources with PT-Pump-Up. (arXiv:2401.15400v1 [cs.CL]) 

Semantics of Multiword Expressions in Transformer-Based Models: A Survey. (arXiv:2401.15393v1 [cs.CL]) 

MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries. (arXiv:2401.15391v1 [cs.CL]) 

Towards Event Extraction from Speech with Contextual Clues. (arXiv:2401.15385v1 [cs.CL]) 

A RAG-based Question Answering System Proposal for Understanding Islam: MufassirQAS LLM. (arXiv:2401.15378v1 [cs.CL]) 

LegalDuet: Learning Effective Representations for Legal Judgment Prediction through a Dual-View Legal Clue Reasoning. (arXiv:2401.15371v1 [cs.CL]) 

Importance-Aware Data Augmentation for Document-Level Neural Machine Translation. (arXiv:2401.15360v1 [cs.CL]) 

A Survey on Neural Topic Models: Methods, Applications, and Challenges. (arXiv:2401.15351v1 [cs.CL]) 

A Comprehensive Survey of Compression Algorithms for Language Models. (arXiv:2401.15347v1 [cs.CL]) 

Equipping Language Models with Tool Use Capability for Tabular Data Analysis in Finance. (arXiv:2401.15328v1 [cs.CL]) 

UNSEE: Unsupervised Non-contrastive Sentence Embeddings. (arXiv:2401.15316v1 [cs.CL]) 

How We Refute Claims: Automatic Fact-Checking through Flaw Identification and Explanation. (arXiv:2401.15312v1 [cs.CL]) 

Improving Medical Reasoning through Retrieval and Self-Reflection with Retrieval-Augmented Large Language Models. (arXiv:2401.15269v1 [cs.CL]) 

Unlearning Reveals the Influential Training Data of Language Models. (arXiv:2401.15241v1 [cs.CL]) 
