Improving Biomedical Abstractive Summarisation with Knowledge Aggregation from Citation Papers. (arXiv:2310.15684v1 [cs.CL]) 

Prevalence and prevention of large language model use in crowd work. (arXiv:2310.15683v1 [cs.CL]) 

How Much Context Does My Attention-Based ASR System Need?. (arXiv:2310.15672v1 [cs.CL]) 

Expression Syntax Information Bottleneck for Math Word Problems. (arXiv:2310.15664v1 [cs.CL]) 

A Survey on Detection of LLMs-Generated Content. (arXiv:2310.15654v1 [cs.CL]) 

CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation. (arXiv:2310.15638v1 [cs.CL]) 

Career Path Prediction using Resume Representation Learning and Skill-based Matching. (arXiv:2310.15636v1 [cs.CL]) 

Tips for making the most of 64-bit architectures in langage design, libraries or garbage collection. (arXiv:2310.15632v1 [cs.CL]) 

Machine Translation for Nko: Tools, Corpora and Baseline Results. (arXiv:2310.15612v1 [cs.CL]) 

MUSER: A Multi-View Similar Case Retrieval Dataset. (arXiv:2310.15602v1 [cs.CL]) 

Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression. (arXiv:2310.15594v1 [cs.CL]) 

ScanDL: A Diffusion Model for Generating Synthetic Scanpaths on Texts. (arXiv:2310.15587v1 [cs.CL]) 

Multimodal Representations for Teacher-Guided Compositional Visual Reasoning. (arXiv:2310.15585v1 [cs.CL]) 

CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction. (arXiv:2310.15577v1 [cs.CL]) 

POE: Process of Elimination for Multiple Choice Reasoning. (arXiv:2310.15575v1 [cs.CL]) 

Natural Language Processing for Drug Discovery Knowledge Graphs: promises and pitfalls. (arXiv:2310.15572v1 [cs.CL]) 

Visually Grounded Continual Language Learning with Selective Specialization. (arXiv:2310.15571v1 [cs.CL]) 

MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain. (arXiv:2310.15569v1 [cs.CL]) 

TCRA-LLM: Token Compression Retrieval Augmented Large Language Model for Inference Cost Reduction. (arXiv:2310.15556v1 [cs.CL]) 

Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks. (arXiv:2310.15552v1 [cs.CL]) 

Qoto Mastodon