Towards Generalizable SER: Soft Labeling and Data Augmentation for Modeling Temporal Emotion Shifts in Large-Scale Multilingual Speech. (arXiv:2311.08607v1 [cs.CL]) 

Navigating the Ocean of Biases: Political Bias Attribution in Language Models via Causal Structures. (arXiv:2311.08605v1 [cs.CL]) 

DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Pre-trained Language Models. (arXiv:2311.08598v1 [cs.CL]) 

Are You Sure? Challenging LLMs Leads to Performance Drops in The FlipFlop Experiment. (arXiv:2311.08596v1 [cs.CL]) 

ACID: Abstractive, Content-Based IDs for Document Retrieval with Language Models. (arXiv:2311.08593v1 [cs.CL]) 

AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications. (arXiv:2311.08592v1 [cs.SE]) 

PEMA: Plug-in External Memory Adaptation for Language Models. (arXiv:2311.08590v1 [cs.CL]) 

CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation. (arXiv:2311.08588v1 [cs.CL]) 

Asking More Informative Questions for Grounded Retrieval. (arXiv:2311.08584v1 [cs.CL]) 

Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational AutoEncoders. (arXiv:2311.08579v1 [cs.CL]) 

Towards Evaluating AI Systems for Moral Status Using Self-Reports. (arXiv:2311.08576v1 [cs.LG]) 

Parameter-Efficient Multilingual Summarisation: An Empirical Study. (arXiv:2311.08572v1 [cs.CL]) 

MAgIC: Benchmarking Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration. (arXiv:2311.08562v1 [cs.CL]) 

UT5: Pretraining Non autoregressive T5 with unrolled denoising. (arXiv:2311.08552v1 [cs.CL]) 

Efficient Continual Pre-training for Building Domain Specific Large Language Models. (arXiv:2311.08545v1 [cs.CL]) 

Extending Multilingual Machine Translation through Imitation Learning. (arXiv:2311.08538v1 [cs.CL]) 

Natural Language Processing for Financial Regulation. (arXiv:2311.08533v1 [cs.CL]) 

GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer. (arXiv:2311.08526v1 [cs.CL]) 

LLMs cannot find reasoning errors, but can correct them! (arXiv:2311.08516v1 [cs.AI]) 

CoRE-CoG: Conversational Recommendation of Entities using Constrained Generation. (arXiv:2311.08511v1 [cs.CL]) 
