SMILE: Multimodal Dataset for Understanding Laughter in Video with Language Models. (arXiv:2312.09818v1 [cs.CL]) 

Improving Biomedical Entity Linking with Retrieval-enhanced Learning. (arXiv:2312.09806v1 [cs.CL]) 

ProCoT: Stimulating Critical Thinking and Writing of Students through Engagement with Large Language Models (LLMs). (arXiv:2312.09801v1 [cs.CL]) 

RJUA-QA: A Comprehensive QA Dataset for Urology. (arXiv:2312.09785v1 [cs.CL]) 

GSQA: An End-to-End Model for Generative Spoken Question Answering. (arXiv:2312.09781v1 [cs.CL]) 

HEAR: Hearing Enhanced Audio Response for Video-grounded Dialogue. (arXiv:2312.09736v1 [cs.CL]) 

Discovering Highly Influential Shortcut Reasoning: An Automated Template-Free Approach. (arXiv:2312.09718v1 [cs.CL]) 

Probing Pretrained Language Models with Hierarchy Properties. (arXiv:2312.09670v1 [cs.CL]) 

Weakly-Supervised 3D Visual Grounding based on Visual Linguistic Alignment. (arXiv:2312.09625v1 [cs.CV]) 

Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models. (arXiv:2312.09601v1 [cs.CR]) 

Leveraging Language ID to Calculate Intermediate CTC Loss for Enhanced Code-Switching Speech Recognition. (arXiv:2312.09583v1 [cs.CL]) 

Phoneme-aware Encoding for Prefix-tree-based Contextual ASR. (arXiv:2312.09582v1 [cs.CL]) 

IR-UWB Radar-Based Contactless Silent Speech Recognition of Vowels, Consonants, Words, and Phrases. (arXiv:2312.09572v1 [eess.AS]) 

Extending Context Window of Large Language Models via Semantic Compression. (arXiv:2312.09571v1 [cs.CL]) 

GPT-4 Surpassing Human Performance in Linguistic Pragmatics. (arXiv:2312.09545v1 [cs.CL]) 

Marathon: A Race Through the Realm of Long Context with Large Language Models. (arXiv:2312.09542v1 [cs.CL]) 

Picking the Underused Heads: A Network Pruning Perspective of Attention Head Selection for Fusing Dialogue Coreference Information. (arXiv:2312.09541v1 [cs.CL]) 

Riveter: Measuring Power and Social Dynamics Between Entities. (arXiv:2312.09536v1 [cs.CL]) 

IndicIRSuite: Multilingual Dataset and Neural Information Models for Indian Languages. (arXiv:2312.09508v1 [cs.IR]) 

No-Skim: Towards Efficiency Robustness Evaluation on Skimming-based Language Models. (arXiv:2312.09494v1 [cs.CR]) 
