Cross-modal Retrieval for Knowledge-based Visual Question Answering. (arXiv:2401.05736v1 [cs.CL]) 

Zero Resource Cross-Lingual Part Of Speech Tagging. (arXiv:2401.05727v1 [cs.CL]) 

CAT-LLM: Prompting Large Language Models with Text Style Definition for Chinese Article-style Transfer. (arXiv:2401.05707v1 [cs.CL]) 

R-BI: Regularized Batched Inputs enhance Incremental Decoding Framework for Low-Latency Simultaneous Speech Translation. (arXiv:2401.05700v1 [cs.CL]) 

Integrating Physician Diagnostic Logic into Large Language Models: Preference Learning from Process Feedback. (arXiv:2401.05695v1 [cs.CL]) 

UCorrect: An Unsupervised Framework for Automatic Speech Recognition Error Correction. (arXiv:2401.05689v1 [cs.CL]) 

ConcEPT: Concept-Enhanced Pre-Training for Language Models. (arXiv:2401.05669v1 [cs.CL]) 

Unveiling the Tapestry of Automated Essay Scoring: A Comprehensive Investigation of Accuracy, Fairness, and Generalizability. (arXiv:2401.05655v1 [cs.CL]) 

Towards Conversational Diagnostic AI. (arXiv:2401.05654v1 [cs.AI]) 

On Detecting Cherry-picking in News Coverage Using Large Language Models. (arXiv:2401.05650v1 [cs.CL]) 

Natural Language Processing for Dialects of a Language: A Survey. (arXiv:2401.05632v1 [cs.CL]) 

DrawTalking: Building Interactive Worlds by Sketching and Speaking. (arXiv:2401.05631v1 [cs.HC]) 

The Benefits of a Concise Chain of Thought on Problem-Solving in Large Language Models. (arXiv:2401.05618v1 [cs.CL]) 

Scaling Laws for Forgetting When Fine-Tuning Large Language Models. (arXiv:2401.05605v1 [cs.CL]) 

REBUS: A Robust Evaluation Benchmark of Understanding Symbols. (arXiv:2401.05604v1 [cs.CL]) 

POMP: Probability-driven Meta-graph Prompter for LLMs in Low-resource Unsupervised Neural Machine Translation. (arXiv:2401.05596v1 [cs.CL]) 

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training. (arXiv:2401.05566v1 [cs.CR]) 

TrustLLM: Trustworthiness in Large Language Models. (arXiv:2401.05561v1 [cs.CL]) 

Useful Blunders: Can Automated Speech Recognition Errors Improve Downstream Dementia Classification? (arXiv:2401.05551v1 [cs.CL]) 

CodePrompt: Improving Source Code-Related Classification with Knowledge Features through Prompt Learning. (arXiv:2401.05544v1 [cs.CL]) 
