Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models. (arXiv:2308.16463v1 [cs.CV]) 

BioCoder: A Benchmark for Bioinformatics Code Generation with Contextual Pragmatic Knowledge. (arXiv:2308.16458v1 [cs.LG]) 

Knowledge Distillation from Non-streaming to Streaming ASR Encoder using Auxiliary Non-streaming Layer. (arXiv:2308.16415v1 [cs.CL]) 

Affective Visual Dialog: A Large-Scale Benchmark for Emotional Reasoning Based on Visually Grounded Conversations. (arXiv:2308.16349v1 [cs.CL]) 

ToddlerBERTa: Exploiting BabyBERTa for Grammar Learning and Language Understanding. (arXiv:2308.16336v1 [cs.CL]) 

OLISIA: a Cascade System for Spoken Dialogue State Tracking. (arXiv:2304.11073v3 [eess.AS] CROSS LISTED) 

Exploring Large Language Models for Knowledge Graph Completion. (arXiv:2308.13916v2 [cs.CL] UPDATED) 

DocPrompt: Large-scale continue pretrain for zero-shot and few-shot document question answering. (arXiv:2308.10959v2 [cs.CL] UPDATED) 

Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans. (arXiv:2308.07462v2 [cs.CL] UPDATED) 

Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT. (arXiv:2307.11764v2 [cs.CL] UPDATED) 

"It Felt Like Having a Second Mind": Investigating Human-AI Co-creativity in Prewriting with Large Language Models. (arXiv:2307.10811v2 [cs.HC] UPDATED) 

Multi-Modal Discussion Transformer: Integrating Text, Images and Graph Transformers to Detect Hate Speech on Social Media. (arXiv:2307.09312v2 [cs.CL] UPDATED) 

CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care. (arXiv:2307.01458v2 [cs.CL] UPDATED) 

Automatic Design of Semantic Similarity Ensembles Using Grammatical Evolution. (arXiv:2307.00925v5 [cs.CL] UPDATED) 

C-PMI: Conditional Pointwise Mutual Information for Turn-level Dialogue Evaluation. (arXiv:2306.15245v2 [cs.CL] UPDATED) 

Improving Non-autoregressive Translation Quality with Pretrained Language Model, Embedding Distillation and Upsampling Strategy for CTC. (arXiv:2306.06345v2 [cs.CL] UPDATED) 

ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models. (arXiv:2305.06566v4 [cs.IR] UPDATED) 

SCOTT: Self-Consistent Chain-of-Thought Distillation. (arXiv:2305.01879v4 [cs.CL] UPDATED) 

Deanthropomorphising NLP: Can a Language Model Be Conscious? (arXiv:2211.11483v3 [cs.CL] UPDATED)

DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models. (arXiv:2202.04053v3 [cs.CV] UPDATED) 
