Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation. (arXiv:2110.08501v4 [cs.CL] UPDATED) 

Radiology-Llama2: Best-in-Class Large Language Model for Radiology. (arXiv:2309.06419v1 [cs.CL]) 

Down the Toxicity Rabbit Hole: Investigating PaLM 2 Guardrails. (arXiv:2309.06415v1 [cs.CL]) 

Towards Reliable and Fluent Large Language Models: Incorporating Feedback Learning Loops in QA Systems. (arXiv:2309.06384v1 [cs.CL]) 

Cited Text Spans for Citation Text Generation. (arXiv:2309.06365v1 [cs.CL]) 

Framework-Based Qualitative Analysis of Free Responses of Large Language Models: Algorithmic Fidelity. (arXiv:2309.06364v1 [cs.CL]) 

Learning to Predict Concept Ordering for Common Sense Generation. (arXiv:2309.06363v1 [cs.CL]) 

Generative Data Augmentation using LLMs improves Distributional Robustness in Question Answering. (arXiv:2309.06358v1 [cs.CL]) 

Re-Reading Improves Reasoning in Language Models. (arXiv:2309.06275v1 [cs.CL]) 

The first step is the hardest: Pitfalls of Representing and Tokenizing Temporal Data for Large Language Models. (arXiv:2309.06236v1 [cs.LG]) 

Human Action Co-occurrence in Lifestyle Vlogs using Graph Link Prediction. (arXiv:2309.06219v1 [cs.CV]) 

Improving and Evaluating the Detection of Fragmentation in News Recommendations with the Clustering of News Story Chains. (arXiv:2309.06192v1 [cs.CL]) 

Glancing Future for Simultaneous Machine Translation. (arXiv:2309.06179v1 [cs.CL]) 

AKEM: Aligning Knowledge Base to Queries with Ensemble Model for Entity Recognition and Linking. (arXiv:2309.06175v1 [cs.CL]) 

Overview of GUA-SPA at IberLEF 2023: Guarani-Spanish Code Switching Analysis. (arXiv:2309.06163v1 [cs.CL]) 

Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts. (arXiv:2309.06135v1 [cs.CL]) 

Measuring vagueness and subjectivity in texts: from symbolic to neural VAGO. (arXiv:2309.06132v1 [cs.CL]) 

Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning Strategies are not Better than Random Selection. (arXiv:2309.06131v1 [cs.IR]) 

AstroLLaMA: Towards Specialized Foundation Models in Astronomy. (arXiv:2309.06126v1 [astro-ph.IM]) 

Characterizing Latent Perspectives of Media Houses Towards Public Figures. (arXiv:2309.06112v1 [cs.CL]) 
