Halo: Estimation and Reduction of Hallucinations in Open-Source Weak Large Language Models. (arXiv:2308.11764v4 [cs.CL] UPDATED) 

YORC: Yoruba Reading Comprehension dataset. (arXiv:2308.09768v2 [cs.CL] UPDATED) 

Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations. (arXiv:2307.10932v2 [cs.CL] UPDATED) 

TIM: Teaching Large Language Models to Translate with Comparison. (arXiv:2307.04408v2 [cs.CL] UPDATED) 

Personality Traits in Large Language Models. (arXiv:2307.00184v2 [cs.CL] UPDATED) 

Generative User-Experience Research for Developing Domain-specific Natural Language Processing Applications. (arXiv:2306.16143v3 [cs.CL] UPDATED) 

Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning. (arXiv:2306.14565v2 [cs.CV] UPDATED) 

Overview of Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems at DSTC 11 Track 4. (arXiv:2306.12794v3 [cs.CL] UPDATED) 

COVER: A Heuristic Greedy Adversarial Attack on Prompt-based Learning in Language Models. (arXiv:2306.05659v3 [cs.CL] UPDATED) 

K2: A Foundation Language Model for Geoscience Knowledge Understanding and Utilization. (arXiv:2306.05064v2 [cs.CL] UPDATED) 

Probing in Context: Toward Building Robust Classifiers via Probing Large Language Models. (arXiv:2305.14171v2 [cs.CL] UPDATED) 

PaLM 2 Technical Report. (arXiv:2305.10403v3 [cs.CL] UPDATED) 

A Latent Space Theory for Emergent Abilities in Large Language Models. (arXiv:2304.09960v3 [cs.CL] UPDATED) 

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning. (arXiv:2304.08981v2 [cs.CL] UPDATED) 

Reasoning with Language Model Prompting: A Survey. (arXiv:2212.09597v7 [cs.CL] UPDATED) 

LambdaKG: A Library for Pre-trained Language Model-Based Knowledge Graph Embeddings. (arXiv:2210.00305v3 [cs.CL] UPDATED) 

Modern Baselines for SPARQL Semantic Parsing. (arXiv:2204.12793v3 [cs.IR] UPDATED) 

MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning. (arXiv:2309.07915v1 [cs.CL]) 

Ambiguity-Aware In-Context Learning with Large Language Models. (arXiv:2309.07900v1 [cs.CL]) 

Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions. (arXiv:2309.07875v1 [cs.CL]) 
