WSPAlign: Word Alignment Pre-training via Large-Scale Weakly Supervised Span Prediction. (arXiv:2306.05644v2 [cs.CL] UPDATED) 

Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning. (arXiv:2306.00477v4 [cs.CL] UPDATED) 

Red Teaming Language Model Detectors with Language Models. (arXiv:2305.19713v2 [cs.CL] UPDATED) 

NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models. (arXiv:2305.16986v3 [cs.CV] UPDATED) 

An Efficient Multilingual Language Model Compression through Vocabulary Trimming. (arXiv:2305.15020v3 [cs.CL] UPDATED) 

RefGPT: Dialogue Generation of GPT, by GPT, and for GPT. (arXiv:2305.14994v3 [cs.CL] UPDATED) 

Allies: Prompting Large Language Model with Beam Search. (arXiv:2305.14766v3 [cs.CL] UPDATED) 

This Land is {Your, My} Land: Evaluating Geopolitical Biases in Language Models. (arXiv:2305.14610v2 [cs.CL] UPDATED) 

Weakly-Supervised Learning of Visual Relations in Multimodal Pretraining. (arXiv:2305.14281v2 [cs.CL] UPDATED) 

Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding. (arXiv:2305.14232v2 [cs.CL] UPDATED) 

Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs. (arXiv:2305.12818v2 [cs.CL] UPDATED) 

Data-efficient Active Learning for Structured Prediction with Partial Annotation and Self-Training. (arXiv:2305.12634v2 [cs.CL] UPDATED) 

Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning. (arXiv:2305.12295v2 [cs.CL] UPDATED) 

Prompting with Pseudo-Code Instructions. (arXiv:2305.11790v3 [cs.CL] UPDATED) 

TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models. (arXiv:2305.11171v3 [cs.CL] UPDATED) 

Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models. (arXiv:2305.09955v2 [cs.CL] UPDATED) 

FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge. (arXiv:2305.08281v2 [cs.CL] UPDATED) 

Automatic Prompt Optimization with "Gradient Descent" and Beam Search. (arXiv:2305.03495v2 [cs.CL] UPDATED) 

PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques. (arXiv:2304.12410v2 [cs.CL] UPDATED) 

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study. (arXiv:2304.06762v2 [cs.CL] UPDATED) 
