N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics. (arXiv:2310.18679v2 [cs.CL] UPDATED) 

DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues. (arXiv:2310.18130v2 [cs.CL] UPDATED) 

Quality-Diversity through AI Feedback. (arXiv:2310.13032v3 [cs.CL] UPDATED) 

Experimenting AI Technologies for Disinformation Combat: the IDMO Project. (arXiv:2310.11097v4 [cs.CL] UPDATED) 

AvalonBench: Evaluating LLMs Playing the Game of Avalon. (arXiv:2310.05036v3 [cs.AI] UPDATED) 

PB-LLM: Partially Binarized Large Language Models. (arXiv:2310.00034v2 [cs.LG] UPDATED) 

AnglE-optimized Text Embeddings. (arXiv:2309.12871v6 [cs.CL] UPDATED) 

PDFTriage: Question Answering over Long, Structured Documents. (arXiv:2309.08872v2 [cs.CL] UPDATED) 

CoCA: Fusing position embedding with Collinear Constrained Attention for fine-tuning free context window extending. (arXiv:2309.08646v2 [cs.LG] UPDATED) 

TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild. (arXiv:2309.08637v3 [cs.CL] UPDATED) 

Chain-of-Thought Reasoning is a Policy Improvement Operator. (arXiv:2309.08589v2 [cs.LG] UPDATED) 

ToddlerBERTa: Exploiting BabyBERTa for Grammar Learning and Language Understanding. (arXiv:2308.16336v2 [cs.CL] UPDATED) 

Token-Scaled Logit Distillation for Ternary Weight Generative Language Models. (arXiv:2308.06744v2 [cs.CL] UPDATED) 

Predictive Data Analytics with AI: assessing the need for post-editing of MT output by fine-tuning OpenAI LLMs. (arXiv:2308.00158v5 [cs.CL] UPDATED) 

Three Bricks to Consolidate Watermarks for Large Language Models. (arXiv:2308.00113v2 [cs.CL] UPDATED) 

Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression. (arXiv:2306.15063v2 [cs.LG] UPDATED) 

Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning. (arXiv:2305.18869v2 [cs.LG] UPDATED) 

Machine Reading Comprehension using Case-based Reasoning. (arXiv:2305.14815v3 [cs.CL] UPDATED) 

On Robustness of Finetuned Transformer-based NLP Models. (arXiv:2305.14453v2 [cs.CL] UPDATED) 

Incongruity-Aware Hierarchical Crossmodal Transformer with Dynamic Modality Gating: A Study on Affect Recognition. (arXiv:2305.13583v3 [cs.CL] UPDATED) 
