
Language Modeling on a SpiNNaker 2 Neuromorphic Chip. (arXiv:2312.09084v3 [cs.NE] UPDATED) 

A Baseline Analysis of Reward Models' Ability To Accurately Analyze Foundation Models Under Distribution Shift. (arXiv:2311.14743v7 [cs.CL] UPDATED) 

Formally Specifying the High-Level Behavior of LLM-Based Agents. (arXiv:2310.08535v3 [cs.AI] UPDATED) 

Conversational Health Agents: A Personalized LLM-Powered Agent Framework. (arXiv:2310.02374v4 [cs.CL] UPDATED) 

Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns. (arXiv:2310.01749v2 [cs.CL] UPDATED) 

Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering. (arXiv:2309.17249v2 [cs.CL] UPDATED) 

How Transferable are Attribute Controllers on Pretrained Multilingual Translation Models? (arXiv:2309.08565v3 [cs.CL] UPDATED) 

Reward Engineering for Generating Semi-structured Explanation. (arXiv:2309.08347v2 [cs.CL] UPDATED) 

PromptASR for contextualized ASR with controllable style. (arXiv:2309.07414v3 [eess.AS] UPDATED) 

Statistical Rejection Sampling Improves Preference Optimization. (arXiv:2309.06657v2 [cs.CL] UPDATED) 

Peering Through Preferences: Unraveling Feedback Acquisition for Aligning Large Language Models. (arXiv:2308.15812v2 [cs.LG] UPDATED) 

CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias. (arXiv:2308.12539v2 [cs.CL] UPDATED) 

VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View. (arXiv:2307.06082v2 [cs.AI] UPDATED) 

Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment. (arXiv:2306.08877v3 [cs.CL] UPDATED) 

"Medium" LMs of Code in the Era of LLMs: Lessons From StackOverflow. (arXiv:2306.03268v2 [cs.CL] UPDATED) 

OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models. (arXiv:2306.02272v4 [cs.CL] UPDATED) 

Large Language Models are Zero-Shot Rankers for Recommender Systems. (arXiv:2305.08845v2 [cs.IR] UPDATED) 

Interpretability at Scale: Identifying Causal Mechanisms in Alpaca. (arXiv:2305.08809v2 [cs.CL] UPDATED) 

ReCOGS: How Incidental Details of a Logical Form Overshadow an Evaluation of Semantic Interpretation. (arXiv:2303.13716v2 [cs.CL] UPDATED) 

Oolong: Investigating What Makes Transfer Learning Hard with Controlled Studies. (arXiv:2202.12312v2 [cs.CL] UPDATED) 
