Federated Learning for Short Text Clustering. (arXiv:2312.07556v1 [cs.CL]) 

Hijacking Context in Large Multi-modal Models. (arXiv:2312.07553v1 [cs.AI]) 

Large Language Models for Intent-Driven Session Recommendations. (arXiv:2312.07552v1 [cs.CL]) 

Language Model Alignment with Elastic Reset. (arXiv:2312.07551v1 [cs.CL]) 

Understanding (Un)Intended Memorization in Text-to-Image Generative Models. (arXiv:2312.07550v1 [cs.CV]) 

Dense X Retrieval: What Retrieval Granularity Should We Use? (arXiv:2312.06648v2 [cs.CL] UPDATED) 

Gated Linear Attention Transformers with Hardware-Efficient Training. (arXiv:2312.06635v2 [cs.LG] UPDATED) 

MMICT: Boosting Multi-Modal Fine-Tuning with In-Context Examples. (arXiv:2312.06363v2 [cs.AI] UPDATED) 

Understanding the Effect of Model Compression on Social Bias in Large Language Models. (arXiv:2312.05662v2 [cs.CL] UPDATED) 

Sim-GPT: Text Similarity via GPT Annotated Data. (arXiv:2312.05603v2 [cs.CL] UPDATED) 

History Matters: Temporal Knowledge Editing in Large Language Model. (arXiv:2312.05497v2 [cs.CL] UPDATED) 

Can Large Language Models Serve as Rational Players in Game Theory? A Systematic Analysis. (arXiv:2312.05488v2 [cs.AI] UPDATED) 

PathFinder: Guided Search over Multi-Step Reasoning Paths. (arXiv:2312.05180v2 [cs.CL] UPDATED) 

Localized Symbolic Knowledge Distillation for Visual Commonsense Models. (arXiv:2312.04837v2 [cs.AI] UPDATED) 

Simul-LLM: A Framework for Exploring High-Quality Simultaneous Translation with Large Language Models. (arXiv:2312.04691v2 [cs.CL] UPDATED) 

Enhancing Medical Task Performance in GPT-4V: A Comprehensive Study on Prompt Engineering Strategies. (arXiv:2312.04344v2 [cs.CL] UPDATED) 

The Transient Nature of Emergent In-Context Learning in Transformers. (arXiv:2311.08360v3 [cs.LG] UPDATED) 

Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration. (arXiv:2311.06062v2 [cs.CL] UPDATED) 

Muslim-Violence Bias Persists in Debiased GPT Models. (arXiv:2310.18368v2 [cs.CL] UPDATED) 

Sentiment analysis with adaptive multi-head attention in Transformer. (arXiv:2310.14505v3 [cs.CL] UPDATED) 
