Overview of the PromptCBLUE Shared Task in CHIP2023. (arXiv:2312.17522v1 [cs.CL])
Cooperation on the Fly: Exploring Language Agents for Ad Hoc Teamwork in the Avalon Game. (arXiv:2312.17515v1 [cs.CL])
Leveraging Open-Vocabulary Diffusion to Camouflaged Instance Segmentation. (arXiv:2312.17505v1 [cs.CV])
Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning. (arXiv:2312.17484v1 [cs.CL])
MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining. (arXiv:2312.17482v1 [cs.CL])
Exploring the Sensitivity of LLMs' Decision-Making Capabilities: Insights from Prompt Variation and Hyperparameters. (arXiv:2312.17476v1 [cs.CL])
EHR Interaction Between Patients and AI: NoteAid EHR Interaction. (arXiv:2312.17475v1 [cs.CL])
Video Understanding with Large Language Models: A Survey. (arXiv:2312.17432v1 [cs.CV])
Commonsense for Zero-Shot Natural Language Video Localization. (arXiv:2312.17429v1 [cs.CV])
Language Model as an Annotator: Unsupervised Context-aware Quality Phrase Generation. (arXiv:2312.17349v1 [cs.CL])
AQUALLM: Audio Question Answering Data Generation Using Large Language Models. (arXiv:2312.17343v1 [cs.CL])
SentinelLMs: Encrypted Input Adaptation and Fine-tuning of Language Models for Private and Secure Inference. (arXiv:2312.17342v1 [cs.CR])
Exploring Nature: Datasets and Models for Analyzing Nature-Related Disclosures. (arXiv:2312.17337v1 [cs.CL])
Structured Packing in LLM Training Improves Long Context Utilization. (arXiv:2312.17296v1 [cs.CL])
Optimizing watermarks for large language models. (arXiv:2312.17295v1 [cs.CR])
Effect of dimensionality change on the bias of word embeddings. (arXiv:2312.17292v1 [cs.CL])
AI Content Self-Detection for Transformer-based Large Language Models. (arXiv:2312.17289v1 [cs.CL])
Stateful FastConformer with Cache-based Inference for Streaming Automatic Speech Recognition. (arXiv:2312.17279v1 [cs.CL])
Large Language Models for Conducting Advanced Text Analytics Information Systems Research. (arXiv:2312.17278v1 [cs.CL])
PanGu-$\pi$: Enhancing Language Model Architectures via Nonlinearity Compensation. (arXiv:2312.17276v1 [cs.CL])