
Compositional Chain-of-Thought Prompting for Large Multimodal Models. (arXiv:2311.17076v1 [cs.CV]) 

Efficient Deep Speech Understanding at the Edge. (arXiv:2311.17065v1 [eess.AS]) 

Average Token Delay: A Duration-aware Latency Metric for Simultaneous Translation. (arXiv:2311.14353v2 [cs.CL] UPDATED) 

LM-Cocktail: Resilient Tuning of Language Models via Model Merging. (arXiv:2311.13534v2 [cs.CL] UPDATED) 

Unsupervised Graph Attention Autoencoder for Attributed Networks using K-means Loss. (arXiv:2311.12986v2 [cs.CL] UPDATED) 

Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information. (arXiv:2311.11509v2 [cs.CL] UPDATED) 

Unveiling Public Perceptions: Machine Learning-Based Sentiment Analysis of COVID-19 Vaccines in India. (arXiv:2311.11435v2 [cs.CL] UPDATED) 

Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure. (arXiv:2311.07590v2 [cs.CL] UPDATED) 

Autoregressive Language Models For Estimating the Entropy of Epic EHR Audit Logs. (arXiv:2311.06401v3 [cs.CL] UPDATED) 

Mirror: A Universal Framework for Various Information Extraction Tasks. (arXiv:2311.05419v2 [cs.CL] UPDATED) 

Evaluating Large Language Models: A Comprehensive Survey. (arXiv:2310.19736v3 [cs.CL] UPDATED) 

PACuna: Automated Fine-Tuning of Language Models for Particle Accelerators. (arXiv:2310.19106v3 [cs.CL] UPDATED) 

OffMix-3L: A Novel Code-Mixed Dataset in Bangla-English-Hindi for Offensive Language Identification. (arXiv:2310.18387v2 [cs.CL] UPDATED) 

WordArt Designer: User-Driven Artistic Typography Synthesis using Large Language Models. (arXiv:2310.18332v2 [cs.CL] UPDATED) 

Sentiment analysis with adaptive multi-head attention in Transformer. (arXiv:2310.14505v2 [cs.CL] UPDATED) 

Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking. (arXiv:2310.10520v3 [cs.CL] UPDATED) 

Do pretrained Transformers Really Learn In-context by Gradient Descent? (arXiv:2310.08540v2 [cs.CL] UPDATED) 

What If the TV Was Off? Examining Counterfactual Reasoning Abilities of Multi-modal Language Models. (arXiv:2310.06627v2 [cs.CL] UPDATED) 

Large Language Models for Propaganda Detection. (arXiv:2310.06422v2 [cs.CL] UPDATED) 

Analyzing Zero-Shot Abilities of Vision-Language Models on Video Understanding Tasks. (arXiv:2310.04914v2 [cs.CV] UPDATED) 

Qoto Mastodon
