Conic10K: A Challenging Math Problem Understanding and Reasoning Dataset. (arXiv:2311.05113v1 [cs.CL]) 

A Survey of Large Language Models in Medicine: Progress, Application, and Challenge. (arXiv:2311.05112v1 [cs.CL]) 

Legal-HNet: Mixing Legal Long-Context Tokens with Hartley Transform. (arXiv:2311.05089v1 [cs.CL]) 

Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks. (arXiv:2311.05085v1 [cs.CL]) 

Mental Health Diagnosis in the Digital Age: Harnessing Sentiment Analysis on Social Media Platforms upon Ultra-Sparse Feature Content. (arXiv:2311.05075v1 [cs.LG]) 

A Framework to Assess (Dis)agreement Among Diverse Rater Groups. (arXiv:2311.05074v1 [cs.CL]) 

Deep Learning Brasil at ABSAPT 2022: Portuguese Transformer Ensemble Approaches. (arXiv:2311.05051v1 [cs.CL]) 

DeepLearningBrasil@LT-EDI-2023: Exploring Deep Learning Techniques for Detecting Depression in Social Media Text. (arXiv:2311.05047v1 [cs.CL]) 

Zero-shot Translation of Attention Patterns in VQA Models to Natural Language. (arXiv:2311.05043v1 [cs.CV]) 

First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models. (arXiv:2311.05020v1 [cs.CL]) 

Interpreting Pretrained Language Models via Concept Bottlenecks. (arXiv:2311.05014v1 [cs.CL]) 

On the steerability of large language models toward data-driven personas. (arXiv:2311.04978v1 [cs.CL]) 

Prompt Sketching for Large Language Models. (arXiv:2311.04954v1 [cs.CL]) 

Explained anomaly detection in text reviews: Can subjective scenarios be correctly evaluated? (arXiv:2311.04948v1 [cs.CL])

LooGLE: Can Long-Context Language Models Understand Long Contexts? (arXiv:2311.04939v1 [cs.CL])

A comparative analysis between Conformer-Transducer, Whisper, and wav2vec2 for improving the child speech recognition. (arXiv:2311.04936v1 [cs.CL]) 

Prompt Cache: Modular Attention Reuse for Low-Latency Inference. (arXiv:2311.04934v1 [cs.CL]) 

Evaluating Large Language Models in Ophthalmology. (arXiv:2311.04933v1 [cs.CL]) 

GPT4All: An Ecosystem of Open Source Compressed Language Models. (arXiv:2311.04931v1 [cs.CL]) 

Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language. (arXiv:2311.04930v1 [cs.CL]) 
