PREADD: Prefix-Adaptive Decoding for Controlled Text Generation. (arXiv:2307.03214v1 [cs.CL]) 

Deductive Additivity for Planning of Natural Language Proofs. (arXiv:2307.02472v2 [cs.CL] UPDATED) 

Utilizing ChatGPT Generated Data to Retrieve Depression Symptoms from Social Media. (arXiv:2307.02313v2 [cs.CL] UPDATED) 

Transformed Protoform Reconstruction. (arXiv:2307.01896v2 [cs.CL] UPDATED) 

Align With Purpose: Optimize Desired Properties in CTC Models with a General Plug-and-Play Framework. (arXiv:2307.01715v2 [cs.CL] UPDATED) 

Image Matters: A New Dataset and Empirical Study for Multimodal Hyperbole Detection. (arXiv:2307.00209v2 [cs.CV] UPDATED) 

Biomedical Language Models are Robust to Sub-optimal Tokenization. (arXiv:2306.17649v2 [cs.CL] UPDATED) 

LLM Calibration and Automatic Hallucination Detection via Pareto Optimal Self-supervision. (arXiv:2306.16564v2 [cs.CL] UPDATED) 

The Singing Voice Conversion Challenge 2023. (arXiv:2306.14422v2 [cs.SD] UPDATED) 

Chinese Fine-Grained Financial Sentiment Analysis with Large Language Models. (arXiv:2306.14096v2 [cs.CL] UPDATED) 

The Impact of ChatGPT and LLMs on Medical Imaging Stakeholders: Perspectives and Use Cases. (arXiv:2306.06767v2 [eess.IV] UPDATED) 

Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection. (arXiv:2306.04637v2 [cs.LG] UPDATED) 

Evaluation of ChatGPT on Biomedical Tasks: A Zero-Shot Comparison with Fine-Tuned Generative Transformers. (arXiv:2306.04504v2 [cs.CL] UPDATED) 

A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets. (arXiv:2305.18486v4 [cs.CL] UPDATED) 

Calibration of Transformer-based Models for Identifying Stress and Depression in Social Media. (arXiv:2305.16797v2 [cs.CL] UPDATED) 

Self-supervised representations in speech-based depression detection. (arXiv:2305.12263v2 [cs.CL] UPDATED) 

From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models. (arXiv:2305.08283v3 [cs.CL] UPDATED) 

Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models. (arXiv:2305.01645v3 [cs.CL] UPDATED) 

Unstructured and structured data: Can we have the best of both worlds with large language models? (arXiv:2304.13010v2 [cs.DB] UPDATED) 

Generation of Highlights from Research Papers Using Pointer-Generator Networks and SciBERT Embeddings. (arXiv:2302.07729v2 [cs.CL] UPDATED) 
