ViCGCN: Graph Convolutional Network with Contextualized Language Models for Social Media Mining in Vietnamese. (arXiv:2309.02902v1 [cs.CL]) 

A deep Natural Language Inference predictor without language-specific training data. (arXiv:2309.02887v1 [cs.CL]) 

Aligning Large Language Models for Clinical Tasks. (arXiv:2309.02884v1 [cs.CL]) 

Promoting Open-domain Dialogue Generation through Learning Pattern Information between Contexts and Responses. (arXiv:2309.02823v1 [cs.CL]) 

Agent-based simulation of pedestrians' earthquake evacuation; application to Beirut, Lebanon. (arXiv:2309.02812v1 [cs.CL]) 

Norm Tweaking: High-performance Low-bit Quantization of Large Language Models. (arXiv:2309.02784v1 [cs.LG]) 

GRASS: Unified Generation Model for Speech Semantic Understanding. (arXiv:2309.02780v1 [cs.CL]) 

Improving Code Generation by Dynamic Temperature Sampling. (arXiv:2309.02772v1 [cs.SE]) 

Rubric-Specific Approach to Automated Essay Scoring with Augmentation Training. (arXiv:2309.02740v1 [cs.CL]) 

HC3 Plus: A Semantic-Invariant Human ChatGPT Comparison Corpus. (arXiv:2309.02731v1 [cs.CL]) 

Large Language Models for Automated Open-domain Scientific Hypotheses Discovery. (arXiv:2309.02726v1 [cs.CL]) 

Offensive Hebrew Corpus and Detection using BERT. (arXiv:2309.02724v1 [cs.CL]) 

HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models. (arXiv:2309.02706v1 [cs.CL]) 

Certifying LLM Safety against Adversarial Prompting. (arXiv:2309.02705v1 [cs.CL]) 

A Joint Study of Phrase Grounding and Task Performance in Vision and Language Models. (arXiv:2309.02691v1 [cs.CL]) 

Zero-Resource Hallucination Prevention for Large Language Models. (arXiv:2309.02654v1 [cs.CL]) 

Epi-Curriculum: Episodic Curriculum Learning for Low-Resource Domain Adaptation in Neural Machine Translation. (arXiv:2309.02640v1 [cs.LG]) 

Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning. (arXiv:2309.02591v1 [cs.LG]) 

Automating Behavioral Testing in Machine Translation. (arXiv:2309.02553v1 [cs.CL]) 

Minimal Effective Theory for Phonotactic Memory: Capturing Local Correlations due to Errors in Speech. (arXiv:2309.02466v1 [eess.AS]) 
