Science in the Era of ChatGPT, Large Language Models and Generative AI: Challenges for Research Ethics and How to Respond. (arXiv:2305.15299v3 [cs.CY] UPDATED) 

A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition. (arXiv:2305.12485v2 [cs.CL] UPDATED) 

In-Context Retrieval-Augmented Language Models. (arXiv:2302.00083v2 [cs.CL] UPDATED) 

ThoughtSource: A central hub for large language model reasoning data. (arXiv:2301.11596v5 [cs.CL] UPDATED) 

Large Language Models Struggle to Learn Long-Tail Knowledge. (arXiv:2211.08411v2 [cs.CL] UPDATED) 

Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning. (arXiv:2205.14704v4 [cs.CL] UPDATED) 

Differentiable Subset Pruning of Transformer Heads. (arXiv:2108.04657v3 [cs.CL] UPDATED) 

A Geometric Notion of Causal Probing. (arXiv:2307.15054v1 [cs.CL]) 

Matching Patients to Clinical Trials with Large Language Models. (arXiv:2307.15051v1 [cs.CL]) 

Universal and Transferable Adversarial Attacks on Aligned Language Models. (arXiv:2307.15043v1 [cs.CL]) 

SuperCLUE: A Comprehensive Chinese Large Language Model Benchmark. (arXiv:2307.15020v1 [cs.CL]) 

Gzip versus bag-of-words for text classification with KNN. (arXiv:2307.15002v1 [cs.CL]) 

Scaling TransNormer to 175 Billion Parameters. (arXiv:2307.14995v1 [cs.CL]) 

Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs. (arXiv:2307.14988v1 [cs.LG]) 

PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback. (arXiv:2307.14936v1 [cs.CL]) 

ARC-NLP at PAN 2023: Transition-Focused Natural Language Inference for Writing Style Detection. (arXiv:2307.14913v1 [cs.CL]) 

ARC-NLP at PAN 2023: Hierarchical Long Text Classification for Trigger Detection. (arXiv:2307.14912v1 [cs.CL]) 

Retrieval-based Text Selection for Addressing Class-Imbalanced Data in Classification. (arXiv:2307.14899v1 [cs.CL]) 

MESED: A Multi-modal Entity Set Expansion Dataset with Fine-grained Semantic Classes and Hard Negative Entities. (arXiv:2307.14878v1 [cs.CL]) 

Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners. (arXiv:2307.14856v1 [cs.CL]) 
