Cross-Lingual Retrieval Augmented Prompt for Low-Resource Languages. (arXiv:2212.09651v4 [cs.CL] UPDATED) 

Explanation Regeneration via Information Bottleneck. (arXiv:2212.09603v2 [cs.CL] UPDATED) 

TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities. (arXiv:2212.06385v2 [cs.CL] UPDATED) 

Forming Trees with Treeformers. (arXiv:2207.06960v2 [cs.CL] UPDATED) 

LegoNN: Building Modular Encoder-Decoder Models. (arXiv:2206.03318v2 [cs.CL] UPDATED) 

BTPK-based interpretable method for NER tasks based on Talmudic Public Announcement Logic. (arXiv:2201.09523v2 [cs.CL] UPDATED) 

What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis. (arXiv:2107.00439v3 [cs.CL] UPDATED) 

Empowering Cross-lingual Behavioral Testing of NLP Models with Typological Features. (arXiv:2307.05454v1 [cs.CL]) 

ISLTranslate: Dataset for Translating Indian Sign Language. (arXiv:2307.05440v1 [cs.CL]) 

BLUEX: A benchmark based on Brazilian Leading Universities Entrance eXams. (arXiv:2307.05410v1 [cs.CL]) 

Unmasking the giant: A comprehensive evaluation of ChatGPT's proficiency in coding algorithms and data structures. (arXiv:2307.05360v1 [cs.SE]) 

UniCoRN: Unified Cognitive Signal ReconstructioN bridging cognitive signals and human language. (arXiv:2307.05355v1 [eess.SP]) 

GujiBERT and GujiGPT: Construction of Intelligent Information Processing Foundation Language Models for Ancient Texts. (arXiv:2307.05354v1 [cs.CL]) 

Explaining Competitive-Level Programming Solutions using LLMs. (arXiv:2307.05337v1 [cs.CL]) 

Decoding the Popularity of TV Series: A Network Analysis Perspective. (arXiv:2307.05329v1 [cs.SI]) 

Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. (arXiv:2307.05300v1 [cs.AI]) 

U-CREAT: Unsupervised Case Retrieval using Events extrAcTion. (arXiv:2307.05260v1 [cs.IR]) 

Attribute Controlled Dialogue Prompting. (arXiv:2307.05228v1 [cs.CL]) 

Mao-Zedong At SemEval-2023 Task 4: Label Representation Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification. (arXiv:2307.05174v1 [cs.CL]) 
