A UMLS-Augmented Framework for Improving Factuality in Large Language Models within Healthcare. (arXiv:2310.02778v1 [cs.CL]) 

The Role of Linguistic Priors in Measuring Compositional Generalization of Vision-Language Models. (arXiv:2310.02777v1 [cs.CL]) 

Comparative Study and Framework for Automated Summariser Evaluation: LangChain and Hybrid Algorithms. (arXiv:2310.02759v1 [cs.LG]) 

LC-Score: Reference-less estimation of Text Comprehension Difficulty. (arXiv:2310.02754v1 [cs.CL]) 

AGIR: Automating Cyber Threat Intelligence Reporting with Natural Language Generation. (arXiv:2310.02655v1 [cs.CR]) 

I$^2$KD-SLU: An Intra-Inter Knowledge Distillation Framework for Zero-Shot Cross-Lingual Spoken Language Understanding. (arXiv:2310.02594v1 [cs.CL]) 

Improving Automatic VQA Evaluation Using Large Language Models. (arXiv:2310.02567v1 [cs.CV]) 

NOLA: Networks as Linear Combination of Low Rank Random Basis. (arXiv:2310.02556v1 [cs.CL]) 

CITING: Large Language Models Create Curriculum for Instruction Tuning. (arXiv:2310.02527v1 [cs.CL]) 

ResidualTransformer: Residual Low-rank Learning with Weight-sharing for Transformer Layers. (arXiv:2310.02489v1 [cs.CL]) 

Large Language Models Can Be Good Privacy Protection Learners. (arXiv:2310.02469v1 [cs.CL]) 

The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising "Alignment" in Large Language Models. (arXiv:2310.02457v1 [cs.CL]) 

Backdoor Adjustment of Confounding by Provenance for Robust Text Classification of Multi-institutional Clinical Notes. (arXiv:2310.02451v1 [cs.CL]) 

Low-Resource Languages Jailbreak GPT-4. (arXiv:2310.02446v1 [cs.CL]) 

Novice Learner and Expert Tutor: Evaluating Math Reasoning Abilities of Large Language Models with Misconceptions. (arXiv:2310.02439v1 [cs.CL]) 

Can Large Language Models Provide Security & Privacy Advice? Measuring the Ability of LLMs to Refute Misconceptions. (arXiv:2310.02431v1 [cs.HC]) 

Can a student Large Language Model perform as well as it's teacher? (arXiv:2310.02421v1 [cs.LG]) 

Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness. (arXiv:2310.02410v1 [cs.LG]) 

Nugget 2D: Dynamic Contextual Compression for Scaling Decoder-only Language Models. (arXiv:2310.02409v1 [cs.CL]) 

MindTheDApp: A Toolchain for Complex Network-Driven Structural Analysis of Ethereum-based Decentralised Applications. (arXiv:2310.02408v1 [cs.IT]) 
