BayesPrompt: Prompting Large-Scale Pre-Trained Language Models on Few-shot Inference via Debiased Domain Abstraction. (arXiv:2401.14166v1 [cs.CL]) 

True Knowledge Comes from Practice: Aligning LLMs with Embodied Environments via Reinforcement Learning. (arXiv:2401.14151v1 [cs.LG]) 

Convolutional Neural Networks can achieve binary bail judgement classification. (arXiv:2401.14135v1 [cs.CL]) 

On the Affinity, Rationality, and Diversity of Hierarchical Topic Modeling. (arXiv:2401.14113v1 [cs.CL]) 

CompactifAI: Extreme Compression of Large Language Models using Quantum-Inspired Tensor Networks. (arXiv:2401.14109v1 [cs.CL]) 

Ta'keed: The First Generative Fact-Checking System for Arabic Claims. (arXiv:2401.14067v1 [cs.CL]) 

Towards Goal-oriented Large Language Model Prompting: A Survey. (arXiv:2401.14043v1 [cs.CL]) 

(Chat)GPT v BERT: Dawn of Justice for Semantic Change Detection. (arXiv:2401.14040v1 [cs.CL]) 

Accelerating Retrieval-Augmented Language Model Serving with Speculation. (arXiv:2401.14021v1 [cs.LG]) 

Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI. (arXiv:2401.14019v1 [cs.CL]) 

Towards Uncertainty-Aware Language Agent. (arXiv:2401.14016v1 [cs.CL]) 

CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning. (arXiv:2401.14011v1 [cs.CL]) 

ConstraintChecker: A Plugin for Large Language Models to Reason on Commonsense Knowledge Bases. (arXiv:2401.14003v1 [cs.CL]) 

Investigate-Consolidate-Exploit: A General Strategy for Inter-Task Agent Self-Evolution. (arXiv:2401.13996v1 [cs.CL]) 

Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning. (arXiv:2401.13986v1 [cs.CL]) 

Leeroo Orchestrator: Elevating LLMs Performance Through Model Integration. (arXiv:2401.13979v1 [cs.CL]) 

Adaptive Text Watermark for Large Language Models. (arXiv:2401.13927v1 [cs.CL]) 

LocMoE: A Low-overhead MoE for Large Language Model Training. (arXiv:2401.13920v1 [cs.LG]) 

WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models. (arXiv:2401.13919v1 [cs.CL]) 

No More Distractions: an Adaptive Up-Sampling Algorithm to Reduce Data Artifacts. (arXiv:2401.13907v1 [cs.CL]) 
