RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture. (arXiv:2401.08406v2 [cs.CL] UPDATED) 

Are self-explanations from Large Language Models faithful? (arXiv:2401.07927v2 [cs.CL] UPDATED) 

TAROT: A Hierarchical Framework with Multitask Co-Pretraining on Semi-Structured Data towards Effective Person-Job Fit. (arXiv:2401.07525v2 [cs.CL] UPDATED) 

Developing ChatGPT for Biology and Medicine: A Complete Review of Biomedical Question Answering. (arXiv:2401.07510v2 [cs.CL] UPDATED) 

Improving Domain Adaptation through Extended-Text Reading Comprehension. (arXiv:2401.07284v2 [cs.CL] UPDATED) 

E^2-LLM: Efficient and Extreme Length Extension of Large Language Models. (arXiv:2401.06951v2 [cs.CL] UPDATED) 

Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning. (arXiv:2401.06805v2 [cs.CL] UPDATED) 

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training. (arXiv:2401.05566v3 [cs.CR] UPDATED) 

Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning. (arXiv:2312.17484v2 [cs.CL] UPDATED) 

Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4. (arXiv:2312.16171v2 [cs.CL] UPDATED) 

Logic-Scaffolding: Personalized Aspect-Instructed Recommendation Explanation Generation using LLMs. (arXiv:2312.14345v2 [cs.AI] UPDATED) 

LLM-SQL-Solver: Can LLMs Determine SQL Equivalence? (arXiv:2312.10321v2 [cs.DB] UPDATED) 

Assertion Enhanced Few-Shot Learning: Instructive Technique for Large Language Models to Generate Educational Explanations. (arXiv:2312.03122v2 [cs.CL] UPDATED) 

Large Language Model-Enhanced Algorithm Selection: Towards Comprehensive Algorithm Representation. (arXiv:2311.13184v2 [cs.LG] UPDATED) 

Debiasing Algorithm through Model Adaptation. (arXiv:2310.18913v2 [cs.CL] UPDATED) 

SpecTr: Fast Speculative Decoding via Optimal Transport. (arXiv:2310.15141v2 [cs.LG] UPDATED) 

MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter. (arXiv:2310.12798v4 [cs.CL] UPDATED) 

FactCHD: Benchmarking Fact-Conflicting Hallucination Detection. (arXiv:2310.12086v2 [cs.CL] UPDATED) 

Functional Invariants to Watermark Large Transformers. (arXiv:2310.11446v2 [cs.CR] UPDATED) 

Circuit Component Reuse Across Tasks in Transformer Language Models. (arXiv:2310.08744v2 [cs.CL] UPDATED) 
