Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure. (arXiv:2311.07590v1 [cs.CL]) 

Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources. (arXiv:2311.07589v1 [cs.CL]) 

NLQxform: A Language Model-based Question to SPARQL Transformer. (arXiv:2311.07588v1 [cs.CL]) 

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?". (arXiv:2311.07587v1 [cs.CL])

Input Reconstruction Attack against Vertical Federated Large Language Models. (arXiv:2311.07585v1 [cs.CL]) 

Performance Prediction of Data-Driven Knowledge summarization of High Entropy Alloys (HEAs) literature implementing Natural Language Processing algorithms. (arXiv:2311.07584v1 [cs.CL]) 

Cross-Dialect Sentence Transformation: A Comparative Analysis of Language Models for Adapting Sentences to British English. (arXiv:2311.07583v1 [cs.CL]) 

Evaluating the Potential of Leading Large Language Models in Reasoning Biology Questions. (arXiv:2311.07582v1 [cs.CL]) 

Time Travel in LLMs: Tracing Data Contamination in Large Language Models. (arXiv:2308.08493v2 [cs.CL] CROSS LISTED) 

Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming in the Wild. (arXiv:2311.06237v2 [cs.CL] UPDATED) 

Removing RLHF Protections in GPT-4 via Fine-Tuning. (arXiv:2311.05553v2 [cs.CL] UPDATED) 

An Improved Transformer-based Model for Detecting Phishing, Spam, and Ham: A Large Language Model Approach. (arXiv:2311.04913v2 [cs.CL] UPDATED) 

Rethinking Benchmark and Contamination for Language Models with Rephrased Samples. (arXiv:2311.04850v2 [cs.CL] UPDATED) 

NExT-Chat: An LMM for Chat, Detection and Segmentation. (arXiv:2311.04498v3 [cs.CV] UPDATED) 

ChaTA: Towards an Intelligent Question-Answer Teaching Assistant using Open-Source LLMs. (arXiv:2311.02775v2 [cs.LG] UPDATED) 

Citance-Contextualized Summarization of Scientific Papers. (arXiv:2311.02408v3 [cs.CL] UPDATED) 

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models. (arXiv:2311.01732v2 [cs.CL] UPDATED) 

Divergent Token Metrics: Measuring degradation to prune away LLM components -- and optimize quantization. (arXiv:2311.01544v2 [cs.CL] UPDATED) 

AWEQ: Post-Training Quantization with Activation-Weight Equalization for Large Language Models. (arXiv:2311.01305v3 [cs.LG] UPDATED) 

COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances. (arXiv:2311.01012v2 [cs.CL] UPDATED) 
