MOR-TL: A Novel Model Order Reduction Method for Parametrized Problems with Application to Seismic Wave Propagation arxiv.org/abs/2505.00709 .NA .NA

A Goal-Oriented Adaptive Sampling Procedure for Projection-Based Reduced-Order Models with Hyperreduction arxiv.org/abs/2505.00712 .NA .NA

Partial integration based regularization in BEM for 3D elastostatic problems: The role of line integrals arxiv.org/abs/2505.00713 .NA .NA

Comparison of FMM and $\mathcal{H}$-matrix based 3D-ACA for a time domain boundary element method arxiv.org/abs/2505.00715 .NA .NA

FinBERT-QA: Financial Question Answering with pre-trained BERT Language Models arxiv.org/abs/2505.00725 .CL .IR .LG

Rosetta-PL: Propositional Logic as a Benchmark for Large Language Model Reasoning arxiv.org/abs/2505.00001 .CL

Symbol grounding in computational systems: A paradox of intentions arxiv.org/abs/2505.00002 .CL

The Mind in the Machine: A Survey of Incorporating Psychological Theories in LLMs arxiv.org/abs/2505.00003 .CL

LangVAE and LangSpace: Building and Probing for Language Model VAEs arxiv.org/abs/2505.00004 .CL .AI

Belief System Dynamics as Network of Single Layered Neural Network arxiv.org/abs/2505.00005 .soc-ph .SI

A Scoping Review of Natural Language Processing in Addressing Medically Inaccurate Information: Errors, Misinformation, and Hallucination arxiv.org/abs/2505.00008 .CL .AI

Efficient Knowledge Transfer in Multi-Task Learning through Task-Adaptive Low-Rank Representation arxiv.org/abs/2505.00009 .CL

Jailbreak Detection in Clinical Training LLMs Using Feature-Based Predictive Models arxiv.org/abs/2505.00010 .CL .AI

Research on CNN-BiLSTM Network Traffic Anomaly Detection Model Based on MindSpore arxiv.org/abs/2504.21008 .CR .AI

Waking Up an AI: A Quantitative Framework for Prompt-Induced Phase Transition in Large Language Models arxiv.org/abs/2504.21012 .CL .AI

Analyzing Feedback Mechanisms in AI-Generated MCQs: Insights into Readability, Lexical Properties, and Levels of Challenge arxiv.org/abs/2504.21013 .CL .AI

Don't Retrieve, Generate: Prompting LLMs for Synthetic Training Data in Dense Retrieval arxiv.org/abs/2504.21015 .IR .CL

Qoto Mastodon
