Deep learning forward and reverse primer design to detect SARS-CoV-2 emerging variants. (arXiv:2209.13591v1 [q-bio.GN]) arxiv.org/abs/2209.13591

Surges observed at different periods in the number of COVID-19 cases are associated with the emergence of multiple SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) variants. The design of methods to support laboratory detection is crucial for monitoring these variants. Hence, in this paper, we develop a semi-automated method to design both forward and reverse primer sets to detect SARS-CoV-2 variants. To proceed, we train deep Convolutional Neural Networks (CNNs) to classify labelled SARS-CoV-2 variants and identify the partial genomic features needed for forward and reverse Polymerase Chain Reaction (PCR) primer design. Our proposed approach supplements existing ones while promoting the emerging concept of neural-network-assisted primer design for PCR. Our CNN model was trained on a database of SARS-CoV-2 full-length genomes from GISAID and tested on a separate dataset from NCBI, achieving 98% accuracy in classifying variants. This result builds on three different feature-extraction methods; the primer sequences selected for detecting each SARS-CoV-2 variant (except Omicron) were present in more than 95% of sequences in an independent set of 5,000 same-variant sequences, and in fewer than 5% of sequences in independent datasets of 5,000 sequences of each other variant. In total, we obtain 22 forward and reverse primer pairs with flexible lengths (18-25 base pairs) and expected amplicon lengths ranging between 42 and 3,322 nucleotides. Beyond feature occurrence, in-silico primer checks confirmed that the identified primer pairs are suitable for accurate SARS-CoV-2 variant detection by means of PCR tests.
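
As an illustration of the screening criterion described in the abstract, the sketch below checks candidate primers against sets of variant genomes, keeping those present in more than 95% of same-variant sequences and in fewer than 5% of sequences of every other variant. It is a minimal, assumed reading of the selection rule, not the authors' pipeline; the primer candidates, genome lists, and thresholds are hypothetical.

```python
# Minimal sketch of the primer-presence screen: a candidate primer is kept if
# it appears in >95% of same-variant genomes and in <5% of genomes of every
# other variant. All inputs are hypothetical stand-ins.

def presence_fraction(primer: str, genomes: list[str]) -> float:
    """Fraction of genome sequences containing the primer as an exact substring."""
    hits = sum(1 for g in genomes if primer in g)
    return hits / len(genomes)

def screen_primers(candidates, target_genomes, other_variant_genomes,
                   min_target=0.95, max_other=0.05):
    """Return candidates satisfying the presence/absence thresholds."""
    selected = []
    for primer in candidates:
        if presence_fraction(primer, target_genomes) <= min_target:
            continue
        if any(presence_fraction(primer, genomes) >= max_other
               for genomes in other_variant_genomes.values()):
            continue
        selected.append(primer)
    return selected

# Toy usage (real inputs would be 5,000 full-length genomes per variant):
delta = ["ACGTTTAGGCATCGA" * 10, "ACGTTTAGGCATCGA" * 10]
others = {"Alpha": ["GGGCATTACGATCCA" * 10], "Beta": ["TTTACGGATCCATGA" * 10]}
print(screen_primers(["TTTAGGCATCGAACG"], delta, others))
```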

What Are You Anxious About? Examining Subjects of Anxiety during the COVID-19 Pandemic. (arXiv:2209.13595v1 [cs.CY]) arxiv.org/abs/2209.13595

COVID-19 poses disproportionate mental health consequences to the public during different phases of the pandemic. We use a computational approach to capture the specific aspects that trigger an online community's anxiety about the pandemic and investigate how these aspects change over time. First, we identified nine subjects of anxiety (SOAs) in a sample of Reddit posts ($N$=86) from r/COVID19_support using thematic analysis. Then, we quantified Reddit users' anxiety by training algorithms on a manually annotated sample ($N$=793) to automatically label the SOAs in a larger chronological sample ($N$=6,535). The nine SOAs align with items in several recently developed pandemic anxiety measurement scales. We observed that Reddit users' concerns about health risks remained high during the first eight months of the pandemic. These concerns then diminished dramatically despite the surge in cases that occurred later. In general, users' language disclosing the SOAs became less intense as the pandemic progressed. However, worries about mental health and the future increased steadily throughout the period covered by this study. People also tended to use more intense language to describe mental health concerns than health risk or death concerns. Our results suggest that this online group's mental health condition does not necessarily improve even as appropriate countermeasures gradually weaken COVID-19 as a health threat. Our system lays the groundwork for population health and epidemiology scholars to examine aspects that provoke pandemic anxiety in a timely fashion.
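
As a rough sketch of the labeling step (training on an annotated sample to tag SOAs in a larger corpus), the snippet below fits a multi-label text classifier with scikit-learn. The abstract does not specify the algorithms used, so the TF-IDF features, logistic regression model, and SOA names here are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: multi-label classification of subjects of anxiety (SOAs)
# in Reddit posts. Model choice and label names are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy annotated sample; the paper annotates N=793 posts with nine SOAs.
posts = [
    "Terrified I caught it at the grocery store, my chest feels tight.",
    "I lost my job and have no idea how I'll pay rent next month.",
    "Can't sleep, the isolation is really getting to my head.",
]
labels = [["health risk"], ["financial concern"], ["mental health"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(posts, y)

# Label a new, unannotated post (the paper applies this to N=6,535 posts).
new_post = ["Worried my anxiety will never go back to how it was before."]
print(binarizer.inverse_transform(model.predict(new_post)))
```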

Scalable and Equivariant Spherical CNNs by Discrete-Continuous (DISCO) Convolutions. (arXiv:2209.13603v1 [cs.CV]) arxiv.org/abs/2209.13603

No existing spherical convolutional neural network (CNN) framework is both computationally scalable and rotationally equivariant. Continuous approaches capture rotational equivariance but are often prohibitively computationally demanding. Discrete approaches offer more favorable computational performance but at the cost of equivariance. We develop a hybrid discrete-continuous (DISCO) group convolution that is simultaneously equivariant and computationally scalable to high resolution. While our framework can be applied to any compact group, we specialize to the sphere. Our DISCO spherical convolutions exhibit $\text{SO}(3)$ rotational equivariance, where $\text{SO}(n)$ is the special orthogonal group representing rotations in $n$ dimensions. When restricting rotations of the convolution to the quotient space $\text{SO}(3)/\text{SO}(2)$ for further computational gains, we recover a form of asymptotic $\text{SO}(3)$ rotational equivariance. Through a sparse tensor implementation we achieve linear scaling in the number of pixels on the sphere for both computational cost and memory usage. For 4k spherical images we realize a saving of $10^9$ in computational cost and $10^4$ in memory usage compared to the most efficient alternative equivariant spherical convolution. We apply the DISCO spherical CNN framework to a number of benchmark dense-prediction problems on the sphere, such as semantic segmentation and depth estimation, and achieve state-of-the-art performance on all of them.
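
As a schematic for how a discrete-continuous group convolution of this kind can be written, the displayed equation below (an assumption based on the standard quadrature approximation of a group convolution, not taken verbatim from the paper) keeps the rotation $\rho$ continuous while sampling the signal at discrete pixels $\omega_i$ with quadrature weights $q_i$:

$$(f \star \psi)(\rho) \,=\, \langle f, \mathcal{R}_\rho \psi \rangle \,\approx\, \sum_{i=1}^{N_{\mathrm{pix}}} q_i \, f(\omega_i)\, \overline{\psi(\rho^{-1}\omega_i)}, \qquad \rho \in \text{SO}(3).$$

Because the group action on the filter remains continuous, equivariance is retained, while a filter with localized support makes each sum sparse, which is what a sparse tensor implementation can exploit to obtain linear scaling in the number of pixels.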

Efficiently Learning Recoveries from Failures Under Partial Observability. (arXiv:2209.13605v1 [cs.RO]) arxiv.org/abs/2209.13605

Operating under real-world conditions is challenging due to the possibility of a wide range of failures induced by partial observability. In relatively benign settings, such failures can be overcome by retrying or by executing one of a small number of hand-engineered recovery strategies. By contrast, contact-rich sequential manipulation tasks, like opening doors and assembling furniture, are not amenable to exhaustive hand-engineering. To address this issue, we present a general approach for robustifying manipulation strategies in a sample-efficient manner. Our approach incrementally improves robustness by first discovering the failure modes of the current strategy via exploration in simulation and then learning additional recovery skills to handle these failures. To ensure efficient learning, we propose an online algorithm, Value Upper Confidence Limit (Value-UCL), that selects which failure modes to prioritize and which state to recover to so that the expected performance improves maximally in every training episode. We use our approach to learn recovery skills for door opening and evaluate them both in simulation and on a real robot with little fine-tuning. Our experiments show that, compared to open-loop execution, even a limited amount of recovery learning improves task success substantially, from 71% to 92.4% in simulation and from 75% to 90% on a real robot.
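
The abstract names the online algorithm Value-UCL but does not give its selection rule, so the sketch below illustrates only a generic upper-confidence-limit scheme over hypothetical (failure mode, recovery state) options: keep a running estimate of the expected performance gain of each option, add a UCB1-style exploration bonus, and train on the option with the highest upper limit each episode. All names and the bonus form are assumptions, not the paper's method.

```python
import math
import random

# Hedged sketch of UCL-style selection over (failure mode, recovery state)
# options. The bonus term below is the standard UCB1 form, used here purely
# for illustration; the paper's Value-UCL criterion may differ.

class UclSelector:
    def __init__(self, options, c=1.0):
        self.options = list(options)      # e.g. (failure_mode, recovery_state) pairs
        self.c = c
        self.counts = {o: 0 for o in self.options}
        self.mean_gain = {o: 0.0 for o in self.options}  # observed performance gain
        self.t = 0

    def select(self):
        self.t += 1
        def ucl(o):
            if self.counts[o] == 0:
                return float("inf")       # try every option at least once
            bonus = self.c * math.sqrt(math.log(self.t) / self.counts[o])
            return self.mean_gain[o] + bonus
        return max(self.options, key=ucl)

    def update(self, option, gain):
        self.counts[option] += 1
        n = self.counts[option]
        self.mean_gain[option] += (gain - self.mean_gain[option]) / n

# Toy usage: three hypothetical failure modes with different true gains.
true_gain = {"slip": 0.3, "miss_handle": 0.1, "jam": 0.05}
selector = UclSelector(true_gain.keys())
for _ in range(50):
    o = selector.select()
    selector.update(o, true_gain[o] + random.gauss(0, 0.05))
print(max(selector.mean_gain, key=selector.mean_gain.get))  # likely "slip"
```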

Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources. (arXiv:2209.12894v1 [eess.SP]) arxiv.org/abs/2209.12894

Extraction of the latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent or correlated sources. Differing from previous work, we impose general geometric, rather than statistical, conditions on the source vectors, which allow the separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains, which can be described by certain polytopes. We then consider recovery of these sources via the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems.
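
To make the Det-Max criterion concrete, the numpy sketch below does plain gradient ascent on the log-determinant of the output correlation matrix while softly enforcing a bounded (antisparse) source domain. It is a minimal illustration of the criterion for one assumed source domain; the paper instead derives two-layer, biologically-plausible networks with local learning rules, which this sketch does not reproduce.

```python
import numpy as np

def detmax_separate(X, n_iter=5000, lr=1e-3, lam=10.0, eps=1e-6):
    """Toy Det-Max separation, assuming sources bounded in [-1, 1]^n.

    X : (n, T) array of observed mixtures (square, invertible mixing assumed).
    Returns the learned unmixing matrix W.
    """
    n, T = X.shape
    rng = np.random.default_rng(0)
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        Y = W @ X
        C = Y @ Y.T / T + eps * np.eye(n)            # output correlation matrix
        # Gradient of log det(C) w.r.t. W is 2 C^{-1} Y X^T / T: it rewards
        # spreading the outputs as widely as possible (the Det-Max principle).
        grad_det = 2.0 * np.linalg.solve(C, Y) @ X.T / T
        # Soft penalty pushing outputs back inside the assumed box domain.
        excess = np.clip(np.abs(Y) - 1.0, 0.0, None) * np.sign(Y)
        grad_pen = excess @ X.T / T
        W += lr * (grad_det - lam * grad_pen)
    return W

# Toy usage: two correlated, bounded sources mixed by a random matrix.
rng = np.random.default_rng(1)
common = rng.uniform(-1, 1, (1, 5000))
S = np.clip(rng.uniform(-1, 1, (2, 5000)) + 0.3 * common, -1.0, 1.0)
A = rng.standard_normal((2, 2))
W = detmax_separate(A @ S)
print(np.round(W @ A, 2))  # ideally close to a scaled permutation matrix
```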

Vold-Kalman Filter Order tracking of Axle Box Accelerations for Railway Stiffness Assessment. (arXiv:2209.12899v1 [eess.SY]) arxiv.org/abs/2209.12899

Intelligent data-driven monitoring procedures hold enormous potential for ensuring safe operation and optimal management of the railway infrastructure in the face of increasing demands on cost and efficiency. Numerous studies have shown that track stiffness is one of the main parameters influencing the evolution of the degradation that drives maintenance processes. As such, the measurement of track stiffness is fundamental for characterizing the performance of the track in terms of deterioration rate and noise emission. This can be achieved via low-cost On-Board Monitoring (OBM) sensing systems (i.e., axle-box accelerometers) that are mounted on in-service trains and enable frequent, real-time monitoring of the railway infrastructure network. Acceleration-based stiffness indicators have seldom been considered in monitoring applications. In this work, the use of a Vold-Kalman filter is proposed for decomposing the signal into periodic, wheel- and track-related excitation-response pairs. We demonstrate that these components are in turn correlated with operational conditions, such as wheel out-of-roundness and rail type. We further illustrate the relationship between the track stiffness, the measured wheel-rail forces, and the sleeper passage amplitude, which can ultimately serve as an indicator for predictive track maintenance and for predicting track durability.
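
For readers unfamiliar with Vold-Kalman order tracking, the sketch below implements the standard second-generation, single-order formulation: the order component is modeled as a smooth complex envelope modulating a known carrier phase, and the envelope is obtained by solving a sparse regularized least-squares problem. The axle-box application, the source of the phase signal, and the weighting value are assumptions outside this sketch and would follow the paper's setup.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def vk_order(y, theta, r=0.02):
    """Second-generation Vold-Kalman extraction of a single order (sketch).

    y     : real-valued signal samples
    theta : instantaneous phase (rad) of the tracked order, same length as y
    r     : weighting factor; smaller values enforce a smoother envelope
    Returns (complex envelope, extracted order waveform).
    """
    n = len(y)
    c = np.exp(1j * theta)                               # unit-amplitude carrier
    # Second-difference "structural" operator penalizing envelope roughness.
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = (D.T @ D + (r ** 2) * sparse.identity(n)).tocsc()
    b = (r ** 2) * np.conj(c) * y
    lu = splu(A)                                         # A is real; solve parts
    x = lu.solve(b.real) + 1j * lu.solve(b.imag)         # smooth complex envelope
    # Factor 2 restores the amplitude of a real-valued order: the complex
    # envelope captures only the positive-frequency half of the cosine.
    return x, 2.0 * np.real(c * x)

# Toy usage: recover an amplitude-modulated, frequency-sweeping order in noise.
t = np.arange(0, 2, 1 / 1000)                            # 2 s at 1 kHz
theta = 2 * np.pi * (20 * t + 5 * t ** 2)                # 20 -> 40 Hz sweep
clean = (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.cos(theta)
y = clean + 0.3 * np.random.default_rng(0).standard_normal(len(t))
_, order = vk_order(y, theta)
print(np.round(np.corrcoef(order, clean)[0, 1], 3))      # should be close to 1
```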

The Ability of Self-Supervised Speech Models for Audio Representations. (arXiv:2209.12900v1 [cs.SD]) arxiv.org/abs/2209.12900

Self-supervised learning (SSL) speech models have achieved unprecedented success in speech representation learning, but some questions regarding their representation ability remain unanswered. This paper addresses two of them: (1) Can SSL speech models deal with non-speech audio? (2) Do different SSL speech models capture different aspects of audio features? To answer these questions, we conduct extensive experiments on a broad range of speech and non-speech audio datasets to evaluate the representation ability of the currently state-of-the-art SSL speech models considered in this paper, wav2vec 2.0 and HuBERT. These experiments were carried out during the NeurIPS 2021 HEAR Challenge using the standard evaluation pipeline provided by the competition organizers. The results show that (1) SSL speech models can extract meaningful features from a wide range of non-speech audio, although they may also fail on certain types of datasets; and (2) different SSL speech models capture different aspects of audio features. These two conclusions provide a foundation for ensembling representation models. We further propose an ensemble framework to fuse the embeddings of speech representation models. Our framework outperforms state-of-the-art SSL speech/audio models and achieves generally superior performance across the datasets compared with other teams in the HEAR Challenge. Our code is available at https://github.com/tony10101105/HEAR-2021-NeurIPS-Challenge -- NTU-GURA.
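
As a minimal illustration of the embedding-fusion idea, the snippet below concatenates per-clip embeddings that are assumed to have already been extracted from wav2vec 2.0 and HuBERT, then trains a simple downstream classifier, roughly the shallow-probe protocol used in HEAR-style evaluations. The fusion strategy and classifier in the authors' framework may differ; their repository contains the actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hedged sketch: late fusion by concatenating clip-level embeddings from two
# SSL speech models. The embeddings here are random stand-ins; in practice
# they would be mean-pooled hidden states from wav2vec 2.0 and HuBERT.
rng = np.random.default_rng(0)
n_clips, n_classes = 200, 5
emb_wav2vec = rng.standard_normal((n_clips, 768))   # assumed wav2vec 2.0 features
emb_hubert = rng.standard_normal((n_clips, 768))    # assumed HuBERT features
labels = rng.integers(0, n_classes, n_clips)

fused = np.concatenate([emb_wav2vec, emb_hubert], axis=1)

# Shallow probe on top of the fused representation.
clf = LogisticRegression(max_iter=2000)
print(cross_val_score(clf, fused, labels, cv=5).mean())
```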

A Comprehensive Review of Trends, Applications and Challenges In Out-of-Distribution Detection. (arXiv:2209.12935v1 [cs.LG]) arxiv.org/abs/2209.12935

With recent advancements in artificial intelligence, its applications can be seen in every aspect of daily life. From voice assistants to mobile healthcare and autonomous driving, we rely on the performance of AI methods for many critical tasks; therefore, it is essential to assess the performance of models by proper means in order to prevent damage. One of the shortfalls of AI models in general, and of deep machine learning in particular, is a drop in performance when faced with shifts in the distribution of data. Such shifts are nonetheless to be expected in real-world applications; thus, a field of study has emerged that focuses on detecting out-of-distribution data subsets and enabling more comprehensive generalization. Furthermore, as many deep-learning-based models have achieved near-perfect results on benchmark datasets, the need to evaluate these models' reliability and trustworthiness before pushing them towards real-world applications is felt more strongly than ever. This has given rise to a growing number of studies in the fields of out-of-distribution detection and domain generalization, which in turn calls for surveys that compare these studies from various perspectives and highlight their strengths and weaknesses. This paper presents a survey that, in addition to reviewing more than 70 papers in this field, presents challenges and directions for future work and offers a unifying look at the various types of data shift and at solutions for better generalization.
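
Among the many detection approaches such a survey covers, the simplest widely used baseline is the maximum softmax probability score: in-distribution inputs tend to receive more confident predictions than out-of-distribution ones, so thresholding the top softmax value gives a detector. The sketch below shows only that classic baseline on assumed logits; it is not drawn from the paper under review.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability; higher means 'more in-distribution'."""
    return softmax(logits).max(axis=1)

def detect_ood(logits, threshold=0.7):
    """Flag inputs whose confidence falls below the threshold as OOD."""
    return msp_score(logits) < threshold

# Toy logits: confident in-distribution samples vs. flatter OOD samples.
rng = np.random.default_rng(0)
in_dist = rng.normal(0, 1, (5, 10))
in_dist[:, 0] += 6.0            # one dominant class logit -> confident prediction
ood = rng.normal(0, 1, (5, 10))  # no dominant class -> low confidence
print(detect_ood(in_dist))       # mostly False (kept as in-distribution)
print(detect_ood(ood))           # mostly True  (flagged as OOD)
```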
