A statistical framework for GWAS of high dimensional phenotypes using summary statistics, with application to metabolite GWAS. (arXiv:2303.10221v1 [stat.ME]) arxiv.org/abs/2303.10221

The recent explosion of genetic and high dimensional biobank and 'omic' data has given researchers the opportunity to investigate the shared genetic origin (pleiotropy) of hundreds to thousands of related phenotypes. However, existing methods for multi-phenotype genome-wide association studies (GWAS) do not model pleiotropy, are applicable to only a small number of phenotypes, or provide no way to perform inference. To complicate matters further, raw genetic and phenotype data are rarely observed, meaning analyses must be performed on GWAS summary statistics, whose statistical properties in high dimensions are poorly understood. We therefore developed a novel model, theoretical framework, and set of methods to perform Bayesian inference in GWAS of high dimensional phenotypes using summary statistics that explicitly model pleiotropy, enable fast computation, and facilitate the use of biologically informed priors. We demonstrate the utility of our procedure by applying it to metabolite GWAS, where we develop new nonparametric priors for genetic effects on metabolite levels that use known metabolic pathway information and foster interpretable inference at the pathway level.
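
As a rough illustrative sketch of the kind of input such a framework consumes (not the authors' model), the snippet below simulates one SNP with pleiotropic effects shared within a hypothetical pathway and reduces the individual-level data to marginal GWAS summary statistics (effect estimates, standard errors, z-scores). All sizes, parameter values, and variable names are arbitrary choices of mine.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 1000, 50                      # individuals, phenotypes (e.g. metabolites)

    # One standardized SNP genotype and pleiotropic effects shared within a
    # hypothetical "pathway" consisting of the first 10 phenotypes.
    g = rng.binomial(2, 0.3, size=n).astype(float)
    g = (g - g.mean()) / g.std()
    beta = np.zeros(p)
    beta[:10] = rng.normal(0.0, 0.1, size=10)
    Y = np.outer(g, beta) + rng.normal(size=(n, p))

    # Marginal GWAS summary statistics per phenotype (effect estimate, standard
    # error, z-score) -- typically the only data available to the analyst.
    beta_hat = g @ Y / (g @ g)
    resid = Y - np.outer(g, beta_hat)
    se = np.sqrt(resid.var(axis=0, ddof=2) / (g @ g))
    z = beta_hat / se
    print(np.round(z[:5], 2), np.round(z[-5:], 2))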

Differentiable Rendering for 3D Fluorescence Microscopy. (arXiv:2303.10440v1 [physics.bio-ph]) arxiv.org/abs/2303.10440

Differentiable rendering is a growing field at the heart of many recent advances in solving inverse graphics problems, such as the reconstruction of 3D scenes from 2D images. By making the rendering process differentiable, one can efficiently compute gradients of the output image with respect to the different scene parameters using automatic differentiation. Interested in the potential of such methods for the analysis of fluorescence microscopy images, we introduce deltaMic, a microscopy renderer that can generate a 3D fluorescence microscopy image from a 3D scene in a fully differentiable manner. By convolving the meshes in the scene with the point spread function (PSF) of the microscope, which characterizes the response of its imaging system to a point source, we emulate the 3D image formation process of fluorescence microscopy. This is achieved by computing the Fourier transform (FT) of the mesh and performing the convolution in the Fourier domain. A naive implementation of this mesh FT is, however, slow, inefficient, and sensitive to numerical precision. We overcome these difficulties with a memory- and computationally efficient, fully differentiable GPU implementation of the 3D mesh FT. We demonstrate the potential of our method by reconstructing complex shapes from artificial microscopy images. Finally, we apply our renderer to real confocal fluorescence microscopy images of embryos to accurately reconstruct the multicellular shapes of these cell aggregates.
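
The paper evaluates the mesh Fourier transform analytically; the sketch below only illustrates the underlying image-formation idea on a voxelized density, convolving it with a Gaussian PSF in the Fourier domain so that the result stays differentiable with respect to the scene (here via PyTorch). The function name, volume size, and PSF width are assumptions of mine, not deltaMic's API.

    import torch

    def render_volume(density: torch.Tensor, psf_sigma: float = 2.0) -> torch.Tensor:
        """Blur a 3D fluorophore-density volume with a Gaussian PSF in Fourier space."""
        freqs = [torch.fft.fftfreq(s) for s in density.shape]
        fz, fy, fx = torch.meshgrid(*freqs, indexing="ij")
        # Optical transfer function: Fourier transform of an isotropic Gaussian PSF
        # (up to normalization), with sigma expressed in voxels.
        otf = torch.exp(-2.0 * (torch.pi * psf_sigma) ** 2 * (fx**2 + fy**2 + fz**2))
        return torch.fft.ifftn(torch.fft.fftn(density) * otf).real

    # Toy scene: a bright cube; gradients flow back to the voxel densities.
    density = torch.zeros(32, 32, 32)
    density[12:20, 12:20, 12:20] = 1.0
    density.requires_grad_(True)
    image = render_volume(density)
    image.sum().backward()
    print(image.shape, density.grad.abs().max())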

A Radiomics-Incorporated Deep Ensemble Learning Model for Multi-Parametric MRI-based Glioma Segmentation. (arXiv:2303.10533v1 [q-bio.QM]) arxiv.org/abs/2303.10533

We developed a deep ensemble learning model with radiomics spatial encoding for improved glioma segmentation accuracy using multi-parametric MRI (mp-MRI). The model was developed using 369 glioma patients with a 4-modality mp-MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. In each modality volume, a 3D sliding kernel was applied across the brain to capture image heterogeneity: fifty-six radiomic features were extracted within the kernel, resulting in a 4th-order tensor. Each radiomic feature can then be encoded as a 3D image volume, namely a radiomic feature map (RFM). PCA was employed for dimensionality reduction and the first 4 PCs were selected. Four deep neural networks following the U-Net architecture were trained as sub-models to segment a region of interest (ROI): each sub-model uses the mp-MRI and 1 of the 4 PCs as a 5-channel input for a 2D execution. The 4 softmax probability maps given by the U-Net ensemble were superimposed and binarized by Otsu's method to obtain the segmentation result. Three ensemble models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT). The adopted radiomics spatial encoding enriches the image heterogeneity information available to the networks, which underlies the successful performance of the proposed deep ensemble model and offers a new tool for mp-MRI-based medical image segmentation.
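
A minimal sketch of the fusion step described above, with random arrays standing in for the four sub-models' softmax probability maps: the maps are superimposed by averaging and binarized with Otsu's method (via scikit-image). Array shapes are placeholders of mine.

    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(1)

    # Stand-ins for the foreground softmax probability maps of the 4 sub-models
    # (each trained on mp-MRI plus one radiomic principal component).
    prob_maps = rng.random((4, 128, 128))

    fused = prob_maps.mean(axis=0)            # superimpose the 4 probability maps
    mask = fused > threshold_otsu(fused)      # binarize with Otsu's method
    print(mask.shape, mask.mean())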

STGIC: a graph and image convolution-based method for spatial transcriptomic clustering. (arXiv:2303.10657v1 [q-bio.QM]) arxiv.org/abs/2303.10657

Spatial transcriptomic (ST) clustering requires dividing spots into spatial domains, each of which is constituted by continuously distributed spots sharing a similar gene transcription profile. With the adjacency and feature matrices derived respectively from the 2D spatial coordinates and the gene transcription quantities, the problem is amenable to graph networks. Existing graph network methods often employ self-supervision or contrastive learning to construct training objectives, which are not directly related to smoothing the spot embeddings and thus struggle to perform well. Herein, we propose a graph and image convolution-based method (STGIC). It adopts AGC, an existing graph-based method that does not depend on any trainable parameters, to generate pseudo-labels for clustering by our dilated convolutional network (CNN)-based frameworks, which are fed with a virtual image likewise constructed from the spatial and transcriptional information of the spots. The pre-defined graph convolution kernel in AGC plays a key role as a low-pass filter for smoothing, which is further encouraged by our training loss demanding embedding similarity between neighboring pixels. Our dilated CNN-based frameworks feature two parallel components with convolution kernel sizes of 3 and 2, respectively, and several constraints are imposed on the convolution kernel weights. These designs ensure that spot information is updated by aggregating only information from neighboring spots, with weights depending on the distance from the kernel center. In addition, self-supervision and contrastive learning are also adopted in STGIC to construct our training objective. Tests on the widely used DLPFC dataset demonstrate that STGIC outperforms state-of-the-art methods.
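
For intuition, the snippet below sketches the kind of parameter-free low-pass graph filter used in AGC, (I + D^{-1/2} A D^{-1/2}) / 2 applied several times to the feature matrix, with a k-nearest-neighbour adjacency built from spot coordinates. The coordinates, features, neighbour count, and filter order are stand-ins of mine; this is not the STGIC code.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    n_spots = 500
    coords = rng.random((n_spots, 2))          # 2D spot coordinates (stand-in)
    X = rng.random((n_spots, 30))              # gene-expression features (stand-in)

    # Adjacency from spatial k-nearest neighbours, symmetrized.
    k = 6
    _, idx = cKDTree(coords).query(coords, k=k + 1)
    A = np.zeros((n_spots, n_spots))
    rows = np.repeat(np.arange(n_spots), k)
    A[rows, idx[:, 1:].ravel()] = 1.0
    A = np.maximum(A, A.T)

    # Parameter-free low-pass filter in the style of AGC:
    # G = (I + D^{-1/2} A D^{-1/2}) / 2, applied t times to smooth the features.
    d = A.sum(axis=1)
    G = 0.5 * (np.eye(n_spots) + A / np.sqrt(np.outer(d, d)))
    X_smooth = np.linalg.matrix_power(G, 4) @ X    # smoothed embeddings for clustering
    print(X_smooth.shape)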

Mobility restrictions in response to local epidemic outbreaks in rock-paper-scissors models. (arXiv:2303.10724v1 [q-bio.PE]) arxiv.org/abs/2303.10724

We study a three-species cyclic model whose organisms are vulnerable to contamination with an infectious disease that propagates person-to-person. We consider that individuals of one species perform an evolutionary self-preservation strategy, reducing their mobility rate to minimise infection risk whenever an epidemic outbreak reaches the neighbourhood. Running stochastic simulations, we quantify the changes in spatial patterns induced by the unevenness introduced into the cyclic game by the mobility restriction strategy of one of the species. Our findings show that variations in disease virulence affect the benefits of this dispersal limitation, with the relative reduction in the organisms' infection risk becoming more pronounced during surges of less contagious or deadlier diseases. The effectiveness of the mobility restriction tactic depends on the tolerable fraction of infected neighbours used as the trigger of the defensive strategy and on the level of deceleration. If each organism promptly reacts to the arrival of the first viral vectors in its surroundings with strict mobility reduction, contamination risk decreases significantly. Our conclusions may help biologists understand the impact of evolutionary defensive strategies in ecosystems during an epidemic.
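
A minimal sketch of the behavioural rule described above: an organism surveys its neighbourhood and, if the fraction of infected neighbours exceeds a tolerance threshold, its mobility rate is scaled down by a deceleration factor. The function name and the numeric values are placeholders of mine, not parameters from the paper.

    def mobility_rate(base_rate, infected_neighbours, total_neighbours,
                      tolerance=0.25, deceleration=0.1):
        """Reduce an organism's mobility when the local infected fraction exceeds
        its tolerance threshold; otherwise keep the baseline mobility."""
        infected_fraction = infected_neighbours / total_neighbours
        return base_rate * deceleration if infected_fraction > tolerance else base_rate

    print(mobility_rate(1.0, 3, 8))   # strategy triggered  -> 0.1
    print(mobility_rate(1.0, 1, 8))   # below tolerance     -> 1.0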

PheME: A deep ensemble framework for improving phenotype prediction from multi-modal data. (arXiv:2303.10794v1 [cs.LG]) arxiv.org/abs/2303.10794

Detailed phenotype information is fundamental to accurate diagnosis and risk estimation of diseases. As a rich source of phenotype information, electronic health records (EHRs) promise to empower diagnostic variant interpretation. However, accurately and efficiently extracting phenotypes from heterogeneous EHR data remains a challenge. In this work, we present PheME, an Ensemble framework using Multi-modality data of structured EHRs and unstructured clinical notes for accurate Phenotype prediction. First, we employ multiple deep neural networks to learn reliable representations from the sparse structured EHR data and the redundant clinical notes; a multi-modal model then aligns the multi-modal features in the same latent space to predict phenotypes. Second, we leverage ensemble learning to combine the outputs of the single-modal and multi-modal models to improve phenotype prediction. We choose seven diseases to evaluate the phenotyping performance of the proposed framework. Experimental results show that using multi-modal data significantly improves phenotype prediction for all seven diseases, and that the proposed ensemble learning framework further boosts performance.
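
A minimal PyTorch sketch of the fusion idea described above (not the PheME code): structured-EHR and clinical-note embeddings are projected onto a shared latent space, and the single-modal and multi-modal predictions are ensembled by averaging. All dimensions, layer choices, and names are assumptions of mine.

    import torch
    import torch.nn as nn

    class MultiModalPhenotyper(nn.Module):
        """Toy fusion model: project each modality to a shared latent space and
        average the single-modal and multi-modal phenotype predictions."""

        def __init__(self, ehr_dim=64, note_dim=128, latent=32, n_phenotypes=7):
            super().__init__()
            self.ehr_proj = nn.Linear(ehr_dim, latent)      # structured-EHR branch
            self.note_proj = nn.Linear(note_dim, latent)    # clinical-note branch
            self.ehr_head = nn.Linear(latent, n_phenotypes)
            self.note_head = nn.Linear(latent, n_phenotypes)
            self.fused_head = nn.Linear(2 * latent, n_phenotypes)

        def forward(self, ehr, notes):
            ze = torch.relu(self.ehr_proj(ehr))
            zn = torch.relu(self.note_proj(notes))
            logits = [self.ehr_head(ze), self.note_head(zn),
                      self.fused_head(torch.cat([ze, zn], dim=-1))]
            return torch.stack(logits).mean(dim=0)   # simple ensemble by averaging

    model = MultiModalPhenotyper()
    print(model(torch.randn(4, 64), torch.randn(4, 128)).shape)   # torch.Size([4, 7])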

Psychotherapy AI Companion with Reinforcement Learning Recommendations and Interpretable Policy Dynamics. (arXiv:2303.09601v1 [cs.LG]) arxiv.org/abs/2303.09601

We introduce a Reinforcement Learning Psychotherapy AI Companion that generates topic recommendations for therapists based on patient responses. The system uses Deep Reinforcement Learning (DRL) to generate multi-objective policies for four different psychiatric conditions: anxiety, depression, schizophrenia, and suicidal cases. We present our experimental results on the accuracy of recommended topics using three different scales of working alliance ratings: task, bond, and goal. We show that the system is able to capture the real data (historical topics discussed by the therapists) relatively well, and that the best performing models vary by disorder and rating scale. To gain interpretable insights into the learned policies, we visualize policy trajectories in a 2D principal component analysis space and transition matrices. These visualizations reveal distinct patterns in the policies trained with different reward signals and trained on different clinical diagnoses. Our system's success in generating DIsorder-Specific Multi-Objective Policies (DISMOP) and interpretable policy dynamics demonstrates the potential of DRL in providing personalized and efficient therapeutic recommendations.
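
A minimal sketch of the interpretability step mentioned above: per-session policy trajectories are projected into a 2D principal component space with scikit-learn. The random arrays and dimensions merely stand in for the policy states recorded during sessions; none of this is the paper's code.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)

    # Stand-in: 5 sessions, 40 dialogue turns each, a 20-dimensional policy state
    # (e.g. topic probabilities) recorded at every turn.
    trajectories = rng.random((5, 40, 20))

    pca = PCA(n_components=2).fit(trajectories.reshape(-1, 20))
    for session in trajectories:
        path_2d = pca.transform(session)   # one 2D policy trajectory per session
        print(path_2d.shape)               # (40, 2)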

Distorted stability pattern and chaotic features for quantized prey-predator-like dynamics. (arXiv:2303.09622v1 [quant-ph]) arxiv.org/abs/2303.09622

Non-equilibrium and instability features of prey-predator-like systems associated with topological quantum domains emerging from a quantum phase-space description are investigated in the framework of Weyl-Wigner quantum mechanics. Starting from the generalized Wigner flow for one-dimensional Hamiltonian systems, $\mathcal{H}(x,\,k)$, constrained by $\partial^2 \mathcal{H} / \partial x \, \partial k = 0$, the prey-predator dynamics driven by the Lotka-Volterra (LV) equations is mapped onto the Heisenberg-Weyl non-commutative algebra, $[x,\,k] = i$, where the canonical variables $x$ and $k$ are related to the two-dimensional LV parameters, $y = e^{-x}$ and $z = e^{-k}$. From the non-Liouvillian pattern driven by the associated Wigner currents, hyperbolic equilibrium and stability parameters for the prey-predator-like dynamics are then shown to be affected by quantum distortions over the classical background, in correspondence with non-stationarity and non-Liouvillianity properties quantified in terms of Wigner currents and Gaussian ensemble parameters. As an extension, under the hypothesis of a discretized time parameter, non-hyperbolic bifurcation regimes are identified and quantified in terms of $z-y$ anisotropy and Gaussian parameters. The bifurcation diagrams exhibit, for quantum regimes, chaotic patterns highly dependent on Gaussian localization. Besides exemplifying a broad range of applications of the generalized Wigner information flow framework, our results extend, from the continuous (hyperbolic regime) to the discrete (chaotic regime) domain, the procedure for quantifying the influence of quantum fluctuations over the equilibrium and stability scenarios of LV-driven systems.
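
For readers who want the classical starting point, the lines below recall the textbook Lotka-Volterra equations together with a separable conserved function in the log variables, consistent with the constraint $\partial^2 \mathcal{H} / \partial x \, \partial k = 0$ quoted above. The rate constants $\alpha, \beta, \gamma, \delta$ are generic names of mine, not the paper's notation.

    \dot{y} = y\,(\alpha - \beta z), \qquad
    \dot{z} = z\,(\delta y - \gamma), \qquad
    y = e^{-x}, \quad z = e^{-k},
    \qquad
    \mathcal{H}(x,\,k) = \delta\, e^{-x} + \gamma\, x + \beta\, e^{-k} + \alpha\, k .

A direct computation gives $d\mathcal{H}/dt = 0$ along the LV trajectories, and since $\mathcal{H}$ is separable in $x$ and $k$, it satisfies $\partial^2 \mathcal{H} / \partial x \, \partial k = 0$.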

Predicting discrete-time bifurcations with deep learning. (arXiv:2303.09669v1 [q-bio.QM]) arxiv.org/abs/2303.09669

Many natural and man-made systems are prone to critical transitions -- abrupt and potentially devastating changes in dynamics. Deep learning classifiers can provide an early warning signal (EWS) for critical transitions by learning generic features of bifurcations (dynamical instabilities) from large simulated training data sets. So far, classifiers have only been trained to predict continuous-time bifurcations, ignoring rich dynamics unique to discrete-time bifurcations. Here, we train a deep learning classifier to provide an EWS for the five local discrete-time bifurcations of codimension-1. We test the classifier on simulation data from discrete-time models used in physiology, economics and ecology, as well as experimental data of spontaneously beating chick-heart aggregates that undergo a period-doubling bifurcation. The classifier outperforms commonly used EWS under a wide range of noise intensities and rates of approach to the bifurcation. It also predicts the correct bifurcation in most cases, with particularly high accuracy for the period-doubling, Neimark-Sacker and fold bifurcations. Deep learning as a tool for bifurcation prediction is still in its nascence and has the potential to transform the way we monitor systems for critical transitions.
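
As a toy illustration of the kind of simulation data involved (not the paper's models), the snippet below iterates a noisy logistic map whose parameter is ramped slowly through the period-doubling bifurcation at r = 3, and computes lag-1 autocorrelation on the pre-transition segment as an example of a commonly used EWS. The map, noise level, and window are choices of mine.

    import numpy as np

    rng = np.random.default_rng(4)

    # Noisy logistic map x_{t+1} = r_t * x_t * (1 - x_t) + noise, with r_t ramped
    # slowly through the period-doubling bifurcation at r = 3.
    steps = 2000
    r = np.linspace(2.5, 3.3, steps)
    x = np.empty(steps)
    x[0] = 0.5
    for t in range(steps - 1):
        x[t + 1] = r[t] * x[t] * (1 - x[t]) + rng.normal(0.0, 0.005)

    # Lag-1 autocorrelation over the pre-transition segment, as an example of a
    # generic EWS that a trained classifier would be compared against.
    pre = x[: steps // 2]
    print(np.corrcoef(pre[:-1], pre[1:])[0, 1])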

Covariance properties under natural image transformations for the generalized Gaussian derivative model for visual receptive fields. (arXiv:2303.09803v1 [q-bio.NC]) arxiv.org/abs/2303.09803

This paper presents a theory for how geometric image transformations can be handled by a first layer of linear receptive fields, in terms of true covariance properties, which, in turn, enable geometric invariance properties at higher levels in the visual hierarchy. Specifically, we develop this theory for a generalized Gaussian derivative model for visual receptive fields, which is derived in an axiomatic manner from first principles that reflect symmetry properties of the environment, complemented by structural assumptions to guarantee an internally consistent treatment of image structures over multiple spatio-temporal scales. It is shown how the studied generalized Gaussian derivative model for visual receptive fields obeys true covariance properties under spatial scaling transformations, spatial affine transformations, Galilean transformations and temporal scaling transformations. This implies that a vision system, based on image and video measurements in terms of the receptive fields according to this model, can to first order of approximation handle the image and video deformations between multiple views of objects delimited by smooth surfaces, as well as between multiple views of spatio-temporal events, under varying relative motions between the objects and events in the world and the observer. We conclude by describing implications of the presented theory for biological vision, regarding connections between the variabilities of the shapes of biological visual receptive fields and the variabilities of spatial and spatio-temporal image structures under natural image transformations.
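
A small numerical illustration (my own toy check, not material from the paper) of spatial scale covariance for first-order Gaussian derivative receptive fields: rescaling the image by a factor s while rescaling the filter scale accordingly, then resampling back and compensating by one factor of s for the single derivative order, approximately reproduces the original response, up to discretization and interpolation error.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(5)
    f = ndimage.gaussian_filter(rng.random((128, 128)), 3.0)   # a smooth test image

    s = 2.0          # spatial scaling factor
    sigma = 4.0      # receptive-field scale on the original image

    # First-order Gaussian derivative response on the original image ...
    Lx = ndimage.gaussian_filter(f, sigma, order=(0, 1))
    # ... and on the spatially rescaled image, with the scale rescaled by s.
    f_scaled = ndimage.zoom(f, s, order=1)
    Lx_scaled = ndimage.gaussian_filter(f_scaled, s * sigma, order=(0, 1))

    # Covariance: sample the rescaled response back on the original grid and
    # compensate by one factor of s; the result should approximately match Lx.
    back = s * ndimage.zoom(Lx_scaled, 1.0 / s, order=1)
    print(np.abs(back - Lx).max(), np.abs(Lx).max())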

Disentangling the Link Between Image Statistics and Human Perception. (arXiv:2303.09874v1 [cs.CV]) arxiv.org/abs/2303.09874

In the 1950s, Horace Barlow and Fred Attneave suggested a connection between sensory systems and how they are adapted to the environment: early vision evolved to maximise the information it conveys about incoming signals. Following Shannon's definition, this information was described using the probability of images taken from natural scenes. Until recently, direct and accurate prediction of image probabilities was not possible due to computational limitations, so the idea could only be explored indirectly, mainly through oversimplified models of the image density or through system design methods; even so, these approaches succeeded in reproducing a wide range of physiological and psychophysical phenomena. In this paper, we directly evaluate the probability of natural images and analyse how it may determine perceptual sensitivity. We employ image quality metrics that correlate well with human opinion as a surrogate of human vision, and an advanced generative model to directly estimate the probability. Specifically, we analyse how the sensitivity of full-reference image quality metrics can be predicted from quantities derived directly from the probability distribution of natural images. First, we compute the mutual information between a wide range of probability surrogates and the sensitivity of the metrics, and find that the most influential factor is the probability of the noisy image. Then we explore how these probability surrogates can be combined in a simple model to predict the metric sensitivity, giving an upper bound of 0.85 for the correlation between the model predictions and the actual perceptual sensitivity. Finally, we explore how to combine the probability surrogates using simple expressions, and obtain two functional forms (using one or two surrogates) that can be used to predict the sensitivity of the human visual system given a particular pair of images.
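
A schematic sketch of the analysis pipeline described above, with deliberately crude stand-ins: a Gaussian log-density plays the role of the probability surrogate, a divisively normalized squared error plays the role of the full-reference metric, and their statistical dependence is estimated with scikit-learn's mutual information estimator. None of these modelling choices come from the paper; they only show the shape of the computation.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(6)
    n_pairs, dim = 500, 64

    images = rng.normal(size=(n_pairs, dim))
    noise = 0.1 * rng.normal(size=(n_pairs, dim))
    noisy = images + noise

    # Stand-in probability surrogate: Gaussian log-density of the noisy "image".
    log_prob_noisy = -0.5 * (noisy ** 2).sum(axis=1)

    # Stand-in metric sensitivity: a divisively normalized squared error, so the
    # response to the same distortion depends on the image content.
    sensitivity = ((noisy - images) ** 2 / (1.0 + images ** 2)).mean(axis=1)

    # How much does the probability surrogate tell us about the metric sensitivity?
    mi = mutual_info_regression(log_prob_noisy.reshape(-1, 1), sensitivity)
    print(float(mi[0]))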
