Task-Based Assessment for Neural Networks: Evaluating Undersampled MRI Reconstructions based on Human Observer Signal Detection. (arXiv:2210.12161v1 [eess.IV]) arxiv.org/abs/2210.12161

Recent research has explored using neural networks to reconstruct undersampled magnetic resonance imaging (MRI) data. Because of the complexity of the artifacts in the reconstructed images, there is a need to develop task-based measures of image quality. Common metrics for evaluating image quality, such as the normalized root mean squared error (NRMSE) and structural similarity (SSIM), are global metrics which average out the impact of subtle features in the images. Measures of image quality which incorporate a subtle signal for a specific task allow image quality assessment that locally evaluates the effect of undersampling on that signal. We used a U-Net to reconstruct images undersampled at 2x, 3x, 4x, and 5x 1-D undersampling rates. Cross-validation was performed for 500- and 4000-image training sets with both structural similarity (SSIM) and mean squared error (MSE) losses. A two-alternative forced choice (2-AFC) observer study was carried out for detecting a subtle signal (a small blurred disk) in images reconstructed with the 4000-image training set. We found that for both loss functions and training set sizes, human observer performance on the 2-AFC studies led to a choice of 2x undersampling, whereas SSIM and NRMSE led to a choice of 3x undersampling. For this task, SSIM and NRMSE overestimated the undersampling achievable with a U-Net before a steep loss of image quality, when compared to the performance of human observers in detecting a subtle lesion.
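
The contrast between global metrics and a detection task can be made concrete with a short sketch. The snippet below is not the paper's code: it computes NRMSE and SSIM with scikit-image on stand-in images and estimates 2-AFC percent correct from hypothetical observer scores.

```python
# Minimal sketch (not the paper's code): global NRMSE/SSIM vs. a simple
# 2-AFC percent-correct estimate for a subtle-disk detection task.
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse

rng = np.random.default_rng(0)
reference = rng.normal(size=(128, 128))                          # stand-in for a fully sampled image
reconstruction = reference + 0.1 * rng.normal(size=(128, 128))   # stand-in for a U-Net output

print("NRMSE:", normalized_root_mse(reference, reconstruction))
print("SSIM :", structural_similarity(reference, reconstruction,
                                       data_range=reference.ptp()))

def two_afc_percent_correct(signal_scores, noise_scores):
    """Fraction of paired trials where the signal-present image scores higher."""
    return np.mean(np.asarray(signal_scores) > np.asarray(noise_scores))

# Hypothetical observer scores for paired signal-present / signal-absent trials.
pc = two_afc_percent_correct(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200))
print("2-AFC percent correct:", pc)
```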

Equivalence Checking of Parameterized Quantum Circuits: Verifying the Compilation of Variational Quantum Algorithms. (arXiv:2210.12166v1 [quant-ph]) arxiv.org/abs/2210.12166

Variational quantum algorithms have been introduced as a promising class of quantum-classical hybrid algorithms that can already be used with the noisy quantum computing hardware available today by employing parameterized quantum circuits. Considering the non-trivial nature of quantum circuit compilation and the subtleties of quantum computing, it is essential to verify that these parameterized circuits have been compiled correctly. Established equivalence checking procedures that handle parameter-free circuits already exist; however, no methodology capable of handling circuits with parameters has been proposed yet. This work fills this gap by showing that verifying the equivalence of parameterized circuits can be achieved in a purely symbolic fashion using an equivalence checking approach based on the ZX-calculus. At the same time, proofs of non-equivalence can be efficiently obtained with conventional methods by taking advantage of the degrees of freedom inherent to parameterized circuits. We implemented the corresponding methods and proved that the resulting methodology is complete. Experimental evaluations (using the entire parametric ansatz circuit library provided by Qiskit as benchmarks) demonstrate the efficacy of the proposed approach. The implementation is open source and publicly available as part of the equivalence checking tool QCEC (https://github.com/cda-tum/qcec), which is part of the Munich Quantum Toolkit (MQT).
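
To illustrate what a purely symbolic check means in practice, here is a minimal sketch using SymPy on a one-qubit parameterized rotation. It is not the ZX-calculus procedure or the QCEC implementation described in the abstract; the gate, the "compiled" variant and the matrix-level comparison are all assumptions made for illustration.

```python
# Illustrative only: a symbolic equivalence check on a tiny parameterized
# circuit via matrix algebra (the paper's approach is ZX-calculus-based).
import sympy as sp

theta = sp.symbols("theta", real=True)

def rz(angle):
    """Single-qubit Rz(angle) gate as a symbolic 2x2 matrix."""
    return sp.Matrix([[sp.exp(-sp.I * angle / 2), 0],
                      [0, sp.exp(sp.I * angle / 2)]])

original = rz(theta)
# A hypothetical "compiled" version: the rotation split into two halves.
compiled = rz(theta / 2) * rz(theta / 2)

# Equivalent for all parameter values iff the difference simplifies to zero.
difference = (original - compiled).applyfunc(sp.simplify)
print("Equivalent for all theta:", difference == sp.zeros(2, 2))
```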

On the economic viability of solar energy when upgrading cellular networks. (arXiv:2210.11475v1 [cs.CE]) arxiv.org/abs/2210.11475

The massive increase in data traffic, the widespread proliferation of wireless applications, and the full-scale deployment of 5G and the IoT imply a steep increase in the energy use of cellular networks, resulting in a significant carbon footprint. This paper presents a comprehensive model that captures the interaction between the networking and energy aspects of the problem and studies the economic and technical viability of green networking. Solar equipment, cell zooming, energy management and dynamic user allocation are considered in the network upgrade planning process. We propose a mixed-integer optimization model to minimize long-term capital costs and operational energy expenditures in a heterogeneous on-grid cellular network with different types of base stations, including solar-powered ones. Based on eight scenarios using realistic costs for solar panels, batteries and inverters, we first found that solar base stations are currently not economically attractive for cellular operators. We then studied the impact of a significant and progressive carbon tax on reducing greenhouse gas (GHG) emissions. We found that, at current energy and equipment prices, a carbon tax ten times the current value is the only element that could make green base stations economically viable.
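
As a rough illustration of the kind of mixed-integer decision involved, the sketch below uses PuLP to choose between installing solar equipment and buying grid energy (with a carbon tax) at a few hypothetical base stations. The costs, loads and yields are invented and the model is far simpler than the paper's.

```python
# A deliberately small sketch of a solar-vs-grid upgrade decision with a
# carbon tax. All numbers are hypothetical; this is not the paper's model.
import pulp

stations = ["macro", "micro1", "micro2"]
annual_load_kwh = {"macro": 12000, "micro1": 4000, "micro2": 4000}
solar_capex = {"macro": 9000, "micro1": 4000, "micro2": 4000}        # amortized, hypothetical
solar_yield_kwh = {"macro": 9000, "micro1": 3500, "micro2": 3500}
grid_price = 0.15        # $/kWh, hypothetical
carbon_tax = 0.05        # $/kWh of grid energy, hypothetical

prob = pulp.LpProblem("green_upgrade", pulp.LpMinimize)
install = {s: pulp.LpVariable(f"install_{s}", cat="Binary") for s in stations}
grid = {s: pulp.LpVariable(f"grid_{s}", lowBound=0) for s in stations}

# Objective: amortized solar capex + grid energy cost + carbon tax.
prob += pulp.lpSum(solar_capex[s] * install[s]
                   + (grid_price + carbon_tax) * grid[s] for s in stations)

# Each station's demand is met by solar (if installed) plus grid energy.
for s in stations:
    prob += solar_yield_kwh[s] * install[s] + grid[s] >= annual_load_kwh[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in stations:
    print(s, "solar:", int(install[s].value()), "grid kWh:", grid[s].value())
```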

Encoding nonlinear and unsteady aerodynamics of limit cycle oscillations using nonlinear sparse Bayesian learning. (arXiv:2210.11476v1 [cs.CE]) arxiv.org/abs/2210.11476

This paper investigates the applicability of a recently proposed nonlinear sparse Bayesian learning (NSBL) algorithm to identify and estimate the complex aerodynamics of limit cycle oscillations. NSBL provides a semi-analytical framework for determining the data-optimal sparse model nested within a (potentially) over-parameterized model. This is particularly relevant to nonlinear dynamical systems where modelling approaches combine physics-based and data-driven components. In such cases, the data-driven components, where analytical descriptions of the physical processes are not readily available, are often prone to overfitting, meaning that the empirical aspects of these models will often involve the calibration of an unnecessarily large number of parameters. While it may be possible to fit the data well, this becomes an issue when using these models for predictions in regimes different from those where the data were recorded. In view of this, it is desirable not only to calibrate the model parameters, but also to identify the optimal compromise between data fit and model complexity. In this paper, this is achieved for an aeroelastic system in which the structural dynamics are well-known and described by a differential equation model, coupled with a semi-empirical aerodynamic model for laminar separation flutter resulting in low-amplitude limit cycle oscillations. To illustrate the benefit of the algorithm, we use synthetic data to demonstrate its ability to correctly identify the optimal model and model parameters, given a known data-generating model. The synthetic data are generated from a forward simulation of a known differential equation model with parameters selected so as to mimic the dynamics observed in wind-tunnel experiments.
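
The "synthetic data from a known differential equation" step can be sketched as follows, with a van der Pol oscillator standing in for the aeroelastic model; the actual data-generating equations and the NSBL algorithm itself are not reproduced here.

```python
# Sketch of the synthetic-data step only: a van der Pol oscillator stands in
# for the aeroelastic limit-cycle system (assumption for illustration).
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.5):
    """Self-excited oscillator that settles onto a limit cycle."""
    x, v = state
    return [v, mu * (1.0 - x**2) * v - x]

t_eval = np.linspace(0.0, 50.0, 2000)
sol = solve_ivp(van_der_pol, (0.0, 50.0), [0.1, 0.0], t_eval=t_eval)

rng = np.random.default_rng(1)
noisy_displacement = sol.y[0] + 0.05 * rng.normal(size=sol.y[0].size)
# `noisy_displacement` would then be fed to the sparse Bayesian learning
# stage to recover the data-optimal nested model.
print(noisy_displacement[:5])
```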

Multiscale Topology Optimization Considering Local and Global Buckling Response. (arXiv:2210.11477v1 [cs.CE]) arxiv.org/abs/2210.11477

Much work has been done in multiscale topology optimization for maximum stiffness or minimum compliance design. Such approaches date back to the original homogenization-based work by Bendsøe and Kikuchi from 1988, which has lately been revived due to advances in manufacturing methods such as additive manufacturing. Orthotropic microstructures locally oriented in the principal stress directions provide highly efficient stiffness-optimal designs, whereas for the pure stiffness objective, porous isotropic microstructures are sub-optimal and hence not useful. It has, however, been postulated and exemplified that isotropic microstructures (infill) may enhance structural buckling stability, but this has yet to be directly proven and optimized. In this work, we optimize the buckling stability of multiscale structures with isotropic porous infill. To do this, we establish local density-dependent Willam-Warnke yield surfaces based on buckling estimates from Bloch-Floquet-based cell analysis to predict local instability of the homogenized materials. These local buckling-based stress constraints are combined with a global buckling criterion to obtain topology-optimized designs that take both local and global buckling stability into account. De-homogenized structures with small and large cell sizes confirm the validity of the approach and demonstrate huge structural gains as well as time savings compared to standard single-scale approaches.
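
For orientation, a global linearized buckling check amounts to a generalized eigenvalue problem of the form K φ = λ (−K_G) φ. The toy sketch below shows only that eigenproblem on a made-up 3-DOF system; the multiscale formulation, Bloch-Floquet cell analyses and Willam-Warnke constraints are not represented.

```python
# Toy illustration of a global (linearized) buckling check on a made-up
# 3-DOF system; not the paper's multiscale formulation.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 4.0, -1.0,  0.0],      # elastic stiffness (hypothetical)
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])
KG = -np.array([[0.5, 0.0, 0.0],       # geometric stiffness under a unit reference load
                [0.0, 0.4, 0.0],
                [0.0, 0.0, 0.3]])

# Generalized symmetric eigenproblem K*phi = lambda*(-KG)*phi; the smallest
# eigenvalue is the critical load factor for this reference load.
load_factors = eigh(K, -KG, eigvals_only=True)
print("critical buckling load factor:", load_factors.min())
```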

Neural Co-Processors for Restoring Brain Function: Results from a Cortical Model of Grasping. (arXiv:2210.11478v1 [q-bio.NC]) arxiv.org/abs/2210.11478

Objective: A major challenge in closed-loop brain-computer interfaces (BCIs) is finding optimal stimulation patterns as a function of ongoing neural activity for different subjects and objectives. Traditional approaches, such as those currently used for deep brain stimulation, have largely followed a trial-and-error strategy to search for effective open-loop stimulation parameters, a strategy that is inefficient and does not generalize to closed-loop activity-dependent stimulation. Approach: To achieve goal-directed closed-loop neurostimulation, we propose the use of brain co-processors, devices which exploit artificial intelligence (AI) to shape neural activity and bridge injured neural circuits for targeted repair and rehabilitation. Here we investigate a specific type of co-processor called a "neural co-processor", which uses artificial neural networks (ANNs) to learn optimal closed-loop stimulation policies. The co-processor adapts the stimulation policy as the biological circuit itself adapts to the stimulation, achieving a form of brain-device co-adaptation. We tested the neural co-processor's ability to restore function after stroke by simulating a variety of lesions in a previously published cortical model of grasping. Main results: Our results show that a neural co-processor can restore reaching and grasping function after a simulated stroke in a cortical model, achieving 75-90% recovery towards healthy function. Significance: This is the first proof-of-concept demonstration, using computer simulations, of a neural co-processor for activity-dependent closed-loop neurostimulation that optimizes a rehabilitation goal after injury. Our results provide insights into how such co-processors may eventually be developed for in vivo use to learn complex adaptive stimulation policies for a variety of neural rehabilitation and neuroprosthetic applications.
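
A minimal, hypothetical stand-in for the co-processor idea is an artificial neural network mapping recorded activity to stimulation commands, as sketched below in PyTorch; the paper's architecture, training signal and cortical grasping model are not reproduced.

```python
# Hypothetical stand-in for a "neural co-processor": an MLP that maps
# recorded neural activity to a bounded stimulation pattern.
import torch
import torch.nn as nn

class CoProcessor(nn.Module):
    def __init__(self, n_recorded=64, n_stim_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_recorded, 128), nn.ReLU(),
            nn.Linear(128, n_stim_channels), nn.Tanh(),  # bounded stimulation amplitudes
        )

    def forward(self, recorded_activity):
        return self.net(recorded_activity)

coproc = CoProcessor()
recorded = torch.randn(1, 64)       # one time-step of recorded activity
stimulation = coproc(recorded)      # closed-loop stimulation command
print(stimulation.shape)            # torch.Size([1, 16])
```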

Exploitation of material consolidation trade-offs in multi-tier complex supply networks. (arXiv:2210.11479v1 [cs.CE]) arxiv.org/abs/2210.11479

While consolidation strategies form the backbone of many supply chain optimisation problems, the exploitation of multi-tier material relationships through consolidation remains an understudied area, despite being a prominent feature of industries that produce complex made-to-order products. In this paper, we propose an optimisation framework for exploiting the multi-to-multi relationships between tiers of a supply chain. The resulting formulation is flexible, so that quantity discounts, inventory holding costs and transport costs can be included. The framework introduces a new trade-off between the tiers, in which cost reductions at one tier come at the expense of increased costs at the other, which helps to reduce the overall procurement cost in the supply chain. A mixed-integer linear programming model is developed and tested on a range of small to large-scale test problems from aerospace manufacturing. Our comparison to benchmark results shows that there is indeed a cost trade-off between the two tiers, and that overall cost reductions can be achieved using a holistic approach to reconfiguration. Costs decrease when second-tier fixed ordering costs and the number of machining options increase. Consolidation results in lower inventory holding costs in all cases. A number of secondary effects, such as simplified supplier selection, may also be observed.
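
The core trade-off can be caricatured in a tiny PuLP model: consolidating two second-tier parts onto one (larger, dearer) stock item saves a fixed ordering cost at the price of extra material. All figures are hypothetical and this is not the paper's formulation.

```python
# Toy consolidation decision: serve two parts from separate stock sizes or
# from one consolidated (larger) stock. Numbers are hypothetical.
import pulp

fixed_order_cost = 400.0
unit_cost = {"small": 10.0, "large": 14.0}   # consolidated stock is larger and dearer
demand = {"partA": 30, "partB": 20}          # both parts can be machined from "large" stock

separate_cost = 2 * fixed_order_cost + unit_cost["small"] * (demand["partA"] + demand["partB"])
consolidated_cost = fixed_order_cost + unit_cost["large"] * (demand["partA"] + demand["partB"])

prob = pulp.LpProblem("consolidation", pulp.LpMinimize)
consolidate = pulp.LpVariable("consolidate", cat="Binary")

# Objective: pick whichever sourcing mode is cheaper overall.
prob += consolidate * consolidated_cost + (1 - consolidate) * separate_cost
prob += consolidate <= 1   # redundant bound, kept so the model has at least one constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("consolidate:", int(consolidate.value()),
      "total cost:", pulp.value(prob.objective))
```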

A Methodology for the Prediction of Drug Target Interaction using CDK Descriptors. (arXiv:2210.11482v1 [q-bio.QM]) arxiv.org/abs/2210.11482

Detecting probable Drug Target Interactions (DTIs) is a critical task in drug discovery. Conventional DTI studies are expensive, labor-intensive and time-consuming, so there is a strong incentive to develop computational techniques that can successfully predict possible DTIs. Although several methods have been developed for this purpose, numerous interactions are yet to be discovered and prediction accuracy is still low. To meet these challenges, we propose a DTI prediction model built on the molecular structures of drugs and the sequences of target proteins. In the proposed model, we use the Simplified Molecular Input Line Entry System (SMILES) representation of drugs to create CDK descriptors, Molecular ACCess System (MACCS) fingerprints and Electrotopological State (Estate) fingerprints, and the amino acid sequences of targets to compute the Pseudo Amino Acid Composition (PseAAC). We aim to evaluate the performance of DTI prediction models built on CDK descriptors. For comparison, we use benchmark data and evaluate model performance with two widely used fingerprints, MACCS and Estate. The evaluation shows that CDK descriptors are superior for predicting DTIs. The proposed method also significantly outperforms other previously published techniques.
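
The drug-side featurization mentioned above can be sketched with RDKit, which converts a SMILES string into a MACCS fingerprint. This covers only one of the feature sets; CDK descriptors, Estate fingerprints, PseAAC and the downstream classifier are not shown.

```python
# Sketch of the drug-side featurization only: SMILES -> MACCS fingerprint
# with RDKit (the example molecule is arbitrary, not from the paper's data).
from rdkit import Chem
from rdkit.Chem import MACCSkeys

smiles = "CC(=O)Oc1ccccc1C(=O)O"           # aspirin, used here only as an example
mol = Chem.MolFromSmiles(smiles)
fingerprint = MACCSkeys.GenMACCSKeys(mol)  # 167-bit MACCS key vector
print(fingerprint.GetNumBits(), list(fingerprint.GetOnBits())[:10])
```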

Optical Networking in Future-land: From Optical-bypass-enabled to Optical-processing-enabled Paradigm. (arXiv:2210.11496v1 [cs.NI]) arxiv.org/abs/2210.11496

Conventional wisdom in designing optical switching nodes is rooted in the intuition that when an optical channel crosses an intermediate node, it should be maximally isolated from other optical channels to avoid interference. This long-established paradigm, which perceives the interference of optical channels transiting the same node as an adversarial factor to be circumvented, albeit reasonable, may leave vast opportunities unexplored. Indeed, rapid advances in all-optical signal processing technologies have brought opportunities to re-define the optical node architecture by upgrading its functionality from simple add/drop and cross-connection to proactively mixing optical channels in the photonic domain. Specifically, all-optical channel (de-)aggregation technologies have advanced remarkably in recent years, permitting two or more optical channels of lower bit-rates and/or modulation formats to be all-optically aggregated into a single channel of higher rate and/or higher-order modulation format, and vice versa. Such an evolutionary technique is poised to disrupt the existing ecosystem for optical network design and planning, and thus necessitates a radical change to unlock new potential. In addressing this disruptive idea, we present a new paradigm for future optical networks, namely optical-processing-enabled networks powered by in-network all-optical mixing. We introduce the operational principle of optical channel (de-)aggregation and show, through an illustrative example, how spectrally beneficial such operations can be. Next, in order to maximize the aggregation opportunities, we present a mathematical model for optimal routing based on integer linear programming. Numerical results on the realistic network topology COST239 quantify the spectral gain of aggregation-aware routing compared to conventional routing.
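
A back-of-the-envelope example of the spectral benefit: two QPSK channels aggregated into one 16-QAM channel at the same symbol rate carry the same payload in roughly half the spectrum on shared links. The figures below are illustrative, not taken from the paper.

```python
# Illustrative spectrum comparison for channel aggregation; the symbol rate
# and slot width are assumed, not from the paper.
symbol_rate_gbaud = 32
slot_width_ghz = 50

qpsk_bits_per_symbol, qam16_bits_per_symbol = 2, 4
payload_gbps = 2 * symbol_rate_gbaud * qpsk_bits_per_symbol   # two QPSK channels

spectrum_without_aggregation = 2 * slot_width_ghz             # two separate slots
spectrum_with_aggregation = 1 * slot_width_ghz                # one 16-QAM slot
assert symbol_rate_gbaud * qam16_bits_per_symbol == payload_gbps

print(f"{payload_gbps} Gb/s: {spectrum_without_aggregation} GHz -> "
      f"{spectrum_with_aggregation} GHz on shared links")
```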

An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty. (arXiv:2210.10780v1 [cs.LG]) arxiv.org/abs/2210.10780

Neural networks have revolutionized the field of machine learning with increased predictive capability. In addition to improving predictions, there is a simultaneous demand for reliable uncertainty quantification of estimates made by machine learning methods such as neural networks. Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty. This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated. On an example dataset of images, where the goal is to identify the amplitude of an event in each image, it is shown that epistemic uncertainty tends to be lower for images that are well represented in the training dataset and higher for images that are not. An algorithm for out-of-distribution (OoD) detection using BNN epistemic uncertainty is introduced, along with experiments demonstrating the factors that influence OoD detection capability in a BNN. The OoD detection capability with epistemic uncertainty is shown to be comparable to OoD detection in the discriminator network of a generative adversarial network (GAN) with a comparable architecture.
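
The uncertainty split used for OoD flagging can be sketched as follows: given Monte Carlo samples from a BNN posterior, epistemic uncertainty is the spread of per-sample predictive means and aleatoric uncertainty is the average per-sample predictive variance. The numbers and threshold below are hypothetical, not the paper's.

```python
# Sketch of epistemic/aleatoric decomposition from posterior samples of a
# BNN for one input; sample values and the OoD threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# Shape: (n_posterior_samples,) predictive mean and variance for one input.
pred_means = rng.normal(5.0, 0.8, size=50)     # disagreement across weight samples
pred_vars = rng.uniform(0.1, 0.2, size=50)     # per-sample noise estimate

epistemic = pred_means.var()                   # variance of the means
aleatoric = pred_vars.mean()                   # mean of the variances
print("epistemic:", epistemic, "aleatoric:", aleatoric)

OOD_THRESHOLD = 0.5                            # tuned on held-out in-distribution data
print("flag as out-of-distribution:", epistemic > OOD_THRESHOLD)
```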

Self-learning locally-optimal hypertuning using maximum entropy, and comparison of machine learning approaches for estimating fatigue life in composite materials. (arXiv:2210.10783v1 [cs.LG]) arxiv.org/abs/2210.10783

Applications of Structural Health Monitoring (SHM) combined with Machine Learning (ML) techniques enhance real-time performance tracking and increase structural integrity awareness of civil, aerospace and automotive infrastructure. This SHM-ML synergy has gained popularity in recent years thanks to the predictive maintenance enabled by emerging ML algorithms and their ability to handle large quantities of data and account for their influence on the problem. In this paper we develop a novel nearest-neighbors-like ML algorithm based on the principle of maximum entropy to predict fatigue damage (the Palmgren-Miner index) in composite materials by processing Lamb-wave signals -- a non-destructive SHM technique -- together with other meaningful features such as layup parameters and stiffness matrices calculated from Classical Laminate Theory (CLT). The full data analysis cycle is applied to a dataset of delamination experiments in composites. The predictions achieve a level of accuracy similar to that of other ML algorithms, e.g. neural networks or gradient-boosted trees, and computation times are of the same order of magnitude. The key advantages of our proposal are: (1) the automatic determination of all the parameters involved in the prediction, so no hyperparameters have to be set beforehand, which saves the time devoted to hypertuning the model and is an advantage for autonomous, self-supervised SHM; and (2) no training is required, which, in an online-learning context where streams of data are fed continuously to the model, avoids repeated training -- essential for reliable real-time, continuous monitoring.
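
A training-free, locally weighted predictor in the same spirit can be sketched with softmax/Gibbs weights over neighbor distances, as below. This is a stand-in illustration, not the authors' maximum-entropy algorithm or its automatic parameter selection.

```python
# Stand-in for a training-free, locally weighted predictor (exponentially
# distance-weighted average); features and targets below are synthetic.
import numpy as np

def local_maxent_like_predict(x_query, X, y, beta=1.0):
    """Predict y at x_query as an exponentially distance-weighted average."""
    distances = np.linalg.norm(X - x_query, axis=1)
    weights = np.exp(-beta * distances)
    weights /= weights.sum()
    return float(weights @ y)

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 4))      # stand-in features (layup, stiffness, Lamb-wave stats)
y = X.sum(axis=1) + 0.05 * rng.normal(size=100)   # stand-in Palmgren-Miner index
print(local_maxent_like_predict(rng.uniform(size=4), X, y))
```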
