Coding for the unsourced A-channel with erasures: the linked loop code. (arXiv:2312.02160v1 [cs.IT]) arxiv.org/abs/2312.02160

The A-channel is a noiseless multiple access channel in which users simultaneously transmit Q-ary symbols and the receiver observes the set of transmitted symbols, but not their multiplicities. An A-channel is said to be unsourced if, additionally, users' transmissions are encoded across time using a common codebook and the transmitted messages are decoded without regard to the identities of the active users. An interesting variant of the unsourced A-channel is the unsourced A-channel with erasures (UACE), in which transmitted symbols are erased independently with a given probability. In this paper, we focus on designing a code that enables a list of transmitted codewords to be recovered despite the erasure of some of the transmitted symbols. To this end, we propose the linked-loop code (LLC), which uses parity bits to link each symbol to the previous M symbols in a tail-biting manner, i.e., the first symbols of the transmission are linked to the last ones. The decoding process occurs in two phases: the first phase decodes the codewords that do not suffer from any erasures, and the second phase attempts to recover the erased symbols using the available parities. We compare the performance of the LLC over the UACE with other codes in the literature and argue for the effectiveness of the construction. Our motivation for studying the UACE comes from its relevance to machine-type communication and coded compressed sensing.
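
As a rough illustration of the tail-biting linking idea, here is a minimal toy sketch in Python. It is not the paper's exact parity construction: the section count K, memory M, chunk size B, and the XOR-based mixing matrices G are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, B = 8, 2, 4  # sections, linking memory, bits per section (illustrative)

# One random binary mixing matrix per lag; the real LLC's parity generation
# differs, but the tail-biting linking pattern is the point here.
G = rng.integers(0, 2, size=(M, B, B))

def encode(info):
    """info: (K, B) binary array -> (K, 2B) rows of info||parity chunks."""
    parity = np.zeros_like(info)
    for k in range(K):
        for m in range(1, M + 1):
            # The wrap-around index (k - m) % K links the first sections
            # of the transmission back to the last ones (tail-biting).
            parity[k] ^= (info[(k - m) % K] @ G[m - 1]) % 2
    return np.concatenate([info, parity], axis=1)

cw = encode(rng.integers(0, 2, size=(K, B)))
# Phase 1 of decoding keeps candidate codewords whose parity chains check
# out with no erasures; phase 2 solves the remaining parity constraints
# for the erased sections.
```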

Cooperation Based Joint Active and Passive Sensing with Asynchronous Transceivers for Perceptive Mobile Networks. (arXiv:2312.02163v1 [cs.IT]) arxiv.org/abs/2312.02163

The perceptive mobile network (PMN) is an emerging concept for next-generation wireless networks capable of integrated sensing and communication (ISAC). A major challenge in realizing high-performance sensing in PMNs is dealing with spatially separated, asynchronous transceivers. Asynchronicity results in timing offsets (TOs) and carrier frequency offsets (CFOs), which in turn cause ambiguity in range and velocity sensing. Most existing algorithms mitigate TOs and CFOs based on the line-of-sight (LOS) propagation path between sensing transceivers. However, LOS paths may not exist in realistic scenarios. In this paper, we propose a cooperation-based joint active and passive sensing scheme for non-LOS (NLOS) scenarios with asynchronous transceivers. The scheme relies on the cross-correlation cooperative sensing (CCCS) algorithm, which treats active sensing as a reference and mitigates TOs and CFOs by correlating active and passive sensing information. Another major challenge in realizing high-performance sensing in PMNs is achieving high-accuracy angle-of-arrival (AoA) estimation with low complexity. Accordingly, we propose a low-complexity AoA algorithm based on cooperative sensing, comprising coarse and fine AoA estimation stages. Analytical and simulation results verify the performance advantages of the proposed CCCS algorithm and the low-complexity AoA estimation algorithm.
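
The correlation step can be pictured with a toy numpy sketch. This is illustrative only, not the paper's CCCS algorithm: the signal model and the offset value are made up, and a CFO would analogously appear as a frequency-domain shift rather than the time-domain one shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, true_to = 1024, 37  # samples and a hypothetical timing offset

# The active-sensing echo serves as the reference; the passive capture is
# a delayed, noisy copy of it because both observe the same scene.
ref = rng.standard_normal(N) + 1j * rng.standard_normal(N)
passive = np.roll(ref, true_to) + 0.1 * (rng.standard_normal(N)
                                         + 1j * rng.standard_normal(N))

# Circular cross-correlation via FFT; its peak locates the timing offset.
xcorr = np.fft.ifft(np.fft.fft(passive) * np.conj(np.fft.fft(ref)))
print(int(np.argmax(np.abs(xcorr))))  # 37, matching the simulated offset
```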

Uncertainty Quantification in Machine Learning Based Segmentation: A Post-Hoc Approach for Left Ventricle Volume Estimation in MRI. (arXiv:2312.02167v1 [cs.CV]) arxiv.org/abs/2312.02167

Recent studies confirm that cardiovascular diseases remain responsible for the highest death toll among non-communicable diseases. Accurate left ventricular (LV) volume estimation is critical for valid diagnosis and management of various cardiovascular conditions, but it poses a significant challenge due to the inherent uncertainties of segmentation algorithms in magnetic resonance imaging (MRI). Recent machine learning advances, particularly U-Net-like convolutional networks, have enabled automated segmentation of medical images, but these methods struggle under certain pathologies and across scanner vendors and imaging protocols. This study proposes a novel methodology for post-hoc uncertainty estimation in LV volume prediction, using Itô stochastic differential equations (SDEs) to model the path-wise behavior of the prediction error. The model describes the area of the left ventricle along the heart's long axis. The method is agnostic to the underlying segmentation algorithm, facilitating its use with existing and future segmentation technologies. The proposed approach provides a mechanism for quantifying uncertainty, enabling medical professionals to intervene when predictions are unreliable. This is of utmost importance in critical applications such as medical diagnosis, where prediction accuracy and reliability directly impact patient outcomes. The method is also robust to dataset changes, enabling its application in medical centers with limited access to labeled data. Our findings highlight the potential of the proposed uncertainty estimation methodology to enhance the robustness and generalizability of automated segmentation, paving the way for more reliable and accurate LV volume estimation in clinical settings and opening new avenues for uncertainty quantification in biomedical image segmentation.
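
To make the modelling idea concrete, here is a minimal Euler-Maruyama sketch. The abstract does not specify the paper's drift and diffusion terms, so an Ornstein-Uhlenbeck process stands in as a placeholder Itô SDE; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder Ito SDE for the path-wise prediction error along the long
# axis:  dX = -theta * X dt + sigma dW  (Ornstein-Uhlenbeck, illustrative).
theta, sigma = 2.0, 0.5
n_steps, n_paths, dt = 100, 1000, 0.01

X = np.zeros((n_paths, n_steps + 1))
for i in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    X[:, i + 1] = X[:, i] - theta * X[:, i] * dt + sigma * dW

# Pointwise quantiles of the simulated error paths give an uncertainty
# band that can be propagated into the LV volume estimate.
lo, hi = np.percentile(X, [5, 95], axis=0)
```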

Multiple Reference Signals Collaborative Sensing for Integrated Sensing and Communication System Towards 5G-A and 6G. (arXiv:2312.02170v1 [cs.IT]) arxiv.org/abs/2312.02170

Integrated sensing and communication (ISAC) is considered a key enabling technology for future mobile communication systems, and signal design is fundamental to any ISAC system. The reference signals in mobile communication systems have good detection performance and merit further study. Existing studies apply a single reference signal to radar sensing. In this paper, a multiple-reference-signal collaborative sensing scheme is designed: we jointly apply the channel state information reference signal (CSI-RS), the positioning reference signal (PRS), and the demodulation reference signal (DMRS) to radar sensing, which improves sensing performance by providing continuous time-frequency resource mapping. The Cramér-Rao lower bound (CRLB) of the joint reference signal for distance and velocity estimation is derived, and the impacts of carrier frequency and subcarrier spacing on estimation performance are revealed. Simulation results show that, compared with single-reference-signal sensing, the collaborative scheme effectively improves sensing accuracy. Moreover, because the OFDM symbols are discontinuous, the accuracy of velocity estimation can be further improved via compressed sensing (CS). This paper verifies that multiple reference signals deliver much better radar sensing performance than a single reference signal, offering a practical and efficient approach to ISAC signal design.
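
The mechanism that reference-signal sensing exploits can be sketched with textbook OFDM radar processing on a synthetic channel (illustrative only, not the paper's joint CSI-RS/PRS/DMRS scheme; all numbers are made up): a target's delay and Doppler appear as linear phase ramps across subcarriers and symbols, which FFT processing turns into peaks, with resolutions set by the subcarrier spacing and observation time.

```python
import numpy as np

N, M = 256, 64           # subcarriers, OFDM symbols (illustrative)
df = 30e3                # subcarrier spacing [Hz]
Ts = 1 / df              # symbol duration [s] (cyclic prefix ignored)
tau, fd = 1e-6, 500.0    # hypothetical target delay [s] and Doppler [Hz]

# Channel estimates on the reference-signal grid: the phase ramp over the
# subcarrier index n encodes delay, the ramp over symbol index m Doppler.
n = np.arange(N)[:, None]
m = np.arange(M)[None, :]
H = np.exp(-2j * np.pi * n * df * tau) * np.exp(2j * np.pi * m * Ts * fd)

# IFFT across subcarriers -> delay profile; FFT across symbols -> Doppler.
prof = np.fft.fft(np.fft.ifft(H, axis=0), axis=1)
i, j = np.unravel_index(np.argmax(np.abs(prof)), prof.shape)
print(i / (N * df), j / (M * Ts))  # estimates, quantized to bins of width
                                   # 1/(N*df) and 1/(M*Ts) respectively
```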

LpiCT: A logic security analysis framework for protocols. (arXiv:2312.02171v1 [cs.CR]) arxiv.org/abs/2312.02171

The pi calculus is a basic theory of mobile communication based on the notion of interaction. Aimed at analyzing and modelling the behavior of communicating processes in mobile systems, it is widely applied to the security analysis of cryptographic protocol design and implementation. However, the pi calculus does not provide full logic security analysis, so logic flaws in the design and implementation of a cryptographic protocol cannot be discovered in time. Our aim is to analyze whether there are logic flaws in the design and implementation of a cryptographic protocol, so as to ensure its security when it is encoded into software and deployed. This paper draws on logic rules and proofs, binary trees, and the KMP algorithm, and proposes a new extension of the pi calculus, a logic security analysis framework, and an algorithm. We present a logic security proof and analysis of the interactional implementation process of the TLS 1.3 protocol. Empirical results show that the extended theory, the logic security analysis framework, and the algorithm can effectively detect logic flaws in the design and implementation of a cryptographic protocol. The security of cryptographic protocols depends not only on cryptographic primitives, but also on how the protocols are coded and on the environment in which they are implemented. The security analysis framework for cryptographic protocol implementations proposed in this paper can help ensure the security of protocol implementations.
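
Of the listed ingredients, the KMP string-matching algorithm is the most self-contained; a standard textbook implementation (shown here on its own, without the paper's surrounding framework) is:

```python
def kmp_failure(pattern):
    """Length of the longest proper prefix of pattern[:i+1] that is also
    its suffix, for every i (the KMP failure function)."""
    fail, k = [0] * len(pattern), 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return all start indices of pattern in text in O(len(text)) time."""
    fail, k, hits = kmp_failure(pattern), 0, []
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("ClientHello...ServerHello...Finished", "Hello"))  # [6, 20]
```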

Transport Equation based Physics Informed Neural Network to predict the Yield Strength of Architected Materials. (arXiv:2312.00003v1 [cs.LG]) arxiv.org/abs/2312.00003

In this research, the Physics-Informed Neural Network (PINN) model is applied to solve transport-equation-based partial differential equations (PDEs). The primary objective is to analyze how different activation functions within the PINN model affect its predictive performance, assessed via the mean squared error (MSE) and mean absolute error (MAE). The dataset consists of a varied set of input parameters related to strut diameter and unit cell size, together with the corresponding yield stress values. Through this investigation, we aim to understand the effectiveness of the PINN model and the importance of choosing appropriate activation functions for solving complex PDEs in real-world applications. The outcomes suggest that the choice of activation function may have minimal influence on the model's predictive accuracy for this particular problem. The PINN model shows strong generalization, indicating its capacity to avoid overfitting on the provided dataset. The research underscores the importance of balancing performance against computational efficiency when selecting an activation function for a specific real-world application. These findings contribute to advancing the understanding and adoption of PINNs as an effective tool for solving challenging PDEs in diverse scientific and engineering domains.
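
For readers unfamiliar with the PINN recipe, here is a minimal PyTorch sketch for a 1D linear transport equation $u_t + c\,u_x = 0$. The assumptions are mine: the paper's exact PDE, network architecture, and training data are not reproduced, and tanh is just one of the activation choices such a study compares.

```python
import math
import torch

c = 1.0  # transport speed (illustrative)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(xt):
    """Residual u_t + c*u_x at collocation points xt = [x, t]."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    return u_t + c * u_x

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    xt = torch.rand(256, 2)  # random collocation points in [0, 1]^2
    ic = torch.cat([xt[:, :1], torch.zeros(256, 1)], dim=1)  # the t=0 slice
    # PDE residual loss plus an illustrative initial condition
    # u(x, 0) = sin(pi * x).
    loss = (pde_residual(xt) ** 2).mean() \
        + ((net(ic) - torch.sin(math.pi * ic[:, :1])) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```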

NumCalc: An open source BEM code for solving acoustic scattering problems. (arXiv:2312.00005v1 [math.NA]) arxiv.org/abs/2312.00005

The calculation of the acoustic field in or around objects is an important task in acoustic engineering. The boundary element method (BEM) is a common numerical approach to this task, especially for infinite domains. The open-source tool Mesh2HRTF and its BEM core NumCalc provide users with free software for acoustic simulations without requiring in-depth knowledge of numerical methods. However, we believe users should have a basic understanding of the methods behind the software they are using: it helps them avoid common mistakes and understand the software's requirements. Providing this background is the first motivation for this paper. A second motivation is to demonstrate the accuracy of NumCalc on benchmark problems, so that users can gauge the accuracy they can expect, as well as NumCalc's memory and CPU requirements. A third motivation is to give users detailed information about parts of the implementation that are usually not documented in the literature, e.g., the specific version of the fast multipole method and its clustering process, or how to use frequency-dependent admittance boundary conditions.
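
For orientation, the equation that a BEM core such as NumCalc discretizes is a boundary integral form of the Helmholtz equation. One standard textbook statement, for a point $x$ on a smooth boundary $\Gamma$, with wavenumber $k$ and incident field $p_{\mathrm{inc}}$ (sign conventions vary, and this is not necessarily NumCalc's exact formulation), is

$$\frac{1}{2}\,p(x) = p_{\mathrm{inc}}(x) + \int_{\Gamma}\left[G(x,y)\,\frac{\partial p}{\partial n_y}(y) - \frac{\partial G(x,y)}{\partial n_y}\,p(y)\right]\mathrm{d}\Gamma(y), \qquad G(x,y) = \frac{e^{\mathrm{i}k|x-y|}}{4\pi\,|x-y|}.$$

Discretizing this equation over the boundary mesh yields a dense linear system, which is why the fast multipole method and its clustering process matter for larger problems.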

Space-Time Decomposition of Kalman Filter. (arXiv:2312.00007v1 [math.NA]) arxiv.org/abs/2312.00007

We present an innovative interpretation of the Kalman filter (KF) that combines the ideas of Schwarz domain decomposition (DD) and parallel-in-time (PinT) approaches; we call it DD-KF. In contrast to standard DD approaches, which are already incorporated into KF and other state estimation models and implement straightforward data parallelism inside the loop over time, DD-KF partitions the whole model ab initio, including the filter equations and the dynamic model, along both the space and time directions. As a consequence, we obtain local KFs that reproduce the original filter at smaller dimensions on local domains, and the subproblems can be solved in parallel. To enforce the matching of local solutions on overlapping regions, and thereby recover the global KF solution, the local KFs are slightly modified by adding a correction term that keeps track of the contributions of adjacent subdomains to overlapping regions. This correction term balances localization errors along overlapping regions, acting as a regularization constraint on local solutions. Furthermore, the localization excludes remote observations from each analyzed location, improving the conditioning of the error covariance matrices. As the dynamic model we consider the shallow water equations, which provide a consistent proof of concept for assessing the reliability of DD-KF in monitoring and forecasting weather systems and ocean currents.
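
As background, the global filter that DD-KF decomposes is the standard Kalman predict/update recursion; a minimal numpy sketch (the space-time partitioning and the overlap-correction term of DD-KF are not shown) is:

```python
import numpy as np

def kf_step(x, P, A, Q, H, R, y):
    """One predict/update step of the standard (global) Kalman filter."""
    # Predict with the dynamic model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the observation y.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: a scalar random walk observed in noise.
x, P = np.zeros(1), np.eye(1)
A = Q = H = R = np.eye(1)
for y in (1.1, 0.9, 1.3):
    x, P = kf_step(x, P, A, Q, H, R, np.array([y]))
```

In DD-KF, each subdomain runs such a recursion on its local state and observations, with the added correction term reconciling the overlaps.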

Risk-Aware and Explainable Framework for Ensuring Guaranteed Coverage in Evolving Hardware Trojan Detection. (arXiv:2312.00009v1 [cs.CR]) arxiv.org/abs/2312.00009

As the semiconductor industry has shifted to a fabless paradigm, the risk of hardware Trojans being inserted at various stages of production has increased. There is a growing trend toward machine learning solutions for detecting hardware Trojans, with model accuracy as the usual evaluation metric. However, in a high-risk and sensitive domain, even a small misclassification is unacceptable, and it is unrealistic to expect an ideal model, especially as Trojans evolve over time. We therefore need metrics to assess the trustworthiness of detected Trojans and a mechanism to simulate unseen ones. In this paper, we generate evolving hardware Trojans using our proposed conformalized generative adversarial networks and offer an efficient detection approach based on a non-invasive, algorithm-agnostic statistical inference framework that leverages the Mondrian conformal predictor. The method acts as a wrapper over any machine learning model and produces set predictions with uncertainty quantification for each newly detected Trojan, enabling more robust decision-making. For the case of a NULL (empty) prediction set, we discuss a novel method that rejects the decision while providing a calibrated explanation. The proposed approach has been validated on both synthetic and real chip-level benchmarks and shown to pave the way for researchers seeking informed machine learning solutions to hardware security problems.
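
The Mondrian (class-conditional) conformal step can be sketched in a few lines. This is a generic split-conformal sketch under assumed score conventions, not the paper's full pipeline, and the GAN-based Trojan generation is omitted.

```python
import numpy as np

def mondrian_prediction_set(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Class-conditional (Mondrian) conformal set prediction.

    cal_scores[i, c]: nonconformity of calibration sample i for label c
    (e.g. 1 - predicted probability); cal_labels[i]: its true label;
    test_scores[c]: nonconformity of the test sample for label c.
    """
    pred_set = []
    for c in np.unique(cal_labels):
        s = cal_scores[cal_labels == c, c]  # scores within class c only
        n = len(s)
        # Finite-sample-corrected quantile at level 1 - alpha.
        q = np.quantile(s, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
        if test_scores[c] <= q:
            pred_set.append(c)
    return pred_set  # an empty (NULL) set means abstain / reject
```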

The Bivariate Normal Integral via Owen's T Function as a Modified Euler's Arctangent Series. (arXiv:2312.00011v1 [math.NA]) arxiv.org/abs/2312.00011

Owen's T function is presented in four new ways, one of them as a series similar to Euler's arctangent series divided by $2\pi$, which is its majorant series. All of them enable numerically stable and fast-convergent computation of the bivariate normal integral with a simple recursion. When the computation of $\Phi_\varrho^2(x,y)$ was tested on a random sample of one million parameter triplets with uniformly distributed components, using double-precision arithmetic, the maximum absolute error was $3.45\cdot 10^{-16}$. In additional testing focused on correlation coefficients close to one in absolute value, where the computation can be very sensitive to small rounding errors, the accuracy was retained. In rare, potentially critical cases, a simple adjustment to the computation procedure was made: one potentially critical computation was replaced by two equivalent non-critical ones. All new series are suitable for vector and high-precision computation, provided they are supplemented with appropriately efficient and accurate computation of the arctangent and standard normal cumulative distribution functions. They are implemented in the R package Phi2rho, available on CRAN. Its functions accept vector arguments and work with the Rmpfr package, which enables the use of arbitrary-precision instead of double-precision numbers. A test with up to 1024-bit precision is also presented.
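
For context, the classical route from Owen's T function to the bivariate normal integral is Owen's 1956 identity. A short SciPy sketch of that textbook formula (not the paper's new series; it assumes $h, k \neq 0$ and $|\varrho| < 1$) is:

```python
import numpy as np
from scipy.special import owens_t
from scipy.stats import norm

def bvn_cdf(h, k, rho):
    """Phi2(h, k, rho) via Owen's T (textbook identity, h and k nonzero)."""
    r = np.sqrt(1.0 - rho * rho)
    delta = 0.0 if h * k > 0 else 0.5
    return (0.5 * (norm.cdf(h) + norm.cdf(k))
            - owens_t(h, (k - rho * h) / (h * r))
            - owens_t(k, (h - rho * k) / (k * r)) - delta)

# Sanity check: with rho = 0 the integral factorizes into Phi(h) * Phi(k).
h, k = 0.7, -0.4
print(np.isclose(bvn_cdf(h, k, 0.0), norm.cdf(h) * norm.cdf(k)))  # True
```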

Multi-fidelity uncertainty quantification for homogenization problems in structure-property relationships from crystal plasticity finite elements. (arXiv:2312.00012v1 [math.NA]) arxiv.org/abs/2312.00012

The crystal plasticity finite element method (CPFEM) has been an integrated computational materials engineering (ICME) workhorse for studying materials behavior and structure-property relationships for the last few decades. These relationships are mappings from the microstructure space to the materials-property space. Due to the stochastic nature of microstructures, there is always some uncertainty associated with materials properties, for example in homogenized stress-strain curves. For critical applications with strong reliability needs, it is often desirable to quantify the microstructure-induced uncertainty in the context of structure-property relationships. However, this uncertainty quantification (UQ) problem often incurs a large computational cost because many statistically equivalent representative volume elements (SERVEs) are needed. In this paper, we apply a multilevel Monte Carlo (MLMC) method to CPFEM to study the uncertainty in stress-strain curves, given an ensemble of SERVEs at multiple mesh resolutions. By using information from coarse meshes, we show that the response at fine meshes can be approximated at a much reduced computational cost. We focus on problems where the model output is multi-dimensional, which requires tracking multiple quantities of interest (QoIs) at the same time. Our numerical results show that MLMC accelerates UQ tasks by around 2.23x compared to the classical Monte Carlo (MC) method, which is widely known as the ensemble average in the CPFEM literature.
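
The MLMC mechanism itself can be sketched generically (a synthetic stand-in for CPFEM solves; the level biases, noise, and sample counts are all made up): the mean at the finest level is rewritten as a telescoping sum whose correction terms couple fine and coarse solves on the same SERVEs, so the expensive levels need far fewer samples.

```python
import numpy as np

rng = np.random.default_rng(3)

def qoi(level, w):
    """Stand-in for a homogenized QoI from a CPFEM solve at mesh `level`
    on the SERVE encoded by randomness `w` (synthetic: finer levels have
    smaller discretization bias)."""
    return 10.0 + 2.0 ** (-level) + 0.5 * w

def mlmc_estimate(n_per_level):
    """Telescoping MLMC mean: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    Each correction term evaluates both levels on the SAME samples `w`,
    so its variance, and hence the required sample count, stays small."""
    est = qoi(0, rng.standard_normal(n_per_level[0])).mean()
    for l in range(1, len(n_per_level)):
        w = rng.standard_normal(n_per_level[l])
        est += (qoi(l, w) - qoi(l - 1, w)).mean()
    return est

print(mlmc_estimate([4000, 400, 40]))  # only 40 finest-level "solves"
```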
