Frequency Analysis with Multiple Kernels and Complete Dictionary. (arXiv:2311.12798v1 [cs.IT]) arxiv.org/abs/2311.12798

In signal analysis, a fundamental strategy for obtaining efficient representations of a signal in terms of basic components of meaningful frequencies is to extract principal frequency components, either consecutively one after another or $n$ at a time. For this goal, we define the concept of mean frequency and develop the related frequency decomposition with the complete Szegö kernel dictionary, which consists of the multiple kernels, defined as the parameter-derivatives of the Szegö kernels. Several major energy matching pursuit type sparse representations, including the greedy algorithm (GA), the orthogonal greedy algorithm (OGA), adaptive Fourier decomposition (AFD), pre-orthogonal adaptive Fourier decomposition (POAFD), $n$-best approximation, and the unwinding Blaschke expansion, are analyzed and compared; based on a detailed study of their respective remainders, we establish an ordering of these algorithms by reconstruction efficiency. The study spells out the natural connections between the multiple kernels and the related Laguerre system, and in particular shows that both, like the Fourier series, attain the $O(n^{-\sigma})$ convergence rate for functions in the Hardy-Sobolev space of order $\sigma > 0$. Existence of the $n$-best approximation with the complete Szegö dictionary is proved, and the related algorithmic aspects are discussed. The included experiments form a significant part of the study: they not only illustrate the theoretical results but also cross-compare various combinations of the matching pursuit algorithms and the dictionaries in use. The experiments show that the complete dictionary remarkably improves approximation efficiency.
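
For orientation, the following is a minimal sketch of the objects involved, assuming the standard Hardy-space setting on the unit disc $\mathbb{D}$ (the paper's precise normalizations may differ): the Szegö kernel, its parameter-derivatives (the multiple kernels), and the generic greedy selection step over normalized dictionary elements $e_a$ acting on remainders $f_n$.

```latex
% Szegö kernel of the Hardy space H^2(D) and its parameter-derivatives,
% which together form the complete (multiple-kernel) dictionary:
k_a(z) = \frac{1}{1 - \overline{a}\,z}, \qquad
\partial_{\overline{a}}^{\,j}\, k_a(z) = \frac{j!\, z^{j}}{(1 - \overline{a}\,z)^{j+1}},
\qquad a \in \mathbb{D},\ j \ge 0.

% Generic greedy (matching pursuit) step: select the best-matching element,
% then peel its contribution off the current remainder:
a_n = \arg\max_{a} \bigl| \langle f_n, e_a \rangle \bigr|, \qquad
f_{n+1} = f_n - \langle f_n, e_{a_n} \rangle\, e_{a_n}.
```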

Understanding Data Augmentation from a Robustness Perspective. (arXiv:2311.12800v1 [cs.CV]) arxiv.org/abs/2311.12800

In the realm of visual recognition, data augmentation stands out as a pivotal technique for amplifying model robustness. Yet a considerable number of existing methodologies lean heavily on heuristic foundations, leaving their intrinsic mechanisms ambiguous. This manuscript takes both a theoretical and an empirical approach to understanding the phenomenon. Theoretically, we frame data augmentation within the constructs of game theory. Empirically, our evaluations dissect the mechanisms of emblematic data augmentation strategies, showing that these techniques primarily stimulate mid- and high-order game interactions. Beyond this foundational exploration, our experiments span multiple datasets and diverse augmentation techniques, underscoring the general applicability of our findings. Recognizing the vast array of robustness metrics with intricate correlations, we propose a streamlined proxy that not only simplifies robustness assessment but also offers insight into the dynamics of model game interactions and their relation to overall system robustness. These insights provide a novel lens through which to re-evaluate model safety and robustness in visual recognition tasks.
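
As background, a common formalization of the order-$m$ game interaction between two input variables $i$ and $j$ is sketched below; this is an assumption about the notion of "game interaction" intended here, and the paper may use a variant. Under this reading, mid- and high-order interactions correspond to larger context sizes $m$.

```latex
% Order-m interaction between variables i and j, averaged over contexts S of
% size m drawn from the remaining variables; f(S) is the model output when
% only the variables in S are present.
I^{(m)}(i, j) = \mathbb{E}_{S \subseteq N \setminus \{i, j\},\ |S| = m}
\bigl[ f(S \cup \{i, j\}) - f(S \cup \{i\}) - f(S \cup \{j\}) + f(S) \bigr].
```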

End-to-end Phase Field Model Discovery Combining Experimentation, Crowdsourcing, Simulation and Learning. (arXiv:2311.12801v1 [cs.CV]) arxiv.org/abs/2311.12801

The availability of terabyte-scale experimental data calls for AI-driven approaches that automatically discover scientific models from data. Nonetheless, AI-driven scientific discovery presents significant challenges: (i) annotating large-scale datasets requires a fundamental rethinking of scalable crowdsourcing tools; (ii) learning scientific models from data calls for innovations beyond black-box neural nets; (iii) novel visualization and diagnosis tools are needed for collaboration between experimental physicists, theoretical physicists, and computer scientists. We present the Phase-Field-Lab platform for end-to-end phase field model discovery, which automatically discovers phase field physics models from experimental data by integrating experimentation, crowdsourcing, simulation, and learning. Phase-Field-Lab combines (i) a streamlined annotation tool that reduces annotation time (by ~50-75%) while increasing annotation accuracy compared to baselines; (ii) an end-to-end neural model that automatically learns phase field models from data by embedding phase field simulation and existing domain knowledge into learning; and (iii) novel interfaces and visualizations that integrate our platform into the scientific discovery cycle of domain scientists. Our platform is deployed in the analysis of nano-structure evolution in materials under extreme conditions (high temperature and irradiation). Our approach reveals new properties of nano-void defects that otherwise cannot be detected via manual analysis.
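
To make the idea of "embedding phase field simulation into learning" concrete, here is a minimal, generic Allen-Cahn-type phase field step in Python. This is an illustrative sketch of the kind of simulation involved, not Phase-Field-Lab's actual model or discretization.

```python
import numpy as np

# Explicit finite-difference step for an Allen-Cahn-type phase field:
#   phi_t = eps^2 * laplacian(phi) - W'(phi),  W(phi) = (phi^2 - 1)^2 / 4,
# so W'(phi) = phi^3 - phi. Periodic boundaries via np.roll.
def allen_cahn_step(phi, eps=0.05, dt=1e-4, dx=1.0 / 128):
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
    return phi + dt * (eps**2 * lap - (phi**3 - phi))

phi = np.random.uniform(-0.1, 0.1, (128, 128))  # near-uniform initial mixture
for _ in range(1000):                            # phases separate over time
    phi = allen_cahn_step(phi)
```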

A general Framework for Utilizing Metaheuristic Optimization for Sustainable Unrelated Parallel Machine Scheduling: A concise overview. (arXiv:2311.12802v1 [cs.NE]) arxiv.org/abs/2311.12802

Sustainable development has emerged as a global priority, and industries are increasingly striving to align their operations with sustainable practices. Parallel machine scheduling (PMS) is a critical aspect of production planning that directly impacts resource utilization and operational efficiency. In this paper, we investigate the application of metaheuristic optimization algorithms to address the unrelated parallel machine scheduling problem (UPMSP) through the lens of sustainable development goals (SDGs). The primary objective of this study is to explore how metaheuristic optimization algorithms can contribute to achieving sustainable development goals in the context of UPMSP. We examine a range of metaheuristic algorithms, including genetic algorithms, particle swarm optimization, ant colony optimization, and more, and assess their effectiveness in optimizing the scheduling problem. The algorithms are evaluated based on their ability to improve resource utilization, minimize energy consumption, reduce environmental impact, and promote socially responsible production practices. To conduct a comprehensive analysis, we consider UPMSP instances that incorporate sustainability-related constraints and objectives.
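
As a concrete illustration of how a metaheuristic attacks the UPMSP, here is a toy simulated-annealing sketch in Python that assigns jobs to unrelated machines and minimizes makespan plus a weighted energy term. The instance data, cooling schedule, and weight are illustrative assumptions; the surveyed algorithms (GA, PSO, ACO) follow the same evaluate-and-perturb pattern with different search operators.

```python
import random, math

def cost(assign, p, e, n_machines, w=0.1):
    """Makespan plus w-weighted total energy for a job -> machine assignment."""
    loads = [0.0] * n_machines
    energy = 0.0
    for j, m in enumerate(assign):
        loads[m] += p[j][m]   # unrelated machines: time depends on (job, machine)
        energy += e[j][m]
    return max(loads) + w * energy

def anneal(p, e, n_machines, iters=20000, t0=10.0):
    assign = [random.randrange(n_machines) for _ in p]
    cur = cost(assign, p, e, n_machines)
    best, best_cost = assign[:], cur
    for i in range(iters):
        t = t0 * (1 - i / iters) + 1e-9                  # linear cooling
        cand = assign[:]
        cand[random.randrange(len(p))] = random.randrange(n_machines)
        c = cost(cand, p, e, n_machines)
        if c < cur or random.random() < math.exp((cur - c) / t):
            assign, cur = cand, c                        # accept the move
            if c < best_cost:
                best, best_cost = assign[:], c
    return best, best_cost

# Hypothetical 30-job, 4-machine instance with random times and energies
p = [[random.uniform(1, 10) for _ in range(4)] for _ in range(30)]
e = [[random.uniform(1, 5) for _ in range(4)] for _ in range(30)]
schedule, obj = anneal(p, e, n_machines=4)
```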

Investigating Copyright Issues of Diffusion Models under Practical Scenarios. (arXiv:2311.12803v1 [cs.MM]) arxiv.org/abs/2311.12803

The issue of copyright in generative models, particularly diffusion models, has become a prominent concern in recent years. Previous studies have predominantly focused on copyright violation at the image level, where generative models replicate copyrighted images entirely. Furthermore, these earlier studies have examined copyright infringements mainly using prompts that are semantically similar to target topics. However, copyright infringement can be more nuanced than mere replication of whole images and can be triggered with prompts that are less directly related to copyright topics. In our work, we tackle the limitations of previous studies by delving into partial copyright infringement, which treats parts of images as copyrighted content, using prompts that are considerably different from copyrighted topics. We develop a data generation pipeline that facilitates the creation of datasets for copyright research in diffusion models. Using our pipeline, we create datasets containing copyright infringement samples for different diffusion models. We conduct evaluations on generated data under various criteria. Our results show the prevalence of generating copyright-infringing content across a range of diffusion models, including the latest Stable Diffusion XL.

Reducing the Environmental Impact of Wireless Communication via Probabilistic Machine Learning. (arXiv:2311.12807v1 [cs.NI]) arxiv.org/abs/2311.12807

Machine learning methods are increasingly adopted in communications problems, particularly those arising in next-generation wireless settings. Although communications is seen as a key enabler of climate mitigation and societal adaptation, its energy consumption is high and is expected to grow in future networks, in spite of anticipated efficiency gains in 6G, due to exponential growth in communications traffic. To make a meaningful climate-mitigation impact in the communications sector, a mindset shift is needed: away from maximizing throughput at all costs and towards prioritizing energy efficiency. Moreover, this must be adopted in both existing network infrastructure (without incurring further embodied carbon costs through equipment replacement) and future infrastructure, given the long development time of mobile generations. To that end, we present summaries of two such problems, from current and next-generation network specifications respectively, where probabilistic inference methods were used to great effect: using Bayesian parameter tuning, we safely reduce the energy consumption of existing hardware on a live communications network by $11\%$ while maintaining operator-specified performance envelopes; through spatiotemporal Gaussian process surrogate modeling, we reduce the overhead in a next-generation hybrid beamforming system by over $60\%$, greatly improving the network's ability to target highly mobile users such as autonomous vehicles. The Bayesian paradigm is itself helpful in terms of energy usage, since training a Bayesian optimization model can require much less computation than, say, training a deep neural network.
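
A minimal sketch of a Bayesian parameter-tuning loop of the kind described above, assuming a scikit-learn Gaussian process surrogate and an expected-improvement acquisition over a discrete candidate grid. `measure_energy` and the one-dimensional search space are hypothetical stand-ins for a live-network measurement; the paper's actual setup and safety constraints are not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def measure_energy(x):
    """Hypothetical stand-in for measuring energy use of a configuration."""
    return np.sin(3 * x[0]) + 0.5 * x[0] ** 2

candidates = np.linspace(-2, 2, 200).reshape(-1, 1)   # discrete search grid
X = candidates[np.random.choice(len(candidates), 3, replace=False)]
y = np.array([measure_energy(x) for x in X])          # initial observations

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    imp = y.min() - mu                                 # minimizing energy
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)       # expected improvement
    x_next = candidates[np.argmax(ei)]                 # most promising config
    X = np.vstack([X, x_next])
    y = np.append(y, measure_energy(x_next))
```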

TransCDR: a deep learning model for enhancing the generalizability of cancer drug response prediction through transfer learning and multimodal data fusion for drug representation. (arXiv:2311.12040v1 [q-bio.QM]) arxiv.org/abs/2311.12040

Accurate and robust drug response prediction is of utmost importance in precision medicine. Although many models have been developed to utilize the representations of drugs and cancer cell lines for predicting cancer drug responses (CDR), their performances can be improved by addressing issues such as insufficient data modality, suboptimal fusion algorithms, and poor generalizability for novel drugs or cell lines. We introduce TransCDR, which uses transfer learning to learn drug representations and fuses multi-modality features of drugs and cell lines by a self-attention mechanism, to predict the IC50 values or sensitive states of drugs on cell lines. We are the first to systematically evaluate the generalization of the CDR prediction model to novel (i.e., never-before-seen) compound scaffolds and cell line clusters. TransCDR shows better generalizability than 8 state-of-the-art models. TransCDR outperforms its 5 variants that train drug encoders (i.e., RNN and AttentiveFP) from scratch under various scenarios. The most critical contributors among multiple drug notations and omics profiles are Extended Connectivity Fingerprint and genetic mutation. Additionally, the attention-based fusion module further enhances the predictive performance of TransCDR. TransCDR, trained on the GDSC dataset, demonstrates strong predictive performance on the external testing set CCLE. It is also utilized to predict missing CDRs on GDSC. Moreover, we investigate the biological mechanisms underlying drug response by classifying 7,675 patients from TCGA into drug-sensitive or drug-resistant groups, followed by a Gene Set Enrichment Analysis. TransCDR emerges as a potent tool with significant potential in drug response prediction. The source code and data can be accessed at https://github.com/XiaoqiongXia/TransCDR.
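
In the spirit of the attention-based fusion module, here is a minimal PyTorch sketch that fuses drug and cell-line feature tokens with self-attention and pools them for an IC50 prediction head. The dimensions, token layout, and prediction head are illustrative assumptions, not the published TransCDR architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Self-attention over concatenated drug and cell-line modality tokens."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, drug_feats, cell_feats):
        # drug_feats: (B, n_drug_modalities, dim); cell_feats: (B, n_omics, dim)
        tokens = torch.cat([drug_feats, cell_feats], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)   # attend across modalities
        return self.head(fused.mean(dim=1))            # pooled -> predicted IC50

model = AttentionFusion()
ic50 = model(torch.randn(8, 3, 256), torch.randn(8, 2, 256))  # shape (8, 1)
```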

Automated Detection of hidden Damages and Impurities in Aluminum Die Casting Materials and Fibre-Metal Laminates using Low-quality X-ray Radiography, Synthetic X-ray Data Augmentation by Simulation, and Machine Learning. (arXiv:2311.12041v1 [cs.CV]) arxiv.org/abs/2311.12041

Detection and characterization of hidden defects, impurities, and damage in layered composites such as fibre laminates, e.g., Fibre Metal Laminates (FML), as well as in monolithic materials, e.g., aluminum die casting materials, is still a challenge. This work discusses methods and challenges in data-driven modeling of automated damage and defect detectors using X-ray single- and multi-projection (CT) images. Three main issues are identified: data and feature variance, data feature labeling (for supervised machine learning), and the missing ground truth. We show that only simulation of data can deliver a ground-truth dataset and accurate labeling. Noise has a significant impact on feature detection and is discussed. Data-driven feature detectors are implemented with semantic pixel- or z-profile Convolutional Neural Networks and LSTM autoencoders. Data are measured with three different devices: a low-quality, low-cost device (Low-Q) and mid- and high-quality micro-CT devices (Mid-/High-Q). The goals of this work are the training of robust and generalized feature detectors with synthetic data and the transition from High- and Mid-Q laboratory measuring technologies towards in-field usable technologies and methods.
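
A minimal sketch of why simulation yields exact ground truth: generate a synthetic radiograph from a known attenuation volume with a simplified Beer-Lambert model plus Poisson photon noise, so the defect mask is known by construction. The attenuation values, geometry, and noise model are illustrative assumptions, not the paper's simulation pipeline.

```python
import numpy as np

def synthetic_radiograph(mu, i0=5e3, pixel_size=0.1):
    """Project a 3D attenuation volume (z, y, x) along the beam (z) axis."""
    path = mu.sum(axis=0) * pixel_size              # line integrals of mu
    expected = i0 * np.exp(-path)                   # Beer-Lambert attenuation
    return np.random.poisson(expected).astype(np.float64)  # photon noise

volume = np.full((64, 128, 128), 0.05)              # homogeneous aluminum block
volume[30:34, 60:70, 60:70] = 0.0                   # hypothetical hidden void
image = synthetic_radiograph(volume)                # void appears brighter
mask = (volume == 0.0).any(axis=0)                  # exact defect label, for free
```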

Atomic Defect-Aware Physical Design of Silicon Dangling Bond Logic on the H-Si(100)2x1 Surface. (arXiv:2311.12042v1 [physics.app-ph]) arxiv.org/abs/2311.12042

Although fabrication capabilities of Silicon Dangling Bonds have rapidly advanced from manual labor-driven laboratory work to automated manufacturing in just recent years, sub-nanometer substrate defects still pose a hindrance to production due to the need for atomic precision. In essence, unpassivated or missing surface atoms, contaminants, and structural deformations disturb the fabricated logic or prevent its realization altogether. Moreover, design automation techniques in this domain have not yet adopted any defect-aware behavior to circumvent the present obstacles. In this paper, we derive a surface defect model for design automation from experimentally verified defect types that we apply to identify sensitivities in an established gate library in an effort to generate more robust designs. Furthermore, we present an automatic placement and routing algorithm that considers scanning tunneling microscope data obtained from physical experiments to lay out dot-accurate circuitry that is resilient against the presence of atomic surface defects. This culminates in a holistic evaluation on surface data of varying defect rates that enables us to quantify the severity of such defects. We project that fabrication capabilities must achieve defect rates of around 0.1 %, if charged defects can be completely eliminated, or < 0.1 %, otherwise. This realization sets the pace for future efforts to scale up this promising circuit technology.
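
To illustrate defect-aware routing in the simplest possible terms, here is a toy Python sketch that runs Dijkstra on a surface lattice while treating sites flagged as defective (e.g., from STM scan data) as forbidden. Real dot-accurate SiDB routing must also respect gate geometry and charge interactions, which this sketch ignores entirely.

```python
import heapq

def route(grid_w, grid_h, start, goal, defects):
    """Shortest defect-free path on a grid; defects is a set of (x, y) sites."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:                      # reconstruct path backwards
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= n[0] < grid_w and 0 <= n[1] < grid_h
                    and n not in defects and d + 1 < dist.get(n, float("inf"))):
                dist[n] = d + 1
                prev[n] = (x, y)
                heapq.heappush(pq, (d + 1, n))
    return None                                 # no defect-free route exists

path = route(20, 20, (0, 0), (19, 19), defects={(5, y) for y in range(15)})
```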

LATIS: Lambda Abstraction-based Thermal Image Super-resolution. (arXiv:2311.12046v1 [eess.IV]) arxiv.org/abs/2311.12046

Single image super-resolution (SISR) is an effective technique for improving the quality of low-resolution thermal images. Recently, transformer-based methods have achieved significant performance in SISR. However, in the SR task, only a small number of pixels can be involved in the transformer's self-attention (SA) mechanism due to the computational complexity of attention. The lambda abstraction is a promising alternative to SA for modeling long-range interactions while being computationally more efficient. This paper presents lambda abstraction-based thermal image super-resolution (LATIS), a novel lightweight architecture for SISR of thermal images. LATIS sequentially captures local and global information using a local and global feature block (LGFB). Within the LGFB, we introduce a global feature extraction (GFE) module based on the lambda abstraction mechanism and a channel-shuffle and convolution (CSConv) layer to encode local context. In addition, to further improve performance, we propose a differentiable patch-wise histogram-based loss function. Experimental results demonstrate that LATIS, with the fewest model parameters and lowest complexity, achieves better or comparable performance to state-of-the-art methods across multiple datasets.
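
For intuition about the lambda abstraction, here is a minimal content-only lambda layer in PyTorch (position lambdas omitted): keys are softmax-normalized over positions and summarized into a small content matrix, which is then applied to every query in time linear in the number of pixels. The channel sizes are illustrative, and this is a generic lambda layer, not the LATIS GFE module itself.

```python
import torch
import torch.nn as nn

class ContentLambda(nn.Module):
    """Content-only lambda layer over flattened spatial positions."""
    def __init__(self, dim, dim_k=16):
        super().__init__()
        self.q = nn.Linear(dim, dim_k, bias=False)
        self.k = nn.Linear(dim, dim_k, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                         # x: (B, N, dim), N = H * W
        q = self.q(x)                             # (B, N, k)
        k = self.k(x).softmax(dim=1)              # normalize keys over positions
        v = self.v(x)                             # (B, N, dim)
        lam = torch.einsum("bnk,bnv->bkv", k, v)  # content lambda: (B, k, dim)
        return torch.einsum("bnk,bkv->bnv", q, lam)  # apply to every query

y = ContentLambda(64)(torch.randn(2, 32 * 32, 64))  # output: (2, 1024, 64)
```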

Multimodal Machine Unlearning. (arXiv:2311.12047v1 [cs.AI]) arxiv.org/abs/2311.12047

Machine unlearning is the process of removing specific training data samples and their corresponding effects from an already trained model. It has significant practical benefits, such as purging private, inaccurate, or outdated information from trained models without complete re-training. Unlearning in a multimodal setting presents unique challenges due to the intrinsic dependencies between data modalities and the expensive cost of training on large multimodal datasets and architectures. Current approaches to machine unlearning have not fully addressed these challenges. To bridge this gap, we introduce MMUL, a machine unlearning approach specifically designed for multimodal data and models. MMUL formulates the multimodal unlearning task around three key properties: (a) modality decoupling, which decouples the association between individual unimodal data points within multimodal inputs marked for deletion, rendering them unrelated data points within the model's context; (b) unimodal knowledge retention, which retains the model's unimodal representation capability post-unlearning; and (c) multimodal knowledge retention, which retains the model's multimodal representation capability post-unlearning. MMUL is efficient to train and is not constrained by the requirement of a strongly convex loss. Experiments on two multimodal models and four multimodal benchmark datasets, including vision-language and graph-language datasets, show that MMUL outperforms existing baselines, gaining an average improvement of +17.6 points over the best-performing unimodal baseline in distinguishing between deleted and remaining data. In addition, MMUL largely maintains the pre-existing knowledge of the original model post-unlearning, with a performance gap of only 0.3 points compared to retraining a new model from scratch.
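
One plausible, heavily hedged reading of the three properties as training losses; this is an assumption for illustration, not MMUL's actual objective. The idea: penalize similarity between paired unimodal embeddings of deleted samples, while keeping the model close to a frozen copy of the original on retained data.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: model / frozen_model are encoders returning a pair of
# unimodal embeddings (e.g., image and text) per batch. Not the MMUL method.
def unlearning_loss(model, frozen_model, forget_batch, retain_batch, w=1.0):
    fi, ft = model(forget_batch)
    # (a) modality decoupling: make paired forget-set embeddings unrelated
    decouple = F.cosine_similarity(fi, ft, dim=-1).mean()
    ri, rt = model(retain_batch)
    with torch.no_grad():
        oi, ot = frozen_model(retain_batch)
    # (b) + (c) knowledge retention: stay close to the original model on
    # retained data, preserving uni- and multimodal representations
    retain = F.mse_loss(ri, oi) + F.mse_loss(rt, ot)
    return decouple + w * retain
```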
