Early theories on fluid resistance and translation of Euler's "Dilucidationes de resistentia fluidorum" arxiv.org/abs/2409.16306

In 1763, Euler published "Dilucidationes de resistentia fluidorum" (Explanations on the resistance of fluids), a memoir that challenges the fluid resistance theories proposed by Isaac Newton and d'Alembert. Euler's work explores the resistance experienced by solid bodies moving through fluids, critiquing both Newton's "common rule" and d'Alembert's paradox, which predicted zero resistance for non-viscous fluids. Euler's treatise is divided into two parts: the first focuses on the mathematical modeling of fluid flow patterns, while the second addresses the calculation of fluid resistance on surfaces. Despite significant advancements, Euler's work remains constrained by the limitations of non-viscous fluid assumptions, ultimately grappling with the same paradoxes he sought to overcome. This paper reviews the key contributions and limitations of "Dilucidationes", emphasizing the ongoing relevance of Euler's insights in the context of classical fluid dynamics. Additionally, it highlights the translation capabilities of AI-powered tools, specifically ChatGPT, in translating complex mathematical texts, marking a noticeable improvement in handling notation-heavy content.

Identification of extreme weather events and impacts of the disasters in Brazil arxiv.org/abs/2409.16309

An important consequence of human-induced climate change emerges through extreme weather events. The impact of extreme weather events is quantified in some parts of the globe, but it remains underestimated in several countries. In this work we first quantify the extreme temperature and precipitation events in Brazil using data from the Brazilian Institute of Meteorology, which includes 634 meteorological stations that have operated intermittently since 1961. We show that the temperature anomaly has increased by more than 1°C in the last 60 years and that extreme events are heterogeneously distributed across the country. In terms of precipitation, our analyses show that the Northwest region of Brazil is getting drier, while excessive precipitation events are increasing in the South, in agreement with previous works. We then use data from S2iD, an official database that registers disasters in Brazil, to estimate their impact in terms of human damage and financial costs over the last ten years. The analysis shows that extreme drought events are the most expensive, several of them reaching costs of over a billion USD. Although we are not able to attribute the natural disasters registered in one database to the extreme weather events identified using the meteorological data, we discuss possible correlations between them. Finally, we propose using extreme value theory to estimate the probability of severe extreme precipitation events in locations where some natural disasters have already occurred.
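
As a rough illustration of the extreme-value-theory step proposed in the abstract above, the sketch below fits a generalized extreme value (GEV) distribution to annual precipitation maxima and reads off an exceedance probability. It is a minimal sketch, not the authors' code: the synthetic rainfall record and the 100 mm threshold are assumptions for illustration.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)

    # Synthetic stand-in for ~60 years of daily precipitation (mm) at one station.
    daily_precip = rng.gamma(shape=0.4, scale=12.0, size=(60, 365))

    # Block maxima: the largest daily total in each year.
    annual_maxima = daily_precip.max(axis=1)

    # Fit a GEV distribution to the annual maxima (shape, location, scale).
    shape, loc, scale = genextreme.fit(annual_maxima)

    # Probability that next year's maximum exceeds a severe threshold (assumed 100 mm),
    # and the corresponding return period in years.
    threshold_mm = 100.0
    p_exceed = genextreme.sf(threshold_mm, shape, loc=loc, scale=scale)
    return_period = 1.0 / p_exceed

    print(f"P(annual max > {threshold_mm} mm) ~ {p_exceed:.3f}")
    print(f"Approximate return period: {return_period:.1f} years")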

New Insights into Global Warming: End-to-End Visual Analysis and Prediction of Temperature Variations arxiv.org/abs/2409.16311

Global warming presents an unprecedented challenge to our planet; however, comprehensive understanding remains hindered by geographical biases, temporal limitations, and a lack of standardization in existing research. An end-to-end visual analysis of global warming using three distinct temperature datasets is presented. A baseline adjusted from the Paris Agreement's 1.5°C benchmark, based on data analysis, is employed. A closed-loop design from visualization to prediction and clustering is created using classic models tailored to the characteristics of the data. This approach reduces complexity and eliminates the need for advanced feature engineering. A lightweight convolutional neural network and long short-term memory (CNN-LSTM) model specifically designed for global temperature change is proposed, achieving exceptional accuracy in long-term forecasting with a mean squared error of 3×10⁻⁶ and an R² value of 0.9999. Dynamic time warping and K-means clustering elucidate national-level temperature anomalies and carbon emission patterns. This comprehensive method reveals intricate spatiotemporal characteristics of global temperature variations and provides warming-trend attribution. The findings offer new insights into climate change dynamics, demonstrating that simplicity and precision can coexist in environmental analysis.
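
The abstract pairs dynamic time warping with K-means clustering to group national temperature anomalies. The sketch below, on invented anomaly series, implements a plain DTW distance and then applies K-means directly to the series treated as vectors; it is an illustration of the two ingredients, not the paper's pipeline, and all data are synthetic.

    import numpy as np
    from sklearn.cluster import KMeans

    def dtw_distance(a, b):
        """Classic dynamic-time-warping distance between two 1-D series."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    rng = np.random.default_rng(1)
    years = np.arange(1960, 2024)

    # Synthetic stand-ins for national temperature-anomaly series (°C):
    # half the "countries" warm quickly, half warm slowly.
    fast = 0.03 * (years - 1960) + rng.normal(0, 0.15, (5, years.size))
    slow = 0.01 * (years - 1960) + rng.normal(0, 0.15, (5, years.size))
    series = np.vstack([fast, slow])

    # DTW distance between one fast- and one slow-warming series.
    print("DTW(fast, slow) =", round(dtw_distance(series[0], series[5]), 2))

    # Simple K-means grouping of the anomaly series into two clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(series)
    print("cluster labels:", labels)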

SEA-ViT: Sea Surface Currents Forecasting Using Vision Transformer and GRU-Based Spatio-Temporal Covariance Modeling arxiv.org/abs/2409.16313

Forecasting sea surface currents is essential for applications such as maritime navigation, environmental monitoring, and climate analysis, particularly in regions like the Gulf of Thailand and the Andaman Sea. This paper introduces SEA-ViT, an advanced deep learning model that integrates a Vision Transformer (ViT) with bidirectional Gated Recurrent Units (GRUs) to capture spatio-temporal covariance for predicting sea surface currents (U, V) from high-frequency (HF) radar data. The name SEA-ViT is derived from "Sea Surface Currents Forecasting using Vision Transformer," highlighting the model's emphasis on ocean dynamics and its use of the ViT architecture to enhance forecasting capabilities. SEA-ViT is designed to unravel complex dependencies by leveraging a rich dataset spanning over 30 years and incorporating ENSO indices (El Niño, La Niña, and neutral phases) to address the intricate relationship between geographic coordinates and climatic variations. This development enhances the predictive capabilities for sea surface currents, supporting the efforts of the Geo-Informatics and Space Technology Development Agency (GISTDA) in Thailand's maritime regions. The code and pretrained models are available at https://github.com/kaopanboonyuen/gistda-ai-sea-surface-currents.
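
For readers curious how a ViT encoder and a bidirectional GRU can be combined for this kind of forecasting, here is a minimal PyTorch sketch. It is not the released SEA-ViT model (see the linked repository for that); the layer sizes, the toy (U, V) inputs, and the single-step prediction head are all assumptions.

    import torch
    import torch.nn as nn

    class TinySeaViT(nn.Module):
        """Minimal ViT-encoder + bidirectional-GRU forecaster (illustrative only)."""

        def __init__(self, img_size=32, patch=8, dim=64, heads=4, layers=2, hidden=64):
            super().__init__()
            # Patch embedding for 2-channel (U, V) current maps.
            self.embed = nn.Conv2d(2, dim, kernel_size=patch, stride=patch)
            enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
            # Bidirectional GRU over per-frame embeddings (temporal dependencies).
            self.gru = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)
            # Predict the next-step (U, V) maps from the last GRU output.
            self.head = nn.Linear(2 * hidden, 2 * img_size * img_size)
            self.img_size = img_size

        def forward(self, x):                             # x: (batch, time, 2, H, W)
            b, t = x.shape[:2]
            frames = x.flatten(0, 1)                      # (b*t, 2, H, W)
            tokens = self.embed(frames).flatten(2).transpose(1, 2)  # (b*t, patches, dim)
            frame_emb = self.encoder(tokens).mean(dim=1)  # (b*t, dim)
            out, _ = self.gru(frame_emb.view(b, t, -1))   # (b, t, 2*hidden)
            pred = self.head(out[:, -1])                  # use the last time step
            return pred.view(b, 2, self.img_size, self.img_size)

    model = TinySeaViT()
    history = torch.randn(4, 12, 2, 32, 32)   # 4 samples, 12 past frames of (U, V)
    print(model(history).shape)               # torch.Size([4, 2, 32, 32])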

Surface solar radiation: AI satellite retrieval can outperform Heliosat and generalizes well to other climate zones arxiv.org/abs/2409.16316

Accurate estimates of surface solar irradiance (SSI) are essential for solar resource assessments and solar energy forecasts in grid integration and building control applications. SSI estimates for spatially extended regions can be retrieved from geostationary satellites such as Meteosat. Traditional SSI satellite retrievals like Heliosat rely on physical radiative transfer modelling. We introduce the first machine-learning-based satellite retrieval for instantaneous SSI and demonstrate its capability to provide accurate and generalizable SSI estimates across Europe. Our deep learning retrieval provides near real-time SSI estimates based on data-driven emulation of Heliosat and fine-tuning on pyranometer networks. By including SSI from ground stations, our SSI retrieval model can outperform Heliosat accuracy and generalize well to regions with other climates and surface albedos in cloudy conditions (clear-sky index < 0.8). We also show that the SSI retrieved from Heliosat exhibits large biases in mountain regions, and that training and fine-tuning our retrieval models on SSI data from ground stations strongly reduces these biases, outperforming Heliosat. Furthermore, we quantify the relative importance of the Meteosat channels and other predictor variables like solar zenith angle for the accuracy of our deep learning SSI retrieval model in different cloud conditions. We find that in cloudy conditions multiple near-infrared and infrared channels enhance the performance. Our results can facilitate the development of more accurate satellite retrieval models of surface solar irradiance.
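
A minimal sketch of the two-stage idea described above: first emulate the physical retrieval, then fine-tune on ground stations. It uses a small scikit-learn MLP with warm_start so the second fit continues from the first; the synthetic predictors and targets are placeholders, not Meteosat or pyranometer data, and the architecture is an assumption.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)

    # Synthetic stand-ins for satellite predictors (channel radiances, solar zenith angle, ...).
    X_sat = rng.normal(size=(5000, 8))
    heliosat_ssi = 400 + 50 * X_sat[:, 0] - 30 * X_sat[:, 1] + rng.normal(0, 10, 5000)

    # Stage 1: emulate the physical retrieval (Heliosat) from the satellite predictors.
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300,
                         warm_start=True, random_state=0)
    model.fit(X_sat, heliosat_ssi)

    # Stage 2: fine-tune on (fewer) ground-station pyranometer measurements,
    # which correct systematic biases of the emulated retrieval.
    X_station = rng.normal(size=(500, 8))
    station_ssi = 400 + 55 * X_station[:, 0] - 25 * X_station[:, 1] + rng.normal(0, 5, 500)
    model.set_params(max_iter=100)
    model.fit(X_station, station_ssi)   # warm_start=True continues from stage-1 weights

    print("fine-tuned prediction (W/m^2):", model.predict(X_station[:3]).round(1))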

Thermo-mechanical Properties of Hierarchical Biocomposite Materials from Photosynthetic Microorganisms arxiv.org/abs/2409.16318

Extrusion 3D-printing of biopolymers and natural fiber-based biocomposites allows for the fabrication of complex structures, ranging from gels for healthcare applications to eco-friendly structural materials. However, traditional polymer extrusion demands high-energy consumption to pre-heat the slurries and reduce material viscosity. Additionally, natural fiber reinforcement often requires harsh treatments to improve adhesion to the matrix. Here, we overcome these challenges by introducing a systematic framework to fabricate natural biocomposite materials via a sustainable and scalable process. Using Chlorella vulgaris microalgae as the matrix, we optimize the bioink composition and the 3D-printing process to fabricate multifunctional, lightweight, hierarchical materials. A systematic dehydration approach prevents cracking and failure of the 3D-printed structure, maintaining a continuous morphology of aggregated microalgae cells that can withstand high shear forces during processing. Hydroxyethyl cellulose acts as a binder and reinforcement for Chlorella cells, leading to biocomposites with a bending stiffness above 1.5 GPa. The Chlorella biocomposites demonstrate isotropic heat transfer, functioning as effective thermal insulators with a thermal conductivity of 0.10 W/mK at room temperature. These materials show promise in applications requiring balanced thermal insulation and structural capabilities, positioning them as a sustainable alternative to conventional materials in response to increasing global materials demand.
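
To put the reported thermal conductivity in context, here is a back-of-the-envelope calculation of the steady-state heat flux through a flat panel of this biocomposite. Only the 0.10 W/mK value comes from the abstract; the panel thickness and temperature difference are assumptions.

    # Steady-state conductive heat flux through a flat panel: q = k * dT / t.
    k = 0.10          # W/(m K), Chlorella biocomposite (from the abstract)
    thickness = 0.02  # m, assumed 2 cm panel
    delta_T = 20.0    # K, assumed temperature difference across the panel

    heat_flux = k * delta_T / thickness        # W/m^2
    print(f"Conductive heat flux: {heat_flux:.0f} W/m^2")   # -> 100 W/m^2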

A Generative Diffusion Model for Probabilistic Ensembles of Precipitation Maps Conditioned on Multisensor Satellite Observations arxiv.org/abs/2409.16319

A generative diffusion model is used to produce probabilistic ensembles of precipitation intensity maps at 1-hour, 5-km resolution. The generation is conditioned on infrared and microwave radiometric measurements from the GOES and DMSP satellites and is trained with merged ground radar and gauge data over the southeastern United States. The generated precipitation maps reproduce the spatial autocovariance and other multiscale statistical properties of the gauge-radar reference fields on average. Conditioning the generation on the satellite measurements allows us to constrain the magnitude and location of each generated precipitation feature. The mean of the 128-member ensemble shows high spatial coherence with the reference fields, with a linear correlation of 0.82 between the two. On average, the coherence between any two ensemble members is approximately the same as the coherence between any ensemble member and the ground reference, attesting that the ensemble dispersion is a proper measure of the estimation uncertainty. From the generated ensembles we can easily derive the probability of the precipitation intensity exceeding any given threshold, at the 5-km resolution of the generation or at any desired aggregated resolution.
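
The last sentence describes deriving exceedance probabilities from the generated ensemble. The sketch below shows that step on a synthetic 128-member ensemble: the per-pixel exceedance probability is the fraction of members above a threshold, and block averaging gives any coarser aggregation. The grid size, the threshold, and the gamma-distributed fake rain are assumptions, not the paper's data.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic stand-in for a 128-member ensemble of precipitation maps
    # (members, ny, nx) at 5-km resolution, in mm/h.
    ensemble = rng.gamma(shape=0.5, scale=4.0, size=(128, 60, 60))

    # Exceedance probability at each pixel: fraction of members above a threshold.
    threshold = 10.0   # mm/h, assumed
    p_exceed = (ensemble > threshold).mean(axis=0)      # (60, 60), values in [0, 1]

    # Ensemble mean and a coarser 25-km aggregation (5x5 pixel block average).
    ens_mean = ensemble.mean(axis=0)
    coarse = ens_mean.reshape(12, 5, 12, 5).mean(axis=(1, 3))

    print("max exceedance probability:", p_exceed.max().round(2))
    print("coarse grid shape:", coarse.shape)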

Developing a Thailand solar irradiance map using Himawari-8 satellite imageries and deep learning models arxiv.org/abs/2409.16320

This paper presents an online platform that shows Thailand's solar irradiance map every 30 minutes. It is available at https://www.cusolarforecast.com. The methodology for estimating global horizontal irradiance (GHI) across Thailand relies on the cloud index extracted from Himawari-8 satellite imagery, the Ineichen clear-sky model with locally tuned Linke turbidity, and machine learning models. The methods take clear-sky irradiance, cloud index, re-analyzed GHI and temperature data from the MERRA-2 database, and date-time as inputs for GHI estimation models, including LightGBM, LSTM, Informer, and Transformer. These are benchmarked against estimates from the SolCast service using 15-minute ground GHI data from 53 ground stations over 1.5 years during 2022-2023. The results show that the four models have competitive performance and outperform the SolCast service. The best model is LightGBM, with an MAE of 78.58 W/sqm and an RMSE of 118.97 W/sqm. Since obtaining re-analyzed MERRA-2 data for Thailand is not economically feasible for deployment, these features were removed; in that case the Informer model performs best, with an MAE of 78.67 W/sqm. The obtained performance aligns with existing literature when the climate zone and time granularity of the data are taken into consideration. As the map provides GHI estimates over more than 93,000 grid cells with frequent updates, the paper also describes a computational framework for displaying the entire map and tests the runtime performance of deep learning models in the GHI estimation process.
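
A minimal sketch of the kind of GHI estimation and benchmarking described above, using LightGBM on invented clear-sky irradiance and cloud-index features and reporting MAE and RMSE. The feature set, the synthetic target, and the hyperparameters are assumptions, not the paper's configuration.

    import numpy as np
    from lightgbm import LGBMRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 20000

    # Synthetic stand-ins for the model inputs described above.
    clear_sky_ghi = rng.uniform(0, 1000, n)     # W/m^2, from a clear-sky model
    cloud_index = rng.uniform(0, 1, n)          # from satellite imagery
    hour = rng.integers(6, 19, n)               # local time of day
    X = np.column_stack([clear_sky_ghi, cloud_index, hour])

    # Toy "ground truth": clear-sky irradiance attenuated by cloudiness, plus noise.
    y = clear_sky_ghi * (1 - 0.75 * cloud_index) + rng.normal(0, 30, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LGBMRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"MAE = {mae:.1f} W/m^2, RMSE = {rmse:.1f} W/m^2")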

Open-Source Differentiable Lithography Imaging Framework arxiv.org/abs/2409.15306

The rapid evolution of the electronics industry, driven by Moore's law and the proliferation of integrated circuits, has led to significant advancements in modern society, including the Internet, wireless communication, and artificial intelligence (AI). Central to this progress is optical lithography, a critical technology in semiconductor manufacturing that accounts for approximately 30% to 40% of production costs. As semiconductor nodes shrink and transistor counts increase, optical lithography becomes increasingly vital in current integrated circuit (IC) fabrication technology. This paper introduces an open-source differentiable lithography imaging framework that leverages the principles of differentiable programming and the computational power of GPUs to enhance the precision of lithography modeling and simplify the optimization of resolution enhancement techniques (RETs). The framework models the core components of lithography as differentiable segments, allowing for the implementation of standard scalar imaging models, including the Abbe and Hopkins models, as well as their approximations. The paper compares these imaging models and provides tools for enhancing resolution, demonstrating improved semiconductor patterning performance. The open-sourced framework represents a significant advancement in lithography technology, facilitating collaboration in the field. The source code is available at https://github.com/TorchOPC/TorchLitho.
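
To illustrate what "differentiable lithography" buys you, here is a toy inverse-lithography loop in PyTorch: a Gaussian blur stands in for the optical model (the real framework implements Abbe and Hopkins imaging), a sigmoid stands in for the resist, and the mask is optimized by gradient descent to reproduce a target pattern. Everything here is an illustrative assumption, not TorchLitho code.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Target pattern: a simple line/space layout on a 64x64 grid (assumed toy example).
    target = torch.zeros(1, 1, 64, 64)
    target[..., :, 16:24] = 1.0
    target[..., :, 40:48] = 1.0

    # A Gaussian blur standing in for the optical point-spread function.
    x = torch.arange(-7, 8, dtype=torch.float32)
    g = torch.exp(-x**2 / (2 * 2.5**2))
    kernel = (g[:, None] * g[None, :])
    kernel = (kernel / kernel.sum()).view(1, 1, 15, 15)

    def aerial_image(mask):
        # Imaging model: blur the mask, then a sigmoid "resist" threshold.
        blurred = F.conv2d(mask, kernel, padding=7)
        return torch.sigmoid(50 * (blurred - 0.5))

    # Optimize a continuous mask by gradient descent so the printed image matches the target.
    mask_logits = torch.zeros_like(target, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.2)
    for step in range(200):
        mask = torch.sigmoid(mask_logits)
        loss = F.mse_loss(aerial_image(mask), target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final pattern error: {loss.item():.4f}")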

AI and Machine Learning Approaches for Predicting Nanoparticles Toxicity The Critical Role of Physiochemical Properties arxiv.org/abs/2409.15322

This research investigates the use of artificial intelligence and machine learning techniques to predict the toxicity of nanoparticles, a pressing concern due to their pervasive use in various industries and the inherent challenges in assessing their biological interactions. Employing models such as Decision Trees, Random Forests, and XGBoost, the study focuses on analyzing physicochemical properties like size, shape, surface charge, and chemical composition to determine their influence on toxicity. Our findings highlight the significant role of oxygen atoms, particle size, surface area, dosage, and exposure duration in affecting toxicity levels. The use of machine learning allows for a nuanced understanding of the intricate patterns these properties form in biological contexts, surpassing traditional analysis methods in efficiency and predictive power. These advancements aid in developing safer nanomaterials through computational chemistry, reducing reliance on costly and time-consuming experimental methods. This approach not only enhances our understanding of nanoparticle behavior in biological systems but also streamlines the safety assessment process, marking a significant stride towards integrating computational techniques in nanotoxicology.
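
A minimal sketch of the modelling approach described above: a random forest trained on physicochemical descriptors, with feature importances indicating which properties drive the predicted toxicity. The descriptor set and the synthetic labels are invented for illustration; the study itself uses curated experimental data and also evaluates Decision Trees and XGBoost.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n = 1000

    # Synthetic physicochemical descriptors (placeholders for real assay data).
    data = pd.DataFrame({
        "size_nm": rng.uniform(5, 200, n),
        "surface_area": rng.uniform(10, 300, n),
        "zeta_potential_mV": rng.normal(0, 25, n),
        "oxygen_fraction": rng.uniform(0, 0.6, n),
        "dose_ug_ml": rng.uniform(1, 100, n),
        "exposure_h": rng.choice([24, 48, 72], n),
    })
    # Toy toxicity label: smaller, highly dosed particles are more likely "toxic".
    toxic = ((data["dose_ug_ml"] / 100 + (200 - data["size_nm"]) / 200
              + rng.normal(0, 0.3, n)) > 1.0).astype(int)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print("CV accuracy:", cross_val_score(clf, data, toxic, cv=5).mean().round(3))

    clf.fit(data, toxic)
    importances = pd.Series(clf.feature_importances_, index=data.columns)
    print(importances.sort_values(ascending=False))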

A machine learning algorithm for predicting naturalized flow duration curves at human influenced sites and multiple catchment scales arxiv.org/abs/2409.15339

Regional flow duration curves (FDCs) often reflect streamflow influenced by human activities. We propose a new machine learning algorithm to predict naturalized FDCs at human-influenced sites and multiple catchment scales. Separate meta-models are developed to predict probable flow at discrete exceedance probabilities across catchments spanning multiple stream orders. Discrete exceedance flows reflect the stacking of k-fold cross-validated predictions from trained base ensemble machine learning models with and without hyperparameter tuning. The quality of individual base models reflects random stratified shuffling of split catchment records for training and testing. A meta-model is formed by retraining minimum-variance base models that are bias corrected and used to predict final flows at selected percentiles that quantify uncertainty. Separate meta-models are developed and used to predict naturalized stochastic flows at other discrete exceedance probabilities along the duration curve. The efficacy of the new method is demonstrated for predicting naturalized stochastic FDCs at human-influenced gauged catchments and ungauged stream reaches of unknown influences across Otago, New Zealand. Important findings are twofold. First, independent observations of naturalized median flows compare within a few percent of the 50th-percentile predictions from the FDC models. Second, the naturalized meta-models predict FDCs that outperform the calibrated SWAT model FDCs at gauge sites in the Taieri Freshwater Management Unit: Taieri at Tiroiti, Taieri at Sutton Creek, and Taieri River at Outram. Departures from the naturalized reference state are interpreted as flow regime changes across the duration curves. We believe these meta-models will be useful for predicting naturalized catchment FDCs across other New Zealand regions using physical catchment features available from the national database.
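
A compact sketch of the stacking idea described above for a single exceedance probability: out-of-fold (k-fold cross-validated) predictions from base learners feed quantile meta-models that return the median flow and an uncertainty band, with a crude additive bias correction. The features, targets, model choices, and the bias step are simplifications, not the paper's exact pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(6)
    n = 800

    # Synthetic catchment features (area, rainfall, slope, ...) and a "natural" flow
    # at one exceedance probability (e.g. Q50); real inputs come from gauged records.
    X = rng.normal(size=(n, 6))
    y = np.exp(1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, n))

    # Level 0: out-of-fold predictions from base learners, so the meta-model
    # never sees in-fold fits.
    base_models = [RandomForestRegressor(n_estimators=200, random_state=0),
                   Ridge(alpha=1.0)]
    stack = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in base_models])

    # Level 1: quantile meta-models give the median and an uncertainty band.
    meta = {q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
            for q in (0.05, 0.50, 0.95)}
    for q, m in meta.items():
        m.fit(stack, y)

    pred_median = meta[0.50].predict(stack)
    bias = np.median(y - pred_median)          # simple additive bias correction
    print("median flow, first 3 sites:", (pred_median[:3] + bias).round(2))
    print("90% band width, site 0:",
          round(meta[0.95].predict(stack[:1])[0] - meta[0.05].predict(stack[:1])[0], 2))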

An Informatics Framework for the Design of Sustainable, Chemically Recyclable, Synthetically-Accessible and Durable Polymers arxiv.org/abs/2409.15354

We present a novel approach to designing durable and chemically recyclable ring-opening polymerization (ROP) class polymers. This approach employs digital reactions using virtual forward synthesis (VFS) to generate over 7 million ROP polymers, and machine learning techniques to rapidly predict the thermal, thermodynamic, and mechanical properties crucial for application-specific performance and recyclability. This combined methodology enables the generation and evaluation of millions of hypothetical ROP polymers from known and commercially available molecules, guiding the selection of approximately 35,000 candidates with optimal features for sustainability and practical utility. Three of these recommended candidates have passed validation tests in the physical lab: two of the three by others, as published previously elsewhere, and one a new thiocane polymer synthesized, tested, and reported here. This paper presents the framework, methodology, and initial findings of our study, highlighting the potential of VFS and machine learning to enable a large-scale search of the polymer universe and advance the development of recyclable and environmentally benign polymers.
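
The screening step, selecting roughly 35,000 candidates from over 7 million generated polymers, amounts to filtering ML-predicted properties against application and recyclability targets. A minimal pandas sketch of such a filter is below; the property columns, thresholds, and random "predictions" are placeholders, not the real VFS outputs or trained predictors.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    n = 100_000

    # Stand-in for ML-predicted properties of virtually generated ROP polymers.
    candidates = pd.DataFrame({
        "polymer_id": np.arange(n),
        "Tg_C": rng.normal(60, 40, n),              # glass transition temperature
        "Tc_C": rng.normal(150, 80, n),             # ceiling temperature (depolymerization)
        "youngs_modulus_GPa": rng.lognormal(0.0, 0.6, n),
        "ring_size": rng.integers(4, 9, n),
    })

    # Screening rules (illustrative thresholds): durable in use, chemically
    # recyclable at accessible temperatures, synthetically tractable monomers.
    selected = candidates.query(
        "Tg_C > 80 and Tc_C > 100 and Tc_C < 250 "
        "and youngs_modulus_GPa > 1.0 and ring_size <= 7"
    )
    print(f"{len(selected)} of {n} candidates pass the screen")
    print(selected.head())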

Quantum determinism and completeness restored by indistinguishability and long-time particle detection arxiv.org/abs/2409.15390

We argue that measurement data in quantum physics can be rigorously interpreted only as the result of a statistical, macroscopic process, taking into account the indistinguishable character of identical particles. Quantum determinism is in principle possible on the condition that a fully-fledged quantum model is used to describe the measurement device, in interaction with the studied object, as one system. In contrast, any approach that relies on Born's rule separates the dynamics of a quantum system from that of the detector with which it interacts during measurement. In this work, we critically analyze the validity of this measurement postulate applied to single-event signals. In fact, the concept of an "individual" particle becomes inadequate once both indistinguishability and a scattering approach allowing an unlimited interaction time for effective detection are considered, as they should be, thereby preventing the separability of two successive measurement events. In this context, measurement data should therefore be understood only as the result of statistics over many events. Accounting for the intrinsic noise of the sources and the detectors, we also show with the illustrative cases of the Schrödinger cat and the Bell experiment that once the Born rule is abandoned at the level of a single particle, realism, locality, and causality are restored. We conclude that indiscernibility and the long-time detection process make quantum physics not fundamentally probabilistic.

Deposition simulations of realistic dosages in patient-specific airways with two- and four-way coupling arxiv.org/abs/2409.15396

Inhalers spray over 100 million drug particles into the mouth, where a significant portion of the drug may deposit. Understanding how the complex interplay between the particle and solid phases influences deposition is crucial for optimising treatments. Existing modelling studies neglect any effect of particle momentum on the fluid (one-way coupling), which may cause poor prediction of the forces acting on particles. In this study, we simulate a realistic number of particles (up to 160 million) in a patient-specific geometry. We study the effect of momentum transfer from particles to the fluid (two-way coupling) and particle-particle interactions (four-way coupling) on deposition. We also explore the effect of tracking groups of particles ('parcels') to lower computational cost. The upper airway deposition fraction increased from 0.33 (one-way coupled) to 0.87 with two-way coupling and a 10 μm particle diameter. Four-way coupling lowers upper airway deposition by approximately 10% at 100 μg dosages. We use parcel modelling to study the deposition of 4-20 μm particles, observing a significant influence of two-way coupling in each simulation. These results show that future studies should model realistic dosages for accurate prediction of deposition, which may inform clinical decision-making.
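
A zero-dimensional sketch of two of the ideas above: parcels (one computational particle standing in for many real ones) and two-way coupling (the momentum particles gain through Stokes drag is removed from the fluid). All numbers here (particle size, density, air volume, velocities) are assumptions; the actual study solves this in a patient-specific CFD geometry.

    import numpy as np

    rng = np.random.default_rng(8)

    # Parcel approach: one computational parcel stands in for many real particles,
    # so ~10^8 inhaled particles can be tracked as a few thousand parcels.
    n_parcels = 5000
    particles_per_parcel = 20_000          # 10^8 particles in total

    rho_p = 1200.0                         # kg/m^3, drug particle density (assumed)
    d_p = 4e-6                             # m, 4 micron particles
    mu = 1.8e-5                            # Pa s, air viscosity
    m_p = rho_p * np.pi * d_p**3 / 6       # mass of one particle
    tau_p = rho_p * d_p**2 / (18 * mu)     # Stokes relaxation time (~6e-5 s)

    u_fluid = 5.0                          # m/s, air speed in one cell (assumed)
    fluid_mass = 1.2 * 50e-6               # kg of air in an assumed 50 mL region
    v = rng.uniform(0.0, 1.0, n_parcels)   # parcel velocities at release

    dt, n_steps = 2e-5, 500
    for _ in range(n_steps):
        drag_accel = (u_fluid - v) / tau_p                 # Stokes drag per particle
        v += drag_accel * dt
        # Two-way coupling: momentum gained by the particles leaves the fluid.
        dp = np.sum(drag_accel) * m_p * dt * particles_per_parcel
        u_fluid -= dp / fluid_mass

    print(f"mean parcel velocity: {v.mean():.2f} m/s, fluid velocity: {u_fluid:.2f} m/s")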
