An Automated and Efficient Aerodynamic Design and Analysis Framework Integrated to PANAIR. (arXiv:2309.07923v1 [cs.CE]) arxiv.org/abs/2309.07923

Aircraft design is an iterative process that, especially in the conceptual design phase, requires repeated estimation of aerodynamic characteristics such as drag and lift coefficients, stall behavior, and velocity and pressure profiles. PanAir is a high-order aerodynamic panel-method code developed as part of the Public Domain Aeronautical Software program, mostly under NASA sponsorship. It is based on potential flow theory and is used to numerically compute the lift, induced drag, and moment coefficients of an aircraft in both the subsonic and supersonic flight regimes. According to its developers, it is the most versatile and accurate of all the linear-theory panel codes. PanAir is a legacy code that requires geometric input data in a PanAir-specific format; however, commonly used Computer-Aided Design software packages no longer conform to this input format. Likewise, PanAir writes its results to a PanAir-specific output file that is not compatible with commonly used visualization software. Preparing the input geometry required by PanAir and handling its output therefore involve significant manual pre- and post-processing with intermediary software. The work proposed here is an automated pre- and post-processor to be used together with PanAir. With the environment proposed in this work, the manual manipulation of input and output data through several intermediary tools is bypassed successfully. The proposed environment is validated on a Cessna 210 aircraft geometry with a modified NLF(1)-0414 airfoil. The aircraft is numerically analyzed using PanAir together with the proposed environment.
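
As a rough illustration of the kind of pre-processing step such an environment automates, the sketch below builds a structured surface point network for a simple wing and writes it to a column-formatted text file. The airfoil (a generic NACA 4-digit thickness law), the dimensions, and the file layout are all placeholders for illustration; the actual PanAir network card format and the NLF(1)-0414 geometry are not reproduced here.

```python
# Hypothetical sketch of an automated pre-processing step: build a structured
# surface "network" (rows = spanwise stations, columns = chordwise points) and
# dump it to a plain-text file.  The column layout below is illustrative only
# and is NOT the actual PanAir card format.
import numpy as np

def naca_thickness(x, t=0.12):
    """Classical NACA 4-digit half-thickness distribution (placeholder airfoil)."""
    return 5 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1015 * x**4)

def wing_network(n_chord=25, n_span=11, span=10.0, chord=1.5):
    """Upper-surface point network for a simple untapered wing."""
    xc = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n_chord)))  # cosine spacing
    y_stations = np.linspace(0.0, span / 2, n_span)
    pts = np.empty((n_span, n_chord, 3))
    for j, y in enumerate(y_stations):
        pts[j, :, 0] = xc * chord                   # x: chordwise
        pts[j, :, 1] = y                            # y: spanwise
        pts[j, :, 2] = naca_thickness(xc) * chord   # z: upper surface
    return pts

def write_network(pts, path="wing_upper.net"):
    n_span, n_chord, _ = pts.shape
    with open(path, "w") as f:
        f.write(f"UPPER-WING  {n_span:4d} {n_chord:4d}\n")   # illustrative header
        for row in pts.reshape(-1, 3):
            f.write("".join(f"{v:10.5f}" for v in row) + "\n")

write_network(wing_network())
```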

Flat origami is Turing Complete. (arXiv:2309.07932v1 [math.CO]) arxiv.org/abs/2309.07932

Flat origami refers to the folding of flat, zero-curvature paper such that the finished object lies in a plane. Mathematically, flat origami consists of a continuous, piecewise isometric map $f:P\subseteq\mathbb{R}^2\to\mathbb{R}^2$ along with a layer ordering $\lambda_f:P\times P\to \{-1,1\}$ that tracks which points of $P$ are above/below others when folded. The set of crease lines that a flat origami makes (i.e., the set on which the mapping $f$ is non-differentiable) is called its \textit{crease pattern}. Flat origami mappings and their layer orderings can possess surprisingly intricate structure. For instance, determining whether or not a given straight-line planar graph drawn on $P$ is the crease pattern for some flat origami has been shown to be an NP-complete problem, and this result from 1996 led to numerous explorations in computational aspects of flat origami. In this paper we prove that flat origami, when viewed as a computational device, is Turing complete. We do this by showing that flat origami crease patterns with \textit{optional creases} (creases that might be folded or remain unfolded depending on constraints imposed by other creases or inputs) can be constructed to simulate Rule 110, a one-dimensional cellular automaton that was proven to be Turing complete by Matthew Cook in 2004.
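
Rule 110 itself, the automaton the optional-crease gadgets are built to emulate, is easy to state concretely. The minimal sketch below simulates it on a ring of cells; it illustrates only the target computation, not the origami construction.

```python
# Minimal Rule 110 simulator: each new cell is determined by its
# (left, center, right) neighborhood via bit n of the number 110,
# where n = 4*left + 2*center + right.  Periodic boundary conditions.
def rule110_step(cells):
    n = len(cells)
    return [
        (110 >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell on a ring of 32 cells, evolved for a few steps.
state = [0] * 31 + [1]
for _ in range(16):
    print("".join(".#"[c] for c in state))
    state = rule110_step(state)
```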

Landscape-Sketch-Step: An AI/ML-Based Metaheuristic for Surrogate Optimization Problems. (arXiv:2309.07936v1 [cs.LG]) arxiv.org/abs/2309.07936

In this paper, we introduce a new heuristic for global optimization in scenarios where extensive evaluations of the cost function are expensive, inaccessible, or even prohibitive. The method, which we call Landscape-Sketch-and-Step (LSS), combines Machine Learning, Stochastic Optimization, and Reinforcement Learning techniques, relying on historical information from previously sampled points to make judicious choices of the parameter values at which the cost function should be evaluated. Unlike optimization by Replica Exchange Monte Carlo methods, the number of cost-function evaluations required by this approach is comparable to that used by Simulated Annealing, a quality that is especially important in contexts such as high-throughput or high-performance computing, where evaluations are either computationally expensive or take a long time to perform. The method also differs from standard Surrogate Optimization techniques in that it does not construct a surrogate model that aims to approximate or reconstruct the objective function. We illustrate our method by applying it to low-dimensional optimization problems (dimensions 1, 2, 4, and 8) that mimic the known difficulties of minimization on the rugged energy landscapes often seen in Condensed Matter Physics, where cost functions are plagued with local minima. When compared to classical Simulated Annealing, LSS shows an effective acceleration of the optimization process.
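
For reference, the comparison baseline named in the abstract, classical simulated annealing, can be sketched in a few lines. The rugged test function below is an illustrative stand-in, not one of the paper's benchmarks, and LSS itself is not reproduced here.

```python
# Classical simulated annealing on a rugged 1D landscape (the baseline LSS is
# compared against).  The test function is an illustrative many-minima
# landscape, not a benchmark from the paper.
import math, random

def rugged(x):
    return 0.05 * x**2 + math.sin(5 * x) + 0.5 * math.cos(13 * x)

def simulated_annealing(f, x0=4.0, steps=20_000, t0=2.0, t_end=1e-3, sigma=0.3):
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)          # geometric cooling schedule
        cand = x + random.gauss(0.0, sigma)            # local Gaussian proposal
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc                           # Metropolis acceptance
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

print(simulated_annealing(rugged))
```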

Slow Invariant Manifolds of Singularly Perturbed Systems via Physics-Informed Machine Learning. (arXiv:2309.07946v1 [math.DS]) arxiv.org/abs/2309.07946

We present a physics-informed machine-learning (PIML) approach for the approximation of slow invariant manifolds (SIMs) of singularly perturbed systems, providing functionals in explicit form that facilitate the construction and numerical integration of reduced-order models (ROMs). The proposed scheme solves a partial differential equation corresponding to the invariance equation (IE) within the Geometric Singular Perturbation Theory (GSPT) framework. For the solution of the IE, we use two neural network structures, namely feedforward neural networks (FNNs) and random projection neural networks (RPNNs), with symbolic differentiation for the computation of the gradients required for the learning process. The efficiency of our PIML method is assessed on three benchmark problems, namely the Michaelis-Menten mechanism, the target-mediated drug disposition reaction mechanism, and the 3D Sel'kov model. We show that the proposed PIML scheme provides approximations of equivalent or even higher accuracy than those provided by other traditional GSPT-based methods and, importantly, that for all practical purposes it is not affected by the magnitude of the perturbation parameter. This is of particular importance, as there are many systems for which the gap between the fast and slow timescales is not that large, yet ROMs can still be constructed. A comparison of the computational costs of symbolic, automatic, and numerical approximation of the required derivatives in the learning process is also provided.
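
For orientation, the invariance equation in question can be written down in the standard fast-slow form of GSPT; the notation below ($x$ slow, $y$ fast, $h$ the SIM graph) is generic rather than the paper's.

```latex
% Standard fast-slow form and the invariance equation (IE) that the networks
% are trained to satisfy (generic notation, not necessarily the paper's).
\[
  \dot{x} = f(x,y), \qquad \varepsilon\,\dot{y} = g(x,y), \qquad
  y = h(x,\varepsilon) \ \text{(SIM as a graph over the slow variables)},
\]
\[
  \varepsilon\,\frac{\partial h}{\partial x}(x,\varepsilon)\,
    f\bigl(x, h(x,\varepsilon)\bigr) = g\bigl(x, h(x,\varepsilon)\bigr).
\]
```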

Classifying fermionic states via many-body correlation measures. (arXiv:2309.07956v1 [quant-ph]) arxiv.org/abs/2309.07956

A pure fermionic state with a fixed particle number is said to be correlated if it deviates from a Slater determinant. In the present work we show that this notion can be refined, classifying fermionic states relative to $k$-body correlations. We capture such correlations by a family of measures $\omega_k$, which we call twisted purities. Twisted purity is an explicit function of the $k$-fermion reduced density matrix, insensitive to global single-particle transformations. Vanishing of $\omega_k$ for a given $k$ generalizes so-called Plücker relations on the state amplitudes and puts the state in a class ${\cal G}_k$. Sets ${\cal G}_k$ are nested in $k$, ranging from Slater determinants for $k = 1$ up to the full $n$-fermion Hilbert space for $k = n + 1$. We find various physically relevant states inside and close to ${\cal G}_{k=O(1)}$, including truncated configuration-interaction states, perturbation series around Slater determinants, and some nonperturbative eigenstates of the 1D Hubbard model. For each $k = O(1)$, we give an explicit ansatz with a polynomial number of parameters that covers all states in ${\cal G}_k$. Potential applications of this ansatz and its connections to the coupled-cluster wavefunction are discussed.
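
For orientation, the simplest instance of the $k=1$ case is the classical Plücker relation: a two-fermion state $\sum_{i<j} A_{ij}\, a_i^\dagger a_j^\dagger |0\rangle$ in four modes is a Slater determinant exactly when the quadratic relation below holds. This is the textbook example, not the paper's definition of $\omega_k$.

```latex
% Classical k = 1 illustration: the single Pluecker relation for two fermions
% in four modes (not the paper's twisted purity \omega_k).
\[
  A_{12}A_{34} - A_{13}A_{24} + A_{14}A_{23} = 0 .
\]
```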

Improved Shortest Path Restoration Lemmas for Multiple Edge Failures: Trade-offs Between Fault-tolerance and Subpaths. (arXiv:2309.07964v1 [cs.DS]) arxiv.org/abs/2309.07964

The restoration lemma is a classic result by Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [PODC '01], which relates the structure of shortest paths in a graph $G$ before and after some edges in the graph fail. Their work shows that, after one edge failure, any replacement shortest path avoiding this failing edge can be partitioned into two pre-failure shortest paths. More generally, this implies an additive tradeoff between fault tolerance and subpath count: for any $f, k$, we can partition any $f$-edge-failure replacement shortest path into $k+1$ subpaths which are each an $(f-k)$-edge-failure replacement shortest path. This generalized result has found applications in routing, graph algorithms, fault tolerant network design, and more. Our main result improves this to a multiplicative tradeoff between fault tolerance and subpath count. We show that for all $f, k$, any $f$-edge-failure replacement path can be partitioned into $O(k)$ subpaths that are each an $(f/k)$-edge-failure replacement path. We also show an asymptotically matching lower bound. In particular, our results imply that the original restoration lemma is exactly tight in the case $k=1$, but can be significantly improved for larger $k$. We also show an extension of this result to weighted input graphs, and we give efficient algorithms that compute path decompositions satisfying our improved restoration lemmas.
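
The basic object in these statements, an $f$-edge-failure replacement shortest path, is simply a shortest $s$-$t$ path in the graph with the failed edges removed. The sketch below computes one with networkx; the $O(k)$-subpath decomposition itself is the paper's contribution and is not reproduced here.

```python
# A replacement shortest path under edge failures: a shortest s-t path in G
# with the failed edge set F removed.  (The O(k)-subpath decomposition from
# the paper is not sketched here.)
import networkx as nx

def replacement_shortest_path(G, s, t, failed_edges):
    H = G.copy()
    H.remove_edges_from(failed_edges)
    return nx.shortest_path(H, s, t, weight="weight")

# Toy example: a weighted cycle with a chord; failing (0, 1) reroutes s-t.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1), (0, 2, 5)])
print(replacement_shortest_path(G, 0, 2, failed_edges=[(0, 1)]))  # -> [0, 3, 2]
```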

Andrew Wiles' Proof of Fermat's Last Theorem, As Expected, Does Not Require a Large Cardinal Axiom. A Discussion of Colin McLarty's "The Large Structures of Grothendieck Founded on Finite-Order Arithmetic". (arXiv:2309.07151v1 [math.LO]) arxiv.org/abs/2309.07151

Andrew Wiles' proof of Fermat's Last Theorem, with an assist from Richard Taylor, focused renewed attention on the foundational question of whether the use of Grothendieck's Universes in number theory entails that the results proved therewith make essential use of the large cardinal axiom that there is an uncountable strongly inaccessible cardinal, or more generally, that every cardinal is less than a strongly inaccessible cardinal. If one traces back through the references in Wiles' proof, one finds that the proof does depend upon explicit use of Grothendieck's Universes. Thus, prima facie, it appears that the proof of Fermat's Last Theorem depends upon a foundation that is strictly stronger than ZFC. Colin McLarty removes this appearance by demonstrating that all of Grothendieck's large tools, i.e., entities whose construction depended upon Grothendieck's Universes, can instead be founded on a fragment of ZFC with the logical strength of Finite-Order Arithmetic. The goal of this article is to present overviews both of the history of Fermat's Last Theorem and of McLarty's foundation for Grothendieck's large tools.

Distribution Grid Line Outage Identification with Unknown Pattern and Performance Guarantee. (arXiv:2309.07157v1 [cs.LG]) arxiv.org/abs/2309.07157

Line outage identification in distribution grids is essential for sustainable grid operation. In this work, we propose a practical yet robust detection approach that utilizes only readily available voltage magnitudes, eliminating the need for costly phase angles or power flow data. Given the sensor data, many existing detection methods based on change-point detection require prior knowledge of outage patterns, which is unavailable in real-world outage scenarios. To remove this impractical requirement, we propose a data-driven method that learns the parameters of the post-outage distribution through gradient descent. However, directly applying gradient descent presents feasibility issues. To address this, we modify the approach by adding a Bregman divergence constraint that controls the trajectory of the parameter updates, which eliminates the feasibility problems. As timely operation is key, we prove that the optimal parameters can be learned with convergence guarantees by leveraging the statistical and physical properties of voltage data. We evaluate our approach on many representative distribution grids and real load profiles with 17 outage configurations. The results show that we can detect and localize outages in a timely manner using only voltage magnitudes and without assuming prior knowledge of outage patterns.
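
The abstract describes the Bregman-divergence constraint only at a high level; a generic Bregman-proximal (mirror-descent) step illustrates the underlying idea of controlling the update trajectory. The negative-entropy choice below, which turns the step into a positivity-preserving multiplicative update, is an assumption made for illustration, not the paper's formulation.

```python
# Generic Bregman-proximal (mirror-descent) update: each gradient step is
# regularized by a Bregman divergence to the previous iterate.  With the
# negative-entropy mirror map the step becomes a multiplicative update that
# keeps the parameters strictly positive -- one common way such a constraint
# removes feasibility problems.  Illustrative choice, not the paper's method.
import numpy as np

def mirror_descent_step(theta, grad, eta=0.1):
    """argmin_w  <grad, w> + (1/eta) * KL(w || theta)  over w > 0."""
    return theta * np.exp(-eta * grad)

# Toy usage: minimize ||theta - target||^2 over positive parameters.
target = np.array([0.5, 2.0, 1.0])
theta = np.ones(3)
for _ in range(200):
    grad = 2 * (theta - target)          # gradient of the quadratic loss
    theta = mirror_descent_step(theta, grad)
print(np.round(theta, 3))                # approaches [0.5, 2.0, 1.0]
```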
