A machine learning and feature engineering approach for the prediction of the uncontrolled re-entry of space objects. (arXiv:2303.10183v1 [cs.LG]) arxiv.org/abs/2303.10183

The continuously growing number of objects orbiting the Earth is expected to be accompanied by an increasing frequency of objects re-entering the Earth's atmosphere. Many of these re-entries will be uncontrolled, making their prediction challenging and subject to several uncertainties. Traditionally, re-entry predictions are based on the propagation of the object's dynamics using state-of-the-art modelling techniques for the forces acting on the object. However, modelling errors, particularly those related to the prediction of atmospheric drag, may result in poor prediction accuracy. In this context, we explore the possibility of a paradigm shift from a physics-based approach to a data-driven approach. To this aim, we present the development of a deep learning model for the re-entry prediction of uncontrolled objects in Low Earth Orbit (LEO). The model is based on a modified version of the Sequence-to-Sequence architecture and is trained on the average altitude profile derived from a set of Two-Line Element (TLE) data of over 400 bodies. The novelty of the work lies in introducing into the deep learning model, alongside the average altitude, three new input features: a drag-like coefficient (B*), the average solar index, and the area-to-mass ratio of the object. The developed model is tested on a set of objects studied in the Inter-Agency Space Debris Coordination Committee (IADC) campaigns. The results show that the best performance is obtained on bodies characterised by the same drag-like coefficient and eccentricity distribution as the training set.
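
For intuition, the sketch below shows one way such an encoder-decoder model could ingest the extra features; the PyTorch implementation, layer sizes, and feature wiring are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a Seq2Seq re-entry predictor: per-step inputs are
# [average altitude, B*, solar index]; the area-to-mass ratio is a static
# input fed to every decoding step. Sizes are illustrative only.
import torch
import torch.nn as nn

class ReentrySeq2Seq(nn.Module):
    def __init__(self, n_feats=3, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(n_feats, hidden, batch_first=True)
        # Decoder consumes the previous altitude estimate plus the static
        # area-to-mass ratio at each step.
        self.decoder = nn.GRU(2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, past, area_to_mass, horizon):
        # past: (batch, T_in, 3); area_to_mass: (batch, 1)
        _, h = self.encoder(past)
        y = past[:, -1:, :1]                      # last observed altitude
        outputs = []
        for _ in range(horizon):
            step_in = torch.cat([y, area_to_mass.unsqueeze(1)], dim=-1)
            out, h = self.decoder(step_in, h)
            y = self.head(out)                    # next altitude estimate
            outputs.append(y)
        return torch.cat(outputs, dim=1)          # (batch, horizon, 1)

# Example: predict 30 future altitude steps from 60 past steps
model = ReentrySeq2Seq()
pred = model(torch.randn(8, 60, 3), torch.randn(8, 1), horizon=30)
print(pred.shape)  # torch.Size([8, 30, 1])
```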

Quantifying Space-Time Load Shifting Flexibility in Electricity Markets. (arXiv:2303.10217v1 [eess.SY]) arxiv.org/abs/2303.10217

The power grid is undergoing significant restructuring driven by the adoption of wind/solar power and the incorporation of new flexible technologies that can shift load in space and time (e.g., data centers, battery storage, and modular manufacturing). Load shifting is needed to mitigate space-time fluctuations associated with wind/solar power and other disruptions (e.g., extreme weather). The impact of load shifting on electricity markets is typically quantified via sensitivity analysis, which aims to assess impacts in terms of price volatility and total welfare. This sensitivity approach does not explicitly quantify operational flexibility (e.g., range or probability of feasible operation). In this work, we present a computational framework to enable this; specifically, we quantify operational flexibility by assessing how much uncertainty in net loads (which capture uncertain power injections/withdrawals) can be tolerated by the system under varying levels of load shifting capacity. The proposed framework combines optimization formulations that quantify operational flexibility with power grid models that capture load shifting in the form of virtual links (pathways that transfer load across space-time). Our case studies reveal that adding a single virtual link that shifts load in either space or time can lead to dramatic improvements in system-wide flexibility; this is because shifting relieves space-time congestion that results from transmission constraints and generator ramping constraints. Our results provide insights into how the incorporation of flexible technologies can lead to non-intuitive, system-wide gains in flexibility.
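
As a toy illustration of the flexibility metric described here, the sketch below maximizes the uniform net-load increase a single-bus, two-period system can tolerate, with and without a temporal "virtual link" that shifts load between the periods; the model, numbers, and SciPy formulation are illustrative assumptions rather than the paper's framework.

```python
# Maximize the tolerable uniform net-load increase "delta" for a single
# generator serving two periods, optionally with a load-shifting virtual link.
import numpy as np
from scipy.optimize import linprog

def max_tolerable_increase(d, G, R, shift_cap):
    # Variables: x = [delta, shift, g1, g2]; maximize delta (minimize -delta).
    c = np.array([-1.0, 0.0, 0.0, 0.0])
    # Power balance per period: g_t = d_t + delta -/+ shift
    A_eq = np.array([[-1.0,  1.0, 1.0, 0.0],
                     [-1.0, -1.0, 0.0, 1.0]])
    b_eq = np.array(d, dtype=float)
    # Generator ramping between the two periods: |g2 - g1| <= R
    A_ub = np.array([[0.0, 0.0, -1.0,  1.0],
                     [0.0, 0.0,  1.0, -1.0]])
    b_ub = np.array([R, R])
    bounds = [(0, None), (-shift_cap, shift_cap), (0, G), (0, G)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0] if res.success else 0.0

d = [70.0, 90.0]           # period demands
G, R = 120.0, 20.0         # generator capacity and ramp limit
print(max_tolerable_increase(d, G, R, shift_cap=0.0))   # no virtual link
print(max_tolerable_increase(d, G, R, shift_cap=15.0))  # with load shifting
```

Even in this two-period toy, the single temporal link enlarges the tolerable net-load range by relieving the binding capacity and ramping constraints, mirroring the qualitative claim in the abstract.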

Synchronisation in TCP networks with Drop-Tail Queues. (arXiv:2303.10220v1 [cs.NI]) arxiv.org/abs/2303.10220

The design of transport protocols, embedded in end-systems, and the choice of buffer sizing strategies, within network routers, play an important role in performance analysis of the Internet. In this paper, we take a dynamical systems perspective on the interplay between fluid models for transport protocols and some router buffer sizing regimes. Among the flavours of TCP, we analyse Compound, as well as Reno and Illinois. The models for these TCP variants are coupled with a Drop-Tail policy, currently deployed in routers, in two limiting regimes: a small and an intermediate buffer regime. The topology we consider has two sets of long-lived TCP flows, each passing through separate edge routers, which merge at a common core router. Our analysis is inspired by time delayed coupled oscillators, where we obtain analytical conditions under which the sets of TCP flows synchronise. These conditions are made explicit in terms of coupling strengths, which depend on protocol parameters, and on network parameters like feedback delay, link capacity and buffer sizes. We find that variations in the coupling strengths can lead to limit cycles in the queue size. Packet-level simulations corroborate the analytical insights. For design, small Drop-Tail buffers are preferable over intermediate buffers as they can ensure both low latency and stable queues.
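
The following sketch simulates a simplified fluid model in this spirit: two TCP Reno aggregates with different feedback delays share a core link whose small Drop-Tail buffer is approximated by a rate-dependent loss probability. The equations, loss model, and parameters are stand-ins, not the paper's models of Compound, Reno and Illinois.

```python
# Euler integration of a delayed TCP Reno fluid model with a small-buffer
# Drop-Tail loss approximation p(y) = min(1, (y/C)**B). Illustrative only.
import numpy as np

C, B = 1000.0, 15                  # core link capacity (pkts/s), buffer size (pkts)
tau = np.array([0.20, 0.25])       # round-trip delays of the two flow aggregates (s)
dt, T = 1e-3, 60.0
steps = int(T / dt)
d_steps = (tau / dt).astype(int)   # feedback delays in Euler steps

W = np.full((steps, 2), 10.0)      # average congestion windows (pkts)
for k in range(d_steps.max(), steps - 1):
    # delayed aggregate arrival rate and loss probability seen by each aggregate
    y_del = np.array([(W[k - d] / tau).sum() for d in d_steps])
    p_del = np.minimum(1.0, (y_del / C) ** B)
    W_del = np.array([W[k - d, i] for i, d in enumerate(d_steps)])
    # Reno-style fluid model: additive increase, multiplicative decrease on loss
    dW = 1.0 / tau - 0.5 * W[k] * (W_del / tau) * p_del
    W[k + 1] = np.maximum(W[k] + dt * dW, 1.0)

# Inspect the tail of the trajectories: a fixed point vs. sustained oscillations
print(W[-5000::1000])
```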

A nested hierarchy of second order upper bounds on system failure probability. (arXiv:2303.09557v1 [math.PR]) arxiv.org/abs/2303.09557

For a coherent, binary system made up of binary elements, the exact failure probability requires knowledge of statistical dependence of all orders among the minimal cut sets. Since dependence among the cut sets beyond the second order is generally difficult to obtain, second order bounds on system failure probability have practical value. The upper bound is conservative by definition and can be adopted in reliability-based decision making. In this paper we propose a new hierarchy of m-level second order upper bounds, Bm: the well-known Kounias-Vanmarcke-Hunter-Ditlevsen (KVHD) bound, the current standard for upper bounds using second order joint probabilities, turns out to be the weakest member of this family (m = 1). We prove that Bm is non-increasing with level m in every ordering of the cut sets, and derive conditions under which Bm+1 is strictly less than Bm for any m and any ordering. We also derive conditions under which the optimal level m + 1 bound is strictly less than the optimal level m bound, and show that the probability of this improvement asymptotically approaches 1 as long as the second order joint probabilities are constrained only by the pair of corresponding first order probabilities. Numerical examples show that our second order upper bounds can yield tighter values than previously achieved and in every case exhibit considerably less scatter across the n! orderings of the cut sets than KVHD bounds. Our results may therefore lead to more efficient identification of the optimal upper bound when coupled with existing linear programming and tree search based approaches.
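
For reference, the sketch below computes the baseline KVHD upper bound (the m = 1 member of the proposed hierarchy) for a given ordering, and its scatter over all orderings of a toy example; the probabilities are made up, and the new Bm bounds themselves are not reproduced.

```python
# KVHD second order upper bound for a given ordering of the cut sets:
# P(union) <= P(A_1) + sum_{k>=2} [ P(A_k) - max_{j earlier} P(A_k & A_j) ]
import itertools
import numpy as np

def kvhd_upper_bound(p1, p2, order):
    """p1[i] = P(A_i); p2[i, j] = P(A_i & A_j); order = permutation of cut sets."""
    bound = p1[order[0]]
    for k in range(1, len(order)):
        i = order[k]
        bound += p1[i] - max(p2[i, j] for j in order[:k])
    return bound

# Small illustrative example with 4 cut sets (numbers are hypothetical)
p1 = np.array([0.05, 0.04, 0.03, 0.02])
p2 = np.array([[0.0,   0.01,  0.005, 0.002],
               [0.01,  0.0,   0.008, 0.004],
               [0.005, 0.008, 0.0,   0.001],
               [0.002, 0.004, 0.001, 0.0]])

bounds = [kvhd_upper_bound(p1, p2, list(perm))
          for perm in itertools.permutations(range(4))]
print(min(bounds), max(bounds))   # scatter across all n! orderings
```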

Methodology for Capacity Credit Evaluation of Physical and Virtual Energy Storage in Decarbonized Power System. (arXiv:2303.09560v1 [eess.SY]) arxiv.org/abs/2303.09560

Energy storage (ES) and virtual energy storage (VES) are key components in realizing power system decarbonization. Although ES and VES have been proven to deliver various types of grid services, little work has so far provided a systematic framework for quantifying their adequacy contribution and credible capacity value while incorporating human and market behavior. Therefore, this manuscript proposes a novel framework to evaluate the capacity credit (CC) of ES and VES. To address system capacity inadequacy and the market behavior of storage, a two-stage coordinated dispatch is proposed to achieve a trade-off between day-ahead self-energy management of resources and efficient adjustment to real-time failures. We further model human behavior in storage operations and incorporate two types of decision-independent uncertainties (DIUs), namely operating state and self-consumption, and one type of decision-dependent uncertainty (DDU), namely available capacity, into the proposed dispatch. Furthermore, novel reliability and CC indices, such as the equivalent physical storage capacity (EPSC), are introduced to evaluate the practical and theoretical adequacy contribution of ES and VES, as well as their ability to displace generation and physical storage while maintaining equivalent system adequacy. Exhaustive case studies based on the IEEE RTS-79 system and real-world data verify the significant consequences (10%-70% overestimation of CC) of overlooking DIUs and DDUs in previous works, while the proposed method outperforms alternatives and generates credible and realistic results. Finally, we investigate key factors affecting the adequacy contribution of ES and VES, and provide reasonable suggestions for better utilization of the flexibility of ES and VES in decarbonized power systems.
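
As a rough illustration of the capacity-credit idea (not the paper's EPSC metric, two-stage dispatch, or uncertainty modelling), the sketch below estimates loss-of-load hours for a toy load and generation trace with and without a greedily dispatched storage unit, then bisects for the firm capacity that achieves the same reliability.

```python
# Toy equivalent-firm-capacity calculation for a single storage unit.
# All data, the greedy dispatch, and the bisection target are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
load = 800 + 200 * np.sin(np.arange(hours) * 2 * np.pi / 24) + rng.normal(0, 50, hours)
available = 1000.0 * rng.choice([1.0, 0.85], size=hours, p=[0.9, 0.1])  # random derating

def lolh(extra_firm=0.0, e_cap=0.0, p_cap=0.0, eta=0.9):
    """Loss-of-load hours with optional firm capacity and a greedy storage unit
    of energy capacity e_cap (MWh) and power rating p_cap (MW)."""
    soc, shortfall_hours = 0.0, 0
    for t in range(hours):
        margin = available[t] + extra_firm - load[t]
        if margin >= 0:                                   # surplus: charge
            charge = min(margin, p_cap, (e_cap - soc) / eta)
            soc += eta * charge
        else:                                             # deficit: discharge
            discharge = min(-margin, p_cap, soc)
            soc -= discharge
            shortfall_hours += discharge < -margin
    return shortfall_hours

target = lolh(e_cap=400.0, p_cap=100.0)       # reliability with the storage unit
lo, hi = 0.0, 200.0
for _ in range(30):                           # bisect for the equivalent firm capacity
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lolh(extra_firm=mid) > target else (lo, mid)
print(f"~{hi:.1f} MW firm capacity matches a 100 MW / 400 MWh storage unit")
```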

One-Bit Quadratic Compressed Sensing: From Sample Abundance to Linear Feasibility. (arXiv:2303.09594v1 [cs.IT]) arxiv.org/abs/2303.09594

One-bit quantization with time-varying sampling thresholds has recently found significant utilization potential in statistical signal processing applications due to its relatively low power consumption and low implementation cost. In addition to such advantages, an attractive feature of one-bit analog-to-digital converters (ADCs) is their superior sampling rates as compared to their conventional multi-bit counterparts. This characteristic endows one-bit signal processing frameworks with what we refer to as sample abundance. On the other hand, many signal recovery and optimization problems are formulated as (possibly non-convex) quadratic programs with linear feasibility constraints in the one-bit sampling regime. We demonstrate, with a particular focus on quadratic compressed sensing, that the sample abundance paradigm allows for the transformation of such quadratic problems to merely a linear feasibility problem by forming a large-scale overdetermined linear system; thus removing the need for costly optimization constraints and objectives. To efficiently tackle the emerging overdetermined linear feasibility problem, we further propose an enhanced randomized Kaczmarz algorithm, called Block SKM. Several numerical results are presented to illustrate the effectiveness of the proposed methodologies.
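
A minimal sketch of the Kaczmarz-type idea on a one-bit linear feasibility problem is shown below; it uses the plain sampling Kaczmarz-Motzkin (SKM) update rather than the paper's Block SKM, and it skips the quadratic-to-linear lifting, so the data model and parameters are illustrative assumptions only.

```python
# SKM for the one-bit feasibility problem s_i * (a_i^T x - tau_i) >= 0,
# written as A @ x >= b: sample a block of rows, project onto the most
# violated sampled halfspace boundary. Illustrative, not the paper's Block SKM.
import numpy as np

rng = np.random.default_rng(1)

def skm_feasibility(A, b, n_iter=5000, beta=50):
    m, n = A.shape
    x = np.zeros(n)
    row_norms_sq = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        idx = rng.choice(m, size=beta, replace=False)
        residual = A[idx] @ x - b[idx]
        j = idx[np.argmin(residual)]            # most violated sampled constraint
        viol = A[j] @ x - b[j]
        if viol < 0:                            # project onto {y : a_j^T y = b_j}
            x -= (viol / row_norms_sq[j]) * A[j]
    return x

# Synthetic one-bit experiment with time-varying thresholds (linear measurements
# stand in for the lifted quadratic ones).
n, m = 20, 4000
x_true = rng.normal(size=n)
A_raw = rng.normal(size=(m, n))
tau = rng.normal(size=m)                        # time-varying sampling thresholds
s = np.sign(A_raw @ x_true - tau)               # one-bit sign measurements
A = s[:, None] * A_raw                          # s_i * (a_i^T x) >= s_i * tau_i
b = s * tau
x_hat = skm_feasibility(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```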
