Graph Neural Network-Based Pipeline for Track Finding in the Velo at LHCb arxiv.org/abs/2406.12869

Over the next decade, increases in instantaneous luminosity and detector granularity will amplify the amount of data that has to be analysed by high-energy physics experiments, whether in real time or offline, by an order of magnitude. The reconstruction of charged particle tracks, which has always been a crucial element of offline data processing pipelines, must increasingly be deployed from the very first stages of real-time processing to enable experiments to achieve their physics goals. Graph Neural Networks (GNNs) have received a great deal of attention in the community because their computational complexity scales nearly linearly with the number of hits in the detector, unlike conventional algorithms, which often scale quadratically or worse. This paper presents ETX4VELO, a GNN-based track-finding pipeline tailored for the Run 3 LHCb experiment's Vertex Locator, in the context of LHCb's fully GPU-based first-level trigger system, Allen. Currently implemented in Python, ETX4VELO offers the ability to reconstruct tracks with shared hits using a novel triplet-based method. When benchmarked against the traditional track-finding algorithm in Allen, this GNN-based approach not only matches but occasionally surpasses its physics performance. In particular, the fraction of fake tracks is reduced from over 2% to below 1%, and the efficiency to reconstruct electrons is improved. While achieving comparable physics performance is a milestone, the immediate priority remains implementing ETX4VELO in Allen in order to determine and optimise its throughput and meet the demands of this high-rate environment.
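The triplet idea can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual ETX4VELO code: the edge list, scores, and threshold below are invented, and in the real pipeline the edge scores come from a trained GNN.

```python
# Minimal sketch of triplet-based track building (hypothetical data):
# a GNN scores hit-to-hit edges, and pairs of surviving edges that share
# a middle hit are joined into triplets. Because one hit can appear in
# several triplets, tracks can legitimately share hits.

def build_triplets(edges, scores, threshold=0.5):
    """edges: list of (hit_a, hit_b); scores: GNN edge scores in [0, 1]."""
    kept = [e for e, s in zip(edges, scores) if s > threshold]
    # Index surviving edges by their first hit to join (a, b) with (b, c).
    by_first = {}
    for a, b in kept:
        by_first.setdefault(a, []).append(b)
    triplets = []
    for a, b in kept:
        for c in by_first.get(b, []):
            triplets.append((a, b, c))
    return triplets

edges = [(0, 1), (1, 2), (1, 3), (4, 1)]   # hit 1 is shared by two branches
scores = [0.9, 0.8, 0.7, 0.2]              # the last edge fails the cut
print(build_triplets(edges, scores))       # [(0, 1, 2), (0, 1, 3)]
```

Both triplets keep the shared edge (0, 1), which is exactly what a pure edge-labelling approach struggles to represent.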

Water Cherenkov muon veto for the COSINUS experiment: design and simulation optimization arxiv.org/abs/2406.12870

COSINUS is a dark matter (DM) direct search experiment that uses sodium iodide (NaI) crystals as cryogenic calorimeters. Thanks to the low nuclear recoil energy threshold and event-by-event discrimination capability, COSINUS will address the long-standing DM claim made by the DAMA/LIBRA collaboration. The experiment is currently under construction at the Laboratori Nazionali del Gran Sasso, Italy, and employs a large cylindrical water tank as a passive shield to meet the required background rate. However, muon-induced neutrons can mimic a DM signal, therefore requiring an active veto system, which is achieved by instrumenting the water tank with an array of photomultiplier tubes (PMTs). This study optimizes the number, arrangement, and trigger conditions of the PMTs as well as the size of an optically invisible region. The objective was to maximize the muon veto efficiency while minimizing the accidental trigger rate due to ambient and instrumental background. The final configuration predicts a veto efficiency of 99.63 ± 0.16% and 44.4 ± 5.6% in the tagging of muon events and showers of secondary particles, respectively. The active veto will reduce the cosmogenic neutron background rate to 0.11 ± 0.02 cts·kg⁻¹·year⁻¹, corresponding to less than one background event in the region of interest for the whole COSINUS-1π exposure of 1000 kg·days.
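A quoted efficiency like 99.63 ± 0.16% typically comes from counting tagged events in simulation. As a hedged sketch (the counts below are made up, not the actual COSINUS simulation statistics), the simplest binomial estimate looks like this:

```python
import math

# Illustrative efficiency estimate from simulated event counts.
# The numbers are invented; a real analysis may also use Clopper-Pearson
# or Wilson intervals rather than this simple binomial error.

def efficiency(tagged, total):
    eff = tagged / total
    err = math.sqrt(eff * (1.0 - eff) / total)  # simple binomial error
    return eff, err

eff, err = efficiency(tagged=3985, total=4000)
print(f"muon veto efficiency: {100 * eff:.2f} +/- {100 * err:.2f} %")
```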

The Design, Implementation, and Performance of the LZ Calibration Systems arxiv.org/abs/2406.12874

LUX-ZEPLIN (LZ) is a tonne-scale experiment searching for direct dark matter interactions and other rare events. It is located at the Sanford Underground Research Facility (SURF) in Lead, South Dakota, USA. The core of the LZ detector is a dual-phase xenon time projection chamber (TPC), designed with the primary goal of detecting Weakly Interacting Massive Particles (WIMPs) via their induced low-energy nuclear recoils. Surrounding the TPC, two veto detectors immersed in an ultra-pure water tank enable the rejection of background events, enhancing the discovery potential. Intricate calibration systems are purposely designed to precisely understand the responses of these three detector volumes to various types of particle interactions and to demonstrate LZ's ability to discriminate between signals and backgrounds. In this paper, we present a comprehensive discussion of the key features, requirements, and performance of the LZ calibration systems, which play a crucial role in enabling LZ's WIMP search and its broad science program. The thorough description of these calibration systems, with an emphasis on their novel aspects, is valuable for future calibration efforts in direct dark matter and other rare-event search experiments.

Machine learning evaluation in the Global Event Processor FPGA for the ATLAS trigger upgrade arxiv.org/abs/2406.12875

The Global Event Processor (GEP) FPGA is an area-constrained, performance-critical element of the Large Hadron Collider's (LHC) ATLAS experiment. It needs to very quickly determine which small fraction of detected events should be retained for further processing, and which other events will be discarded. This system involves a large number of individual processing tasks, brought together within the overall Algorithm Processing Platform (APP), to make filtering decisions at an overall latency of no more than 8 ms. Currently, such filtering tasks are hand-coded implementations of standard deterministic signal processing tasks. In this paper we present methods to automatically create machine learning based algorithms for use within the APP framework, and demonstrate several successful such deployments. We leverage existing machine-learning-to-FPGA flows such as hls4ml and fwX to significantly reduce the complexity of algorithm design. These have resulted in implementations of various machine learning algorithms with latencies of 1.2 µs and less than 5% resource utilization on a Xilinx XCVU9P FPGA. Finally, we integrate these algorithms into the GEP system and present their actual performance. Our work shows the potential of using machine learning in the GEP for high-energy physics applications. This can significantly improve the performance of the trigger system and enable the ATLAS experiment to collect more data and make more discoveries. The architecture and approach presented in this paper can also be applied to other applications that require real-time processing of large volumes of data.
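A key reason flows like hls4ml achieve microsecond latencies is that they map weights and activations to fixed-point types (e.g. `ap_fixed<16,6>` in HLS). The sketch below is not the hls4ml API; it merely emulates that quantization in NumPy to illustrate the precision trade-off that drives FPGA latency and resource usage. The bit widths are illustrative assumptions.

```python
import numpy as np

# Emulate ap_fixed<total_bits, int_bits>-style quantization: round to the
# fractional resolution, then saturate to the representable signed range.
def to_fixed(x, total_bits=16, int_bits=6):
    frac_bits = total_bits - int_bits
    scale = 2 ** frac_bits
    lo = -2 ** (int_bits - 1)
    hi = 2 ** (int_bits - 1) - 1 / scale
    return np.clip(np.round(np.asarray(x) * scale) / scale, lo, hi)

w = np.array([[0.123456, -1.5], [2.25, 0.001]])
x = np.array([0.7, -0.3])
y_float = w @ x
y_fixed = to_fixed(w) @ to_fixed(x)   # quantized multiply-accumulate
print(y_float, y_fixed)               # small quantization error, big HW savings
```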

Earth ECS using Modified Energy Budget Methods and Trend Analyses v.1SS arxiv.org/abs/2406.11869

Earth global and regional effective thermal conductance G(eff) (in (W/m^2)/C and often labeled lambda in climate research) and the related Equilibrium Climate Sensitivity (ECS) are evaluated by applying a modified version of the Energy Budget method, using data only after 1970. By removing Periodic Interfering temperature components (using a novel PIR process) and applying high frequency filtering, an extraordinarily near-linear temperature response is revealed, enhancing accurate G(eff) calculation and avoiding the pre-1970 aerosol forcing and ocean energy per area (E*) absorption uncertainties. A formal/empirical method is used to determine more reliable values of Q(t) = d[E*(Ocean.energy)]/dt. Using NOAA data, and after PIR, it is shown that: 1) the Energy Budget method can be realistically applied to the Ocean and Land regions independently, 2) the effective volcanic forcing is <= 1/5 the IPCC AR5 estimate, 3) the "historical" 1980-2020 ECS(eff) values for the Global, global Ocean, and global Land regions are <= 2.16, 1.69, and 2.96 C/2xCO2 respectively, where the updated IPCC AR5 orthodox independent global Forcing value of 0.4 (W/m^2)/Decade and F/2xCO2 = 3.7 W/m^2 were used. The Global average ECS(true) value of <= 2.10 C is 70% of the IPCC AR6 ECS estimate of 3.0 C, but 127% of the ECS(eff) value reported by Lewis (1.66 C). The estimated oceans average TCR/ECS ratio = 0.71, the global average TCR/ECS ratio = 0.83, and ECS(land)/ECS(Ocean) = 1.78. [Results using HADCRUT temperature data instead are similar, but 6% "cooler" over land, and 8% "warmer" over oceans.] A simplified, physically realistic formal/empirical Coarse 2-D Global Climate Model is derived wherein variation of Geff(t) until equilibrium (i.e. "pattern effects") is shown to be negligible or "cooling", using these Methods. And so it is likely ECS(true) <= ECS(eff).
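For orientation, the standard energy-budget relation that the abstract's method modifies is ECS(eff) = F_2xCO2 * dT / (dF - dQ). Only F/2xCO2 = 3.7 W/m^2 and the 0.4 (W/m^2)/decade forcing trend below are taken from the text; the warming and ocean-heat-uptake values are assumed for illustration and do not reproduce the paper's numbers.

```python
# Standard energy-budget estimate of effective conductance and ECS.
F_2X = 3.7                # W/m^2 per doubling of CO2 (from the abstract)
dF = 0.4 * 4              # forcing change over 1980-2020 at 0.4 (W/m^2)/decade
dQ = 0.45                 # ASSUMED ocean heat uptake change, W/m^2
dT = 0.75                 # ASSUMED surface warming over the period, C

G_eff = (dF - dQ) / dT    # effective thermal conductance, (W/m^2)/C
ECS_eff = F_2X / G_eff    # equilibrium climate sensitivity, C per 2xCO2
print(f"G_eff = {G_eff:.2f} (W/m^2)/C, ECS_eff = {ECS_eff:.2f} C")
```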

Solar Power Prediction Using Satellite Data in Different Parts of Nepal arxiv.org/abs/2406.11877

Due to the unavailability of solar irradiance data for many potential sites in Nepal, the paper proposes predicting solar irradiance from alternative meteorological parameters. The study focuses on five distinct regions of Nepal and utilizes a dataset spanning almost ten years, obtained from CERES SYN1deg and MERRA-2. Machine learning models such as Random Forest, XGBoost, and K-Nearest Neighbors, and deep learning models such as LSTM and ANN-MLP, are employed and evaluated for their performance. The results indicate high accuracy in predicting solar irradiance, with R-squared (R2) scores close to unity for both train and test datasets. The impact of parameter integration on model performance is analyzed, revealing the significance of various parameters in enhancing predictive accuracy. Each model demonstrates strong performance across all parameters, consistently achieving MAE values below 6, RMSE values under 10, MBE within |2|, and nearly unity R2 values. Upon removal of various solar parameters such as "Solar_Irradiance_Clear_Sky" and "UVA" from the datasets, the models' performance is significantly affected. This exclusion leads to considerable increases in MAE, reaching up to 82, RMSE up to 135, and MBE up to |7|. Among the models, KNN displays the weakest performance, with an R2 of 0.7582546. Conversely, ANN exhibits the strongest performance, with an R2 value of 0.9245877. Hence, the study concludes that the Artificial Neural Network (ANN) performs exceptionally well, showcasing its versatility even under sparse data parameter conditions.
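The four metrics named above (MAE, RMSE, MBE, R2) are standard and easy to compute directly; the sketch below uses a tiny made-up sample of irradiance values, not the paper's dataset.

```python
import math

# MAE (mean absolute error), RMSE (root mean squared error),
# MBE (mean bias error, signed), and R^2 (coefficient of determination).
def metrics(y_true, y_pred):
    n = len(y_true)
    err = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in err) / n
    rmse = math.sqrt(sum(e * e for e in err) / n)
    mbe = sum(err) / n                          # sign shows over/under-prediction
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in err)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, mbe, r2

y_true = [520.0, 610.0, 480.0, 700.0]   # made-up irradiance, W/m^2
y_pred = [515.0, 618.0, 476.0, 705.0]
print(metrics(y_true, y_pred))
```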

Topographic Visualization of Near-surface Temperatures for Improved Lapse Rate Estimation arxiv.org/abs/2406.11894

Numerical model forecasts of near-surface temperatures are prone to error. This is because terrain can exert a strong influence on temperature that is not captured in numerical weather models due to spatial resolution limitations. To account for the terrain height difference between the forecast model and reality, temperatures are commonly corrected using a vertical adjustment based on a fixed lapse rate. This, however, ignores the fact that true lapse rates vary from 1.2 K temperature drop per 100 m of ascent to more than 10 K temperature rise over the same vertical distance. In this work, we develop topographic visualization techniques to assess the resulting uncertainties in near-surface temperatures and reveal relationships between those uncertainties, features in the resolved and unresolved topography, and the temperature distribution in the near-surface atmosphere. Our techniques highlight common limitations of the current lapse rate scheme and hint at their topographic dependencies in the context of the prevailing weather conditions. Together with scientists working in postprocessing and downscaling of numerical model output, we use these findings to develop an improved lapse rate scheme. This model adapts to both the topography and the current weather situation. We examine the quality and physical consistency of the new estimates by comparing them with station observations around the world and by including visual representations of radiation-slope interactions.
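The fixed-lapse-rate correction the paper improves on is a one-line adjustment: shift the model temperature by the elevation mismatch between the model's smoothed terrain and the real station height. The station and model heights below are invented for illustration.

```python
# Fixed-lapse-rate height correction of a near-surface temperature.
STANDARD_LAPSE = -0.0065   # K per m of ascent (standard-atmosphere value)

def correct_temperature(t_model, z_model, z_station, lapse=STANDARD_LAPSE):
    """Shift t_model (C) from model terrain height to station height (m)."""
    return t_model + lapse * (z_station - z_model)

# A valley station that the model's smoothed terrain places 400 m too high:
print(correct_temperature(t_model=4.0, z_model=1200.0, z_station=800.0))
# 4.0 + (-0.0065) * (-400) = 6.6 C
```

The abstract's point is that a single fixed `lapse` is a poor fit when true lapse rates range from cooling with height to strong inversions (temperature rising with height), which is what the proposed adaptive scheme addresses.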

Thermodynamic Transferability in Coarse-Grained Force Fields using Graph Neural Networks arxiv.org/abs/2406.12112

Coarse-graining is a molecular modeling technique in which an atomistic system is represented in a simplified fashion that retains the most significant system features contributing to a target output, while removing the degrees of freedom that are less relevant. This reduction in model complexity allows coarse-grained molecular simulations to reach increased spatial and temporal scales compared to corresponding all-atom models. A core challenge in coarse-graining is to construct a force field that represents the interactions in the new representation in a way that preserves the atomistic-level properties. Many approaches to building coarse-grained force fields have limited transferability between different thermodynamic conditions as a result of averaging over internal fluctuations at a specific thermodynamic state point. Here, we use a graph-convolutional neural network architecture, the Hierarchically Interacting Particle Neural Network with Tensor Sensitivity (HIP-NN-TS), to develop a highly automated training pipeline for coarse-grained force fields, which allows us to study the transferability of coarse-grained models based on the force-matching approach. We show not only that this approach yields highly accurate force fields, but also that these force fields are more transferable across a variety of thermodynamic conditions. These results illustrate the potential of machine learning techniques such as graph neural networks to improve the construction of transferable coarse-grained force fields.
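The force-matching objective mentioned above has a simple core: minimise the mean-squared difference between the coarse-grained forces a model predicts and reference forces mapped down from the all-atom simulation. The sketch below shows only that loss (not the HIP-NN-TS model itself), with random stand-in data.

```python
import numpy as np

# Force-matching loss: per-component MSE between predicted coarse-grained
# forces and reference forces projected from the atomistic trajectory.
def force_matching_loss(predicted_forces, reference_forces):
    """Both arrays have shape (n_beads, 3)."""
    diff = predicted_forces - reference_forces
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
ref = rng.normal(size=(5, 3))                # stand-in mapped atomistic forces
pred = ref + 0.1 * rng.normal(size=(5, 3))   # a model that is nearly right
print(force_matching_loss(pred, ref))
```

In training, this scalar would be minimised over the network parameters; the transferability question is then whether a model fitted at one state point keeps this loss low at other temperatures and densities.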
