The Metaphysics of Protection: Emergence, Agency, and the Ontological Status of Logical Qubits arxiv.org/abs/2507.00023

Enhancing Car-Following Models with Bike Dynamics for Improved Traffic Simulation arxiv.org/abs/2507.00062

Road traffic simulations are crucial for establishing safe and efficient traffic environments. They are used to test various road applications before real-world implementation. SUMO is a well-known simulator for road networks and intermodal traffic, often used in conjunction with other tools to test various types of applications. Realistic simulations require accurate movement models for different road users, such as cars, bicycles, and buses. While realistic models are already implemented for most vehicle types, bicycles, which are essential for achieving safe and efficient traffic, can only be modeled as slow vehicles or fast pedestrians at present. This paper introduces the Realistic Bicycle Dynamics Model (RBDM), the first dedicated bicycle model for SUMO, addressing this significant gap. Leveraging real-world bicycle data from the SimRa dataset, the RBDM implements realistic speed, acceleration, and deceleration behaviors of bicycles in urban scenarios. The evaluation is conducted using the Monaco SUMO traffic scenario and a newly generated Berlin scenario in SUMO. The RBDM significantly outperforms the existing slow-vehicle approximation in SUMO, aligning more closely with real-world data. These results underscore the necessity of a realistic bicycle movement model for accurate simulations, given the significant differences in the movement profiles of bicycles, cars, and pedestrians. Furthermore, the model is tested for its ability to generalize to disparate scenarios and urban topologies, which is dependent on the manner and geographical region in which the SimRa data were gathered. In addition, recommendations are provided for how it could be adapted for use in different city topologies.
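The RBDM itself is not reproduced in this post, but the kind of behavior it replaces can be illustrated with a standard car-following law retuned for bicycles. The sketch below uses the well-known Intelligent Driver Model (not the RBDM), and the bicycle-like parameter values are illustrative assumptions, not values fitted to the SimRa dataset.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=5.5,   # desired speed [m/s], ~20 km/h (assumed, bicycle-like)
                     T=1.0,    # desired time headway [s] (assumed)
                     a=1.0,    # maximum acceleration [m/s^2] (assumed)
                     b=1.5,    # comfortable deceleration [m/s^2] (assumed)
                     s0=1.0):  # minimum standstill gap [m] (assumed)
    """Intelligent Driver Model acceleration for a follower.

    v: current speed, gap: gap to the leader, dv: v - v_leader.
    """
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Free road (huge gap): the rider accelerates toward the desired speed.
print(idm_acceleration(v=0.0, gap=1e9, dv=0.0))  # ~ a = 1.0
```

In SUMO terms, this is the sort of car-following curve whose parameters the paper replaces with behavior learned from real bicycle traces.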

The gradual transformation of inland countries -- human plowing, horse plowing and equity incentives arxiv.org/abs/2507.00067

Many modern countries have not learned the lessons of history and often defer to the wisdom of later generations, leaving them with modern technology grafted onto ancient civilizations that are difficult to iterate upon. At present there is no settled account of how we should learn from history to promote the gradual upgrading of civilization. We must therefore recount the history of civilization's progress and the means of governance, drawing on experience to improve a civilization's comprehensive strength and survival ability, and to reach an optimal balance between the tempering brought by conflict and the reduction of internal conflict. First, we follow the footsteps of history and explore the reasons for the long-term stability of each country in conflict, including the economic benefits offered to the people and the means used to suppress them; we then use mathematical methods to demonstrate how an optimal solution can be achieved at the current stage. The analysis concludes that a civilization transformed from human plowing to horse plowing can more easily suppress popular resistance while also equipping the people with the capacity to resist; that the selection of rulers should draw on multiple institutional mechanisms, such as examinations, elections, and drawing lots; and that economic development follows a lognormal distribution that can be adjusted through its expected value and variance. Dividing equity using a lognormal distribution capped at its maximum value can narrow the wealth gap.
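The abstract's lognormal claim can be made concrete: a lognormal(mu, sigma) distribution has expected value exp(mu + sigma^2/2), so the overall level can be shifted via mu and the spread (and hence the wealth gap) via sigma. The snippet below is a generic illustration with assumed parameters, not the paper's calibration.

```python
import math
import random

def lognormal_mean(mu, sigma):
    """Expected value of a lognormal(mu, sigma) distribution."""
    return math.exp(mu + sigma ** 2 / 2)

def sample_wealth(mu, sigma, n, seed=0):
    """Draw n illustrative wealth values from a lognormal distribution."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

def gini(xs):
    """Gini coefficient, a standard wealth-gap measure (0 = perfect equality)."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

# Lowering sigma keeps the lognormal shape but shrinks the wealth gap.
wide = sample_wealth(mu=0.0, sigma=1.0, n=20000)
narrow = sample_wealth(mu=0.0, sigma=0.3, n=20000)
print(gini(wide) > gini(narrow))  # True
```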

Classifying Hotspots Mutations for Biosimulation with Quantum Neural Networks and Variational Quantum Eigensolver arxiv.org/abs/2507.00072

The rapid expansion of biomolecular datasets presents significant challenges for computational biology. Quantum computing emerges as a promising solution to address these complexities. This study introduces a novel quantum framework for analyzing TART-T and TART-C gene data by integrating genomic and structural information. Leveraging a Quantum Neural Network (QNN), we classify hotspot mutations, utilizing quantum superposition to uncover intricate relationships within the data. Additionally, a Variational Quantum Eigensolver (VQE) is employed to estimate molecular ground-state energies through a hybrid classical-quantum approach, overcoming the limitations of traditional computational methods. Implemented using IBM Qiskit, our framework demonstrates high accuracy in both mutation classification and energy estimation on current Noisy Intermediate-Scale Quantum (NISQ) devices. These results underscore the potential of quantum computing to advance the understanding of gene function and protein structure. Furthermore, this research serves as a foundational blueprint for extending quantum computational methods to other genes and biological systems, highlighting their synergy with classical approaches and paving the way for breakthroughs in drug discovery and personalized medicine.
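The paper runs on Qiskit and NISQ hardware, but the variational principle behind VQE can be shown classically in a few lines: a parameterized trial state is tuned so that its energy expectation upper-bounds, and converges to, the ground-state energy. The 2x2 Hamiltonian and single-parameter ansatz below are toy assumptions for illustration, not the TART-gene models of the paper.

```python
import math

# Toy real symmetric Hamiltonian (assumed): eigenvalues are +-sqrt(1.25).
H = [[1.0, 0.5], [0.5, -1.0]]

def energy(theta):
    """<psi(theta)|H|psi(theta)> for the real ansatz psi = (cos t, sin t)."""
    c, s = math.cos(theta), math.sin(theta)
    Hpsi = (H[0][0] * c + H[0][1] * s, H[1][0] * c + H[1][1] * s)
    return c * Hpsi[0] + s * Hpsi[1]

# Crude classical "optimizer": scan the single variational parameter.
thetas = [i * math.pi / 2000 for i in range(2000)]
e_min = min(energy(t) for t in thetas)

# The variational minimum recovers the exact ground energy -sqrt(1.25).
print(abs(e_min - (-math.sqrt(1.25))) < 1e-4)  # True
```

On hardware, the scan is replaced by a classical optimizer feeding circuit parameters, and `energy` by repeated measurement of the qubit register; the hybrid loop is the same.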

Test mass charge management in the detection of gravitational waves in space based on UV micro-LED arxiv.org/abs/2507.00086

As an alternative to the ultraviolet light-emitting diode (UV LED), the feasibility of using UV micro-LEDs for charge management in space-based gravitational wave detection is experimentally studied. Compared with UV LEDs, micro-LEDs are more compact, have better current spreading, faster response times, and longer operating lives. Performance characteristics of the micro-LEDs were measured, with peak wavelengths of 254 nm, 262 nm, 274 nm, and 282 nm for the respective devices, and the photoelectric effect was demonstrated. The effectiveness of micro-LED-based charge management was demonstrated using these micro-LEDs mounted on a cubical test mass, and different discharge rates were achieved by varying the drive current and the duty cycle using pulse-width modulation (PWM). Laboratory data were also presented to support the space qualification of the micro-LED device: the key electrical and optical characteristics of the micro-LEDs varied by less than 5%. These qualification results bring the micro-LED device to Technology Readiness Level (TRL) 5; TRL-6 will be reached once additional radiation and thermal tests are conducted, after which the device will be ready to be flown and further tested in space.
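In this scheme the mean UV power, and hence the photoelectron discharge rate, scales with drive current times PWM duty cycle. The snippet below illustrates that proportionality with a hypothetical linear device constant; the numbers are assumptions, not measured data from the paper.

```python
def mean_discharge_rate(drive_current_ma, duty_cycle, k=1.0e5):
    """Illustrative mean discharge rate [elementary charges / s].

    Assumes the rate is proportional to mean optical power, i.e. to
    drive current x PWM duty cycle; k is a hypothetical device constant.
    """
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in [0, 1]")
    return k * drive_current_ma * duty_cycle

# Halving the duty cycle halves the mean rate at fixed drive current.
print(mean_discharge_rate(10, 0.5) == 0.5 * mean_discharge_rate(10, 1.0))  # True
```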

How large language models judge and influence human cooperation arxiv.org/abs/2507.00088

Humans increasingly rely on large language models (LLMs) to support decisions in social settings. Previous work suggests that such tools shape people's moral and political judgements. However, the long-term implications of LLM-based social decision-making remain unknown. How will human cooperation be affected when the assessment of social interactions relies on language models? This is a pressing question, as human cooperation is often driven by indirect reciprocity, reputations, and the capacity to judge the interactions of others. Here, we assess how state-of-the-art LLMs judge cooperative actions. We provide 21 different LLMs with an extensive set of examples in which individuals cooperate -- or refuse to cooperate -- in a range of social contexts, and ask how these interactions should be judged. Furthermore, through an evolutionary game-theoretical model, we evaluate cooperation dynamics in populations where the extracted LLM-driven judgements prevail, assessing the long-term impact of LLMs on human prosociality. We observe remarkable agreement in evaluating cooperation with good opponents. On the other hand, we notice within- and between-model variance when judging cooperation with ill-reputed individuals. We show that the differences revealed between models can significantly affect the prevalence of cooperation. Finally, we test prompts to steer LLM norms, showing that such interventions can shape LLM judgements, particularly goal-oriented prompts. Our research connects LLM-based advice with long-term social dynamics and highlights the need to carefully align LLM norms in order to preserve human cooperation.
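The game-theoretic setting here is the donation game with reputations: whether cooperation survives depends on the social norm used to judge interactions, which is exactly what the LLMs supply. The sketch below simulates a population of discriminators under the classic "stern judging" norm as a stand-in for an LLM-extracted judgement rule; population size, rounds, and initial reputations are illustrative.

```python
import random

def stern_judging(action_cooperate, recipient_good):
    """Stern judging: good iff you cooperate with the good or defect against the bad."""
    return action_cooperate == recipient_good

def simulate(n=200, rounds=20000, seed=1):
    """Donation game with public reputations; returns the cooperation rate."""
    rng = random.Random(seed)
    good = [rng.random() < 0.5 for _ in range(n)]   # half start ill-reputed
    coop = 0
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n), 2)
        action = good[recipient]          # discriminators help only the good
        coop += action
        good[donor] = stern_judging(action, good[recipient])
    return coop / rounds

# Under stern judging, discriminators quickly repair all reputations,
# so the long-run cooperation rate is high.
rate = simulate()
print(rate > 0.9)  # True
```

Swapping `stern_judging` for a different judgement rule (e.g. one that condemns defection even against the ill-reputed) changes the long-run cooperation rate, which is the mechanism through which between-model variance matters.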

A Deterministic Model of Free Will arxiv.org/abs/2506.21553

The issue of whether we make decisions freely has vexed philosophers for millennia. Resolving it is vital for solving a diverse range of problems, from the physiology of how the brain makes decisions (and how we assign moral responsibility to those decisions) to the interpretation of experiments on entangled quantum particles. A deterministic model of free will is developed, based on two concepts. The first generalises the notion of initialisation of nonlinear systems where information cascades upscale from the Planck scale, exemplified by the chaology of colliding billiard balls and featured in the author's Rational Quantum Mechanics. With `just-in-time' initialisation, such Planck-scale information is only initialised when it is needed to describe super-Planck-scale evolution, and not, e.g., at the time of the Big Bang. In this way determinism does not imply predestination, and a system with finitely many degrees of freedom can shadow a system with infinitely many over arbitrarily long timescales. The second concept describes the upscale control of such Planck-scale information on super-Planck scales and is illustrated by reference to stochastic rounding in numerical analysis. Using these concepts, a deterministic model is proposed whereby freely made decisions arise from using past experiences to control the impact of noise in the low-energy brain. It is claimed that such a model has evolutionary advantages, not least preventing paralysis by analysis and encouraging rational risk-taking. It is concluded that humans have free will, determinism notwithstanding. The model is applied to the foundational issue of free choice in quantum physics experiments: it is shown that violating the Measurement Independence assumption does not invalidate the free-will conclusion above.
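Stochastic rounding, which the abstract invokes as an analogy for upscale control of noise, is easy to state: round x down with probability equal to its distance to the ceiling, and up otherwise, so the rounded value is unbiased in expectation. A minimal version:

```python
import math
import random

def stochastic_round(x, rng=random):
    """Round x to floor(x) or ceil(x) such that E[result] == x."""
    f = math.floor(x)
    return f + (rng.random() < (x - f))  # bool adds as 0 or 1

# The mean of many stochastic roundings recovers the original value.
rng = random.Random(0)
xs = [stochastic_round(2.3, rng) for _ in range(100000)]
print(abs(sum(xs) / len(xs) - 2.3) < 0.01)  # True
```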

Analog Programmable-Photonic Information arxiv.org/abs/2506.21649

The limitations of digital electronics in handling real-time matrix operations for emerging computational tasks - such as artificial intelligence, drug design, and medical imaging - have prompted renewed interest in analog computing. Programmable Integrated Photonics (PIP) has emerged as a promising technology for scalable, low-power, and high-bandwidth analog computation. While prior work has explored PIP implementations of quantum and neuromorphic computing, both approaches face significant limitations due to misalignments between their mathematical models and the native capabilities of photonic hardware. Building on the recently proposed Analog Programmable-Photonic Computation (APC) - a computation theory explicitly matched to the technological features of PIP - we introduce its critical missing component: an information theory. We present Analog Programmable-Photonic Information (API), a mathematical framework that addresses fundamental concepts beyond APC by examining the amount of information that can be generated, computed and recovered in a PIP platform. API also demonstrates the robustness of APC against errors arising from system noise and hardware imperfections, enabling scalable computation without the extensive error-correction overhead required in quantum computing. Together, APC and API provide a unified foundation for on-chip photonic computing, offering a complementary alternative to digital, quantum and neuromorphic paradigms, and positioning PIP as a cornerstone technology for next-generation information processing.

Ultrastable nanophotonic microcavities via integrated thermometry arxiv.org/abs/2506.21692

Integrated photonic devices that can be co-packaged with electronics and can operate in real-world environments will enable many important applications, such as optical interconnects, quantum information processing, precision measurements, spectroscopy, and microwave generation. Significant progress has been made over the past two decades on increasing the functional complexity of photonic chips. However, the most critical challenge that remains is the lack of scalable techniques to overcome perturbations arising from environmental thermal noise and thermal crosstalk from co-packaged electronics and other photonic devices sharing the same substrate. Here, we propose and demonstrate a fully-integrated scheme to monitor and stabilize the temperature of a high-Q microresonator in a Si-based chip. We show that when stabilized, the microresonator exhibits remarkable resilience against external thermal noise and can serve as a fully-integrated photonic frequency reference. By simply changing the temperature set-point, the cavity resonance frequency can be repeatably tuned without any hysteresis over several hours. We also show that the frequency of a distributed feedback (DFB) laser can be stabilized to this microresonator and realize a 48 dB reduction in its frequency drift, resulting in its center wavelength staying within ±0.5 pm of the mean over the duration of 50 hours in the presence of significant ambient fluctuations. This performance is superior to many commercial DFB systems and is highly suitable for use in data-communication systems. Finally, we demonstrate that the technique can be implemented to stabilize a soliton modelocked Kerr comb against significant ambient and crosstalk thermal noise, without the need for photodetection, paving the way for Kerr-comb-based photonic devices that can operate in the desired modelocked state indefinitely.
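For context, a 48 dB drift reduction corresponds, for an amplitude-like quantity such as frequency drift (an assumption about the convention used), to a linear suppression factor of about 10^(48/20) ≈ 251. The conversion:

```python
def db_to_linear_factor(db, amplitude=True):
    """Convert a reduction in dB to a linear suppression factor.

    Amplitude-like quantities (e.g. frequency drift) use 20 dB per decade;
    power-like quantities use 10 dB per decade.
    """
    return 10 ** (db / (20 if amplitude else 10))

print(round(db_to_linear_factor(48)))  # 251
```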

Harnessing Piezoelectric Shear Actuators for Vibration Control in Sandwich Beams arxiv.org/abs/2506.21713

Our study found that integrating shear piezo-transducers inside the beam offers a compact and efficient solution, enabling localized damping control without compromising structural integrity. The conventional approach of placing the piezos outside the substrate, by contrast, faces practical challenges that limit its accessibility for industrial applications. We determine the damping performance of long, slender sandwich beam structures under active vibration control with internally placed piezoelectric shear sensors and actuators. Experimental and numerical results are presented for a clamped-free sandwich beam constructed from two stainless steel facings and a core layer of foam containing a piezoelectric shear actuator and sensor. This internal actuator-and-sensor approach addresses problems common to (high-tech) systems, i.e. mechanical vibrations, limited design volume, and the vulnerability of externally placed piezoelectric transducers to outside conditions; in doing so, this study addresses a significant gap in the literature. The locations of the sensor and actuator were defined through numerical investigation of the modal shear strain and the effective electro-mechanical coupling coefficient. The frequency response of the sandwich beam structure was evaluated both numerically and experimentally. Positive Position Feedback was applied to the numerical response to simulate the damping performance for the fundamental mode, and different controller gains were used to analyze the trade-off between effective resonance suppression and increased low-frequency gain. The tip vibrations at the fundamental mode were reduced from 5.01 mm to 0.34 mm amplitude at steady state, a significant reduction.
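Positive Position Feedback, used above for the fundamental mode, feeds the measured position through a damped second-order filter and back as a positive force on the structure. The toy simulation below shows the added damping on a lightly damped single-mode model; all parameters (natural frequency, damping ratios, filter tuning, gain) are illustrative, not the sandwich beam's.

```python
def simulate(gain, omega=1.0, zeta=0.005, omega_f=1.0, zeta_f=0.3,
             x0=1.0, dt=1e-3, steps=200_000):
    """Free decay of a single structural mode under Positive Position Feedback.

    Plant:  x'' + 2*zeta*omega*x' + omega^2*x = gain * omega^2 * eta
    Filter: eta'' + 2*zeta_f*omega_f*eta' + omega_f^2*eta = omega_f^2 * x
    Integrated with semi-implicit Euler; returns peak |x| over the last 10%.
    """
    x, v = x0, 0.0
    eta, etad = 0.0, 0.0
    peak, tail = 0.0, int(0.9 * steps)
    for i in range(steps):
        ax = -2 * zeta * omega * v - omega ** 2 * x + gain * omega ** 2 * eta
        ae = -2 * zeta_f * omega_f * etad - omega_f ** 2 * eta + omega_f ** 2 * x
        v += ax * dt
        etad += ae * dt
        x += v * dt
        eta += etad * dt
        if i >= tail:
            peak = max(peak, abs(x))
    return peak

# The PPF loop adds damping: the controlled tail amplitude is far smaller.
print(simulate(gain=0.1) < simulate(gain=0.0))  # True
```

Raising `gain` strengthens the resonance suppression but softens the low-frequency stiffness (the effective stiffness falls toward omega^2 * (1 - gain)), which is the trade-off the abstract describes.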

Inverse Design of Diffractive Metasurfaces Using Diffusion Models arxiv.org/abs/2506.21748

Metasurfaces are ultra-thin optical elements composed of engineered sub-wavelength structures that enable precise control of light. Their inverse design - determining a geometry that yields a desired optical response - is challenging due to the complex, nonlinear relationship between structure and optical properties. This often requires expert tuning, is prone to local minima, and involves significant computational overhead. In this work, we address these challenges by integrating the generative capabilities of diffusion models into computational design workflows. Using an RCWA simulator, we generate training data consisting of metasurface geometries and their corresponding far-field scattering patterns. We then train a conditional diffusion model to predict meta-atom geometry and height from a target spatial power distribution at a specified wavelength, sampled from a continuous supported band. Once trained, the model can generate metasurfaces with low error, either directly using RCWA-guided posterior sampling or by serving as an initializer for traditional optimization methods. We demonstrate our approach on the design of a spatially uniform intensity splitter and a polarization beam splitter, both produced with low error in under 30 minutes. To support further research in data-driven metasurface design, we publicly release our code and datasets.

Feedforward equilibrium trajectory optimization with GSPulse arxiv.org/abs/2506.21760

One of the common tasks required for designing new plasma scenarios or evaluating capabilities of a tokamak is to design the desired equilibria using a Grad-Shafranov (GS) equilibrium solver. However, most standard equilibrium solvers are time-independent and do not include dynamic effects such as plasma current flux consumption, induced vessel currents, or voltage constraints. Another class of tools, plasma equilibrium evolution simulators, do include time-dependent effects. These are generally structured to solve the forward problem of evolving the plasma equilibrium given feedback-controlled voltages. In this work, we introduce GSPulse, a novel algorithm for equilibrium trajectory optimization, that is more akin to a pulse planner than a pulse simulator. GSPulse includes time-dependent effects and solves the inverse problem: given a user-specified set of target equilibrium shapes, as well as limits on the coil currents and voltages, the optimizer returns trajectories of the voltages, currents, and achievable equilibria. This task is useful for scoping performance of a tokamak and exploring the space of achievable pulses. The computed equilibria satisfy both Grad-Shafranov force balance and axisymmetric circuit dynamics. The optimization is performed by restructuring the free-boundary equilibrium evolution (FBEE) equations into a form where it is computationally efficient to optimize the entire dynamic sequence. GSPulse can solve for hundreds of equilibria simultaneously within a few minutes. GSPulse has been validated against NSTX-U and MAST-U experiments and against SPARC feedback control simulations, and is being used to perform scenario design for SPARC. The computed trajectories can be used as feedforward inputs to inform and improve feedback performance. The code for GSPulse is available open-source at https://github.com/jwai-cfs/GSPulse_public.
