Curvature-Constrained Vector Field for Motion Planning of Nonholonomic Robots arxiv.org/abs/2504.02852 .SY .RO

Vector fields are advantageous for nonholonomic motion planning because they provide a reference orientation for robots. However, additionally incorporating curvature constraints is challenging, because the design of the curvature-bounded vector field and that of the tracking controller are interconnected under underactuation. In this paper, we present a novel framework to co-develop the vector field and the control laws, guiding the nonholonomic robot to the target configuration along a curvature-bounded trajectory. First, we formulate the problem by introducing the target positive limit set, which allows the robot to converge to or pass through the target configuration, depending on the dynamics and task. Next, we construct a curvature-constrained vector field (CVF) by blending and distributing basic flow fields in the workspace, and propose saturated control laws with a dynamic gain, under which the tracking error's magnitude decreases even when saturation occurs. Under these control laws, kinematically constrained nonholonomic robots are guaranteed to track the reference CVF and converge to the target positive limit set with bounded trajectory curvature. Numerical simulations show that the proposed CVF method outperforms other vector-field-based algorithms, and experiments on Ackermann UGVs and semi-physical fixed-wing UAVs demonstrate that it can be effectively implemented in real-world scenarios.

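To get a feel for the mechanism, here is a minimal Python sketch of curvature-bounded vector-field following for a unicycle robot: the heading tracks a placeholder attractive field while the turn rate is clipped so the trajectory curvature stays within a bound. The field, gains, and saturation rule (KAPPA_MAX, K_HEADING) are illustrative assumptions, not the paper's CVF construction or control laws.

    # Minimal sketch: unicycle follows a placeholder vector field with a
    # hard curvature (turn-rate) bound. Not the paper's CVF or controller.
    import numpy as np

    KAPPA_MAX = 0.5   # assumed curvature bound [1/m]
    V = 1.0           # constant forward speed [m/s]
    K_HEADING = 2.0   # assumed heading-tracking gain

    def field(p, target=np.array([10.0, 5.0])):
        """Reference direction: unit vector toward the target (placeholder field)."""
        d = target - p
        return d / (np.linalg.norm(d) + 1e-9)

    def step(p, theta, dt=0.01):
        """One integration step under saturated heading control."""
        ref = field(p)
        theta_ref = np.arctan2(ref[1], ref[0])
        err = np.arctan2(np.sin(theta_ref - theta), np.cos(theta_ref - theta))
        # |kappa| = |omega| / v, so clipping omega enforces the curvature bound
        omega = np.clip(K_HEADING * err, -KAPPA_MAX * V, KAPPA_MAX * V)
        p = p + dt * V * np.array([np.cos(theta), np.sin(theta)])
        theta = theta + dt * omega
        return p, theta

    p, theta = np.array([0.0, 0.0]), 0.0
    for _ in range(3000):
        p, theta = step(p, theta)
    print(p)  # should end up near the target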

Mapping Technological Futures: Anticipatory Discourse Through Text Mining arxiv.org/abs/2504.02853 .SI .CL .CY

The volatility and unpredictability of emerging technologies, such as artificial intelligence (AI), generate significant uncertainty, which is widely discussed on social media. This study examines anticipatory discourse surrounding technological futures by analysing 1.5 million posts from 400 key opinion leaders (KOLs) published on the X platform between 2021 and 2023. Using advanced text mining techniques, including BERTopic modelling and sentiment, emotion, and attitude analyses, the research identifies 100 distinct topics reflecting anticipated tech-driven futures. Our findings emphasize the dual role of KOLs in framing "present futures" (optimistic visions of transformative technologies like AI and IoT) and influencing "future presents", where these projections shape contemporary societal and geopolitical debates. Positive emotions such as Hope dominate, outweighing Anxiety, particularly in topics like "Machine Learning, Data Science, and Deep Learning", while discussions around "Climate Change" and "War, Ukraine, and Trump People" elicit Anxiety. By framing technologies as solutions to societal challenges, KOLs act as mediators of societal narratives, bridging imagined futures and current realities. These insights underscore their pivotal role in directing public attention toward emerging technologies during periods of heightened uncertainty, advancing our understanding of anticipatory discourse in technology-mediated contexts.

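For readers curious about the tooling, a minimal BERTopic-plus-sentiment pass might look like the sketch below. The corpus here is synthetic and the hyperparameters are placeholders, not the authors' configuration (the study used 1.5 million real X posts).

    # Sketch of a topic-modelling + sentiment pipeline in the spirit of
    # the study; corpus, min_topic_size, and models are assumptions.
    from bertopic import BERTopic
    from transformers import pipeline

    themes = ["AI", "IoT", "climate change", "crypto", "war in Ukraine"]
    docs = [f"Post {i}: thoughts on {themes[i % 5]} and the future."
            for i in range(1000)]  # placeholder corpus

    topic_model = BERTopic(min_topic_size=50)     # assumed hyperparameter
    topics, probs = topic_model.fit_transform(docs)
    print(topic_model.get_topic_info().head())

    sentiment = pipeline("sentiment-analysis")    # default HF sentiment model
    print(sentiment(docs[:2]))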

The epistemic dimension of algorithmic fairness: assessing its impact in innovation diffusion and fair policy making arxiv.org/abs/2504.02856 .CY .SI

Algorithmic fairness is an expanding field that addresses a range of discrimination issues associated with algorithmic processes. However, most works in the literature analyze it only from an ethical perspective, focusing on the moral principles and values that should be considered in the design and evaluation of algorithms, while disregarding the epistemic dimension related to knowledge transmission and validation. Yet this aspect of algorithmic fairness should also be included in the debate, as it introduces a specific type of harm: an individual may be systematically excluded from the dissemination of knowledge due to the attribution of a credibility deficit or excess. In this work, we focus on characterizing and analyzing the impact of this credibility deficit or excess on the diffusion of innovations at a societal scale, a phenomenon driven by individual attitudes, social interactions, and the strength of mutual connections. Indeed, discrimination may shape the latter, ultimately modifying how innovations spread within the network. In this light, formally incorporating the epistemic dimension into innovation diffusion models becomes paramount, especially if these models are intended to support fair policy design. For these reasons, we formalize the epistemic properties of a social environment by extending the well-established Linear Threshold Model (LTM) in an epistemic direction, showing the impact of epistemic biases on innovation diffusion. Focusing on the impact of epistemic bias in both open-loop and closed-loop scenarios featuring optimal fostering policies, our results shed light on the pivotal role the epistemic dimension can play in the debate on algorithmic fairness in decision-making.

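The underlying diffusion mechanism is easy to sketch. Below is a standard Linear Threshold Model with one "epistemic" twist: each edge's influence is scaled by the credibility the network assigns to the sender, so a credibility deficit (factor below 1) mutes a node's influence. The scaling rule, graph, and parameters are assumptions for illustration, not the paper's exact formalization.

    # LTM sketch with credibility-scaled influence (illustrative only).
    import random
    import networkx as nx

    random.seed(0)
    G = nx.gnp_random_graph(200, 0.05, seed=0, directed=True)
    for u, v in G.edges:
        G.edges[u, v]["w"] = random.random()          # influence weight
    for n in G.nodes:
        G.nodes[n]["theta"] = random.random()          # activation threshold
        G.nodes[n]["cred"] = random.uniform(0.2, 1.5)  # deficit (<1) or excess (>1)

    def ltm_spread(G, seeds, steps=50):
        active = set(seeds)
        for _ in range(steps):
            new = set()
            for n in G.nodes:
                if n in active:
                    continue
                # incoming influence, discounted/amplified by sender credibility
                infl = sum(G.edges[u, n]["w"] * G.nodes[u]["cred"]
                           for u in G.predecessors(n) if u in active)
                if infl >= G.nodes[n]["theta"]:
                    new.add(n)
            if not new:
                break
            active |= new
        return active

    print(len(ltm_spread(G, seeds=range(5))))  # final adopter count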

Optimizing Humor Generation in Large Language Models: Temperature Configurations and Architectural Trade-offs arxiv.org/abs/2504.02858 .CL .LG

Large language models (LLMs) demonstrate increasing capabilities in creative text generation, yet systematic evaluations of their humor production remain underexplored. This study presents a comprehensive analysis of 13 state-of-the-art LLMs across five architectural families, evaluating their performance in generating technically relevant humor for software developers. Through a full factorial design testing 715 unique configurations of temperature settings and prompt variations, we assess model outputs using five weighted criteria: humor quality, domain relevance, concept originality, tone precision, and delivery efficiency. Our methodology employs rigorous statistical analysis including ANOVA, correlation studies, and quadratic regression to identify optimal configurations and architectural influences. Results reveal significant performance variations across models, with certain architectures outperforming baseline systems by 21.8%. Temperature sensitivity analysis demonstrates that 73% of models achieve peak performance at lower stochasticity settings (≤ 0.5), though optimal ranges vary substantially by architecture. We identify distinct model clusters: compact high-performers maintaining an efficiency-quality balance versus verbose specialists requiring longer outputs for marginal gains. Statistical validation confirms that model architecture explains 38.7% of performance variance, with significant correlations between humor quality and concept originality. The study establishes practical guidelines for model selection and configuration, demonstrating how temperature adjustments and architectural considerations impact humor generation effectiveness. These findings advance understanding of LLM capabilities in creative technical writing and provide empirically validated configuration strategies for developers implementing humor-generation systems.

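The quadratic-regression step the abstract mentions is a simple curve fit: score as a function of temperature, with the peak at the parabola's vertex. The sketch below uses fabricated placeholder scores; only the fitting procedure mirrors the described methodology.

    # Fit score ~ a*t^2 + b*t + c and locate the optimal temperature.
    import numpy as np

    temps  = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.25])
    scores = np.array([6.1, 6.8, 7.2, 6.9, 6.3, 5.7])  # placeholder weighted scores

    a, b, c = np.polyfit(temps, scores, deg=2)  # highest-degree coefficient first
    t_opt = -b / (2 * a)                        # vertex of the parabola (a < 0 here)
    print(f"estimated optimal temperature ~ {t_opt:.2f}")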

AI-Enhanced Resilience in Power Systems: Adversarial Deep Learning for Robust Short-Term Voltage Stability Assessment under Cyber-Attacks arxiv.org/abs/2504.02859 .SY

In the era of Industry 4.0, ensuring the resilience of cyber-physical systems against sophisticated cyber threats is increasingly critical. This study proposes a pioneering AI-based control framework that enhances short-term voltage stability assessment (STVSA) in power systems under complex composite cyber-attacks. First, composite adversarial attacks are implemented by incorporating white-box and black-box adversarial attacks with Denial-of-Service (DoS) perturbations during training. Second, the application of a Spectral Normalized Conditional Wasserstein Generative Adversarial Network with Gradient Penalty (SNCWGAN-GP) and the Fast Gradient Sign Method (FGSM) strengthens the model's resistance to adversarial disturbances, improving data quality and training stability. Third, an assessment model based on a Long Short-Term Memory (LSTM)-enhanced Graph Attention Network (L-GAT) is developed to capture the dynamic relationships between post-fault dynamic trajectories and the electrical grid topology. Experimental results on the IEEE 39-bus test system demonstrate the efficacy and superiority of the proposed method in composite cyber-attack scenarios. This contribution is pivotal to advancing AI-based resilient control strategies for nonlinear dynamical systems, marking a substantial enhancement in the security of cyber-physical systems.

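FGSM, one ingredient of the composite attacks, is a one-step perturbation along the sign of the input gradient of the loss. Here is the textbook version on a toy classifier; the model and epsilon are placeholders, not the STVSA setup.

    # Textbook FGSM: x_adv = x + eps * sign(grad_x loss). Toy model only.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(x, y, eps=0.05):
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # move each input one step along the sign of its loss gradient
        return (x + eps * x.grad.sign()).detach()

    x = torch.randn(4, 10)
    y = torch.randint(0, 2, (4,))
    x_adv = fgsm(x, y)
    print((x_adv - x).abs().max())  # perturbation is bounded by eps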

Marco: Configurable Graph-Based Task Solving and Multi-AI Agents Framework for Hardware Design arxiv.org/abs/2504.01962 .AR

Correlation-Attention Masked Temporal Transformer for User Identity Linkage Using Heterogeneous Mobility Data arxiv.org/abs/2504.01979 .SI .AI

With the rise of social media and Location-Based Social Networks (LBSN), check-in data across platforms has become crucial for User Identity Linkage (UIL). These data not only reveal users' spatio-temporal information but also provide insights into their behavior patterns and interests. However, cross-platform identity linkage faces challenges such as poor data quality, high sparsity, and noise interference, which hinder existing methods from extracting cross-platform user information. To address these issues, we propose the Correlation-Attention Masked Transformer for User Identity Linkage Network (MT-Link), a transformer-based framework that enhances performance by learning the spatio-temporal co-occurrence patterns of cross-platform users. The model employs a correlation attention mechanism to detect spatio-temporal co-occurrence between user check-in sequences; guided by the resulting attention weight maps, it focuses on co-occurrence points while filtering out noise, ultimately improving classification performance. Experimental results show that our model significantly outperforms state-of-the-art baselines, with improvements of 12.92% to 17.76% in Macro-F1 and 5.80% to 8.38% in Area Under Curve (AUC).

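As a rough illustration of the attention-with-masking idea, here is generic masked scaled-dot-product attention over two embedded check-in sequences. This is a sketch of the kind of mechanism described, not the authors' MT-Link implementation; shapes and the mask are placeholders.

    # Generic masked attention; the mask drops presumed-noise positions.
    import math
    import torch

    def masked_attention(q, k, v, mask=None):
        # q: (B, Lq, d), k/v: (B, Lk, d); mask: (B, Lq, Lk), True = keep
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        if mask is not None:
            scores = scores.masked_fill(~mask, float("-inf"))
        w = torch.softmax(scores, dim=-1)  # attention weight map over co-occurrences
        return w @ v, w

    B, Lq, Lk, d = 2, 8, 8, 32
    q, k, v = (torch.randn(B, Lq, d) for _ in range(3))
    mask = torch.rand(B, Lq, Lk) > 0.2     # placeholder noise filter
    out, w = masked_attention(q, k, v, mask)
    print(out.shape, w.shape)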

Information Gain Is Not All You Need arxiv.org/abs/2504.01980 .RO .AI

Autonomous exploration in mobile robotics is driven by two competing objectives: coverage, to exhaustively observe the environment; and path length, to do so with the shortest path possible. Though it is difficult to evaluate the best course of action without knowing the unknown, the unknown can often be understood through models, maps, or common sense. However, previous work has shown that improving estimates of information gain through such prior knowledge leads to greedy behavior and ultimately causes backtracking, which degrades coverage performance. In fact, any information gain maximization will exhibit this behavior, even without prior knowledge. The information gained at task completion is constant and cannot be maximized; it is therefore an unsuitable choice as an optimization objective. Instead, information gain is a decision criterion for determining which candidate states should still be considered for exploration. The task therefore becomes to reach completion with the shortest total path. Since determining the shortest path is typically intractable, it is necessary to rely on a heuristic or estimate to identify candidate states that minimize the total path length. To address this, we propose a heuristic that reduces backtracking by preferring candidate states that are close to the robot but far away from other candidate states. We evaluate the proposed heuristic in simulation against an information-gain-based approach and frontier exploration, and show that our method significantly decreases total path length, both with and without prior knowledge of the environment.

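The stated preference (close to the robot, far from other candidates) admits a very small sketch. The linear scoring form and the weight alpha are assumptions; the paper defines its own heuristic.

    # Pick the candidate near the robot but far from its peers.
    import numpy as np

    def pick_candidate(robot, candidates, alpha=1.0):
        c = np.asarray(candidates, dtype=float)
        d_robot = np.linalg.norm(c - robot, axis=1)
        # distance from each candidate to its nearest *other* candidate
        pair = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
        np.fill_diagonal(pair, np.inf)
        d_others = pair.min(axis=1)
        score = alpha * d_others - d_robot  # far from peers, near the robot
        return int(np.argmax(score))

    robot = np.array([0.0, 0.0])
    cands = [[1, 0], [1.2, 0.1], [5, 5], [-4, 3]]
    print(pick_candidate(robot, cands))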

NLS: Natural-Level Synthesis for Hardware Implementation Through GenAI arxiv.org/abs/2504.01981 .AR .AI

This paper introduces Natural-Level Synthesis (NLS), an innovative approach for generating hardware using generative artificial intelligence at both the system and component level. NLS bridges a gap in current hardware development processes, where algorithm and application engineers' involvement typically ends at the requirements stage. With NLS, engineers can participate more deeply in the development, synthesis, and test stages by using Gen-AI models to convert natural language descriptions directly into Hardware Description Language (HDL) code. This approach not only streamlines hardware development but also improves accessibility, fostering a collaborative workflow between hardware and algorithm engineers. We developed the NLS tool to facilitate natural-language-driven HDL synthesis, enabling rapid generation of system-level HDL designs while significantly reducing development complexity. Evaluated through case studies and benchmarks using Performance, Power, and Area (PPA) metrics, NLS shows its potential to enhance resource efficiency in hardware development. This work provides an extensible, efficient solution for hardware synthesis and establishes a Visual Studio Code extension to assess Gen-AI-driven HDL generation and system integration, laying a foundation for future AI-enhanced and AI-in-the-loop Electronic Design Automation tools.

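A bare-bones natural-language-to-HDL call might look like the sketch below. This is a generic LLM API example, not the NLS tool or its prompts; the model name and system prompt are assumptions, and it requires the openai package and an API key.

    # Illustrative NL-to-Verilog request via a generic LLM API.
    from openai import OpenAI

    client = OpenAI()
    spec = ("Write synthesizable Verilog for an 8-bit synchronous "
            "up-counter with active-high reset.")

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system", "content": "You emit only synthesizable Verilog-2001."},
            {"role": "user", "content": spec},
        ],
    )
    print(resp.choices[0].message.content)  # HDL to lint/simulate before use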

Divine Social Networking in the Age of Lost Omens arxiv.org/abs/2504.01982 .SI

The last two years have seen significant changes in the divine pantheon of the Lost Omens campaign setting of the Pathfinder Tabletop Roleplaying Game. First, the Pathfinder Remaster, necessitated by the Open Game License debacle, prompted the removal of alignment and an enrichment of divine identities and relationships. Second, the War of Immortals, kicked off by the death of one of the core 20 deities, shook up the membership and relationships within the setting's primary pantheon. These two changes prompted the reprinting of deity information in Pathfinder: Lost Omens Divine Mysteries, which updates and replaces the pre-Remaster Pathfinder: Lost Omens Gods & Magic. Notably, Divine Mysteries features double the page count profiling the core 20 deities. In this paper, we use social network analysis to examine the impact of these changes (Remaster, War of Immortals, and page count) on the relationships among the core 20 deities. In this analysis, each deity features as a node, connected by edges that represent the number of times each pair of deities is mentioned in each other's profiles. The results reveal a much richer, more connected divine network in Divine Mysteries than in Gods & Magic. We conclude by discussing implications for the Lost Omens campaign setting and areas of future development.

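The network construction the paper describes maps directly onto a weighted graph: deities as nodes, edge weights equal to mutual mention counts between profiles. The counts below are invented placeholders, not figures from Divine Mysteries.

    # Deity mention network sketch; mention counts are placeholders.
    import networkx as nx

    mentions = {  # (deity_a, deity_b): mutual mentions in each other's profiles
        ("Sarenrae", "Desna"): 7,
        ("Desna", "Lamashtu"): 4,
        ("Gorum", "Iomedae"): 3,  # Gorum's death kicks off the War of Immortals
    }

    G = nx.Graph()
    for (a, b), w in mentions.items():
        G.add_edge(a, b, weight=w)

    # weighted degree as a crude centrality: who sits at the hub of the pantheon?
    print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1]))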

Impedance and Stability Targeted Adaptation for Aerial Manipulator with Unknown Coupling Dynamics arxiv.org/abs/2504.01983 .SY .RO

Stable aerial manipulation during dynamic tasks such as object catching, perching, or contact with rigid surfaces necessarily requires compliant behavior, which is often achieved via impedance control. Successful manipulation depends on how effectively the impedance control can tackle the unavoidable coupling forces between the aerial vehicle and the manipulator. However, existing impedance controllers for aerial manipulators either ignore these coupling forces (in partitioned system compliance methods) or require their precise knowledge (in complete system compliance methods). Unfortunately, such forces are very difficult to model, if at all possible. To solve this long-standing control challenge, we introduce an impedance controller for aerial manipulators that does not rely on a priori knowledge of the system dynamics or of the coupling forces. The design addresses unknown coupling forces, along with system parametric uncertainties, via suitably designed adaptive laws. The closed-loop system stability is proved analytically, and experimental results from a payload-catching scenario demonstrate significant improvements in overall stability and tracking over state-of-the-art impedance controllers using either partitioned or complete system compliance.

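For context, impedance control shapes the error dynamics like a mass-spring-damper: M*e_dd + D*e_d + K*e = F_ext. The discrete-time sketch below shows only this target behavior under a simulated contact pulse; the gains are assumed values, and the paper's adaptive laws for unknown coupling forces are not reproduced here.

    # Target impedance dynamics under a simulated contact force pulse.
    import numpy as np

    M, D, K = 1.0, 8.0, 25.0   # assumed desired impedance parameters
    dt = 0.005

    e = np.zeros(2)            # position error
    e_d = np.zeros(2)          # error rate
    for t in range(1000):
        F_ext = np.array([2.0, 0.0]) if t < 200 else np.zeros(2)  # contact pulse
        e_dd = (F_ext - D * e_d - K * e) / M
        e_d += dt * e_dd
        e += dt * e_d
    print(e)  # error decays back toward zero after the contact ends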

Multi-Dimensional AGV Path Planning in 3D Warehouses Using Ant Colony Optimization and Advanced Neural Networks arxiv.org/abs/2504.01985 .RO .AI

Within modern warehouse scenarios, the rapid expansion of e-commerce and increasingly complex, multi-level storage environments have exposed the limitations of traditional AGV (Automated Guided Vehicle) path planning methods, which often rely on static 2D models and expert-tuned heuristics that struggle to handle dynamic traffic and congestion. Addressing these limitations, this paper introduces NAHACO (Neural Adaptive Heuristic Ant Colony Optimization), a novel AGV path planning approach for 3D warehouse environments that combines ACO (Ant Colony Optimization) with deep learning models in a hybrid framework. NAHACO integrates three key innovations: first, a heuristic algorithm for 3D warehouse cargo modeling using multidimensional tensors, which improves heuristic accuracy; second, CARL (Congestion-Aware Reinforce Loss), a congestion-aware loss function integrated within the ACO framework to adjust path costs based on traffic and capacity constraints, enabling dynamic heuristic calibration; and third, an adaptive attention mechanism that captures multi-scale spatial features to further refine ACO-based path planning and AGV navigation. NAHACO significantly boosts path planning efficiency, yielding faster computation times and superior performance over both vanilla and state-of-the-art methods, while automatically adapting to warehouse constraints for real-time optimization. It lowers total cost by up to 24.7% on TSP benchmarks, and in warehouse tests it cuts cost by up to 41.5% and congestion by up to 56.1% compared to previous methods.

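For orientation, here is a compact vanilla ACO on a small random TSP, with a per-edge congestion multiplier standing in (very loosely) for the congestion-aware cost idea; all constants are assumptions, and none of NAHACO's neural components appear here.

    # Vanilla ACO on a random TSP with a placeholder congestion penalty.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 12
    pts = rng.random((n, 2))
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)
    congestion = 1.0 + 0.5 * rng.random((n, n))   # placeholder traffic penalty
    cost = dist * congestion
    tau = np.ones((n, n))                          # pheromone matrix
    alpha, beta, rho, ants, iters = 1.0, 2.0, 0.1, 20, 100

    best_len, best_tour = np.inf, None
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [0]
            while len(tour) < n:
                i = tour[-1]
                p = (tau[i] ** alpha) * ((1.0 / cost[i]) ** beta)
                p[tour] = 0.0                      # exclude visited nodes
                nxt = rng.choice(n, p=p / p.sum())
                tour.append(int(nxt))
            length = sum(cost[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        tau *= (1 - rho)                           # evaporation
        for length, tour in tours:                 # deposit pheromone
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    print(best_len)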

TuRTLe: A Unified Evaluation of LLMs for RTL Generation arxiv.org/abs/2504.01986 .AR .AI
