
Post-quantum encryption algorithms of high-degree 3-variable polynomial congruences: BS cryptosystems and BS key generation arxiv.org/abs/2409.03758 .CR

We will construct post-quantum encryption algorithms based on the three-variable polynomial Beal-Schur congruence. After giving a proof of Beal's conjecture and citing some of its applications to selected cases where the discrete logarithm and some of its generalizations are unsolvable problems, we will investigate the formulation and validity of an appropriate version of Beal's conjecture over finite fields of integers. In contrast to the infinite case, we will show that the corresponding Beal-Schur congruence equation $x^{p}+y^{q}\equiv z^{r} \pmod{\mathcal{N}}$ has non-trivial solutions in the finite field $\mathbb{Z}_{\mathcal{N}}$ for all sufficiently large primes $\mathcal{N}$ that do not divide the product $xyz$, under certain mutual divisibility conditions on the exponents $p$, $q$ and $r$. We will apply this result to generate the so-called BS cryptosystems, i.e., simple and secure post-quantum encryption algorithms based on the Beal-Schur congruence equation, as well as new cryptographic key generation methods, whose post-quantum algorithmic encryption security relies on having an infinite number of options for the parameters $p$, $q$, $r$ and $\mathcal{N}$.
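
As a concrete illustration of the congruence above, here is a minimal brute-force sketch (not from the paper) that searches $\mathbb{Z}_{\mathcal{N}}$ for non-trivial solutions of $x^{p}+y^{q}\equiv z^{r} \pmod{\mathcal{N}}$ with a prime modulus not dividing $xyz$; the parameters p, q, r, N used below are illustrative only and do not reproduce the paper's divisibility conditions.

```python
# Brute-force search for non-trivial solutions of the Beal-Schur congruence
# x^p + y^q = z^r (mod N). Illustrative sketch only; parameters are arbitrary.

def beal_schur_solutions(p, q, r, N):
    """Yield (x, y, z) in Z_N with x^p + y^q congruent to z^r (mod N), N not dividing xyz."""
    for x in range(1, N):
        for y in range(1, N):
            lhs = (pow(x, p, N) + pow(y, q, N)) % N
            for z in range(1, N):
                if pow(z, r, N) == lhs and (x * y * z) % N != 0:
                    yield (x, y, z)

if __name__ == "__main__":
    # Illustrative parameters; the paper's conditions on p, q, r and the choice
    # of a "sufficiently large" prime N are not reproduced here.
    for solution in beal_schur_solutions(p=3, q=3, r=5, N=31):
        print(solution)  # print one non-trivial solution, if any exist
        break
```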

VERA: Validation and Evaluation of Retrieval-Augmented Systems arxiv.org/abs/2409.03759 .IR .AI

The increasing use of Retrieval-Augmented Generation (RAG) systems in various applications necessitates stringent protocols to ensure RAG systems' accuracy, safety, and alignment with user intentions. In this paper, we introduce VERA (Validation and Evaluation of Retrieval-Augmented Systems), a framework designed to enhance the transparency and reliability of outputs from large language models (LLMs) that utilize retrieved information. VERA improves the way we evaluate RAG systems in two important ways: (1) it introduces a cross-encoder based mechanism that encompasses a set of multidimensional metrics into a single comprehensive ranking score, addressing the challenge of prioritizing individual metrics, and (2) it employs Bootstrap statistics on LLM-based metrics across the document repository to establish confidence bounds, ensuring the repository's topical coverage and improving the overall reliability of retrieval systems. Through several use cases, we demonstrate how VERA can strengthen decision-making processes and trust in AI applications. Our findings not only contribute to the theoretical understanding of LLM-based RAG evaluation metrics but also promote the practical implementation of responsible AI systems, marking a significant advancement in the development of reliable and transparent generative AI technologies.
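
The bootstrap step described in (2) can be illustrated with a small sketch (not VERA's implementation): given per-document scores from an LLM-based metric, resample with replacement to estimate a confidence interval for the repository-level mean. The scores below are placeholder values, not real evaluation data.

```python
# Percentile bootstrap confidence interval for the mean of per-document
# metric scores. Illustrative sketch with placeholder data.
import random

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Return (lower, upper) percentile bootstrap bounds for the mean of scores."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        sample = [rng.choice(scores) for _ in scores]
        means.append(sum(sample) / len(sample))
    means.sort()
    lower = means[int((alpha / 2) * n_resamples)]
    upper = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper

metric_scores = [0.82, 0.91, 0.77, 0.88, 0.69, 0.95, 0.73, 0.84]  # placeholder
print(bootstrap_ci(metric_scores))
```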

Rethinking Deep Learning: Propagating Information in Neural Networks without Backpropagation and Statistical Optimization arxiv.org/abs/2409.03760 .LG

Developing strong AI signifies the arrival of the technological singularity, contributing greatly to advancing human civilization and resolving social issues. Neural networks (NNs) and deep learning, which utilizes NNs, are expected to lead to strong AI due to their structures that mimic biological neural systems. However, the statistical weight optimization techniques commonly used, such as error backpropagation and loss functions, may hinder the mimicry of neural systems. This study discusses the information propagation capabilities and potential practical applications of NNs as neural-system-mimicking structures by solving the handwritten character recognition problem of the Modified National Institute of Standards and Technology (MNIST) database without using statistical weight optimization techniques such as error backpropagation. In this study, the NN architecture comprises fully connected layers using step functions as activation functions, with 0-15 hidden layers and no weight updates. The accuracy is calculated by comparing the average output vectors of the training data for each label with the output vectors of the test data, based on vector similarity. The results show that the maximum accuracy achieved is around 80%. This indicates that NNs can propagate information correctly without using statistical weight optimization. Additionally, the accuracy decreased with an increasing number of hidden layers. This is attributed to the decrease in the variance of the output vectors as the number of hidden layers increases, suggesting that the output data becomes smooth. This study's NN and accuracy calculation methods are simple and leave room for various improvements. Moreover, creating a feedforward NN that repeatedly cycles through 'input -> processing -> output -> environmental response -> input -> ...' could pave the way for practical software applications.
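
A minimal NumPy sketch of the evaluation scheme described above (an assumed reading, not the authors' code): random fully connected layers with step activations and no weight updates, followed by classification by cosine similarity to the per-label mean output vector of the training data. Synthetic arrays stand in for MNIST, so the printed accuracy is near chance.

```python
# Untrained fully connected layers with step activations, evaluated by
# similarity to per-label mean output vectors. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    # Step activation after each random, untrained fully connected layer.
    for w in weights:
        x = (x @ w > 0).astype(float)
    return x

def build(in_dim, hidden_dim, n_layers):
    dims = [in_dim] + [hidden_dim] * n_layers
    return [rng.standard_normal((dims[i], dims[i + 1])) for i in range(len(dims) - 1)]

# Synthetic stand-in for MNIST: 10 classes, 784-dimensional inputs.
X_train, y_train = rng.random((1000, 784)), rng.integers(0, 10, 1000)
X_test, y_test = rng.random((200, 784)), rng.integers(0, 10, 200)

weights = build(784, 256, n_layers=3)
H_train, H_test = forward(X_train, weights), forward(X_test, weights)

# Per-label mean output vectors ("prototypes") from the training data.
prototypes = np.stack([H_train[y_train == c].mean(axis=0) for c in range(10)])

def normalize(a):
    return a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-9)

# Classify each test output by cosine similarity to the prototypes.
pred = (normalize(H_test) @ normalize(prototypes).T).argmax(axis=1)
print("accuracy:", (pred == y_test).mean())  # near chance on synthetic noise
```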

OpenCap markerless motion capture estimation of lower extremity kinematics and dynamics in cycling arxiv.org/abs/2409.03766 .CV

Markerless motion capture offers several benefits over traditional marker-based systems by eliminating the need for physical markers, which are prone to misplacement and artifacts. Utilizing computer vision and deep learning algorithms, markerless systems can directly detect human body landmarks, reducing manual processing and errors associated with marker placement. These systems are adaptable, able to track user-defined features, and practical for real-world applications using consumer-grade devices such as smartphone cameras. This study compares the performance of OpenCap, a markerless motion capture system, with traditional marker-based systems in assessing cycling biomechanics. Ten healthy adults participated in experiments to capture sagittal hip, knee, and ankle kinematics and dynamics using both methods. OpenCap used videos from smartphones and integrated computer vision and musculoskeletal simulations to estimate 3D kinematics. Results showed high agreement between the two systems, with no significant differences in kinematic and kinetic measurements for the hip, knee, and ankle. The correlation coefficients exceeded 0.98, indicating very strong consistency. Errors were minimal, with kinematic errors under 4 degrees and kinetic errors below 5 Nm. This study concludes that OpenCap is a viable alternative to marker-based motion capture, offering comparable precision without extensive setup for hip (flexion/extension), knee (flexion/extension), and ankle (dorsiflexion/plantarflexion) joints. Future work should aim to enhance the accuracy of ankle joint measurements and extend analyses to 3D kinematics and kinetics for comprehensive biomechanical assessments.
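
The agreement figures quoted above (correlation coefficients and error magnitudes between markerless and marker-based traces) can be reproduced for any pair of joint-angle time series with a few lines of NumPy. The sketch below is an assumed illustration, not the study's analysis code, and the sinusoidal "knee flexion" trace is a placeholder.

```python
# Pearson correlation and RMSE between two joint-angle time series,
# e.g. OpenCap vs. marker-based knee flexion. Placeholder data only.
import numpy as np

def agreement(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    r = np.corrcoef(a, b)[0, 1]            # Pearson correlation coefficient
    rmse = np.sqrt(np.mean((a - b) ** 2))  # root-mean-square error (deg or Nm)
    return r, rmse

t = np.linspace(0, 2 * np.pi, 200)
markerless = 70 + 35 * np.sin(t)           # placeholder knee flexion trace (deg)
marker_based = markerless + np.random.default_rng(1).normal(0, 2, t.size)

r, rmse = agreement(markerless, marker_based)
print(f"r = {r:.3f}, RMSE = {rmse:.2f} deg")
```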

CortexCompile: Harnessing Cortical-Inspired Architectures for Enhanced Multi-Agent NLP Code Synthesis arxiv.org/abs/2409.02938 .LG .AI

Current approaches to automated code generation often rely on monolithic models that lack real-time adaptability and scalability. This limitation is particularly evident in complex programming tasks that require dynamic adjustment and efficiency. The integration of neuroscience principles into Natural Language Processing (NLP) has the potential to revolutionize automated code generation. This paper presents CortexCompile, a novel modular system inspired by the specialized functions of the human brain's cortical regions. By emulating the distinct roles of the Prefrontal Cortex, Parietal Cortex, Temporal Lobe, and Motor Cortex, CortexCompile achieves significant advancements in scalability, efficiency, and adaptability compared to traditional monolithic models like GPT-4o. The system's architecture features a Task Orchestration Agent that manages dynamic task delegation and parallel processing, facilitating the generation of highly accurate and optimized code across increasingly complex programming tasks. Experimental evaluations demonstrate that CortexCompile consistently outperforms GPT-4o in development time, accuracy, and user satisfaction, particularly in tasks involving real-time strategy games and first-person shooters. These findings underscore the viability of neuroscience-inspired architectures in addressing the limitations of current NLP models, paving the way for more efficient and human-like AI systems.
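
As a purely hypothetical sketch of the orchestration pattern described above (not CortexCompile's code), the snippet below shows a task orchestration function that delegates sub-tasks to specialized agents in parallel and hands their drafts to a code-emitting agent. The agent names, prompts, and the call_llm placeholder are illustrative assumptions.

```python
# Hypothetical task-orchestration pattern: parallel specialist agents,
# then a final code-emitting agent. All names and prompts are illustrative.
from concurrent.futures import ThreadPoolExecutor

def call_llm(role, prompt):
    # Placeholder for a real LLM call; returns a canned string here.
    return f"[{role}] response to: {prompt}"

SPECIALISTS = {
    "prefrontal": "plan the overall program structure",
    "parietal":   "work out data structures and numeric logic",
    "temporal":   "handle naming, APIs, and domain knowledge",
}

def orchestrate(task):
    # Delegate sub-tasks in parallel, then hand the drafts to a code-emitting agent.
    with ThreadPoolExecutor() as pool:
        drafts = dict(zip(SPECIALISTS, pool.map(
            lambda item: call_llm(item[0], f"{item[1]} for: {task}"),
            SPECIALISTS.items())))
    return call_llm("motor", f"assemble code for '{task}' from: {drafts}")

print(orchestrate("a simple real-time strategy game loop"))
```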

Comparison of Epilepsy Induced by Ischemic Hypoxic Brain Injury and Hypoglycemic Brain Injury using Multilevel Fusion of Data Features arxiv.org/abs/2409.02957 q-bio.NC .SP .LG

The study aims to investigate the similarities and differences in the brain damage caused by Hypoxia-Ischemia (HI), Hypoglycemia, and Epilepsy. Hypoglycemia poses a significant challenge in improving glycemic regulation for insulin-treated patients, while HI brain disease in neonates is associated with low oxygen levels. The study examines the possibility of using a combination of medical data and Electroencephalography (EEG) measurements to predict outcomes over a two-year period, and it employs a multilevel fusion of data features to enhance the accuracy of the predictions. This paper therefore suggests a hybridized classification model for Hypoxia-Ischemia, Hypoglycemia, and Epilepsy brain injury (HCM-BI). A Support Vector Machine (SVM) is applied to the clinical details to determine the Hypoxia-Ischemia outcome of each infant, and the newborns are assessed again after two years to obtain the neural development results. Four attributes are derived from the Electroencephalography records, and the SVM alone does not yield conclusive classifications of the diseases. The final feature extraction from the EEG signal is therefore optimized by a Bayesian Neural Network (BNN) to obtain a clear picture of the health condition of Hypoglycemia and Epilepsy patients. By monitoring and assessing the physical effects captured in the Electroencephalography records, the BNN is used to extract the test samples with the most log data and to report hypoglycemia and epilepsy.
Keywords: Hypoxia-Ischemia, Hypoglycemia, Epilepsy, Multilevel Fusion of Data Features, Bayesian Neural Network (BNN), Support Vector Machine (SVM)
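
The two-stage idea described above can be pictured with the heavily simplified sketch below; it is not the paper's HCM-BI implementation. An SVM is fitted on clinical features, and a second classifier on four EEG-derived attributes handles the cases the SVM leaves inconclusive; a plain MLP stands in for the Bayesian Neural Network, and the synthetic arrays are placeholders for real data.

```python
# Simplified two-stage pipeline sketch: SVM on clinical features, then a
# stand-in for the BNN on four EEG-derived attributes for inconclusive cases.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
clinical = rng.random((120, 6))    # placeholder clinical details
eeg_feats = rng.random((120, 4))   # four EEG-derived attributes
outcome = rng.integers(0, 2, 120)  # placeholder 2-year outcome labels

# Stage 1: SVM on clinical details.
svm = SVC(probability=True).fit(clinical, outcome)
pred = svm.predict(clinical)

# Stage 2: where the SVM is inconclusive, classify on EEG features with an
# MLP standing in for the Bayesian Neural Network.
confidence = svm.predict_proba(clinical).max(axis=1)
unclear = confidence < 0.6  # illustrative threshold
if unclear.any() and len(set(outcome[unclear])) > 1:
    bnn_stand_in = MLPClassifier(max_iter=500).fit(eeg_feats[unclear], outcome[unclear])
    pred[unclear] = bnn_stand_in.predict(eeg_feats[unclear])

print("training-set agreement:", (pred == outcome).mean())
```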

Do We Trust What They Say or What They Do? A Multimodal User Embedding Provides Personalized Explanations arxiv.org/abs/2409.02965 .SI .IR .LG

With the rapid development of social media, the importance of analyzing social network user data has also been put on the agenda. User representation learning in social media is a critical area of research, based on which we can conduct personalized content delivery or detect malicious actors. Being more complicated than many other types of data, social network user data has an inherently multimodal nature. Various multimodal approaches have been proposed to harness both text (i.e., post content) and relation (i.e., inter-user interaction) information to learn user embeddings of higher quality. The advent of Graph Neural Network models enables more end-to-end integration of user text embeddings and user interaction graphs in social networks. However, most of those approaches do not adequately elucidate which aspect of the data (text or graph structure information) is more helpful for predicting each specific user under a particular task, putting some burden on personalized downstream analysis and untrustworthy information filtering. We propose a simple yet effective framework called Contribution-Aware Multimodal User Embedding (CAMUE) for social networks. We demonstrate with empirical evidence that our approach can provide personalized, explainable predictions, automatically mitigating the impact of unreliable information. We also conducted case studies to show how reasonable our results are. We observe that for most users, graph structure information is more trustworthy than text information, but there are some reasonable cases where text helps more. Our work paves the way for more explainable, reliable, and effective social media user embeddings that allow for better personalized content delivery.
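
One way to picture a contribution-aware combination of modalities (an assumed reading, not the CAMUE implementation) is a learned per-user gate that decides how much of the final embedding comes from the text encoder versus the graph encoder; the gate value then doubles as a per-user explanation of which modality was trusted more.

```python
# Per-user gating between text and graph embeddings; the gate value serves
# as a modality-contribution explanation. Untrained, illustrative weights.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_user_embedding(text_emb, graph_emb, gate_w, gate_b):
    """Combine modalities with a scalar gate in [0, 1] per user."""
    gate_in = np.concatenate([text_emb, graph_emb], axis=-1)
    g = sigmoid(gate_in @ gate_w + gate_b)      # contribution of the text side
    fused = g[:, None] * text_emb + (1 - g)[:, None] * graph_emb
    return fused, g                             # g doubles as the explanation

# Placeholder encoder outputs for 5 users, 16 dimensions each.
text_emb, graph_emb = rng.random((5, 16)), rng.random((5, 16))
gate_w, gate_b = rng.standard_normal(32) * 0.1, 0.0  # untrained illustrative gate

fused, g = gated_user_embedding(text_emb, graph_emb, gate_w, gate_b)
for i, gi in enumerate(g):
    print(f"user {i}: text contribution {gi:.2f}, graph contribution {1 - gi:.2f}")
```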

Meal-taking activity monitoring in the elderly based on sensor data: Comparison of unsupervised classification methods arxiv.org/abs/2409.02971 .SP .AP .LG

In an era marked by a demographic change towards an older population, there is an urgent need to improve nutritional monitoring in view of the increase in frailty. This research aims to enhance the identification of meal-taking activities by combining K-Means, GMM, and DBSCAN techniques. Using the Davies-Bouldin Index (DBI) for optimal meal-taking activity clustering, the results show that K-Means seems to be the best solution, thanks to its unrivalled efficiency in data demarcation compared with the capabilities of GMM and DBSCAN. Although capable of identifying complex patterns and outliers, the latter methods are limited by their operational complexities and dependence on precise parameter configurations. In this paper, we have processed data from 4 houses equipped with sensors. The findings indicate that applying the K-Means method results in high performance, evidenced by a particularly low Davies-Bouldin Index (DBI), illustrating optimal cluster separation and cohesion. Calculating the average duration of each activity using the GMM algorithm allows various categories of meal-taking activities to be distinguished; alternatively, these can correspond to the different times of day associated with each meal-taking activity. Using the K-Means, GMM, and DBSCAN clustering algorithms, the study demonstrates an effective strategy for thoroughly understanding the data. This approach facilitates the comparison and selection of the most suitable method for optimal meal-taking activity clustering.
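
A minimal sketch of the comparison described above (not the paper's pipeline): cluster the same feature matrix with K-Means, GMM, and DBSCAN and compare their Davies-Bouldin Index values, where lower is better. The synthetic blobs below stand in for the sensor-derived meal-taking features.

```python
# Compare K-Means, GMM, and DBSCAN on the same data via the Davies-Bouldin
# Index (lower is better). Synthetic placeholder data.
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=400, centers=3, random_state=0)  # placeholder features

labels = {
    "K-Means": KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
    "GMM": GaussianMixture(n_components=3, random_state=0).fit_predict(X),
    "DBSCAN": DBSCAN(eps=1.0, min_samples=5).fit_predict(X),
}

for name, lab in labels.items():
    if len(set(lab)) > 1:  # DBI needs at least two clusters (DBSCAN may return one)
        print(f"{name}: DBI = {davies_bouldin_score(X, lab):.3f}")
    else:
        print(f"{name}: only one cluster found, DBI undefined")
```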

SDOoop: Capturing Periodical Patterns and Out-of-phase Anomalies in Streaming Data Analysis arxiv.org/abs/2409.02973 .LG

Streaming data analysis is increasingly required in applications, e.g., IoT, cybersecurity, robotics, mechatronics or cyber-physical systems. Despite its relevance, it is still an emerging field with open challenges. SDO is a recent anomaly detection method designed to meet requirements of speed, interpretability and intuitive parameterization. In this work, we present SDOoop, which extends the capabilities of SDO's streaming version to retain temporal information of data structures. SDOoop spots contextual anomalies undetectable by traditional algorithms, while enabling the inspection of data geometries, clusters and temporal patterns. We used SDOoop to model real network communications in critical infrastructures and extract patterns that disclose their dynamics. Moreover, we evaluated SDOoop with data from intrusion detection and natural science domains and obtained performances equivalent or superior to state-of-the-art approaches. Our results show the high potential of new model-based methods to analyze and explain streaming data. Since SDOoop operates with constant per-sample space and time complexity, it is ideal for big data, being able to instantly process large volumes of information. SDOoop conforms to next-generation machine learning, which, in addition to accuracy and speed, is expected to provide highly interpretable and informative models.
