
A Comprehensive Study of IPTV: Challenges, Opportunities, and Future Trends arxiv.org/abs/2503.13450 .IV .SP .NI


IPTV (Internet Protocol Television) is a transformative approach to delivering audio and video services through high-speed Internet networks, enabling direct access to television content via home computers or set-top boxes. Despite its promising advantages, including flexibility, interactivity, and bundled services such as triple play (voice, Internet, and TV) and quadruple play (adding mobile services), IPTV is still in its development phase. Key challenges include achieving a Quality of Service (QoS) comparable to traditional broadcasters, addressing limited bandwidth, and overcoming a lack of standardization among service providers. This paper explores the technical, operational, and consumer-oriented aspects of IPTV. It discusses data compression techniques, protocols like IGMP and RTSP, and the role of advanced codecs like H.264 in ensuring efficient data transmission. The study also examines the distinctions between IPTV and open-network Internet TV, the importance of security and privacy, and the emergence of new business opportunities through targeted advertising and interactive services. Although IPTV is unlikely to completely replace traditional broadcasting, it is poised to play an important role in shaping the future of television by offering personalized, secure, and scalable viewing experiences.
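The mention of IGMP points at how live IPTV channel delivery typically works: each channel maps to a multicast group, and "changing channels" means leaving one group and joining another. A minimal sketch of a client joining a channel's multicast stream, in Python (the group address and port here are hypothetical, not taken from the paper):

```python
import socket
import struct

MCAST_GRP = "239.0.0.1"   # hypothetical multicast group for one IPTV channel
MCAST_PORT = 5004         # hypothetical port carrying the RTP/UDP stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Joining the group is what causes the OS to emit an IGMP membership report,
# prompting upstream routers to forward the channel's traffic to this host.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)  # one datagram of the (e.g., H.264) stream
```

RTSP, by contrast, manages on-demand sessions (play, pause, seek) rather than live multicast, which is why the two protocols appear together in IPTV architectures; channel-change latency in the multicast case depends on the IGMP leave/join round trip plus waiting for the next decodable frame in the compressed stream.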


Digital audiovisual archives in humanities arxiv.org/abs/2503.13452 .DL


This report, authored in 2003, presents an innovative approach to the management and utilization of audiovisual archives in the humanities and social sciences. Developed by the research team ESCoM, under the auspices of the Maison des Sciences de l'Homme (MSH) in Paris, this program predated platforms like YouTube and was groundbreaking in its vision for the digital preservation, segmentation, and classification of audiovisual content. Its objectives included creating a heritage of scientific knowledge, developing advanced tools for its annotation and reuse, and facilitating the dissemination of specialized research to a broad audience.

At its core, the report outlines the development of an integrated environment that allows users to index, annotate, and classify audiovisual segments through personalized ontologies and thematic grids. The proposed methods rely on cutting-edge concepts, such as semantic web technologies, knowledge representation, and conceptual graph editing, to enable researchers and educators to create tailored archives and new multimedia resources. This forward-thinking approach aligns with modern practices of content reuse and republication, demonstrating a vision well ahead of its time.

The program also emphasizes the importance of segmenting and indexing audiovisual materials based on user-defined criteria, enabling researchers to identify and highlight specific thematic or conceptual elements within a vast pool of data. By facilitating this level of granularity, the system supports personalized academic and professional applications, including multimedia presentations, educational resources, and research dissemination. It introduces tools such as enhanced media players, ontology builders, and annotation editors to make this process accessible and collaborative.

Finally, the report discusses the Opales project, a collaborative initiative that exemplifies this innovative framework. The project developed a prototype environment integrating tools for creating "hyper-documents" and supporting multilingual, multi-platform content dissemination. Despite the technological and methodological challenges of the time, the report's vision of interactive, richly annotated audiovisual archives has set the stage for the development of contemporary digital knowledge ecosystems. Its emphasis on semantic representation and user-centric customization continues to resonate in the digital humanities today.
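A rough sketch of what such a time-coded, ontology-linked segment annotation could look like as a data structure; all field names are illustrative assumptions, not drawn from the report:

```python
from dataclasses import dataclass, field

@dataclass
class SegmentAnnotation:
    """A time-coded segment of an audiovisual document, classified
    against a user-defined thematic grid / personal ontology."""
    media_uri: str                   # identifier of the source audiovisual document
    start_s: float                   # segment start, in seconds
    end_s: float                     # segment end, in seconds
    concepts: list[str] = field(default_factory=list)  # ontology concept IDs
    note: str = ""                   # free-text scholarly annotation

# Hypothetical example: one annotated segment of an archived interview.
seg = SegmentAnnotation(
    media_uri="archive://interview-042",
    start_s=95.0,
    end_s=182.5,
    concepts=["oral-history", "migration"],  # terms from a personal thematic grid
    note="Speaker describes arrival in Paris, 1968.",
)
```

Records of this kind are what make the report's envisioned reuse possible: the same source video can be re-segmented and re-classified under different grids for different audiences.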

E-Semiotics

E-Semiotics is a conceptual and practical framework for designing, developing, and managing digital information and knowledge products. It applies semiotic principles to digital environments, focusing on the structural, contextual, and narrative organization of information. Central to E-Semiotics is the concept of "scenario building," which acts as a template or guide for creating and maintaining digital products and services, ensuring usability, adaptability, and efficiency.

This approach distinguishes itself from traditional semiotics by addressing the unique features of digital media, such as interactivity, hypertextuality, and modularity. It requires a dual competency in semiotics and technology, making it particularly relevant for developing interactive digital products like e-learning systems, digital libraries, and web portals. E-Semiotics also integrates seamlessly with knowledge management, offering conceptual models and technological tools to optimize the storage, retrieval, and dissemination of information.

The methodology includes both a semiotic approach, which focuses on understanding the structural and contextual dimensions of information, and a technological approach, which ensures interoperability, reusability, and scalability of digital tools. It has broad applications in areas such as multi-support publishing, semantic web development, and the creation of dynamic websites and web services. These applications empower organizations, particularly small and medium-sized ones, to leverage digital technologies without extensive technical expertise.

E-Semiotics faces challenges like conceptual complexity and economic barriers, but its potential lies in democratizing access to digital tools and fostering innovation. It bridges the gap between theory and practice, offering scalable solutions that respond to evolving user needs. This framework is poised to play a critical role in the digital transformation of communication and knowledge systems, supporting organizations in adapting to the demands of a rapidly changing digital landscape.
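To make "scenario building" slightly more concrete, one reading is that a scenario is a reusable structural template that fixes the information slots of a digital product before any content is authored. A hypothetical sketch under that assumption (all slot names are invented for illustration):

```python
# A publication "scenario" as a reusable structural template: it fixes which
# information slots a digital product must fill, independent of any particular
# content. Slot names and fields are purely illustrative.
course_module_scenario = {
    "product_type": "e-learning module",
    "slots": [
        {"name": "overview",  "role": "orient the learner",    "required": True},
        {"name": "concept",   "role": "core knowledge unit",   "required": True},
        {"name": "example",   "role": "grounded illustration", "required": True},
        {"name": "self_test", "role": "interactive assessment","required": False},
    ],
    "outputs": ["web", "print", "mobile"],  # multi-support publishing targets
}

def missing_slots(scenario, authored):
    """Check an authored product instance against its scenario template."""
    return [s["name"] for s in scenario["slots"]
            if s["required"] and s["name"] not in authored]

print(missing_slots(course_module_scenario, {"overview": "...", "concept": "..."}))
# -> ['example']
```

On this reading, the scenario is what lets non-specialist organizations maintain products over time: the template, not the author, carries the structural knowledge.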


Modeling and Analysis of Non-Terrestrial Networks by Spherical Stochastic Geometry arxiv.org/abs/2503.13455 .NI


Non-terrestrial networks (NTNs) are anticipated to be indispensable in extending coverage and enabling global communication access in next-generation wireless networks. With the extensive deployment of non-terrestrial platforms, evaluating the performance of NTN-enabled communication systems becomes a challenging task. Spherical stochastic geometry (SG) is a recently proposed analytical framework that has garnered increasing attention. Due to its suitability for modeling large-scale dynamic topologies and its ability to provide an analytical framework for interference analysis and low-complexity performance evaluation, spherical SG has been widely applied in NTN performance analysis. This paper surveys the modeling and analysis of NTNs based on spherical SG. We begin by introducing the spherical SG framework, detailing its history and development. Next, we categorize existing spherical SG models into three types based on orbital modeling methods and provide algorithm implementations for common models. Furthermore, we investigate the accuracy and necessity of spherical modeling through case studies. On the topology level, concepts such as association strategy, central angle, zenith angle, contact angle, and availability probability are introduced, with simple derivations provided. On the channel level, we detail the modeling of large-scale fading, small-scale fading, and beam gain for different channel links. Finally, we discuss several advanced topics that have not been fully explored but have strong motivation and research potential, and we predict future research directions.
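As a flavor of the topology-level quantities mentioned above, a standard spherical-SG result (not specific to this survey) models N satellites as a binomial point process uniformly distributed on an orbital sphere of radius R_s = R_e + h; a ground user then sees at least one satellite above a minimum elevation angle with probability P = 1 - (1 - A_cap/A_sphere)^N, where A_cap is the area of the visible spherical cap. A short numerical sketch, with illustrative constellation parameters:

```python
import math

def availability_probability(n_sats: int, altitude_km: float,
                             min_elev_deg: float) -> float:
    """P(at least one of n_sats uniform points on the orbital sphere is
    visible above min_elev_deg), via the spherical-cap area fraction."""
    R_e = 6371.0                     # Earth radius, km
    R_s = R_e + altitude_km          # orbital shell radius, km
    eps = math.radians(min_elev_deg)
    # Maximum Earth-centered (central) angle between the user and a visible
    # satellite, from the Earth-center / user / satellite triangle.
    phi_max = math.acos((R_e / R_s) * math.cos(eps)) - eps
    cap_fraction = (1.0 - math.cos(phi_max)) / 2.0   # A_cap / A_sphere
    return 1.0 - (1.0 - cap_fraction) ** n_sats

# Illustrative values: a 720-satellite shell at 550 km, 10-degree minimum elevation.
print(f"{availability_probability(720, 550.0, 10.0):.6f}")
```

The central angle phi_max is exactly the "contact angle"-style geometry the survey's topology-level derivations revolve around; the same cap-fraction argument yields contact-angle distributions with minor changes.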


Circuit Diagram Retrieval Based on Hierarchical Circuit Graph Representation arxiv.org/abs/2503.11658 .AR .AI


In the domain of analog circuit design, the retrieval of circuit diagrams has drawn great interest, primarily due to its vital role in the consultation of legacy designs and the detection of design plagiarism. Existing image retrieval techniques are adept at handling natural images: they convert images into feature vectors and retrieve similar images according to the closeness of these vectors. Nonetheless, these approaches exhibit limitations when applied to the more specialized and intricate domain of circuit diagrams. This paper presents a novel approach to circuit diagram retrieval that employs a graph representation of circuit diagrams, effectively reformulating the retrieval task as a graph retrieval problem. The proposed methodology consists of two principal components: a circuit diagram recognition algorithm that extracts the circuit components and topological structure of the circuit using the proposed GAM-YOLO model and a two-step connected-domain filtering algorithm, and a hierarchical retrieval strategy based on graph similarity and different graph representation methods for analog circuits. Our methodology pioneers the use of graph representation in the retrieval of circuit diagrams, incorporating topological features that are commonly overlooked by standard image retrieval methods. The results of our experiments substantiate the efficacy of our approach in retrieving circuit diagrams of different types.
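To illustrate the reformulation of retrieval as a graph problem, here is a minimal sketch: each recognized circuit is stored as a graph whose nodes carry component types (in the paper these would come from the recognition stage; they are hard-coded here) and whose edges are electrical connections, and a query is ranked against the library by graph edit distance. This uses networkx as a stand-in; the paper's hierarchical similarity strategy is more elaborate than a single edit-distance pass:

```python
import networkx as nx

def circuit_graph(components, nets):
    """Build a circuit graph: nodes = components labeled by type,
    edges = electrical connections between components."""
    g = nx.Graph()
    for name, ctype in components:
        g.add_node(name, ctype=ctype)
    g.add_edges_from(nets)
    return g

# Toy library of recognized circuits (hypothetical entries).
library = {
    "rc_lowpass": circuit_graph([("R1", "resistor"), ("C1", "capacitor")],
                                [("R1", "C1")]),
    "diff_pair":  circuit_graph([("M1", "nmos"), ("M2", "nmos"), ("R1", "resistor")],
                                [("M1", "M2"), ("M1", "R1"), ("M2", "R1")]),
}

query = circuit_graph([("Ra", "resistor"), ("Ca", "capacitor")], [("Ra", "Ca")])

def same_type(u, v):
    # Node substitution is free only between components of the same type,
    # so the distance reflects topology *and* component semantics.
    return u["ctype"] == v["ctype"]

ranked = sorted(library.items(),
                key=lambda kv: nx.graph_edit_distance(query, kv[1],
                                                      node_match=same_type))
print([name for name, _ in ranked])   # -> ['rc_lowpass', 'diff_pair']
```

Exact graph edit distance is exponential in the worst case, which is presumably one motivation for a hierarchical strategy: cheap graph-level features prune the library before any expensive structural comparison runs.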


MEADOW: Memory-efficient Dataflow and Data Packing for Low Power Edge LLMs arxiv.org/abs/2503.11663 .AR .AI .LG


The computational and memory challenges of large language models (LLMs) have sparked several optimization approaches towards their efficient implementation. While prior work on LLM-targeted quantization and sparse acceleration has significantly mitigated the memory and computation bottleneck, it does so assuming high-power platforms such as GPUs and server-class FPGAs with large off-chip memory bandwidths, and employs a generalized matrix multiplication (GEMM) execution of all the layers in the decoder. In such a GEMM-based execution, data is fetched from off-chip memory, computed on, and stored back. However, at reduced off-chip memory capacities, as is the case with low-power edge devices, this implementation strategy significantly increases the attention computation latency owing to the repeated storage and fetching of large intermediate tokens to and from the off-chip memory. Moreover, fetching the weight matrices from a bandwidth-constrained memory further aggravates the memory bottleneck. To this end, we introduce MEADOW, a framework that significantly reduces off-chip memory access for LLMs with a novel token-parallel head-sequential (TPHS) dataflow. Additionally, MEADOW applies weight packing, which performs lossless decomposition of large weight matrices into their unique elements, thereby reducing the enormous weight fetch latency. MEADOW demonstrates 1.5x and 2.5x lower decode and prefill latency, respectively, compared to a GEMM-based LLM implementation on the low-power Xilinx ZCU102 FPGA platform, which consumes less than 10 W. Additionally, MEADOW achieves an end-to-end latency improvement of over 40% compared to prior LLM optimization works.
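The weight-packing idea, losslessly decomposing a weight matrix into its table of unique elements plus an index map so that only the much smaller table and compact indices need to be fetched, can be sketched in a few lines of NumPy. This shows the decomposition only; MEADOW's actual packed layout and FPGA dataflow are hardware-specific and not reproduced here:

```python
import numpy as np

def pack_weights(w: np.ndarray):
    """Lossless decomposition: unique values + index map that reconstructs w.
    For low-bit quantized weights the unique table is tiny (<= 2**bits entries)."""
    uniques, inverse = np.unique(w, return_inverse=True)
    idx = inverse.reshape(w.shape).astype(np.uint8)  # assumes <= 256 unique values
    return uniques, idx

def unpack_weights(uniques: np.ndarray, idx: np.ndarray) -> np.ndarray:
    return uniques[idx]   # gather reconstructs the original matrix exactly

# Toy 4-bit-quantized weight matrix: at most 16 distinct values.
rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=(512, 512)).astype(np.int8)

uniques, idx = pack_weights(w)
assert np.array_equal(unpack_weights(uniques, idx), w)   # lossless round trip
print(len(uniques))   # 16 unique elements stand in for 262,144 stored weights
```

In hardware, the index map can be packed further (4-bit indices for a 16-entry table), so the repeated traffic over the bandwidth-constrained memory link shrinks accordingly while the arithmetic still sees exact weights.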
