Abstract:
The cloud-edge-terminal collaborative network (CETCN) is considered a novel paradigm for emerging applications owing to its huge potential in providing low-latency and ultra-reliable computing services. However, achieving these benefits is very challenging due to the heterogeneous computing power of terminal devices and the complex environment the CETCN faces. In particular, the high-dimensional and dynamic environment states make it difficult for the CETCN to make efficient decisions in terms of task offloading, collaborative caching, and mobility management. To this end, artificial intelligence (AI), especially deep reinforcement learning (DRL), has proven effective in solving sequential decision-making problems in various domains and offers a promising solution to the above issues for several reasons. First, DRL-based methods do not require an accurate model of the CETCN, which is difficult to obtain in real-world applications. Second, DRL can effectively handle high-dimensional and dynamic tasks through iterative interaction with the environment. Third, because tasks are complex and resource supply differs among vendors, different vendors must collaborate to complete tasks; multi-agent DRL (MADRL) methods are very effective for such collaborative tasks, which can be jointly completed by cloud, edge, and terminal devices provided by different vendors. This survey provides a comprehensive overview of the applications of DRL and MADRL in the context of the CETCN. The first part of this survey provides an in-depth overview of the key concepts of the CETCN and the mathematical underpinnings of both DRL and MADRL. Then, we highlight the applications of RL algorithms in solving various challenges within the CETCN, such as task offloading, resource allocation, caching, and mobility management.
In addition, we extend the discussion to explore how DRL and MADRL are making inroads into emerging CETCN scenarios such as intelligent transportation systems (ITS), the industrial Internet of Things (IIoT), smart health, and digital agriculture. Furthermore, security considerations related to the application of DRL within the CETCN are addressed, along with an overview of existing standards that pertain to edge intelligence. Finally, we list several lessons learned in this evolving field and outline future research opportunities and challenges that are critical for the development of the CETCN. We hope this survey will attract more researchers to investigate scalable and decentralized AI algorithms for the design of the CETCN.
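As a toy illustration of the model-free, trial-and-error decision-making described above, the sketch below applies tabular reinforcement learning to a hypothetical offloading choice. The states, actions, and latency numbers are invented for illustration and are not from the survey:

```python
import random

ACTIONS = ["local", "edge", "cloud"]   # where to execute the task
STATES = ["small", "large"]            # coarse, hypothetical task-size states

def latency(state, action):
    """Hypothetical latency model: small tasks are cheap locally,
    large tasks are cheap on the cloud; the edge sits in between."""
    table = {("small", "local"): 1.0, ("small", "edge"): 2.0, ("small", "cloud"): 4.0,
             ("large", "local"): 9.0, ("large", "edge"): 4.0, ("large", "cloud"): 2.0}
    return table[(state, action)] + random.uniform(0, 0.5)  # noisy environment

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
random.seed(0)
for episode in range(5000):
    s = random.choice(STATES)                     # tasks arrive with random sizes
    if random.random() < epsilon:                 # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    r = -latency(s, a)                            # reward = negative latency
    # One-step (bandit-style) update; no successor state in this toy model.
    Q[(s, a)] += alpha * (r - Q[(s, a)])

best = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in STATES}
print(best)
```

With enough interactions, the learned policy keeps small tasks local and offloads large ones to the cloud, without any explicit model of the environment, which is the property the abstract highlights.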
Abstract:
Owing to the rapid development of sensor technology, hyperspectral (HS) remote sensing (RS) imaging has provided a significant amount of spatial and spectral information for the observation and analysis of Earth's surface at a distance by data acquisition devices. The recent advancement and even revolution of HS RS techniques offer opportunities to realize the potential of various applications while posing new challenges for efficiently processing and analyzing the enormous volumes of acquired HS data. Because it preserves the inherent 3-D structure of HS data, tensor decomposition has attracted widespread attention and spurred research in HS data processing tasks over the past decades. In this article, we aim to present a comprehensive overview of tensor decomposition, specifically contextualizing five broad topics in HS data processing: HS restoration, compressive sensing (CS), anomaly detection (AD), HS-multispectral (MS) fusion, and spectral unmixing (SU). For each topic, we elaborate on the remarkable achievements of tensor decomposition models for HS RS, with a pivotal description of the existing methodologies and a representative exhibition of experimental results. We then outline and discuss the remaining challenges and follow-up research directions from the perspective of actual HS RS practice and of tensor decomposition merged with advanced priors and even deep neural networks. This article summarizes different tensor decomposition-based HS data processing methods and categorizes them into different classes, from simple adoptions to complex combinations with other priors, for algorithm beginners. We also hope this survey offers experienced researchers new investigations and development trends to some extent.
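To make the idea of exploiting the 3-D structure concrete, here is a minimal truncated HOSVD (a basic Tucker-type decomposition) applied to a synthetic low-multilinear-rank cube. The shapes and ranks are arbitrary illustrative choices, not from the article:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from mode-wise SVDs, then the core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

def tucker_to_tensor(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = mode_product(T, U, mode)
    return T

rng = np.random.default_rng(0)
# Build an exactly low-multilinear-rank cube so truncated HOSVD recovers it.
G = rng.standard_normal((2, 2, 3))
A, B, C = (rng.standard_normal((10, 2)), rng.standard_normal((12, 2)),
           rng.standard_normal((30, 3)))
X = tucker_to_tensor(G, [A, B, C])         # synthetic (rows, cols, bands) cube
core, factors = hosvd(X, ranks=(2, 2, 3))
err = np.linalg.norm(X - tucker_to_tensor(core, factors)) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.2e}")
```

The small core tensor plus three factor matrices is a far more compact representation than the full cube, which is the storage and denoising leverage that tensor methods exploit in HS processing.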
Abstract:
The ever-increasing demand for ubiquitous and differentiated services anytime and anywhere emphasizes the necessity of aerospace integrated networks (AINs), which consist of multiple tiers, e.g., a high-altitude platform (HAP) tier, an unmanned aerial vehicle (UAV) tier, and a low Earth orbit (LEO) satellite tier, for wide coverage and high capacity. However, the inherent heterogeneity, time variability, and differing mobility characteristics between terrestrial networks and AINs constrain the realization of these widely expected potentials. It is therefore crucial to develop a profound understanding of AINs' impacts on the pivotal models and methodologies enabling 6G, thus providing a technology reference for AINs empowering 6G. To capture the latest developments and ultimately open new research niches on this significant topic, this survey is a pioneering, systematic, and comprehensive overview of both single-tier and combined-tier scenarios in AINs. We start with a discussion of state-of-the-art and potentially promising methodologies and models for AINs empowering 6G, from the perspectives of system architecture, networking design, and enabling technologies of AINs, so as to enable flexible and scalable management and control of AINs and ensure network stability. Furthermore, we provide an in-depth literature overview across network dynamics modeling, theoretical performance analysis models, and system optimization to enhance system performance and guarantee user on-demand services. Additionally, we highlight ongoing research challenges and future directions, focusing on the development trend of ultra-dense satellite constellations in 6G.
Abstract:
This paper provides a state-of-the-art literature review of economic analysis and pricing models for data collection and wireless communication in the Internet of Things (IoT). Wireless sensor networks (WSNs) are the main components of the IoT, collecting data from the environment and transmitting it to the sink nodes. For long service time and low maintenance cost, WSNs require adaptive and robust designs that address many issues, e.g., data collection, topology formation, packet forwarding, resource and power optimization, coverage optimization, efficient task allocation, and security. For these issues, sensors have to make optimal decisions, given their current capabilities and available strategies, to achieve desirable goals. This paper reviews numerous applications of economic and pricing models, known as intelligent rational decision-making methods, to the development of adaptive algorithms and protocols for WSNs. In addition, we survey a variety of pricing strategies for incentivizing phone users in crowdsensing applications to contribute their sensing data. Furthermore, we consider the use of some pricing models in machine-to-machine (M2M) communication. Finally, we highlight some important open research issues as well as future research directions for applying economic and pricing models to the IoT.
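As a toy example of the price-based incentives this kind of survey covers, the sketch below has a crowdsensing platform (the leader) post a price to users who each hold a private sensing cost. All values and costs are hypothetical, chosen only to illustrate the leader-follower structure:

```python
# Each user contributes a sample iff the posted price covers its private cost.
costs = [0.5, 1.0, 1.5, 2.0, 3.0]   # hypothetical per-user sensing costs
value_per_sample = 2.2               # platform's hypothetical value for one sample

def platform_profit(price):
    participants = sum(1 for c in costs if price >= c)   # followers' best response
    return participants * (value_per_sample - price)

# Leader moves first: an optimal posted price lies at some user's cost,
# so a search over those candidates suffices in this toy setting.
candidate_prices = sorted(set(costs))
best_price = max(candidate_prices, key=platform_profit)
print(best_price, platform_profit(best_price))
```

Raising the price recruits more users but pays more per sample; the profit-maximizing price balances the two, which is the basic trade-off the pricing models surveyed here formalize.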
Abstract:
Artificial Intelligence-Generated Content (AIGC) is an automated approach to creatively generating, manipulating, and modifying valuable and diverse data using AI algorithms. This survey paper focuses on the deployment of AIGC applications, e.g., ChatGPT and Dall-E, at mobile edge networks, namely mobile AIGC networks, which provide personalized and customized AIGC services in real time while maintaining user privacy. We begin by introducing the background and fundamentals of generative models and the lifecycle of AIGC services at mobile AIGC networks, which includes data collection, training, fine-tuning, inference, and product management. We then discuss the collaborative cloud-edge-mobile infrastructure and technologies required to support AIGC services and enable users to access AIGC at mobile edge networks. Furthermore, we explore AIGC-driven creative applications and use cases for mobile AIGC networks. Additionally, we discuss the implementation, security, and privacy challenges of deploying mobile AIGC networks. Finally, we highlight some future research directions and open issues for the full realization of mobile AIGC networks.
Abstract:
The framework of cognitive wireless networks is expected to endow wireless devices with cognition-intelligence capabilities so that they can efficiently learn and respond to the dynamic wireless environment. In many practical scenarios, the complexity of network dynamics makes it difficult to determine the network evolution model in advance. Thus, the wireless decision-making entities may face a black-box network control problem, and model-based network management mechanisms will no longer be applicable. In contrast, model-free learning enables the decision-making entities to adapt their behaviors based on the reinforcement from their interaction with the environment and to (implicitly) build their understanding of the system from scratch through trial-and-error. Such characteristics are highly in accordance with the requirement of cognition-based intelligence for devices in cognitive wireless networks. Therefore, model-free learning has been considered a key implementation approach to adaptive, self-organized network control in cognitive wireless networks. In this paper, we provide a comprehensive survey of the applications of state-of-the-art model-free learning mechanisms in cognitive wireless networks. According to the system models on which those applications are based, a systematic overview of the learning algorithms is provided across the domains of single-agent systems, multi-agent systems, and multiplayer games. The applications of model-free learning to various problems in cognitive wireless networks are discussed, focusing on how the learning mechanisms help provide solutions to these problems and improve network performance over model-based, non-adaptive methods. Finally, a broad spectrum of challenges and open issues is discussed to offer a guideline for future research directions.
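A minimal instance of model-free learning in this spirit is an epsilon-greedy bandit for channel selection, in which a device learns channel qualities purely from trial-and-error feedback, with no prior model of the environment. The success probabilities below are invented for illustration:

```python
import random

random.seed(1)
p_success = [0.2, 0.5, 0.8]   # unknown (to the learner) per-channel success rates
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]      # running mean reward per channel
epsilon = 0.1

for t in range(20000):
    if random.random() < epsilon:
        ch = random.randrange(3)                      # explore a random channel
    else:
        ch = max(range(3), key=lambda i: values[i])   # exploit best estimate
    reward = 1.0 if random.random() < p_success[ch] else 0.0
    counts[ch] += 1
    values[ch] += (reward - values[ch]) / counts[ch]  # incremental mean update

print(values)   # estimates approach the true success probabilities
```

The learner ends up concentrating its transmissions on the best channel while its reward estimates converge, illustrating the "build understanding from scratch" behavior the abstract describes.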
Abstract:
This paper considers the problem of recovering a group sparse signal matrix Y = [y_1, …, y_L] from sparsely corrupted measurements M = [A^(1) y_1, …, A^(L) y_L] + S, where the A^(i)'s are known sensing matrices and S is an unknown sparse error matrix. A robust group lasso (RGL) model is proposed to recover Y and S through simultaneously minimizing the ℓ_{2,1}-norm of Y and the ℓ_1-norm of S under the measurement constraints. We prove that Y and S can be exactly recovered from the RGL model with high probability for a very general class of A^(i)'s.
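For intuition, splitting-type solvers for objectives mixing an ℓ_{2,1} term with an ℓ_1 term typically alternate the two proximal operators sketched below: group shrinkage for the ℓ_{2,1}-norm and elementwise soft-thresholding for the ℓ_1-norm. This is only a fragment, assuming rows play the role of groups; the full constrained RGL solver from the paper is not reproduced:

```python
import numpy as np

def prox_l21(Y, tau):
    """Row-wise group shrinkage: the prox of tau * ||.||_{2,1}.
    Rows with norm below tau are zeroed out, giving group sparsity."""
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * Y

def prox_l1(S, tau):
    """Elementwise soft-thresholding: the prox of tau * ||.||_1."""
    return np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)

Y = np.array([[3.0, 4.0],    # row norm 5 -> shrunk toward zero by tau
              [0.1, 0.2]])   # row norm below tau -> whole row zeroed
print(prox_l21(Y, 1.0))
print(prox_l1(np.array([2.0, -0.5, 0.3]), 1.0))
```

The group prox kills entire rows at once while the elementwise prox kills individual entries, which is exactly the structural difference between the group sparsity imposed on Y and the entrywise sparsity imposed on the error matrix S.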
Abstract:
Wireless charging is a technology for transmitting power through an air gap to electrical devices for the purpose of energy replenishment. Recent progress in wireless charging techniques and the development of commercial products have provided a promising alternative way to address the energy bottleneck of conventional portable battery-powered devices. However, incorporating wireless charging into existing wireless communication systems also brings a series of challenging issues with regard to implementation, scheduling, and power management. In this paper, we present a comprehensive overview of wireless charging techniques, the developments in technical standards, and their recent advances in network applications. In particular, with regard to network applications, we review static charger scheduling strategies, mobile charger dispatch strategies, and wireless charger deployment strategies. Additionally, we discuss open issues and challenges in implementing wireless charging technologies. Finally, we envision some practical future network applications of wireless charging.
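As a toy instance of a mobile charger dispatch strategy, the sketch below routes a single charger over energy-depleted nodes with a nearest-neighbor tour. The coordinates are hypothetical; practical dispatch strategies in the literature also weigh residual energy and charging deadlines:

```python
import math

nodes = [(0, 5), (2, 1), (6, 4), (5, 0)]   # hypothetical positions to recharge

def nearest_neighbor_tour(start, points):
    """Greedy dispatch: always travel to the closest unserved node next."""
    tour, pos, remaining = [], start, list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        tour.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return tour

tour = nearest_neighbor_tour((0, 0), nodes)
print(tour)
```

Greedy nearest-neighbor is not optimal in general, but it is the simplest baseline against which the dispatch strategies reviewed in surveys like this one are usually compared.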
Abstract:
In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation for the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends in both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resource integration. Then, we discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we outline open challenges and present future research directions for beyond-5G networks, such as 6G.
Abstract:
Motivated by the advancing computational capacity of distributed end-user equipment (UE), as well as increasing concerns about sharing private data, there has been considerable recent interest in machine learning (ML) and artificial intelligence (AI) that can be processed on distributed UEs. Specifically, in this paradigm, parts of an ML process are outsourced to multiple distributed UEs, and the processed information is then aggregated at a certain level at a central server, turning a centralized ML process into a distributed one and bringing about significant benefits. However, this new distributed ML paradigm also raises new privacy and security risks. In this article, we provide a survey of the emerging security and privacy risks of distributed ML from the unique perspective of information exchange levels, which are defined according to the key steps of an ML process, i.e., we consider the following levels: 1) the level of preprocessed data; 2) the level of learning models; 3) the level of extracted knowledge; and 4) the level of intermediate results. We explore and analyze the potential threats at each information exchange level based on an overview of current state-of-the-art attack mechanisms and then discuss possible defense methods against such threats. Finally, we complete the survey by providing an outlook on the challenges and possible directions for future research in this critical area.
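To illustrate the "level of learning models" information exchange, here is a minimal federated-averaging-style sketch in which each UE fits a local linear model on its private data and only the model parameters reach the server. The data, client count, and weight vector are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground-truth weights for the synthetic data

def local_fit(X, y):
    """Local least-squares on one UE: the raw data never leaves the device."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Five UEs, each with its own private dataset drawn from the same model.
clients = []
for _ in range(5):
    X = rng.standard_normal((50, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(50)
    clients.append((X, y))

# Server aggregates only the exchanged parameters, weighted by data size.
local_models = [local_fit(X, y) for X, y in clients]
sizes = np.array([len(y) for _, y in clients], dtype=float)
global_w = np.average(local_models, axis=0, weights=sizes)
print(global_w)   # close to true_w
```

Note that although raw data stays local, the exchanged parameters themselves can leak information about the training data, which is precisely why the model-level exchange gets its own threat analysis in this survey's taxonomy.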