Abstract:
With the recent growth of the Internet of Things (IoT) and the demand for faster computation, quantized neural networks (QNNs) or QNN-enabled IoT can offer better performance than conventional convolutional neural networks (CNNs). With the aim of reducing memory access costs and increasing computation efficiency, QNN-enabled devices are expected to transform numerous industrial applications with lower processing latency and power consumption. Another form of QNN is the binarized neural network (BNN), which restricts weights to two quantization levels. In this paper, CNN-, QNN-, and BNN-based pattern recognition techniques are implemented and analyzed on an FPGA. The FPGA hardware acts as an IoT device due to its connectivity with the cloud, and QNN and BNN are considered to offer better performance in terms of low power and low resource use on hardware platforms. The CNN and QNN implementations are compared in terms of accuracy, weight bit error, ROC curve, and execution speed. The paper also discusses various approaches that can be deployed to optimize CNN and QNN models with additionally available tools. The work is performed on the Xilinx Zynq 7020 series Pynq Z2 board, which serves as our FPGA-based low-power IoT device. The MNIST and CIFAR-10 databases are considered for simulation and experimentation. At full precision (32-bit), the accuracy is 95.5% and 79.22% and the execution time is 5.8 ms and 18 ms for the MNIST and CIFAR-10 databases, respectively.
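The quantization idea behind the QNN and BNN variants can be illustrated with a minimal NumPy sketch (not the paper's FPGA implementation): a uniform k-bit quantizer snaps full-precision weights onto a fixed grid, and BNN-style binarization keeps only the sign of each weight scaled by its mean magnitude. The function names and the symmetric scaling choice are illustrative assumptions.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Uniformly quantize a weight tensor to 2**num_bits levels over a symmetric range."""
    levels = 2 ** num_bits - 1
    w_max = np.max(np.abs(w)) + 1e-12           # scale by the largest magnitude
    step = 2 * w_max / levels                   # width of one quantization bin
    return np.round(w / step) * step            # snap weights onto the grid

def binarize_weights(w):
    """BNN-style binarization: keep only the sign, scaled by the mean magnitude."""
    alpha = np.mean(np.abs(w))                  # per-tensor scaling factor
    return alpha * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 3, 16, 32))  # a dummy conv kernel
print(np.unique(binarize_weights(w)).size)      # -> 2 distinct values (+/- alpha)
```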
Abstract:
In order to reduce unnecessary data transmissions from Internet of Things (IoT) sensors, this article proposes a multivariate-time-series-prediction-based adaptive data transmission period control (PBATPC) algorithm for IoT networks. Based on the spatio-temporal correlation between multivariate time-series data, we developed a novel multivariate time-series data encoding scheme utilizing the proposed time-series distance measure ADMWD. Composed of two significant factors for multivariate time-series prediction, i.e., the absolute deviation from the mean (ADM) and the weighted differential (WD) distance, the ADMWD simultaneously considers both the time distance from the prediction point and negative correlations between the time-series data. Utilizing a convolutional neural network (CNN) model, a subset of IoT sensor readings can be predicted from encoded multivariate time-series measurements, and we compared the predicted sensor values with actual readings to obtain the adaptive data transmission period. Extensive performance evaluations show a substantial performance gain of the proposed algorithm in terms of the average power reduction ratio (approximately 12%) and average data reconstruction error (approximately 8.32% MAPE). Finally, this article also provides a practical implementation of the proposed PBATPC algorithm over HTTP on an IEEE 802.11-based WLAN.
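The abstract does not give the exact ADMWD formula, so the sketch below is only a hedged illustration of how an ADM term and a time-weighted differential term could be combined, together with a simple rule that widens or shrinks the transmission period based on prediction error; the names admwd, next_transmission_period, and the decay factor gamma are assumptions, not the paper's definitions.

```python
import numpy as np

def admwd(x, y, gamma=0.9):
    """Hedged sketch of an ADMWD-style distance between two time series x and y.
    Combines absolute deviation from the mean (ADM) with a weighted differential (WD)
    term whose weights grow toward the prediction point (the last sample).
    The exact formulation in the paper may differ."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    adm = np.abs((x - x.mean()) - (y - y.mean())).mean()     # ADM component
    w = gamma ** np.arange(len(x) - 2, -1, -1)               # heavier weight near the end
    wd = np.abs(np.diff(x) - np.diff(y)) @ w / w.sum()       # weighted differential component
    return adm + wd

def next_transmission_period(pred, actual, base_period=10.0, tol=0.05):
    """Adaptive period control sketch: lengthen the reporting period when the
    prediction error is small, shorten it when the prediction drifts."""
    err = np.mean(np.abs(pred - actual) / (np.abs(actual) + 1e-9))   # MAPE-like error
    return base_period * 2 if err < tol else max(base_period / 2, 1.0)
```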
Abstract:
Internet of Things (IoT) and Artificial Intelligence (AI) have received much attention in recent years. Embedded with sensors and connected to the Internet, an IoT device can collect massive amounts of data and interact with humans. The data collected by IoT can be further analyzed by applying AI mechanisms to explore the information behind the data, which in turn shapes the interactions between humans and things. This paper aims to design and implement a Smart Hat, a wearable device that applies IoT and AI technologies to help kids explore knowledge in an easy, active, and proactive manner. The designed Smart Hat can identify objects in the surrounding environment and produce its output in audio form. The Smart Hat is intended to aid kids in the primary learning task of identifying objects in real life without the supervision of a third party (parents, teachers, or others). The device provides sophisticated technology for easy, active, and proactive learning in daily life. Performance studies show that the obtained results are promising and very satisfactory.
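The Smart Hat pipeline (camera frame, then object label, then spoken output) can be approximated with off-the-shelf components; the sketch below uses a pretrained MobileNetV2 classifier and the pyttsx3 text-to-speech engine purely as stand-ins, since the paper does not disclose its exact models or libraries.

```python
# Hedged sketch: identify an object in a captured frame and speak the label aloud.
import numpy as np
import pyttsx3                                   # offline text-to-speech
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")          # lightweight model for an IoT device

def identify_and_speak(frame_path):
    img = image.load_img(frame_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    label = decode_predictions(model.predict(x), top=1)[0][0][1]   # top-1 class name
    engine = pyttsx3.init()
    engine.say(f"I can see a {label}")           # audio output for the child
    engine.runAndWait()
    return label
```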
Abstract:
As an essential part of the Internet of Things, monocular depth estimation (MDE) predicts dense depth maps from a single red-green-blue (RGB) image captured by a monocular camera. Past MDE methods mostly focus on improving accuracy at the cost of increased latency, power consumption, and computational complexity, failing to balance accuracy and efficiency. Additionally, when speeding up depth estimation algorithms, researchers commonly ignore their adaptation to different hardware architectures on edge devices. This article aims to solve these challenges. First, we design an efficient MDE model for precise depth sensing on edge devices. Second, we employ a reinforcement learning algorithm to automatically prune redundant channels of the MDE model by finding a relatively optimal pruning policy. The pruning approach lowers model runtime and power consumption with little loss of accuracy while achieving a target pruning ratio. Finally, we accelerate the pruned MDE model while adapting it to different hardware architectures with a compilation optimization method. The compilation optimization further reduces model runtime by an order of magnitude on these hardware architectures. Extensive experiments confirm that our methods are effective for images of different sizes on two public datasets. The pruned and optimized MDE model achieves promising depth sensing with a better tradeoff among model runtime, accuracy, computational complexity, and power consumption than the state of the art on different hardware architectures.
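Channel pruning of the kind described here can be sketched with a simple magnitude criterion; the snippet below keeps the output channels of a PyTorch Conv2d layer with the largest L1 norms, using a fixed keep_ratio as a stand-in for the per-layer ratios the paper's reinforcement learning agent would discover.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv, keep_ratio=0.5):
    """Hedged sketch: keep the output channels of a Conv2d layer with the largest
    L1 weight norm. The paper learns per-layer ratios with reinforcement learning;
    here a fixed keep_ratio stands in for that learned pruning policy."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))      # L1 norm per output channel
    keep = torch.topk(scores, n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned, keep      # 'keep' tells the next layer which input channels survive

# Example: prune half of the output channels of a 32->64 convolution
layer, kept = prune_conv_channels(nn.Conv2d(32, 64, 3, padding=1), keep_ratio=0.5)
```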
Abstract:
Parking vehicles has become a very challenging task these days. The lack of available parking spaces, the lack of proper information about vacant spaces, and the search for those vacant spaces all add to the challenge of parking. Finding a place for one's vehicle not only wastes time, money, and effort, but also causes a lot of inconvenience to drivers. The problem further escalates into traffic jams, air pollution, and environmental damage. Hence, there is an urgent requirement for a proper parking system to be in place in order to minimize these parking problems. This paper is a survey of several of the parking systems that have been proposed to overcome the problems of parking. In this survey, the methods and implementations proposed by the various authors are discussed and compared based on their efficiency, optimization, cost, and a few other parameters.
Abstract:
This paper proposes a heterogeneous processor design for CNN-based AI applications on IoT devices. The heterogeneous processor contains an embedded RISC-V CPU that works as a general-purpose processor and an efficient CNN accelerator that supports a variety of CNN models through a list of macro instructions. For demonstration, we implement a prototype on an FPGA platform with the RISC-V CPU running at 20 MHz and the CNN accelerator running at 100 MHz. As a case study, we run a CNN-based face detection and recognition application on this prototype. The prototype can process one image in 0.72 seconds, and an ASIC implementation running at 400 MHz is estimated to process one image in less than 0.15 seconds, which can satisfy the needs of many IoT scenarios such as access control systems and check-in systems.
Abstract:
Combining peer-assisted wireless communication with mobile platforms such as manned and unmanned vehicles is an enabler for an enormous number of applications. A main enabler for these applications is the routing protocol that guides packets through the network. The assumption of full connectivity, which generally holds when routing packets in fully connected mobile ad hoc networks (MANETs), is not valid in the real-time systems studied here. In such networks, the routing protocol must handle intermittent connectivity and the absence of end-to-end connections. A new geographical routing algorithm, location-aware routing for delay tolerant networks (LAROD), is proposed and enhanced with a location dissemination service (LoDis); both suit intermittently connected MANETs (IC-MANETs). Because location dissemination takes time in IC-MANETs, LAROD is designed to route packets with only partial knowledge of geographic position. To achieve low overhead, LAROD uses a beaconless approach combined with a position-based resolution of bids when forwarding packets, which is kept simple through broadcast communication combined with routing overhearing. The algorithm is examined in a realistic application, unmanned aerial vehicles deployed in a reconnaissance scenario, with the help of the low-level packet simulator ns-2. The main objective is to make sound design choices for an actual application through holistic choices in routing, location management, and the mobility model. The holistic approach shows that maintaining a local database of node locations is both important and feasible. The LAROD-LoDis scheme is compared with a leading delay-tolerant routing algorithm, spray and wait, and is shown to have a competitive edge in terms of both delivery ratio and overhead. For spray and wait, this comparison involved a new packet-level implementation in ns-2, as opposed to the original connection-level custom simulator.
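The beaconless, position-based bidding in LAROD can be illustrated with a toy delay-bid function: neighbors that make more geographic progress toward the destination choose shorter timers, so the best-placed node rebroadcasts first and suppresses the rest when overheard. The constants and the linear bid below are illustrative assumptions, not LAROD's actual parameters.

```python
import math

MAX_DELAY = 0.1            # seconds; illustrative upper bound on the bid timer
RADIO_RANGE = 250.0        # meters; assumed radio range

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def forwarding_delay(me, sender, destination):
    """Hedged sketch of a LAROD-style beaconless bid: a node that makes more
    geographic progress toward the destination picks a shorter timer; nodes
    cancel their bids when they overhear an earlier rebroadcast."""
    progress = distance(sender, destination) - distance(me, destination)
    if progress <= 0:
        return None                               # no progress: do not bid
    return MAX_DELAY * (1 - progress / RADIO_RANGE)

# Example: node at (120, 30) overhears a packet sent from (0, 0) toward (1000, 0)
print(forwarding_delay((120, 30), (0, 0), (1000, 0)))   # ~0.05 s bid timer
```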
Abstract:
The application of classification and prediction algorithms in healthcare can facilitate the detection of specific vital signs, which can be used to treat or prevent disease. In this study, a new framework based on deep learning architectures is introduced for human activity recognition. Our proposed framework uses biosensors, electrocardiography (ECG) sensors, inertial sensors, and small single-board computers to collect and analyse sensory information. ECG and inertial sensory information is converted to images, and novel preprocessing techniques are used effectively. We use convolutional neural networks (CNN) along with sensor fusion, random forest, and long short-term memory (LSTM) with gated recurrent unit (GRU). The evaluation of the proposed approaches is carried out by considering well-known models such as transfer learning with MobileNet, combined CNN+MobileNet, and support vector machines in terms of accuracy. Moreover, the effects of the Null class, which is commonly seen in popular health-related datasets, are also investigated. The results show that LSTM with GRU, random forest, and CNN with sensor fusion provided the highest accuracies of 99%, 98%, and 98%, respectively. Since edge computing using sensors with relatively limited processing power and capacity has recently become quite common, a comparison is also provided to show the efficiency and edge computability of the architectures proposed in this study. The number of parameters used, the size of the models, and the training and testing times are evaluated and compared. The random forest algorithm provides the best training and testing time results, while the LSTM with GRU models have the smallest model size.
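A stacked LSTM + GRU classifier of the kind evaluated here can be sketched in Keras as follows; the window length, channel count, layer sizes, and six-class output are illustrative assumptions rather than the exact configuration reported in the study.

```python
from tensorflow.keras import layers, models

def build_lstm_gru(window_len=128, n_channels=9, n_classes=6):
    """Hedged sketch of a stacked LSTM + GRU classifier for windows of inertial/ECG
    signals; sizes are illustrative assumptions, not the paper's configuration."""
    model = models.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        layers.LSTM(64, return_sequences=True),   # temporal features, full sequence out
        layers.GRU(32),                           # summarizes the sequence into one vector
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_lstm_gru().summary()   # small parameter count, edge-friendly
```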
Abstract:
The correct classification of images is an important application in the monitoring of the Internet of Things (IoT). In research on IoT images, a key issue is recognizing multi-class images with high accuracy. As a result, this paper puts forward a classification method for multi-class images based on multiple linear regression (MLR). Firstly, the convolutional neural network (CNN) was improved to automatically generate a network from the IoT terminals and used to classify images into disjoint class sets (clusters), which were processed by a subsequently constructed expert network. After that, MLR was introduced to evaluate the accuracy and robustness of the classification of multi-class images. Finally, the proposed method was verified on benchmark datasets such as CIFAR-10, CIFAR-100, and MNIST. Our method was found to outperform other methods in classification and to improve the accuracy of the classic AlexNet by 2%. The research results provide theoretical evidence and a practical basis for the classification of multi-class IoT images.
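The abstract leaves the MLR evaluation step underspecified, so the following is only one hedged reading: fit a multiple linear regression from per-cluster expert scores to one-hot labels and check how well the fitted responses recover the true class. The synthetic data and variable names are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hedged illustration of multiple linear regression (MLR) used as an evaluation
# layer: regress one-hot ground-truth labels on per-cluster expert scores, then
# measure how well the fitted responses recover the true class.
rng = np.random.default_rng(0)
n, n_classes = 500, 10
scores = rng.random((n, n_classes))                 # stand-in for expert-network outputs
labels = scores.argmax(axis=1)                      # stand-in for ground-truth classes
onehot = np.eye(n_classes)[labels]

mlr = LinearRegression().fit(scores, onehot)        # one linear model per class column
pred = mlr.predict(scores).argmax(axis=1)
print("MLR agreement with labels:", (pred == labels).mean())
```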
Abstract:
Anomaly detection in networks to identify intrusions is a common and successful security measure used in many different types of network infrastructure. Network data traffic has increased due to the proliferation of viruses and other forms of cyber-attacks as network technology and applications have developed quickly. The limitations of classical intrusion detection, such as poor detection accuracy, high false negatives, and dependence on dimensionality reduction methods, become more apparent in the face of massive traffic volumes and characteristic information. This is why IoT infrastructures often use Software-Defined Networking (SDN), which allows for better network adaptability and control. Hence, this paper proposes a convolutional neural network-based Security Evaluation Model (CNN-SEM) to secure the source SDN controller from traffic degradation and protect the source network from DDoS attacks. The proposed CNN-SEM system can defend against DDoS attacks once they are discovered by applying and testing a convolutional neural network (CNN). The model can automatically extract the useful aspects of intrusion samples, allowing for precise classification of such data. The detection and mitigation modules evaluate the proposed SDN security system's performance, and the findings showed promise against next-generation DDoS attacks. The experimental results show that CNN-SEM achieves a high accuracy ratio of 96.6%, a detection ratio of 97.1%, a precision ratio of 97.2%, a performance ratio of 95.1%, and an enhanced security rate of 98.1% compared to other methods.
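A minimal 1-D CNN flow classifier in the spirit of CNN-SEM might look like the sketch below; the 78-feature input (typical of common flow datasets) and the layer sizes are assumptions, not the paper's exact architecture.

```python
from tensorflow.keras import layers, models

def build_cnn_detector(n_features=78):
    """Hedged sketch of a 1-D CNN flow classifier: maps a vector of per-flow
    statistics to a benign/DDoS decision. Feature count and layer sizes are
    illustrative assumptions."""
    return models.Sequential([
        layers.Input(shape=(n_features, 1)),          # flow statistics as a 1-D "signal"
        layers.Conv1D(32, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1, activation="sigmoid"),        # probability that the flow is DDoS
    ])

model = build_cnn_detector()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```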