Abstract:
The present study examines global research in the domain of "Quantum Neural Networks" (QNN) on metrics such as publication output, document type, country, collaboration patterns, institutional and author affiliation, subjects, journal name, and citation patterns. The data for the study were sourced from the Scopus database for the period 1990-2019. Over this 30-year period, the global research output on the subject comprised 546 publications. The study describes QNN research in terms of the most productive countries, research organizations, and authors, the most popular subject areas, and source journals, and it characterizes the most highly cited papers in the subject.
Abstract:
This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses a quantum-bit representation to encode the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of discarding a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. The algorithm thus performs a region-by-region exploration and gradually evolves to find promising subspaces for further exploitation. This helps provide a set of appropriate weights when evolving the network structure and alleviates the noisy fitness-evaluation problem. The proposed model is tested on four benchmark problems, namely the breast cancer, iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.
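To make the qubit encoding concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation): each connectivity qubit stores an angle whose squared sine gives the link probability, observation collapses the string into a concrete topology, and a rotation-gate-style update nudges the probabilities towards the best individual observed so far.

```python
# Minimal sketch (assumed interface, not the authors' code): a quantum-bit
# style encoding where each connectivity "qubit" stores a probability
# amplitude rather than a hard 0/1 link.
import numpy as np

rng = np.random.default_rng(0)

n_connections = 10
# Each connection is represented by an angle theta; P(link exists) = sin(theta)^2.
theta = np.full(n_connections, np.pi / 4)  # start unbiased: P = 0.5

def observe(theta):
    """Collapse the qubit string into a concrete connectivity mask."""
    p_link = np.sin(theta) ** 2
    return (rng.random(theta.shape) < p_link).astype(int)

def rotate_towards(theta, best_mask, step=0.05 * np.pi):
    """Rotation-gate-style update: nudge each amplitude towards the best
    individual observed so far."""
    direction = np.where(best_mask == 1, 1.0, -1.0)
    return np.clip(theta + step * direction, 0.0, np.pi / 2)

# One evolutionary step: sample a population, keep the fittest mask.
population = [observe(theta) for _ in range(8)]
fitness = [int(mask.sum()) for mask in population]  # toy fitness only
best = population[int(np.argmax(fitness))]
theta = rotate_towards(theta, best)
print("P(link) after update:", np.round(np.sin(theta) ** 2, 3))
```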
Abstract:
Despite its undeniable success, classical machine learning remains a resource-intensive process. Practical computational efforts for training state-of-the-art models can now only be handled by high-speed computer hardware. As this trend is expected to continue, it should come as no surprise that an increasing number of machine learning researchers are investigating the possible advantages of quantum computing. The scientific literature on Quantum Machine Learning is now enormous, and a review of its current state that can be comprehended without a physics background is necessary. The objective of this study is to present a review of Quantum Machine Learning from the perspective of conventional techniques. Charting a research path from fundamental quantum theory to Quantum Machine Learning algorithms from a computer scientist's perspective, we discuss a set of basic routines that are the fundamental building blocks of Quantum Machine Learning algorithms. We implement a Quanvolutional Neural Network (QNN) on a quantum computer to recognize handwritten digits and compare its performance to that of its classical counterpart, the Convolutional Neural Network (CNN). Additionally, we implement a quantum support vector machine (QSVM) on the breast cancer dataset and compare it to the classical SVM. Finally, we implement the Variational Quantum Classifier (VQC) and several classical classifiers on the Iris dataset to compare their accuracies.
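As a rough illustration of the quanvolutional idea mentioned above, the following sketch (assuming PennyLane and its default.qubit simulator; the function name quanv_filter and the fixed entangling layer are illustrative stand-ins for the randomized circuits typically used) angle-encodes a 2x2 image patch and returns one expectation value per qubit as output channels.

```python
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_filter(pixels, weights):
    # Angle-encode a flattened 2x2 pixel patch (values in [0, 1]).
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # Fixed entangling layer standing in for a randomized quanvolutional circuit.
    for i in range(n_qubits):
        qml.RZ(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

rng = np.random.default_rng(1)
weights = rng.uniform(0, 2 * np.pi, n_qubits)
patch = rng.random(4)                 # one 2x2 patch, flattened
print(quanv_filter(patch, weights))   # four output channels for this patch
```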
Abstract:
With the recent growth of the Internet of Things (IoT) and the demand for faster computation, quantized neural networks (QNNs), or QNN-enabled IoT, can offer better performance than conventional convolutional neural networks (CNNs). With the aim of reducing memory access costs and increasing computation efficiency, QNN-enabled devices are expected to transform numerous industrial applications with lower processing latency and power consumption. Another form of QNN is the binarized neural network (BNN), which quantizes values to just two levels (1 bit). In this paper, CNN-, QNN-, and BNN-based pattern recognition techniques are implemented and analyzed on an FPGA. The FPGA hardware acts as an IoT device due to its connectivity with the cloud, and QNNs and BNNs are considered to offer better performance in terms of low power and low resource use on hardware platforms. The CNN and QNN implementations are compared based on accuracy, weight bit error, ROC curve, and execution speed. The paper also discusses various approaches that can be deployed for optimizing CNN and QNN models with additionally available tools. The work is performed on the Xilinx Zynq 7020 series Pynq Z2 board, which serves as our FPGA-based low-power IoT device. The MNIST and CIFAR-10 databases are considered for simulation and experimentation. At full precision (32 bits), the work shows an accuracy of 95.5% and 79.22% for the MNIST and CIFAR-10 databases, respectively, with execution times of 5.8 ms and 18 ms, respectively.
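For readers unfamiliar with the terminology, the following toy Python sketch (not the paper's FPGA pipeline) shows one common way uniform n-bit quantization and BNN-style binarization of weights are defined.

```python
# Illustrative sketch only: uniform n-bit weight quantization and 1-bit
# binarization as commonly defined for QNNs/BNNs.
import numpy as np

def quantize(w, n_bits):
    """Uniform symmetric quantization of weights to n_bits."""
    levels = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def binarize(w):
    """BNN-style binarization: sign(w) scaled by the mean magnitude."""
    return np.sign(w) * np.mean(np.abs(w))

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3))                  # a toy 3x3 convolution kernel
print("8-bit error:", np.abs(w - quantize(w, 8)).max())
print("1-bit error:", np.abs(w - binarize(w)).max())
```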
Abstract:
Quantum neural network (QNN) is a neural network model based on the principles of quantum mechanics. Its advantages of faster computing speed, higher memory capacity, smaller network size, and elimination of catastrophic forgetting make it a new approach to training on massive data that is difficult for classical neural networks. However, the quantum circuits of QNNs are typically designed by hand, with high circuit complexity and low accuracy in classification tasks. In this paper, a neural architecture search method, EQNAS, is proposed to improve QNN. First, the quantum population is initialized after quantum encoding of the images. Next, the quantum population is observed and its fitness evaluated. Finally, the quantum population is updated; the specific operations are quantum rotation gate updates, quantum circuit construction, and entirety interference crossover. The last two steps are carried out iteratively until a satisfactory fitness is achieved. Extensive experiments on the searched quantum neural networks demonstrate the feasibility and effectiveness of the proposed algorithm, and the searched QNN is clearly better than the original algorithm: the classification accuracy on the MNIST dataset and the warship dataset increases by 5.31% and 4.52%, respectively, while the parameter counts are reduced by 21.88% and 31.25%, respectively. Code will be available at https://gitee.com/Pcyslist/models/tree/master/research/cv/EQNAS and https://github.com/Pcyslist/EQNAS.
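The quantum rotation gate update named above can be illustrated at the amplitude level. The sketch below (illustrative only, not the EQNAS source; in practice the sign and size of the rotation step come from a lookup table) applies a 2x2 rotation to a qubit's (alpha, beta) amplitude pair while preserving normalization.

```python
import numpy as np

def rotate(alpha, beta, delta_theta):
    """Apply a 2x2 rotation to a qubit's amplitudes; |alpha|^2 + |beta|^2 stays 1."""
    c, s = np.cos(delta_theta), np.sin(delta_theta)
    return c * alpha - s * beta, s * alpha + c * beta

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # unbiased qubit: P(1) = 0.5
# Nudge the qubit towards bit value 1.
alpha, beta = rotate(alpha, beta, 0.05 * np.pi)
print("P(1) after update:", beta ** 2)
```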
Abstract:
Quantum Neural Networks (QNN) can improve upon the inadequacies of the classical neural network (CNN), which requires large amounts of memory and computational power. A new field of computation is emerging which integrates quantum computation with CNNs. A quantum-inspired hybrid model of quantum neurons and classical neurons is proposed, and this paper details an approach, perhaps the first attempt, towards stock price prediction using this concept. This work initiates the use of QNNs in financial engineering applications.
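A minimal sketch in the spirit of such a hybrid model might look as follows (illustrative only, assuming PennyLane; the circuit, feature encoding, and names like quantum_neurons are stand-ins, not the paper's model): a small variational circuit acts as a layer of "quantum neurons" whose expectation values feed a classical output neuron.

```python
import pennylane as qml
import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_neurons(x, params):
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)            # encode two input features
        qml.RY(params[i], wires=i)       # trainable rotation per "quantum neuron"
    qml.CNOT(wires=[0, 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

rng = np.random.default_rng(0)
params = rng.uniform(0, np.pi, n_qubits)
w, b = rng.normal(size=n_qubits), 0.0    # classical output neuron

x = np.array([0.3, 0.7])                 # e.g. two normalized price features
q_out = np.array(quantum_neurons(x, params))
prediction = float(w @ q_out + b)        # classical neuron on quantum features
print(prediction)
```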
Abstract:
Convolutional neural networks have been shown to extract features better than traditional algorithms in fields such as image classification, object detection, and speech recognition. In parallel, variational quantum algorithms incorporating parameterized quantum circuits achieve high performance on near-term quantum processors. In this paper, we propose a classification algorithm called variational convolutional neural networks (VCNN), allowing for efficient training and implementation on near-term quantum devices. The VCNN algorithm incorporates the multi-scale entanglement renormalization ansatz. We deploy the VCNN algorithm on the TensorFlow Quantum platform with numerical simulator backends, using the MNIST and Fashion-MNIST datasets. Experimental results show that the average accuracy of VCNN on classification tasks can reach up to 96.41%. Our algorithm achieves higher learning accuracy and needs fewer training epochs than quantum neural network algorithms. Moreover, we conclude from numerical simulations that circuit-based models have excellent resilience to noise.
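To give a flavor of the hierarchical ansatz the VCNN builds on, the sketch below (illustrative only, using PennyLane rather than the authors' TensorFlow Quantum code; the two-qubit block is a loose tree/MERA-like stand-in) coarse-grains four encoded features through two layers of entangling blocks and reads out a single qubit.

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=4)

def block(params, wires):
    """Two-qubit coarse-graining block: local rotations plus entanglement."""
    qml.RY(params[0], wires=wires[0])
    qml.RY(params[1], wires=wires[1])
    qml.CNOT(wires=wires)

@qml.qnode(dev)
def vcnn_like(x, params):
    for i in range(4):
        qml.RY(np.pi * x[i], wires=i)        # angle-encode four features/pixels
    block(params[0], [0, 1])                 # first layer: local blocks
    block(params[1], [2, 3])
    block(params[2], [1, 3])                 # second layer: coarse-grained pair
    return qml.expval(qml.PauliZ(3))         # read out the top of the tree

rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, size=(3, 2))
x = rng.random(4)
print(vcnn_like(x, params))                  # label score in [-1, 1]
```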
Abstract:
The quantized neural network (QNN) is an efficient approach to network compression and can be widely used in field-programmable gate array (FPGA) implementations. This article proposes a novel learning framework for n-bit QNNs whose weights are constrained to powers of two. To solve the vanishing-gradient problem, we propose a reconstructed gradient function for QNNs in the back-propagation algorithm that directly obtains the real gradient, rather than estimating an approximate gradient of the expected loss. We also propose a novel QNN structure named n-BQ-NN, which uses shift operations in place of multiplications and is better suited to inference on FPGAs. Furthermore, we design a shift vector processing element (SVPE) array that replaces all 16-bit multiplications in the convolution operations on FPGAs with SHIFT operations. We carry out comparative experiments to evaluate our framework. The results show that quantized ResNet, DenseNet, and AlexNet models trained with our framework achieve almost the same accuracies as the original full-precision models. Moreover, when our n-BQ-NN is trained from scratch with our framework, it achieves state-of-the-art results compared with typical low-precision QNNs. Experiments on the Xilinx ZCU102 platform show that our n-BQ-NN with the SVPE executes inference 2.9 times faster than with the vector processing element (VPE). As the SHIFT operation in our SVPE array consumes no digital signal processing (DSP) resources on FPGAs, the experiments also show that the SVPE array reduces average energy consumption to 68.7% of that of the 16-bit VPE array.
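The shift-for-multiply idea is easy to see in a toy example. The sketch below (an assumed illustration, not the paper's n-BQ-NN code) snaps weights to signed powers of two and evaluates a multiply-accumulate with bit shifts; for these particular values the shifted result matches the floating-point dot product exactly.

```python
import numpy as np

def quantize_pow2(w, n_bits=4):
    """Snap each weight to sign * 2^e with e in a small integer range."""
    sign = np.sign(w)
    e = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), -(2 ** (n_bits - 1)), 0)
    return sign, e.astype(int)

def shift_mac(x_int, sign, e):
    """Multiply-accumulate via shifts: x * 2^e == x << e (or >> -e)."""
    acc = 0
    for xi, si, ei in zip(x_int, sign, e):
        term = (xi << ei) if ei >= 0 else (xi >> -ei)
        acc += int(si) * term
    return acc

w = np.array([0.24, -0.51, 0.12])      # toy weights
sign, e = quantize_pow2(w)             # -> 2^-2, -2^-1, 2^-3
x = np.array([8, 4, 16])               # integer activations
print(shift_mac(x, sign, e))           # here equals x @ (sign * 2.0 ** e) = 2
```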
Abstract:
Layerwise quantized neural networks (QNNs), which adopt different precisions for weights or activations in a layerwise manner, have emerged as a promising approach for embedded systems. Layerwise QNNs deploy only the required number of data bits for each computation (e.g., the convolution of weights and activations), which in turn reduces computation energy compared to conventional QNNs. However, layerwise QNNs still incur large energy costs in conventional memory systems, since memory accesses are not optimized for the required precision of each layer. To address this problem, we propose Quant-PIM, an energy-efficient processing-in-memory (PIM) accelerator for layerwise QNNs. Quant-PIM selectively reads only the required data bits within a data word, depending on the precision, by deploying modified I/O gating logic in a 3-D stacked memory, thereby significantly reducing the energy consumed by memory accesses. In addition, Quant-PIM improves the performance of layerwise QNNs. When the required precision is half of the weight (or activation) size or less, Quant-PIM reads two data blocks in a single read operation by exploiting the memory bandwidth saved by the selective access, thus providing higher compute throughput. Our simulation results show that Quant-PIM reduces system energy by 39.1% to 50.4% compared to a PIM system with 16-bit quantized precision, without accuracy loss.
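The selective read can be mimicked in software with plain bit masking. The toy functions below (an illustration of the idea, not Quant-PIM's hardware logic) extract only the top n bits of a 16-bit stored word, and unpack two 8-bit operands from one word for the bandwidth-doubling case described above.

```python
def read_top_bits(word16, n):
    """Return only the n most significant bits of a 16-bit stored value."""
    return (word16 >> (16 - n)) & ((1 << n) - 1)

def read_two_packed(word16):
    """One 16-bit read yields two 8-bit operands (the bandwidth-doubling case)."""
    return (word16 >> 8) & 0xFF, word16 & 0xFF

w = 0b1011_0110_0101_1100
print(bin(read_top_bits(w, 4)))   # a 4-bit layer needs only '0b1011'
print(read_two_packed(w))         # (0b10110110, 0b01011100) -> (182, 92)
```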
Abstract:
This paper proposes a framework for training feedforward neural network models capable of handling class overlap and imbalance by minimizing an error function that compensates for such imperfections of the training set. A special case of the proposed error function can be used for training variance-controlled neural networks (VCNNs), which are developed to handle class overlap by minimizing an error function involving the class-specific variance (CSV) computed at their outputs. Another special case of the proposed error function can be used for training class-balancing neural networks (CBNNs), which are developed to handle class imbalance by relying on class-specific correction (CSC). VCNNs and CBNNs are compared with conventional feedforward neural networks (FFNNs), quantum neural networks (QNNs), and resampling techniques. The properties of VCNNs and CBNNs are illustrated by experiments on artificial data. Various experiments involving real-world data reveal the advantages offered by VCNNs and CBNNs in the presence of class overlap and class imbalance.
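As one plausible reading of the CSV idea (a hedged sketch of the kind of error function described, not the authors' exact formulation), the loss below adds a penalty on the within-class variance of the network outputs to a standard MSE term.

```python
import numpy as np

def csv_penalized_loss(outputs, targets, labels, lam=0.1):
    """MSE plus lam times the mean, over classes, of the output variance
    within each class (one reading of 'class-specific variance')."""
    mse = np.mean((outputs - targets) ** 2)
    csv = np.mean([outputs[labels == c].var() for c in np.unique(labels)])
    return mse + lam * csv

rng = np.random.default_rng(0)
labels = np.array([0, 0, 0, 1, 1])            # imbalanced toy batch
targets = np.where(labels == 1, 1.0, -1.0)
outputs = targets + rng.normal(0, 0.3, 5)     # noisy network outputs
print(csv_penalized_loss(outputs, targets, labels))
```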