Abstract:
Mutual Information (MI) is an important dependency measure between random variables, due to its tight connection with information theory. It has numerous applications, both in theory and practice. However, when employed in practice, it is often necessary to estimate the MI from available data. There are several methods to approximate the MI, but arguably one of the simplest and most widespread techniques is the histogram-based approach. This paper suggests the use of fuzzy partitioning for histogram-based MI estimation. It uses a general form of fuzzy membership functions, which includes the class of crisp membership functions as a special case. It is accordingly shown that the average absolute error of the fuzzy-histogram method is less than that of the naive histogram method. Moreover, the accuracy of our technique is comparable to, and in some cases superior to, that of the kernel density estimation (KDE) method, which is one of the best MI estimation methods. Furthermore, the computational cost of our technique is significantly less than that of the KDE. The new estimation method is investigated from different aspects, such as average error, bias, and variance. Moreover, we explore the usefulness of the fuzzy-histogram MI estimator in a real-world bioinformatics application. Our experiments show that, in contrast to the naive histogram MI estimator, the fuzzy-histogram MI estimator is able to reveal all dependencies among the gene-expression data.
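To make the idea concrete, here is a minimal sketch (not the paper's implementation) of a fuzzy-histogram MI estimator, assuming simple triangular membership functions; the names fuzzy_hist_mi and memberships are illustrative. With crisp 0/1 weights the same construction reduces to the naive histogram estimator.

```python
import numpy as np

def fuzzy_hist_mi(x, y, n_bins=10):
    """Minimal sketch of a fuzzy-histogram MI estimate between two samples."""
    def memberships(v):
        # map samples onto [0, n_bins-1] and split each unit of mass between
        # the two nearest bins using triangular (linear) membership weights
        v = (v - v.min()) / (v.max() - v.min() + 1e-12) * (n_bins - 1)
        lo = np.floor(v).astype(int)
        hi = np.minimum(lo + 1, n_bins - 1)
        w_hi = v - lo                      # weight toward the upper bin
        return lo, hi, 1.0 - w_hi, w_hi

    xlo, xhi, xwl, xwh = memberships(np.asarray(x, dtype=float))
    ylo, yhi, ywl, ywh = memberships(np.asarray(y, dtype=float))

    joint = np.zeros((n_bins, n_bins))
    for i in range(len(x)):
        for bx, wx in ((xlo[i], xwl[i]), (xhi[i], xwh[i])):
            for by, wy in ((ylo[i], ywl[i]), (yhi[i], ywh[i])):
                joint[bx, by] += wx * wy   # fractional co-occurrence counts
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])))
```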
Abstract:
Medical image registration methods that use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In most of these intensity-based methods, images are treated as 1D signals and each pixel is considered independent of its neighbors. Although the location of a pixel in an image carries information beyond its intensity, most intensity-based methods ignore it and use only pixel intensities to compute the image histograms. There are other methods, such as Regional Mutual Information (RMI), which use both the pixel intensities and the structural information of the image for registration. In this paper, using the intensities of the pixels neighboring each pixel in the image, it is proposed to build a new feature matrix for each image and to measure the mutual information between these matrices to determine how similar two images are. Because it uses the structural information of the images, this method is more robust against noise and intensity variation, more accurate than methods that use only pixel intensities, and faster than methods such as RMI. Experimental results on the rigid registration of clinical brain images (CT) show the superiority of the proposed scheme.
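As a rough illustration of the idea (not the authors' exact construction, which the abstract does not spell out), the sketch below summarizes each pixel by its neighborhood intensity and then estimates histogram MI between the two feature images; reducing each neighborhood to its mean intensity is purely a placeholder choice.

```python
import numpy as np

def neighbourhood_feature(img, radius=1):
    """Sketch: summarize each pixel by the mean intensity of its
    (2r+1)x(2r+1) neighbourhood (stand-in for the paper's feature matrix)."""
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode='edge')
    acc = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
    return acc / (2 * radius + 1) ** 2

def image_mi(img_a, img_b, n_bins=32):
    """Histogram MI between the neighbourhood features of two images."""
    fa = neighbourhood_feature(img_a).ravel()
    fb = neighbourhood_feature(img_b).ravel()
    joint, _, _ = np.histogram2d(fa, fb, bins=n_bins)
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])))
```

In a registration loop, a score of this kind would be maximized over the rigid transform parameters.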
Abstract:
Background: Identification of functional dependence among neurons is a necessary component in both the rational design of neural prostheses and the characterization of network physiology. The objective of this article is to provide a tutorial for neurosurgeons regarding information theory, specifically time-delayed mutual information, and to compare time-delayed mutual information, an information-theoretic quantity based on statistical dependence, with cross-correlation, a commonly used metric for this task, in a preliminary analysis of rat hippocampal neurons. Methods: Spike trains were recorded from rats performing a delayed nonmatch-to-sample task using an array of electrodes surgically implanted into the hippocampus of each hemisphere of the brain. In addition, spike-train simulations of positively correlated neurons, negatively correlated neurons, and neurons correlated by nonlinear functions were generated. These were evaluated by time-delayed mutual information (MI) and cross-correlation. Results: Application of time-delayed MI to experimental data indicated that the optimal bin size for information capture in the CA3-CA1 system was 40 ms, which may provide some insight into the spatiotemporal nature of encoding in the rat hippocampus. On simulated data, time-delayed MI showed peak values at appropriate time lags in positively correlated, negatively correlated, and complexly correlated data. Cross-correlation showed peaks and troughs with positively correlated and negatively correlated data, but failed to capture some higher-order correlations. Conclusions: Comparison of time-delayed MI with cross-correlation in the identification of functionally dependent neurons indicates that the methods are not equivalent. Time-delayed MI appeared to capture some interactions between CA3-CA1 neurons, at physiologically plausible time delays, that were missed by cross-correlation. It should be considered as a method for identification of functional dependence between neurons and may be useful in the development of neural prosthetics.
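For readers who want to see the computation itself, here is a hedged sketch (not the authors' code) of time-delayed MI between two spike trains that have already been binned into counts (e.g., 40 ms bins); the name delayed_mi and the capping of counts at n_states are illustrative choices.

```python
import numpy as np

def delayed_mi(counts_a, counts_b, lag, n_states=4):
    """Sketch: MI between neuron A's binned spike counts and neuron B's
    counts shifted by `lag` bins (time-delayed mutual information)."""
    counts_a = np.asarray(counts_a, dtype=int)
    counts_b = np.asarray(counts_b, dtype=int)
    if lag > 0:
        a, b = counts_a[:-lag], counts_b[lag:]
    elif lag < 0:
        a, b = counts_a[-lag:], counts_b[:lag]
    else:
        a, b = counts_a, counts_b
    a = np.clip(a, 0, n_states - 1)        # cap counts so the joint table stays small
    b = np.clip(b, 0, n_states - 1)
    joint = np.zeros((n_states, n_states))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1.0
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])))

# Scanning lags gives a profile whose peak locates the dominant delay:
# profile = [delayed_mi(spikes_a, spikes_b, lag) for lag in range(-10, 11)]
```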
Abstract:
Feature selection is an automatic choice for many pattern recognition tasks where dimensionality reduction is sought to minimize processing time. In spite of being a well-explored domain, mutual information based feature selection methods are currently resurgent because of their significant performance improvements. In this paper, we propose a weighted version of the well-known Maximal Relevance Minimal Redundancy criterion for the purpose of feature selection. The weight of the average redundancy of the candidate feature against all the selected features is continuously incremented with respect to the number of features already selected, while the weight of the class relevance of the candidate feature is kept fixed. An existing variant of the normalized mutual information score is utilized for the first time to compute both the relevance and the average redundancy. The performance of the proposed approach is demonstrated to be superior to those of several conventional mutual information based feature selection techniques, as well as some state-of-the-art feature selection approaches, based on analyses of real-life high-dimensional datasets.
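A hedged sketch of such a selection loop is shown below, assuming discretized (integer-coded) features and using scikit-learn's normalized_mutual_info_score as a stand-in for the normalized MI variant mentioned in the abstract; the specific weight schedule |S|/(|S|+1) is an illustrative assumption, not the paper's formula.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score as nmi

def weighted_mrmr(X, y, n_select):
    """Sketch of a weighted max-relevance / min-redundancy filter: the
    redundancy weight grows with the number of already-selected features,
    while the relevance weight stays fixed at 1."""
    n_features = X.shape[1]
    relevance = np.array([nmi(X[:, j], y) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([nmi(X[:, j], X[:, s]) for s in selected])
            weight = len(selected) / (len(selected) + 1.0)   # illustrative schedule
            score = relevance[j] - weight * redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```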
Abstract:
Techniques for recording small electrical signals became of paramount importance for the functional understanding of the central nervous system that developed through the 20th century (7). Through several incremental advances in computer and material technology and in the understanding of cortical function, signals from the cerebral cortex have now been used in brain-computer interfaces (BCIs) to drive external devices on the command of a paralyzed patient. Spinal cord stimulation is an example of a machine-central nervous system interface that came to widespread clinical use. It was developed subsequent to the gate control theory (9), which raised the possibility of electrical inhibition of pain, and to preclinical experiments in cats showing that pulsed DC stimulation of the dorsal columns inhibited paw withdrawal from painful stimuli (15). The initial crude interfaces, with a battery and simple electronic circuits that delivered electrical pulses to the spinal cord, have been replaced by more sophisticated instruments with advanced reprogramming options.
Abstract:
"Technology is never simply the result of a brilliant idea that fulfills a clear need but rather the result of such an idea and such a need, existing in an environment that already offers everything required to support its growth ...
展开
"Technology is never simply the result of a brilliant idea that fulfills a clear need but rather the result of such an idea and such a need, existing in an environment that already offers everything required to support its growth and development. It is like civilization which is not created out of superior intelligence but is the result of chain of developments, each made possible by certain preconditions" (4). Brain-computer interfaces (BCIs) have a great potential to allow patients with severe neurologic disabilities to return to interaction with society. Interest in this field has increased dramatically from a handful of centers investigating a decade ago to multiple centers throughout the world presently interested in resolving the issues of how to use the BCI to affect communication and mobility problems. The field can be divided into three areas: (a) sensory or input signaling, (b) motor/communication or output signaling and (c) connective or intraneuronal signaling. The latter two areas have the most in common. Each of these systems requires a signal source, a decoding paradigm, and a functional exploit.
Abstract:
Building classification models from real-world datasets has become a difficult task, especially for datasets with high-dimensional features. Unfortunately, these datasets may include irrelevant or redundant features, which have a negative effect on classification performance. Selecting the significant features and eliminating undesirable ones can improve the classification models. Fuzzy mutual information is a widely used feature selection measure for finding the best feature subset before the classification process. However, it requires considerable computation and storage space. To overcome these limitations, this paper proposes an improved fuzzy mutual information feature selection method based on representative samples. Based on benchmark datasets, the experiments show that the proposed method achieved better results in terms of classification accuracy, selected feature subset size, storage, and stability.
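The abstract does not specify how the representative samples are chosen, so the sketch below is only one plausible reading: pick the points closest to k-means centroids and score features on that reduced set, cutting the cost of the fuzzy-MI evaluations. The function name representative_subset and the k-means rule are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_subset(X, n_rep=100, seed=0):
    """Sketch: choose representative rows as the samples closest to
    k-means centroids, so feature scores are computed on far fewer rows."""
    km = KMeans(n_clusters=min(n_rep, len(X)), n_init=10, random_state=seed).fit(X)
    idx = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in km.cluster_centers_]
    return np.unique(idx)

# usage: score each feature against the labels on the reduced set only, e.g.
# rep = representative_subset(X)
# scores = [some_fuzzy_mi(X[rep, j], y[rep]) for j in range(X.shape[1])]
```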
Abstract:
For any n-partite state ρ_{A_1 A_2 ⋯ A_n}, we define its quantum mutual information matrix as an n × n matrix whose (i, j)-entry is given by the quantum mutual information I(ρ_{A_i A_j}). Although each entry of the quantum mutual information matrix, like its classical counterpart, is also used to measure bipartite correlations, the similarity ends here: quantum mutual information matrices are not always positive semidefinite, even for collections of up to 3-partite states. In this work, we define the genuine n-partite mutual information, which can be easily calculated. This definition is symmetric, nonnegative, bounded, and more accurate for measuring multipartite states.
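Written out in standard notation (a reconstruction from the abstract; the closed form of the genuine n-partite measure itself is not given there), the matrix is:

```latex
% Quantum mutual information matrix of an n-partite state \rho_{A_1 A_2 \cdots A_n};
% S(\cdot) denotes the von Neumann entropy. Notation reconstructed from the abstract.
\[
  M_{ij} \;=\; I\!\left(\rho_{A_i A_j}\right)
         \;=\; S\!\left(\rho_{A_i}\right) + S\!\left(\rho_{A_j}\right)
               - S\!\left(\rho_{A_i A_j}\right),
  \qquad i, j = 1, \dots, n .
\]
```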
Abstract:
In this paper, we propose a new method to measure the influence of a third variable on the interactions of two variables. The method, called transfer mutual information (TMI), is defined as the difference between the mutual information and the partial mutual information. It is based on the assumption that if the presence or absence of one variable changes the interactions of another two variables, then quantifying this change captures the influence of that variable on those two variables. Moreover, a normalized TMI and other derivatives of the TMI are introduced as well. Empirical analysis, including simulations as well as real-world applications, is carried out to examine this measure and to reveal more information among variables. (C) 2016 Elsevier B.V. All rights reserved.
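A hedged reconstruction of the definition (the abstract gives it only in words; "partial mutual information" is rendered here as the conditional mutual information, which is an assumption):

```latex
% Transfer mutual information of Z on the pair (X, Y), as described in words
% in the abstract; partial MI written as conditional MI (assumption).
\[
  \mathrm{TMI}_{Z \to (X,Y)} \;=\; I(X;Y) \;-\; I(X;Y \mid Z).
\]
```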
Abstract:
Optical fiber communication networks play an important role in the global telecommunication network. However, nonlinear effects in the optical fiber and transceiver noise greatly limit the performance of fiber communication systems. In this paper, the product of the mutual information (MI) and the communication bandwidth is used as the metric of the achievable information rate (AIR). The MI loss caused by the transceiver is also considered in this work, and the bit-wise MI, the generalized mutual information (GMI), is used to calculate the AIR. This loss is more significant when higher-order modulation formats are used. The AIR analysis is carried out for the QPSK, 16QAM, 64QAM, and 256QAM modulation formats, for communication systems with different communication bandwidths and transmission distances, based on the enhanced Gaussian noise (EGN) model. The paper provides suggestions for the selection of the optimal modulation format in different transmission scenarios.
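As a hedged sketch of the metric described here, the achievable rate is the bit-wise MI scaled by the bandwidth; the per-bit-level sum is the common approximation for bit-metric decoding with an m-bit symbol label and is an assumption about notation, not a formula quoted from the paper.

```latex
% Achievable information rate as bandwidth times bit-wise MI (GMI).
% B: communication bandwidth (symbol rate); b_k: the k-th bit of an m-bit
% symbol label; Y: the received sample.
\[
  \mathrm{AIR} \;=\; B \cdot \mathrm{GMI},
  \qquad
  \mathrm{GMI} \;\approx\; \sum_{k=1}^{m} I\!\left(b_k ; Y\right).
\]
```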