Abstract:
We study both canonical reproducing kernels and constructive reproducing kernels for holomorphic functions in C^1 and C^n. We compare and contrast the two, and also develop important relations between the two types of kernels. We prove a new result about the relationship between these two kernels on certain domains of finite type.
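As background for the abstract above (these classical formulas are standard facts, not part of the abstract itself): a canonical reproducing kernel K for a Hilbert space of holomorphic functions on a domain Ω satisfies the reproducing property, and on the unit disc the Bergman kernel has an explicit closed form:

\[ f(z) = \int_{\Omega} f(w)\, K(z,w)\, dA(w), \qquad K_{\mathbb{D}}(z,w) = \frac{1}{\pi\,(1 - z\bar{w})^{2}}. \]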
Abstract:
Kernel structure-related traits are important factors that affect the yield and ear appearance of waxy corn (Zea mays L. var. ceratina), regulated together by genetic factors and cultivation practices. Two planting densities, 52500 plants/ha and 67500 plants/ha, were established in this study for three major cultivars that are promoted in the local region: Jinnuo 18# (JN18), Jinnuo 20# (JN20), and Jindannuo 41# (JdN41). We measured how the kernel length, width, thickness, volume, 100-kernel weight, and kernel weight of different parts of the waxy corn ear changed with planting density and cultivar. The results showed that planting density did not significantly affect the structure-related traits of kernels in different parts of the waxy corn ear. Most of the kernel structure-related traits in the middle and basal parts were significantly higher than those in the apical part of the waxy corn ear. Compared with planting density, cultivar had a greater influence on kernel structure-related traits in waxy corn. Comparison between different cultivars showed that the kernel structure-related traits in JN18 were better than those in JdN41.
Abstract:
Kernel smoothers are among the most popular nonparametric function estimates used for describing data structure. They can be applied to the fixed design regression model as well as to the random design regression model. The main idea of this paper is to present a construction of the optimum kernel and optimum boundary kernel by means of the Gegenbauer and Legendre polynomials.
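The abstract does not reproduce the authors' polynomial construction; as a hedged illustration of what a kernel smoother computes, the sketch below implements the Nadaraya-Watson estimator with the Epanechnikov kernel, the classical MSE-optimal second-order kernel. The bandwidth h and the toy data are arbitrary choices for the example.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, the classical MSE-optimal second-order kernel."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def nadaraya_watson(x_eval, x, y, h):
    """Nadaraya-Watson kernel regression estimate at points x_eval.
    x, y: observed design points and responses; h: bandwidth."""
    u = (x_eval[:, None] - x[None, :]) / h   # pairwise scaled distances
    w = epanechnikov(u)                      # kernel weights
    return (w @ y) / w.sum(axis=1)           # weighted local average

# Example: recover a smooth signal from noisy random-design data.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(200)
grid = np.linspace(0.05, 0.95, 10)           # stay away from the boundaries
print(nadaraya_watson(grid, x, y, h=0.1))
```

Near the interval endpoints this estimator is biased, which is exactly what the boundary kernels mentioned in the abstract are designed to correct; the evaluation grid above therefore stays away from the boundaries.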
Abstract:
Developing effective techniques for dealing with data from structured domains is becoming crucial. In this context, kernel methods are the state-of-the-art tool widely adopted in real-world applications that involve learning on structured data. In contrast, when one has to deal with unstructured domains, deep learning methods represent a competitive, or even better, choice. In this paper we propose a new family of kernels for graphs which exploits an abstract representation of the information inspired by the multilayer perceptron architecture. Our proposal exploits the advantages of the two worlds. On one side, we exploit the potential of state-of-the-art graph node kernels. On the other side, we develop a multilayer architecture through a series of stacked kernel pre-image estimators, trained in an unsupervised fashion via convex optimization. The hidden layers of the proposed framework are trained in a forward manner, which allows us to avoid the greedy layerwise training of classical deep learning. Results on real-world graph datasets confirm the quality of the proposal.
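The sketch below is a loose analogue of the layered idea, not the authors' graph kernel: it stacks scikit-learn KernelPCA layers whose built-in pre-image maps (learned by kernel ridge regression, a convex problem) are fit unsupervised and in a forward manner, on plain vector data rather than graph nodes. The depth, kernel parameters, and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy analogue (not the paper's exact estimator): each "layer" is a kernel PCA
# with a learned pre-image map, fit unsupervised and stacked forward, so no
# backpropagation through the layers is needed.
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 10))

layers = []
for depth in range(3):
    layer = KernelPCA(n_components=5, kernel="rbf", gamma=0.1,
                      fit_inverse_transform=True)  # enables pre-image estimation
    Z = layer.fit_transform(H)       # unsupervised projection into feature space
    H = layer.inverse_transform(Z)   # pre-image estimate = next hidden layer
    layers.append(layer)

print(H.shape)  # representation after three stacked kernel layers
```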
Abstract:
There has been growing interest in kernel methods for classification, clustering and dimension reduction. For example, kernel Fisher discriminant analysis, spectral clustering and kernel principal component analysis are widely used in statistical learning and data mining applications. The empirical success of the kernel method is generally attributed to the nonlinear feature mapping induced by the kernel, which in turn determines a low-dimensional data embedding. It is important to understand the effect of a kernel and its associated kernel parameter(s) on the embedding in relation to data distributions. In this paper, we examine the geometry of the nonlinear embedding for kernel principal component analysis (PCA) when polynomial kernels are used. We carry out eigen-analysis of the polynomial kernel operator associated with data distributions and investigate the effect of the degree of the polynomial. The results provide both insights into the geometry of nonlinear data embedding and practical guidelines for choosing an appropriate degree for dimension reduction with polynomial kernels. We further comment on the effect of centering kernels on the spectral property of the polynomial kernel operator.
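A minimal sketch of the object being analyzed: kernel PCA with a polynomial kernel reduces to an eigendecomposition of the centered kernel matrix, and sweeping the degree shows how the spectrum (and hence the embedding geometry) changes. The kernel form (x·y + 1)^d and the toy data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def poly_kernel_pca(X, degree, n_components):
    """Kernel PCA with the polynomial kernel k(x, y) = (x.y + 1)**degree.
    Returns the top eigenvalues and the embedded coordinates."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree           # polynomial kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n     # centering in feature space
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)         # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder to descending
    # scale eigenvectors so embedded coordinates carry the right variance
    coords = vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0))
    return vals[:n_components], coords

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 3))
for d in (2, 3, 4):                         # effect of the polynomial degree
    vals, _ = poly_kernel_pca(X, degree=d, n_components=5)
    print(d, np.round(vals / vals.sum(), 3))  # relative spectrum of top components
```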
Abstract:
Recent literature has shown the merits of having deep representations in the context of neural networks. An emerging challenge in kernel learning is the definition of similar deep representations. In this paper, we propose a general methodology to define a hierarchy of base kernels with increasing expressiveness and combine them via multiple kernel learning (MKL) with the aim to generate overall deeper kernels. As a leading example, this methodology is applied to learning the kernel in the space of Dot-Product Polynomials (DPPs), that is, positive combinations of homogeneous polynomial kernels (HPKs). We show theoretical properties about the expressiveness of HPKs that make their combination empirically very effective. This can also be seen as learning the coefficients of the Maclaurin expansion of any positive definite dot-product kernel, thus making our proposed method generally applicable. We empirically show the merits of our approach by comparing the effectiveness of the kernel generated by our method against baseline kernels (including homogeneous and non-homogeneous polynomials, RBF, etc.) and against another hierarchical approach on several benchmark datasets.
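A minimal sketch of the kernel family in question: a Dot-Product Polynomial is a nonnegative combination of homogeneous polynomial kernels, with the weights acting as Maclaurin coefficients. Here the weights mu are fixed by hand purely for illustration; the paper learns them via MKL.

```python
import numpy as np

def dpp_kernel(X, Z, mu):
    """Dot-Product Polynomial kernel: a positive combination of homogeneous
    polynomial kernels k_d(x, z) = (x.z)**d, one per degree d = 1..len(mu).
    The weights mu play the role of Maclaurin coefficients."""
    G = X @ Z.T                                  # linear Gram matrix
    return sum(m * G**d for d, m in enumerate(mu, start=1))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
mu = np.array([0.5, 0.3, 0.2])                   # nonnegative weights, degrees 1..3
K = dpp_kernel(X, X, mu)
print(K.shape, np.all(np.linalg.eigvalsh(K) > -1e-8))  # PSD sanity check
```

Positive semidefiniteness follows because each Hadamard power of a Gram matrix is PSD and the weights are nonnegative, which is what makes the MKL combination a valid kernel.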
Abstract:
In this paper, we show how the Ordered Decomposition DAGs (ODD) kernel framework, a framework that allows the definition of graph kernels from tree kernels, makes it easy to define new state-of-the-art graph kernels. Here we consider a fast graph kernel based on the Subtree kernel (ST), and we propose various enhancements to increase its expressiveness. The proposed DAG kernel has the same worst-case complexity as the one based on ST, but improved expressivity due to an augmented set of features. Moreover, we propose a novel weighting scheme for the features, which can be applied to other kernels of the ODD framework. These improvements allow the proposed kernels to improve on the classification performance of the ST-based kernel for several real-world datasets, reaching state-of-the-art performance.
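The ODD decomposition itself is not spelled out in the abstract, so the sketch below shows only the general flavor of a subtree-pattern graph kernel (a Weisfeiler-Lehman-style feature count, not the authors' DAG kernel or weighting scheme); the toy graphs and the depth are arbitrary.

```python
from collections import Counter

def subtree_features(adj, labels, depth=2):
    """Count rooted subtree patterns up to a given depth (a WL-style sketch,
    not the ODD decomposition itself)."""
    feats = Counter(labels)
    cur = list(labels)
    for _ in range(depth):
        nxt = []
        for v, nbrs in enumerate(adj):
            # signature = own label plus sorted multiset of neighbor labels
            sig = (cur[v], tuple(sorted(cur[u] for u in nbrs)))
            nxt.append(sig)
        feats.update(nxt)
        cur = nxt
    return feats

def subtree_kernel(g1, g2, depth=2):
    """Kernel value = dot product of the two pattern-count feature vectors."""
    f1, f2 = subtree_features(*g1, depth), subtree_features(*g2, depth)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# Two toy labeled graphs given as (adjacency lists, node labels).
g1 = ([[1, 2], [0], [0]], ["C", "O", "O"])        # a star
g2 = ([[1], [0, 2], [1]], ["C", "O", "O"])        # a path
print(subtree_kernel(g1, g2))
```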
Abstract:
Kernel adaptive filters (KAF) are a class of powerful nonlinear filters developed in Reproducing Kernel Hilbert Space (RKHS). The Gaussian kernel is usually the default kernel in KAF algorithms, but selecting the proper kernel size (bandwidth) is still an important open issue, especially for learning with small sample sizes. In previous research, the kernel size was set manually or estimated in advance by Silverman's rule based on the sample distribution. This study aims to develop an online technique for optimizing the kernel size of the kernel least mean square (KLMS) algorithm. A sequential optimization strategy is proposed, and a new algorithm is developed, in which the filter weights and the kernel size are both sequentially updated by stochastic gradient algorithms that minimize the mean square error (MSE). Theoretical results on convergence are also presented. The excellent performance of the new algorithm is confirmed by simulations on static function estimation, short-term chaotic time-series prediction, and real-world Internet traffic prediction.
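A hedged sketch of the core idea: in KLMS each new sample becomes a center with coefficient η·e, and here the Gaussian kernel size σ additionally follows a stochastic-gradient step on the instantaneous squared error. The step sizes, the floor on σ, and the toy data are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def gauss(x, C, sigma):
    """Gaussian kernel values of x against all centers C; also return
    the squared distances, reused by the bandwidth gradient."""
    d2 = np.sum((x - C) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2)), d2

def klms_adaptive_sigma(X, d, eta=0.5, rho=0.05, sigma=1.0):
    """KLMS where both the coefficients and the kernel size are updated
    sequentially by stochastic gradient on the squared error."""
    centers, alphas, errors = [], [], []
    for x, target in zip(X, d):
        if centers:
            C, a = np.array(centers), np.array(alphas)
            k, d2 = gauss(x, C, sigma)
            e = target - a @ k                         # prediction error
            # d(e^2)/d(sigma) = -2 e * sum_i a_i k_i d2_i / sigma^3
            grad = -2.0 * e * np.sum(a * k * d2) / sigma**3
            sigma = max(sigma - rho * grad, 1e-3)      # keep bandwidth positive
        else:
            e = target
        centers.append(x)
        alphas.append(eta * e)                         # standard KLMS update
        errors.append(e)
    return np.array(errors), sigma

# Example: learn y = sin(3x) from streaming samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 1))
d = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(300)
err, sigma = klms_adaptive_sigma(X, d)
print(np.mean(err[:50] ** 2), np.mean(err[-50:] ** 2), sigma)
```

A practical implementation would also bound the growing dictionary of centers; that is omitted here for brevity.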
Abstract:
We derive some new connections between the Szegő kernel, the Poisson kernel, the Dirichlet-to-Neumann map, and the Bergman kernel in planar domains. The new formulas shed light on the complexity of the Poisson kernel in multiply connected domains.
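For orientation, in the simply connected case with smooth boundary the three kernels are classically linked (these identities are standard background, not the paper's new formulas); the abstract's point is that such clean relations become more complicated in multiply connected domains:

\[ K(z,w) = 4\pi\, S(z,w)^{2}, \qquad P(z,w) = \frac{|S(z,w)|^{2}}{S(z,z)}, \]

where K, S, and P denote the Bergman, Szegő, and Poisson kernels, with the Poisson kernel normalized against arc-length measure on the boundary.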
Abstract:
We study three canonical kernels on domains in C^1 and C^n. We exposit both the history and the substance of these concepts, and we also present some new calculations and ideas adherent thereto.