Abstract:
This paper proposes a new semisupervised dimension reduction (DR) algorithm based on a discriminative locally enhanced alignment technique. The proposed DR method has two aims: to maximize the distance between different classes according to the separability of pairwise samples and, at the same time, to preserve the intrinsic geometric structure of the data by the use of both labeled and unlabeled samples. Furthermore, two key problems determining the performance of semisupervised methods are discussed in this paper. The first problem is the proper selection of the unlabeled sample set; the second problem is the accurate measurement of the similarity between samples. In this paper, multilevel segmentation results are employed to solve these problems. Experiments with extensive hyperspectral image data sets showed that the proposed algorithm is notably superior to other state-of-the-art dimensionality reduction methods for hyperspectral image classification.
Abstract:
This paper proposes a new method of locality preserving projection (LPP), which replaces the squared L2-norm distances used in both the minimization and maximization terms of the objective of conventional LPP. The proposed method, termed Simultaneous p- and s-order Minmax Robust Locality Preserving Projection (psRLPP), is robust to outlier samples. We then design an efficient iterative algorithm to solve the objective problem of psRLPP. At each iteration, our method solves a trace ratio problem rather than an inexact ratio trace problem. We also provide insightful analysis of the existence of a local minimum and the convergence of the proposed algorithm. These characteristics make psRLPP more intuitive and powerful than the most up-to-date method, robust LPP via p-order minimization (RLPP), which considers only the p-order minimization of the L2-norm distance and must transform the original trace ratio problem at each iteration into an inexact ratio trace problem when solving for the projection vectors. Theoretical insights and the effectiveness of our method are further supported by promising experimental results for clustering.
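The trace-ratio step this abstract emphasizes can be illustrated with the classic iterative solver (a generic sketch, not the authors' psRLPP; the matrices A and B stand for whatever between- and within-locality scatter matrices the objective produces):

```python
import numpy as np

def trace_ratio(A, B, d, iters=100, tol=1e-10):
    """Solve max_W tr(W^T A W) / tr(W^T B W) with W^T W = I by the
    classic iteration: take the top-d eigenvectors of A - lam*B, then
    update lam to the current trace ratio until it stops changing."""
    lam = 0.0
    W = np.eye(A.shape[0])[:, :d]
    for _ in range(iters):
        vals, vecs = np.linalg.eigh(A - lam * B)  # ascending eigenvalues
        W = vecs[:, -d:]                          # top-d eigenvectors
        new_lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return W, lam
```

Solving the trace ratio directly, as above, avoids the ratio-trace relaxation that the abstract criticizes in RLPP.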
Abstract:
How to reduce noise with less speech distortion is a challenging issue for speech enhancement. We propose a novel approach that reduces noise while introducing less speech distortion. A noise signal can generally be considered to consist of two components: a "white-like" component with a uniform energy distribution and a "color" component whose energy is concentrated in certain frequency bands. An approach based on noise eigenspace projections is proposed to pack the color component into a subspace, named the "noise subspace". This subspace is then removed from the eigenspace to reduce the color component. For the white-like component, a conventional enhancement algorithm is adopted as a complementary processor. We tested our algorithm on a speech enhancement task using speech data from the Texas Instruments and Massachusetts Institute of Technology (TIMIT) dataset and noise data from NOISEX-92. The experimental results show that the proposed algorithm efficiently reduces noise with little speech distortion. Objective and subjective evaluations confirmed that it outperforms conventional enhancement algorithms.
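A minimal sketch of the eigenspace-projection idea, assuming the noise covariance is estimated from noise-only frames (the frame representation and the choice of k are our assumptions, not the paper's):

```python
import numpy as np

def remove_color_noise_subspace(frames, noise_frames, k):
    """Project out the k-dimensional 'noise subspace' spanned by the
    leading eigenvectors of the noise covariance (the 'color' component
    with concentrated energy); the residual 'white-like' part is left
    for a conventional enhancer. frames: (n_frames, dim)."""
    cov = np.cov(noise_frames, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    U = vecs[:, -k:]                          # color-component directions
    P = np.eye(cov.shape[0]) - U @ U.T        # projector onto complement
    return frames @ P.T
```

A conventional method such as spectral subtraction would then be applied to the projected frames to handle the remaining white-like component.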
Abstract:
It is well known that polarimetric synthetic aperture radar (PolSAR) backscattering features are highly influenced by the variation of incidence angle (VIA), which usually hampers the classification of most grazing-angle-sensitive targets, such as land and ocean targets. To alleviate this issue, various feature extraction approaches have been suggested to enhance class discriminability while reducing the dimensionality of the observed features. Laplacian eigenmap-based dimension reduction (DR) has proven to be an effective way to deal with VIA problems, provided that the manifold parameters [e.g., the heat kernel (HK)] are sought optimally, which is often difficult in practice. In this letter, an adaptive Laplacian eigenmap-based DR method is presented that finds a learned subspace in which the local geometry with discriminative prior knowledge is preserved as much as possible while near-optimal HK and scale-factor parameters are identified automatically. The learned feature representation is then employed for the subsequent classification. The improved Laplacian eigenmap algorithm was validated on three uninhabited aerial vehicle synthetic aperture radar (UAVSAR) L-band PolSAR images of the Gulf of Mexico Deepwater Horizon oil spill, which were clearly affected by the VIA phenomenon. The experimental results showed that the proposed algorithm works well for ocean target discrimination compared with current common methods.
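The baseline this letter builds on can be sketched as a plain Laplacian eigenmap with a fixed heat-kernel parameter t (the adaptive selection of t and the scale factor is the letter's contribution and is not reproduced here):

```python
import numpy as np

def laplacian_eigenmap(X, t, d):
    """Plain Laplacian eigenmap with heat-kernel affinity
    W_ij = exp(-||x_i - x_j||^2 / t); solves L y = mu D y and keeps the
    d eigenvectors after the trivial constant one."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.exp(-D2 / t)
    np.fill_diagonal(W, 0.0)
    deg = W.sum(1)
    L = np.diag(deg) - W
    # generalized problem L y = mu D y via symmetric whitening D^{-1/2}
    Dm = np.diag(1.0 / np.sqrt(deg))
    vals, vecs = np.linalg.eigh(Dm @ L @ Dm)
    return Dm @ vecs[:, 1:d + 1]  # skip the constant eigenvector
```

A poor choice of t is exactly the failure mode motivating the adaptive method: too small and the graph disconnects, too large and all affinities flatten to one.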
Abstract:
Airborne laser scanning is nowadays widely used for the estimation of forest stand parameters. Prediction models have to deal with high-dimensional laser data sets as well as limited field calibration data. This problem is exacerbated in mountainous areas, where forests are highly heterogeneous and field data collection is costly. Artificial neural network models and support vector regression (SVR) have already demonstrated their ability to address such issues for species-specific plot volume prediction. In this letter, we compare the stand parameter prediction accuracies of support vector machines and ordinary least squares multiple-regression models for dominant height, basal area, mean diameter, and stem density. The sensitivity of these techniques to the input variables is investigated by testing data sets that include different numbers and types of laser metrics, and by reducing their dimension with principal component and independent component analyses. Whereas the usual variables reflect only the vertical distribution, we also integrate the entropy of the horizontal distribution of the point cloud into the laser metrics. The results show that SVR prediction models are of similar accuracy to multiple-regression models but are more robust to the metrics included in the data sets. Preliminary dimension reduction of the data set by principal component analysis generally benefits SVR more than multiple regression. The optimal combination of laser metrics to include in the data sets depends mainly on the forest parameter to be estimated.
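The PCA-before-SVR setup can be sketched with scikit-learn on synthetic data (all names, sizes, and hyperparameters here are illustrative assumptions, not the letter's actual configuration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Toy stand-in for "many correlated laser metrics -> one stand parameter":
# a few latent factors generate 25 correlated metrics, and the target
# depends only on the factors, mimicking redundancy in laser data.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 25)) + 0.1 * rng.normal(size=(200, 25))
y = latent[:, 0] + 0.5 * latent[:, 1]

# Standardize, reduce dimension with PCA, then fit SVR.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVR(C=10.0))
model.fit(X[:150], y[:150])
score = model.score(X[150:], y[150:])  # R^2 on held-out samples
```

Swapping the PCA step in and out of such a pipeline is the kind of comparison the letter reports, with PCA tending to help SVR more than it helps ordinary multiple regression.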
Abstract:
In this paper, we propose a spectral–spatial feature-based classification (SSFC) framework that jointly uses dimension reduction and deep learning techniques for spectral and spatial feature extraction, respectively. In this framework, a balanced local discriminant embedding algorithm is proposed for spectral feature extraction from high-dimensional hyperspectral data sets. In the meantime, a convolutional neural network is utilized to automatically find spatial-related features at high levels. Then, the fused feature is obtained by stacking the spectral and spatial features together. Finally, a multiple-feature-based classifier is trained for image classification. Experimental results on well-known hyperspectral data sets show that the proposed SSFC method outperforms other commonly used methods for hyperspectral image classification.
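The fusion step (stacking per-pixel spectral and spatial features) is simple to sketch; the per-view normalization below is our assumption to keep either view from dominating, not part of the paper:

```python
import numpy as np

def fuse_features(spectral, spatial):
    """Stack per-pixel spectral features (after dimension reduction) with
    spatial features (e.g. CNN activations) into one fused vector; any
    multiple-feature classifier can then be trained on the result.
    Shapes: (n_pixels, d_spec) and (n_pixels, d_spat)."""
    spectral = spectral / (np.linalg.norm(spectral, axis=1, keepdims=True) + 1e-12)
    spatial = spatial / (np.linalg.norm(spatial, axis=1, keepdims=True) + 1e-12)
    return np.concatenate([spectral, spatial], axis=1)
```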
Abstract:
Inductive learning systems have been successfully applied in a number of medical domains. Nevertheless, the effective use of these systems often requires data preprocessing before applying a learning algorithm. This is especially important for multidimensional heterogeneous data represented by a large number of features of different types. Dimensionality reduction (DR) is one commonly applied approach. The goal of this paper is to study the impact of natural clustering (clustering according to expert domain knowledge) on DR for supervised learning (SL) in the area of antibiotic resistance. We compare several data-mining strategies that apply DR by means of feature extraction or feature selection, with subsequent SL, on microbiological data. The results of our study show that local DR within natural clusters may yield a better representation for SL than global DR on the whole data.
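Local DR within natural clusters can be sketched as per-cluster PCA over expert-defined column groups (a minimal illustration; the paper also considers feature selection as the DR step):

```python
import numpy as np

def local_pca(X, feature_clusters, d_per_cluster):
    """Local DR: run PCA separately inside each 'natural cluster' of
    features (expert-defined column groups), then concatenate the local
    components. Contrast with one global PCA over all columns."""
    parts = []
    for cols in feature_clusters:
        Xc = X[:, cols] - X[:, cols].mean(0)
        # principal directions of this cluster via SVD of centered data
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        parts.append(Xc @ Vt[:d_per_cluster].T)
    return np.concatenate(parts, axis=1)
```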
Abstract:
Coiled-tubing (CT) applications include drilling operations, hydraulic fracturing, well completions, removing sand or fill from the wellbore, and other applications that involve pumping fluids at high temperatures and high salinity. Because of curvature effects in CT, substantial pressure losses occur, limiting the maximum achievable flow rate. By adding specific chemicals known as "friction reducers" or "drag reducers" to the fluids, these pressure losses can be minimized to a great extent. Previously, several authors have published results for fluid flow through CT, but only a limited number of studies relate temperature and salinity effects to drag reduction in fluids flowing through CT. This paper discusses an experimental study of two commonly used drag reducers, ASP-700 and ASP-820, flowing through CT at different salinities and temperatures. Both small-scale and large-scale flow loops are used in this study. The small-scale flow loop includes a 1/2-in.-outside-diameter (OD) smooth CT, while the large-scale flow loop includes 2-3/8-in. rough CT. Elevated-temperature and salinity tests are conducted using optimum concentrations of drag reducers in fresh water, 2% KCl, and synthetic seawater. The flow data gathered were analyzed and used to develop correlations that can predict drag reduction at different salinities and temperatures. The developed correlations show reasonable agreement with the experimental data.
Abstract:
In this paper, the authors propose a dimension reduction level set method (DR-LSM) for shape and topology optimization of heat conduction problems on general free-form surfaces utilizing the conformal geometry theory. The original heat conduction optimization problem defined on a free-form surface embedded in the 3D space can be equivalently transferred and solved on a 2D parameter domain utilizing the conformal invariance of the Laplace equation along with the extended level set method (X-LSM). Reducing the dimension can not only significantly reduce the computational cost of finite element analysis but also overcome the hurdles of dynamic boundary evolution on free-form surfaces. The equivalence of this dimension reduction method rests on the fact that the covariant derivatives on the manifold can be represented by the Euclidean gradient operators multiplied by a scalar with the conformal mapping. The proposed method is applied to the design of conformal thermal control structures on free-form surfaces. Specifically, both the Hamilton-Jacobi equation and the heat equation, the two governing PDEs for boundary evolution and thermal conduction phenomena, are transformed from the manifold in 3D space to the 2D rectangular domain using conformal parameterization. The objective function, constraints, and the design velocity field are also computed equivalently with FEA on the 2D parameter domain with properly modified forms. The effectiveness and efficiency of the proposed method are systematically demonstrated through five numerical examples of heat conduction problems on the manifolds. (c) 2022 Elsevier B.V. All rights reserved.
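The equivalence claimed above rests on a standard identity (notation ours): under a conformal parameterization the surface metric is a scalar multiple of the flat metric, so the Laplace–Beltrami operator reduces to a rescaled planar Laplacian,

```latex
g = \lambda(u,v)\,(\mathrm{d}u^2 + \mathrm{d}v^2),
\qquad
\Delta_g f = \frac{1}{\lambda(u,v)}
\left(\frac{\partial^2 f}{\partial u^2} + \frac{\partial^2 f}{\partial v^2}\right),
```

hence \(\Delta_g f = 0\) on the surface exactly when the ordinary Laplace equation holds on the 2D parameter domain, which is what allows both governing PDEs to be transformed and solved there.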
Abstract:
In the digital era, the data generated by various applications are increasing drastically, both row-wise and column-wise; this creates a bottleneck for analytics and increases the burden on machine learning algorithms used for pattern recognition. This curse of dimensionality can be handled through reduction techniques. Dimensionality Reduction (DR) can be performed in two ways, namely Feature Selection (FS) and Feature Extraction (FE). This paper presents a survey of feature selection methods. From this extensive survey we conclude that most FS methods assume static data. However, with the emergence of IoT and web-based applications, data are generated dynamically and grow at a fast rate, so they are likely to be noisy, which further hinders algorithm performance. As the size of a data set increases, the scalability of FS methods is jeopardized, so existing DR algorithms do not address the issues raised by dynamic data. Using FS methods not only reduces the burden of the data but also helps avoid overfitting of the model.
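A minimal example of the filter family of FS methods such a survey covers (simple correlation ranking; purely illustrative, not a method from the paper):

```python
import numpy as np

def filter_select(X, y, k):
    """Minimal filter-style feature selection: rank features by absolute
    Pearson correlation with the target and keep the indices of the top
    k. Filter methods like this are the cheapest FS family; wrapper and
    embedded methods trade more compute for better subsets."""
    Xc = X - X.mean(0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(0) / (
        np.sqrt((Xc ** 2).sum(0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]
```

Static rankings like this are exactly what breaks down on dynamic, fast-growing data streams, which is the gap the survey highlights.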