Abstract:
This article brings together advances in multisource and multitemporal data fusion approaches across the various research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, supplying sufficient detail and references. More specifically, this work provides a bird's-eye view of many important contributions specifically dedicated to the topics of pansharpening and resolution enhancement, point cloud data fusion, hyperspectral and lidar data fusion, multitemporal data fusion, and big data and social media. In addition, the main challenges and possible future research directions in each area are outlined and discussed.
Abstract:
Transformer has achieved outstanding performance in many fields, such as computer vision, benefiting from its powerful and efficient modelling ability and a long-range feature extraction capability complementary to convolution. However, on the one hand, the lack of CNNs' innate inductive biases, such as translation invariance and local sensitivity, means that Transformer requires more data for learning. On the other hand, labelled hyperspectral samples are scarce because annotation is time-consuming and costly. To this end, we propose a semi-supervised hierarchical Transformer model for HSI classification to improve the classification performance of the Transformer with limited labelled samples. To perturb the samples more fully and extensively, two different data augmentation methods are applied to the unlabelled samples, yielding two sets of augmented samples. The pseudo-label obtained on an original unlabelled sample is then used to supervise both augmented samples derived from it, and only pseudo-labels above a confidence threshold are retained. To further improve model stability and classification accuracy, hierarchical patch embedding is proposed to eliminate mutual interference between pixels. Extensive experiments on three well-known hyperspectral datasets validate the effectiveness of the proposed semi-supervised Transformer model: it achieves excellent classification accuracy even with only 10 labelled samples per category, effectively improving the performance of the Transformer under small-scale labelled samples.
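The confidence-thresholded pseudo-labelling step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, the threshold value, and the use of softmax outputs as confidences are all assumptions:

```python
import numpy as np

def pseudo_label_mask(probs, threshold=0.95):
    """Keep only pseudo-labels whose top-class probability exceeds the threshold.

    probs: (n_samples, n_classes) softmax outputs on the *original*
    (unaugmented) unlabelled samples.
    Returns (labels, mask): the hard pseudo-labels and a boolean mask
    marking which samples contribute to the unsupervised loss.
    """
    labels = probs.argmax(axis=1)          # hard pseudo-labels
    mask = probs.max(axis=1) >= threshold  # retain confident predictions only
    return labels, mask

# Toy check: two confident predictions and one uncertain one.
probs = np.array([[0.98, 0.02],
                  [0.60, 0.40],
                  [0.05, 0.95]])
labels, mask = pseudo_label_mask(probs, threshold=0.9)
```

In a training loop, the retained pseudo-labels would supervise both sets of augmented samples derived from the same unlabelled inputs, while the masked-out samples are simply excluded from the unsupervised loss.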
Abstract:
Remote sensing provides valuable information about objects and areas from a distance, in either active (e.g., radar and lidar) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remote sensing imaging sensors (active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from a degraded observation. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to evolve along different paths according to sensor type.
Abstract:
The synergistic combination of deep learning (DL) models and Earth observation (EO) promises significant advances to support the Sustainable Development Goals (SDGs). New developments and a plethora of applications are already changing the way humanity will face the challenges of our planet. This article reviews current DL approaches for EO data, along with their applications toward monitoring and achieving the SDGs most impacted by the rapid development of DL in EO. We systematically review case studies to achieve zero hunger, create sustainable cities, deliver tenure security, mitigate and adapt to climate change, and preserve biodiversity. Important societal, economic, and environmental implications are covered. Exciting times are coming when algorithms and Earth data can help in our endeavor to address the climate crisis and support more sustainable development.
Abstract:
Hyperspectral images (HSIs) provide detailed spectral information through hundreds of (narrow) spectral channels (also known as dimensionality or bands), which can be used to classify diverse materials of interest accurately. The increased dimensionality of such data significantly enriches the information content but poses a challenge to conventional techniques for accurate analysis of HSIs (the so-called curse of dimensionality).
Abstract:
Hyperspectral remote sensing is based on measuring the electromagnetic signals emitted by the Sun and scattered and reflected from the Earth's surface. The radiance received at the sensor is usually degraded by atmospheric effects and instrumental (sensor) noise, which includes thermal (Johnson) noise, quantization noise, and shot (photon) noise. Noise reduction is therefore often applied as a preprocessing step for hyperspectral imagery. In the past decade, hyperspectral noise reduction techniques have evolved substantially from two-dimensional bandwise techniques to three-dimensional ones, and a variety of low-rank methods have been put forward to improve the signal-to-noise ratio of the observed data. Despite all these developments and advances, a comprehensive overview of these techniques and their impact on hyperspectral imagery applications has been lacking. In this paper, we address two main issues: (1) providing an overview of the techniques developed in the past decade for hyperspectral image noise reduction; and (2) discussing the performance of these techniques when applied as a preprocessing step to improve a hyperspectral image analysis task, i.e., classification. Additionally, this paper discusses hyperspectral image modeling and denoising challenges and describes the different noise types that exist in hyperspectral images. The denoising experiments confirm the advantages of low-rank denoising techniques over the other denoising techniques in terms of signal-to-noise ratio and spectral angle distance. In the classification experiments, classification accuracies improved when denoising techniques were applied as a preprocessing step.
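The simplest member of the low-rank family this abstract surveys can be sketched as a truncated SVD on the pixels-by-bands unfolding of the hyperspectral cube. This is a minimal illustration of the underlying low-rank assumption (strong spectral correlation), not any specific method from the paper; the function name and the synthetic rank-1 test cube are assumptions:

```python
import numpy as np

def lowrank_denoise(cube, rank):
    """Denoise a hyperspectral cube (H, W, B) by truncated SVD on its
    (H*W, B) unfolding -- the basic low-rank model exploiting the high
    correlation between spectral bands."""
    H, W, B = cube.shape
    X = cube.reshape(H * W, B)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0                         # keep only the leading components
    return ((U * s) @ Vt).reshape(H, W, B)

# Synthetic check: a rank-1 clean cube plus Gaussian noise.
rng = np.random.default_rng(0)
clean = rng.random((8, 8, 1)) * rng.random((1, 1, 16))   # rank-1 signal
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = lowrank_denoise(noisy, rank=1)
```

Because the noise is spread over all 16 spectral directions while the signal lives in one, projecting onto the leading subspace removes most of the noise energy; the three-dimensional and tensor-based methods the paper reviews refine this same idea.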
Abstract:
The increased availability of data from different satellite and airborne sensors over a particular scene makes it desirable to jointly use multiple data sources for improved information extraction and classification. In particular, hyperspectral sensors provide valuable spectral information that can be used to discriminate different classes of interest, but they do not provide structural and elevation information. LiDAR data, on the other hand, capture useful information related to the size, structure, and elevation of different objects, but cannot model the spectral characteristics of different materials. In this paper, a new classification framework is proposed for the integration of hyperspectral and LiDAR data. The recently introduced, theoretically sound attribute profile (AP) is used to model the spatial information of the LiDAR and hyperspectral data. In parallel, to reduce the redundancy of the hyperspectral data and address the so-called curse of dimensionality, supervised feature extraction techniques are applied. The new features obtained by the AP and the supervised feature extraction techniques are then concatenated into a stacked vector. The final classification map is produced using either support vector machine or random forest classifiers. The proposed method was applied to two data sets and the results were compared in terms of classification accuracies and CPU processing time. The results show that the proposed method classifies the integrated hyperspectral and LiDAR data accurately within a very acceptable CPU processing time. Notably, the method is fully automatic and requires no parameter tuning, which adds to its practical appeal.
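The core fusion step described here, concatenating the two feature sets into one stacked vector and feeding it to a random forest, can be sketched as follows. The features are random stand-ins, not real AP or hyperspectral features, and the dimensions, sample counts, and labelling rule are all assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# Stand-ins for the two feature sources described in the paper:
hsi_features = rng.standard_normal((n, 10))  # e.g. reduced hyperspectral features
lidar_ap = rng.standard_normal((n, 5))       # e.g. attribute-profile features from LiDAR
labels = (hsi_features[:, 0] + lidar_ap[:, 0] > 0).astype(int)

# Core step from the abstract: concatenate both feature sets into one stacked vector.
stacked = np.concatenate([hsi_features, lidar_ap], axis=1)   # shape (n, 15)

# One of the two classifiers the paper uses on the stacked vector.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(stacked[:150], labels[:150])
acc = clf.score(stacked[150:], labels[150:])
```

Because the toy labels depend on one feature from each source, neither feature set alone suffices, which mirrors the complementarity argument the abstract makes for fusing spectral and elevation information.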
Abstract:
Among all natural hazards, earthquake prediction is an arduous task. Although many studies have been published on earthquake hazard assessment (EHA), very few address the use of artificial intelligence (AI) in spatial probability assessment (SPA). The SPA modeling process is highly complex because of the seismological and geophysical factors involved. Recent studies have shown that inserting certain integrated factors, such as ground shaking, seismic gap, and tectonic contacts, into the AI model improves accuracy considerably. Because of the black-box nature of AI models, this paper explores the use of an explainable artificial intelligence (XAI) model in SPA. This study develops a hybrid Inception v3-ensemble extreme gradient boosting (XGBoost) model with Shapley additive explanations (SHAP). The model efficiently interprets and recognizes the factors' behavior and their weighted contributions, explaining which specific factors are responsible and how important each is to SPA. The earthquake inventory data were collected from the US Geological Survey (USGS) for the past 22 years, covering magnitudes of Mw 5 and above. Landsat-8 satellite imagery and digital elevation model (DEM) data were also incorporated in the analysis. Results revealed that the SHAP outputs align with the hybrid Inception v3-XGBoost model explanations (87.9% accuracy), indicating the necessity of adding new factors such as seismic gaps and tectonic contacts, without which the prediction model performs poorly. According to the SHAP interpretations, peak ground acceleration (PGA), magnitude variation, seismic gap, and epicenter density are the most critical factors for SPA. The recent Turkey earthquakes (Mw 7.8, 7.5, and 6.7) along the active East Anatolian fault validate the obtained AI-based earthquake SPA results. The conclusions drawn from the explainable algorithm show the importance of relevant, irrelevant, and newly proposed factors in AI-based SPA modeling.
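The quantity SHAP attributes to each factor is the Shapley value from cooperative game theory, which can be computed exactly for small games by brute force. This sketch illustrates the definition only; it is not the SHAP library or the authors' pipeline, and the toy additive value function is an assumption:

```python
import math
from itertools import combinations

def shapley_values(value, n):
    """Exact Shapley values for an n-player game. `value(S)` maps a
    frozenset of player indices to the payoff of that coalition; SHAP
    approximates this quantity with model features as the players."""
    phis = []
    for i in range(n):
        phi = 0.0
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for combo in combinations(others, k):
                S = frozenset(combo)
                # Weight: |S|! (n - |S| - 1)! / n!
                w = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                phi += w * (value(S | {i}) - value(S))  # marginal contribution of i
        phis.append(phi)
    return phis

# Toy additive "model": Shapley values recover each factor's own contribution.
contrib = {0: 3.0, 1: 1.0, 2: 0.5}
phi = shapley_values(lambda S: sum(contrib[p] for p in S), 3)
```

For an additive value function the Shapley value of each player equals its individual contribution; for real models the interaction terms are shared out across coalitions, which is what makes SHAP's weighted-contribution rankings (e.g., PGA and seismic gap above) meaningful.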
Abstract:
Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field. AI algorithms, especially deep learning-based ones, have been developed and applied widely to RS data analysis. The successful application of AI covers almost all aspects of Earth-observation (EO) missions, from low-level vision tasks like superresolution, denoising, and inpainting, to high-level vision tasks like scene classification, object detection, and semantic segmentation. Although AI techniques enable researchers to observe and understand the earth more accurately, the vulnerability and uncertainty of AI models deserve further attention, considering that many geoscience and RS tasks are highly safety critical. This article reviews the current development of AI security in the geoscience and RS field, covering the following five important aspects: adversarial attack, backdoor attack, federated learning (FL), uncertainty, and explainability. Moreover, the potential opportunities and trends are discussed to provide insights for future research. To the best of the authors’ knowledge, this article is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community. Available code and datasets are also listed in the article to move this vibrant field of research forward.
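Of the five aspects this review covers, the adversarial attack is the easiest to illustrate compactly. Below is a hedged sketch of the fast gradient sign method (FGSM), one classic attack in this family, applied to a hand-written logistic model; the model, weights, and example point are assumptions, and real attacks on RS imagery target deep networks instead:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression 'model':
    perturb x by eps in the sign direction of the loss gradient w.r.t. x.
    y is +1/-1; w, b are the (frozen) model parameters."""
    margin = y * (w @ x + b)
    grad_x = -y * (1.0 - sigmoid(margin)) * w   # d(-log sigmoid(margin)) / dx
    return x + eps * np.sign(grad_x)

# A correctly classified point becomes misclassified after the attack.
w = np.array([1.0, -1.0]); b = 0.0
x = np.array([0.3, -0.3]); y = 1            # w @ x = 0.6 > 0 -> correct
x_adv = fgsm(x, y, w, b, eps=0.5)           # w @ x_adv = -0.4 -> flipped
```

The perturbation is bounded per-coordinate by eps yet flips the prediction, which is exactly the vulnerability that makes adversarial robustness a safety-critical concern for EO models.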
Abstract:
Hyperspectral image classification has been a vibrant area of research in recent years. Given a set of observations, i.e., pixel vectors in a hyperspectral image, classification approaches try to allocate a unique label to each pixel vector. However, the classification of hyperspectral images is a challenging task for a number of reasons, such as the presence of redundant features, the imbalance among the limited number of available training samples, and the high dimensionality of the data.