Soft-threshold Defensive Method for Adversarial Examples (AEs) of RSI: Inspired by the Evidence from Thorough Experiments

RayleighChen/AEs-remote-sensing-images

Should We Trust the Deep Neural Network for Remote Sensing Image Classification? A Comprehensive Analysis.

Abstract

Deep neural networks (DNNs), which learn hierarchical feature representations, have shown remarkable performance in big-data analytics of remote sensing. However, previous research indicates that DNNs are easily spoofed by adversarial examples: crafted images with artificial perturbations that push a DNN model toward wrong predictions. To comprehensively evaluate the impact of adversarial examples on remote sensing image (RSI) classification, this research tests 8 state-of-the-art classification DNNs on 6 benchmark RSI datasets. These datasets include both optical and synthetic-aperture radar (SAR) images of different spectral and spatial resolutions. In the experiments, we create 48 classification scenarios and use 4 cutting-edge attack algorithms to investigate the influence of adversarial examples on RSI classification. The experimental results show that the fooling rates of the attacks exceed 98% across the 48 scenarios. We also find that the severity of the adversarial problem is negatively correlated with the richness of the feature information in the optical data used by the model. In addition, adversarial examples generated from SAR images fool the models more easily, with an average fooling rate of 76.01%. By analyzing the class distribution of these adversarial examples, we find that the distribution of the misclassifications is not affected by the type of model or attack algorithm: adversarial examples of RSIs of the same class cluster into a few fixed classes. This analysis of adversarial-example classes not only helps us explore the relationships between dataset classes, but also provides insights for designing defensive algorithms.
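The repository does not show the attack implementations here, so as a minimal hedged sketch (not the paper's code; the toy linear classifier, the FGSM-style attack, and the `fooling_rate` definition below are illustrative assumptions), this is how a single-step signed-gradient perturbation and a fooling-rate metric can be computed with numpy:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fgsm_perturb(x, y, W, eps=0.1):
    """FGSM-style attack on a toy linear softmax classifier.

    x: (d,) flattened input image, y: true class index,
    W: (d, k) weight matrix. Returns the adversarial example x'.
    """
    p = softmax(x @ W)            # predicted class probabilities
    grad_logits = p.copy()
    grad_logits[y] -= 1.0         # d(cross-entropy)/d(logits) = p - onehot(y)
    grad_x = W @ grad_logits      # chain rule back to the input pixels
    return x + eps * np.sign(grad_x)  # one signed-gradient step of size eps

def fooling_rate(preds_clean, preds_adv):
    """One common definition: fraction of inputs whose prediction
    changes after the attack."""
    preds_clean = np.asarray(preds_clean)
    preds_adv = np.asarray(preds_adv)
    return float(np.mean(preds_clean != preds_adv))
```

In the paper's setting the gradient would come from backpropagation through a full DNN rather than this closed-form linear case, but the structure of the attack (perturb each pixel by `eps` in the direction that increases the loss) and the evaluation metric are the same.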
