
Spatial aliasing removal using deep learning super-resolution

Aayush Garg, Delft University of Technology
Abe Vos, Nikita Bortych and Deepak Gupta, University of Amsterdam

Abstract

Seismic data is often insufficiently or irregularly sampled due to the logistics and cost constraints associated with data acquisition. Even in the simple case of regularly but too coarsely sampled data, this leads to the loss of high wavenumbers and the overlap of aliased energy artifacts with the signal energy. When we image spatially aliased data, we encounter a trade-off between the resolution of the image and the aliasing artifacts. In this paper, we use a deep learning super-resolution network to upscale the data by a factor of two in the spatial direction and remove the spatial aliasing present in the data. We also make use of a loss function that minimizes the error both in the space-time and the f-k domain to make the network more robust. We show that the trained network is able to reconstruct the dense data with half the receiver interval and remove the spatial aliasing in the f-k domain, both for the training data and for a blind dataset. This reconstructed dense data improves the accuracy of seismic imaging as a result of the denser sampling and the removed spatial aliasing.
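The dual-domain loss mentioned above can be sketched as follows. This is a minimal PyTorch illustration, assuming an L1 misfit in both domains and a weighting factor alpha; the actual norm and weighting used in train.py may differ.

import torch

def dual_domain_loss(pred, target, alpha=0.5):
    # pred, target: shot gathers of shape (batch, 1, nt, nr)
    # alpha: assumed trade-off between the two terms (hypothetical value)
    loss_tx = torch.mean(torch.abs(pred - target))            # space-time misfit
    fk_pred = torch.abs(torch.fft.fft2(pred, dim=(-2, -1)))   # f-k amplitude spectrum
    fk_true = torch.abs(torch.fft.fft2(target, dim=(-2, -1)))
    loss_fk = torch.mean(torch.abs(fk_pred - fk_true))        # f-k misfit
    return alpha * loss_tx + (1.0 - alpha) * loss_fk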


Blind data test results

[Figure: input data with spatial aliasing vs. output data without spatial aliasing, shown in the shot domain and in the f-k domain]

Influence of spatial aliasing in imaging

Imaging with spatially aliased blind data

[Figure: image obtained from the spatially aliased blind data]

Imaging with spatial aliasing removed blind data

[Figure: image obtained from the blind data after spatial aliasing removal]


Repository info

Scripts

  • train.py: Python script to train the network on the given dataset
  • models.py: model class definitions for the SRCNN, EDSR and VDSR architectures (see the sketch after this list)
  • dataset.py: implementation of the dataset class and transformations
  • mat_generation.py: applies the trained model to the given dataset
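For orientation, SRCNN is the simplest of the three architectures: a three-layer convolutional network applied to the upsampled input (Dong et al., 2014). Below is a minimal PyTorch sketch; the exact channel counts, kernel sizes and activations in models.py may differ.

import torch
import torch.nn as nn

class SRCNN(nn.Module):
    # Minimal three-layer SRCNN sketch (channel counts are assumptions)
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Usage: a single-channel gather of shape (batch, 1, nt, nr)
# y = SRCNN()(torch.randn(1, 1, 251, 301))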

Folders

  • train_data: contains the training dataset
  • blind_data: contains the dataset for the blind test
  • final/data: contains the input and output data after training the network, saved separately as training/validation/test datasets
  • results/result_2: contains the final and intermediate results generated while training the network

Datasets

  • The training dataset consists of 400 shot records generated for the Marmousi model using acoustic finite-difference modelling. The input low-resolution, spatially aliased dataset contains shots with 20 m receiver spacing, and the output high-resolution dataset contains the same shots with 10 m receiver spacing.
  • The blind dataset consists of low-resolution, spatially aliased shot records with 20 m receiver spacing generated for the Sigsbee model using acoustic finite-difference modelling.

Note:

  • We made use of the freely available fdelmodc program to generate the training and blind datasets.
  • You can download both the training and blind datasets using the following Google Drive link.
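To see the spatial aliasing the network is trained to remove, you can inspect a shot gather in the f-k domain. The sketch below uses NumPy/Matplotlib; the file name, the .mat key handling, and the sampling values dt and dx are assumptions, not values taken from the repository.

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat

mat = loadmat("train_data/data_20_big.mat")          # hypothetical file name
key = [k for k in mat if not k.startswith("__")][0]  # first non-header variable
gather = mat[key][:, :, 0]                           # one shot gather, (nt, nr)

dt, dx = 0.004, 20.0                                 # assumed sampling: 4 ms, 20 m
fk = np.fft.fftshift(np.abs(np.fft.fft2(gather)))
f = np.fft.fftshift(np.fft.fftfreq(gather.shape[0], d=dt))
k = np.fft.fftshift(np.fft.fftfreq(gather.shape[1], d=dx))

plt.pcolormesh(k, f, fk, shading="auto")
plt.xlabel("wavenumber (1/m)")
plt.ylabel("frequency (Hz)")
plt.title("f-k spectrum of one shot gather")
plt.show()

Aliased energy shows up as events wrapping around at the Nyquist wavenumber 1/(2*dx); after super-resolution, the reconstructed 10 m data should be free of this wrap-around.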

Steps to train the network and run the blind test

  1. First of all, ensure you have the correct Python environment and dependencies to run the scripts (see the Dependencies section below).

  2. Clone/Download the repository and navigate to the downloaded folder.

$ git clone https://github.com/garg-aayush/spatial-alias-removal
$ cd spatial-alias-removal
  3. Download the datasets from the Google Drive link and add them to the respective directories in spatial-alias-removal

  4. In order to train the network, run

# This assumes you have access to a GPU
$ python train.py -d train_data -x data_20_big -y data_10_big -n 1 --device cuda:0 --n_epochs 50

  5. Then, use the trained network to remove spatial aliasing from the blind dataset

# This assumes you have access to a GPU
$ python mat_generation.py --data_root blind_data -x data_20_big --model_folder results/result_2 --device cuda:0

Useful information

  • The above steps, together with the other parameters mentioned in the scripts, were followed exactly to generate the results in this repository.

  • You can get more information about the various parameters either by going through the scripts or by running

$ python train.py -h
$ python mat_generation.py -h
  • The scripts assume the training/blind datasets to be of size nt X nr X ns, saved in .mat format. The network was trained with input samples of size 251 X 151 and output samples of size 251 X 301. We have not tested the network on examples of different sizes (see the shape-check sketch after this list).

  • You can use the trained network (already saved in results/result_2) directly, without training, by skipping step 4 in the above section.

  • Note that the scripts assume you have access to a GPU for training. If you don't have access to a GPU, change --device cuda:0 to --device cpu when running the scripts. We recommend training the network on a GPU; otherwise, training will take quite a long time.
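As a quick sanity check of the nt X nr X ns convention mentioned above, the sketch below loads a .mat file and prints the shape of each variable; the file name is a hypothetical placeholder.

from scipy.io import loadmat

mat = loadmat("train_data/data_20_big.mat")  # hypothetical file name
for key, value in mat.items():
    if not key.startswith("__"):             # skip MATLAB header entries
        print(key, value.shape)              # expect (nt, nr, ns), e.g. (251, 151, 400)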


Dependencies

The scripts require the following packages:

The best practice is to create a conda environment with the following packages before running the scripts.

