
MLography: An End-to-End Computer Vision Methodology for Quantitative Metallography

Introduction

Metallography is crucial for a proper assessment of a material's properties. It mainly involves investigating the spatial distribution of grains and the occurrence and characteristics of inclusions or precipitates. This work presents a holistic artificial intelligence model for anomaly detection that automatically quantifies the degree of anomaly of impurities in alloys. We suggest the following examination process: (1) Deep semantic segmentation is performed on the inclusions (based on a suitable metallographic database of alloys and corresponding tags of inclusions), producing inclusion masks that are saved into a separate database. (2) Deep image inpainting is performed to fill the removed inclusion parts, resulting in 'clean' metallographic images that contain only the background of grains. (3) Grain boundaries are marked using deep semantic segmentation (based on another metallographic database of alloys), producing boundaries that are ready for further inspection of the distribution of grain sizes. (4) Deep anomaly detection and pattern recognition are performed on the inclusion masks to determine spatial, shape, and area anomalies of the inclusions. Finally, the system recommends areas of interest to an expert for further examination. The performance of the model is presented and analyzed based on a few representative cases. Although the models presented here were developed for metallography analysis, most of them can be generalized to a wider set of problems in which anomaly detection of geometrical objects is desired.
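The four-stage process above can be sketched at a high level as follows. This is an illustrative sketch only: the stage bodies are trivial stand-ins (thresholding, mean-fill, gradient magnitude) and every function name is a hypothetical placeholder, not the actual MLography API, which uses trained U-Nets, generative inpainting, and the anomaly measures described below.

```python
import numpy as np

def segment_inclusions(scan):
    # Stage 1: produce a binary inclusion mask (stand-in: simple thresholding).
    return scan > 0.5

def inpaint_inclusions(scan, mask):
    # Stage 2: fill removed inclusion pixels (stand-in: fill with the background mean).
    clean = scan.copy()
    clean[mask] = scan[~mask].mean()
    return clean

def segment_boundaries(clean):
    # Stage 3: mark grain boundaries (stand-in: a crude gradient magnitude).
    gy, gx = np.gradient(clean)
    return np.hypot(gx, gy) > 0.1

def detect_anomalies(mask):
    # Stage 4: score the inclusion mask (stand-in: pass the mask through as scores).
    return mask.astype(float)

def run_pipeline(scan):
    # Chain the four stages in the order described in the introduction.
    mask = segment_inclusions(scan)
    clean = inpaint_inclusions(scan, mask)
    boundaries = segment_boundaries(clean)
    scores = detect_anomalies(mask)
    return {"mask": mask, "clean": clean, "boundaries": boundaries, "scores": scores}
```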

Output examples

Segmentation

The Segmentation module segments objects (impurities and grain boundaries) using a U-Net trained on databases of only a few hundred small images (128x128 pixels). After the network is trained, each input image is divided into squares with large overlap. The network generates a segmentation mask for each square, and the final full segmentation mask is produced by averaging the predictions on overlapping pixels.
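The tile-and-average scheme can be sketched as follows. This is a minimal illustration, not the repository's implementation: `model` stands in for the trained U-Net (any callable mapping a tile to a same-shaped mask), and the image is assumed to be at least one tile in each dimension.

```python
import numpy as np

def predict_full_mask(image, model, tile=128, stride=64):
    """Segment a large image by tiling it into overlapping squares,
    predicting each tile, and averaging predictions on overlapping pixels."""
    h, w = image.shape
    acc = np.zeros((h, w), dtype=np.float64)   # summed predictions
    cnt = np.zeros((h, w), dtype=np.float64)   # how many tiles covered each pixel
    ys = list(range(0, h - tile + 1, stride))
    xs = list(range(0, w - tile + 1, stride))
    # make sure the bottom and right edges are covered by a final tile
    if ys[-1] != h - tile:
        ys.append(h - tile)
    if xs[-1] != w - tile:
        xs.append(w - tile)
    for y in ys:
        for x in xs:
            patch = image[y:y + tile, x:x + tile]
            acc[y:y + tile, x:x + tile] += model(patch)
            cnt[y:y + tile, x:x + tile] += 1.0
    return acc / cnt                            # average over overlapping tiles
```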

  1. Input image (other examples are in Segmentation/unet/data/metallography/train/image/).
  2. Impurities segmentation (other examples are in Segmentation/unet/data/full_segmented_binary/).
  3. Impurities inpainting, a preliminary step for the next stage, using Generative Inpainting (other examples are in Segmentation/unet/data/without_impurities/).
  4. Grain boundary (GB) segmentation (other examples are in Segmentation/unet/data/post_segmented_edges_binary/binary/).
  5. Final output image (other examples are in Segmentation/unet/data/post_segmented_edges_binary/masked/).

Anomaly Detection

In the following anomaly detection output examples of metallographic scans, red objects (impurities) are the most anomalous and blue objects the least anomalous. MLography provides several kinds of anomaly measures:

  1. Spatial Anomaly: objects (impurities) that are large and distant relative to their neighborhood are considered anomalous. (Spatial Anomaly on a sample image)
  2. Shape Anomaly: objects (impurities) with non-symmetric shapes are considered anomalous. (Shape Anomaly on a sample image)
  2.5. Spatial and Shape Anomaly: combining the scores of the Spatial and Shape anomalies highlights the most anomalous objects from both measures. (Spatial and Shape Anomaly combined)
  3. Area Anomaly: locating and quantifying areas of anomalous objects (impurities). (Area Anomaly on a sample image)
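The intuition behind the Spatial Anomaly measure (large objects far from their neighbors score high) can be sketched as a toy weighted-kth-neighbour computation. This is an illustrative sketch of the idea only, not the Weighted-kth-Neighbour algorithm as implemented in spatial_anomaly.py.

```python
import numpy as np

def spatial_anomaly_scores(centers, areas, k=1):
    """Toy spatial-anomaly score: an impurity that is both large and far
    from its k nearest neighbours scores high.
    centers: (n, 2) array of impurity centroids; areas: (n,) array of sizes."""
    centers = np.asarray(centers, dtype=float)
    areas = np.asarray(areas, dtype=float)
    diffs = centers[:, None, :] - centers[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(-1))     # pairwise distance matrix
    dist_sorted = np.sort(dist, axis=1)      # column 0 is the self-distance (0.0)
    kth = dist_sorted[:, k]                  # distance to the kth nearest neighbour
    scores = areas * kth                     # weight isolation by impurity size
    return scores / scores.max()             # normalise to [0, 1]
```

With three clustered small impurities and one large, distant one, the distant impurity receives the highest score.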

Citation

For more information about the measures and their implementations, please refer to the paper. If you found this code useful for your research, please consider citing: https://arxiv.org/abs/2104.11159

Instructions

Requirements

You may use the file MLographyENV to create an Anaconda environment with the required packages. To create the environment, run:

conda create --name <env_name> --file MLographyENV

Then, activate your environment:

conda activate <env_name>

And install ray:

pip install ray

Running

Segmentation

For impurities inpainting support, please clone the following git repository into the Segmentation/ directory: Generative Inpainting. Additional modifications to its source code are required to support the option --multiple, which inpaints all input images. To run the program (with trained U-Net models for impurities and GBs) use:

cd Segmentation/unet
python main.py --state=use --in_dir=<input directory of scans, we used "Segmentation/unet/data/metallography/train/"> --in_img=<the name of the image if a single input image is desired, e.g. "25.jpg", otherwise leave empty> --imp_model_name=<impurities segmentation u-net model path> --gb_model_name=<gb segmentation u-net model path>

Anomaly Detection

There are several scripts:

  1. anomaly_detection.py - the main script. It currently executes the Shape and Spatial Anomaly functionality (extract_impurities_and_detect_shape_spatial_anomaly) and the Area Anomaly functionality (extract_impurities_and_detect_anomaly), which uses the previous measures to locate and quantify the anomalous areas in the scan.
  2. impurity_extract.py - pre-processes the input scan image (using image-processing techniques such as the watershed algorithm).
  3. spatial_anomaly.py - implements the Spatial Anomaly functionality, mainly with the Weighted-kth-Neighbour algorithm.
  4. shape_anomaly.py - a pre-step for the Shape Anomaly functionality; it mainly calculates the difference between the area of each impurity and that of its minimal enclosing circle. It is used to create the training set for the auto-encoder model described next.
  5. neural_net.py - the auto-encoder neural network model responsible for training and loading data for the Shape Anomaly.
  6. use_model.py - uses the neural network for prediction and evaluation of the reconstruction loss, as well as testing, for the Shape Anomaly.
  7. area_anomaly.py - implements the Area Anomaly functionality, mainly with the Market-Clustering algorithm.
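The area-versus-enclosing-circle idea behind shape_anomaly.py can be sketched as follows. This is a simplified illustration: the enclosing circle here is approximated from the centroid (the real pre-step uses a minimal enclosing circle), and the score convention is hypothetical.

```python
import numpy as np

def shape_anomaly_score(points):
    """Toy shape-anomaly measure: compare an impurity's pixel area to the
    area of a circle enclosing it. Round, symmetric impurities fill most of
    the circle (score near 0); elongated or ragged ones leave a large gap
    (score near 1). points: (n, 2) array of pixel coordinates."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # approximate enclosing-circle radius: farthest pixel from the centroid
    radius = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).max()
    impurity_area = len(pts)              # one pixel ~ one unit of area
    circle_area = np.pi * radius ** 2
    # clamp at 0: discrete pixel counts can slightly exceed the continuous area
    return max(0.0, 1.0 - impurity_area / circle_area)
```

A filled disk of pixels scores near 0, while a thin line of pixels scores close to 1, matching the intuition that asymmetric shapes are anomalous.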

To run the program (on a trained auto-encoder model) use:

python anomaly_detection.py --input_scans=<input directory of scans, we used "./tags_png_cropped/*"> --model_name="<auto-encoder-model-name>" --min_threshold=<used for pre-processing, we used 30> --area_anomaly_dir=<log directory for output, default is "./logs/area/">

To rank all of the area anomalies, add the flag --order, and if you want to print the percentiles in which all areas of the input scans are placed, add the flag --print_order.

Training

Segmentation

cd Segmentation/unet
python main.py --state=train --model_name=<segmentation u-net model path>

Anomaly Detection

To train the auto-encoder model for the shape anomaly measure on your data, use:

python neural_net.py --model_name="<model name without file extension>" --anomaly_blank_label=<True if the use of blank labels for anomalous objects is desired>

Your data should reside in a directory under data/, divided into two directories: train/ and validation/. Each one should contain either a single directory - normal/ - or two directories - anomaly/ and normal/ - if the use of blank labels for anomalous objects is desired. These directories should hold all your data.

For splitting the data to the needed directories use the split_data.py script:

python anomaly_detection.py --detect=False --order=False --print_order=False --prepare_data=True --prepare_data_path="<path to data to be rescaled and prepared>"

Data

The data that was used in the paper for:

  • Semantic segmentation of impurities can be found in:

    • Images: Segmentation/unet/data/small/train/image_preprocess_cons/.
    • Labels: Segmentation/unet/data/small/train/label_fixed_cons/.
  • Semantic segmentation of grains' boundaries can be found in:

    • Images: Segmentation/unet/data/squares_128/train/image/.
    • Labels: Segmentation/unet/data/squares_128/train/inv_label/.
  • Anomaly Detection can be found in tags_png_cropped/.