
BRAVO Data Pipeline

Processing data to power the BRowse All Variants Online (BRAVO) API

  1. Build, download, or install dependencies.
    1. Compile custom tools
    2. Install external tools
    3. Download external data
  2. Collect the data to be processed into a convenient location.
  3. Modify the Nextflow configs to match paths on your system or cluster.
  4. Run the Nextflow workflows.

Input Data

Naming: The pipeline depends on the names of the input cram files having the sample ID as the first part of the filename. Specifically, the expectation is that the ID precedes the first . such that a call to getSimpleName() yields the ID.
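For example, a minimal shell sketch of the expected extraction (NWD123456.recab.cram is a made-up filename; the ID is everything before the first dot, mirroring what getSimpleName() returns):

# Extract the sample ID from a cram filename: everything before the first dot.
f=NWD123456.recab.cram
id=$(basename "$f" | cut -d. -f1)   # -> NWD123456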

Sequence Data

Source cram files. Original sequences from which the variant calls were made.

Variant calls

Source bcf files. Generated by running the TOPMed variant calling pipeline.

Data Preparation Tools

Compile Custom Tools

In the tools/ directory you will find tools and scripts to prepare your data for importing into the MongoDB database and for use in the BRAVO browser.

cd tools/cpp_tools
cget install .

This builds executables in tools/cpp_tools/cget/bin.
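To confirm the build, list the output directory; adding it to your PATH is an optional convenience, not a pipeline requirement:

ls tools/cpp_tools/cget/bin
export PATH="$PWD/tools/cpp_tools/cget/bin:$PATH"   # optional convenience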

External Tools

The required external tools (BamUtil, VEP, and Loftee) are described in dependencies.md.

External Data

The required external data (Gencode, Ensembl, dbSNP, and HUGO) are described in basis_data.md.

Nextflow Scripts

In the workflows/ directory are three Nextflow configs and scripts used to prepare the runtime data for the BRAVO API.

The steps of the pipeline are detailed in data_prep_steps.md.

The three Nextflow pipelines are:

  1. Prepare VCF Teddy
  2. Sequences
  3. Coverage
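Each is launched with the standard nextflow run command. The script and config paths below are illustrative placeholders; substitute the actual files under workflows/:

# Illustrative only: substitute the actual .nf script and config from workflows/.
nextflow run workflows/coverage.nf -c workflows/nextflow.config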

Downstream data for BRAVO API

The make_vignette_dir.sh script consolidates the results from the Nextflow scripts into a data directory organized for the BRAVO API. It is designed for small data sets and should be run after the three data pipelines complete.

There are two data sets that the BRAVO API needs to run:

  • Runtime Data: flat files on disk read at runtime.
  • Basis Data: files processed and loaded into MongoDB.

Downstream data subdirectory notes

data/
├── cache
├── coverage
│   ├── bin_1
│   ├── bin_25e-2
│   ├── bin_50e-2
│   ├── bin_75e-2
│   └── full
├── crams
│   ├── sequences
│   ├── variant_map.tsv.gz
│   └── variant_map.tsv.gz.tbi
└── reference
    ├── hs38DH.fa
    └── hs38DH.fa.fai
  • reference/ holds the reference FASTA files for the genome.
  • The API's SEQUENCE_DIR config value expects the directory that contains the sequences directory.
    • The sequences directory name is hardcoded.
    • The variant_map.tsv.gz file name is hardcoded.
    • The variant_map.tsv.gz.tbi file name is hardcoded.
  • Under sequences/, the directory structure and filenames are prescribed (see the sketch after this list):
    • All two-hex-character directories 00 to ff should exist as subdirectories.
    • Cram files must have filenames in the exact form sample_id.cram.
    • The subdirectory a cram belongs in is the first two characters of the MD5 hexdigest of the sample_id.
      • E.g. foobar123.cram would be in directory "ae":
        hashlib.md5("foobar123".encode()).hexdigest()[:2]
      • This directory structure is produced by the Nextflow pipeline.
  • The coverage directory contents are taken from the result/ dir of the coverage workflow.
  • variant_map.tsv.gz is an output of RandomHetHom3.
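A minimal shell sketch of these layout rules, assuming bash and GNU md5sum (the Nextflow pipeline already produces this structure; this is only to illustrate the bucketing):

# Create all 256 two-hex-character bucket directories (bash brace expansion).
mkdir -p data/crams/sequences/{{0..9},{a..f}}{{0..9},{a..f}}

# Place a cram in its bucket: the first two hex chars of md5(sample_id).
sample_id=foobar123
bucket=$(printf '%s' "$sample_id" | md5sum | cut -c1-2)   # "ae" per the example above
cp "$sample_id.cram" "data/crams/sequences/$bucket/"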
