GenomicsDB: max # of alleles should be configurable #2687
@lbergelson and @ldgauthier have confirmed that GATK CombineGVCFs (the predecessor to GenomicsDB) also had this same limit, so GenomicsDB is not doing anything radically new here. This ticket is just to ensure that the limit is configurable if it isn't already.
Will add a variable to our Protobuf configuration object - the JSON already has an option to set this.
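For illustration only, here is a minimal sketch of how such a limit might be surfaced on a builder-style configuration object rather than being hard-coded. The class and method names below are hypothetical and are not the actual GenomicsDB Protobuf or JSON API; only the default value of 50 comes from this issue.

```java
// Hypothetical sketch only: class and method names are illustrative,
// not the real GenomicsDB configuration API.
public final class MaxAllelesConfigSketch {

    /** Mock of a builder-style (Protobuf-like) import configuration. */
    static final class ImportConfig {
        final int maxAlternateAlleles;

        private ImportConfig(int maxAlternateAlleles) {
            this.maxAlternateAlleles = maxAlternateAlleles;
        }

        static Builder newBuilder() {
            return new Builder();
        }

        static final class Builder {
            // Default of 50 taken from the limit reported in this issue.
            private int maxAlternateAlleles = 50;

            Builder setMaxAlternateAlleles(int n) {
                this.maxAlternateAlleles = n;
                return this;
            }

            ImportConfig build() {
                return new ImportConfig(maxAlternateAlleles);
            }
        }
    }

    public static void main(String[] args) {
        // The caller raises the limit instead of relying on a compile-time constant.
        ImportConfig config = ImportConfig.newBuilder()
                .setMaxAlternateAlleles(100)
                .build();
        System.out.println("max alternate alleles = " + config.maxAlternateAlleles);
    }
}
```

The point is only that the allele limit becomes a caller-supplied value carried by the configuration object, which is what making it "configurable" amounts to.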
droazen pushed a commit that referenced this issue on Jul 6, 2018:
… sites-only query support, and bug fixes (#4645)

This PR addresses the changes required to use the latest version of GenomicsDB, which exposes new functionality such as:

* Multi-interval import and query support:
  * We create multiple arrays (directories) in a single workspace, one per interval. So, if you import the intervals ("chr1", [1, 100M]) and ("chr2", [1, 100M]), you end up with two directories/arrays in the workspace named chr1$1$100M and chr2$1$100M. The array names depend on the partition bounds.
  * During the read phase, the user supplies only the workspace. The array names are obtained by scanning the entries in the workspace and reading the right arrays. For example, if you wish to read ("chr2", [50, 50M]), then only the second array is queried. In the previous version of the tool, the array name was a constant, genomicsdb_array. The new version is backward compatible with respect to reads: if a directory named genomicsdb_array is found in the workspace directory, it is passed as the array for the GenomicsDBFeatureReader; otherwise the array names are generated from the directory entry names.
* Parallel import based on chromosome intervals. The number of threads to use can be specified as an integer argument to the executeImport call. If no argument is specified, the number of threads is determined by Java's ForkJoinPool (typically equal to the number of cores in the system).
  * The maximum number of intervals to import in parallel can be controlled with the command-line argument --max-num-intervals-to-import-in-parallel (default 1). Note that increasing parallelism increases the number of FeatureReaders opened to feed data to the importer: if you are using N threads and your batch size is B, you will have N*B feature readers open.
* Protobuf-based API for import and read (#3688, #2687)
* Option to produce the GT field
* Option to produce GT for spanning deletions based on the minimum PL value
* Doesn't support #4541 or #3689 yet - next version
* Bug fixes
  * Fix for #4716
  * More error messages
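As a rough sketch of the per-partition array naming and the open-reader arithmetic described above (not taken from the GenomicsDB source; the "contig$begin$end" format is inferred from the chr1$1$100M example, with the bounds written out numerically):

```java
// Sketch of the per-partition array naming and open-reader arithmetic
// described in the PR text above. Not the GenomicsDB implementation;
// the "<contig>$<begin>$<end>" format is inferred from the chr1$1$100M example.
public final class PartitionNamingSketch {

    /** Derive an array (directory) name from a partition's contig and bounds. */
    static String arrayNameForPartition(String contig, long begin, long end) {
        return String.format("%s$%d$%d", contig, begin, end);
    }

    public static void main(String[] args) {
        // Two import intervals -> two arrays (directories) in the workspace.
        System.out.println(arrayNameForPartition("chr1", 1L, 100_000_000L));
        System.out.println(arrayNameForPartition("chr2", 1L, 100_000_000L));

        // Open feature readers grow multiplicatively with parallelism:
        // N importer threads with batch size B keep N*B readers open.
        int threads = 4;
        int batchSize = 50;
        System.out.println("open feature readers = " + (threads * batchSize));
    }
}
```

This illustrates why raising --max-num-intervals-to-import-in-parallel or the thread count should be weighed against the file-handle cost of the extra open readers.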
Implemented in #4645
Original issue text: We're seeing messages like the following when running GenomicsDBImport. Is this limit of 50 configurable if we wanted to raise it, and if not, could it be made configurable?