+| key | default | description |
+| --- | --- | --- |
+| datafusion.catalog.create_default_catalog_and_schema | true | Whether the default catalog and schema should be created automatically |
+| datafusion.catalog.default_catalog | datafusion | The default catalog name - this impacts what SQL queries use if not specified |
+| datafusion.catalog.default_schema | public | The default schema name - this impacts what SQL queries use if not specified |
+| datafusion.catalog.information_schema | false | Should DataFusion provide access to `information_schema` virtual tables for displaying schema information |
+| datafusion.catalog.location | NULL | Location scanned to load tables for the default schema |
+| datafusion.catalog.format | NULL | Type of `TableProvider` to use when loading the default schema |
+| datafusion.catalog.has_header | false | If the file has a header |
+| datafusion.execution.batch_size | 8192 | Default batch size while creating new batches. This is especially useful for buffered in-memory batches, since creating tiny batches would result in too much metadata memory consumption |
+| datafusion.execution.coalesce_batches | true | When set to true, record batches will be examined between each operator and small batches will be coalesced into larger batches. This is helpful when there are highly selective filters or joins that could produce tiny output batches. The target batch size is determined by the `datafusion.execution.batch_size` configuration setting |
+| datafusion.execution.collect_statistics | false | Should DataFusion collect statistics after listing files |
+| datafusion.execution.target_partitions | 0 | Number of partitions for query execution. Increasing partitions can increase concurrency. Defaults to the number of CPU cores on the system |
+| datafusion.execution.time_zone | +00:00 | The default time zone. Some functions, e.g. `EXTRACT(HOUR FROM SOME_TIME)`, shift the underlying datetime according to this time zone and then extract the hour |
+| datafusion.execution.parquet.enable_page_index | false | If true, uses parquet data page level metadata (Page Index) statistics to reduce the number of rows decoded |
+| datafusion.execution.parquet.pruning | true | If true, the parquet reader attempts to skip entire row groups based on the predicate in the query and the metadata (min/max values) stored in the parquet file |
+| datafusion.execution.parquet.skip_metadata | true | If true, the parquet reader skips the optional embedded metadata that may be in the file schema. This setting can help avoid schema conflicts when querying multiple parquet files with schemas containing compatible types but different metadata |
+| datafusion.execution.parquet.metadata_size_hint | NULL | If specified, the parquet reader will try to fetch the last `size_hint` bytes of the parquet file optimistically. If not specified, two reads are required: one to fetch the 8-byte parquet footer and another to fetch the metadata length encoded in the footer |
+| datafusion.execution.parquet.pushdown_filters | false | If true, filter expressions are applied during the parquet decoding operation to reduce the number of rows decoded |
+| datafusion.execution.parquet.reorder_filters | false | If true, filter expressions evaluated during the parquet decoding operation will be reordered heuristically to minimize the cost of evaluation. If false, the filters are applied in the same order as written in the query |
+| datafusion.optimizer.enable_round_robin_repartition | true | When set to true, the physical plan optimizer will try to add round-robin repartitioning to increase parallelism and leverage more CPU cores |
+| datafusion.optimizer.filter_null_join_keys | false | When set to true, the optimizer will insert filters before a join between a nullable and a non-nullable column to filter out nulls on the nullable side. This filter can add additional overhead when the file format does not fully support predicate pushdown |
+| datafusion.optimizer.repartition_aggregations | true | Should DataFusion repartition data using the aggregate keys to execute aggregates in parallel, using the provided `target_partitions` level |
+| datafusion.optimizer.repartition_file_min_size | 10485760 | Minimum total file size in bytes required to perform file scan repartitioning |
+| datafusion.optimizer.repartition_joins | true | Should DataFusion repartition data using the join keys to execute joins in parallel, using the provided `target_partitions` level |
+| datafusion.optimizer.repartition_file_scans | true | When set to true, file groups will be repartitioned to achieve maximum parallelism. Currently supported only for the Parquet format, in which case multiple row groups from the same file may be read concurrently. If false, each row group is read serially, though different files may be read in parallel |
+| datafusion.optimizer.repartition_windows | true | Should DataFusion repartition data using the partition keys to execute window functions in parallel, using the provided `target_partitions` level |
+| datafusion.optimizer.repartition_sorts | true | Should DataFusion execute sorts in a per-partition fashion and merge afterwards, instead of coalescing first and sorting globally. When this flag is enabled, a plan of the form `SortExec: [a@0 ASC]` -> `CoalescePartitionsExec` -> `RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=1` is rewritten into `SortPreservingMergeExec: [a@0 ASC]` -> `SortExec: [a@0 ASC]` -> `RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=1`, which performs better in multithreaded environments |
+| datafusion.optimizer.skip_failed_rules | true | When set to true, the logical plan optimizer will produce warning messages if any optimization rules produce errors and then proceed to the next rule. When set to false, any rules that produce errors will cause the query to fail |
+| datafusion.optimizer.max_passes | 3 | Number of times that the optimizer will attempt to optimize the plan |
+| datafusion.optimizer.top_down_join_key_reordering | true | When set to true, the physical plan optimizer will run a top-down process to reorder the join keys |
+| datafusion.optimizer.prefer_hash_join | true | When set to true, the physical plan optimizer will prefer `HashJoin` over `SortMergeJoin`. `HashJoin` can work more efficiently than `SortMergeJoin` but consumes more memory |
+| datafusion.optimizer.hash_join_single_partition_threshold | 1048576 | The maximum estimated size in bytes for one input side of a `HashJoin` that will be collected into a single partition |
+| datafusion.explain.logical_plan_only | false | When set to true, the explain statement will only print logical plans |
+| datafusion.explain.physical_plan_only | false | When set to true, the explain statement will only print physical plans |
+| datafusion.sql_parser.parse_float_as_decimal | false | When set to true, the SQL parser will parse floats as decimal type |
+| datafusion.sql_parser.enable_ident_normalization | true | When set to true, the SQL parser will normalize identifiers (convert them to lowercase when not quoted) |
+
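As a usage sketch: the options above can be inspected and overridden per session via DataFusion SQL (listing them with `SHOW ALL` assumes `datafusion.catalog.information_schema` is enabled; the specific values shown are illustrative, not recommendations):

```sql
-- List all current settings (requires information_schema to be enabled)
SHOW ALL;

-- Override a single option for the current session
SET datafusion.execution.batch_size = '4096';

-- Check the value of one option
SHOW datafusion.execution.batch_size;
```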
+
+