Is your feature request related to a problem or challenge? Please describe what you are trying to do.
The ability to read partitioned tables was added to the ListingTable in #1141. In that implementation, the table lists all the files before applying the partition pruning, which will be very slow for large tables.
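For context, a minimal sketch of the list-everything-then-prune pattern described above. The `ObjectStore`/`FileMeta` types and `list_then_prune` are hypothetical stand-ins for illustration; the real listing API is async and stream-based, and this is not the actual ListingTable code:

```rust
// Hypothetical stand-ins for the object store listing API discussed in
// this issue; the real DataFusion trait is async and returns streams.
struct FileMeta {
    path: String,
}

trait ObjectStore {
    /// List the immediate sub-directories of `prefix`.
    fn list_dir(&self, prefix: &str) -> Vec<String>;
    /// Lazily list all files under `prefix`.
    fn list_file<'a>(&'a self, prefix: &str) -> Box<dyn Iterator<Item = FileMeta> + 'a>;
}

/// Current behavior: enumerate every file in the table, then prune.
/// Files in partitions that the filter would discard are still listed.
fn list_then_prune(
    store: &dyn ObjectStore,
    table_path: &str,
    partition_filter: impl Fn(&str) -> bool,
) -> Vec<FileMeta> {
    store
        .list_file(table_path) // lists every file in the table first
        .filter(|f| partition_filter(&f.path))
        .collect()
}
```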
Describe the solution you'd like
Instead of calling collect() on the stream of all files:
- if a filter exists on the first partition level, first call the list_dir() method of the object store to get the first-level partition directories and apply the filter to them
- if that filter was applied and was "selective enough", only list the files in the resulting folders for further pruning
- otherwise, use list_file on the entire table but evaluate the pruning progressively so that listing stops as soon as the limit is reached (a sketch of this flow follows the list)
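A rough sketch of the proposed flow, reusing the hypothetical `ObjectStore`/`FileMeta` stand-ins from the sketch above. The `selective_enough` heuristic and all helper names are assumptions made here for illustration, and a real implementation would operate on async streams rather than iterators:

```rust
/// Proposed flow: prune first-level partition directories before listing
/// any files, and stop listing as soon as `limit` matching files are found.
fn prune_then_list(
    store: &dyn ObjectStore,
    table_path: &str,
    partition_filter: impl Fn(&str) -> bool,
    file_filter: impl Fn(&FileMeta) -> bool,
    limit: usize,
) -> Vec<FileMeta> {
    // 1. List only the first partition level and filter the directories.
    let dirs = store.list_dir(table_path);
    let total = dirs.len();
    let kept: Vec<String> = dirs.into_iter().filter(|d| partition_filter(d)).collect();

    // Assumed heuristic: the filter is "selective enough" if it removed
    // at least half of the first-level directories.
    let selective_enough = kept.len() * 2 <= total;

    let mut result = Vec::new();
    if selective_enough {
        // 2a. Only list files inside the directories that survived pruning,
        // stopping as soon as `limit` matching files have been found.
        'outer: for dir in &kept {
            for file in store.list_file(dir) {
                if file_filter(&file) {
                    result.push(file);
                    if result.len() >= limit {
                        break 'outer;
                    }
                }
            }
        }
    } else {
        // 2b. Fall back to listing the whole table, but prune progressively:
        // because the listing is lazy, breaking early stops further listing.
        for file in store.list_file(table_path) {
            if file_filter(&file) {
                result.push(file);
                if result.len() >= limit {
                    break;
                }
            }
        }
    }
    result
}
```

The key point of this design is that pruning is first evaluated against directory names before any file listing is issued, and the lazy listing lets the limit short-circuit further requests to the object store.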
Describe alternatives you've considered
The current implementation works well on reasonably sized tables (a few thousand files), but will fall short on huge tables (e.g. 100k files).
Additional context
The function where the magic happens: