Describe the usage question you have. Please include as many useful details as possible.
Hi,
being new to Apache Arrow I'm a little confused about the different options for interacting with Parquet files. The documentation for the Go library is very sparse in many places, and existing examples from various sources don't seem to match my use case.
The question is:
Given a Parquet file containing several thousand rows, each with an ID column and a Data column, where the Data column holds some larger blob: how do you look up certain rows by their ID and extract the corresponding Data values in an efficient and memory-friendly way?
By 'memory-friendly' I mean that only the relevant values should be read from the Parquet file and loaded into memory, not a whole column, row group, batch or chunk. Reading the ID column completely into memory would be fine, but not the blob data.
I tried the variant of creating a pqarrow.RecordReader from a pqarrow.FileReader backed by a parquet file reader, but it seems that the record batches always load the whole batch (incl. all column data) into memory up front, rather than loading a column entry's value lazily when it is accessed by index. While this approach works as desired functionally, it has very high memory usage due to the large blobs.
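For reference, this is roughly the pattern I used (a minimal sketch against the v14 Go module; the file name, the ID column index 0, the row-group selection and the batch size are placeholders, not taken from real code):

```go
package main

import (
	"context"
	"fmt"

	"github.com/apache/arrow/go/v14/arrow/memory"
	"github.com/apache/arrow/go/v14/parquet/file"
	"github.com/apache/arrow/go/v14/parquet/pqarrow"
)

func main() {
	// Open the Parquet file (path is a placeholder).
	rdr, err := file.OpenParquetFile("data.parquet", false)
	if err != nil {
		panic(err)
	}
	defer rdr.Close()

	fr, err := pqarrow.NewFileReader(rdr,
		pqarrow.ArrowReadProperties{BatchSize: 64}, memory.DefaultAllocator)
	if err != nil {
		panic(err)
	}

	// Restrict the reader to the ID column (assumed index 0) and the first
	// row group; passing nil for either slice selects everything.
	rr, err := fr.GetRecordReader(context.Background(), []int{0}, []int{0})
	if err != nil {
		panic(err)
	}
	defer rr.Release()

	for rr.Next() {
		rec := rr.Record()
		fmt.Println(rec.Column(0)) // the ID values of this batch
	}
}
```

Even when restricted to a single column like this, each batch appears to be fully materialized before any value can be accessed.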
I also tried extracting the relevant row indexes in a first sweep, in order to then somehow retrieve only those rows from the Data column in a second sweep, but I could not find a way that improved on the first approach.
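The bookkeeping for the two-sweep idea is straightforward; the hard part is the second sweep. As an illustration, here is a small self-contained helper (names are mine, not from the Arrow API) that maps the global row indexes found in the first sweep to (row group, offset-within-group) pairs, given the per-row-group row counts from the file metadata:

```go
package main

import "fmt"

// rowLocation identifies a row by its row group and its offset within that group.
type rowLocation struct {
	RowGroup int
	Offset   int64
}

// locateRows maps global row indexes (sorted ascending) to per-row-group
// offsets, given the number of rows in each row group.
func locateRows(groupRowCounts []int64, rowIndexes []int64) []rowLocation {
	locs := make([]rowLocation, 0, len(rowIndexes))
	var start int64
	group := 0
	for _, idx := range rowIndexes {
		// Advance to the row group containing idx.
		for group < len(groupRowCounts) && idx >= start+groupRowCounts[group] {
			start += groupRowCounts[group]
			group++
		}
		if group == len(groupRowCounts) {
			break // index past the end of the file
		}
		locs = append(locs, rowLocation{RowGroup: group, Offset: idx - start})
	}
	return locs
}

func main() {
	// Two row groups of 1000 rows each (made-up numbers).
	fmt.Println(locateRows([]int64{1000, 1000}, []int64{5, 999, 1000, 1500}))
	// → [{0 5} {0 999} {1 0} {1 500}]
}
```

What I could not figure out is how to then fetch exactly those offsets from the Data column without pulling whole batches into memory.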
There is probably a simple way (without using pqarrow?) of just iterating over the Parquet file's row groups, but the available data structures (FieldReaders, chunks, etc.) are not really well documented.
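From reading the file package, I would guess at something like the following for the second sweep, using the low-level column chunk readers directly (a sketch only; the column index 1, the BYTE_ARRAY physical type of the Data column, and the exact reader API are assumptions on my part and I have not verified them):

```go
package main

import (
	"fmt"

	"github.com/apache/arrow/go/v14/parquet"
	"github.com/apache/arrow/go/v14/parquet/file"
)

// readBlobAt reads a single value of the Data column (assumed to be column
// index 1 with physical type BYTE_ARRAY) at the given offset within one
// row group, without materializing the rest of the chunk.
func readBlobAt(rdr *file.Reader, rowGroup int, offset int64) (parquet.ByteArray, error) {
	cc, err := rdr.RowGroup(rowGroup).Column(1)
	if err != nil {
		return nil, err
	}
	bar, ok := cc.(*file.ByteArrayColumnChunkReader)
	if !ok {
		return nil, fmt.Errorf("column 1 is not BYTE_ARRAY")
	}

	// Skip straight to the target row instead of reading everything before it
	// (error handling for Skip omitted in this sketch).
	bar.Skip(offset)

	vals := make([]parquet.ByteArray, 1)
	_, n, err := bar.ReadBatch(1, vals, nil, nil)
	if err != nil || n == 0 {
		return nil, fmt.Errorf("read failed: %w", err)
	}
	return vals[0], nil
}
```

Whether Skip actually avoids decoding the skipped pages, or just discards the decoded values, is exactly the kind of thing I could not find documented.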
Btw, doing the same thing with DuckDB works very well and is noticeably lighter on memory than the RecordReader approach, but including that library for this simple seek-and-extract use case is somewhat overkill and I would prefer to avoid it.
Thanks for any hints
Jochen Mehlhorn [email protected], Mercedes-Benz Tech Innovation GmbH
Provider Information
Component(s)
Go