
[Go][Parquet] Looking for Memory-friendly way to seek & extract data from parquet columns #38

Open
jo-me opened this issue Feb 19, 2024 · 0 comments


jo-me commented Feb 19, 2024

Describe the usage question you have. Please include as many useful details as possible.

Hi,

Being new to Apache Arrow, I'm a little confused about the different options for interacting with Parquet files. The documentation of the Go library is very sparse in many places, and existing examples from various sources don't seem to match my use case.

The question is:
Given a parquet file containing several thousand rows, each with an ID column and a Data column, where the Data column holds some larger blob, how do you seek to certain rows based on their ID and extract the corresponding Data values in an efficient and memory-friendly way?

By 'memory-friendly' I mean that only the relevant values should be read from the parquet file and loaded into memory, not a whole column, row group, batch or chunk. Reading the ID column completely into memory would be fine, but not the blob data.

I tried the variant of creating a pqarrow.RecordReader from a pqarrow.FileReader on top of a parquet file.Reader, but it seems that the record batches always load the whole batch (incl. all column data) into memory, rather than loading the value of a column entry only when it is accessed by index. While this approach works functionally, it has very high memory usage due to the large blobs.

I also tried extracting the relevant row indexes in a first sweep, in order to then retrieve only those rows from the Data column in a second sweep, but I could not find a way to do this that improved on the first approach.

There is probably a simple way (without using pqarrow?) of just iterating over the parquet file's row groups, but the usage of the available data structures (FieldReaders, chunks, etc.) is not really well documented.

Btw, doing the same thing with DuckDB works very well and is noticeably lighter on memory than the RecordReader approach, but including that library for this simple seek-and-extract use case is somewhat overkill, and I would prefer to avoid it.

Thanks for any hints

Jochen Mehlhorn [email protected], Mercedes-Benz Tech Innovation GmbH

Provider Information

Component(s)

Go

@assignUser assignUser transferred this issue from apache/arrow Aug 30, 2024