[ https://issues.apache.org/jira/browse/ARROW-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17662108#comment-17662108 ]
Rok Mihevc commented on ARROW-5086:
-----------------------------------

This issue has been migrated to [issue #21574|https://github.com/apache/arrow/issues/21574] on GitHub. Please see the [migration documentation|https://github.com/apache/arrow/issues/14542] for further details.

> [Python] Space leak in ParquetFile.read_row_group()
> ----------------------------------------------------
>
>                 Key: ARROW-5086
>                 URL: https://issues.apache.org/jira/browse/ARROW-5086
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.12.1
>            Reporter: Jakub Okoński
>            Assignee: Wes McKinney
>            Priority: Major
>              Labels: parquet, pull-request-available
>             Fix For: 0.15.0
>
>         Attachments: all.png, all.png
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I have a code pattern like this:
>
>     import pyarrow.parquet as pq
>
>     reader = pq.ParquetFile(path)
>     for ix in range(reader.num_row_groups):
>         table = reader.read_row_group(ix, columns=self._columns)
>         # operate on table
>
> But it leaks memory over time, only releasing it when the reader object is
> collected. Here is a workaround:
>
>     num_row_groups = pq.ParquetFile(path).num_row_groups
>     for ix in range(num_row_groups):
>         table = pq.ParquetFile(path).read_row_group(ix, columns=self._columns)
>         # operate on table
>
> This puts an upper bound on memory usage and is what I'd expect from the
> code. I also added a gc.collect() call at the end of every loop iteration.
>
> I charted memory usage for a small benchmark that copies a file one row
> group at a time, converting to pandas and back to Arrow on the write path.
> The black line is the first approach, using a single reader object; the
> blue line is the second, instantiating a fresh reader in every iteration.
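For reference, a minimal self-contained sketch of the workaround quoted above. The file name "example.parquet" and the column subset are hypothetical placeholders (the original snippet's path and self._columns come from the reporter's surrounding class); the point is that a fresh ParquetFile is opened per iteration so the memory held by each reader can be reclaimed:

    # Sketch of the workaround: one short-lived ParquetFile per row group.
    # "example.parquet" and the column list are hypothetical placeholders.
    import gc

    import pyarrow.parquet as pq

    path = "example.parquet"   # hypothetical input file
    columns = ["a", "b"]       # hypothetical column subset

    num_row_groups = pq.ParquetFile(path).num_row_groups
    for ix in range(num_row_groups):
        # Re-open the file so the previous reader (and its buffers)
        # becomes unreachable before the next row group is read.
        table = pq.ParquetFile(path).read_row_group(ix, columns=columns)
        # ... operate on table, e.g. table.to_pandas() ...
        del table
        gc.collect()  # the reporter also forced a collection each iteration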
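And a sketch of the copy benchmark the reporter describes, under the same assumptions (hypothetical "input.parquet"/"output.parquet" names): read one row group at a time with a fresh reader, round-trip through pandas, and append each group to a ParquetWriter:

    # Sketch of the benchmark: copy a Parquet file row group by row group,
    # converting to pandas and back to Arrow on the write path.
    import pyarrow as pa
    import pyarrow.parquet as pq

    src, dst = "input.parquet", "output.parquet"  # hypothetical file names

    num_row_groups = pq.ParquetFile(src).num_row_groups
    writer = None
    try:
        for ix in range(num_row_groups):
            # Fresh reader per iteration, as in the workaround above.
            table = pq.ParquetFile(src).read_row_group(ix)
            # pandas round trip on the write path.
            table = pa.Table.from_pandas(table.to_pandas())
            if writer is None:
                writer = pq.ParquetWriter(dst, table.schema)
            writer.write_table(table)
    finally:
        if writer is not None:
            writer.close()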