We've identified a memory leak when importing Parquet files into Pandas DataFrames using the PyArrow engine. The issue occurs specifically during the conversion from Arrow to Pandas objects, as memory is not released even after deleting the DataFrame and invoking garbage collection.
Key findings:
No leak with PyArrow alone: When using PyArrow to read Parquet without converting to Pandas (i.e., no .to_pandas()), the memory leak does not occur.
Leak with .to_pandas(): The memory leak appears during the conversion from Arrow to Pandas, suggesting the problem is tied to this process.
No issue with Fastparquet or Polars: Fastparquet, and Polars even when it reads via PyArrow, do not exhibit this memory issue, reinforcing that the problem lies in Pandas' handling of Arrow data.
Reproduction Code

# memory_test.py
import ctypes
import gc

import pandas as pd
import polars as pl
import psutil
import pyarrow.parquet

data_path = 'random_dataset.parquet'

# To manually trigger memory release
malloc_trim = ctypes.CDLL("libc.so.6").malloc_trim

for i in range(10):
    df = pd.read_parquet(data_path, engine="pyarrow")
    # Also tested with:
    # df = pyarrow.parquet.read_pandas(data_path).to_pandas()
    # df = pl.read_parquet(data_path, use_pyarrow=True)

    del df  # Explicitly delete DataFrame
    for _ in range(3):  # Force garbage collection multiple times
        gc.collect()

    memory_info = psutil.virtual_memory()
    print(f"\n\nIteration number: {i}")
    print(f"Total Memory: {memory_info.total / (1024 ** 3):.2f} GB")
    print(f"Memory at disposal: {memory_info.available / (1024 ** 3):.2f} GB")
    print(f"Memory Used: {memory_info.used / (1024 ** 3):.2f} GB")
    print(f"Percentage of memory used: {memory_info.percent}%")

    # Calling malloc_trim(0) is the only way we found to release the memory
    malloc_trim(0)
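To help narrow down where the retained memory lives, the Arrow memory pool can be inspected alongside psutil. The following is a minimal sketch (not part of our original test run) that reuses the same hypothetical random_dataset.parquet path: pa.total_allocated_bytes() reports what Arrow's default pool still holds, while the process RSS reflects everything the allocator has not returned to the OS.

# memory_pool_check.py -- sketch to separate Arrow-pool memory from process RSS
import gc

import pandas as pd
import psutil
import pyarrow as pa

data_path = 'random_dataset.parquet'  # same hypothetical file as above

def report(label):
    # Process-level RSS (what the OS sees) vs. bytes still tracked by Arrow's pool
    rss_gb = psutil.Process().memory_info().rss / (1024 ** 3)
    arrow_mb = pa.total_allocated_bytes() / (1024 ** 2)
    print(f"{label}: RSS = {rss_gb:.2f} GB, Arrow pool = {arrow_mb:.1f} MB")

report("before read")
df = pd.read_parquet(data_path, engine="pyarrow")
report("after read")

del df
gc.collect()
report("after del + gc")
# If the leak is outside Arrow's pool, RSS stays high here while the pool is near zero.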
Observations:
Garbage Collection: Despite invoking the garbage collector multiple times, memory allocated to the Python process keeps increasing when .to_pandas() is used, indicating improper memory release during the conversion.
Direct Use of PyArrow: When we import the data directly with PyArrow (without converting to Pandas), memory usage remains stable, showing that the problem originates in the Arrow-to-Pandas conversion process; a minimal sketch of this variant follows this list.
Manual Memory Release (ctypes): The only reliable way we have found to release the memory is by manually calling malloc_trim(0) via ctypes. However, we believe this is not a proper solution and that memory management should be handled internally by Pandas.
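For reference, this is roughly the PyArrow-only variant referred to above, which keeps the data as a pyarrow.Table and, per the findings above, does not show the growth. It is a sketch under the same assumptions as the main script (the hypothetical random_dataset.parquet path), not a separate measurement.

# pyarrow_only_test.py -- read without the .to_pandas() conversion
import gc

import psutil
import pyarrow.parquet as pq

data_path = 'random_dataset.parquet'

for i in range(10):
    table = pq.read_table(data_path)  # stays a pyarrow.Table; no pandas conversion
    del table
    gc.collect()
    used_gb = psutil.virtual_memory().used / (1024 ** 3)
    print(f"Iteration {i}: Memory Used: {used_gb:.2f} GB")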
OS environment
Icon name: computer-vm
Chassis: vm
Virtualization: microsoft
Operating System: Red Hat Enterprise Linux 8.10 (Ootpa)
CPE OS Name: cpe:/o:redhat:enterprise_linux:8::baseos
Kernel: Linux 4.18.0-553.16.1.el8_10.x86_64
Architecture: x86-64
Conclusion
The issue seems to occur during the conversion from Arrow to Pandas, rather than being a problem within PyArrow itself. Given that memory is only released by manually invoking malloc_trim(0), we suspect a problem in how memory is managed when the Arrow data is converted to Pandas. The issue does not arise when using the Fastparquet engine or when using Polars instead of Pandas, further indicating that it is specific to the Pandas-Arrow interaction.
We recommend investigating how memory is allocated and released during the conversion from Arrow objects to Pandas DataFrames to resolve this issue.
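As possible mitigations to experiment with (we have not verified these as fixes), PyArrow exposes conversion options and alternative memory pools that change how the intermediate Arrow buffers are allocated and released. The snippet below sketches those knobs with the same hypothetical dataset path.

# mitigation_sketch.py -- untested ideas to try, not a confirmed fix
import pyarrow as pa
import pyarrow.parquet as pq

data_path = 'random_dataset.parquet'

# Option 1: ask the conversion to release Arrow buffers eagerly.
# split_blocks and self_destruct are documented (experimental) to_pandas() options.
table = pq.read_table(data_path)
df = table.to_pandas(split_blocks=True, self_destruct=True)
del table  # self_destruct consumes the table; drop the reference as well

# Option 2: route Arrow allocations through the system allocator, which can change
# when freed memory is actually returned to the OS.
pa.set_memory_pool(pa.system_memory_pool())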
Please let us know if further details are needed, and we are happy to assist.
Contributors:
We would appreciate any feedback or insights from the maintainers and other contributors on how to improve memory management in this context.
Installed Versions
INSTALLED VERSIONS
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.10.14.final.0
python-bits : 64
OS : Linux
OS-release : 4.18.0-553.16.1.el8_10.x86_64
Version : #1 SMP Thu Aug 1 04:16:12 EDT 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 2.0.0
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 69.5.1
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.26.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : 2024.5.0
fsspec : 2024.6.1
gcsfs : None
matplotlib : 3.9.0
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 17.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
Component(s)
Parquet, Python