read parquet from s3 failing with 'GeoArrowEngine' has no attribute 'extract_filesystem' #250
Comments
Thanks @raybellwaves. I wonder if this is a duplicate of #241?
I'm not aware of any related changes in this release. Also, as Joris mentioned here (#241 (comment)), I would expect …
Cross-posting from the other thread (#241 (comment)): see dask-geopandas, dask_geopandas/io/parquet.py, lines 15 to 22 at commit d3e15d1.
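The referenced lines use a try/except import fallback. The sketch below (with simplified, illustrative names; the real code imports Dask's pyarrow-backed engine) shows why that pattern produces the `AttributeError` from the issue title rather than a clear `ImportError`:

```python
# Sketch of the import-fallback pattern: the pyarrow-dependent base
# class is imported inside try/except, and on ImportError the base
# silently degrades to `object`. The failure is forced here for the demo.
try:
    # In dask-geopandas this imports Dask's ArrowDatasetEngine,
    # which in turn imports pyarrow.
    raise ImportError("simulating an environment without pyarrow")
except ImportError:
    DaskArrowDatasetEngine = object  # silent fallback

class GeoArrowEngine(DaskArrowDatasetEngine):
    """Stub-like engine: inherits nothing useful from `object`."""

engine_has_method = hasattr(GeoArrowEngine, "extract_filesystem")
print(engine_has_method)  # False: hence the AttributeError at read time
```

So the engine class is importable and instantiable, and the problem only surfaces later, when `read_parquet` calls a method that was supposed to be inherited from the real base class.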
I think some environments include pyarrow by default, so you really need a clean environment to test this. A solution would be to raise an import error/warning when instantiating GeoArrowEngine if pyarrow was not properly imported. To reiterate, this fails:
this works:
We might want to do something similar to how geopandas checks whether pygeos is installed.
We have nightly tests that read GeoParquet files from our S3 buckets (using intake-geopandas). They started failing with the release of dask 2023.4.0 three days ago, cc @jrbourbeau.
I'll try to update this if I can find a GeoParquet file hosted on a public S3 bucket.
Create new environment:
mamba create -n test_env python=3.10 -y && conda activate test_env
Install dask-geopandas and s3fs:
pip install dask-geopandas s3fs
Open a (geo)parquet file:
See packages installed:
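A quick diagnostic sketch for the optional dependency at the heart of this issue: if pyarrow is reported missing in the environment above, GeoArrowEngine ends up as a stub and `read_parquet` fails as described.

```python
import importlib.util

def has_module(name):
    """Return True if `name` is importable in this environment."""
    return importlib.util.find_spec(name) is not None

# Check the stack used in the reproduction above.
for mod in ("dask_geopandas", "s3fs", "pyarrow"):
    print(mod, "installed" if has_module(mod) else "MISSING")
```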