pole_of_inaccessibility.py, run by the postprocessor service on new shapefile tables, peaks at around 500 MB of active memory when run against Australia's SA2 shapefile. Depending on the memory limits in a user's Docker setup, the postprocessor container can be killed for exceeding them even on that small shapefile. The Atlas should be able to handle much larger datasets than that.
The script currently uses geopandas to read from and write to PostGIS, and there are issues with geopandas which prevent using Pandas' chunksize to work with tables chunk by chunk. One alternative is to rewrite the script to use psycopg2 directly. One could then limit the read to only the primary key and geometry columns, and the write to only setting the pole of inaccessibility for each row, rather than reading and writing the entire table.