Some granules contain upwards of 140,000 feature records (e.g. SWOT_L2_HR_LakeSP_Prior_020_123_AR_20240825T010843_20240825T011234_PIC0_01.zip). The Lambda that submits the records to be inserted into DynamoDB times out after 10 minutes while unpacking the features from the shapefile and adding units and other filename-derived info to each record.
My guess is this is mostly due to the for loop here:
hydrocron/hydrocron/db/io/swot_shp.py, line 182 (commit 7396ac1)
Because this happens during granule unpacking, none of the features get loaded before the timeout.
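For context, here is a minimal sketch of the kind of per-feature loop I mean. This is an assumption about the rough shape of the code around line 182, not the actual hydrocron implementation; the helper names, filename parsing indices, and `units` dict below are hypothetical.

```python
# Hypothetical sketch only -- not the actual code at swot_shp.py line 182.
from datetime import datetime, timezone


def parse_filename_attrs(filename: str) -> dict:
    """Hypothetical helper: pull cycle/pass/continent IDs out of the granule name."""
    parts = filename.removesuffix(".zip").split("_")
    return {
        "cycle_id": parts[5],        # e.g. "020"
        "pass_id": parts[6],         # e.g. "123"
        "continent_id": parts[7],    # e.g. "AR"
        "ingest_time": datetime.now(timezone.utc).isoformat(),
    }


def build_items(features: list[dict], filename: str, units: dict) -> list[dict]:
    """Per-record Python loop assumed to dominate runtime at ~140k features."""
    file_attrs = parse_filename_attrs(filename)
    items = []
    for feature in features:                 # one pass over every feature record
        item = dict(feature["properties"])   # copy all shapefile attributes
        item.update(file_attrs)              # attach filename-derived metadata
        for field, unit in units.items():    # annotate each field with its units
            if field in item:
                item[f"{field}_units"] = unit
        items.append(item)
    return items
```

If a loop like this really is the bottleneck, the usual options would be to chunk the features and fan the chunks out across multiple Lambda invocations, or to move the heavy per-record work out of the unpacking step so at least partial loads succeed before the timeout.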