add support for velocity mosaics from granule stacks without a common reference date #57

Open
wants to merge 8 commits into base: develop
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -6,6 +6,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [PEP 440](https://www.python.org/dev/peps/pep-0440/)
and uses [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.4.1]
### Added
* Support for generating velocity mosaics from granule stacks without a common reference date

## [0.4.0]
### Added
* Ability to update the reference date for OPERA DISP granule xarray objects
20 changes: 18 additions & 2 deletions src/opera_disp_tms/generate_sw_vel_tile.py
@@ -11,7 +11,7 @@

from opera_disp_tms import generate_sw_disp_tile as sw_disp
from opera_disp_tms.search import Granule
-from opera_disp_tms.utils import create_buffered_bbox, create_tile_name, get_raster_as_numpy
+from opera_disp_tms.utils import create_buffered_bbox, create_tile_name, get_raster_as_numpy, within_one_day


gdal.UseExceptions()
@@ -79,6 +79,20 @@ def parallel_linear_regression(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    return slope_array


def align_to_common_reference_date(granule_xrs: list[xr.DataArray]) -> None:
    """Update each granule in-place so all are relative to the first granule's reference date.

    Assumes the granules are sorted by secondary date and form a connected chain, i.e. each
    new reference date matches the secondary date of the granule immediately before it.
    """
    previous_granule_xr = granule_xrs[0]
    correction = np.zeros(previous_granule_xr.shape)

    for granule_xr in granule_xrs:
        if not within_one_day(granule_xr.attrs['reference_date'], previous_granule_xr.attrs['reference_date']):
            # The reference date has moved forward; the previous granule has already been
            # aligned, so its data is the cumulative displacement to add from here on.
            correction = previous_granule_xr.data
        granule_xr += correction
        previous_granule_xr = granule_xr

    for granule_xr in granule_xrs:
        granule_xr.attrs['reference_date'] = granule_xrs[0].attrs['reference_date']


def add_velocity_data_to_array(
granules: Iterable[Granule],
geotransform: Affine,
@@ -97,8 +111,10 @@ def add_velocity_data_to_array(
    Returns:
        np.ndarray: The updated array
    """
+    sorted_granules = sorted(granules, key=lambda g: g.secondary_date)
    bbox = create_buffered_bbox(geotransform.to_gdal(), frame_map_array.shape, 90) # EPSG:3857 is in meters
-    granule_xrs = [sw_disp.load_sw_disp_granule(x, bbox) for x in granules]
+    granule_xrs = [sw_disp.load_sw_disp_granule(x, bbox) for x in sorted_granules]
+    align_to_common_reference_date(granule_xrs)
forrestfwilliams (Contributor) commented on Dec 16, 2024:
When this function is called we're not currently guaranteed to have all granules between two dates. For example, if we're trying to get velocity between date1 and date5, there might be granules like:

date1_date2, date1_date3, date3_date4, date4_date5

but calling find_needed_granules here with the minmax strategy would only load date1_date3 and date4_date5.

Two possible ways to accomplish this:

  1. Always download all granules between two dates, even if some granules will not be needed
  2. Dynamically find the set of granules needed to reconstruct the full deformation history and load them so they can be used in align_to_common_reference_date (a sketch of this option follows below)
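
A rough sketch of that second option (assumptions: `select_connected_granules` is a hypothetical helper, `Granule` exposes `reference_date` and `secondary_date` attributes like the xarray attrs do, and dates are matched exactly rather than within the one-day tolerance used elsewhere):

```python
from datetime import datetime

from opera_disp_tms.search import Granule


def select_connected_granules(granules: list[Granule], start: datetime, end: datetime) -> list[Granule]:
    """Keep every granule whose reference date is reachable from `start`, chaining forward."""
    reachable = {start}  # dates reachable from the initial reference date
    needed = []
    for granule in sorted(granules, key=lambda g: g.secondary_date):
        if granule.reference_date in reachable and granule.secondary_date <= end:
            needed.append(granule)
            # Each kept granule makes its secondary date reachable in turn,
            # reconstructing the full deformation history as a connected chain.
            reachable.add(granule.secondary_date)
    return needed
```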

asjohnston-asf (Member, Author) commented on Dec 16, 2024:
The two core use cases I see emerging are:

For a particular frame...

  • ...give me a single data band representing net/cumulative displacement from Date A to Date B. This satisfies the needs of the cumulative displacement and secant velocity visualizations.
  • ...give me a time series of data bands, each representing net/cumulative displacement from a common reference date, for all available time steps from Date A to Date B. This satisfies the needs of the velocity-via-linear-regression visualization.

I suspect these are also the data-preparation steps that users will need for most analyses of this data set. I'm somewhat eager to revisit find_needed_granules, update_reference_date, and align_to_common_reference_date to provide a better interface for those two more generic use cases.

asjohnston-asf (Member, Author) added:
I suppose the latter time-series approach satisfies all use cases, but when you only care about total change the former approach would be faster and require less data access.
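
For illustration, a minimal sketch of what that more generic interface could look like (hypothetical names and signatures, not the module's current API; the time-series body is elided since the granule-selection strategy is still being discussed):

```python
from datetime import datetime

import xarray as xr


def get_displacement_time_series(frame_id: int, start: datetime, end: datetime) -> xr.DataArray:
    """Use case 2: one band per time step, all relative to a common reference date.

    Would select the connected chain of granules, load them, run
    align_to_common_reference_date, and concatenate along 'time'.
    """
    ...


def get_net_displacement(frame_id: int, start: datetime, end: datetime) -> xr.DataArray:
    """Use case 1: a single band of net displacement from start to end.

    Expressible as the last step of the time series, though loading only the
    granules that span the interval would be faster and touch less data.
    """
    return get_displacement_time_series(frame_id, start, end).isel(time=-1)
```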

    cube = xr.concat(granule_xrs, dim='years_since_start')

    years_since_start = get_years_since_start([g.attrs['secondary_date'] for g in granule_xrs])
23 changes: 23 additions & 0 deletions tests/test_generate_sw_vel_tile.py
@@ -1,6 +1,7 @@
from datetime import datetime

import numpy as np
import xarray as xr

from opera_disp_tms import generate_sw_vel_tile as sw_vel

@@ -54,3 +55,25 @@ def test_parallel_linear_regression():
    x = np.arange(3, dtype='float64')
    slope = sw_vel.parallel_linear_regression(x, y)
    assert np.all(np.isclose(slope, 1.0, atol=1e-6))


def test_align_to_common_reference_date():
    xrs = [
        xr.DataArray(data=[1., 0.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 2)}),
        xr.DataArray(data=[5., 0.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 3)}),
        xr.DataArray(data=[-1., 0.], attrs={'reference_date': datetime(1, 1, 3), 'secondary_date': datetime(1, 1, 4)}),
        xr.DataArray(data=[7., 0.], attrs={'reference_date': datetime(1, 1, 3), 'secondary_date': datetime(1, 1, 5)}),
        xr.DataArray(data=[-3., 1.], attrs={'reference_date': datetime(1, 1, 5), 'secondary_date': datetime(1, 1, 6)}),
    ]

    # Corrections accumulate through the chain: granules referenced to Jan 3 gain the
    # displacement already measured through Jan 3 ([5., 0.]), and the granule referenced
    # to Jan 5 gains the corrected displacement through Jan 5 ([12., 0.]).
    expected = [
        xr.DataArray(data=[1., 0.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 2)}),
        xr.DataArray(data=[5., 0.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 3)}),
        xr.DataArray(data=[4., 0.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 4)}),
        xr.DataArray(data=[12., 0.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 5)}),
        xr.DataArray(data=[9., 1.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 6)}),
    ]

    sw_vel.align_to_common_reference_date(xrs)

    assert all(a.identical(b) for a, b in zip(xrs, expected))
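
A possible companion case, sketched here rather than taken from the PR: a stack that already shares a common reference date should pass through unchanged.

```python
def test_align_to_common_reference_date_noop():
    xrs = [
        xr.DataArray(data=[1., 0.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 2)}),
        xr.DataArray(data=[5., 2.], attrs={'reference_date': datetime(1, 1, 1), 'secondary_date': datetime(1, 1, 3)}),
    ]
    expected = [x.copy(deep=True) for x in xrs]

    sw_vel.align_to_common_reference_date(xrs)

    # With a single reference date the zero correction never changes, so data and attrs are untouched.
    assert all(a.identical(b) for a, b in zip(xrs, expected))
```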