Commit
* fix deleted method
* format
* format
* deprecated scipy
* add None check
* update Dataset
* add registers for collate
* minor formatting
* remove model tensor reshaping
* rename args
* network name changes
* name changes
* add keyword arg
* move normalization method
* update tests
* convert to tensors
* update README
* format
* format
* format
* format
* format
* use nn.
* format
* format
* format
* format
* format
* update loss
* fix dataset test
* move to modules
* pin changes
* stash
* run train session
* fix train data storage
* add enum
* change default transforms to None
* change time series start/end get method
* update script flow
* add tower
* add worker check
* implement
* start on predict data
* fix inference steps
* replace deprecated
* fix predict window
* cleanup
* set defaults
* comment
* comment
* changes
* add foj loss
* add foj loss
* fix dropout
* cleanup params
* cleanup transfer
* cleanup args
* fix deep supervision
* fix deep supervision
* need to update the spatial gpkg
* update dependencies
* add 0 check
* add multipolygon method
* add pool_first arg
* add pool_first arg
* add pool_first arg
* add pool_first arg
* add pool_first arg
* remove args
* script changes
* add optional StdConv
* add pyogrio
* replace gw polygon_to_array
* add CLI arg for unknown labels
* make read method
* update progress bars
* fix augmentations
* fix augmentations
* fix augmentation bdist
* add args
* add args
* transfer updates
* move methods
* move scale
* pass conv args
* move methods
* move data methods
* add module import
* modify CLI defaults
* update create methods
* replace tqdm with rich
* add batch file store
* remove old code
* support weak supervision
* fix tests
* test mask-rcnn
* stash
* change conv args
* change conv args
* test zoom and support score masking
* modify tower unet
* change arg name
* cleanup unused and add order option
* cleanup unused and add order option
* change CLI
* update CLI args
* update test
* change module names
* stash
* stash
* add temp exception
* reconfigure resconv
* remove files
* modify transfer model
* fix mask scoring
* use transfer name
* separate spatial splits
* add missing method
* transfer loss
* all_touched=True
* add data methods
* add edge check
* add transfer arg
* finetune
* re-test params
* get utm zone from lat/lon
* change time scaler param
* pass activation param
* experimental final layers
* cleanup refine
* start/end
* enums
* enums
* topology loss
* cleanup
* test attention
* vit
* residual in residual
* attention
* remove CLI option
* predict start/end
* temp test
* libraries
* reconfigure augmentations
* random rng support
* simplify augmenter
* add data check for geometry collection
* augment probability handling
* force new spatial data for transfer model
* formatting
* test new loss
* move modules
* move modules
* move modules
* profiler
* module imports
* docs: update README (#75)
* feat: transfer ltae v2 (#76)
* update README
* remove import
* pass all_touched arg
* add lon/lat to data batch
* pass persistent workers arg
* 👕 formatting
* pass lon/lat coordinates
* cleanup loss
* rename network names
* remove ViT
* try/except import natten
* 👕 formatting
* pass all_touched from CLI
* add all_touched CLI arg
* fix: torch20 tests (#77)
* 🔨 modify CI workflow
* ⚡ remove install line
* ⚡ update augmentation test names
* 📝 update torch version in README
* 🔨 modify CI workflow
* 🔨 setup lib requirements
* update augmentation tests
* ⚡ update data tests
* ⚡ update data tests
* ⚡ better einops characters
* 👕 formatting
* ➕ raise minimum torchmetrics version
* 🎨 formatting
* fix: remove instances of torch_geometric (#78)
* fix: test batch save step (#80)
* docs: torch20 docs (#79)
* 📝 update README
* 📝 add sub-md
* 📝 add install line
* 📝 remove line
* 📝 better comments
* 📝 add README links
* fix: torch20 cleanup (#81)
* 🗑️ remove redundant method
* 🎨 formatting
* 🔥 py3 super() and remove unused methods
* ✅ add tests for transformer
* 🎨 formatting
* ✅ add tests for tower unet
* 🎨 formatting
* 🔥 remove FOJ
* 🔥 py3 super() and remove unused methods
* 🎨 formatting
* 🔥 py3 super() and remove unused methods
* fix: add fiona dependency (#82)
* fix: use cpu device (#83)
* 🐛 add supported attention
* 🐛 revert attention
* 🐛✅ attention tests
* 🙈 hide test for now
* fix: torch20 train test (#84)
* ✏️ change variable name
* 🎨 formatting
* ✅ test other weights
* 🙈 block train CLI test
* fix: torch20 network c (#85)
* 🔥 remove network c module
* 🎨 formatting
* torch20 transfer (#86)
* 🎨 formatting
* 🔥 remove gain from dask storage
* 🎨 formatting
* 🎨 formatting
* ⚡️ change CLI defaults
* ✨ new loss enum
* ⚡️ relative import
* ✅ update tests
* 🔒️ upgrade setuptools
* fix: torch20 ex (#87)
* ⚡️ sum weights
* ⚡️ move sum
* ✅ update loss tests
* 🎨 formatting
* ⚡️ use common method
* 🎨 formatting
* ➖ make kornia optional
* ➕ increase dependency versions
* ➖ cleanup imports
* fix: ➖ remove boundary loss (#88)
* ➖ remove topology loss
* 📝 add comments for prior change
* ⚡️ change default loss
* 🎨
* ⚡️ add CLI data-pattern; remove args
* 🎨
* 🔥 cleanup unused methods
* 📝 new relative imports
* ⚡️ change defaults
* 🎨
* 🎨
* ⚡️ import loss
* 🎨
* 🎨
* 🎨
* 🎨
* ✅ tests
* 🎨
* 🎨
* 📝 change var name
* 📝 change var name
* change to class
* ⚡️ predict callback return 3 bands
* 🔥 remove unused import
* 🎨
* ⚡️ relative loss imports
* ⚡️ update enums
* 🔥 remove arg in predict method
* 🔥 remove temporal transformer
* 🎨
* ⚡️ update main model network
* ⚡️ ensure grad
* 🎨
* 🎨
* 🎨
* 🔥 remove file
* ✨ new relative import
* 🔥 remove imports
* 🎨
* 🔥 remove file
* ⚡️ tests
* ⚡️ scripts
* 🔥 remove keyword arg
* ⚡️ add keyword arg
* ⚡️ comment
* 🎨
* 🎨
* 🎨
* 🎨
* 🔥 remove code
* 🎨
* 🔥 remove imports
* 🚑️ comment
* 🎨
* 🔥 remove unused modules
* 🎨
* 🎨
* 🐛 change default classes
* 🔥 remove arg
* 📝
* ⚡️ imports
* 🐛 better check
* ⚡️ change default values
* ⚡️ rearrange convolutions
* 🎨
* ⚡️ simplify pre
* 🔥 remove unused losses
* 🔥 remove unused module
* 📝 static version

---------

Co-authored-by: Ubuntu <[email protected]>