merge changes from deepspeed master (#24)
* [WarmupDecayLR] fix log(0) & 1/log(1) bugs (deepspeedai#772) (see the warmup guard sketch after this changelog)
  * fix log(0) & 1/log(1) bugs
  * simplify
  Co-authored-by: Jeff Rasley <[email protected]>
  Co-authored-by: Reza Yazdani <[email protected]>
  Co-authored-by: Cheng Li <[email protected]>
* bump to v0.3.12
* Bug fix: Remove client optimizer param_group list item that does not have 'params' (deepspeedai#827)
  Co-authored-by: Jeff Rasley <[email protected]>
* [doc] pipeline doc typos/improvements (deepspeedai#659)
  Admin merging for pure-doc PR that does not trigger a build.
* Samyamr/inference hook fix (deepspeedai#851)
  * Fix mis-aligned grad: when a parameter is not divisible by the world size, the partitioned gradients are mis-aligned due to incorrect padding handling; this PR fixes that. (See the padding sketch after this changelog.)
  * Formatting fix
  * Adding static_scale test back for Z3, and also changing hidden size to be not divisible by world_size
  * also removing alignment from flat fp16 buffers
  * Testing for hidden dim alignment
  * inference hook fix
  * Update stage3.py
  * formatting
  * [bug-fix] move params to gpu if offload params is turned off
  Co-authored-by: Samyam Rajbhandari <[email protected]>
  Co-authored-by: Shaden Smith <[email protected]>
  Co-authored-by: Jeff Rasley <[email protected]>
* ZeRO Stage 2: Clear reduced gradients (deepspeedai#856)
  * Ensure gradients of other partitions are cleared after reduction
  * Remove redundant code
  Co-authored-by: Jeff Rasley <[email protected]>
* [runner/launch] propagate the error (deepspeedai#854)
  Co-authored-by: Jeff Rasley <[email protected]>
* docs: minor spelling tweaks (deepspeedai#858)
* Allow args to be optional in deepspeed.initialize (deepspeedai#825) (see the deepspeed.initialize sketch after this changelog)
* Fix ZeRO3 save_checkpoint (deepspeedai#857)
  Co-authored-by: Jeff Rasley <[email protected]>
* Make config objects json serializable (deepspeedai#862)
  Co-authored-by: Jeff Rasley <[email protected]>
* bump version 0.3.13
* 1-bit Adam v2 (deepspeedai#817)
  Authors: @awan-10 @conglongli @samyam @jeffra
  What's new: an NCCL-based implementation that provides better performance and usability than the MPI-based implementation; support for momentum masks for parameters with constant zero gradients during training; bug fixes (e.g., deepspeedai#813). (A config sketch follows this changelog.)
  * NCCL-based 1-bit Adam + Code Refactor for Comm. Backends (deepspeedai#594)
  * NCCL-based 1-bit implementation + refactor to add communication backends (deepspeedai#593)
  * add nccl 1-bit optim.
  * temporary commit to save stuff.
  * Use dist collectives instead of mpi routines.
  * remove old code for comm.
  * Fix bugs. still does not work.
  * modify to test the nccl side code path
  * Initial gather impl. Works intra-node.
  * Updates to comm. phase 2. nccl comm. passed the tests.
  * refactor code to introduce nccl/mpi as backends for onebit adam.
  * Refactor updates to test/engine.
  * Fix compile/runtime errors.
  * simplify support for nccl/mpi backends.
  * Add missing file
  * Add compression backend in constructor. Revert later.
  * modify test with some perf counting.
  * Implement a true non-blocking gather for nccl side.
  * Revert "Add compression backend in constructor. Revert later." This reverts commit df8c40d.
  * improve the 1-bit adam test.
  * Refactor comm. and compression backend in 1-bit adam.
  * Fix the test.
  * Fix runtime errors and typos in nccl backend
  * fix mpi backend. modify tests.
  * modify nccl perf test.
  * fix mpi side errors.
  * Add an mpi perf test
  * Sync DSE.
  * Remove old collectives file.
  * Undo a typo.
  * Graceful failure for torch versions that don't support nccl pt2pt.
  * Revert "Merge branch 'master' into staging-1bit-nccl-v2". This reverts commit 7840085, reversing changes made to a6dba72.
  * Revert "Revert "Merge branch 'master' into staging-1bit-nccl-v2"". This reverts commit 6dbdd98.
  * comm optimization + 1-bit lamb
  * Saving/debugging commit.
  * finalizing 1-bit lamb
  * finalizing 1-bit lamb
  * add momentum mask and checkpoint handling for 1-bit adam
  * Cleanup and modify nccl test to be runnable with the deepspeed launcher.
  * Fix format.
  * fix formatting again.
  * make test runnable without mpi4py
  * Add dist.alltoall and dist.allgather instead of custom functions.
  * remove debug prints.
  * formatting and renaming
  * renaming
  * renaming
  * add unit test, fix existing tests
  * skip unit test when torch < 1.8
  * revert 1-bit lamb
  * flatten momentum when dimension is more than 1
  * add warning message for 1-bit adam under fp32
  * improve version check
  * add fp32 test
  * 1-bit adam doc
  * fix file name
  * doc fix
  * torch 1.8 is released
  * doc fix
  * fix tests
  * update news
  * add doc for momentum mask
  * fix checkpoint handling, add unit test
  * checkpoint handling doc
  * doc final cleanup
  * bump dates
  * update tests
  * url change
  * doc fix
  * fix test
  * doc update
  Co-authored-by: Ammar Ahmad Awan <[email protected]>
  Co-authored-by: Jeff Rasley <[email protected]>
* consistent checkpoint filenaming (deepspeedai#865)
  * consistent checkpoint filenaming
  * backward compatible rename
  Co-authored-by: Olatunji Ruwase <[email protected]>
* [doc] launcher (deepspeedai#868)
  As discussed in deepspeedai#662, this PR modifies the doc:
  * explains what to use instead of CUDA_VISIBLE_DEVICES
  * puts the `--hostfile` cl arg in the correct place in the invocation script
  Fixes: deepspeedai#662
  Co-authored-by: Jeff Rasley <[email protected]>
* [doc] pipeline (deepspeedai#888)
  As @g-karthik flagged in deepspeedai#659 (comment), my previous correction PR had one sentence that said the wrong thing, so this PR attempts to rectify that. Thank you!
  * tweak
* [debug utils] see_memory_usage fixes (deepspeedai#890)
  * see_memory_usage fixes
  * didn't expect pt-1.2
  * fix the order of things
  * fix the order of things
* full fp32 weights reconstruction for zero 2+3 (deepspeedai#892) (see the reconstruction sketch after this changelog)
* save_fp16_model consolidated for zero3 (deepspeedai#893)
  Co-authored-by: Olatunji Ruwase <[email protected]>
* Fix zero stage2 cpu_offload when some model trainable parameters skipped in training (deepspeedai#861)
  As in deepspeedai#707: because some trainable parameters are skipped in training, their backward hooks registered in self.create_reduce_and_remove_grad_hooks() will not run, so they have no norm_for_param_grads.
  * Trim space
  * Trim space
  Co-authored-by: Olatunji Ruwase <[email protected]>
* update kramdown (deepspeedai#901)
  Security alert related to an older kramdown version.
* update backward api doc (deepspeedai#903)
* Bump kramdown from 2.3.0 to 2.3.1 in /docs (deepspeedai#905)
  Bumps [kramdown](https://github.com/gettalong/kramdown) from 2.3.0 to 2.3.1.
  - [Release notes](https://github.com/gettalong/kramdown/releases)
  - [Changelog](https://github.com/gettalong/kramdown/blob/master/doc/news.page)
  - [Commits](https://github.com/gettalong/kramdown/commits)
  Signed-off-by: dependabot[bot] <[email protected]>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
  Co-authored-by: Jeff Rasley <[email protected]>
* We're hiring! + integration posts
  * [website] We're hiring! + integration posts
  * [website] we're hiring!
* zero.Init() clarification (deepspeedai#880) (see the zero.Init() sketch after this changelog)
  Clarify that if `model.half()` can't fit into GPU memory, `zero.Init()` is a must. This proposal is via @samyam's clarification shared elsewhere. Thank you.
  * style
  * add clarity
  * style
  Co-authored-by: Olatunji Ruwase <[email protected]>
* disable pipe test (deepspeedai#915)
  This test has been giving us trouble for a bit: we are seeing nondeterministic failures, so we are skipping it for now to not break our CI. Need to revisit soon though.
* Add link to AML examples. (deepspeedai#916)
  Co-authored-by: Jeff Rasley <[email protected]>

Co-authored-by: Stas Bekman <[email protected]>
Co-authored-by: Jeff Rasley <[email protected]>
Co-authored-by: Reza Yazdani <[email protected]>
Co-authored-by: Cheng Li <[email protected]>
Co-authored-by: Samyam Rajbhandari <[email protected]>
Co-authored-by: Shaden Smith <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: brett koonce <[email protected]>
Co-authored-by: Conglong Li <[email protected]>
Co-authored-by: Ammar Ahmad Awan <[email protected]>
Co-authored-by: hamlet <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: sid <[email protected]>
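Warmup guard sketch. The WarmupDecayLR entry fixes a log(0) / 1/log(1) failure mode. This is an illustrative sketch of that class of guard, not the actual DeepSpeed scheduler code; the `warmup_factor` name is hypothetical.

```python
import math

def warmup_factor(step: int, warmup_num_steps: int) -> float:
    """Log-shaped warmup multiplier in [0, 1], guarding the degenerate cases."""
    # Clamp to >= 2 so math.log(warmup_num_steps) is never 0 (avoids 1/log(1)).
    warmup_num_steps = max(2, warmup_num_steps)
    # Clamp the step to [1, warmup_num_steps] so we never take log(0).
    step = min(max(step, 1), warmup_num_steps)
    return math.log(step) / math.log(warmup_num_steps)

print(warmup_factor(0, 0))    # 0.0 instead of a math domain / division error
print(warmup_factor(10, 10))  # 1.0 once warmup completes
```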
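Padding sketch. The mis-aligned-grad fix concerns padding when a parameter's element count is not divisible by the world size. A rough illustration of the alignment requirement, as a general technique rather than DeepSpeed's stage-2/3 internals; `pad_to_world_size` is a hypothetical helper.

```python
import torch

def pad_to_world_size(flat: torch.Tensor, world_size: int) -> torch.Tensor:
    """Pad a flattened gradient so every rank owns an equally sized partition."""
    remainder = flat.numel() % world_size
    if remainder == 0:
        return flat
    pad = torch.zeros(world_size - remainder, dtype=flat.dtype, device=flat.device)
    return torch.cat([flat, pad])

# 10 elements across 4 ranks -> padded to 12, so each partition holds exactly 3
# elements and no rank's slice is mis-aligned against the real/padding boundary.
grads = torch.randn(10)
padded = pad_to_world_size(grads, world_size=4)
assert padded.numel() == 12
```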
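deepspeed.initialize sketch. With deepspeedai#825, an argparse `args` object no longer has to be passed to `deepspeed.initialize`. A minimal keyword-only sketch; the `config_params` keyword and the toy config values are assumptions about the API around this release (newer versions also accept `config`).

```python
import deepspeed
import torch

model = torch.nn.Linear(16, 16)

# No `args` object: everything is supplied via keyword arguments.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config_params={
        "train_batch_size": 8,
        "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    },
)
```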
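1-bit Adam config sketch. For the 1-bit Adam v2 entry, this is a hedged sketch of selecting the new NCCL communication backend in a DeepSpeed config expressed as a Python dict; the field names (`freeze_step`, `cuda_aware`, `comm_backend_name`) follow the 1-bit Adam tutorial and are assumptions for this snapshot of the code.

```python
# DeepSpeed config as a Python dict (can be passed via config_params above).
onebit_adam_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},  # 1-bit Adam warns when run under fp32
    "optimizer": {
        "type": "OneBitAdam",
        "params": {
            "lr": 1e-4,
            "freeze_step": 400,           # uncompressed warmup steps before 1-bit compression starts
            "cuda_aware": False,
            "comm_backend_name": "nccl",  # new NCCL backend instead of the MPI one
        },
    },
}
```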
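Reconstruction sketch. The full fp32 weights reconstruction for ZeRO 2+3 can be applied offline to a saved checkpoint. This sketch assumes the helper name that later DeepSpeed releases expose for this purpose; treat the import path and the checkpoint path as assumptions.

```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

# Consolidate the partitioned ZeRO-2/3 checkpoint into full fp32 weights.
state_dict = get_fp32_state_dict_from_zero_checkpoint("path/to/checkpoint_dir")

# The consolidated weights can then be loaded into the un-sharded model:
# model.load_state_dict(state_dict)
```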
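zero.Init() sketch. The zero.Init() clarification says that if even `model.half()` cannot fit on one GPU, the model must be constructed inside `deepspeed.zero.Init()` so parameters are partitioned as they are created. A minimal sketch under that assumption, with a toy model standing in for a real one:

```python
import deepspeed
import torch

# Parameters are partitioned across ranks at construction time, so no single
# GPU ever has to hold the full half-precision copy of the model.
with deepspeed.zero.Init():
    model = torch.nn.Sequential(
        torch.nn.Linear(8192, 8192),
        torch.nn.ReLU(),
        torch.nn.Linear(8192, 8192),
    )
```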