Forward-merge branch-21.12 to branch-22.02 [skip gpuci] #9664

Merged: ajschmidt8 merged 36 commits into rapidsai:branch-22.02 from robertmaynard:branch-22.02-merge-21.12 on Nov 11, 2021
Conversation
`segmented_gather()` currently assumes that null LIST rows also have a size of `0` (as defined by the difference of adjacent offsets). This might not hold, for example, for LIST columns that are members of STRUCT columns whose parent null masks are superimposed on their children: a non-empty list row can be marked null without compaction. This leads to errors when fetching elements of a list row, as seen in NVIDIA/spark-rapids/pull/3770. This commit adds handling of uncompacted LIST rows in `segmented_gather()`. Authors: - MithunR (https://github.com/mythrocks) Approvers: - Conor Hoekstra (https://github.com/codereport) - Nghia Truong (https://github.com/ttnghia) - David Wendt (https://github.com/davidwendt) URL: rapidsai#9537
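For context, the list `take` accessor in Python cudf is (to my understanding) the user-facing path into `segmented_gather()`; a minimal sketch of that entry point is below. The struct-with-superimposed-nulls corner case fixed here is constructed inside libcudf, so a plain lists column is used purely for illustration.

```python
import cudf

# A lists column and per-row gather indices (illustrative sketch only;
# the uncompacted-null-row case from this commit arises inside libcudf).
s = cudf.Series([[10, 20, 30], [40, 50], None])
idx = cudf.Series([[0, 2], [1], []])

# Gathers the requested elements from each list row; null rows stay null.
print(s.list.take(idx))
```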
Depends on NVIDIA/spark-rapids#4019. Authors: - Rong Ou (https://github.com/rongou) Approvers: - Jason Lowe (https://github.com/jlowe) URL: rapidsai#9607
We currently have a data-corruption issue with brotli on CUDA 11.5: rapidsai#9546. To keep this issue from blocking other developers, this PR adds an 11.5-specific xfail. Authors: - GALI PREM SAGAR (https://github.com/galipremsagar) Approvers: - https://github.com/brandon-b-miller URL: rapidsai#9612
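A minimal sketch of what a CUDA-version-gated xfail can look like; the version-detection helper and the test body below are assumptions for illustration, not the exact code added by the PR.

```python
import pytest


def _cuda_runtime_version():
    # Assumed helper: detect the CUDA runtime version via numba.
    import numba.cuda
    return numba.cuda.runtime.get_version()  # e.g. (11, 5)


@pytest.mark.xfail(
    condition=_cuda_runtime_version() == (11, 5),
    reason="brotli-compressed Parquet is corrupted on CUDA 11.5 (rapidsai#9546)",
)
def test_parquet_brotli_roundtrip(tmpdir):
    import pandas as pd
    import cudf

    pdf = pd.DataFrame({"a": [1, 2, 3]})
    path = str(tmpdir.join("brotli.parquet"))
    pdf.to_parquet(path, compression="brotli")

    got = cudf.read_parquet(path)
    assert got.to_pandas().equals(pdf)
```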
…ckHostColumnVectors (rapidsai#9596) Fixes rapidsai#9595 Currently, JCudfSerialization.unpackHostColumnVectors builds child columns as HostColumnVector. This may lead to a "Close called too many times" error on HostColumnVector if the close methods of both the child columns and the parent are called. This PR replaces HostColumnVector with HostColumnVectorCore for child (non-top-level) columns to avoid the potential problem described above. Authors: - Alfred Xu (https://github.com/sperlingxx) Approvers: - Jason Lowe (https://github.com/jlowe) URL: rapidsai#9596
This PR adds a CUDA 11.5 dev yml file to the cudf repo. Authors: - GALI PREM SAGAR (https://github.com/galipremsagar) Approvers: - AJ Schmidt (https://github.com/ajschmidt8) URL: rapidsai#9617
This adds start/end ranges to the cuDF JNI. These ranges can be started and stopped in different threads, and their creation and stopping do not need to follow the ordering that push/pop ranges require (when a start/end range is closed, that specific range is closed, not whichever range was started last). This adds a boolean and a long to the `NvtxRange` class. It may be better to create a `NvtxRangeStartEnd`, and even expose some helper factory-like methods in `NvtxRange`; looking for ideas on whether the current implementation is preferred, or whether we want to break it off into a different class. Authors: - Alessandro Bellina (https://github.com/abellina) Approvers: - Jason Lowe (https://github.com/jlowe) URL: rapidsai#9563
Resolves: rapidsai#7123 This PR adds common dtype casting as requested in rapidsai#7123 (comment). Authors: - GALI PREM SAGAR (https://github.com/galipremsagar) Approvers: - https://github.com/brandon-b-miller - Michael Wang (https://github.com/isVoid) URL: rapidsai#9585
Fixes: rapidsai#9387 This PR fixes `usecols` parameter usage in `dask_cudf.read_csv`. When the csv is read in byte ranges, the csv reader has to be passed the complete list of column names via the `names` parameter, but `usecols` should still be passed so that only the requested columns are returned. Authors: - GALI PREM SAGAR (https://github.com/galipremsagar) Approvers: - Charles Blackmon-Luca (https://github.com/charlesbluca) URL: rapidsai#9618
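A small usage sketch of the fixed behavior; the file names and column names below are hypothetical.

```python
import dask_cudf

# Hypothetical CSV files with columns a, b, c, d; only a and c are requested.
# Even when the read is split into byte ranges across partitions, the result
# should contain exactly the requested columns.
ddf = dask_cudf.read_csv("data-*.csv", usecols=["a", "c"])
print(ddf.columns)   # expected: Index(['a', 'c'], dtype='object')
print(ddf.head())
```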
Depends on rapidsai#9518 Reference rapidsai#9518 (comment) Add support for inclusive scan of duration types with SUM aggregation. Also refactor the `scan_tests.cpp` to put rank tests into `rank_tests.cpp`, common utilities into `scan_tests.hpp`, and move the actual unit test logic out of `cudf::test` namespace. Authors: - David Wendt (https://github.com/davidwendt) Approvers: - Jake Hemstad (https://github.com/jrhemstad) - Nghia Truong (https://github.com/ttnghia) URL: rapidsai#9536
…ai#9608) Closes rapidsai#9599 Saves input `NativeFile` objects before converting them to `NativeFileDatasource`, and uses the saved objects to read/parse metadata with pyarrow. Authors: - Richard (Rick) Zamora (https://github.com/rjzamora) Approvers: - Charles Blackmon-Luca (https://github.com/charlesbluca) URL: rapidsai#9608
…apidsai#8886) Closes rapidsai#3234.
* Adds a `calendrical_month_sequence` API (open to naming suggestions here) at the C++ layer that generates a sequence of dates, each separated by `n` calendrical months from the previous element.
* Adds a `date_range` API at the Python layer that is much more general and generates a sequence of timestamps, each separated by a fixed (e.g., 5 seconds, 10 days, or a combination) or non-fixed (e.g., 2 months, 5 years) frequency.
Authors: - Ashwin Srinath (https://github.com/shwina) - Michael Wang (https://github.com/isVoid) Approvers: - AJ Schmidt (https://github.com/ajschmidt8) - Jake Hemstad (https://github.com/jrhemstad) - David Wendt (https://github.com/davidwendt) - https://github.com/brandon-b-miller - GALI PREM SAGAR (https://github.com/galipremsagar) URL: rapidsai#8886
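A brief sketch of the Python-layer API; whether string frequency aliases are accepted in addition to `cudf.DateOffset` may vary by version, so offset objects are used here as an assumption.

```python
import cudf

# Fixed frequency: four timestamps, 10 days apart.
print(cudf.date_range(start="2021-01-01", periods=4, freq=cudf.DateOffset(days=10)))

# Non-fixed frequency: three timestamps, 2 calendrical months apart.
print(cudf.date_range(start="2021-01-31", periods=3, freq=cudf.DateOffset(months=2)))
```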
Resolves: rapidsai#9246 This PR:
- [x] Adds a dedicated page for StringHandling APIs.
- [x] Fixes some of the `StringMethods` docstrings.
- [x] Documents some recently added `StringMethods` APIs that were missing from the docs.
Authors: - GALI PREM SAGAR (https://github.com/galipremsagar) Approvers: - https://github.com/brandon-b-miller URL: rapidsai#9624
…ing to `float` (rapidsai#9613) Fixes: rapidsai#7488 This PR adds support for the strings `nan`, `inf` & `-inf` (and their case variations) when type-casting from a string column to a `float` dtype. Authors: - GALI PREM SAGAR (https://github.com/galipremsagar) Approvers: - David Wendt (https://github.com/davidwendt) - https://github.com/brandon-b-miller URL: rapidsai#9613
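A minimal sketch of the conversion this enables:

```python
import cudf

s = cudf.Series(["1.5", "nan", "NaN", "inf", "Inf", "-inf"])
# With this change the special values parse to NaN/inf instead of raising.
print(s.astype("float64"))
```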
The new methods are trivial aliases for the `truediv` method. Resolves rapidsai#9627. Authors: - Vyas Ramasubramani (https://github.com/vyasr) Approvers: - GALI PREM SAGAR (https://github.com/galipremsagar) URL: rapidsai#9630
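A short sketch, assuming the new alias names are `div` and `divide` (matching the pandas API); the PR text only states that they are trivial aliases of `truediv`.

```python
import cudf

s = cudf.Series([10, 20, 30])

# All three calls are assumed equivalent; `div`/`divide` as the alias names
# is an assumption based on pandas, not stated in the PR text.
print(s.truediv(4))
print(s.div(4))
print(s.divide(4))
```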
- Use nvCOMP for Snappy compression/decompression by default.
- Change the environment variable naming to be consistent with GDS policy.
- Add detailed description of the behavior to the docs.
Authors: - Vukasin Milovanovic (https://github.com/vuule) Approvers: - Ram (Ramakrishna Prabhu) (https://github.com/rgsl888prabhu) - Elias Stehle (https://github.com/elstehle) URL: rapidsai#9582
Fixes rapidsai#9625. Updates `hash_join::compute_join_output_size` to use std::size_t instead of cudf::size_type as the intermediate type to hold the computed output size. Authors: - Jason Lowe (https://github.com/jlowe) Approvers: - Nghia Truong (https://github.com/ttnghia) - Alessandro Bellina (https://github.com/abellina) - MithunR (https://github.com/mythrocks) - Mike Wilson (https://github.com/hyperbolic2346) - https://github.com/nvdbaranec URL: rapidsai#9626
…apidsai#9633) The latest version of cuCollections now supports using installed versions of libcudacxx. This is needed so that developers can run `./build.sh` multiple times without issue. Authors: - Robert Maynard (https://github.com/robertmaynard) Approvers: - Vyas Ramasubramani (https://github.com/vyasr) URL: rapidsai#9633
rapidsai#9487) I noticed that for some of dask-cudf's supported aggregations (specifically `first` and `last`), we end up throwing a `ValueError` in `_tree_node_agg` because we do not have a case for them:

```python
import cudf
import dask_cudf
from dask_sql import Context

df = cudf.DataFrame({"yr": [0, 2, 3] * 500, "inches": list(range(1500))})
ddf = dask_cudf.from_cudf(df, npartitions=5)

ddf.groupby(by="yr").agg({'yr': ['first']}).compute()
```

<details>

```
ValueError                                Traceback (most recent call last)
<ipython-input-1-7d2fee978fba> in <module>
      6 ddf = dask_cudf.from_cudf(df, npartitions=5)
      7
----> 8 ddf.groupby(by="yr").agg({'yr': ['first']}).compute()

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/base.py in compute(self, **kwargs)
    286         dask.base.compute
    287         """
--> 288         (result,) = compute(self, traverse=False, **kwargs)
    289         return result
    290

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/base.py in compute(*args, **kwargs)
    568         postcomputes.append(x.__dask_postcompute__())
    569
--> 570     results = schedule(dsk, keys, **kwargs)
    571     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    572

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/local.py in get_sync(dsk, keys, **kwargs)
    561     """
    562     kwargs.pop("num_workers", None)  # if num_workers present, remove it
--> 563     return get_async(
    564         synchronous_executor.submit,
    565         synchronous_executor._max_workers,

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/local.py in get_async(submit, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, chunksize, **kwargs)
    504             while state["waiting"] or state["ready"] or state["running"]:
    505                 fire_tasks(chunksize)
--> 506                 for key, res_info, failed in queue_get(queue).result():
    507                     if failed:
    508                         exc, tb = loads(res_info)

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
    435                     raise CancelledError()
    436                 elif self._state == FINISHED:
--> 437                     return self.__get_result()
    438
    439             self._condition.wait(timeout)

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
    387         if self._exception:
    388             try:
--> 389                 raise self._exception
    390             finally:
    391                 # Break a reference cycle with the exception in self._exception

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/local.py in submit(self, fn, *args, **kwargs)
    546         fut = Future()
    547         try:
--> 548             fut.set_result(fn(*args, **kwargs))
    549         except BaseException as e:
    550             fut.set_exception(e)

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/local.py in batch_execute_tasks(it)
    235     Batch computing of multiple tasks with `execute_task`
    236     """
--> 237     return [execute_task(*a) for a in it]
    238
    239

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/local.py in <listcomp>(.0)
    235     Batch computing of multiple tasks with `execute_task`
    236     """
--> 237     return [execute_task(*a) for a in it]
    238
    239

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/local.py in execute_task(key, task_info, dumps, loads, get_id, pack_exception)
    226         failed = False
    227     except BaseException as e:
--> 228         result = pack_exception(e, dumps)
    229         failed = True
    230     return key, result, failed

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/local.py in execute_task(key, task_info, dumps, loads, get_id, pack_exception)
    221     try:
    222         task, data = loads(task_info)
--> 223         result = _execute_task(task, data)
    224         id = get_id()
    225         result = dumps((result, id))

~/compose/etc/conda/cuda_11.2/envs/rapids/lib/python3.8/site-packages/dask/core.py in _execute_task(arg, cache, dsk)
    119         # temporaries by their reference count and can execute certain
    120         # operations in-place.
--> 121         return func(*(_execute_task(a, cache) for a in args))
    122     elif not ishashable(arg):
    123         return arg

~/cudf/python/dask_cudf/dask_cudf/groupby.py in _tree_node_agg(dfs, gb_cols, split_out, dropna, sort, sep)
    482             agg_dict[col] = [agg]
    483         else:
--> 484             raise ValueError(f"Unexpected aggregation: {agg}")
    485
    486     gb = df.groupby(gb_cols, dropna=dropna, as_index=False, sort=sort).agg(

ValueError: Unexpected aggregation: first
```

</details>

This PR unifies all references to `_supported` to now reference the module-level `SUPPORTED_AGGS`, and makes sure all aggs in this variable are handled in `_tree_node_agg`. This variable is also imported and used in `test_groupby_basic_aggs`, so that we don't need to remember to add an agg to that test manually. Additionally added `CudfDataFrameGroupBy.collect` to get these updated tests passing.

Authors: - Charles Blackmon-Luca (https://github.com/charlesbluca) Approvers: - GALI PREM SAGAR (https://github.com/galipremsagar) - Richard (Rick) Zamora (https://github.com/rjzamora) URL: rapidsai#9487
Closes rapidsai#8351 This PR adds the API `cudf::strings::format_list_column` to create the formatted output as described in rapidsai#8351. The API only accepts lists columns of strings.

```
Example 1
l1 = { [[a,b,c], [d,e]], [[f,g], [h]] }
s1 = format_list_column(l1)
s1 is now ["[[a,b,c],[d,e]]", "[[f,g],[h]]"]

Example 2
l2 = { [[a,b,c], [d,e]], [NULL], [[f,g], NULL, [h]] }
s2 = format_list_column(l2, '-', [':', '{', '}'])
s2 is now ["{{a:b:c}:{d:e}}", "{-}", "{{f:g}:-:{h}}"]
```

The format API takes parameters to specify the strings to use for `[`, `]`, and `,`, as well as the string used to represent null entries.

Authors: - David Wendt (https://github.com/davidwendt) Approvers: - Robert Maynard (https://github.com/robertmaynard) - AJ Schmidt (https://github.com/ajschmidt8) - Vyas Ramasubramani (https://github.com/vyasr) - Karthikeyan (https://github.com/karthikeyann) URL: rapidsai#9454
…umns with unsupported dtypes (rapidsai#9359) Depends on rapidsai#9343 This PR updates the way that `apply` fails when the user tries to apply a function to the dataframe that uses a column with an unsupported dtype. As a side effect of moving towards a row-like abstraction in user UDFs, it becomes tricky to determine if a field with an unsupported dtype is actually used. Consider the following example. This function should be allowed to execute, because even though the string field `b` is part of the "row", it is not used within the function.

```python
df = cudf.DataFrame({
    'a': [1, 2, 3],
    'b': ['a', 'b', 'c'],
    'c': [7, 8, 9]
})

def f(row):
    return row['a'] + row['c']

res = df.apply(f, axis=1)
```

That said, the following should fail to be applied:

```python
def f(row):
    return row['a'] + row['b']

res = df.apply(f, axis=1)
```

In this case the information that we need to determine whether we should throw can only be deduced from the actual function logic. Furthermore, simplistic upfront analysis won't cut it in a lot of cases, such as the following, which should be allowed since `x` is never used:

```python
def f(row):
    x = row['b']
    return row['a'] + row['c']
```

One can see that coupling this with variable aliases and the rest of python's language abstractions can quickly defeat simple algorithms for inspecting the function and determining if we should throw. The solution in this PR is to insert a special sentinel type in place of the type that would usually represent a supported dtype, and make it so that type deliberately can not be used for anything. That way, during the function analysis that numba is already set up to do, it will fail and return a proper error to the user explaining what is happening if the field is used; if not, it will simply be compiled out of the final kernel.

Authors: - https://github.com/brandon-b-miller Approvers: - Vyas Ramasubramani (https://github.com/vyasr) - Bradley Dice (https://github.com/bdice) - Graham Markall (https://github.com/gmarkall) URL: rapidsai#9359
When calling from a .cu file which will be compiled by nvcc, the following error is thrown:

```
/cudf-src/java/src/main/native/include/jni_utils.hpp(331): error: reinterpret_cast cannot cast away const or other type qualifiers
```

Add the const keyword to get rid of this error. Signed-off-by: Allen Xu <[email protected]> Co-authored-by: Jiaming Yuan <[email protected]> Authors: - Allen Xu (https://github.com/wjxiz1992) Approvers: - Jason Lowe (https://github.com/jlowe) URL: rapidsai#9637
…_TEST_SUITE" (rapidsai#9574) This PR does a simple refactoring in the unit tests, replacing calls to the `TYPED_TEST_CASE` macro with the `TYPED_TEST_SUITE` macro. This fixes the warning: "TYPED_TEST_CASE is deprecated, please use TYPED_TEST_SUITE". Due to high conflict potential with other PRs, this PR should be merged last, after the other C++ PRs, before code freeze. Authors: - Nghia Truong (https://github.com/ttnghia) Approvers: - Vukasin Milovanovic (https://github.com/vuule) - https://github.com/nvdbaranec URL: rapidsai#9574
… aggregate() and scan() (rapidsai#9545) This PR adds struct support for `min`, `max`, `argmin` and `argmax` in groupby aggregation, and `min` and `max` in groupby scan. Although these operations require implementation for both hash-based and sort-based approaches, this PR only adds struct support to the sort-based approach due to the complexity of the hash-based implementation. Struct support for the hash-based approach will be future work. Partially addresses rapidsai#8974 and rapidsai#7995. Authors: - Nghia Truong (https://github.com/ttnghia) Approvers: - David Wendt (https://github.com/davidwendt) - Jake Hemstad (https://github.com/jrhemstad) URL: rapidsai#9545
The recent PR rapidsai#9574 missed updating the docs. This PR fixes that. Authors: - Conor Hoekstra (https://github.com/codereport) Approvers: - Nghia Truong (https://github.com/ttnghia) - MithunR (https://github.com/mythrocks) URL: rapidsai#9654
…sai#9653) This PR resolves rapidsai#9648, fixing a bug introduced in rapidsai#9542 wherein name equality was enforced for Series objects compared via `equals`. It looks like pandas requires name equality only for DataFrames, and we simply didn't have tests for this before. This should go in the 21.12 release since it's a bug affecting downstream dependencies. Authors: - Vyas Ramasubramani (https://github.com/vyasr) Approvers: - GALI PREM SAGAR (https://github.com/galipremsagar) URL: rapidsai#9653
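A short illustration of the fixed behavior (a sketch for clarity, not the PR's test code):

```python
import cudf

a = cudf.Series([1, 2, 3], name="x")
b = cudf.Series([1, 2, 3], name="y")

# Series comparison ignores the name, matching pandas; only values are compared.
print(a.equals(b))  # True

# DataFrames still require matching column names.
df1 = cudf.DataFrame({"x": [1, 2, 3]})
df2 = cudf.DataFrame({"y": [1, 2, 3]})
print(df1.equals(df2))  # False
```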
Closes rapidsai#9647. This is a simple fix in `probe_join_hash_table` where `compute_join_output_size` was being invoked non-lazily when checking if the optional `output_size` was set. The whole point of the check is to not call `compute_join_output_size` again, since its value gets discarded when an `output_size` is passed. We are trying to add some performance optimizations in Spark (NVIDIA/spark-rapids#4036), and we noticed an extra call to this kernel that accounted for ~10s extra time in our query. Authors: - Alessandro Bellina (https://github.com/abellina) Approvers: - Jake Hemstad (https://github.com/jrhemstad) - Nghia Truong (https://github.com/ttnghia) - Vyas Ramasubramani (https://github.com/vyasr) URL: rapidsai#9649
…sai#9088) Depends on rapidsai#9040 Removes the json reader and impl classes, replacing member variables with local variables, reduces cognitive overhead, and facilitates further refactoring. Authors: - Christopher Harris (https://github.com/cwharris) Approvers: - Ram (Ramakrishna Prabhu) (https://github.com/rgsl888prabhu) - MithunR (https://github.com/mythrocks) - Elias Stehle (https://github.com/elstehle) URL: rapidsai#9088
…ns (rapidsai#9345) This PR changes the interface of `lists::drop_list_duplicates` such that it may accept a second (optional) input `values` lists column, and returns a pair of lists columns containing the results of copying the input columns without duplicate entries. If the optional `values` column is given, the user is responsible for ensuring that the keys and values columns have the same number of entries in each row; otherwise, the results are undefined. When copying the key entries, the corresponding value entries are also copied at the same time. A `duplicate_keep_option` parameter reused from stream compaction is used to specify which duplicate keys will be copied. This closes rapidsai#9124 and is blocked by rapidsai#9425. Authors: - Nghia Truong (https://github.com/ttnghia) Approvers: - Jake Hemstad (https://github.com/jrhemstad) - https://github.com/nvdbaranec URL: rapidsai#9345
- Merge func_pdf and func_gdf into a single `func`.
- Add support for bitwise logical ops.
- Use the internal `set_base_mask` API, eliminating a deprecation warning.
- Update df.apply docstrings to match rapidsai#9343.
- Add a (sampled) parametrized test case for ops over mixed dtypes.
Authors: - Michael Wang (https://github.com/isVoid) Approvers: - https://github.com/brandon-b-miller - Vyas Ramasubramani (https://github.com/vyasr) URL: rapidsai#9422
…i#9638) This fixes a `read_parquet` bug discovered while iterating on rapidsai#9589 Without this fix, the optimized `read_parquet` code path will fail when the pandas metadata includes index-column information. It may also fail when the data includes list or struct columns (depending on the engine that wrote the parquet file). Authors: - Richard (Rick) Zamora (https://github.com/rjzamora) Approvers: - https://github.com/brandon-b-miller URL: rapidsai#9638
Closes rapidsai#9546 This PR fixes the issue, likely through elimination of undefined behavior. Modified the local heap implementation to return `void*` instead of `uint8_t*`, which greatly reduces the number of `reinterpret_cast`s. Also changed the heap type to `char*`, presumably reducing/eliminating aliasing issues. Some other clean-up in related code is included. Authors: - Vukasin Milovanovic (https://github.com/vuule) Approvers: - GALI PREM SAGAR (https://github.com/galipremsagar) - David Wendt (https://github.com/davidwendt) - Robert Maynard (https://github.com/robertmaynard) - Charles Blackmon-Luca (https://github.com/charlesbluca) URL: rapidsai#9632
This PR updates the `conda` recipe build strings and `cudatoolkit` version specifications as part of the Enhanced Compatibility efforts. The build strings have been updated to only include the major CUDA version (i.e. `librmm-21.12.00a-cuda11_gc781527_12.tar.bz2`) and the `cudatoolkit` version specifications will now be formatted like `cudatoolkit >=x,<y.0a0` (i.e. `cudatoolkit >=11,<12.0a0`). Moving forward, we'll build the packages with a single CUDA version (i.e. `11.4`) and test them in environments with different CUDA versions (i.e. `11.0`, `11.2`, `11.4`, etc.). Authors: - AJ Schmidt (https://github.com/ajschmidt8) Approvers: - Ray Douglass (https://github.com/raydouglass) URL: rapidsai#9456
Ensure that when a new cudf version is made, we also bump the rapids-cmake version at the same time. Otherwise we will get the previous release's dependencies by mistake. Authors: - Robert Maynard (https://github.com/robertmaynard) Approvers: - Ray Douglass (https://github.com/raydouglass) - Mark Harris (https://github.com/harrism) - Jason Lowe (https://github.com/jlowe) URL: rapidsai#9249
…ask and count of unset bits (rapidsai#9616) Closes rapidsai#9176
- [x] Update `bitmask_and` and `bitmask_or` to return both resulting mask and count of unset bits
- [x] Refactor related implementations to use new `bitmask_and/or`
- [x] Update unit tests
Authors: - Yunsong Wang (https://github.com/PointKernel) Approvers: - Mike Wilson (https://github.com/hyperbolic2346) - Bradley Dice (https://github.com/bdice) - Jason Lowe (https://github.com/jlowe) URL: rapidsai#9616
These are some minor updates requested for PR rapidsai#9088 that I forgot to push prior to merging the PR. Authors: - Christopher Harris (https://github.com/cwharris) Approvers: - Bradley Dice (https://github.com/bdice) - Nghia Truong (https://github.com/ttnghia) URL: rapidsai#9659
robertmaynard requested review from mythrocks, nvdbaranec, shwina and isVoid on November 11, 2021 19:55

github-actions bot added labels: CMake (CMake build issue), conda, Java (Affects Java cuDF API.), Python (Affects Python cuDF API.), libcudf (Affects libcudf (C++/CUDA) code.) on Nov 11, 2021