Documentation of `Dataset.count()` is ambiguous #8055
Comments
Thanks for the issue, many aggregations could indeed use some extra explanation. Concerning the dead links, that's really not good. Probably we should look into letting sphinx report them.

that's what …
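The Sphinx reporting mentioned above could be sketched roughly as follows (assuming the docs live under `doc/`; the exact paths in the xarray repo may differ):

```shell
# linkcheck validates external URLs in the built docs
sphinx-build -b linkcheck doc/ doc/_build/linkcheck

# nitpicky mode (-n) warns about unresolved cross-references, e.g. a
# "See Also" entry pointing at a nonexistent np.count
sphinx-build -n -b html doc/ doc/_build/html
```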
See also #7378
This is correct.
I would start by adding a …

The "See Also" piece is harder. Potentially, instead of the current template, we generate that too by adding a …. An alternate solution would be to manually fix the docstring for ….
* Changed aggregation example to include 0 value
* Fixed broken links in count docs (#8055)
* Added entry to whats-new.rst
* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Guillermo Cossio <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
What is your issue?
Hello everyone!
I've noticed that the documentation for `Dataset.count` and `DataArray.count` is a little unclear with respect to what it actually does. In my experience, it's not immediately obvious when looking at this site whether zero values are counted or not. The given example does not contain a 0 value that would clarify this, and what is worse is that the "See Also" section points to `np.count` and to `dask.array.count`, which do not exist. The closest NumPy equivalent is `np.count_nonzero`, which behaves differently from the xarray `count` method. The xarray `count` is actually equivalent to pandas `DataFrame.count`, which counts all non-NA values, including 0.

I would take a shot at writing a clearer docstring, but I'm not sure how to do that using the `generate_aggregations.py` file...
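To illustrate the distinction described above, here is a minimal sketch using pandas and NumPy (pandas stands in for xarray here, since the issue states that xarray's `count` matches pandas' behavior):

```python
import numpy as np
import pandas as pd

s = pd.Series([0.0, 1.0, np.nan, 2.0])

# pandas count() (and, per the issue, xarray count()) tallies all
# non-NA values, zero included
print(s.count())  # 3

# np.count_nonzero excludes zeros, so the result differs
print(np.count_nonzero(s.dropna()))  # 2
```

The 0.0 entry is exactly what separates the two: it is counted by `count()` but skipped by `np.count_nonzero`.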