- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
```python
import pandas as pd
import numpy as np

s = pd.Series([True, pd.NA, np.nan])
print(s.value_counts(dropna=False))
print(s.mode(dropna=False))
```
Problem description

For `s.value_counts(...)` the result is

```
NaN     2
True    1
dtype: int64
```

but for `s.mode(...)` the output is:

```
0     NaN
1    True
2    <NA>
dtype: object
```
Expected Output

For the sake of consistency one would expect `mode` to be only `NaN`, as it was counted twice by `value_counts`, i.e.

```
0    NaN
dtype: object
```
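The expected behavior above can be sketched with a small helper that, like `value_counts(dropna=False)`, treats every missing marker (`np.nan`, `pd.NA`, `None`) as a single value before picking the mode. This is a hypothetical illustration, not the pandas implementation; the name `unified_mode` is made up for this sketch.

```python
import pandas as pd
import numpy as np

def unified_mode(s: pd.Series) -> pd.Series:
    """Hypothetical mode that groups np.nan, pd.NA and None as one NA value."""
    # count only the non-missing values, then compare against the NA count
    counts = s.dropna().value_counts()
    n_missing = int(s.isna().sum())
    max_count = counts.max() if len(counts) else 0
    if n_missing > max_count:
        # missing values are strictly most frequent: the mode is a single NaN
        return pd.Series([np.nan], dtype=object)
    modes = list(counts[counts == max_count].index)
    if n_missing == max_count and n_missing > 0:
        # missing values tie with the most frequent value(s)
        modes.append(np.nan)
    return pd.Series(modes, dtype=object)

s = pd.Series([True, pd.NA, np.nan])
print(unified_mode(s))  # a single NaN, matching the expected output above
```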
Output of `pd.show_versions()`

```
INSTALLED VERSIONS
------------------
commit           : b5958ee
python           : 3.7.3.final.0
python-bits      : 64
OS               : Linux
OS-release       : 4.4.0-53-generic
Version          : #74-Ubuntu SMP Fri Dec 2 15:59:10 UTC 2016
machine          : x86_64
processor        : x86_64
byteorder        : little
LC_ALL           : None
LANG             : en_US.UTF-8
LOCALE           : en_US.UTF-8

pandas           : 1.1.5
numpy            : 1.19.2
pytz             : 2020.5
dateutil         : 2.8.1
pip              : 20.3.3
setuptools       : 51.0.0.post20201207
Cython           : 0.29.21
pytest           : 5.4.3
hypothesis       : 5.36.0
sphinx           : None
blosc            : None
feather          : None
xlsxwriter       : None
lxml.etree       : None
html5lib         : None
pymysql          : None
psycopg2         : None
jinja2           : 2.11.2
IPython          : 7.19.0
pandas_datareader: None
bs4              : 4.9.3
bottleneck       : None
fsspec           : None
fastparquet      : None
gcsfs            : None
matplotlib       : 3.3.1
numexpr          : None
odfpy            : None
openpyxl         : None
pandas_gbq       : None
pyarrow          : None
pytables         : None
pyxlsb           : None
s3fs             : None
scipy            : 1.2.1
sqlalchemy       : None
tables           : None
tabulate         : None
xarray           : None
xlrd             : None
xlwt             : None
numba            : 0.50.1
```
The underlying issue is that for `value_counts` with `np.object`, NAs are always dropped:

pandas/pandas/_libs/hashtable_func_helper.pxi.in
Lines 97 to 101 in f1d4026
and are taken care of in a post-processing step:

pandas/pandas/core/algorithms.py
Lines 877 to 881 in f9ce9d6
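The post-processing described above can be sketched roughly as follows. This is a simplified illustration of the idea, not the actual pandas code; `value_counts_with_na` is a hypothetical name.

```python
import pandas as pd
import numpy as np

def value_counts_with_na(s: pd.Series) -> pd.Series:
    # Rough sketch of the dropna=False post-processing: the hashtable counts
    # only non-NA values, then one NaN row is appended with the total number
    # of missing entries, so np.nan, pd.NA and None collapse into a single row.
    counts = s.dropna().value_counts()
    n_missing = int(s.isna().sum())
    if n_missing:
        counts = pd.concat([counts, pd.Series([n_missing], index=[np.nan])])
    return counts

s = pd.Series([True, pd.NA, np.nan])
print(value_counts_with_na(s))  # True -> 1, NaN -> 2
```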
This is not done for `mode`:

Line 323 in f1d4026

And because `pd.NA` and `np.nan` aren't "the same" for hashtables, we see three different modes.
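That the two markers are distinct at the object level, even though `isna()` unifies them, can be checked directly:

```python
import pandas as pd
import numpy as np

# pd.NA and np.nan are distinct sentinel objects of different types, so a
# hashtable keyed on the raw objects sees them as two separate values
print(type(pd.NA), type(np.nan))  # NAType vs float
print(pd.NA is np.nan)            # False
print(pd.isna(pd.NA), pd.isna(np.nan))  # True True: isna() treats both as missing
```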