BUG: Index.drop_duplicates() is inconsistent for unhashable values #60925
Comments
Thanks for the report! This should raise consistently. Further investigations and PRs to fix are welcome!

Edit for visibility: as @MarcoGorelli points out below, this should even raise on index construction! The source of the issue appears to be here (lines 347 to 348 in 0305656): we create an unpopulated hash table, and then fail at the lookup (lines 335 to 337 in 0305656).
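The two-step failure described above mirrors plain Python dict behavior: creating the table succeeds unconditionally, and the TypeError only surfaces once an unhashable key is inserted or looked up. This is a stdlib analogy, an assumption about the pandas internals rather than the actual Cython code:

```python
table = {}  # creating an (empty) hash table never fails
key = [0]   # lists are unhashable

try:
    table[key] = "value"  # insertion hashes the key -> TypeError
except TypeError as e:
    print(e)  # unhashable type: 'list'

try:
    key in table  # a lookup hashes the key too -> TypeError
except TypeError as e:
    print(e)
```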
Still, I do not see how this could then return correct results. It is certainly inefficient.

```python
tuples = [(k,) for k in range(20000)] + [(0,)]
idx = pd.Index(tuples)
%timeit idx.drop_duplicates()
# 445 μs ± 3.66 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

lists = [[k] for k in range(20000)] + [[0]]
idx = pd.Index(lists)
try:
    idx.drop_duplicates()
except TypeError:
    pass
%timeit idx.drop_duplicates()
# 3.03 s ± 18.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

My guess is that the hash table is somehow degenerating into saying everything is a collision, and therefore doing an O(n) lookup (where n is the size of the hash table).
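The degeneration guessed at above can be illustrated in pure Python: a key type whose instances all land in the same hash bucket turns each set insertion into a linear probe over the existing chain. This is a toy AlwaysCollides class, not pandas internals:

```python
class AlwaysCollides:
    """Toy key type: every instance hashes to the same bucket, so each
    insertion probes the whole chain (O(n) per lookup, O(n^2) build)."""

    def __init__(self, v):
        self.v = v

    def __hash__(self):
        return 0  # force every key to collide

    def __eq__(self, other):
        return isinstance(other, AlwaysCollides) and self.v == other.v


n = 2000
fast = {i for i in range(n)}                  # O(1) average per insert
slow = {AlwaysCollides(i) for i in range(n)}  # every insert scans the chain

assert len(slow) == n
assert AlwaysCollides(0) in slow  # results are still correct, just slow
```

Building `slow` is dramatically slower than `fast` despite producing a set of the same size, which matches the correct-but-3.03-second result reported above.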
Took a look. Think I see what's happening. The root cause is that values are put into hash tables via two different code paths.

First path: raises, due to the call at pandas/pandas/_libs/hashtable_class_helper.pxi.in, lines 1387 to 1392 in 19ea997.

Second path: does not raise (pandas/pandas/_libs/hashtable_func_helper.pxi.in, lines 169 to 172 in 19ea997).

The reason there are two code paths is:

As for what to do, I'm going on the premise given by @rhshadrach that we should raise in both code paths. Intuitively, there is no need to have strong support for unhashable values. I'm going to suggest a few options, though I'm not yet familiar with the code so I'm not sure which is best (if any).

Thoughts on how to proceed?
Should this not raise even earlier? As in, should index construction itself raise?
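The suggestion above can be sketched with a hypothetical wrapper (hashable_checked_index is not pandas API) that validates hashability eagerly, so the error surfaces at construction time rather than inside a later operation:

```python
import pandas as pd


def hashable_checked_index(values):
    """Hypothetical helper: reject unhashable elements at construction."""
    for v in values:
        hash(v)  # raises TypeError for unhashable values such as lists
    return pd.Index(values)


hashable_checked_index([(1,), (2,)])  # fine: tuples are hashable

try:
    hashable_checked_index([[1], [2]])
except TypeError:
    print("rejected at construction time")
```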
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
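The reporter's snippet did not survive rendering; the following is a reconstruction based on the description below, not the original code:

```python
import pandas as pd

# Hashable values: duplicates are dropped consistently.
idx_ok = pd.Index([(1,), (2,), (1,)])
assert len(idx_ok.drop_duplicates()) == 2

# Unhashable values (lists): per the report on pandas 2.2.3, the first
# call raises TypeError: unhashable type: 'list', yet retrying the very
# same call on the same object then succeeds. Behavior may differ across
# versions, so the failure is caught broadly here.
try:
    idx_bad = pd.Index([[1], [2], [1]])
    print(idx_bad.drop_duplicates())
except Exception as e:
    print(type(e).__name__, e)
```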
Issue Description
pandas.Index.drop_duplicates() inconsistently raises TypeError: unhashable type: 'list' when its values include a list. The error does not seem to prevent the underlying uniqueness computation from happening. In addition to the submitted reproducible example, there is a direct causation here in the Index object: if we call .drop_duplicates() when the Index contains unhashable types, we observe a TypeError. But if we simply ignore the error the first time and try .drop_duplicates() again, it works and removes the duplicated entries, including the unhashable ones. The underlying Index implementation populates its hash table mapping even though the original call to drop_duplicates fails; we know this population is successful because the second attempt at .drop_duplicates works.

Finally, it appears that attribute checking on a pandas.DataFrame causes the PyObjectHashTable to be constructed for the column index. This is likely due to the shared code path between __getattr__ and __getitem__.

Expected Behavior

I expect Index.drop_duplicates() to behave the same regardless of whether an attribute has been checked beforehand. The following two snippets should produce equivalent results (whether that is to raise an error or to produce a result):

Installed Versions
INSTALLED VERSIONS
commit : 0691c5c
python : 3.12.7
python-bits : 64
OS : Linux
OS-release : 6.6.52-1-lts
Version : #1 SMP PREEMPT_DYNAMIC Wed, 18 Sep 2024 19:02:04 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.2.0
html5lib : None
hypothesis : 6.125.3
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None