
Fix flaky memory usage test by guaranteeing array size. (#10114)
The test `test_dataframe.py::test_memory_usage_multi` is currently flaky. In theory it can fail for any value of the `rows` parameter, but in practice we only observe failures for the smaller value of 10.

The data for the `MultiIndex` is constructed by randomly sampling from an array of size 3. For a sufficiently small sample (e.g. 10), the probability that the selection does not include all three values (e.g. a sample of `[0, 1, 1, 1, 0, 1, 1, 0, 0, 1]`) is not vanishingly small, so such samples occur with observable frequency. The resulting `MultiIndex` encodes the levels for that column with only two values, and as a result the column occupies 8 fewer bytes (one 64-bit integer or float) than expected.

This PR changes the test to always sample without replacement from an array of the same length as the number of rows. I could also have fixed this problem by fixing a random seed that ensures all the values are always sampled, but I made this change instead because 1) it more clearly conveys the intent, and 2) fixing a seed is a change that we should discuss and apply globally across all our tests.
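To make the failure mode concrete, here is a minimal standalone sketch (not part of the commit; the variable names are illustrative). It shows why sampling with replacement from a 3-element array can shrink a `MultiIndex` level, and why sampling without replacement from a length-`rows` array cannot:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng()

# Sampling WITH replacement: for 10 draws from 3 values, the chance that
# at least one value is absent is 3*(2/3)**10 - 3*(1/3)**10 ~= 5.2%
# (inclusion-exclusion), so the flake occurs with observable frequency.
sample = rng.choice(np.arange(3, dtype="int64"), 10)
idx = pd.MultiIndex.from_arrays([sample, sample])
# Levels store only the unique values, so a missing value means the level
# holds one fewer 8-byte element than the test expects.
print(len(idx.levels[0]))  # usually 3, but sometimes 2

# Sampling WITHOUT replacement from a length-10 array always yields 10
# distinct values, so the level size is deterministic.
fixed = rng.choice(np.arange(10, dtype="int64"), 10, replace=False)
assert len(pd.MultiIndex.from_arrays([fixed, fixed]).levels[0]) == 10
```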

Authors:
  - Vyas Ramasubramani (https://github.com/vyasr)

Approvers:
  - Bradley Dice (https://github.com/bdice)
  - Ashwin Srinath (https://github.com/shwina)

URL: #10114
vyasr authored Jan 24, 2022
1 parent 6a77acc commit 6d11823
Showing 1 changed file with 10 additions and 4 deletions.
14 changes: 10 additions & 4 deletions python/cudf/cudf/tests/test_dataframe.py
@@ -5474,20 +5474,26 @@ def test_memory_usage_list():
 @pytest.mark.parametrize("rows", [10, 100])
 def test_memory_usage_multi(rows):
     deep = True
+    # We need to sample without replacement to guarantee that the sizes of
+    # the levels are always the same.
     df = pd.DataFrame(
         {
             "A": np.arange(rows, dtype="int32"),
-            "B": np.random.choice(np.arange(3, dtype="int64"), rows),
-            "C": np.random.choice(np.arange(3, dtype="float64"), rows),
+            "B": np.random.choice(
+                np.arange(rows, dtype="int64"), rows, replace=False
+            ),
+            "C": np.random.choice(
+                np.arange(rows, dtype="float64"), rows, replace=False
+            ),
         }
     ).set_index(["B", "C"])
     gdf = cudf.from_pandas(df)
     # Assume MultiIndex memory footprint is just that
     # of the underlying columns, levels, and codes
     expect = rows * 16  # Source Columns
     expect += rows * 16  # Codes
-    expect += 3 * 8  # Level 0
-    expect += 3 * 8  # Level 1
+    expect += rows * 8  # Level 0
+    expect += rows * 8  # Level 1
 
     assert expect == gdf.index.memory_usage(deep=deep)
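As a sanity check on the new accounting (not part of the commit), the expected byte count for `rows=10` works out as follows, using the same 8-bytes-per-element assumption for source columns, codes, and levels that the test uses:

```python
rows = 10
expect = rows * 16   # source columns: B (int64) + C (float64), 8 bytes each
expect += rows * 16  # codes: one 8-byte entry per row for each of 2 levels
expect += rows * 8   # level 0: rows unique int64 values
expect += rows * 8   # level 1: rows unique float64 values
assert expect == 480
```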

