BUG: ArrowExtensionArray.mode(dropna=False) not respecting NAs #50986
Conversation
pandas/core/arrays/arrow/array.py
Outdated
counts = modes.field(1)
# counts sorted descending, i.e. counts[0] = max
if not dropna and self._data.null_count > counts[0].as_py():
i know you've wanted to keep as much of this "in pyarrow" as possible, but i find this hard to follow. would it be that bad to just implement mode in terms of value_counts?
if not len(self):
    [...]
vcs = self.value_counts(dropna=dropna)
res_ser = vcs[vcs == vcs.max()].sort_index()
return res_ser.index._values
Locally, dispatching the mode implementation to value_counts results in a noticeable slowdown, so I would prefer to keep using the pc.mode implementation:
In [9]: %timeit arr._mode()  # pc.mode-based implementation
147 µs ± 1.69 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [4]: %timeit arr._mode()  # value_counts-based implementation
1.13 ms ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
makes sense, thanks for taking a look
Seeing this comment here, I find it surprising that there is such a big difference between the two, especially because you are doing a count_distinct to include all unique values in the result of mode, essentially making it equivalent to a value_counts. So if there is a big difference, that seems to indicate a performance issue in the implementation in Arrow.
Now, trying with the following, I see different results:
In [8]: arr = pa.array(np.random.randint(0, 1000, 1_000_000))
In [12]: %timeit pc.mode(arr, pc.count_distinct(arr).as_py())
12.7 ms ± 496 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [13]: %timeit pc.value_counts(arr)
8.96 ms ± 170 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(it might depend a lot on the exact characteristics of the data, though, number of uniques vs total number of values, etc)
This was a quick benchmark replacing the implementation to use ArrowExtensionArray.value_counts, not just comparing pc.mode vs pc.value_counts per se:
% ipython
Python 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:53:40)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.8.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import pyarrow as pa
In [2]: data = list(range(100_000)) + [None] * 100_000
In [3]: arr = pd.arrays.ArrowExtensionArray(pa.array(data))
In [4]: %timeit arr._mode()
15.9 ms ± 57.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
% git diff
+ vcs = self.value_counts(dropna=dropna)
+ res_ser = vcs[vcs == vcs.max()].sort_index()
+ return res_ser.index._values
+ # pa_type = self._data.type
+ # if pa.types.is_temporal(pa_type):
+ # nbits = pa_type.bit_width
+ # if nbits == 32:
+ # data = self._data.cast(pa.int32())
+ # elif nbits == 64:
+ # data = self._data.cast(pa.int64())
+ # else:
+ # raise NotImplementedError(pa_type)
+ # else:
+ # data = self._data
+ #
+ # modes = pc.mode(data, pc.count_distinct(data).as_py())
+ # counts = modes.field(1)
+ # # counts sorted descending i.e counts[0] = max
+ # if not dropna and self._data.null_count > counts[0].as_py():
+ # return type(self)(pa.array([None], type=pa_type))
+ # mask = pc.equal(counts, counts[0])
+ # most_common = modes.field(0).filter(mask)
+ #
+ # if pa.types.is_temporal(pa_type):
+ # most_common = most_common.cast(pa_type)
+ #
+ # if not dropna and self._data.null_count == counts[0].as_py():
+ # most_common = pa.concat_arrays(
+ # [most_common, pa.array([None], type=pa_type)]
+ # )
+ #
+ # return type(self)(most_common)
% ipython
In [1]: import pyarrow as pa
In [2]: data = list(range(100_000)) + [None] * 100_000
In [3]: arr = pd.arrays.ArrowExtensionArray(pa.array(data))
In [4]: %timeit arr._mode()
54.7 ms ± 672 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Then I suppose the overhead is coming from the extra work that ArrowExtensionArray.value_counts is doing?
Doing it with pyarrow compute directly here in _mode might be faster and simpler (compared to the current _mode)? Something like:
res = pc.value_counts(self._data)
most_common = res.field("values").filter(
    pc.equal(res.field("counts"), pc.max(res.field("counts")))
)
(this is still faster than calling pc.mode on your example data)
Nice! Looks like pc.value_counts also has some benefits:
- Works for string and binary types
- The result maintains the order of the original values.
So it's good to switch to using value_counts here.
looks like this breaks the existing mode tests
Couple of comments, neither a deal-breaker. Otherwise LGTM
@jorisvandenbossche can you take a look? this calls a bunch of pyarrow stuff directly
pandas/core/arrays/arrow/array.py
Outdated
if not dropna and self._data.null_count > counts[0].as_py():
    return type(self)(pa.array([None], type=pa_type))
I just wanted to comment on the arrow issue that you could do something like the above as a workaround, but you are already doing that ;)
Personally, I think it's a fine workaround on the pandas side (certainly in the short term). It should also be basically as performant as when pyarrow would do this in mode itself, since null_count is cheap (and cached).
    ids=["multi_mode", "single_mode"],
)
def test_mode_dropna_true(data_for_grouping, take_idx, exp_idx, request):
    pa_dtype = data_for_grouping.dtype.pyarrow_dtype
    if pa.types.is_string(pa_dtype) or pa.types.is_binary(pa_dtype):
nice!
mask = pc.equal(counts, counts[0])
most_common = values.filter(mask)
if dropna:
    data = data.drop_null()
I don't know if you checked, but it might be more efficient to do this after the value_counts, i.e. on res (assuming that res is a much shorter array, and so cheaper to filter).
In [1]: import pyarrow as pa
In [2]: data = list(range(100_000)) + [None] * 100_000
In [3]: arr = pd.arrays.ArrowExtensionArray(pa.array(data))
In [4]: %timeit arr._mode()
7.01 ms ± 148 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) <-- drop_null before
6.93 ms ± 112 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) <-- drop_null after
So might as well do this after as you suggested
Looks like filtering after gives an incorrect result for the multi-mode tests. If filtering were to occur after, I would have to drop the NAs in values and then filter counts where the NAs were in values. To keep things simpler for now, I'll leave the filtering before the value_counts.
> If filtering were to occur after, I would have to drop the NAs in values and then filter the counts where the NAs were in values
That would be something like:
if dropna:
    res = res.filter(res.field("values").is_valid())
to drop values based on one field of the struct, before calculating most_common.
So that line is a bit more complicated than calling drop_null, but only slightly. Now, it also doesn't seem to matter that much ;)
Ah thanks. This passes the tests but appears slower than dropping the NAs beforehand for this example, so I think we should just drop the NAs beforehand for now.
In [1]: import pyarrow as pa
In [2]: data = list(range(100_000)) + [None] * 100_000
In [3]: arr = pd.arrays.ArrowExtensionArray(pa.array(data))
In [4]: %timeit arr._mode()
6.72 ms ± 45 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) <- drop_null before
7.24 ms ± 87 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) <- filter, is_valid
This reverts commit 1a680a8.
Greenish
LGTM thanks for taking this on