
Commit 6639d48

committed: backticks and moved to .22
1 parent 374c370 commit 6639d48

File tree

4 files changed: 226 additions & 0 deletions


doc/source/whatsnew/v0.21.1.txt.orig

Lines changed: 154 additions & 0 deletions
@@ -0,0 +1,154 @@
.. _whatsnew_0211:

v0.21.1
-------

This is a minor release from 0.21.0 and includes a number of deprecations, new
features, enhancements, and performance improvements along with a large number
of bug fixes. We recommend that all users upgrade to this version.

.. _whatsnew_0211.enhancements:

New features
~~~~~~~~~~~~

-
-
-

.. _whatsnew_0211.enhancements.other:

Other Enhancements
^^^^^^^^^^^^^^^^^^

- :meth:`Timestamp.timestamp` is now available in Python 2.7. (:issue:`17329`)
-
-

.. _whatsnew_0211.deprecations:

Deprecations
~~~~~~~~~~~~

-
-
-

.. _whatsnew_0211.performance:

Performance Improvements
~~~~~~~~~~~~~~~~~~~~~~~~

- Improved performance of plotting large series/dataframes (:issue:`18236`).
-
-

.. _whatsnew_0211.docs:

Documentation Changes
~~~~~~~~~~~~~~~~~~~~~

-
-
-

.. _whatsnew_0211.bug_fixes:

Bug Fixes
~~~~~~~~~

Conversion
^^^^^^^^^^

- Bug in :class:`TimedeltaIndex` subtraction could incorrectly overflow when ``NaT`` is present (:issue:`17791`)
- Bug in :class:`DatetimeIndex` where subtracting a datetimelike object could fail to raise ``OverflowError`` (:issue:`18020`)
- Bug in :meth:`IntervalIndex.copy` when copying an ``IntervalIndex`` with non-default ``closed`` (:issue:`18339`)
- Bug in :func:`DataFrame.to_dict` where tz-aware datetime columns were not converted to the required arrays when used with ``orient='records'``, raising ``TypeError`` (:issue:`18372`)
-
-

Indexing
^^^^^^^^

- Bug in a boolean comparison of a ``datetime.datetime`` and a ``datetime64[ns]`` dtype Series (:issue:`17965`)
- Bug where a ``MultiIndex`` with more than a million records was not raising ``AttributeError`` when trying to access a missing attribute (:issue:`18165`)
- Bug in :class:`IntervalIndex` constructor when a list of intervals is passed with non-default ``closed`` (:issue:`18334`)
- Bug in ``Index.putmask`` when an invalid mask is passed (:issue:`18368`)
-

I/O
^^^

- Bug in :class:`~pandas.io.stata.StataReader` not converting date/time columns with display formatting (:issue:`17990`). Previously, columns with display formatting were normally left as ordinal numbers and not converted to datetime objects.
- Bug in :func:`read_csv` when reading a compressed UTF-16 encoded file (:issue:`18071`)
- Bug in :func:`read_csv` for handling null values in index columns when specifying ``na_filter=False`` (:issue:`5239`)
- Bug in :func:`read_csv` when reading numeric category fields with high cardinality (:issue:`18186`)
- Bug in :meth:`DataFrame.to_csv` when the table had ``MultiIndex`` columns, and a list of strings was passed in for ``header`` (:issue:`5539`)
- :func:`read_parquet` now allows specifying the columns to read from a parquet file (:issue:`18154`); see the sketch below
- :func:`read_parquet` now allows specifying kwargs which are passed to the respective engine (:issue:`18216`)
- Bug in parsing integer datetime-like columns with specified format in ``read_sql`` (:issue:`17855`).
- Bug in :meth:`DataFrame.to_msgpack` when serializing data of the ``numpy.bool_`` datatype (:issue:`18390`)

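A minimal sketch of the new column selection in :func:`read_parquet` (the file path and column names here are placeholders, not taken from the issue):

.. code-block:: python

    import pandas as pd

    # Read only the listed columns instead of the whole file; additional
    # keyword arguments are forwarded to the underlying engine.
    df = pd.read_parquet('data.parquet', columns=['A', 'B'])
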

Plotting
^^^^^^^^

-
-
-

Groupby/Resample/Rolling
^^^^^^^^^^^^^^^^^^^^^^^^

- Bug in ``DataFrame.resample(...).apply(...)`` when there is a callable that returns different columns (:issue:`15169`)
- Bug in ``DataFrame.resample(...)`` when there is a time change (DST) and resampling frequency is 12h or higher (:issue:`15549`)
- Bug in ``pd.DataFrameGroupBy.count()`` when counting over a datetimelike column (:issue:`13393`)
- Bug in ``rolling.var`` where calculation is inaccurate with a zero-valued array (:issue:`18430`)
- Bug when grouping by a single column and aggregating with a class like ``list`` or ``tuple`` (:issue:`18079`); see the sketch below
-
-
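
A minimal sketch of the kind of call affected by :issue:`18079` (the data is illustrative only, not taken verbatim from the issue):

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({'A': [1, 1, 3], 'B': [1, 2, 4]})

    # Previously the class ``list`` was mistaken for a collection of
    # aggregation functions and the call could fail; it now collects each
    # group's values into a list.
    df.groupby('A')['B'].agg(list)
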

Sparse
^^^^^^

-
-
-

Reshaping
^^^^^^^^^

- Error message in ``pd.merge_asof()`` for key datatype mismatch now includes the datatypes of the left and right keys (:issue:`18068`)
- Bug in ``pd.concat`` when empty and non-empty DataFrames or Series are concatenated (:issue:`18178` :issue:`18187`)
- Bug in ``DataFrame.filter(...)`` when :class:`unicode` is passed as a condition in Python 2 (:issue:`13101`)
-

Numeric
^^^^^^^

- Bug in ``pd.Series.rolling.skew()`` and ``rolling.kurt()`` producing floating-point inaccuracies when all values are equal (:issue:`18044`)
-
-
-

Categorical
^^^^^^^^^^^

- Bug in :meth:`DataFrame.astype` where casting to 'category' on an empty ``DataFrame`` causes a segmentation fault (:issue:`18004`); see the sketch below
- Error messages in the testing module have been improved when items have different ``CategoricalDtype`` (:issue:`18069`)
- ``CategoricalIndex`` can now correctly take a ``pd.api.types.CategoricalDtype`` as its dtype (:issue:`18116`)
- Bug in ``Categorical.unique()`` returning read-only ``codes`` array when all categories were ``NaN`` (:issue:`18051`)
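
A minimal sketch of the previously-crashing cast (an illustrative reproduction; the exact frame used in the issue may differ):

.. code-block:: python

    import pandas as pd

    # An empty frame (a column but no rows); casting it to ``'category'``
    # previously triggered the segmentation fault described above.
    pd.DataFrame({'A': []}).astype('category')
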
String
^^^^^^

- :meth:`Series.str.split()` will now propagate ``NaN`` values across all expanded columns instead of ``None`` (:issue:`18450`)

Other
^^^^^

-
-

grp.patch

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -3022,7 +3022,9 @@ class SeriesGroupBy(GroupBy):
         if isinstance(func_or_funcs, compat.string_types):
             return getattr(self, func_or_funcs)(*args, **kwargs)
 
-        if hasattr(func_or_funcs, '__iter__'):
+        if isinstance(func_or_funcs, collections.Iterable):
+            # Catch instances of lists / tuples
+            # but not the class list / tuple itself.
             ret = self._aggregate_multiple_funcs(func_or_funcs,
                                                  (_level or 0) + 1)
         else:
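
The check matters because the class list itself exposes __iter__ as an attribute, while only actual instances satisfy the isinstance test. A quick illustration (assumes collections.Iterable is still importable, i.e. Python 2.7 or Python 3 before 3.10):

    import collections

    # Old check: treats the class itself as "a collection of functions".
    hasattr(list, '__iter__')                      # True  (false positive)
    hasattr([min, max], '__iter__')                # True

    # New check: matches instances only, so ``.agg(list)`` falls through
    # to the single-function path.
    isinstance(list, collections.Iterable)         # False
    isinstance([min, max], collections.Iterable)   # True
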

grp_test.patch

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2725,3 +2725,12 @@ def _check_groupby(df, result, keys, field, f=lambda x: x.sum()):
     expected = f(df.groupby(tups)[field])
     for k, v in compat.iteritems(expected):
         assert (result[k] == v)
+
+
+def test_tuple():
+    df = pd.DataFrame({'A': [1, 1, 1, 3, 3, 3],
+                       'B': [1, 1, 1, 4, 4, 4], 'C': [1, 1, 1, 3, 4, 4]})
+
+    result = df.groupby(['A', 'B']).aggregate(tuple)
+    result2 = df.groupby('A').aggregate(tuple)
+    result3 = df.groupby('A').aggregate([tuple])
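
For reference, a sketch of what the single-column case should produce; the expected values are adapted from the commented-out assertion in test_agg.py below, so treat them as illustrative rather than part of the patch:

    import pandas as pd

    df = pd.DataFrame({'A': [1, 1, 1, 3, 3, 3],
                       'B': [1, 1, 1, 4, 4, 4],
                       'C': [1, 1, 1, 3, 4, 4]})

    # Each group's values are collected into one tuple per group.
    result = df.groupby('A')['C'].aggregate(tuple)

    expected = pd.Series([(1, 1, 1), (3, 4, 4)], index=[1, 3], name='C')
    expected.index.name = 'A'
    pd.testing.assert_series_equal(result, expected)
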

test_agg.py

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
# Scratch script exercising groupby aggregation with array-like callables
# and classes (list / tuple / np.array / pd.Series), related to GH 18079.
import pandas as pd
import numpy as np


def f(x):
    return list(x)

# df = pd.DataFrame({'A': [1, 1, 3], 'B': [1, 2, 4]})
# result = df.groupby('A').aggregate(f)


# df = pd.DataFrame({'A': [1, 1, 3], 'B': [1, 2, 4]})
# result = df.groupby('A').aggregate(list)
# result = df.groupby('A').agg(list)

df = pd.DataFrame({'A': [1, 1, 3], 'B': [1, 1, 4], 'C': [1, 3, 4]})
# result = df.groupby(['A', 'B']).aggregate(pd.Series)


# df = pd.DataFrame({'A': [1, 1, 1, 3, 3, 3],
#                    'B': [1, 1, 1, 4, 4, 4], 'C': [1, 1, 1, 3, 4, 4]})

# print('series')
result = df.groupby('A')['C'].aggregate(np.array)
# print(result)

result = df.groupby(['A', 'B']).aggregate(np.array)
# print(result)

# result = df.groupby('A')['C'].aggregate(list)
# print(result)


def f(x):  # redefined: the array-based variant used below
    return np.array(x)


print('array')
result = df.groupby(['A', 'B']).aggregate(f)
print(result)

# result = df.groupby('A')['C'].aggregate(tuple)
# expected = pd.Series([(1, 1, 1), (3, 4, 4)], index=[1, 3], name='C')
# expected.index.name = 'A'

0 commit comments
