DOC: Fix typos #61580

Merged
merged 2 commits on Jun 6, 2025
2 changes: 1 addition & 1 deletion doc/source/user_guide/reshaping.rst
@@ -395,7 +395,7 @@ variables and the values representing the presence of those variables per row.
pd.get_dummies(df["key"])
df["key"].str.get_dummies()

-``prefix`` adds a prefix to the the column names which is useful for merging the result
+``prefix`` adds a prefix to the column names which is useful for merging the result
with the original :class:`DataFrame`:

.. ipython:: python
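The hunk above touches the ``get_dummies`` ``prefix`` documentation; a minimal sketch of what that keyword does (the ``df`` used here is illustrative, not the guide's actual dataset):

```python
import pandas as pd

df = pd.DataFrame({"key": list("bbacab"), "data1": range(6)})

# Without a prefix the indicator columns are named a, b, c, which can
# collide with existing columns when joining back onto df
pd.get_dummies(df["key"])

# prefix="key" yields key_a, key_b, key_c, safe to merge with df
prefixed = pd.get_dummies(df["key"], prefix="key")
result = df[["data1"]].join(prefixed)
print(list(result.columns))  # ['data1', 'key_a', 'key_b', 'key_c']
```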
2 changes: 1 addition & 1 deletion doc/source/user_guide/user_defined_functions.rst
@@ -319,7 +319,7 @@ to the original data.

In the example, the ``warm_up_all_days`` function computes the ``max`` like an aggregation, but instead
of returning just the maximum value, it returns a ``DataFrame`` with the same shape as the original one
-with the values of each day replaced by the the maximum temperature of the city.
+with the values of each day replaced by the maximum temperature of the city.

``transform`` is also available for :meth:`SeriesGroupBy.transform`, :meth:`DataFrameGroupBy.transform` and
:meth:`Resampler.transform`, where it's more common. You can read more about ``transform`` in groupby
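The corrected sentence describes ``transform`` broadcasting a group-wise ``max`` back to the original shape; a small sketch (the city/temperature data is illustrative, not the guide's actual example):

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["NYC", "NYC", "SF", "SF"],
    "temp": [70, 80, 60, 65],
})

# An aggregation would collapse each city to a single max; transform
# keeps the original shape, repeating each city's max on all its rows
out = df.groupby("city")["temp"].transform("max")
print(out.tolist())  # [80, 80, 65, 65]
```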
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v1.1.0.rst
@@ -1039,7 +1039,7 @@ Missing
^^^^^^^
- Calling :meth:`fillna` on an empty :class:`Series` now correctly returns a shallow copied object. The behaviour is now consistent with :class:`Index`, :class:`DataFrame` and a non-empty :class:`Series` (:issue:`32543`).
- Bug in :meth:`Series.replace` when argument ``to_replace`` is of type dict/list and is used on a :class:`Series` containing ``<NA>`` was raising a ``TypeError``. The method now handles this by ignoring ``<NA>`` values when doing the comparison for the replacement (:issue:`32621`)
--  Bug in :meth:`~Series.any` and :meth:`~Series.all` incorrectly returning ``<NA>`` for all ``False`` or all ``True`` values using the nulllable Boolean dtype and with ``skipna=False`` (:issue:`33253`)
+-  Bug in :meth:`~Series.any` and :meth:`~Series.all` incorrectly returning ``<NA>`` for all ``False`` or all ``True`` values using the nullable Boolean dtype and with ``skipna=False`` (:issue:`33253`)
- Clarified documentation on interpolate with ``method=akima``. The ``der`` parameter must be scalar or ``None`` (:issue:`33426`)
- :meth:`DataFrame.interpolate` uses the correct axis convention now. Previously interpolating along columns lead to interpolation along indices and vice versa. Furthermore interpolating with methods ``pad``, ``ffill``, ``bfill`` and ``backfill`` are identical to using these methods with :meth:`DataFrame.fillna` (:issue:`12918`, :issue:`29146`)
- Bug in :meth:`DataFrame.interpolate` when called on a :class:`DataFrame` with column names of string type was throwing a ValueError. The method is now independent of the type of the column names (:issue:`33956`)
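A quick sketch of the behavior the corrected whatsnew entry refers to (the fix landed in pandas 1.1; with an older version the calls below would return ``<NA>`` instead):

```python
import pandas as pd

# Nullable boolean dtype, no missing values present
all_true = pd.Series([True, True], dtype="boolean")
all_false = pd.Series([False, False], dtype="boolean")

# Before the fix (GH 33253) these incorrectly returned <NA> when
# skipna=False; they now return plain True/False
print(all_true.all(skipna=False))   # True
print(all_false.any(skipna=False))  # False
```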
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v3.0.0.rst
@@ -833,7 +833,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrameGroupby.transform` and :meth:`SeriesGroupby.transform` with a reducer and ``observed=False`` that coerces dtype to float when there are unobserved categories. (:issue:`55326`)
- Bug in :meth:`Rolling.apply` for ``method="table"`` where column order was not being respected due to the columns getting sorted by default. (:issue:`59666`)
- Bug in :meth:`Rolling.apply` where the applied function could be called on fewer than ``min_period`` periods if ``method="table"``. (:issue:`58868`)
-- Bug in :meth:`Series.resample` could raise when the the date range ended shortly before a non-existent time. (:issue:`58380`)
+- Bug in :meth:`Series.resample` could raise when the date range ended shortly before a non-existent time. (:issue:`58380`)

Reshaping
^^^^^^^^^
2 changes: 1 addition & 1 deletion pandas/_libs/tslibs/conversion.pyx
@@ -797,7 +797,7 @@ cdef int64_t parse_pydatetime(
dts : *npy_datetimestruct
Needed to use in pydatetime_to_dt64, which writes to it.
creso : NPY_DATETIMEUNIT
-        Resolution to store the the result.
+        Resolution to store the result.

Raises
------
2 changes: 1 addition & 1 deletion pandas/core/arrays/categorical.py
@@ -1666,7 +1666,7 @@ def __array__(
Parameters
----------
dtype : np.dtype or None
-            Specifies the the dtype for the array.
+            Specifies the dtype for the array.

copy : bool or None, optional
See :func:`numpy.asarray`.
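The docstring fixed here documents the ``dtype`` parameter of ``Categorical.__array__``; a small sketch of how ``np.asarray`` exercises it:

```python
import numpy as np
import pandas as pd

cat = pd.Categorical(["a", "b", "a"])

# With dtype=None, __array__ returns the category values as an
# object-dtype ndarray
arr = np.asarray(cat)
print(arr.dtype, arr.tolist())  # object ['a', 'b', 'a']

# An explicit dtype coerces the result, e.g. to 1-char unicode
print(np.asarray(cat, dtype="U1").dtype)  # <U1
```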
2 changes: 1 addition & 1 deletion pandas/tests/io/xml/test_to_xml.py
@@ -1345,7 +1345,7 @@ def test_ea_dtypes(any_numeric_ea_dtype, parser):
assert equalize_decl(result).strip() == expected


-def test_unsuported_compression(parser, geom_df):
+def test_unsupported_compression(parser, geom_df):
with pytest.raises(ValueError, match="Unrecognized compression type"):
with tm.ensure_clean() as path:
geom_df.to_xml(path, parser=parser, compression="7z")
2 changes: 1 addition & 1 deletion pandas/tests/io/xml/test_xml.py
@@ -1961,7 +1961,7 @@ def test_wrong_compression(parser, compression, compression_only):
read_xml(path, parser=parser, compression=attempted_compression)


-def test_unsuported_compression(parser):
+def test_unsupported_compression(parser):
with pytest.raises(ValueError, match="Unrecognized compression type"):
with tm.ensure_clean() as path:
read_xml(path, parser=parser, compression="7z")
2 changes: 1 addition & 1 deletion web/pandas/pdeps/0007-copy-on-write.md
@@ -525,7 +525,7 @@ following cases:
* Selecting a single column (as a Series) out of a DataFrame is always a view
(``df['a']``)
* Slicing columns from a DataFrame creating a subset DataFrame (``df[['a':'b']]`` or
-  ``df.loc[:, 'a': 'b']``) is a view _if_ the the original DataFrame consists of a
+  ``df.loc[:, 'a': 'b']``) is a view _if_ the original DataFrame consists of a
single block (single dtype, consolidated) and _if_ you are slicing (so not a list
selection). In all other cases, getting a subset is always a copy.
* Selecting rows _can_ return a view, when the row indexer is a `slice` object.
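The PDEP list above notes that a column-list selection is always a copy; a behavioral sketch of the user-visible consequence (this particular ``df`` is illustrative, and writes into a copy never propagate back whether or not Copy-on-Write is enabled):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# List selection is a copy under both the old rules and CoW, so
# writing into the subset never reaches the parent DataFrame
# (pre-CoW pandas may emit a SettingWithCopyWarning here)
sub = df[["a"]]
sub.loc[0, "a"] = 100

print(df.loc[0, "a"])   # 1
print(sub.loc[0, "a"])  # 100
```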
4 changes: 2 additions & 2 deletions web/pandas/pdeps/0014-string-dtype.md
@@ -220,8 +220,8 @@ in pandas 2.3 and removed in pandas 3.0.

The `storage` keyword of `StringDtype` is kept to disambiguate the underlying
storage of the string data (using pyarrow or python objects), but an additional
-`na_value` is introduced to disambiguate the the variants using NA semantics
-and NaN semantics.
+`na_value` is introduced to disambiguate the variants using NA semantics and
+NaN semantics.

Overview of the different ways to specify a dtype and the resulting concrete
dtype of the data: