What’s new in 1.3.0 (July 2, 2021)¶
These are the changes in pandas 1.3.0. See Release notes for a full changelog including other versions of pandas.
Warning
When reading new Excel 2007+ (.xlsx) files, the default argument engine=None to read_excel() will now result in using the openpyxl engine in all cases when the option io.excel.xlsx.reader is set to "auto". Previously, some cases would use the xlrd engine instead. See What’s new 1.2.0 for background on this change.
Enhancements¶
Custom HTTP(s) headers when reading csv or json files¶
When reading from a remote URL that is not handled by fsspec (e.g. HTTP and HTTPS), the dictionary passed to storage_options will be used to create the headers included in the request. This can be used to control the User-Agent header or send other custom headers (GH36688).
For example:
In [1]: headers = {"User-Agent": "pandas"}

In [2]: df = pd.read_csv(
   ...:     "https://download.bls.gov/pub/time.series/cu/cu.item",
   ...:     sep="\t",
   ...:     storage_options=headers,
   ...: )
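To see the mechanism in action without depending on an external site, here is a minimal sketch (not part of the release notes; the header value and CSV body are invented for the demo). It starts a throwaway local server, reads a CSV from it with a custom header, and prints the header pandas sent:

import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import pandas as pd

received = {}  # headers the server saw, for inspection

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        received.update(self.headers.items())  # capture request headers
        body = b"a,b\n1,2\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/csv")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 -> any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/data.csv"
df = pd.read_csv(url, storage_options={"User-Agent": "pandas-demo"})
server.shutdown()

print(df)
print(received.get("User-Agent"))  # "pandas-demo"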
Read and write XML documents¶
We added I/O support to read and render shallow versions of XML documents with read_xml() and DataFrame.to_xml(). Using lxml as the parser, both XPath 1.0 and XSLT 1.0 are available (GH27554).
In [1]: xml = """<?xml version='1.0' encoding='utf-8'?>
...: <data>
...: <row>
...: <shape>square</shape>
...: <degrees>360</degrees>
...: <sides>4.0</sides>
...: </row>
...: <row>
...: <shape>circle</shape>
...: <degrees>360</degrees>
...: <sides/>
...: </row>
...: <row>
...: <shape>triangle</shape>
...: <degrees>180</degrees>
...: <sides>3.0</sides>
...: </row>
...: </data>"""
In [2]: df = pd.read_xml(xml)
In [3]: df
Out[3]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
In [4]: df.to_xml()
Out[4]:
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
For more, see Writing XML in the user guide on IO tools.
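read_xml() also takes an xpath keyword to choose which nodes become rows (the default is "./*"). A small sketch reusing the xml string above; the predicate is an arbitrary illustration and needs lxml installed:

# Keep only <row> elements whose <shape> child is not "circle".
df_no_circle = pd.read_xml(xml, xpath="//row[shape != 'circle']")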
Styler enhancements¶
We provided some focused development on Styler. See also the Styler documentation, which has been revised and improved (GH39720, GH39317, GH40493).
- The method Styler.set_table_styles() can now accept more natural CSS language for arguments, such as 'color:red;' instead of [('color', 'red')] (GH39563); see the sketch after this list
- The methods Styler.highlight_null(), Styler.highlight_min(), and Styler.highlight_max() now allow custom CSS highlighting instead of the default background coloring (GH40242)
- Styler.apply() now accepts functions that return an ndarray when axis=None, making it consistent with the axis=0 and axis=1 behavior (GH39359)
- When incorrectly formatted CSS is given via Styler.apply() or Styler.applymap(), an error is now raised upon rendering (GH39660)
- Styler.format() now accepts the keyword argument escape for optional HTML and LaTeX escaping (GH40388, GH41619)
- Styler.background_gradient() has gained the argument gmap to supply a specific gradient map for shading (GH22727)
- Styler.clear() now clears Styler.hidden_index and Styler.hidden_columns as well (GH40484)
- Added the method Styler.highlight_between() (GH39821)
- Added the method Styler.highlight_quantile() (GH40926)
- Added the method Styler.text_gradient() (GH41098)
- Added the method Styler.set_tooltips() to allow hover tooltips; this can be used to enhance interactive displays (GH21266, GH40284)
- Added the parameter precision to the method Styler.format() to control the display of floating point numbers (GH40134)
- Styler rendered HTML output now follows the w3 HTML Style Guide (GH39626)
- Many features of the Styler class are now either partially or fully usable on a DataFrame with non-unique indexes or columns (GH41143)
- One has greater control of the display through separate sparsification of the index or columns using the new styler options, which are also usable via option_context() (GH41142)
- Added the option styler.render.max_elements to avoid browser overload when styling large DataFrames (GH40712)
- Added the method Styler.to_latex() (GH21673, GH42320), which also allows some limited CSS conversion (GH40731)
- Added the method Styler.to_html() (GH13379)
- Added the method Styler.set_sticky() to make index and column headers permanently visible in scrolling HTML frames (GH29072)
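To make a couple of these concrete, here is a minimal sketch (the frame and CSS values are invented; requires jinja2) combining the new CSS-string syntax for Styler.set_table_styles() with the new Styler.highlight_between() and Styler.to_html():

import pandas as pd

df = pd.DataFrame({"x": [0.1, 0.9, 0.4], "y": [2.0, 3.5, 1.0]})

styler = (
    df.style
    # props can now be a plain CSS string instead of a list of tuples
    .set_table_styles([{"selector": "th", "props": "color: red; font-weight: bold;"}])
    # new in 1.3.0: highlight cells falling inside a closed interval
    .highlight_between(left=0.3, right=2.0, color="yellow")
)
html = styler.to_html()  # Styler.to_html() is itself new in 1.3.0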
DataFrame constructor honors copy=False with dict¶
When passing a dictionary to DataFrame with copy=False, a copy will no longer be made (GH32960).
In [3]: arr = np.array([1, 2, 3])
In [4]: df = pd.DataFrame({"A": arr, "B": arr.copy()}, copy=False)
In [5]: df
Out[5]:
A B
0 1 1
1 2 2
2 3 3
df["A"]
remains a view on arr
:
In [6]: arr[0] = 0
In [7]: assert df.iloc[0, 0] == 0
The default behavior when not passing copy will remain unchanged, i.e. a copy will be made.
PyArrow backed string data type¶
We’ve enhanced the StringDtype, an extension type dedicated to string data (GH39908).
It is now possible to specify a storage keyword option to StringDtype. Use pandas options or specify the dtype using dtype='string[pyarrow]' to allow the StringArray to be backed by a PyArrow array instead of a NumPy array of Python objects. The PyArrow backed StringArray requires pyarrow 1.0.0 or greater to be installed.
Warning
string[pyarrow] is currently considered experimental. The implementation and parts of the API may change without warning.
In [8]: pd.Series(['abc', None, 'def'], dtype=pd.StringDtype(storage="pyarrow"))
Out[8]:
0     abc
1    <NA>
2     def
dtype: string
You can use the alias "string[pyarrow]" as well.
In [9]: s = pd.Series(['abc', None, 'def'], dtype="string[pyarrow]")

In [10]: s
Out[10]:
0     abc
1    <NA>
2     def
dtype: string
You can also create a PyArrow backed string array using pandas options.
In [11]: with pd.option_context("string_storage", "pyarrow"):
   ....:     s = pd.Series(['abc', None, 'def'], dtype="string")
   ....:

In [12]: s
Out[12]:
0     abc
1    <NA>
2     def
dtype: string
The usual string accessor methods work. Where appropriate, the return type of the Series or columns of a DataFrame will also have string dtype.
In [13]: s.str.upper()
Out[13]:
0     ABC
1    <NA>
2     DEF
dtype: string

In [14]: s.str.split('b', expand=True).dtypes
Out[14]:
0    string
1    string
dtype: object
String accessor methods returning integers will return a value with Int64Dtype:
In [15]: s.str.count("a")
Out[15]:
0       1
1    <NA>
2       0
dtype: Int64
Centered datetime-like rolling windows¶
When performing rolling calculations on DataFrame and Series objects with a datetime-like index, a centered datetime-like window can now be used (GH38780). For example:
In [16]: df = pd.DataFrame(
....: {"A": [0, 1, 2, 3, 4]}, index=pd.date_range("2020", periods=5, freq="1D")
....: )
....:
In [17]: df
Out[17]:
A
2020-01-01 0
2020-01-02 1
2020-01-03 2
2020-01-04 3
2020-01-05 4
In [18]: df.rolling("2D", center=True).mean()
Out[18]:
A
2020-01-01 0.5
2020-01-02 1.5
2020-01-03 2.5
2020-01-04 3.5
2020-01-05 4.0
Other enhancements¶
- DataFrame.rolling(), Series.rolling(), DataFrame.expanding(), and Series.expanding() now support a method argument with a 'table' option that performs the windowing operation over an entire DataFrame. See Window Overview for performance and functional benefits (GH15095, GH38995)
- ExponentialMovingWindow now supports an online method that can perform mean calculations in an online fashion. See Window Overview (GH41673)
- Added MultiIndex.dtypes() (GH37062)
- Added end and end_day options for the origin argument in DataFrame.resample() (GH37804)
- Improved error message when usecols and names do not match for read_csv() and engine="c" (GH29042)
- Improved consistency of error messages when passing an invalid win_type argument in Window methods (GH15969)
- read_sql_query() now accepts a dtype argument to cast the columnar data from the SQL database based on user input (GH10285)
- read_csv() now raises a ParserWarning if the length of the header or the given names does not match the length of the data when usecols is not specified (GH21768)
- Improved integer type mapping from pandas to SQLAlchemy when using DataFrame.to_sql() (GH35076)
- to_numeric() now supports downcasting of nullable ExtensionDtype objects (GH33013)
- Added support for dict-like names in MultiIndex.set_names and MultiIndex.rename (GH20421)
- read_excel() can now auto-detect .xlsb files and older .xls files (GH35416, GH41225)
- ExcelWriter now accepts an if_sheet_exists parameter to control the behavior of append mode when writing to existing sheets (GH40230)
- Rolling.sum(), Expanding.sum(), Rolling.mean(), Expanding.mean(), ExponentialMovingWindow.mean(), Rolling.median(), Expanding.median(), Rolling.max(), Expanding.max(), Rolling.min(), and Expanding.min() now support Numba execution with the engine keyword (GH38895, GH41267)
- DataFrame.apply() can now accept NumPy unary operators as strings, e.g. df.apply("sqrt"), which was already the case for Series.apply() (GH39116)
- DataFrame.apply() can now accept non-callable DataFrame properties as strings, e.g. df.apply("size"), which was already the case for Series.apply() (GH39116)
- DataFrame.applymap() can now accept kwargs to pass on to the user-provided func (GH39987)
- Passing a DataFrame indexer to iloc is now disallowed for Series.__getitem__() and DataFrame.__getitem__() (GH39004)
- Series.apply() can now accept list-like or dictionary-like arguments that aren’t lists or dictionaries, e.g. ser.apply(np.array(["sum", "mean"])), which was already the case for DataFrame.apply() (GH39140)
- DataFrame.plot.scatter() can now accept a categorical column for the argument c (GH12380, GH31357)
- Series.loc() now raises a helpful error message when the Series has a MultiIndex and the indexer has too many dimensions (GH35349)
- read_stata() now supports reading data from compressed files (GH26599)
- Added support for parsing ISO 8601-like timestamps with negative signs to Timedelta (GH37172)
- Added support for unary operators in FloatingArray (GH38749)
- RangeIndex can now be constructed by passing a range object directly, e.g. pd.RangeIndex(range(3)) (GH12067)
- Series.round() and DataFrame.round() now work with nullable integer and floating dtypes (GH38844)
- read_csv() and read_json() expose the argument encoding_errors to control how encoding errors are handled (GH39450)
- GroupBy.any() and GroupBy.all() use Kleene logic with nullable data types (GH37506)
- GroupBy.any() and GroupBy.all() return a BooleanDtype for columns with nullable data types (GH33449)
- GroupBy.any() and GroupBy.all() no longer raise with object data containing pd.NA even when skipna=True (GH37501)
- GroupBy.rank() now supports object-dtype data (GH38278)
- Constructing a DataFrame or Series with the data argument being a Python iterable that is not a NumPy ndarray consisting of NumPy scalars will now result in a dtype with a precision the maximum of the NumPy scalars; this was already the case when data is a NumPy ndarray (GH40908)
- Added keyword sort to pivot_table() to allow non-sorting of the result (GH39143)
- Added keyword dropna to DataFrame.value_counts() to allow counting rows that include NA values (GH41325)
- Series.replace() will now cast results to PeriodDtype where possible instead of object dtype (GH41526)
- Improved error message in corr and cov methods on Rolling, Expanding, and ExponentialMovingWindow when other is not a DataFrame or Series (GH41741)
- Series.between() can now accept left or right as arguments to inclusive to include only the left or right boundary (GH40245); see the sketch after this list
- DataFrame.explode() now supports exploding multiple columns. Its column argument now also accepts a list of str or tuples for exploding on multiple columns at the same time (GH39240)
- DataFrame.sample() now accepts the ignore_index argument to reset the index after sampling, similar to DataFrame.drop_duplicates() and DataFrame.sort_values() (GH38581)
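A few of the list items above lend themselves to a one-screen sketch (the data is invented for illustration):

import pandas as pd

ser = pd.Series([1, 2, 3, 4])
ser.between(1, 3, inclusive="left")  # include only the left boundary (GH40245)

df = pd.DataFrame({"a": [[1, 2], [3]], "b": [["x", "y"], ["z"]]})
df.explode(["a", "b"])  # explode several columns at once (GH39240)

df.sample(n=1, ignore_index=True)  # sampled rows get a fresh RangeIndex (GH38581)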
Notable bug fixes¶
These are bug fixes that might have notable behavior changes.
Categorical.unique now always maintains same dtype as original¶
Previously, when calling Categorical.unique() with categorical data, unused categories in the new array would be removed, making the dtype of the new array different than the original (GH18291).
As an example of this, given:
In [19]: dtype = pd.CategoricalDtype(['bad', 'neutral', 'good'], ordered=True)
In [20]: cat = pd.Categorical(['good', 'good', 'bad', 'bad'], dtype=dtype)
In [21]: original = pd.Series(cat)
In [22]: unique = original.unique()
Previous behavior:
In [1]: unique
['good', 'bad']
Categories (2, object): ['bad' < 'good']
In [2]: original.dtype == unique.dtype
False
New behavior:
In [23]: unique
Out[23]:
['good', 'bad']
Categories (3, object): ['bad' < 'neutral' < 'good']
In [24]: original.dtype == unique.dtype
Out[24]: True
Preserve dtypes in DataFrame.combine_first()¶
DataFrame.combine_first() will now preserve dtypes (GH7509).
In [25]: df1 = pd.DataFrame({"A": [1, 2, 3], "B": [1, 2, 3]}, index=[0, 1, 2])
In [26]: df1
Out[26]:
A B
0 1 1
1 2 2
2 3 3
In [27]: df2 = pd.DataFrame({"B": [4, 5, 6], "C": [1, 2, 3]}, index=[2, 3, 4])
In [28]: df2
Out[28]:
B C
2 4 1
3 5 2
4 6 3
In [29]: combined = df1.combine_first(df2)
Previous behavior:
In [1]: combined.dtypes
Out[1]:
A float64
B float64
C float64
dtype: object
New behavior:
In [30]: combined.dtypes
Out[30]:
A float64
B int64
C float64
dtype: object
Groupby methods agg and transform no longer change return dtype for callables¶
Previously the methods DataFrameGroupBy.aggregate(), SeriesGroupBy.aggregate(), DataFrameGroupBy.transform(), and SeriesGroupBy.transform() might cast the result dtype when the argument func is callable, possibly leading to undesirable results (GH21240). The cast would occur if the result is numeric and casting back to the input dtype does not change any values as measured by np.allclose. Now no such casting occurs.
In [31]: df = pd.DataFrame({'key': [1, 1], 'a': [True, False], 'b': [True, True]})
In [32]: df
Out[32]:
key a b
0 1 True True
1 1 False True
Previous behavior:
In [5]: df.groupby('key').agg(lambda x: x.sum())
Out[5]:
a b
key
1 True 2
New behavior:
In [33]: df.groupby('key').agg(lambda x: x.sum())
Out[33]:
a b
key
1 1 2
float result for GroupBy.mean(), GroupBy.median(), and GroupBy.var()¶
Previously, these methods could result in different dtypes depending on the input values. Now, these methods will always return a float dtype (GH41137).
In [34]: df = pd.DataFrame({'a': [True], 'b': [1], 'c': [1.0]})
Previous behavior:
In [5]: df.groupby(df.index).mean()
Out[5]:
a b c
0 True 1 1.0
New behavior:
In [35]: df.groupby(df.index).mean()
Out[35]:
a b c
0 1.0 1.0 1.0
Try operating inplace when setting values with loc and iloc¶
When setting an entire column using loc or iloc, pandas will try to insert the values into the existing data rather than create an entirely new array.
In [36]: df = pd.DataFrame(range(3), columns=["A"], dtype="float64")
In [37]: values = df.values
In [38]: new = np.array([5, 6, 7], dtype="int64")
In [39]: df.loc[[0, 1, 2], "A"] = new
In both the new and old behavior, the data in values is overwritten, but in the old behavior the dtype of df["A"] changed to int64.
Previous behavior:
In [1]: df.dtypes
Out[1]:
A int64
dtype: object
In [2]: np.shares_memory(df["A"].values, new)
Out[2]: False
In [3]: np.shares_memory(df["A"].values, values)
Out[3]: False
In pandas 1.3.0, df continues to share data with values.
New behavior:
In [40]: df.dtypes
Out[40]:
A float64
dtype: object
In [41]: np.shares_memory(df["A"], new)
Out[41]: False
In [42]: np.shares_memory(df["A"], values)
Out[42]: True
Never operate inplace when setting frame[keys] = values¶
When setting multiple columns using frame[keys] = values, new arrays will replace the pre-existing arrays for these keys, which will not be over-written (GH39510). As a result, the columns will retain the dtype(s) of values, never casting to the dtypes of the existing arrays.
In [43]: df = pd.DataFrame(range(3), columns=["A"], dtype="float64")
In [44]: df[["A"]] = 5
In the old behavior, 5 was cast to float64 and inserted into the existing array backing df:
Previous behavior:
In [1]: df.dtypes
Out[1]:
A float64
In the new behavior, we get a new array, and retain an integer-dtyped 5:
New behavior:
In [45]: df.dtypes
Out[45]:
A int64
dtype: object
Consistent casting with setting into Boolean Series¶
Setting non-boolean values into a Series with dtype=bool now consistently casts to dtype=object (GH38709).
In [46]: orig = pd.Series([True, False])
In [47]: ser = orig.copy()
In [48]: ser.iloc[1] = np.nan
In [49]: ser2 = orig.copy()
In [50]: ser2.iloc[1] = 2.0
Previous behavior:
In [1]: ser
Out [1]:
0 1.0
1 NaN
dtype: float64
In [2]: ser2
Out [2]:
0 True
1 2.0
dtype: object
New behavior:
In [51]: ser
Out[51]:
0 True
1 NaN
dtype: object
In [52]: ser2
Out[52]:
0 True
1 2.0
dtype: object
GroupBy.rolling no longer returns grouped-by column in values¶
The group-by column will now be dropped from the result of a groupby.rolling operation (GH32262).
In [53]: df = pd.DataFrame({"A": [1, 1, 2, 3], "B": [0, 1, 2, 3]})
In [54]: df
Out[54]:
A B
0 1 0
1 1 1
2 2 2
3 3 3
Previous behavior:
In [1]: df.groupby("A").rolling(2).sum()
Out[1]:
A B
A
1 0 NaN NaN
1 2.0 1.0
2 2 NaN NaN
3 3 NaN NaN
New behavior:
In [55]: df.groupby("A").rolling(2).sum()
Out[55]:
B
A
1 0 NaN
1 1.0
2 2 NaN
3 3 NaN
Removed artificial truncation in rolling variance and standard deviation¶
Rolling.std() and Rolling.var() will no longer artificially truncate results that are less than ~1e-8 and ~1e-15 respectively to zero (GH37051, GH40448, GH39872). However, floating point artifacts may now exist in the results when rolling over larger values.
In [56]: s = pd.Series([7, 5, 5, 5])
In [57]: s.rolling(3).var()
Out[57]:
0 NaN
1 NaN
2 1.333333
3 0.000000
dtype: float64
GroupBy.rolling with MultiIndex no longer drops levels in the result¶
GroupBy.rolling() will no longer drop levels of a DataFrame with a MultiIndex in the result. This can lead to a perceived duplication of levels in the resulting MultiIndex, but this change restores the behavior that was present in version 1.1.3 (GH38787, GH38523).
In [58]: index = pd.MultiIndex.from_tuples([('idx1', 'idx2')], names=['label1', 'label2'])
In [59]: df = pd.DataFrame({'a': [1], 'b': [2]}, index=index)
In [60]: df
Out[60]:
a b
label1 label2
idx1 idx2 1 2
Previous behavior:
In [1]: df.groupby('label1').rolling(1).sum()
Out[1]:
a b
label1
idx1 1.0 2.0
New behavior:
In [61]: df.groupby('label1').rolling(1).sum()
Out[61]:
a b
label1 label1 label2
idx1 idx1 idx2 1.0 2.0
Backwards incompatible API changes¶
Increased minimum versions for dependencies¶
Some minimum supported versions of dependencies were updated. If installed, we now require:
| Package         | Minimum Version | Required | Changed |
|-----------------|-----------------|----------|---------|
| numpy           | 1.17.3          | X        | X       |
| pytz            | 2017.3          | X        |         |
| python-dateutil | 2.7.3           | X        |         |
| bottleneck      | 1.2.1           |          |         |
| numexpr         | 2.7.0           |          | X       |
| pytest (dev)    | 6.0             |          | X       |
| mypy (dev)      | 0.812           |          | X       |
| setuptools      | 38.6.0          |          | X       |
For optional libraries the general recommendation is to use the latest version. The following table lists the lowest version per library that is currently being tested throughout the development of pandas. Optional libraries below the lowest tested version may still work, but are not considered supported.
| Package        | Minimum Version | Changed |
|----------------|-----------------|---------|
| beautifulsoup4 | 4.6.0           |         |
| fastparquet    | 0.4.0           | X       |
| fsspec         | 0.7.4           |         |
| gcsfs          | 0.6.0           |         |
| lxml           | 4.3.0           |         |
| matplotlib     | 2.2.3           |         |
| numba          | 0.46.0          |         |
| openpyxl       | 3.0.0           | X       |
| pyarrow        | 0.17.0          | X       |
| pymysql        | 0.8.1           | X       |
| pytables       | 3.5.1           |         |
| s3fs           | 0.4.0           |         |
| scipy          | 1.2.0           |         |
| sqlalchemy     | 1.3.0           | X       |
| tabulate       | 0.8.7           | X       |
| xarray         | 0.12.0          |         |
| xlrd           | 1.2.0           |         |
| xlsxwriter     | 1.0.2           |         |
| xlwt           | 1.3.0           |         |
| pandas-gbq     | 0.12.0          |         |
See Dependencies and Optional dependencies for more.
Other API changes¶
- Partially initialized CategoricalDtype objects (i.e. those with categories=None) will no longer compare as equal to fully initialized dtype objects (GH38516)
- Accessing _constructor_expanddim on a DataFrame and _constructor_sliced on a Series now raise an AttributeError. Previously a NotImplementedError was raised (GH38782)
- Added new engine and **engine_kwargs parameters to DataFrame.to_sql() to support other future "SQL engines". Currently we still only use SQLAlchemy under the hood, but more engines are planned to be supported such as turbodbc (GH36893)
- Removed redundant freq from PeriodIndex string representation (GH41653)
- ExtensionDtype.construct_array_type() is now a required method instead of an optional one for ExtensionDtype subclasses (GH24860)
- Calling hash on non-hashable pandas objects will now raise TypeError with the built-in error message (e.g. unhashable type: 'Series'). Previously it would raise a custom message such as 'Series' objects are mutable, thus they cannot be hashed. Furthermore, isinstance(<Series>, collections.abc.Hashable) will now return False (GH40013); see the sketch after this list
- Styler.from_custom_template() now has two new arguments for template names, and removed the old name, due to template inheritance having been introduced for better parsing (GH42053). Subclassing modifications to Styler attributes are also needed.
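A quick sketch of the new hashing behavior (a small Series invented for the demo):

import collections.abc

import pandas as pd

s = pd.Series([1, 2, 3])
print(isinstance(s, collections.abc.Hashable))  # now False

try:
    hash(s)
except TypeError as err:
    print(err)  # unhashable type: 'Series'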
Build¶
Documentation in .pptx and .pdf formats is no longer included in wheels or source distributions (GH30741).
Deprecations¶
Deprecated dropping nuisance columns in DataFrame reductions and DataFrameGroupBy operations¶
When calling a reduction (e.g. .min, .max, .sum) on a DataFrame with numeric_only=None (the default), columns where the reduction raises a TypeError are silently ignored and dropped from the result.
This behavior is deprecated. In a future version, the TypeError will be raised, and users will need to select only valid columns before calling the function.
For example:
In [62]: df = pd.DataFrame({"A": [1, 2, 3, 4], "B": pd.date_range("2016-01-01", periods=4)})
In [63]: df
Out[63]:
A B
0 1 2016-01-01
1 2 2016-01-02
2 3 2016-01-03
3 4 2016-01-04
Old behavior:
In [3]: df.prod()
Out[3]:
A    24
dtype: int64
Future behavior:
In [4]: df.prod()
...
TypeError: 'DatetimeArray' does not implement reduction 'prod'
In [5]: df[["A"]].prod()
Out[5]:
A 24
dtype: int64
Similarly, when applying a function to DataFrameGroupBy, columns on which the function raises TypeError are currently silently ignored and dropped from the result.
This behavior is deprecated. In a future version, the TypeError will be raised, and users will need to select only valid columns before calling the function.
For example:
In [64]: df = pd.DataFrame({"A": [1, 2, 3, 4], "B": pd.date_range("2016-01-01", periods=4)})
In [65]: gb = df.groupby([1, 1, 2, 2])
Old behavior:
In [4]: gb.prod(numeric_only=False)
Out[4]:
A
1 2
2 12
Future behavior:
In [5]: gb.prod(numeric_only=False)
...
TypeError: datetime64 type does not support prod operations
In [6]: gb[["A"]].prod(numeric_only=False)
Out[6]:
A
1 2
2 12
Other Deprecations¶
- Deprecated allowing scalars to be passed to the Categorical constructor (GH38433)
- Deprecated constructing CategoricalIndex without passing list-like data (GH38944)
- Deprecated allowing subclass-specific keyword arguments in the Index constructor, use the specific subclass directly instead (GH14093, GH21311, GH22315, GH26974)
- Deprecated the astype() method of datetimelike (timedelta64[ns], datetime64[ns], Datetime64TZDtype, PeriodDtype) to convert to integer dtypes, use values.view(...) instead (GH38544). This deprecation was later reverted in pandas 1.4.0.
- Deprecated MultiIndex.is_lexsorted() and MultiIndex.lexsort_depth(), use MultiIndex.is_monotonic_increasing() instead (GH32259)
- Deprecated keyword try_cast in Series.where(), Series.mask(), DataFrame.where(), DataFrame.mask(); cast results manually if desired (GH38836)
- Deprecated comparison of Timestamp objects with datetime.date objects. Instead of e.g. ts <= mydate use ts <= pd.Timestamp(mydate) or ts.date() <= mydate (GH36131)
- Deprecated Rolling.win_type returning "freq" (GH38963)
- Deprecated Rolling.is_datetimelike (GH38963)
- Deprecated DataFrame indexer for Series.__setitem__() and DataFrame.__setitem__() (GH39004)
- Deprecated ExponentialMovingWindow.vol() (GH39220)
- Using .astype to convert between datetime64[ns] dtype and DatetimeTZDtype is deprecated and will raise in a future version, use obj.tz_localize or obj.dt.tz_localize instead (GH38622)
- Deprecated casting datetime.date objects to datetime64 when used as fill_value in DataFrame.unstack(), DataFrame.shift(), Series.shift(), and DataFrame.reindex(), pass pd.Timestamp(dateobj) instead (GH39767)
- Deprecated Styler.set_na_rep() and Styler.set_precision() in favor of Styler.format() with na_rep and precision as existing and new input arguments respectively (GH40134, GH40425)
- Deprecated Styler.where() in favor of using an alternative formulation with Styler.applymap() (GH40821)
- Deprecated allowing partial failure in Series.transform() and DataFrame.transform() when func is list-like or dict-like and raises anything but TypeError; func raising anything but a TypeError will raise in a future version (GH40211)
- Deprecated arguments error_bad_lines and warn_bad_lines in read_csv() and read_table() in favor of the argument on_bad_lines (GH15122); see the sketch after this list
- Deprecated support for np.ma.mrecords.MaskedRecords in the DataFrame constructor, pass {name: data[name] for name in data.dtype.names} instead (GH40363)
- Deprecated using merge(), DataFrame.merge(), and DataFrame.join() on a different number of levels (GH34862)
- Deprecated the use of **kwargs in ExcelWriter; use the keyword argument engine_kwargs instead (GH40430)
- Deprecated the level keyword for DataFrame and Series aggregations; use groupby instead (GH39983)
- Deprecated the inplace parameter of Categorical.remove_categories(), Categorical.add_categories(), Categorical.reorder_categories(), Categorical.rename_categories(), and Categorical.set_categories(); it will be removed in a future version (GH37643)
- Deprecated merge() producing duplicated columns through the suffixes keyword and already existing columns (GH22818)
- Deprecated setting Categorical._codes, create a new Categorical with the desired codes instead (GH40606)
- Deprecated the convert_float optional argument in read_excel() and ExcelFile.parse() (GH41127)
- Deprecated behavior of DatetimeIndex.union() with mixed timezones; in a future version both will be cast to UTC instead of object dtype (GH39328)
- Deprecated using usecols with out of bounds indices for read_csv() with engine="c" (GH25623)
- Deprecated special treatment of lists with the first element a Categorical in the DataFrame constructor; pass as pd.DataFrame({col: categorical, ...}) instead (GH38845)
- Deprecated behavior of the DataFrame constructor when a dtype is passed and the data cannot be cast to that dtype. In a future version, this will raise instead of being silently ignored (GH24435)
- Deprecated the Timestamp.freq attribute. For the properties that use it (is_month_start, is_month_end, is_quarter_start, is_quarter_end, is_year_start, is_year_end), when you have a freq, use e.g. freq.is_month_start(ts) (GH15146)
- Deprecated construction of Series or DataFrame with DatetimeTZDtype data and datetime64[ns] dtype. Use Series(data).dt.tz_localize(None) instead (GH41555, GH33401)
- Deprecated behavior of Series construction with large-integer values and small-integer dtype silently overflowing; use Series(data).astype(dtype) instead (GH41734)
- Deprecated behavior of DataFrame construction with floating data and integer dtype casting even when lossy; in a future version this will remain floating, matching Series behavior (GH41770)
- Deprecated inference of timedelta64[ns], datetime64[ns], or DatetimeTZDtype dtypes in Series construction when data containing strings is passed and no dtype is passed (GH33558)
- In a future version, constructing Series or DataFrame with datetime64[ns] data and DatetimeTZDtype will treat the data as wall-times instead of as UTC times (matching DatetimeIndex behavior). To treat the data as UTC times, use pd.Series(data).dt.tz_localize("UTC").dt.tz_convert(dtype.tz) or pd.Series(data.view("int64"), dtype=dtype) (GH33401)
- Deprecated passing lists as key to DataFrame.xs() and Series.xs() (GH41760)
- Deprecated boolean arguments of inclusive in Series.between() in favor of {"left", "right", "neither", "both"} as standard argument values (GH40628)
- Deprecated passing arguments as positional for all of the following, with exceptions noted (GH41485):
  - concat() (other than objs)
  - read_csv() (other than filepath_or_buffer)
  - read_table() (other than filepath_or_buffer)
  - DataFrame.clip() and Series.clip() (other than upper and lower)
  - DataFrame.drop_duplicates() (except for subset), Series.drop_duplicates(), Index.drop_duplicates() and MultiIndex.drop_duplicates()
  - DataFrame.drop() (other than labels) and Series.drop()
  - DataFrame.ffill(), Series.ffill(), DataFrame.bfill(), and Series.bfill()
  - DataFrame.fillna() and Series.fillna() (apart from value)
  - DataFrame.interpolate() and Series.interpolate() (other than method)
  - DataFrame.mask() and Series.mask() (other than cond and other)
  - DataFrame.reset_index() (other than level) and Series.reset_index()
  - DataFrame.set_axis() and Series.set_axis() (other than labels)
  - DataFrame.set_index() (other than keys)
  - DataFrame.sort_values() (other than by) and Series.sort_values()
  - DataFrame.where() and Series.where() (other than cond and other)
  - Index.set_names() and MultiIndex.set_names() (except for names)
  - MultiIndex.set_codes() (except for codes)
  - MultiIndex.set_levels() (except for levels)
  - Resampler.interpolate() (other than method)
Performance improvements¶
- Performance improvement in IntervalIndex.isin() (GH38353)
- Performance improvement in Series.mean() for nullable data types (GH34814)
- Performance improvement in Series.isin() for nullable data types (GH38340)
- Performance improvement in DataFrame.fillna() with method="pad" or method="backfill" for nullable floating and nullable integer dtypes (GH39953)
- Performance improvement in DataFrame.corr() for method=kendall (GH28329)
- Performance improvement in DataFrame.corr() for method=spearman (GH40956, GH41885)
- Performance improvement in Rolling.corr() and Rolling.cov() (GH39388)
- Performance improvement in RollingGroupby.corr(), ExpandingGroupby.corr(), and ExpandingGroupby.cov() (GH39591)
- Performance improvement in unique() for object data type (GH37615)
- Performance improvement in json_normalize() for basic cases (including separators) (GH40035, GH15621)
- Performance improvement in ExpandingGroupby aggregation methods (GH39664)
- Performance improvement in Styler where render times are more than 50% reduced and now match DataFrame.to_html() (GH39972, GH39952, GH40425)
- The method Styler.set_td_classes() is now as performant as Styler.apply() and Styler.applymap(), and even more so in some cases (GH40453)
- Performance improvement in ExponentialMovingWindow.mean() with times (GH39784)
- Performance improvement in GroupBy.apply() when requiring the Python fallback implementation (GH40176)
- Performance improvement in the conversion of a PyArrow Boolean array to a pandas nullable Boolean array (GH41051)
- Performance improvement for concatenation of data with type CategoricalDtype (GH40193)
- Performance improvement in GroupBy.cummin() and GroupBy.cummax() with nullable data types (GH37493)
- Performance improvement in Series.nunique() with nan values (GH40865)
- Performance improvement in DataFrame.transpose(), Series.unstack() with DatetimeTZDtype (GH40149)
- Performance improvement in Series.plot() and DataFrame.plot() with entry point lazy loading (GH41492)
Bug fixes¶
Categorical¶
- Bug in CategoricalIndex incorrectly failing to raise TypeError when scalar data is passed (GH38614)
- Bug in CategoricalIndex.reindex failing when the Index passed was not categorical but its values were all labels in the category (GH28690)
- Bug where constructing a Categorical from an object-dtype array of date objects did not round-trip correctly with astype (GH38552)
- Bug in constructing a DataFrame from an ndarray and a CategoricalDtype (GH38857)
- Bug in setting categorical values into an object-dtype column in a DataFrame (GH39136)
- Bug in DataFrame.reindex() raising an IndexError when the new index contained duplicates and the old index was a CategoricalIndex (GH38906)
- Bug in Categorical.fillna() with a tuple-like category raising NotImplementedError instead of ValueError when filling with a non-category tuple (GH41914)
Datetimelike¶
- Bug in DataFrame and Series constructors sometimes dropping nanoseconds from Timestamp (resp. Timedelta) data, with dtype=datetime64[ns] (resp. timedelta64[ns]) (GH38032)
- Bug in DataFrame.first() and Series.first() with an offset of one month returning an incorrect result when the first day is the last day of a month (GH29623)
- Bug in constructing a DataFrame or Series with mismatched datetime64 data and timedelta64 dtype, or vice-versa, failing to raise a TypeError (GH38575, GH38764, GH38792)
- Bug in constructing a Series or DataFrame with a datetime object out of bounds for datetime64[ns] dtype or a timedelta object out of bounds for timedelta64[ns] dtype (GH38792, GH38965)
- Bug in DatetimeIndex.intersection(), DatetimeIndex.symmetric_difference(), PeriodIndex.intersection(), PeriodIndex.symmetric_difference() always returning object-dtype when operating with CategoricalIndex (GH38741)
- Bug in DatetimeIndex.intersection() giving incorrect results with non-Tick frequencies with n != 1 (GH42104)
- Bug in Series.where() incorrectly casting datetime64 values to int64 (GH37682)
- Bug in Categorical incorrectly typecasting datetime object to Timestamp (GH38878)
- Bug in comparisons between Timestamp object and datetime64 objects just outside the implementation bounds for nanosecond datetime64 (GH39221)
- Bug in Timestamp.round(), Timestamp.floor(), Timestamp.ceil() for values near the implementation bounds of Timestamp (GH39244)
- Bug in Timedelta.round(), Timedelta.floor(), Timedelta.ceil() for values near the implementation bounds of Timedelta (GH38964)
- Bug in date_range() incorrectly creating DatetimeIndex containing NaT instead of raising OutOfBoundsDatetime in corner cases (GH24124)
- Bug in infer_freq() incorrectly failing to infer 'H' frequency of DatetimeIndex if the latter has a timezone and crosses DST boundaries (GH39556)
- Bug in Series backed by DatetimeArray or TimedeltaArray sometimes failing to set the array’s freq to None (GH41425)
Timedelta¶
- Bug in constructing Timedelta from np.timedelta64 objects with non-nanosecond units that are out of bounds for timedelta64[ns] (GH38965)
- Bug in constructing a TimedeltaIndex incorrectly accepting np.datetime64("NaT") objects (GH39462)
- Bug in constructing Timedelta from an input string with only symbols and no digits failing to raise an error (GH39710)
- Bug in TimedeltaIndex and to_timedelta() failing to raise when passed non-nanosecond timedelta64 arrays that overflow when converting to timedelta64[ns] (GH40008)
Timezones¶
Numeric¶
- Bug in DataFrame.quantile(), DataFrame.sort_values() causing incorrect subsequent indexing behavior (GH38351)
- Bug in DataFrame.sort_values() raising an IndexError for empty by (GH40258)
- Bug in DataFrame.select_dtypes() with include=np.number would drop numeric ExtensionDtype columns (GH35340)
- Bug in DataFrame.mode() and Series.mode() not keeping consistent integer Index for empty input (GH33321)
- Bug in DataFrame.rank() when the DataFrame contained np.inf (GH32593)
- Bug in DataFrame.rank() with axis=0 and columns holding incomparable types raising an IndexError (GH38932)
- Bug in Series.rank(), DataFrame.rank(), and GroupBy.rank() treating the most negative int64 value as missing (GH32859)
- Bug in DataFrame.select_dtypes() different behavior between Windows and Linux with include="int" (GH36596)
- Bug in DataFrame.apply() and DataFrame.agg() when passed the argument func="size" would operate on the entire DataFrame instead of rows or columns (GH39934)
- Bug in DataFrame.transform() would raise a SpecificationError when passed a dictionary and columns were missing; will now raise a KeyError instead (GH40004)
- Bug in GroupBy.rank() giving incorrect results with pct=True and equal values between consecutive groups (GH40518)
- Bug in Series.count() would result in an int32 result on 32-bit platforms when argument level=None (GH40908)
- Bug in Series and DataFrame reductions with methods any and all not returning Boolean results for object data (GH12863, GH35450, GH27709)
- Bug in Series.clip() would fail if the Series contains NA values and has nullable int or float as a data type (GH40851)
- Bug in UInt64Index.where() and UInt64Index.putmask() with an np.int64 dtype other incorrectly raising TypeError (GH41974)
- Bug in DataFrame.agg() not sorting the aggregated axis in the order of the provided aggregation functions when one or more aggregation function fails to produce results (GH33634)
- Bug in DataFrame.clip() not interpreting missing values as no threshold (GH40420)
Conversion¶
- Bug in Series.to_dict() with orient='records' now returns Python native types (GH25969)
- Bug in Series.view() and Index.view() when converting between datetime-like (datetime64[ns], datetime64[ns, tz], timedelta64, period) dtypes (GH39788)
- Bug in creating a DataFrame from an empty np.recarray not retaining the original dtypes (GH40121)
- Bug in DataFrame failing to raise a TypeError when constructing from a frozenset (GH40163)
- Bug in Index construction silently ignoring a passed dtype when the data cannot be cast to that dtype (GH21311)
- Bug in StringArray.astype() falling back to NumPy and raising when converting to dtype='categorical' (GH40450)
- Bug in factorize() where, when given an array with a numeric NumPy dtype lower than int64, uint64 and float64, the unique values did not keep their original dtype (GH41132)
- Bug in DataFrame construction with a dictionary containing an array-like with ExtensionDtype and copy=True failing to make a copy (GH38939)
- Bug in qcut() raising an error when taking Float64DType as input (GH40730)
- Bug in DataFrame and Series construction with datetime64[ns] data and dtype=object resulting in datetime objects instead of Timestamp objects (GH41599)
- Bug in DataFrame and Series construction with timedelta64[ns] data and dtype=object resulting in np.timedelta64 objects instead of Timedelta objects (GH41599)
- Bug in DataFrame construction when given a two-dimensional object-dtype np.ndarray of Period or Interval objects failing to cast to PeriodDtype or IntervalDtype, respectively (GH41812)
- Bug in constructing a Series from a list and a PandasDtype (GH39357)
- Bug in creating a Series from a range object that does not fit in the bounds of int64 dtype (GH30173)
- Bug in creating a Series from a dict with all-tuple keys and an Index that requires reindexing (GH41707)
- Bug in infer_dtype() not recognizing Series, Index, or array with a Period dtype (GH23553)
- Bug in infer_dtype() raising an error for general ExtensionArray objects. It will now return "unknown-array" instead of raising (GH37367)
- Bug in DataFrame.convert_dtypes() incorrectly raised a ValueError when called on an empty DataFrame (GH40393)
Strings¶
- Bug in the conversion from pyarrow.ChunkedArray to StringArray when the original had zero chunks (GH41040)
- Bug in Series.replace() and DataFrame.replace() ignoring replacements with regex=True for StringDType data (GH41333, GH35977)
- Bug in Series.str.extract() with StringArray returning object dtype for an empty DataFrame (GH41441)
- Bug in Series.str.replace() where the case argument was ignored when regex=False (GH41602)
Interval¶
- Bug in IntervalIndex.intersection() and IntervalIndex.symmetric_difference() always returning object-dtype when operating with CategoricalIndex (GH38653, GH38741)
- Bug in IntervalIndex.intersection() returning duplicates when at least one of the Index objects have duplicates which are present in the other (GH38743)
- IntervalIndex.union(), IntervalIndex.intersection(), IntervalIndex.difference(), and IntervalIndex.symmetric_difference() now cast to the appropriate dtype instead of raising a TypeError when operating with another IntervalIndex with incompatible dtype (GH39267)
- PeriodIndex.union(), PeriodIndex.intersection(), PeriodIndex.symmetric_difference(), PeriodIndex.difference() now cast to object dtype instead of raising IncompatibleFrequency when operating with another PeriodIndex with incompatible dtype (GH39306)
- Bug in IntervalIndex.is_monotonic(), IntervalIndex.get_loc(), IntervalIndex.get_indexer_for(), and IntervalIndex.__contains__() when NA values are present (GH41831)
Indexing¶
Bug in
Index.union()
andMultiIndex.union()
dropping duplicateIndex
values whenIndex
was not monotonic orsort
was set toFalse
(GH36289, GH31326, GH40862)Bug in
CategoricalIndex.get_indexer()
failing to raiseInvalidIndexError
when non-unique (GH38372)Bug in
IntervalIndex.get_indexer()
whentarget
has CategoricalDtype and both the index and the target contain NA values (GH41934)
- Bug in Series.loc() raising a ValueError when the input was filtered with a Boolean list and the values to set were a list with a lower dimension (GH20438)
- Bug in inserting many new columns into a DataFrame causing incorrect subsequent indexing behavior (GH38380)
- Bug in DataFrame.__setitem__() raising a ValueError when setting multiple values to duplicate columns (GH15695)
- Bug in DataFrame.loc(), Series.loc(), DataFrame.__getitem__() and Series.__getitem__() returning incorrect elements for non-monotonic DatetimeIndex for string slices (GH33146)
- Bug in DataFrame.reindex() and Series.reindex() with timezone-aware indexes raising a TypeError for method="ffill" and method="bfill" and a specified tolerance (GH38566)
- Bug in DataFrame.reindex() with datetime64[ns] or timedelta64[ns] incorrectly casting to integers when the fill_value requires casting to object dtype (GH39755)
- Bug in DataFrame.__setitem__() raising a ValueError when setting on an empty DataFrame using specified columns and a nonempty DataFrame value (GH38831)
- Bug in DataFrame.loc.__setitem__() raising a ValueError when operating on a unique column when the DataFrame has duplicate columns (GH38521)
- Bug in DataFrame.iloc.__setitem__() and DataFrame.loc.__setitem__() with mixed dtypes when setting with a dictionary value (GH38335)
- Bug in Series.loc.__setitem__() and DataFrame.loc.__setitem__() raising KeyError when provided a Boolean generator (GH39614)
- Bug in Series.iloc() and DataFrame.iloc() raising a KeyError when provided a generator (GH39614)
- Bug in DataFrame.__setitem__() not raising a ValueError when the right-hand side is a DataFrame with the wrong number of columns (GH38604)
- Bug in Series.__setitem__() raising a ValueError when setting a Series with a scalar indexer (GH38303)
- Bug in DataFrame.loc() dropping levels of a MultiIndex when the DataFrame used as input has only one row (GH10521)
- Bug in DataFrame.__getitem__() and Series.__getitem__() always raising KeyError when slicing with existing strings where the Index has milliseconds (GH33589)
- Bug in setting timedelta64 or datetime64 values into a numeric Series failing to cast to object dtype (GH39086, GH39619)
- Bug in setting Interval values into a Series or DataFrame with a mismatched IntervalDtype incorrectly casting the new values to the existing dtype (GH39120)
- Bug in setting datetime64 values into a Series with integer dtype incorrectly casting the datetime64 values to integers (GH39266)
- Bug in setting np.datetime64("NaT") into a Series with Datetime64TZDtype incorrectly treating the timezone-naive value as timezone-aware (GH39769)
- Bug in Index.get_loc() not raising KeyError when key=NaN and method is specified but NaN is not in the Index (GH39382)
- Bug in DatetimeIndex.insert() when inserting np.datetime64("NaT") into a timezone-aware index incorrectly treating the timezone-naive value as timezone-aware (GH39769)
- Bug in Index.insert() incorrectly raising when setting a new column that cannot be held in the existing frame.columns, or in Series.reset_index() or DataFrame.reset_index(), instead of casting to a compatible dtype (GH39068)
- Bug in RangeIndex.append() where a single object of length 1 was concatenated incorrectly (GH39401)
- Bug in RangeIndex.astype() where, when converting to CategoricalIndex, the categories became an Int64Index instead of a RangeIndex (GH41263)
- Bug in setting numpy.timedelta64 values into an object-dtype Series using a Boolean indexer (GH39488)
- Bug in setting numeric values into a boolean-dtype Series using at or iat failing to cast to object dtype (GH39582)
- Bug in DataFrame.__setitem__() and DataFrame.iloc.__setitem__() raising ValueError when trying to index with a row-slice and setting a list as values (GH40440)
- Bug in DataFrame.loc() not raising KeyError when the key was not found in a MultiIndex and the levels were not fully specified (GH41170)
- Bug in DataFrame.loc.__setitem__() when setting-with-expansion incorrectly raising when the index in the expanding axis contained duplicates (GH40096)
- Bug in DataFrame.loc.__getitem__() with a MultiIndex casting to float when at least one index column has float dtype and a scalar is retrieved (GH41369)
- Bug in DataFrame.loc() incorrectly matching non-Boolean index elements (GH20432)
- Bug in indexing with np.nan on a Series or DataFrame with a CategoricalIndex incorrectly raising KeyError when np.nan keys are present (GH41933)
- Bug in Series.__delitem__() with ExtensionDtype incorrectly casting to ndarray (GH40386)
- Bug in DataFrame.at() with a CategoricalIndex returning incorrect results when passed integer keys (GH41846)
- Bug in DataFrame.loc() returning a MultiIndex in the wrong order if an indexer has duplicates (GH40978)
- Bug in DataFrame.__setitem__() raising a TypeError when using a str subclass as the column name with a DatetimeIndex (GH37366)
- Bug in PeriodIndex.get_loc() failing to raise a KeyError when given a Period with a mismatched freq (GH41670)
- Bug in .loc.__getitem__ with a UInt64Index and negative-integer keys raising OverflowError instead of KeyError in some cases, wrapping around to positive integers in others (GH41777)
- Bug in Index.get_indexer() failing to raise ValueError in some cases with invalid method, limit, or tolerance arguments (GH41918)
- Bug when slicing a Series or DataFrame with a TimedeltaIndex, when passing an invalid string, raising ValueError instead of a TypeError (GH41821)
- Bug in the Index constructor sometimes silently ignoring a specified dtype (GH38879)
- Index.where() behavior now mirrors Index.putmask() behavior, i.e. index.where(mask, other) matches index.putmask(~mask, other) (GH39412); a short sketch follows this list
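The where/putmask equivalence in the last item can be checked with a minimal sketch; the index, mask, and replacement values below are arbitrary:

import numpy as np
import pandas as pd

idx = pd.Index([1, 2, 3])
mask = np.array([True, False, True])
other = [10, 20, 30]

# where() keeps entries where the mask is True and substitutes elsewhere;
# putmask() substitutes where its mask is True, hence the inverted mask.
assert idx.where(mask, other).equals(idx.putmask(~mask, other))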
Missing¶
- Bug where Grouper did not correctly propagate the dropna argument; DataFrameGroupBy.transform() now correctly handles missing values for dropna=True (GH35612)
- Bug in isna(), Series.isna(), Index.isna(), DataFrame.isna(), and the corresponding notna functions not recognizing Decimal("NaN") objects (GH39409); illustrated in the sketch below
- Bug in DataFrame.fillna() not accepting a dictionary for the downcast keyword (GH40809)
- Bug in isna() not returning a copy of the mask for nullable types, causing any subsequent mask modification to change the original array (GH40935)
- Bug in DataFrame construction with float data containing NaN and an integer dtype casting instead of retaining the NaN (GH26919)
- Bug in Series.isin() and MultiIndex.isin() not treating all NaNs as equivalent if they were contained in tuples (GH41836)
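A minimal illustration of the Decimal("NaN") fix above (GH39409); the Series contents are made up:

from decimal import Decimal

import pandas as pd

s = pd.Series([Decimal("1.5"), Decimal("NaN")])

# Decimal("NaN") was previously not recognized as missing.
s.isna()  # now [False, True]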
MultiIndex¶
- Bug in DataFrame.drop() raising a TypeError when the MultiIndex is non-unique and level is not provided (GH36293)
- Bug in MultiIndex.intersection() duplicating NaN in the result (GH38623)
- Bug in MultiIndex.equals() incorrectly returning True when the MultiIndex contained NaN even when ordered differently (GH38439); see the sketch at the end of this section
- Bug in MultiIndex.intersection() always returning an empty result when intersecting with CategoricalIndex (GH38653)
- Bug in MultiIndex.difference() incorrectly raising TypeError when indexes contain non-sortable entries (GH41915)
- Bug in MultiIndex.reindex() raising a ValueError when used on an empty MultiIndex and indexing only a specific level (GH41170)
- Bug in MultiIndex.reindex() raising TypeError when reindexing against a flat Index (GH41707)
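A sketch of the MultiIndex.equals() ordering fix noted above (GH38439), using made-up tuples:

import numpy as np
import pandas as pd

left = pd.MultiIndex.from_tuples([(np.nan, 1), (2.0, 3)])
right = pd.MultiIndex.from_tuples([(2.0, 3), (np.nan, 1)])

# The NaN entry sits at a different position in each index,
# so the two indexes are not equal.
left.equals(right)  # now False; previously True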
I/O¶
- Bug in Index.__repr__() when display.max_seq_items=1 (GH38415)
- Bug in read_csv() not recognizing scientific notation if the decimal argument is set and engine="python" (GH31920)
- Bug in read_csv() interpreting an NA value as a comment when the NA value contains the comment string; fixed for engine="python" (GH34002)
- Bug in read_csv() raising an IndexError with multiple header columns and index_col specified when the file has no data rows (GH38292)
- Bug in read_csv() not accepting usecols with a different length than names for engine="python" (GH16469)
- Bug in read_csv() returning object dtype when delimiter="," with usecols and parse_dates specified for engine="python" (GH35873)
- Bug in read_csv() raising a TypeError when names and parse_dates are specified for engine="c" (GH33699)
- Bug in read_clipboard() and DataFrame.to_clipboard() not working in WSL (GH38527)
- Allow custom error values for the parse_dates argument of read_sql(), read_sql_query() and read_sql_table() (GH35185)
- Bug in DataFrame.to_hdf() and Series.to_hdf() raising a KeyError when applied to subclasses of DataFrame or Series (GH33748)
- Bug in HDFStore.put() raising a wrong TypeError when saving a DataFrame with a non-string dtype (GH34274)
- Bug in json_normalize() resulting in the first element of a generator object not being included in the returned DataFrame (GH35923)
- Bug in read_csv() applying the thousands separator to date columns when the column should be parsed for dates and usecols is specified for engine="python" (GH39365)
- Bug in read_excel() forward filling MultiIndex names when multiple header and index columns are specified (GH34673)
- Bug in read_excel() not respecting set_option() (GH34252)
- Bug in read_csv() not switching true_values and false_values for the nullable Boolean dtype (GH34655)
- Bug in read_json() with orient="split" not maintaining a numeric string index (GH28556)
- read_sql() returned an empty generator if chunksize was non-zero and the query returned no results; it now returns a generator with a single empty DataFrame (GH34411); a sketch follows this section
- Bug in read_hdf() returning unexpected records when filtering on categorical string columns using the where parameter (GH39189)
- Bug in read_sas() raising a ValueError when datetimes were null (GH39725)
- Bug in read_excel() dropping empty values from single-column spreadsheets (GH39808)
- Bug in read_excel() loading trailing empty rows/columns for some filetypes (GH41167)
- Bug in read_excel() raising an AttributeError when the Excel file had a MultiIndex header followed by two empty rows and no index (GH40442)
- Bug in read_excel(), read_csv(), read_table(), read_fwf(), and read_clipboard() where one blank row after a MultiIndex header with no index would be dropped (GH40442)
- Bug in DataFrame.to_string() misplacing the truncation column when index=False (GH40904)
- Bug in DataFrame.to_string() adding an extra dot and misaligning the truncation row when index=False (GH40904)
- Bug in read_orc() always raising an AttributeError (GH40918)
- Bug in read_csv() and read_table() silently ignoring prefix if both names and prefix are defined; now raising a ValueError (GH39123)
- Bug in read_csv() and read_excel() not respecting the dtype for a duplicated column name when mangle_dupe_cols is set to True (GH35211)
- Bug in read_csv() silently ignoring sep if both delimiter and sep are defined; now raising a ValueError (GH39823)
- Bug in read_csv() and read_table() misinterpreting arguments when sys.setprofile had been previously called (GH41069)
- Bug in the conversion from PyArrow to pandas (e.g. for reading Parquet) with nullable dtypes and a PyArrow array whose data buffer size is not a multiple of the dtype size (GH40896)
- Bug in read_excel() raising an error when pandas could not determine the file type even though the user had specified the engine argument (GH41225)
- Bug in read_clipboard() where copying from an Excel file shifted values into the wrong column if there were null values in the first column (GH41108)
- Bug in DataFrame.to_hdf() and Series.to_hdf() raising a TypeError when trying to append a string column to an incompatible column (GH41897)
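The new read_sql() chunksize behavior (GH34411) can be sketched with an in-memory SQLite table; the table and query are hypothetical:

import sqlite3

import pandas as pd

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER)")  # a table with no rows

# With a non-zero chunksize and no results, the generator now yields
# a single empty DataFrame instead of nothing at all.
chunks = list(pd.read_sql("SELECT * FROM t", con, chunksize=10))
len(chunks)      # 1
chunks[0].empty  # True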
Period¶
Plotting¶
- Bug in plotting.scatter_matrix() raising when a 2D ax argument is passed (GH16253)
- Prevent warnings when Matplotlib’s constrained_layout is enabled (GH25261)
- Bug in DataFrame.plot() showing the wrong colors in the legend if the function was called repeatedly and some calls used yerr while others did not (GH39522); see the sketch below
- Bug in DataFrame.plot() showing the wrong colors in the legend if the function was called repeatedly and some calls used secondary_y while others used legend=False (GH40044)
- Bug in DataFrame.plot.box() where caps and min/max markers were not visible when the dark_background theme was selected (GH40769)
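A minimal sketch of the legend-color fix for repeated DataFrame.plot() calls (GH39522); the data and error values are made up, and Matplotlib is required:

import matplotlib.pyplot as plt

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [3, 2, 1]})
fig, ax = plt.subplots()

# Only the first call passes yerr; the legend entries now keep
# the colors of the lines actually drawn.
df.plot(y="a", yerr=[0.1, 0.1, 0.1], ax=ax)
df.plot(y="b", ax=ax)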
Groupby/resample/rolling¶
- Bug in GroupBy.agg() with PeriodDtype columns incorrectly casting results too aggressively (GH38254)
- Bug in SeriesGroupBy.value_counts() where unobserved categories in a grouped categorical Series were not tallied (GH38672)
- Bug in SeriesGroupBy.value_counts() where an error was raised on an empty Series (GH39172)
- Bug in GroupBy.indices() containing non-existent indices when null values were present in the groupby keys (GH9304)
- Fixed bug in GroupBy.sum() causing a loss of precision; Kahan summation is now used (GH38778)
- Fixed bug in GroupBy.cumsum() and GroupBy.mean() causing a loss of precision; Kahan summation is now used (GH38934)
- Bug in Resampler.aggregate() and DataFrame.transform() raising a TypeError instead of SpecificationError when missing keys had mixed dtypes (GH39025)
- Bug in DataFrameGroupBy.idxmin() and DataFrameGroupBy.idxmax() with ExtensionDtype columns (GH38733)
- Bug in Series.resample() raising when the index was a PeriodIndex consisting of NaT (GH39227)
- Bug in RollingGroupby.corr() and ExpandingGroupby.corr() where the groupby column would return 0 instead of np.nan when other was longer than each group (GH39591)
- Bug in ExpandingGroupby.corr() and ExpandingGroupby.cov() where 1 would be returned instead of np.nan when other was longer than each group (GH39591)
- Bug in GroupBy.mean(), GroupBy.median() and DataFrame.pivot_table() not propagating metadata (GH28283)
- Bug in Series.rolling() and DataFrame.rolling() not calculating window bounds correctly when the window is an offset and the dates are in descending order (GH40002)
- Bug in Series.groupby() and DataFrame.groupby() on an empty Series or DataFrame losing the index, columns, and/or data types when directly using the methods idxmax, idxmin, mad, min, max, sum, prod, and skew, or when using them through apply, aggregate, or resample (GH26411)
- Bug in GroupBy.apply() where a MultiIndex would be created instead of an Index when used on a RollingGroupby object (GH39732)
- Bug in DataFrameGroupBy.sample() where an error was raised when weights was specified and the index was an Int64Index (GH39927)
- Bug in DataFrameGroupBy.aggregate() and Resampler.aggregate() sometimes raising a SpecificationError when passed a dictionary and columns were missing; a KeyError is now always raised instead (GH40004)
- Bug in DataFrameGroupBy.sample() where column selection was not applied before computing the result (GH39928)
- Bug in ExponentialMovingWindow where calling __getitem__ would incorrectly raise a ValueError when times was provided (GH40164)
- Bug in ExponentialMovingWindow where calling __getitem__ would not retain the com, span, alpha or halflife attributes (GH40164)
- ExponentialMovingWindow now raises a NotImplementedError when specifying times with adjust=False due to an incorrect calculation (GH40098)
- Bug in ExponentialMovingWindowGroupby.mean() where the times argument was ignored when engine='numba' (GH40951)
- Bug in ExponentialMovingWindowGroupby.mean() where the wrong times were used in the case of multiple groups (GH40951)
- Bug in ExponentialMovingWindowGroupby where the times vector and the values became out of sync for non-trivial groups (GH40951)
- Bug in Series.asfreq() and DataFrame.asfreq() dropping rows when the index was not sorted (GH39805)
- Bug in aggregation functions for DataFrame not respecting the numeric_only argument when the level keyword was given (GH40660)
- Bug in SeriesGroupBy.aggregate() where using a user-defined function to aggregate a Series with an object-typed Index caused an incorrect Index shape (GH40014)
- Bug in RollingGroupby where the as_index=False argument in groupby was ignored (GH39433)
- Bug in GroupBy.any() and GroupBy.all() raising a ValueError when used with nullable type columns holding NA, even with skipna=True (GH40585); see the sketch after this list
- Bug in GroupBy.cummin() and GroupBy.cummax() incorrectly rounding integer values near the int64 implementation bounds (GH40767)
- Bug in GroupBy.rank() with nullable dtypes incorrectly raising a TypeError (GH41010)
- Bug in GroupBy.cummin() and GroupBy.cummax() computing the wrong result with nullable data types too large to roundtrip when casting to float (GH37493)
- Bug in DataFrame.rolling() returning a mean of zero for an all-NaN window with min_periods=0 if the calculation is not numerically stable (GH41053)
- Bug in DataFrame.rolling() returning a non-zero sum for an all-NaN window with min_periods=0 if the calculation is not numerically stable (GH41053)
- Bug in SeriesGroupBy.agg() failing to retain an ordered CategoricalDtype on order-preserving aggregations (GH41147)
- Bug in GroupBy.min() and GroupBy.max() with multiple object-dtype columns and numeric_only=False incorrectly raising a ValueError (GH41111)
- Bug in DataFrameGroupBy.rank() with the GroupBy object’s axis=0 and the rank method’s keyword axis=1 (GH41320)
- Bug in DataFrameGroupBy.__getitem__() with non-unique columns incorrectly returning a malformed SeriesGroupBy instead of a DataFrameGroupBy (GH41427)
- Bug in DataFrameGroupBy.transform() with non-unique columns incorrectly raising an AttributeError (GH41427)
- Bug in Resampler.apply() with non-unique columns incorrectly dropping duplicated columns (GH41445)
- Bug in Series.groupby() aggregations incorrectly returning an empty Series instead of raising a TypeError on aggregations that are invalid for its dtype, e.g. .prod with datetime64[ns] dtype (GH41342)
- Bug in DataFrameGroupBy aggregations incorrectly failing to drop columns with invalid dtypes for that aggregation when there are no valid columns (GH41291)
- Bug in DataFrame.rolling.__iter__() where on was not assigned to the index of the resulting objects (GH40373)
- Bug in DataFrameGroupBy.transform() and DataFrameGroupBy.agg() with engine="numba" where *args were being cached with the user-passed function (GH41647)
- Bug in DataFrameGroupBy methods agg, transform, sum, bfill, ffill, pad, pct_change, shift, ohlc dropping .columns.names (GH41497)
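A sketch of the GroupBy.any() fix for nullable columns holding NA (GH40585); the frame below is made up:

import pandas as pd

df = pd.DataFrame(
    {
        "key": ["x", "x", "y"],
        "val": pd.array([True, pd.NA, False], dtype="boolean"),
    }
)

# This previously raised a ValueError because of the NA entry;
# with skipna=True (the default) the NA is now ignored.
df.groupby("key")["val"].any()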
Reshaping¶
- Bug in merge() raising an error when performing an inner join with a partial index and right_index=True when there was no overlap between the indices (GH33814)
- Bug in DataFrame.unstack() with missing levels leading to incorrect index names (GH37510)
- Bug in merge_asof() propagating the right Index with left_index=True and a right_on specification instead of the left Index (GH33463)
- Bug in DataFrame.join() on a DataFrame with a MultiIndex returning the wrong result when one of the two indexes had only one level (GH36909)
- merge_asof() now raises a ValueError instead of a cryptic TypeError in case of non-numerical merge columns (GH29130)
- Bug in DataFrame.join() not assigning values correctly when the DataFrame had a MultiIndex where at least one dimension had dtype Categorical with non-alphabetically sorted categories (GH38502)
- Series.value_counts() and Series.mode() now return consistent keys in the original order (GH12679, GH11227 and GH39007)
- Bug in DataFrame.stack() not handling NaN in MultiIndex columns correctly (GH39481)
- Bug in DataFrame.apply() giving incorrect results when the func argument was a string, axis=1, and the axis argument was not supported; a ValueError is now raised instead (GH39211)
- Bug in DataFrame.sort_values() not reshaping the index correctly after sorting on columns when ignore_index=True (GH39464)
- Bug in DataFrame.append() returning incorrect dtypes with combinations of ExtensionDtype dtypes (GH39454)
- Bug in DataFrame.append() returning incorrect dtypes when used with combinations of datetime64 and timedelta64 dtypes (GH39574)
- Bug in DataFrame.append() with a DataFrame with a MultiIndex and appending a Series whose Index is not a MultiIndex (GH41707)
- Bug in DataFrame.pivot_table() returning a MultiIndex for a single value when operating on an empty DataFrame (GH13483)
- Index can now be passed to the numpy.all() function (GH40180)
- Bug in DataFrame.stack() not preserving CategoricalDtype in a MultiIndex (GH36991)
- Bug in to_datetime() raising an error when the input sequence contained unhashable items (GH39756)
- Bug in Series.explode() preserving the index when ignore_index was True and the values were scalars (GH40487); illustrated below
- Bug in to_datetime() raising a ValueError when the Series contains both None and NaT and has more than 50 elements (GH39882)
- Bug in Series.unstack() and DataFrame.unstack() with object-dtype values containing timezone-aware datetime objects incorrectly raising TypeError (GH41875)
- Bug in DataFrame.melt() raising InvalidIndexError when the DataFrame has duplicate columns used as value_vars (GH41951)
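A minimal illustration of the Series.explode() fix for scalar values (GH40487); the Series is made up:

import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "b", "c"])

# With scalar values, ignore_index=True previously kept the original
# labels; it now resets the result to a default RangeIndex.
s.explode(ignore_index=True).index  # RangeIndex(start=0, stop=3, step=1)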
Sparse¶
- Bug in DataFrame.sparse.to_coo() raising a KeyError with columns that are a numeric Index without a 0 (GH18414)
- Bug in SparseArray.astype() with copy=False producing incorrect results when converting from integer dtype to floating dtype (GH34456)
- Bug in SparseArray.max() and SparseArray.min() always returning an empty result (GH40921); see the sketch below
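A sketch of the SparseArray.max()/SparseArray.min() fix (GH40921), with arbitrary data:

import pandas as pd

arr = pd.arrays.SparseArray([0, 0, 1, 2])

# Both reductions previously came back empty; they now return scalars.
arr.max()  # 2
arr.min()  # 0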
ExtensionArray¶
- Bug in DataFrame.where() when other is a Series with an ExtensionDtype (GH38729)
- Fixed bug where Series.idxmax(), Series.idxmin(), Series.argmax(), and Series.argmin() would fail when the underlying data is an ExtensionArray (GH32749, GH33719, GH36566); see the sketch following this list
- Fixed bug where some properties of subclasses of PandasExtensionDtype were improperly cached (GH40329)
- Bug in DataFrame.mask() where masking a DataFrame with an ExtensionDtype raised a ValueError (GH40941)
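A minimal sketch of the idxmax/idxmin/argmax/argmin fix for extension-array-backed Series (GH32749, GH33719, GH36566); the data are made up:

import pandas as pd

s = pd.Series(pd.array([1, pd.NA, 3], dtype="Int64"), index=["a", "b", "c"])

# These lookups previously failed when the underlying data was an
# ExtensionArray; the NA is skipped by default.
s.idxmax()  # 'c'
s.argmin()  # 0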
Styler¶
- Bug in Styler where the subset argument in methods raised an error for some valid MultiIndex slices (GH33562)
- Styler rendered HTML output has seen minor alterations to support w3 good code standards (GH39626)
- Bug in Styler where rendered HTML was missing a column class identifier for certain header cells (GH39716)
- Bug in Styler.background_gradient() where the text color was not determined correctly (GH39888)
- Bug in Styler.set_table_styles() where multiple elements in the CSS selectors of the table_styles argument were not correctly added (GH34061); a sketch follows this list
- Bug in Styler where copying from Jupyter dropped the top left cell and misaligned headers (GH12147)
- Bug in Styler.where where kwargs were not passed to the applicable callable (GH40845)
- Bug in Styler causing CSS to duplicate on multiple renders (GH39395, GH40334)
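A sketch of the Styler.set_table_styles() fix for multi-element CSS selectors (GH34061); the frame and styling rules are made up:

import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# A comma-separated selector targets both header and data cells;
# each element of the selector is now added to the rendered CSS.
styler = df.style.set_table_styles(
    [{"selector": "th, td", "props": [("border", "1px solid grey")]}]
)
html = styler.render()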
Other¶
- inspect.getmembers(Series) no longer raises an AbstractMethodError (GH38782)
- Bug in Series.where() with numeric dtype and other=None not casting to nan (GH39761)
- Bug in assert_series_equal(), assert_frame_equal(), assert_index_equal() and assert_extension_array_equal() incorrectly raising when an attribute has an unrecognized NA type (GH39461)
- Bug in assert_index_equal() with exact=True not raising when comparing CategoricalIndex instances with Int64Index and RangeIndex categories (GH41263)
- Bug in DataFrame.equals(), Series.equals(), and Index.equals() with object-dtype containing np.datetime64("NaT") or np.timedelta64("NaT") (GH39650)
- Bug in show_versions() where console JSON output was not proper JSON (GH39701)
- Bug in pandas.util.hash_pandas_object() not recognizing hash_key, encoding and categorize when the input object type is a DataFrame (GH41404); see the sketch below
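A minimal sketch of the hash_pandas_object() fix for DataFrame input (GH41404); the frame and the 16-character key are example values only:

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# hash_key, encoding, and categorize are now honored for DataFrame input.
hashed = pd.util.hash_pandas_object(
    df, hash_key="0123456789123456", encoding="utf8", categorize=False
)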