pyspark.pandas.groupby.SeriesGroupBy.value_counts
- SeriesGroupBy.value_counts(sort=None, ascending=None, dropna=True)
Compute the frequency of each distinct value within each group.
- Parameters
- sort : boolean, default None
Sort by frequencies.
- ascending : boolean, default None
Sort in ascending order.
- dropna : boolean, default True
Don’t include counts of NaN.
Examples
>>> df = ps.DataFrame({'A': [1, 2, 2, 3, 3, 3],
...                    'B': [1, 1, 2, 3, 3, np.nan]},
...                   columns=['A', 'B'])
>>> df
   A    B
0  1  1.0
1  2  1.0
2  2  2.0
3  3  3.0
4  3  3.0
5  3  NaN
>>> df.groupby('A')['B'].value_counts().sort_index()
A  B
1  1.0    1
2  1.0    1
   2.0    1
3  3.0    2
Name: count, dtype: int64
Include counts of NaN when dropna is False.
>>> df.groupby('A')['B'].value_counts(
...     dropna=False).sort_index()
A  B
1  1.0    1
2  1.0    1
   2.0    1
3  3.0    2
   NaN    1
Name: count, dtype: int64
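Pass sort=True to order the result by frequency rather than by index. A sketch: since the relative order of tied counts is not guaranteed across Spark partitions, only the single most frequent entry is shown here.

>>> df.groupby('A')['B'].value_counts(
...     sort=True, ascending=False).head(1)
A  B
3  3.0    2
Name: count, dtype: int64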