pyspark.sql.DataFrame.withColumnsRenamed

DataFrame.withColumnsRenamed(colsMap)

Returns a new DataFrame by renaming multiple columns. This is a no-op if the schema doesn’t contain the given column names.

New in version 3.4.0: Added support for renaming multiple columns.

Parameters
colsMap : dict

A dict of existing column names and corresponding desired column names. Currently, only a single map is supported.

Returns
DataFrame

DataFrame with renamed columns.

Examples

>>> df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])

Example 1: Rename a single column

>>> df.withColumnsRenamed({"age": "age2"}).show()
+----+-----+
|age2| name|
+----+-----+
|   2|Alice|
|   5|  Bob|
+----+-----+

Example 2: Rename multiple columns

>>> df.withColumnsRenamed({"age": "age2", "name": "name2"}).show()
+----+-----+
|age2|name2|
+----+-----+
|   2|Alice|
|   5|  Bob|
+----+-----+
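
For non-overlapping renames like this one, a single call with a dict gives the same result as chaining DataFrame.withColumnRenamed() once per entry (a sketch using the same df as above):

>>> df.withColumnRenamed("age", "age2").withColumnRenamed("name", "name2").show()
+----+-----+
|age2|name2|
+----+-----+
|   2|Alice|
|   5|  Bob|
+----+-----+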

Example 3: Rename non-existing column (no-op)

>>> df.withColumnsRenamed({"non_existing": "new_name"}).show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+

Example 4: Rename with an empty dictionary (no-op)

>>> df.withColumnsRenamed({}).show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+
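
Example 5: Build the colsMap programmatically

Because colsMap is an ordinary dict, it can be constructed on the fly. This sketch shows one possible way: a dict comprehension over df.columns that upper-cases every column name.

>>> df.withColumnsRenamed({c: c.upper() for c in df.columns}).show()
+---+-----+
|AGE| NAME|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+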