If you didn’t resolve the error, you can try this alternative to save your PySpark DataFrame to your local machine as a CSV file.
With display(dataframe):
Here I created a DataFrame with 10,000 rows for reference. With display(), Databricks allows you to download up to 1 million rows.
Code:
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("firstname", StringType(), True)
])

# Build 10,000 sample rows
data2 = [(1, "Rakesh")]
for i in range(2, 10001):
    data2.append((i, "Rakesh"))

df = spark.createDataFrame(data=data2, schema=schema)
df.show(5)
display(df)
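The data-generation step above can be checked without a Spark session; this plain-Python sketch builds the same row list and confirms it contains exactly 10,000 rows:

```python
# Build the same sample rows used for the DataFrame, without Spark
data2 = [(1, "Rakesh")]
for i in range(2, 10001):
    data2.append((i, "Rakesh"))

print(len(data2))   # total number of rows
print(data2[0])     # first row
print(data2[-1])    # last row
```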
Dataframe Creation:

display(df):
In this output, display() shows 1,000 rows by default. To download the whole DataFrame, click on the down arrow and then click Download full results.

Then click Re-execute and download; you can now download the DataFrame as a CSV file to your local machine.
