After creating the SQL table, you can use spark.table("mytable") or spark.sql("select * from mytable") to load it as a DataFrame.
This is my sample SQL table:
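If you want to follow along, a comparable table can be created from a small DataFrame (the schema and rows here are hypothetical placeholders):

# Hypothetical sample data; substitute your own schema and values
sample = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
sample.write.mode("overwrite").saveAsTable("mytable")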

Using spark.table("mytable"):
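For example (df is just an arbitrary variable name):

# Look up the table by name and load it as a DataFrame
df = spark.table("mytable")
df.show()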

Using spark.sql("select * from mytable"):
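This is equivalent, but goes through a SQL query:

# Run the query and get the result set back as a DataFrame
df = spark.sql("select * from mytable")
df.show()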

Then save the DataFrame as CSV using your code:
# Write the DataFrame as CSV; "overwrite" replaces any existing output at the path
df.write.format("csv").mode("overwrite").save("/tmp/spark_output/datacsv")
But with this approach, Spark writes the data as a directory of multiple part CSV files (one per partition) rather than a single file, like this:
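On Databricks (assumed here, since the pandas example below uses a /dbfs path) you can list the output directory to see the part files:

# Each partition of the DataFrame becomes its own part-*.csv file
display(dbutils.fs.ls("/tmp/spark_output/datacsv"))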

To get a single CSV file you can use coalesce(1), which merges the data into one partition before writing (this funnels all rows through a single task, so it is only practical for small outputs):
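# Merge everything into one partition so Spark writes a single part file
df.coalesce(1).write.format("csv").mode("overwrite").save("/tmp/spark_output/datacsv")

Alternatively, if your data is small enough to fit in driver memory, you can convert the DataFrame to pandas and write one CSV file directly: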
import pandas
# Collect the Spark DataFrame to the driver as a pandas DataFrame
pandas_converted = df.toPandas()
# pandas writes a single file; note the /dbfs prefix (see below)
pandas_converted.to_csv("/dbfs/tmp/spark_output/mycsv.csv")

Make sure you add /dbfs at the start of the path: pandas writes through the driver's local file system, and /dbfs is the local mount point for DBFS, so without the prefix pandas cannot resolve the path.
You should now see a single CSV file at the path. To double-check, you can read it back with pandas (same path as above):
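# Read the single CSV back through the /dbfs local mount to confirm it was written
print(pandas.read_csv("/dbfs/tmp/spark_output/mycsv.csv").head())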

Sources and references:
Writing to a CSV file by Jeremy Peach (analyticjeremy).