You can use the Spark connector for Azure SQL Database and SQL Server to write data to a SQL database with bulk insert in Scala. Please refer to the section "Write data to Azure SQL database or SQL Server using Bulk Insert" of the official Azure document Accelerate real-time big data analytics with Spark connector for Azure SQL Database and SQL Server, as in the screenshot below.

So the problem now is how to pass a PySpark DataFrame named data_frame from Python to the Scala code. You can register the DataFrame as a temporary view with a name like temp_table, as in the code below in a Databricks Python notebook (registerTempTable still works, but it has been deprecated since Spark 2.0 in favor of createOrReplaceTempView).
# register a temp view for the dataframe in Python
# (createOrReplaceTempView replaces the deprecated registerTempTable)
data_frame.createOrReplaceTempView("temp_table")
%scala
// read the temp view back as a DataFrame in Scala
val scalaDF = spark.table("temp_table")

Then run the bulk-insert code in a Scala cell after %scala:
%scala
import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
/**
 * Add column metadata.
 * If not specified, the metadata is read automatically
 * from the destination table, which can hurt performance.
 */
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)
val bulkCopyConfig = Config(Map(
  "url"               -> "mysqlserver.database.windows.net",
  "databaseName"      -> "MyDatabase",
  "user"              -> "username",
  "password"          -> "*********",
  "dbTable"           -> "dbo.Clients",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))
scalaDF.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)