The data is in a CSV file in Azure Data Lake Storage. I had to fetch that data via Azure Databricks in Python (PySpark) and then transfer it to Azure SQL Database. Everything was done in Databricks using PySpark: storage account connectivity, database connectivity via JDBC, and finally the transfer to the database.
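The flow described above can be sketched roughly as follows. This is a minimal illustration, not the actual notebook code: the storage account name, container, table name, and credentials are all placeholders, and I assume ADLS Gen2 access via an account key (other auth methods, such as a service principal, work too).

```python
def build_jdbc_url(server: str, database: str) -> str:
    """Build an Azure SQL Database JDBC connection URL (placeholder names)."""
    return (
        f"jdbc:sqlserver://{server}.database.windows.net:1433;"
        f"database={database};encrypt=true;loginTimeout=30"
    )


def transfer_csv_to_sql(storage_account: str, container: str, csv_path: str,
                        server: str, database: str, user: str, password: str,
                        account_key: str, table: str) -> None:
    """Read a CSV from ADLS Gen2 and write it to Azure SQL via JDBC.

    Intended to run inside a Databricks notebook, where a Spark session
    and the SQL Server JDBC driver are already available.
    """
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Configure direct access to the storage account with its key
    spark.conf.set(
        f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
        account_key,
    )

    # Read the CSV from the data lake into a DataFrame
    df = spark.read.csv(
        f"abfss://{container}@{storage_account}.dfs.core.windows.net/{csv_path}",
        header=True,
        inferSchema=True,
    )

    # Write the DataFrame to Azure SQL Database over JDBC
    (df.write.format("jdbc")
        .option("url", build_jdbc_url(server, database))
        .option("dbtable", table)
        .option("user", user)
        .option("password", password)
        .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
        .mode("overwrite")
        .save())
```

In the notebook the same steps appear in order: set the storage configuration, read the CSV into a DataFrame, then write it out with the JDBC connector.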
I have uploaded three files for you to check: the dataset, the DBC file (which contains the notebook code, metadata, and visualizations), and the notebook file, where you can see the PySpark code separately.