Hi,
In a project where I work with Databricks, we use this connector to read and
write data to Azure SQL Database, currently with Spark 2.4.5 and Scala 2.11.
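For context, a minimal sketch of how we use it, assuming the
com.microsoft.sqlserver.jdbc.spark format from microsoft/sql-spark-connector,
an existing SparkSession `spark`, and placeholder server/table/credential
values:

    // Placeholders -- real credentials come from our secret store
    val dbUser = "???"
    val dbPassword = "???"
    val url = "jdbc:sqlserver://myserver.database.windows.net;databaseName=mydb"

    // Read a table from Azure SQL Database via the connector
    val jdbcDf = spark.read
      .format("com.microsoft.sqlserver.jdbc.spark")
      .option("url", url)
      .option("dbtable", "dbo.my_table")
      .option("user", dbUser)
      .option("password", dbPassword)
      .load()

    // Write back with the same format
    jdbcDf.write
      .format("com.microsoft.sqlserver.jdbc.spark")
      .mode("append")
      .option("url", url)
      .option("dbtable", "dbo.my_table_copy")
      .option("user", dbUser)
      .option("password", dbPassword)
      .save()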
But those setups are getting old. What happens if we update Spark to 3.0.1 or
higher and Scala to 2.12?
This connector does not work according…
I would suggest asking Microsoft and Databricks; this forum is for Apache
Spark.
If you are interested, please drop me a note separately, as I'm keen to
understand the issue since we use the same setup.
Ayan
Let's say that I have a Spark dataframe with 3 columns:
id, name, age.
When I save it into HDFS/S3 using partitionBy("id", "name"), it saves as:
/id=1/name=Alex/.parquet
/id=2/name=Bob/.parquet
If I don't want to include "id=" and "name=" in the directory structure,
what should I do?
Theref…
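As far as I know, DataFrameWriter.partitionBy always emits Hive-style
key=value directories and has no option to drop the key prefix, so you have
to build the paths yourself. A minimal sketch of that workaround, assuming a
modest number of distinct partition values (collect() pulls them to the
driver) and a hypothetical base path /data:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("custom-partition-paths").getOrCreate()
    import spark.implicits._

    val df = Seq((1, "Alex", 30), (2, "Bob", 25)).toDF("id", "name", "age")

    // Write each (id, name) slice under /data/<id>/<name>/ instead of
    // the default /data/id=<id>/name=<name>/ layout.
    df.select("id", "name").distinct().collect().foreach { row =>
      val (id, name) = (row.getInt(0), row.getString(1))
      df.filter($"id" === id && $"name" === name)
        .drop("id", "name")          // don't duplicate the keys inside the files
        .write
        .mode("overwrite")
        .parquet(s"/data/$id/$name")
    }

The downside is one write job per partition; readers also lose Spark's
automatic partition discovery, since that relies on the key=value layout.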
The best option would certainly be to recompile the Spark connector for
MS SQL Server against the Spark 3.0.1/Scala 2.12 dependencies, and just
fix the compiler errors as you go. The code is open source on GitHub
(https://github.com/microsoft/sql-spark-connector). Looks like this
connector is us…
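For what it's worth, the dependency bump itself is small. A minimal sketch of
the build change, assuming an sbt build; the exact Scala patch version and
the mssql-jdbc driver version below are illustrative, not taken from the
repo:

    // build.sbt -- bump Scala and the provided Spark dependency
    scalaVersion := "2.12.12"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-sql" % "3.0.1" % "provided",
      // JDBC driver version is an assumption; pick whatever the repo pins
      "com.microsoft.sqlserver" % "mssql-jdbc" % "8.4.1.jre8"
    )

Most of the compiler errors you then hit tend to come from internal Spark
APIs that were removed or renamed between 2.4 and 3.0.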
Hi all.
As the title says, is there any good plan? Or other suggestions? Thanks for
all answers.
--
Best regards
Lucien
Hi,
Thanks.
I believe that this is an error message coming from the MongoDB server
itself. Essentially, there are multiple instances of my application
running at the same time. With a single instance or a small number of
them there are never issues; it becomes an issue when a sufficient
number of a…