Is there a way to use the groupByKey() function in Spark Structured
Streaming without aggregates?
I have a scenario like the one below, where we would like to group items
by key without applying any aggregates.
Sample incoming data:
I would like to apply groupByKey on fi
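A minimal sketch of one way to do this, assuming Spark 2.2+ (the Event case
class, the comma-separated socket source, and all field names are
illustrative, not from the original mail): groupByKey followed by
flatMapGroupsWithState lets you process the grouped items per key without
applying any aggregate function.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

    // Hypothetical record type; adjust to the real incoming schema.
    case class Event(key: String, value: String)

    val spark = SparkSession.builder.appName("groupByKeyNoAgg").getOrCreate()
    import spark.implicits._

    // Illustrative source: each line is a "key,value" pair.
    val events = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()
      .as[String]
      .map { line =>
        val Array(k, v) = line.split(",", 2)
        Event(k, v)
      }

    // Group by key and emit everything seen so far per key -- no aggregate function.
    val grouped = events
      .groupByKey(_.key)
      .flatMapGroupsWithState[List[String], (String, List[String])](
        OutputMode.Update, GroupStateTimeout.NoTimeout) {
        (key: String, items: Iterator[Event], state: GroupState[List[String]]) =>
          val buffered = state.getOption.getOrElse(Nil) ++ items.map(_.value).toList
          state.update(buffered)
          Iterator((key, buffered))
      }

    grouped.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()

The state object here just buffers the values seen per key; you could
equally ignore the state and emit only the items from the current
micro-batch for each key.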
> the user and developer guide will come soon...
Yeah, it looks nice! Thanks for the work.
On Tue, Oct 27, 2020 at 11:21 PM Gabor Somogyi wrote:
> Thanks Takeshi for sharing it, that can be used as an example.
> The user and developer guide will come soon...
>
> On Tue, Oct 27, 2020 at 2:31 PM Takeshi Yamamuro wrote:
Thanks Takeshi for sharing it, that can be used as an example.
The user and developer guide will come soon...
On Tue, Oct 27, 2020 at 2:31 PM Takeshi Yamamuro wrote:
> Hi,
>
> Please see example code in
> https://github.com/gaborgsomogyi/spark-jdbc-connection-provider
> (https://github.com/apache/spark/pull/29024).
Hi,
Please see example code in
https://github.com/gaborgsomogyi/spark-jdbc-connection-provider
(https://github.com/apache/spark/pull/29024).
Since it depends on the service loader, I think you need to add a
configuration file in META-INF/services.
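For reference, a minimal sketch of such a provider, assuming the developer
API from the linked PR (the package, class name, and URL check are
illustrative; newer Spark releases extend this API, e.g. with a name
method, so check the version you build against):

    package com.example

    import java.sql.{Connection, Driver}
    import java.util.Properties

    import org.apache.spark.sql.jdbc.JdbcConnectionProvider

    // Hypothetical provider: canHandle decides which URLs it serves.
    class MyConnectionProvider extends JdbcConnectionProvider {
      override def canHandle(driver: Driver, options: Map[String, String]): Boolean =
        options.get("url").exists(_.startsWith("jdbc:postgresql:"))

      // Custom authentication (credentials, Kerberos, etc.) would go here.
      override def getConnection(driver: Driver, options: Map[String, String]): Connection = {
        val props = new Properties()
        options.get("user").foreach(props.setProperty("user", _))
        options.get("password").foreach(props.setProperty("password", _))
        driver.connect(options("url"), props)
      }
    }

The service-loader registration is then a file on the classpath named
META-INF/services/org.apache.spark.sql.jdbc.JdbcConnectionProvider
containing the single line:

    com.example.MyConnectionProvider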
Bests,
Takeshi
On Tue, Oct 27, 2020 at 9:50
Guys, do you know how I can use a custom implementation of
JdbcConnectionProvider?
As far as I understand, with the Spark JDBC source we can use a custom
Driver, like this:
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql:dbserver")
  .option("driver", "my.Driver")
  .load()
And we need a m