How to use groupByKey() in spark structured streaming without aggregates

2020-10-27 Thread act_coder
Is there a way to use the groupByKey() function in Spark Structured Streaming without aggregates? I have a scenario like the one below, where we would like to group items by a key without applying any aggregates. Sample incoming data: I would like to apply groupByKey on fi
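A common way to group a streaming Dataset by key without applying an aggregate is groupByKey followed by flatMapGroupsWithState, Spark's documented API for arbitrary per-group processing on streams. The sketch below is illustrative, not from this thread: the Event case class, the JSON source path, and the choice to keep a per-key list as state are all assumptions.

```scala
import org.apache.spark.sql.{Encoders, SparkSession}
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

// Illustrative record type; the real schema comes from your incoming data.
case class Event(key: String, value: String)

object GroupWithoutAggregate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("group-no-agg").getOrCreate()
    import spark.implicits._

    // Assumed source: JSON files appearing under /tmp/events.
    val events = spark.readStream
      .format("json")
      .schema(Encoders.product[Event].schema)
      .load("/tmp/events")
      .as[Event]

    // groupByKey yields a KeyValueGroupedDataset; flatMapGroupsWithState
    // lets us emit the grouped records ourselves, with no aggregate applied.
    val grouped = events
      .groupByKey(_.key)
      .flatMapGroupsWithState[List[String], (String, List[String])](
        OutputMode.Update, GroupStateTimeout.NoTimeout) {
        (key, batch, state: GroupState[List[String]]) =>
          // Accumulate every value seen for this key across micro-batches.
          val all = state.getOption.getOrElse(Nil) ++ batch.map(_.value).toList
          state.update(all)
          Iterator((key, all)) // one row per key, carrying its grouped values
      }

    grouped.writeStream
      .format("console")
      .outputMode("update")
      .start()
      .awaitTermination()
  }
}
```

Note that state here grows without bound; in practice you would add a timeout (e.g. GroupStateTimeout.ProcessingTimeTimeout) or trim the list to bound memory.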

Re: Custom JdbcConnectionProvider

2020-10-27 Thread Takeshi Yamamuro
> the user and developer guide will come soon... Yea, it looks nice! Thanks for the work. On Tue, Oct 27, 2020 at 11:21 PM Gabor Somogyi wrote: > Thanks Takeshi for sharing it, that can be used as an example. > The user and developer guide will come soon... > > On Tue, Oct 27, 2020 at 2:31 PM T

Re: Custom JdbcConnectionProvider

2020-10-27 Thread Gabor Somogyi
Thanks, Takeshi, for sharing it; that can be used as an example. The user and developer guide will come soon... On Tue, Oct 27, 2020 at 2:31 PM Takeshi Yamamuro wrote: > Hi, > > Please see an example code in > https://github.com/gaborgsomogyi/spark-jdbc-connection-provider ( > https://github.com/a

Re: Custom JdbcConnectionProvider

2020-10-27 Thread Takeshi Yamamuro
Hi, Please see the example code in https://github.com/gaborgsomogyi/spark-jdbc-connection-provider ( https://github.com/apache/spark/pull/29024). Since it depends on the service loader, I think you need to add a configuration file under META-INF/services. Bests, Takeshi On Tue, Oct 27, 2020 at 9:50
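Concretely, Java's ServiceLoader discovers a provider through a resource file named after the interface, containing the fully qualified name of the implementation class. A sketch of that registration, where the implementation class name is an illustrative assumption:

```
# File: src/main/resources/META-INF/services/org.apache.spark.sql.jdbc.JdbcConnectionProvider
com.example.MyConnectionProvider
```

The file must end up on the classpath of the application (or inside its jar) so that Spark's service-loader scan can find the provider at startup.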

Custom JdbcConnectionProvider

2020-10-27 Thread rafaelkyrdan
Guys, do you know how I can use a custom implementation of JdbcConnectionProvider? As far as I understand, with the Spark JDBC source we can use a custom Driver, like this:

val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql:dbserver")
  .option("driver", "my.driver")

And we need a m
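For reference, a custom provider extends Spark's JdbcConnectionProvider developer API (added around Spark 3.1 via the PR linked above; the exact abstract members may differ slightly by version). The sketch below is a minimal assumption-laden example: the class name, the "myProvider" name, and the rule in canHandle are all illustrative, not part of Spark.

```scala
import java.sql.{Connection, Driver}
import java.util.Properties

import org.apache.spark.sql.jdbc.JdbcConnectionProvider

// Illustrative provider; register it via META-INF/services so the
// service loader can discover it.
class MyConnectionProvider extends JdbcConnectionProvider {

  override val name: String = "myProvider"

  // Decide whether this provider should handle the given driver/options.
  // Matching on an option value is one possible rule, assumed here.
  override def canHandle(driver: Driver, options: Map[String, String]): Boolean =
    options.get("connectionProvider").contains(name)

  // Open the connection. Here we simply delegate to the driver; this is
  // the hook where custom authentication (e.g. Kerberos) would go.
  override def getConnection(driver: Driver, options: Map[String, String]): Connection = {
    val props = new Properties()
    options.foreach { case (k, v) => props.setProperty(k, v) }
    driver.connect(options("url"), props)
  }

  // Return true only if getConnection mutates JVM-global security state
  // (e.g. the JAAS configuration), so Spark can serialize such calls.
  override def modifiesSecurityContext(driver: Driver, options: Map[String, String]): Boolean =
    false
}
```

With the provider on the classpath and registered in META-INF/services, the usual spark.read.format("jdbc") path can pick it up; the custom Driver option from the snippet above stays unchanged.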