Jacek, do you mean
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.ForeachWriter ?
I do not understand how to use it, since it passes every value separately,
not every partition, and adding to a table value by value would not work.
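
To make my concern concrete, this is roughly the skeleton I think I would end
up with (a sketch only, with method signatures as in the 2.1 scaladoc;
writeSingleRowToHive is a hypothetical placeholder for whatever per-row insert
would be needed):

import org.apache.spark.sql.{ForeachWriter, Row}

// Sketch: ForeachWriter hands rows over one at a time via process(),
// even though open()/close() are called once per partition and trigger.
class PerRowHiveWriter extends ForeachWriter[Row] {

  // Called once per partition for every trigger; returning true means "process this partition".
  override def open(partitionId: Long, version: Long): Boolean = true

  // Called once for every single row in the partition.
  override def process(value: Row): Unit = {
    writeSingleRowToHive(value) // hypothetical per-row insert; the part that does not scale
  }

  // Called after the partition has been processed (or with the error that stopped it).
  override def close(errorCause: Throwable): Unit = ()

  // Placeholder; a real implementation would still have to update the metastore somewhere.
  private def writeSingleRowToHive(row: Row): Unit = ()
}

It would be wired up with something like
streamingDF.writeStream.foreach(new PerRowHiveWriter).start(), but even if I
buffered rows in process() and flushed them per partition from close(), nothing
in this interface touches the Hive metastore.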

2017-02-07 12:10 GMT-08:00 Jacek Laskowski <ja...@japila.pl>:

> Hi,
>
> Have you considered foreach sink?
>
> Jacek
>
> On 6 Feb 2017 8:39 p.m., "Egor Pahomov" <pahomov.e...@gmail.com> wrote:
>
>> Hi, I'm thinking of using Structured Streaming instead of the old streaming,
>> but I need to be able to save results to a Hive table. The documentation for the file
>> sink (http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-sinks)
>> says: "Supports writes to partitioned tables.". But being able to write to partitioned
>> directories is not enough to write to the table: someone needs to write to the Hive
>> metastore. How can I use Structured Streaming and write to a Hive table?
>>
>> --
>>
>>
>> *Sincerely yours,*
>> *Egor Pakhomov*
>>
>


-- 


*Sincerely yours,*
*Egor Pakhomov*
