From: Fabian Hueske
Sent: Tuesday, November 29, 2016 15:52
To: user@flink.apache.org
Subject: Re: Problems with RollingSink
Hi Diego,
If you want the data of all streams to be written to the same files, you can
also union the streams before sending them to the sink.
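For example, here is a minimal sketch of the union approach, assuming Flink 1.1.x with the flink-connector-filesystem dependency (the stream names, the String event type, and the sources are placeholder assumptions, not taken from your job; only the base path and bucket format come from the exception in this thread):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
import org.apache.flink.streaming.connectors.fs.RollingSink;

public class UnionToRollingSink {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Two example sources standing in for the real input streams.
        DataStream<String> stream1 = env.fromElements("event-a", "event-b");
        DataStream<String> stream2 = env.fromElements("event-c", "event-d");

        RollingSink<String> sink = new RollingSink<>("hdfs:///user/biguardian/events");
        // "yyyy-MM-dd--HH" is the DateTimeBucketer default and matches the
        // bucket directory visible in the exception (2016-11-28--15).
        sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HH"));

        // A single sink consumes the union, so the streams no longer
        // compete to create the same part files.
        stream1.union(stream2).addSink(sink);

        env.execute("union-to-rolling-sink");
    }
}

This way only one sink operator owns the part-file counters, instead of two operators both starting at part-0-0 in the same bucket.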
Best, Fabian
…to achieve this in a different manner by joining the streams somehow before sinking… maybe through Kafka?
Kind Regards,
Diego
From: Kostas Kloudas [mailto:k.klou...@data-artisans.com]
Sent: Monday, November 28, 2016 19:13
To: user@flink.apache.org
Subject: Re: Problems with RollingSink
Hi Diego,
The message shows that two tasks are trying to write to the same file concurrently.
Is this message thrown upon recovery after a failure, or at the initialization
of the job?
Could you please check the logs for other exceptions before this?
Can this be related to this issue?
https://www.
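For context, here is a hypothetical job layout that would produce exactly this kind of collision: two independent RollingSinks sharing one base path. Each sink operator numbers its part files from part-0-0, so subtask 0 of each tries to create the same HDFS file. All stream names and sources below are assumptions, since the original job code is not shown in this thread:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.RollingSink;

public class CollidingSinks {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> stream1 = env.fromElements("a");
        DataStream<String> stream2 = env.fromElements("b");

        String basePath = "hdfs:///user/biguardian/events";

        // Both sinks write under the same base path. Subtask 0 of each sink
        // tries to create <basePath>/<bucket>/part-0-0, and HDFS rejects the
        // second create with AlreadyBeingCreatedException.
        stream1.addSink(new RollingSink<String>(basePath));
        stream2.addSink(new RollingSink<String>(basePath));

        env.execute("colliding-rolling-sinks");
    }
}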
Hi colleagues,
I am experiencing problems when trying to write events from a stream to HDFS. I
get the following exception:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
failed to create file
/user/biguardian/events/2016-11-28--15/flinkpar