Source: Impulse switched to FINISHED in Flink streaming mode

2021-03-03 Thread Dmytro Dragan
Hi guys, We have quite a simple generator source: GenerateSequence.from(0) .withRate(1000, Duration.standardSeconds(1L)) .withTimestampFn((Long l) -> Instant.now()) After starting it in Flink streaming mode, we see that the Impulse source switches to FINISHED, while other parts
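
For context, a minimal runnable sketch of the setup described above, assuming the Beam Java SDK (the class name and options handling are illustrative, not from the original message):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.GenerateSequence;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.joda.time.Duration;
    import org.joda.time.Instant;

    public class SequenceSourceExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Unbounded sequence: 1000 elements per second, stamped with processing time.
        p.apply(GenerateSequence.from(0)
            .withRate(1000, Duration.standardSeconds(1L))
            .withTimestampFn((Long l) -> Instant.now()));

        p.run();
      }
    }

Note that Impulse by design emits a single element and then completes, so an Impulse operator reaching FINISHED is not necessarily an error in itself; the actual cause here is identified in the reply below.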

Re: Source: Impulse switched to FINISHED in Flink streaming mode

2021-03-03 Thread Dmytro Dragan
My bad: the issue was related to a skipped checkpointingInterval setting. Best regards, Dmytro Dragan | dd...@softserveinc.com | Lead Big Data Engineer | SoftServe
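
For reference, a sketch of the fix, assuming the Beam Flink runner's FlinkPipelineOptions (the 60-second interval is an illustrative value, not from the thread):

    import org.apache.beam.runners.flink.FlinkPipelineOptions;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class CheckpointedPipeline {
      public static void main(String[] args) {
        FlinkPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
        options.setStreaming(true);
        // Without this, Flink never triggers checkpoints, which can stall
        // sources that only make progress on checkpoint completion.
        options.setCheckpointingInterval(60_000L); // milliseconds

        Pipeline p = Pipeline.create(options);
        // ... build the pipeline ...
        p.run();
      }
    }

The same setting can also be passed on the command line as --checkpointingInterval=60000.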

Scio 0.10.0 released

2021-03-03 Thread Neville Li
Hi all, We just released Scio 0.10.0. Here's a short summary of the notable changes since 0.9.x: - Better decoupled Google Cloud Platform dependencies - Simplified coder implicits for faster compilation - Sort Merge Bucket performance improvements and bug fixes - Type-safe Parquet support - Parquet

Does writeDynamic() support writing different element groups to different output paths?

2021-03-03 Thread Tao Li
Hi Beam community, I have a streaming app that writes each hour’s data to a folder named after that hour. With Flink (for example), we can leverage the “Bucketing File Sink”: https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/connectors/filesystem_sink.html However I am not seeing Bea

Re: Does writeDynamic() support writing different element groups to different output paths?

2021-03-03 Thread Kobe Feng
I used the following approach a long time ago for writing into partitions in HDFS (others may have better solutions), and I am not sure whether any of the interfaces have changed, which you would need to check: val baseDir = HadoopClient.resolve(basePath, env) datum.apply("darwin.write.hadoop.parquet." + postfix, FileIO.writeDynamic[St
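
Since the quoted Scala snippet is cut off, here is a self-contained sketch of the same idea with Beam's Java FileIO.writeDynamic(), routing each record to an hourly subfolder. The base path, record format, and hour-prefix convention are all hypothetical, not taken from the thread:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.StringUtf8Coder;
    import org.apache.beam.sdk.io.FileIO;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.windowing.FixedWindows;
    import org.apache.beam.sdk.transforms.windowing.Window;
    import org.joda.time.Duration;

    public class HourlyDynamicWrite {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical records whose first 13 characters encode the event hour.
        p.apply(Create.of(
                "2021-03-03T10 event-a",
                "2021-03-03T10 event-b",
                "2021-03-03T11 event-c"))
            // Dynamic writes on a stream require windowing; hourly fixed
            // windows match the directory layout.
            .apply(Window.<String>into(FixedWindows.of(Duration.standardHours(1))))
            .apply(FileIO.<String, String>writeDynamic()
                .by((String line) -> line.substring(0, 13)) // destination key = hour prefix
                .withDestinationCoder(StringUtf8Coder.of())
                .via(TextIO.sink())
                .to("/tmp/output") // hypothetical base directory
                .withNaming((String hour) ->
                    FileIO.Write.defaultNaming("hour=" + hour + "/part", ".txt"))
                .withNumShards(1));

        p.run().waitUntilFinish();
      }
    }

Each destination key becomes its own subdirectory under the base path (e.g. /tmp/output/hour=2021-03-03T10/part-....txt), which is the Beam-native analogue of Flink's bucketing file sink.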