[ https://issues.apache.org/jira/browse/FLINK-21003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
zhangyunyun updated FLINK-21003:
--------------------------------
    Description: 
When I add a sink to OSS, I use the code below:
{code:java}
String path = "oss://<bucket>/<dir>";
StreamingFileSink streamingFileSink = StreamingFileSink
    .forRowFormat(new Path(path), new SimpleStringEncoder<String>("UTF-8"))
    .withRollingPolicy(
        DefaultRollingPolicy.builder()
            .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
            .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
            .withMaxPartSize(1024 * 1024 * 10)
            .build()
    ).build();
strStream.addSink(streamingFileSink);
{code}
It throws an error:
{code:java}
Recoverable writers on Hadoop are only supported for HDF
{code}
Is there any mistake I made?

OR

I want to use Aliyun OSS to store the stream data, split into different files. The example in the official Flink documentation is shown below:
{code:java}
// Write to OSS bucket
stream.writeAsText("oss://<your-bucket>/<object-name>")
{code}
How can I use this to split the output into different files by the data's attributes?

Thanks!


  was:
When I add a sink to OSS, I use the code below:
{code:java}
String path = "oss://<bucket>/<dir>";
StreamingFileSink streamingFileSink = StreamingFileSink
    .forRowFormat(new Path(path), new SimpleStringEncoder<String>("UTF-8"))
    .withRollingPolicy(
        DefaultRollingPolicy.builder()
            .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
            .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
            .withMaxPartSize(1024 * 1024 * 10)
            .build()
    ).build();
strStream.addSink(streamingFileSink);
{code}
It throws an error:
{code:java}
Recoverable writers on Hadoop are only supported for HDF
{code}
Is there something I did wrong?

I want to use Aliyun OSS to store the stream data, split into different files. The example in the official Flink documentation is shown below:
{code:java}
// Write to OSS bucket
stream.writeAsText("oss://<your-bucket>/<object-name>")
{code}
How can I use this to split the output into different files by the data's attributes?

Thanks!


> Flink add Sink to AliyunOSS doesn't work
> ----------------------------------------
>
>                 Key: FLINK-21003
>                 URL: https://issues.apache.org/jira/browse/FLINK-21003
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / FileSystem
>    Affects Versions: 1.11.0
>            Reporter: zhangyunyun
>            Priority: Major
>
> When I add a sink to OSS, I use the code below:
> {code:java}
> String path = "oss://<bucket>/<dir>";
> StreamingFileSink streamingFileSink = StreamingFileSink
>     .forRowFormat(new Path(path), new SimpleStringEncoder<String>("UTF-8"))
>     .withRollingPolicy(
>         DefaultRollingPolicy.builder()
>             .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
>             .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
>             .withMaxPartSize(1024 * 1024 * 10)
>             .build()
>     ).build();
> strStream.addSink(streamingFileSink);
> {code}
> It throws an error:
> {code:java}
> Recoverable writers on Hadoop are only supported for HDF
> {code}
> Is there any mistake I made?
>
> OR
>
> I want to use Aliyun OSS to store the stream data, split into different files. The example in the official Flink documentation is shown below:
> {code:java}
> // Write to OSS bucket
> stream.writeAsText("oss://<your-bucket>/<object-name>")
> {code}
> How can I use this to split the output into different files by the data's attributes?
>
> Thanks!


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
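For the last question above (splitting the output into different files by a record attribute), the usual approach with StreamingFileSink is to supply a custom BucketAssigner so that each distinct bucket id becomes its own sub-directory of part files under the base path. The sketch below is illustrative only, not from the issue: it assumes plain String records of a hypothetical "category,payload" form, and it does not by itself address the "Recoverable writers" error, which depends on the filesystem plugin used for the oss:// scheme.
{code:java}
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.core.io.SimpleVersionedSerializer;
import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

import java.util.concurrent.TimeUnit;

public class AttributeBucketingSketch {

    // Hypothetical assigner: routes each record to a sub-directory named after
    // the first comma-separated field of the record.
    static final class AttributeBucketAssigner implements BucketAssigner<String, String> {
        @Override
        public String getBucketId(String element, BucketAssigner.Context context) {
            int idx = element.indexOf(',');
            return idx > 0 ? element.substring(0, idx) : "unknown";
        }

        @Override
        public SimpleVersionedSerializer<String> getSerializer() {
            return SimpleVersionedStringSerializer.INSTANCE;
        }
    }

    // Builds a row-format sink with the same rolling policy as in the report,
    // plus the attribute-based bucket assigner.
    static StreamingFileSink<String> buildSink(String basePath) {
        return StreamingFileSink
                .forRowFormat(new Path(basePath), new SimpleStringEncoder<String>("UTF-8"))
                .withBucketAssigner(new AttributeBucketAssigner())
                .withRollingPolicy(
                        DefaultRollingPolicy.builder()
                                .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
                                .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
                                .withMaxPartSize(1024 * 1024 * 10)
                                .build())
                .build();
    }

    // Usage: strStream.addSink(buildSink("oss://<bucket>/<dir>"));
}
{code}
With this assigner, records whose first field differs are written into different directories (and therefore different part files) under the same base path, while the rolling policy still controls when individual part files are rolled.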