[ https://issues.apache.org/jira/browse/FLINK-21003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17405061#comment-17405061 ]

Alex Z commented on FLINK-21003:
--------------------------------

I think the PR for [ FLINK-11388 ] seems to solve this problem, but I don't know
why that PR was closed and not merged.

> Flink add Sink to AliyunOSS doesn't work
> ----------------------------------------
>
>                 Key: FLINK-21003
>                 URL: https://issues.apache.org/jira/browse/FLINK-21003
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / FileSystem
>    Affects Versions: 1.11.0
>            Reporter: zhangyunyun
>            Priority: Minor
>              Labels: auto-deprioritized-major
>
> When I add a sink to OSS, use the code below:
> {code:java}
> import java.util.concurrent.TimeUnit;
>
> import org.apache.flink.api.common.serialization.SimpleStringEncoder;
> import org.apache.flink.core.fs.Path;
> import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
> import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;
>
> String path = "oss://<bucket>/<dir>";
> StreamingFileSink<String> streamingFileSink = StreamingFileSink
>     .forRowFormat(new Path(path), new SimpleStringEncoder<String>("UTF-8"))
>     .withRollingPolicy(
>         DefaultRollingPolicy.builder()
>             .withRolloverInterval(TimeUnit.MINUTES.toMillis(5))
>             .withInactivityInterval(TimeUnit.MINUTES.toMillis(1))
>             .withMaxPartSize(1024 * 1024 * 10)
>             .build()
>     ).build();
> strStream.addSink(streamingFileSink);
> {code}
>  It throws an error:
> {code:java}
> Recoverable writers on Hadoop are only supported for HDFS
> {code}
> Is there any mistake I made?
> OR
> I want to use Aliyun OSS to store the stream data, split into different files.
> The example in the official Flink documentation is the one below:
> {code:java}
> // Write to OSS bucket
> stream.writeAsText("oss://<your-bucket>/<object-name>")
> {code}
> How can I use this to split the output into different files based on the data's attributes?
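> Would a custom BucketAssigner on the StreamingFileSink be the right direction? Below is a rough
> sketch of what I have in mind (the MyEvent POJO, its getCategory() accessor, and the eventStream
> variable are just placeholders for illustration, and on OSS this would still run into the
> RecoverableWriter limitation reported above):
> {code:java}
> import org.apache.flink.core.io.SimpleVersionedSerializer;
> import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
> import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer;
>
> // Routes each record into a bucket (a sub-directory under the sink path) named after one of its attributes.
> public class AttributeBucketAssigner implements BucketAssigner<MyEvent, String> {
>
>     @Override
>     public String getBucketId(MyEvent element, BucketAssigner.Context context) {
>         // Every distinct value returned here becomes its own output directory,
>         // so records end up in separate files per attribute value.
>         return element.getCategory();
>     }
>
>     @Override
>     public SimpleVersionedSerializer<String> getSerializer() {
>         return SimpleVersionedStringSerializer.INSTANCE;
>     }
> }
> {code}
> Wired into the same builder as above:
> {code:java}
> StreamingFileSink<MyEvent> sink = StreamingFileSink
>     .forRowFormat(new Path(path), new SimpleStringEncoder<MyEvent>("UTF-8"))
>     .withBucketAssigner(new AttributeBucketAssigner())
>     .withRollingPolicy(DefaultRollingPolicy.builder().build())
>     .build();
> eventStream.addSink(sink);
> {code}
> Is that the intended way to split the output, or is there a better approach?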
>  
> Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
