[ https://issues.apache.org/jira/browse/FLINK-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15357379#comment-15357379 ]

ASF GitHub Bot commented on FLINK-4123:
---------------------------------------

Github user zentol commented on a diff in the pull request:

    https://github.com/apache/flink/pull/2183#discussion_r69163543
  
    --- Diff: flink-streaming-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraTupleWriteAheadSink.java ---
    @@ -110,11 +92,25 @@ public void close() throws Exception {
        }
     
        @Override
    -   protected void sendValues(Iterable<IN> values, long timestamp) throws Exception {
    -           //verify that no query failed until now
    -           if (exception != null) {
    -                   throw new Exception(exception);
    -           }
    +   protected boolean sendValues(Iterable<IN> values, long timestamp) throws Exception {
    +           int updatesSent = 0;
    +           final AtomicInteger updatesConfirmed = new AtomicInteger(0);
    +
    +           final AtomicContainer<Throwable> exception = new AtomicContainer<>();
    +
    +           FutureCallback<ResultSet> callback = new FutureCallback<ResultSet>() {
    +                   @Override
    +                   public void onSuccess(ResultSet resultSet) {
    +                           updatesConfirmed.incrementAndGet();
    +                   }
    +
    +                   @Override
    +                   public void onFailure(Throwable throwable) {
    +                           exception.set(throwable);
    --- End diff ---
    
    I don't think it matters too much, as long as _some_ exception is noticed. 
    The first exception would probably be the most reasonable approach, though.
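
For illustration only (the PR's AtomicContainer may behave differently): "keep the 
first exception" semantics can be sketched with a plain 
java.util.concurrent.atomic.AtomicReference, where compareAndSet only succeeds 
while no failure has been recorded yet.

    import java.util.concurrent.atomic.AtomicReference;

    // Hypothetical holder that keeps only the first reported failure.
    class FirstExceptionHolder {
        private final AtomicReference<Throwable> first = new AtomicReference<>();

        void report(Throwable t) {
            // compareAndSet succeeds only while the holder is still empty,
            // so the first Throwable wins and later ones are ignored.
            first.compareAndSet(null, t);
        }

        Throwable get() {
            return first.get();
        }
    }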


> CassandraWriteAheadSink can hang on cassandra failure
> -----------------------------------------------------
>
>                 Key: FLINK-4123
>                 URL: https://issues.apache.org/jira/browse/FLINK-4123
>             Project: Flink
>          Issue Type: Bug
>          Components: Cassandra Connector
>    Affects Versions: 1.1.0
>            Reporter: Chesnay Schepler
>            Assignee: Chesnay Schepler
>            Priority: Blocker
>             Fix For: 1.1.0
>
>
> The CassandraWriteAheadSink verifies that all writes sent to Cassandra have 
> been applied by counting how many statements were sent and how many callbacks 
> were invoked. Once all writes have been sent, the sink enters a loop that is 
> only exited once both counts are equal.
> Thus, should Cassandra crash after all writes were sent but before all of them 
> were acknowledged, the sink will deadlock in this loop.
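
A minimal sketch (not the actual Flink code) of the acknowledgement loop described 
above: with no failure check and no timeout, a callback that never fires leaves the 
loop spinning forever.

    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch of the problematic wait: if Cassandra dies after the statements
    // were sent but before every callback has fired, updatesConfirmed never
    // reaches updatesSent and this loop never terminates.
    class AckLoopSketch {
        static void waitForAcks(int updatesSent, AtomicInteger updatesConfirmed)
                throws InterruptedException {
            while (updatesConfirmed.get() < updatesSent) {
                Thread.sleep(100); // no exception check, no escape hatch
            }
        }
    }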



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
