echauchot commented on code in PR #3:
URL: https://github.com/apache/flink-connector-cassandra/pull/3#discussion_r1098953393


##########
flink-connector-cassandra/src/main/java/org/apache/flink/connector/cassandra/source/reader/CassandraSplitReader.java:
##########
@@ -0,0 +1,280 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.cassandra.source.reader;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.connector.base.source.reader.RecordsBySplits;
+import org.apache.flink.connector.base.source.reader.RecordsWithSplitIds;
+import org.apache.flink.connector.base.source.reader.splitreader.SplitReader;
+import org.apache.flink.connector.base.source.reader.splitreader.SplitsChange;
+import org.apache.flink.connector.cassandra.source.split.CassandraSplit;
+import org.apache.flink.connector.cassandra.source.split.CassandraSplitState;
+import org.apache.flink.connector.cassandra.source.split.RingRange;
+import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
+
+import com.datastax.driver.core.Cluster;
+import com.datastax.driver.core.ColumnMetadata;
+import com.datastax.driver.core.Metadata;
+import com.datastax.driver.core.PreparedStatement;
+import com.datastax.driver.core.ResultSet;
+import com.datastax.driver.core.Session;
+import com.datastax.driver.core.Token;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+/**
+ * {@link SplitReader} for Cassandra source. This class is responsible for fetching the records as
+ * {@link CassandraRow}. For that, it executes a range query (a query that outputs records belonging
+ * to a {@link RingRange}) based on the user-specified query. This class manages the Cassandra
+ * cluster and session.
+ */
+public class CassandraSplitReader implements SplitReader<CassandraRow, CassandraSplit> {
+
+    private static final Logger LOG = LoggerFactory.getLogger(CassandraSplitReader.class);
+    public static final String SELECT_REGEXP = "(?i)select .+ from (\\w+)\\.(\\w+).*;$";
+
+    private final Cluster cluster;
+    private final Session session;
+    private final Set<CassandraSplitState> unprocessedSplits;
+    private final AtomicBoolean wakeup = new AtomicBoolean(false);
+    private final String query;
+
+    public CassandraSplitReader(ClusterBuilder clusterBuilder, String query) {
+        // need a thread safe set
+        this.unprocessedSplits = ConcurrentHashMap.newKeySet();
+        this.query = query;
+        cluster = clusterBuilder.getCluster();
+        session = cluster.connect();
+    }
+
+    @Override
+    public RecordsWithSplitIds<CassandraRow> fetch() {
+        Map<String, Collection<CassandraRow>> recordsBySplit = new HashMap<>();
+        Set<String> finishedSplits = new HashSet<>();
+        Metadata clusterMetadata = cluster.getMetadata();
+
+        String partitionKey = getPartitionKey(clusterMetadata);
+        String finalQuery = generateRangeQuery(query, partitionKey);
+        PreparedStatement preparedStatement = session.prepare(finalQuery);
+        // Set wakeup to false to start consuming.
+        wakeup.compareAndSet(true, false);
+        for (CassandraSplitState cassandraSplitState : unprocessedSplits) {

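The `SELECT_REGEXP` pattern above recognizes the user query and captures the keyspace and table names, and `generateRangeQuery` then rewrites that query into a token-range query. Since the body of `generateRangeQuery` is not shown in this hunk, the sketch below is only a plausible illustration of such a rewrite; the `toRangeQuery` name and the exact shape of the appended WHERE clause are assumptions, not the PR's actual implementation:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RangeQuerySketch {
    // Same pattern as CassandraSplitReader#SELECT_REGEXP in the diff above.
    static final Pattern SELECT_PATTERN =
            Pattern.compile("(?i)select .+ from (\\w+)\\.(\\w+).*;$");

    /**
     * Hypothetical rewrite: validate the query against the pattern, then
     * append a token-range predicate on the partition key so the query
     * only returns rows belonging to one RingRange.
     */
    static String toRangeQuery(String query, String partitionKey) {
        Matcher m = SELECT_PATTERN.matcher(query);
        if (!m.matches()) {
            throw new IllegalArgumentException("Unsupported query: " + query);
        }
        // Drop the trailing ';' before appending the range predicate.
        String withoutSemicolon = query.substring(0, query.lastIndexOf(';'));
        return withoutSemicolon
                + " WHERE (token(" + partitionKey + ") >= ?)"
                + " AND (token(" + partitionKey + ") < ?);";
    }
}
```

For example, `toRangeQuery("select * from ks.tbl;", "id")` yields `select * from ks.tbl WHERE (token(id) >= ?) AND (token(id) < ?);`, which could then be bound with a split's ring-range endpoints.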
Review Comment:
   > I'll look around. The naming seems off though; it's not a maximum records for a split (which would be more relevant for the SplitGenerator), but more of a batch size.
   
   Agreed, with the current code it is a batch size.
   
   > You also have to consider how this information can be stored in a checkpoint
   
   Yes, I did not have time to address this point before my vacation, but I wanted to validate the general approach of these 3 big architecture changes with you. Regarding this point: indeed, the ResultSet is just a handle on the fetching process, so storing it as part of the checkpoint does not help in knowing what was already processed after restoration. It is also certain that Cassandra does not provide a deterministic order (ordering does not scale), so counting processed rows does not work either. I looked into `ResultSet#ExecutionInfo` and found no counter, processed row ids, etc.
   
   So I guess that leaves the SplitGenerator solution you proposed. In that case the configuration option becomes a max_rows_per_split and no longer a batch_size. Also, with this solution:
   
   - We might get more splits than the current parallelism, so each split reader will read more than one split.
   
   - The aim here is to avoid storing too much in memory, so I'd prefer that the user configuration parameter be expressed in bytes. I can reuse the size estimates from the Beam 2017 code so that the SplitGenerator produces splits of a given size. Then all the records of each split will be stored in memory to create the needed `RecordsBySplits`.
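   A minimal sketch of this size-based split generation, assuming a linear token ring and an available table-size estimate in bytes; every name here (`SizeBasedSplitSketch`, `generateSplits`, the parameters) is hypothetical and not the final connector API:

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class SizeBasedSplitSketch {
    /**
     * Divide the token ring [ringStart, ringEnd) into contiguous ranges so
     * that each range covers roughly maxSplitMemorySizeBytes of the
     * estimated table size. Returns each split as a {start, end} pair.
     */
    static List<BigInteger[]> generateSplits(
            BigInteger ringStart,
            BigInteger ringEnd,
            long estimatedTableSizeBytes,
            long maxSplitMemorySizeBytes) {
        // Ceiling division, with at least one split.
        long numSplits = Math.max(1,
                (estimatedTableSizeBytes + maxSplitMemorySizeBytes - 1)
                        / maxSplitMemorySizeBytes);
        BigInteger ringSize = ringEnd.subtract(ringStart);
        BigInteger increment = ringSize.divide(BigInteger.valueOf(numSplits));
        List<BigInteger[]> splits = new ArrayList<>();
        BigInteger start = ringStart;
        for (long i = 0; i < numSplits; i++) {
            // Last split absorbs any rounding remainder so the ring is covered.
            BigInteger end = (i == numSplits - 1) ? ringEnd : start.add(increment);
            splits.add(new BigInteger[] {start, end});
            start = end;
        }
        return splits;
    }
}
```

   With this shape, a reader that gets assigned several splits processes them one at a time, and each split's records fit within the configured memory budget before being packed into `RecordsBySplits`.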
   
   WDYT?
   


