dannycranmer commented on code in PR #1:
URL: https://github.com/apache/flink-connector-dynamodb/pull/1#discussion_r1009201543


##########
flink-connector-dynamodb/src/main/java/org/apache/flink/streaming/connectors/dynamodb/sink/DynamoDbSink.java:
##########
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.dynamodb.sink;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.connector.base.sink.AsyncSinkBase;
+import org.apache.flink.connector.base.sink.writer.BufferedRequestState;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Properties;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A DynamoDB Sink that performs async requests against a destination table using the buffering
+ * protocol specified in {@link AsyncSinkBase}.
+ *
+ * <p>The sink internally uses a {@link
+ * software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient} to communicate with the AWS
+ * endpoint.
+ *
+ * <p>The behaviour of the buffering may be specified by providing configuration during sink
+ * build time.
+ *
+ * <ul>
+ *   <li>{@code maxBatchSize}: the maximum size of a batch of entries that may be written to
+ *       DynamoDb. The DynamoDB client supports only up to 25 elements per batch.
+ *   <li>{@code maxInFlightRequests}: the maximum number of in-flight requests that may exist; if
+ *       any more in-flight requests need to be initiated once the maximum has been reached, then
+ *       the sink will block until some have completed
+ *   <li>{@code maxBufferedRequests}: the maximum number of elements held in the buffer; requests
+ *       to the sink will backpressure while the number of elements in the buffer is at the maximum
+ *   <li>{@code maxBatchSizeInBytes}: the maximum size of a batch of entries that may be written to
+ *       DynamoDb, measured in bytes
+ *   <li>{@code maxTimeInBufferMS}: the maximum amount of time an entry is allowed to live in the
+ *       buffer; if any element reaches this age, the entire buffer will be flushed immediately
+ *   <li>{@code maxRecordSizeInBytes}: the maximum size of a record the sink will accept into the
+ *       buffer; a record larger than this will be rejected when passed to the sink
+ *   <li>{@code failOnError}: when an exception is encountered while persisting to DynamoDb, the
+ *       job will fail immediately if failOnError is set
+ *   <li>{@code dynamoDbTablesConfig}: if provided for the table, the DynamoDb sink will attempt
+ *       to deduplicate records with the same primary and/or secondary keys in the same batch
+ *       request. Only the latest record with the same combination of key attributes is preserved
+ *       in the request.
+ * </ul>

Review Comment:
   This comment is out of sync with the recent changes to the class
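
   For context, configuring these buffering options would look roughly like the sketch below. This is hypothetical usage only: the setter names mirror the documented option names and the usual AsyncSinkBase builder conventions, and may not match the builder actually exposed in this PR.

   ```java
   // Hypothetical sketch: setter names follow the documented option names and
   // AsyncSinkBase builder conventions; the real builder API may differ.
   // `elementConverter` stands for some ElementConverter<String, DynamoDbWriteRequest>.
   DynamoDbSink<String> sink =
           DynamoDbSink.<String>builder()
                   .setElementConverter(elementConverter)
                   .setMaxBatchSize(25) // DynamoDB BatchWriteItem accepts at most 25 items
                   .setMaxInFlightRequests(50)
                   .setMaxBufferedRequests(10_000)
                   .setMaxTimeInBufferMS(5_000)
                   .setFailOnError(false)
                   .build();
   ```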



##########
flink-connector-dynamodb/src/main/java/org/apache/flink/streaming/connectors/dynamodb/sink/DynamoDbWriteRequest.java:
##########
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.dynamodb.sink;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.util.Preconditions;
+
+import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
+
+import java.io.Serializable;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ * Represents a single Write Request to DynamoDb. Contains the item to be written as well as the
+ * type of the Write Request (PUT/DELETE)
+ */
+@PublicEvolving
+public class DynamoDbWriteRequest implements Serializable {
+
+    private final Map<String, AttributeValue> item;
+    private final DynamoDbWriteRequestType type;
+
+    private DynamoDbWriteRequest(Map<String, AttributeValue> item, DynamoDbWriteRequestType type) {
+        this.item = item;
+        this.type = type;
+    }
+
+    public Map<String, AttributeValue> getItem() {
+        return item;
+    }
+
+    public DynamoDbWriteRequestType getType() {
+        return type;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) {
+            return true;
+        }
+        if (o == null || getClass() != o.getClass()) {
+            return false;
+        }
+        DynamoDbWriteRequest that = (DynamoDbWriteRequest) o;
+        return item.equals(that.item) && type == that.type;
+    }
+
+    @Override
+    public int hashCode() {
+        return Objects.hash(item, type);
+    }

Review Comment:
   If this is only required for tests, can we please delete it, [as per Flink coding guidelines](https://flink.apache.org/contributing/code-style-and-quality-java.html#equals--hashcode)?
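
   If the comparison is still wanted in tests, one option is to keep it on the test side. A minimal sketch, assuming AssertJ and JUnit 5 on the test classpath and the builder API shown later in this PR:

   ```java
   import static org.assertj.core.api.Assertions.assertThat;

   import java.util.Collections;
   import org.junit.jupiter.api.Test;
   import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

   class DynamoDbWriteRequestTest {

       @Test
       void comparesFieldByFieldWithoutEquals() {
           DynamoDbWriteRequest expected =
                   DynamoDbWriteRequest.build()
                           .setType(DynamoDbWriteRequestType.PUT)
                           .setItem(Collections.singletonMap(
                                   "pk", AttributeValue.builder().s("id-1").build()))
                           .build();
           DynamoDbWriteRequest actual =
                   DynamoDbWriteRequest.build()
                           .setType(DynamoDbWriteRequestType.PUT)
                           .setItem(Collections.singletonMap(
                                   "pk", AttributeValue.builder().s("id-1").build()))
                           .build();

           // Field-by-field comparison in the test; the production class then
           // does not need equals()/hashCode() at all.
           assertThat(actual).usingRecursiveComparison().isEqualTo(expected);
       }
   }
   ```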



##########
flink-connector-dynamodb/src/main/java/org/apache/flink/streaming/connectors/dynamodb/sink/DynamoDbWriteRequestType.java:
##########
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.dynamodb.sink;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * Represents a DynamoDb Write Request type. The following types are currently supported
+ *
+ * <ul>
+ *   <li>PUT - Put Request
+ *   <li>DELETE - Delete Request
+ * </ul>
+ */
+@PublicEvolving
+public enum DynamoDbWriteRequestType {
+
+    // Note: Enums have no stable hash code across different JVMs, use toByteValue() for
+    // this purpose.
+    PUT((byte) 0),
+    DELETE((byte) 1);

Review Comment:
   Is there a reason to not use `ordinal()` and define your own here?
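
   For reference, a minimal sketch of the `ordinal()`-based variant (assuming the enum declaration order is then treated as part of the serialized format, so constants may be appended but never reordered):

   ```java
   @PublicEvolving
   public enum DynamoDbWriteRequestType {
       PUT,
       DELETE;

       // Declaration order is the wire format: append new constants only.
       public byte toByteValue() {
           return (byte) ordinal();
       }

       public static DynamoDbWriteRequestType fromByteValue(byte value) {
           DynamoDbWriteRequestType[] types = values();
           if (value < 0 || value >= types.length) {
               throw new IllegalArgumentException(
                       "Unsupported byte value '" + value + "' for write request type.");
           }
           return types[value];
       }
   }
   ```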



##########
flink-connector-dynamodb/src/main/java/org/apache/flink/streaming/connectors/dynamodb/util/DynamoDbType.java:
##########
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.dynamodb.util;
+
+import org.apache.flink.annotation.Internal;
+
+/**
+ * Enum representing the DynamoDB types.
+ *
+ * <ul>
+ *   <li>String
+ *   <li>Number
+ *   <li>Boolean
+ *   <li>Null
+ *   <li>Binary
+ *   <li>String Set
+ *   <li>Number Set
+ *   <li>Binary Set
+ *   <li>List
+ *   <li>Map
+ * </ul>
+ */
+@Internal
+public enum DynamoDbType {
+
+    // Note: Enums have no stable hash code across different JVMs, use toByteValue() for
+    // this purpose.
+    STRING((byte) 0),
+    NUMBER((byte) 1),
+    BOOLEAN((byte) 2),
+    NULL((byte) 3),
+    BINARY((byte) 4),
+    STRING_SET((byte) 5),
+    NUMBER_SET((byte) 6),
+    BINARY_SET((byte) 7),
+    LIST((byte) 8),
+    MAP((byte) 9);
+
+    private final byte value;
+
+    DynamoDbType(byte value) {
+        this.value = value;
+    }
+
+    /**
+     * Returns the byte value representation of this {@link DynamoDbType}. The byte value is used
+     * for serialization and deserialization.
+     *
+     * <ul>
+     *   <li>"0" represents {@link #STRING}.
+     *   <li>"1" represents {@link #NUMBER}.
+     *   <li>"2" represents {@link #BOOLEAN}.
+     *   <li>"3" represents {@link #NULL}.
+     *   <li>"4" represents {@link #BINARY}.
+     *   <li>"5" represents {@link #STRING_SET}.
+     *   <li>"6" represents {@link #NUMBER_SET}.
+     *   <li>"7" represents {@link #BINARY_SET}.
+     *   <li>"8" represents {@link #LIST}.
+     *   <li>"9" represents {@link #MAP}.
+     * </ul>
+     */
+    public byte toByteValue() {
+        return value;
+    }
+
+    /**
+     * Creates a {@link DynamoDbType} from the given byte value. Each {@link DynamoDbType} has a
+     * byte value representation.
+     *
+     * @see #toByteValue() for mapping of byte value and {@link DynamoDbType}.
+     */
+    public static DynamoDbType fromByteValue(byte value) {
+        switch (value) {
+            case 0:
+                return STRING;
+            case 1:
+                return NUMBER;
+            case 2:
+                return BOOLEAN;
+            case 3:
+                return NULL;
+            case 4:
+                return BINARY;
+            case 5:
+                return STRING_SET;
+            case 6:
+                return NUMBER_SET;
+            case 7:
+                return BINARY_SET;
+            case 8:
+                return LIST;
+            case 9:
+                return MAP;
+            default:
+                throw new UnsupportedOperationException(
+                        "Unsupported byte value '" + value + "' for DynamoDb 
type.");
+        }

Review Comment:
   Same question as above: why not use `ordinal()`?
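
   With `ordinal()`, the switch in `fromByteValue` could collapse to an array lookup. A sketch under the same assumption (declaration order becomes the wire format):

   ```java
   // Cache values() once; each call to values() allocates a fresh array.
   private static final DynamoDbType[] TYPES = DynamoDbType.values();

   public static DynamoDbType fromByteValue(byte value) {
       if (value < 0 || value >= TYPES.length) {
           throw new UnsupportedOperationException(
                   "Unsupported byte value '" + value + "' for DynamoDb type.");
       }
       return TYPES[value];
   }
   ```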



##########
flink-connector-dynamodb/src/main/java/org/apache/flink/streaming/connectors/dynamodb/util/DynamoDbSerializationUtil.java:
##########
@@ -0,0 +1,195 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.dynamodb.util;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.streaming.connectors.dynamodb.sink.DynamoDbWriteRequest;
+import org.apache.flink.streaming.connectors.dynamodb.sink.DynamoDbWriteRequestType;
+
+import software.amazon.awssdk.core.SdkBytes;
+import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
+
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * Serialization Utils for DynamoDb {@link AttributeValue}, which is currently not serializable;
+ * see <a href="https://github.com/aws/aws-sdk-java-v2/issues/3143">open issue</a>
+ */
+@Internal
+public class DynamoDbSerializationUtil {
+
+    public static void serializeWriteRequest(
+            DynamoDbWriteRequest dynamoDbWriteRequest, DataOutputStream out) throws IOException {
+        out.writeByte(dynamoDbWriteRequest.getType().toByteValue());
+        Map<String, AttributeValue> item = dynamoDbWriteRequest.getItem();
+        serializeItem(item, out);
+    }
+
+    public static DynamoDbWriteRequest deserializeWriteRequest(DataInputStream in)
+            throws IOException {
+        int writeRequestType = in.read();
+        DynamoDbWriteRequestType dynamoDbWriteRequestType =
+                DynamoDbWriteRequestType.fromByteValue((byte) writeRequestType);
+        Map<String, AttributeValue> item = deserializeItem(in);
+        return DynamoDbWriteRequest.build().setType(dynamoDbWriteRequestType).setItem(item).build();
+    }
+
+    private static void serializeItem(Map<String, AttributeValue> item, DataOutputStream out)
+            throws IOException {
+        out.writeInt(item.size());
+        for (Map.Entry<String, AttributeValue> entry : item.entrySet()) {
+            out.writeUTF(entry.getKey());
+            AttributeValue value = entry.getValue();
+            serializeAttributeValue(value, out);
+        }
+    }
+
+    private static void serializeAttributeValue(AttributeValue value, DataOutputStream out)
+            throws IOException {
+        if (value.nul() != null) {
+            out.writeByte(DynamoDbType.NULL.toByteValue());
+        } else if (value.bool() != null) {
+            out.writeByte(DynamoDbType.BOOLEAN.toByteValue());
+            out.writeBoolean(value.bool());
+        } else if (value.s() != null) {
+            out.writeByte(DynamoDbType.STRING.toByteValue());
+            out.writeUTF(value.s());
+        } else if (value.n() != null) {
+            out.writeByte(DynamoDbType.NUMBER.toByteValue());
+            out.writeUTF(value.n());
+        } else if (value.b() != null) {
+            out.writeByte(DynamoDbType.BINARY.toByteValue());
+            out.writeInt(value.b().asByteArrayUnsafe().length);
+            out.write(value.b().asByteArrayUnsafe());
+        } else if (value.hasSs()) {
+            out.writeByte(DynamoDbType.STRING_SET.toByteValue());
+            out.writeInt(value.ss().size());
+            for (String s : value.ss()) {
+                out.writeUTF(s);
+            }
+        } else if (value.hasNs()) {
+            out.writeByte(DynamoDbType.NUMBER_SET.toByteValue());
+            out.writeInt(value.ns().size());
+            for (String s : value.ns()) {
+                out.writeUTF(s);
+            }
+        } else if (value.hasBs()) {
+            out.writeByte(DynamoDbType.BINARY_SET.toByteValue());
+            out.writeInt(value.bs().size());
+            for (SdkBytes sdkBytes : value.bs()) {
+                byte[] bytes = sdkBytes.asByteArrayUnsafe();
+                out.writeInt(bytes.length);
+                out.write(bytes);
+            }
+        } else if (value.hasL()) {
+            out.writeByte(DynamoDbType.LIST.toByteValue());
+            List<AttributeValue> l = value.l();
+            out.writeInt(l.size());
+            for (AttributeValue attributeValue : l) {
+                serializeAttributeValue(attributeValue, out);
+            }
+        } else if (value.hasM()) {
+            out.writeByte(DynamoDbType.MAP.toByteValue());
+            Map<String, AttributeValue> m = value.m();
+            serializeItem(m, out);
+        } else {
+            throw new IllegalArgumentException("Attribute value must not be empty: " + value);
+        }
+    }
+
+    private static Map<String, AttributeValue> deserializeItem(DataInputStream in)
+            throws IOException {
+        int size = in.readInt();
+        Map<String, AttributeValue> item = new HashMap<>(size);
+        for (int i = 0; i < size; i++) {
+            String key = in.readUTF();
+            AttributeValue attributeValue = deserializeAttributeValue(in);
+            item.put(key, attributeValue);
+        }
+        return item;
+    }
+
+    private static AttributeValue deserializeAttributeValue(DataInputStream in) throws IOException {
+        int type = in.read();
+        DynamoDbType dynamoDbType = DynamoDbType.fromByteValue((byte) type);
+        return deserializeAttributeValue(dynamoDbType, in);
+    }
+
+    private static AttributeValue deserializeAttributeValue(
+            DynamoDbType dynamoDbType, DataInputStream in) throws IOException {
+        switch (dynamoDbType) {
+            case NULL:
+                return AttributeValue.builder().nul(true).build();
+            case STRING:
+                return AttributeValue.builder().s(in.readUTF()).build();
+            case NUMBER:
+                return AttributeValue.builder().n(in.readUTF()).build();
+            case BOOLEAN:
+                return AttributeValue.builder().bool(in.readBoolean()).build();
+            case BINARY:
+                int length = in.readInt();
+                byte[] bytes = new byte[length];
+                in.readFully(bytes); // read(byte[]) may fill the array only partially
+                return AttributeValue.builder().b(SdkBytes.fromByteArray(bytes)).build();
+            case STRING_SET:
+                int stringSetSize = in.readInt();
+                Set<String> stringSet = new LinkedHashSet<>(stringSetSize);
+                for (int i = 0; i < stringSetSize; i++) {
+                    stringSet.add(in.readUTF());
+                }
+                return AttributeValue.builder().ss(stringSet).build();

Review Comment:
   Why use a `Set` here? Does this mean this is a non-symmetric transform? We could hit a situation where:
   `!originalObject.equals(deserialize(serialize(originalObject)))`
   
   If so, I think we should not be deduplicating here and the transform should be symmetrical.
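
   A round-trip property test would pin this down. A sketch, assuming JUnit 5 and AssertJ on the test classpath and the builder used above:

   ```java
   import static org.assertj.core.api.Assertions.assertThat;

   import java.io.ByteArrayInputStream;
   import java.io.ByteArrayOutputStream;
   import java.io.DataInputStream;
   import java.io.DataOutputStream;
   import java.io.IOException;
   import java.util.Collections;
   import org.apache.flink.streaming.connectors.dynamodb.sink.DynamoDbWriteRequest;
   import org.apache.flink.streaming.connectors.dynamodb.sink.DynamoDbWriteRequestType;
   import org.junit.jupiter.api.Test;
   import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

   class DynamoDbSerializationUtilTest {

       @Test
       void stringSetSurvivesRoundTrip() throws IOException {
           DynamoDbWriteRequest original =
                   DynamoDbWriteRequest.build()
                           .setType(DynamoDbWriteRequestType.PUT)
                           .setItem(Collections.singletonMap(
                                   "tags", AttributeValue.builder().ss("a", "b", "c").build()))
                           .build();

           ByteArrayOutputStream buffer = new ByteArrayOutputStream();
           DynamoDbSerializationUtil.serializeWriteRequest(
                   original, new DataOutputStream(buffer));
           DynamoDbWriteRequest restored =
                   DynamoDbSerializationUtil.deserializeWriteRequest(
                           new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())));

           // Fails if the transform is not symmetric, e.g. if the string set
           // is deduplicated or reordered on the way back in.
           assertThat(restored.getItem()).isEqualTo(original.getItem());
       }
   }
   ```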



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
