Hi Aljoscha,
Thanks for your reply.
Do you have a suggestion for how we can work around it?
We have a production system running with Flink, and it is mandatory to
add one more field to the state.
Maybe we could somehow write our own serializer?
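For example, something along the lines of this rough sketch (the class
and field names are hypothetical), where a format version is written
first so that state bytes serialized before the new field existed can
still be read back:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.Serializable;

// Hypothetical state class with a hand-rolled, versioned byte format.
public class UserState implements Serializable {
    static final int CURRENT_VERSION = 2;

    String userId;     // present since version 1
    long lastSeen;     // present since version 1
    int loginCount;    // the new field, introduced in version 2

    void write(DataOutputStream out) throws IOException {
        out.writeInt(CURRENT_VERSION);
        out.writeUTF(userId);
        out.writeLong(lastSeen);
        out.writeInt(loginCount);
    }

    static UserState read(DataInputStream in) throws IOException {
        int version = in.readInt();
        UserState state = new UserState();
        state.userId = in.readUTF();
        state.lastSeen = in.readLong();
        // State written before version 2 carries no loginCount;
        // fall back to a default value.
        state.loginCount = version >= 2 ? in.readInt() : 0;
        return state;
    }
}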
Thanks,
Konstantin
Hi guys,
I have a Flink streaming job running (version 1.2.0) which has the
following state:
private transient ValueState>> userState;
With the following configuration:
final ValueStateDescriptor>> descriptor =
new ValueStateDescriptor<>("userState",
TypeInformation.of(new UserTyp
Thanks for your comment.
If I write the KafkaPartitioner anyway, I have to somehow pass the
kafka.producer.Partitioner, which is not so easy.
So I found the easiest solution for this: you can just pass a null
value:
outputStream.addSink(new
FlinkKafkaProducer010<>(producerProperties.getProp
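In full, the call would look roughly like this (a sketch: the "topic"
property key and SimpleStringSchema are assumptions, not my actual job
code). With a null partitioner, Flink defers to the Kafka client's own
partitioning, which is round-robin for records without a key:

outputStream.addSink(new FlinkKafkaProducer010<>(
        producerProperties.getProperty("topic"), // hypothetical property key
        new SimpleStringSchema(),                // stand-in serialization schema
        producerProperties,
        null));                                  // no Flink-side partitioner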
Exactly, I did it like this. The only thing is that I am using Flink
version 1.2.0, and in this version the class name is KafkaPartitioner.
But the problem is that I would not like to "fork" Kafka's source code.
(Please check my first comment.)
Thanks,
Konstantin
Hi Chesnay,
Thanks for your reply.
I would like to use the partitioner within the Kafka sink operation.
By default, the Kafka sink uses FixedPartitioner:
public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema
serializationSchema, Properties producerConfig) {
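If I remember the 1.2.x connector source correctly, that constructor
delegates roughly like this (paraphrased, not the verbatim code):

public FlinkKafkaProducer010(String topicId,
                             KeyedSerializationSchema<T> serializationSchema,
                             Properties producerConfig) {
    // FixedPartitioner pins each sink subtask to a single Kafka partition.
    this(topicId, serializationSchema, producerConfig, new FixedPartitioner<T>());
}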
Hey,
I would like to use a round-robin Kafka partitioner in Apache Flink
(Kafka's default one).
I forked the code from Kafka's DefaultPartitioner class:
public class HashPartitioner extends KafkaPartitioner implements
Serializable {
private final AtomicInteger counter = new AtomicInte
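Completed, the class looks like this (a sketch against the 1.2.x
KafkaPartitioner signature, which I may be misremembering; the bit mask
keeps the index non-negative once the counter overflows):

import java.io.Serializable;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner;

// Round-robin over all partitions, ignoring the record key entirely.
public class HashPartitioner<T> extends KafkaPartitioner<T> implements Serializable {

    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public int partition(T next, byte[] serializedKey, byte[] serializedValue,
                         int numPartitions) {
        return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
    }
}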
Hi Gordon,
Thanks again for your answer.
But I am not sure if I understood this part:
"The workaround, for now, would be to explicitly disable chaining of the
consumer source with any stateful operators before taking the savepoint and
changing the operator UID."
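My reading of it, as a sketch ("StatefulCounter" stands in for our real
stateful operator, and the UID strings are placeholders):

env.addSource(new FlinkKafkaConsumer010<>(properties.getProperty(TOPIC),
        deserializer, properties))
   .uid("kafka-source")          // stable ID for the consumer's state
   .map(new StatefulCounter())   // stand-in for the stateful operator
   .disableChaining()            // keep it out of the source's chain
   .uid("stateful-counter");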
So my code looks like this:
Hi Gordon,
Thanks for your quick reply.
I have the following consumer:
jobConfiguration.getEnv().addSource(
new FlinkKafkaConsumer010<>(properties.getProperty(TOPIC), deserializer,
properties));
How can I set the UID for the consumer?
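From the javadocs, my guess is that it is a call on the stream returned
by addSource, something like this (the UID string is a placeholder that
must stay stable across savepoints):

jobConfiguration.getEnv()
        .addSource(new FlinkKafkaConsumer010<>(properties.getProperty(TOPIC),
                deserializer, properties))
        .uid("kafka-consumer-source");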
Thanks again for your help!
Regards,
Konstantin
Hi guys,
We have a running Apache Flink streaming job which interacts with
Apache Kafka (as both consumer and producer).
Now we would like to change the Kafka cluster without losing Flink's
state.
Is it possible to do this? If yes, what is the right way to do it?
Thanks in advance!
Best,
Konstantin