Hello Dimitry,
Could you please elaborate on your tuning of
environment.addDefaultKryoSerializer(..) ?
I'm interested in knowing what you have done there to get a boost of about
50%.
Some small or simple example would be very nice.
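I imagine it is something along these lines, but I am only guessing (MyType and MyTypeSerializer below are made-up placeholder names):

import com.esotericsoftware.kryo.{Kryo, Serializer}
import com.esotericsoftware.kryo.io.{Input, Output}
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

// Placeholder type and hand-written serializer, just to illustrate the registration call.
class MyType(var id: Int, var name: String) { def this() = this(0, "") }

class MyTypeSerializer extends Serializer[MyType] {
  override def write(kryo: Kryo, output: Output, obj: MyType): Unit = {
    output.writeInt(obj.id)
    output.writeString(obj.name)
  }
  override def read(kryo: Kryo, input: Input, clazz: Class[MyType]): MyType =
    new MyType(input.readInt(), input.readString())
}

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Make this serializer Kryo's default for MyType instead of the generic
// reflection-based field serializer.
env.addDefaultKryoSerializer(classOf[MyType], classOf[MyTypeSerializer])

Is that the kind of thing you did, or was there more to it?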
Thank you very much in advance.
Kind Regards,
Hello,
Thank you all for the kind replies.
I've got the general idea. I am using Flink version 1.1.3.
So it seems the fix for Meters won't make it into 1.1.4 ?
Best Regards,
Daniel Santos
On 12/05/2016 01:28 PM, Chesnay Schepler wrote:
We don't have to include it
metrics.reporter.graphite.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.graphite.host: CARBONRELAYHOST
metrics.reporter.graphite.port: 2003
Shouldn't the Aerospike Client also be closed ? Or am I missing
something, or doing something wrong ?
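What I have in mind is roughly the following (a sketch only; the host, port and sink logic are placeholders):

import com.aerospike.client.AerospikeClient
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

class AerospikeSink extends RichSinkFunction[String] {
  @transient private var client: AerospikeClient = _

  override def open(parameters: Configuration): Unit = {
    // Created once per parallel sink instance; host and port are placeholders.
    client = new AerospikeClient("aerospike-host", 3000)
  }

  override def invoke(value: String): Unit = {
    // ... write the record using the client ...
  }

  override def close(): Unit = {
    // Release connections and native resources when the task shuts down.
    if (client != null) client.close()
  }
}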
Sorry for the long post.
Best Regards,
memory.
These observations are based on the Flink metrics that are being
sent and recorded to our Graphite system.
Best Regards,
Daniel Santos
On 11/29/2016 04:04 PM, Cliff Resnick wrote:
Are you using the RocksDB backend in native mode? If so then the
off-heap memory may be consumed by RocksDB's native allocations.
taskmanager.numberOfTaskSlots: 8
taskmanager.memory.preallocate: false
taskmanager.network.numberOfBuffers: 12500
taskmanager.memory.off-heap: false
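(For reference, using the RocksDB backend that Cliff mentions would look roughly like the sketch below; the checkpoint path is a placeholder. RocksDB keeps its working state in native, off-heap memory that the JVM heap settings do not account for.)

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// RocksDB stores its working state off-heap, outside the JVM heap.
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"))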
-
What would be the possible causes of such behavior ?
Best Regards,
Daniel Santos
Restoring from a savepoint but with a slightly different
initial value - InitialValueA, an instance of case class A(n1 : Int, n2 : Int) -
will result in an error.
Is it possible to change the initial value between savepoints ?
Maybe by implementing a different/custom serializer for class A ?
Best Regards,
Daniel Santos
Hello,
On the Flink side, do you have checkpointing enabled ?
env.enableCheckpointing(interval = CHKPOINT_INTERVAL)
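Spelled out, what I mean is something like this (CHKPOINT_INTERVAL is just whatever interval you use, in milliseconds):

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val CHKPOINT_INTERVAL = 5000L // milliseconds; pick your own value
val env = StreamExecutionEnvironment.getExecutionEnvironment
// Periodic checkpoints are what allow operator state (including the
// Kafka consumer offsets) to be restored after a failure or restart.
env.enableCheckpointing(interval = CHKPOINT_INTERVAL)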
Regards,
Daniel Santos
On 11/08/2016 12:30 PM, vinay patil wrote:
Yes Kafka and Flink connect to that zookeeper only.
Not sure why it is not listing the consumer
Regards,
Vinay
So the broker is connected to that zookeeper, correct ?
And kafka and flink connect to that zookeeper only ?
Best Regards,
Daniel Santos
On 11/08/2016 11:18 AM, vinay patil wrote:
Hi Daniel,
Yes, I have specified the zookeeper host in the server.properties file, so
the broker is connected to zookeeper.
https://kafka.apache.org/document
On the flink side I would configure
props.setProperty("zookeeper.connect", zkHosts) the same way, resulting in:
props.setProperty("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181/kafka09")
That is what I mean by broker's path.
Best Regards,
Daniel Santos
Use the /kafka09 chroot only if you
have set it on kafka config. Ignore it
otherwise, resulting in "zk1:2181,zk2:2181,zk3:2181" .
After that -> val source = env.addSource(new
FlinkKafkaConsumer09[String](KAFKA_TOPIC, new SimpleStringSchema(), props))
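Putting it all together, my setup looks roughly like this (the bootstrap.servers value is the one piece not shown above, so treat it as a placeholder, as are the hosts and topic):

import java.util.Properties
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

val env = StreamExecutionEnvironment.getExecutionEnvironment
val KAFKA_TOPIC = "my-topic" // placeholder

val props = new Properties()
props.setProperty("bootstrap.servers", "kafka1:9092,kafka2:9092,kafka3:9092")
// Append the /kafka09 chroot only if the brokers were configured with it.
props.setProperty("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181")
props.setProperty("group.id", "prod")

val source = env.addSource(
  new FlinkKafkaConsumer09[String](KAFKA_TOPIC, new SimpleStringSchema(), props))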
Then on kafkamanager -> consumers I have the groupID prod.
... and for viewing purposes only.
Best Regards,
Daniel Santos
On November 7, 2016 7:13:54 PM GMT+00:00, Vinay Patil wrote:
>Hi,
>
>I am monitoring Kafka using KafkaManager for checking offset lag and
>other Kafka metrics, however I am not able to see the consumers when I
Hi,
Thank you Ufuk.
Hmm. Out of curiosity.
Is there any idea of when 1.2 will be released?
Best Regards,
Daniel Santos
On November 7, 2016 12:45:51 PM GMT+00:00, Ufuk Celebi wrote:
>On 7 November 2016 at 13:06:16, Daniel Santos (dsan...@cryptolab.net) wrote:
>> I believe t
?
Best Regards,
Daniel Santos
On 11/04/2016 05:54 PM, Aljoscha Krettek wrote:
Hi,
the state of the window is kept by the WindowOperator (which uses the
state descriptor you mentioned to access the state). The FoldFunction
does not itself keep the state but is only used to update the state
in
In that situation, how does flink keep track of
the accumulator ?
Because from my tests it seems that it does not.
Every time the program/stream/job is restarted the accumulator starts from
the initial value.
Kind Regards,
Daniel Santos
On 11/04/2016 11:01 AM, Aljoscha Krettek wrote:
Hi Daniel,
Flink will ch
Hello,
I have a question that has been bugging me.
Let's say we have a Kafka Source.
Checkpointing is enabled, with a period of 5 seconds.
We have an FS backend ( Hadoop ).
Now imagine we have a tumbling window of 10 minutes.
For simplicity we are going to say that we are counting all elements in the window.
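In code, the shape of the job I have in mind is roughly this (a sketch only; the topic, hosts and checkpoint path are placeholders, and the fold just counts elements):

import java.util.Properties
import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.enableCheckpointing(5000)                                        // checkpoint every 5 seconds
env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints")) // FS backend on Hadoop

val props = new Properties()
props.setProperty("bootstrap.servers", "kafka1:9092")
props.setProperty("group.id", "prod")

env
  .addSource(new FlinkKafkaConsumer09[String]("my-topic", new SimpleStringSchema(), props))
  .map(_ => 1L)
  .timeWindowAll(Time.minutes(10))         // 10 minute tumbling window
  .fold(0L)((count, one) => count + one)   // the accumulator in question: the element count
  .print()

env.execute("count-elements")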