>> memory setups, we have seen slightly better performance using
>> off-heap memory allocation. This can be configured using
>> taskmanager.memory.off-heap: true.
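The setting mentioned above can be sketched as a flink-conf.yaml fragment (only the taskmanager.memory.off-heap key comes from the thread; the comment is explanatory):

```yaml
# flink-conf.yaml
# Allocate TaskManager managed memory off-heap (as direct memory)
# instead of on the JVM heap.
taskmanager.memory.off-heap: true
```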
>>
>> Please let us know if you experience any further issues.
>>
>> Best,
>> Max
>>
>>
> >>> > Thanks. Flink allocates its network memory as direct memory outside
> >>> > the normal Java heap. By default, that is 64MB but can grow up to
> >>> > 128MB on heavy network transfer. How much memory does your machine
> >>> > have? Co
> too low (it constrains the amount of
> direct memory at the moment).
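If the direct memory cap does turn out to be too low, one way to raise it (a sketch, assuming the env.java.opts mechanism described elsewhere in this thread; the 512m value is illustrative, not a recommendation from the thread):

```yaml
# flink-conf.yaml: pass a larger direct-memory cap to the Flink JVMs.
# -XX:MaxDirectMemorySize is the standard HotSpot flag for this limit.
env.java.opts: -XX:MaxDirectMemorySize=512m
```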
>
> Thank you,
> Max
>
>
> On Mon, Oct 19, 2015 at 3:24 PM, Jakob Ericsson
> wrote:
> > Hello,
> >
> > We are running into a strange problem with Direct Memory buffers. From
> > what
> >
Hello,
We are running into a strange problem with Direct Memory buffers. From what
I know, we are not using any direct memory buffers inside our code.
This is a pretty trivial streaming application, just doing some deduplication
and unioning some Kafka streams.
/Jakob
2015-10-19 13:27:59,064 INFO or
> You can set the env.java.opts parameter in your flink-conf.yaml. Here you can basically
> specify all the JVM options which will be given to the JVMs. Thus, in your
> case you could try the following settings: env.java.opts:
> -XX:+UseConcMarkSweepGC.
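The suggested switch back to CMS can be written as a flink-conf.yaml entry (only -XX:+UseConcMarkSweepGC comes from the thread; any further GC tuning flags would be your own additions):

```yaml
# flink-conf.yaml: JVM options handed to all Flink JVMs at startup.
env.java.opts: -XX:+UseConcMarkSweepGC
```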
>
> Cheers,
> Till
>
>
> On Mon, Sep 28, 2015 at 10:
Hi,
I'm testing Flink streaming but seem to have some problems with JVM core
dumps.
I haven't really looked at the heap dump yet.
It seems to be related to the G1 GC.
If I want to go back to CMS-GC, do you have any preferred settings?
#
# A fatal error has been detected by the Java Runtime Environment:
> recover from these exceptions. We
> rely on Flink's fault tolerance mechanisms to restart the data consumption
> (from the last valid offset).
> Have you set setNumberOfExecutionRetries() on the ExecutionConfig?
>
>
> On Thu, Sep 24, 2015 at 9:57 PM, Jakob Ericsson
>
topic doesn't exist anymore.
>
>
> Robert
>
> On Fri, Sep 18, 2015 at 2:21 PM, Jakob Ericsson
> wrote:
>
>> Hit another problem. It is probably related to a topic that still exists
>> in zk but is not used anymore (therefore no partitions), or I want to start
Consumer.java:280)
at
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer081.&lt;init&gt;(FlinkKafkaConsumer081.java:55)
On Fri, Sep 18, 2015 at 11:02 AM, Jakob Ericsson
wrote:
> That will work. We have some utility classes for exposing the ZK-info.
>
> On Fri, Sep 18, 2015 at 10:50 AM, Robert M
re:
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> Kafka has plans to release the new consumer API in October.
> As soon as the new API is out, we'll support it.
>
> I hope this solution is okay for you. If not, please let me know ;)
>
>
> Robert
Hi,
Would it be possible to get the FlinkKafkaConsumer to support multiple
topics, like a list?
Or would it be better to instantiate one FlinkKafkaConsumer per topic and
add as a source?
We have about 40-50 topics to listen for one job.
Or even better, supply a regexp pattern that defines the qu
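The one-consumer-per-topic approach asked about above can be sketched like this (a sketch only: it assumes the FlinkKafkaConsumer081 constructor visible in the stack trace earlier in the thread, and SimpleStringSchema plus all property values are illustrative):

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer081;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class MultiTopicSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "zk-host:2181"); // illustrative
        props.setProperty("group.id", "dedup-job");             // illustrative

        // 40-50 topics in practice; three shown here for brevity.
        String[] topics = {"topic-a", "topic-b", "topic-c"};

        // One consumer per topic, unioned into a single stream.
        DataStream<String> all = null;
        for (String topic : topics) {
            DataStream<String> s = env.addSource(
                    new FlinkKafkaConsumer081<>(topic, new SimpleStringSchema(), props));
            all = (all == null) ? s : all.union(s);
        }

        // ... deduplicate and process `all`, then env.execute() ...
    }
}
```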