Just to confirm: are you able to compile and run a program testing Kafka
similar to the following?
import org.apache.kafka.clients.producer.{ProducerConfig, KafkaProducer, ProducerRecord}
import org.apache.flink.streaming.api.environment._
import org.apache.flink.streaming.connectors.kafka
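For reference, a minimal standalone producer along those lines might look like the sketch below. The broker address (`localhost:9092`) and topic name (`test-topic`) are placeholder assumptions; adjust them to your setup.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Minimal smoke test: send a handful of string records to a topic.
// Broker address and topic name are placeholders, not from this thread.
object KafkaSmokeTest {
  def producerProps(brokers: String): Properties = {
    val props = new Properties()
    props.put("bootstrap.servers", brokers)
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props
  }

  def main(args: Array[String]): Unit = {
    val producer = new KafkaProducer[String, String](producerProps("localhost:9092"))
    try {
      (1 to 10).foreach { i =>
        producer.send(new ProducerRecord("test-topic", s"key-$i", s"value-$i"))
      }
    } finally {
      producer.close()
    }
  }
}
```

If this compiles and runs, the Kafka client dependency is on the classpath correctly and the broker is reachable.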
Hi Michele,
I'm happy that you got it to run the way you want.
I guess services such as the HDFS NameNode and YARN's ResourceManager are
running on the master.
I don't know what you are doing on the cluster, but I suspect it is for
experimentation only. As long as you are not maintaining a huge HD
OK, thanks Robert, you have been very clear now! :)
Just one question, more related to EMR than to Flink: if I cannot run anything
on the EMR master, is it useful to allocate a big machine (8 cores, 30 GB)
for it? I thought it was the JobManager, but it is not.
On 27 Jul 2015, at 14:56,
Hi Michele,
> no, in an EMR configuration with 1 master and 5 core nodes I have 5 active
nodes in the ResourceManager…sounds strange to me: Ganglia shows 6 nodes and 1
is always offload
Okay, so there are only 5 machines available to deploy containers to. The
JobManager/ApplicationMaster will also occu
Thank you for posting the full SBT files.
I now understand why you exclude the Kafka dependency from Flink. SBT does
not support reading Maven properties that are only defined in profiles.
I will fix the issue for Flink 0.10 (
https://issues.apache.org/jira/browse/FLINK-2408)
I was not able to reproduce
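In the meantime, a `build.sbt` workaround along these lines should do: exclude the transitive Kafka artifact and pin the Kafka version explicitly. The artifact names and version numbers below are illustrative assumptions, not taken from this thread.

```scala
// build.sbt sketch: the Flink connector's Kafka version comes from a Maven
// profile property that SBT cannot read, so exclude it and declare Kafka
// explicitly. Versions and artifact names below are illustrative.
libraryDependencies ++= Seq(
  "org.apache.flink" % "flink-connector-kafka" % "0.9.0"
    exclude("org.apache.kafka", "kafka_2.10"),
  "org.apache.kafka" % "kafka_2.10" % "0.8.2.1"
)
```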
Hi Ufuk,
yes, I figured out that the HMaster of hbase did not start properly!
Now everything is working :)
Thanks for your help!
Best regards,
Lydia
> On 27.07.2015, at 11:45, Ufuk Celebi wrote:
>
> Any update on this Lydia?
>
> On 23 Jul 2015, at 16:38, Ufuk Celebi wrote:
>
>> Unfortuna
Any update on this Lydia?
On 23 Jul 2015, at 16:38, Ufuk Celebi wrote:
> Unfortunately we don't gain more insight from the DEBUG logs... it looks like
> the connection is just lost. Did you have a look at the *.out files as well?
> Maybe something was printed to sysout.
Hi Fabian, thanks for your reply.
So Flink is using about 50% of the memory for itself, right?
Anyway, now I am running an EMR cluster with 1 master and 5 core nodes, all of
them m3.2xlarge with 8 cores and 30 GB of memory.
I would like to run Flink on YARN with 40 slots on 5 TMs with the maximum
available reso
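For a layout like that, the YARN session could be started roughly as below. The flags are those of Flink's `yarn-session.sh` around the 0.9 release, and the memory values are the ones discussed in this thread; treat the exact numbers as a starting point, not a recommendation.

```shell
# Sketch: 5 TaskManagers with 8 slots each = 40 slots total.
./bin/yarn-session.sh -n 5 -s 8 -jm 2048 -tm 20992
```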
Hi Michele,
the 10506 MB figure refers to the size of Flink's managed memory, whereas the
20992 MB refers to the total amount of TM memory. At start-up, the TM allocates
a fraction of the JVM memory as byte arrays and manages this portion by
itself. The remaining memory is used as regular JVM heap for TM an
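As a back-of-the-envelope illustration of that split (the 0.5 fraction below is inferred from the numbers reported in this thread, not a Flink default; the actual fraction is configurable):

```scala
// Illustrative arithmetic only: Flink carves a fraction of the TaskManager
// memory out as self-managed byte arrays; the rest remains regular JVM heap.
// The 0.5 fraction is inferred from 10506 MB / 20992 MB in this thread.
val totalTmMb = 20992
val managedFraction = 0.5
val managedMb = (totalTmMb * managedFraction).toInt // ~10496 MB, close to the reported 10506 MB
val heapMb = totalTmMb - managedMb                  // left as regular JVM heap for user code
println(s"managed=$managedMb MB, heap=$heapMb MB")
```

So the dashboard warning only reflects that managed memory is a fraction of the container size, not the full 30 GB of the machine.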
I have been able to run 5 TMs with -jm 2048 and -tm 20992 and 8 slots each, but
the Flink dashboard says “Flink Managed Memory 10506 MB” with an exclamation
mark saying it is much smaller than the physical memory (30105 MB)…that’s true,
but I cannot run the cluster with more than 20992.
thanks
On
Hi Robert,
thanks for answering; today I have been able to try again: no, in an EMR
configuration with 1 master and 5 core nodes I have 5 active nodes in the
ResourceManager…sounds strange to me: Ganglia shows 6 nodes and 1 is always
offload
the total amount of memory is 112.5 GB, which is actually 22.5 f
Hi Flavio,
for the user code logic, Flink exclusively uses Java serialization. What you
can do, though, is override the readObject and writeObject methods that are
used by Java serialization. Within these methods you can serialize the
other object you’re referencing.
Cheers,
Till
On Mon, Jul
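A minimal sketch of that approach is below. `Config` is a hypothetical stand-in for the non-serializable object; the wrapper writes its state by hand in `writeObject` and rebuilds it in `readObject`.

```scala
import java.io._

// Sketch: carry a non-serializable dependency through Java serialization
// by serializing its state manually. `Config` is a hypothetical example.
class Config(val url: String) // deliberately NOT Serializable

class MyMapper extends Serializable {
  @transient var config: Config = new Config("jdbc:example")

  private def writeObject(out: ObjectOutputStream): Unit = {
    out.defaultWriteObject()
    out.writeUTF(config.url) // write the non-serializable field's state by hand
  }

  private def readObject(in: ObjectInputStream): Unit = {
    in.defaultReadObject()
    config = new Config(in.readUTF()) // rebuild the object on deserialization
  }
}

// Round-trip to show the field survives serialization:
val bos = new ByteArrayOutputStream()
val oos = new ObjectOutputStream(bos)
oos.writeObject(new MyMapper)
oos.close()
val restored = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray))
  .readObject().asInstanceOf[MyMapper]
println(restored.config.url) // prints jdbc:example
```

With this pattern, Flink can ship function instances to the cluster even though the referenced object itself is not `Serializable`.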
Hi to all,
in my Flink job I initialize some Java objects that don't implement
Serializable in order to use them within some Flink function (e.g. map or
flatMap). At the moment the only way to achieve that is to keep those
operators as private classes inside the main one and reference static fields, or implem