Hi to all,
I am trying to make Flink work with Kafka, but I always get the following
exception. It works perfectly on my laptop, but when I try to run it on the
cluster it always fails.
java.lang.Exception
    at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(Lega
> Is this what you're looking for?
>
> Also, note that if you have a very large graph, you should avoid using
> collect() and fromCollection().
>
> -Vasia.
>
> On 25 November 2015 at 18:03, Stefanos Antaris <antaris.stefa...@gmail.com> wrote:
> You could store the values to disk and have a
> custom input format to read them into datasets. Would that work for you?
>
> -Vasia.
>
> On 25 November 2015 at 15:09, Stefanos Antaris <antaris.stefa...@gmail.com> wrote:
> Hi to all,
>
> I am working on a project
Hi to all,
I am working on a project with Gelly and I need to create a graph with billions
of nodes. Although I already have the edge list, each node in the graph needs to
be a POJO, whose construction takes a long time before the final graph can be
created. Is it possible to store the
the NodeManager hosts is not correct (pointing to
> localhost instead of the master)
>
> On Thu, Nov 19, 2015 at 11:41 AM, Stefanos Antaris
> wrote:
> Hi to all,
>
> I am trying to use Flink with Hadoop YARN, but I am facing an exception while
> trying to create a
before configuring the storageDir.
#
# recovery.zookeeper.storageDir: hdfs:///recovery
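For context, `storageDir` is only one piece of the ZooKeeper recovery setup; in Flink versions of that era the HA-related section of `flink-conf.yaml` looked roughly like the sketch below. The ZooKeeper host names are placeholders, and key names should be checked against the documentation of your exact Flink version.

```
# Sketch of a flink-conf.yaml HA section (host names are placeholders):
recovery.mode: zookeeper
recovery.zookeeper.quorum: zkhost1:2181,zkhost2:2181,zkhost3:2181
recovery.zookeeper.storageDir: hdfs:///recovery
```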
Thanks in advance,
Stefanos Antaris
015 at 5:43 PM, Andra Lungu wrote:
>
>> Hi,
>>
>> You can use something like iostat to extract the CPU usage.
>> For instance, I call this script on the JM node:
>> #!/usr/bin/env bash
>>
>> lines=`cat /home/andra.lungu/hostnames.txt | paste -d, -s`
Hi to all.
I am working on a research project using Flink and I would like to extract the
CPU and RAM resources consumed on each worker, in order to include the stats in
my paper. Can anyone advise me on how I could extract them?
Thanks in advance.
Stefanos
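Following up on the iostat suggestion above, here is a minimal self-contained sketch that samples CPU busy percentage and used memory on one worker by reading /proc directly. It assumes Linux workers with /proc mounted (and a kernel recent enough to report MemAvailable); the sampling interval, sample count, and output format are my own choices, and the per-host results would still need to be collected, e.g. via the hostnames.txt loop from the script above.

```shell
#!/usr/bin/env bash
# Sample CPU busy % and used memory (MB) on the local worker, once per second.

samples=${1:-2}

# Read the aggregate "cpu" line of /proc/stat: column 5 is idle jiffies,
# the sum of all columns is total jiffies.
read cpu prev_idle prev_total < <(awk '/^cpu /{idle=$5; total=0;
    for(i=2;i<=NF;i++) total+=$i; print $1, idle, total}' /proc/stat)

for _ in $(seq "$samples"); do
    sleep 1
    read cpu idle total < <(awk '/^cpu /{idle=$5; total=0;
        for(i=2;i<=NF;i++) total+=$i; print $1, idle, total}' /proc/stat)
    # Busy % over the last interval: 100 * (delta_total - delta_idle) / delta_total.
    busy=$(( 100 * ( (total - prev_total) - (idle - prev_idle) ) / (total - prev_total) ))
    # Used memory = MemTotal - MemAvailable, converted from kB to MB.
    used_mb=$(awk '/MemTotal/{t=$2} /MemAvailable/{a=$2} END{print int((t-a)/1024)}' /proc/meminfo)
    echo "$(date +%s) cpu_busy=${busy}% mem_used=${used_mb}MB"
    prev_idle=$idle; prev_total=$total
done
```

Running this in the background on each worker for the duration of a job, then averaging the logged samples per host, gives exactly the per-worker CPU and RAM figures asked about above.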