Thanks Stephan and Till.
Since I couldn't make a working example of Flink and Kafka after struggling for a
week, I have to temporarily stop the evaluation work and switch to other
tasks. I hope that in the near future someone can come up with a working sample
of KafkaWordCount, similar to that of Spark's.
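For reference, a minimal KafkaWordCount on Flink's DataStream API would look roughly
like the sketch below. This is only a sketch under assumptions: it needs the
flink-connector-kafka dependency bundled into the job jar, the consumer class name
(FlinkKafkaConsumer here) differs between Flink releases, and the broker address,
group id and topic name are placeholders.

import java.util.Properties;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

public class KafkaWordCount {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker list
        props.setProperty("group.id", "kafka-wordcount");         // placeholder group id

        // Consume lines of text from the (placeholder) "words" topic.
        DataStream<String> lines = env.addSource(
                new FlinkKafkaConsumer<>("words", new SimpleStringSchema(), props));

        // Tokenize each line and count occurrences per word.
        DataStream<Tuple2<String, Integer>> counts = lines
                .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                        for (String word : line.toLowerCase().split("\\W+")) {
                            if (!word.isEmpty()) {
                                out.collect(new Tuple2<>(word, 1));
                            }
                        }
                    }
                })
                .keyBy(value -> value.f0)
                .sum(1);

        counts.print();
        env.execute("Kafka WordCount");
    }
}

The connector is not part of the Flink distribution, so it has to be shaded into the
job jar before submitting the job with bin/flink run against a running Kafka broker.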
Hi Stephan,
yes, now it is solved. I was running an older version of the client by
mistake.
Thanks
Juan
On Thu, 2015-07-23 at 15:31 +0200, Stephan Ewen wrote:
> Hi!
>
>
> Seems that you have different versions of the code running locally and
> on the cluster. Is it possible that you did a code update locally (client),
> and forgot to update the cluster (or the other way around)?
Hi!
Seems that you have different versions of the code running locally and on
the cluster. Is it possible that you did a code update locally (client),
and forgot to update the cluster (or the other way around)?
Greetings,
Stephan
On Thu, Jul 23, 2015 at 3:10 PM, Juan Fumero <juan.jose.fumero.a
Hi,
When I execute the app from Java on a cluster, the user app is
blocked at this point:
log4j:WARN No appenders could be found for logger
(org.apache.flink.api.java.ExecutionEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.
I guess you are starting Flink via the scripts. There is a file
conf/log4j.properties. Change
log4j.rootLogger=INFO, file
to
log4j.rootLogger=DEBUG, file
– Ufuk
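To make that concrete, assuming the stock conf/log4j.properties shipped with the
Flink distribution, only the root logger line changes; the appender settings below
it stay as they are:

# conf/log4j.properties
# before: log4j.rootLogger=INFO, file
log4j.rootLogger=DEBUG, file

The new level is only picked up at JVM start, so the JobManager and TaskManagers
need to be restarted after editing the file.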
On 23 Jul 2015, at 14:44, Lydia Ickler wrote:
> Hi Ufuk,
>
> no, I don’t mind!
> Where would I change the log level?
>
> Best regards,
> Lydia
I think you should also see some exceptions in the HBase server
logs... for some reason it seems that HBase closes the client connections
On Thu, Jul 23, 2015 at 2:44 PM, Lydia Ickler wrote:
> Hi Ufuk,
>
> no, I don’t mind!
> Where would I change the log level?
>
> Best regards,
> Lydia
>
> > On 23.07.2015, at 14:41, Ufuk Celebi wrote:
Hi Ufuk,
no, I don’t mind!
Where would I change the log level?
Best regards,
Lydia
> On 23.07.2015, at 14:41, Ufuk Celebi wrote:
>
> Hey Lydia,
>
> it looks like the HBase client is losing its connection to HBase. Before
> that, everything seems to be working just fine (X rows are read etc.).
Hey Lydia,
it looks like the HBase client is losing its connection to HBase. Before that,
everything seems to be working just fine (X rows are read etc.).
Do you mind setting the log level to DEBUG and then posting the logs again?
– Ufuk
On 23 Jul 2015, at 14:12, Lydia Ickler wrote:
> Hi,
>
Hi,
I am trying to read data from an HBase table via HBaseReadExample.java.
Unfortunately, my run always gets stuck at the same position.
Do you guys have any suggestions?
In the master node it says:
14:05:04,239 INFO org.apache.flink.runtime.jobmanager.JobManager - Received job bb9
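For readers who don't have it at hand, HBaseReadExample boils down to subclassing
the TableInputFormat from the flink-hbase addon. A rough, self-contained sketch
under assumptions follows: the table name, column family and qualifier are
placeholders, and the addon's package name varies across Flink versions.

import org.apache.flink.addons.hbase.TableInputFormat;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseReadSketch {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // HBase connection settings come from the hbase-site.xml on the classpath.
        DataSet<Tuple2<String, String>> rows = env.createInput(
                new TableInputFormat<Tuple2<String, String>>() {
                    @Override
                    protected String getTableName() {
                        return "test-table"; // placeholder table name
                    }

                    @Override
                    protected Scan getScanner() {
                        // Restrict the scan to one column (placeholder family/qualifier).
                        return new Scan().addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q1"));
                    }

                    @Override
                    protected Tuple2<String, String> mapResultToTuple(Result r) {
                        String key = Bytes.toString(r.getRow());
                        String value = Bytes.toString(
                                r.getValue(Bytes.toBytes("f1"), Bytes.toBytes("q1")));
                        return new Tuple2<>(key, value);
                    }
                });

        rows.print();
    }
}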
Yes it does. :-) I have implemented it with Hadoop1 and Hadoop2.
Essentially I have extended the HadoopOutputFormat, reusing part of the code
of HadoopOutputFormatBase, and set the MongoOutputCommitter to replace
the FileOutputCommitter.
saluti,
Stefano
Stefano Bortoli, PhD
ENS Technical Di
Does this make the MongoHadoopOutputFormat work for you?
On Thu, Jul 23, 2015 at 12:44 PM, Stefano Bortoli wrote:
> https://issues.apache.org/jira/browse/FLINK-2394?filter=-2
>
> Meanwhile, I have implemented the MongoHadoopOutputFormat overriding the open,
> close and finalizeGlobal methods.
>
> saluti,
> Stefano
https://issues.apache.org/jira/browse/FLINK-2394?filter=-2
Meanwhile, I have implemented the MongoHadoopOutputFormat overriding the open,
close and finalizeGlobal methods.
saluti,
Stefano
2015-07-22 17:11 GMT+02:00 Stephan Ewen :
> Thanks for reporting this, Stefano!
>
> Seems like the HadoopOutpu
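For context, once the committer handling tracked in FLINK-2394 is sorted out, the
plain wiring (as opposed to Stefano's custom MongoHadoopOutputFormat, which overrides
open, close and finalizeGlobal) is just Flink's generic Hadoop output wrapper around
mongo-hadoop's output format. The sketch below shows that wiring under assumptions:
the mongo-hadoop class names (MongoOutputFormat, MongoConfigUtil, BSONWritable) and
the output URI are assumptions about that library, and it needs the mongo-hadoop core
jar plus Flink's Hadoop compatibility classes on the classpath.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.bson.BasicBSONObject;

import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.io.BSONWritable;
import com.mongodb.hadoop.util.MongoConfigUtil;

public class MongoWriteSketch {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Point mongo-hadoop at the target collection (placeholder URI).
        Job job = Job.getInstance();
        MongoConfigUtil.setOutputURI(job.getConfiguration(),
                "mongodb://localhost:27017/testdb.words");

        // A couple of (word, count) pairs turned into BSON documents.
        DataSet<Tuple2<Text, BSONWritable>> docs = env
                .fromElements(Tuple2.of("hello", 1), Tuple2.of("world", 2))
                .map(new MapFunction<Tuple2<String, Integer>, Tuple2<Text, BSONWritable>>() {
                    @Override
                    public Tuple2<Text, BSONWritable> map(Tuple2<String, Integer> t) {
                        BasicBSONObject doc = new BasicBSONObject();
                        doc.put("word", t.f0);
                        doc.put("count", t.f1);
                        return new Tuple2<>(new Text(t.f0), new BSONWritable(doc));
                    }
                });

        // Wrap mongo-hadoop's OutputFormat in Flink's Hadoop compatibility wrapper.
        docs.output(new HadoopOutputFormat<Text, BSONWritable>(
                new MongoOutputFormat<Text, BSONWritable>(), job));

        env.execute("Write to MongoDB");
    }
}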