Of course, I tried to configure the task slot during a debug test and I
forgot to remove it.
Just out of curiosity, is there any good reason why you've changed the default
parallelism that way? And moreover, is it the only unexpectedly changed
behaviour with respect to the previous API version?
On 14 Oct 2015 18:
Hi Flavio!
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment()
by default picks up the number of cores as the parallelism, while the
manual environments do not do that.
You can still set the parallelism manually:
"env.setParallelism(Runtime.getRuntime().availableProcessors())"
I would also suggest creating a mapper right after the source. Make sure the
mapper is chained to the Kafka source; then you won't really see a big
delay in the timestamp written to Redis.
Just out of curiosity, why do you need to write a timestamp to redis for
each record from Kafka?
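A rough sketch of that chained-mapper suggestion, assuming the Jedis client and
placeholder topic/host names (FlinkKafkaConsumer082 is the consumer class shipped
with the 0.9/0.10 Kafka connector); treat it as an illustration, not a tested
benchmark setup:

import java.util.Properties;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import redis.clients.jedis.Jedis;

public class KafkaToRedisTimestamp {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("zookeeper.connect", "localhost:2181");
        props.setProperty("group.id", "benchmark");

        env.addSource(new FlinkKafkaConsumer082<>("my-topic", new SimpleStringSchema(), props))
           // The mapper is chained to the Kafka source, so the timestamp is taken
           // with essentially no extra latency before the record continues downstream.
           .map(new MapFunction<String, String>() {
               private transient Jedis jedis;

               @Override
               public String map(String record) {
                   if (jedis == null) {
                       jedis = new Jedis("localhost");
                   }
                   jedis.set(record, String.valueOf(System.currentTimeMillis()));
                   return record;
               }
           })
           .print();

        env.execute("kafka-to-redis-timestamp");
    }
}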
On Wed, Oct 14
Yes, we’re onto the exactly-once sinks; we're trying to write RCFiles (Parquet and
ORC files are not compatible because of their footer).
It seems to be working perfectly.
As expected, Flink is falling back to .valid-length metadata on HDFS 2.6 (and
2.3).
From: Robert Metzger [mailto:rmetz...@apache.org]
Great. We are shading Curator into a different location now; that's why you
can't find it anymore.
I suspect you're trying out our new exactly-once filesystem sinks. Please
let us know how well it's working for you and whether you're missing something.
It's a pretty new feature :)
Also note that you can
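For reference, a hedged sketch of what using the new exactly-once filesystem sink
looks like with the 0.10-SNAPSHOT RollingSink; the base path, bucket format,
checkpoint interval and socket source are placeholders, not from this thread:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.DateTimeBucketer;
import org.apache.flink.streaming.connectors.fs.RollingSink;

public class ExactlyOnceHdfsSink {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // The sink's exactly-once guarantee relies on checkpointing being enabled.
        env.enableCheckpointing(5000);

        DataStream<String> stream = env.socketTextStream("localhost", 9999);

        // Rolls files per time bucket; on Hadoop < 2.7 it writes .valid-length
        // metadata on recovery instead of truncating, as discussed below.
        RollingSink<String> sink = new RollingSink<>("hdfs:///tmp/flink-output");
        sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HH"));

        stream.addSink(sink);
        env.execute("exactly-once-hdfs-sink");
    }
}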
Yes … You’re right.
Anyway, adding the log4j jar solved the issue and our app is working properly,
thanks!
About Curator, I just observed that it was not there anymore when comparing the
old and new fat jars. But it's probably inside another dependency now; anyway, there
is no Curator-related error.
One more thing regarding the truncate method: it's supported as of Hadoop
2.7.0 (https://issues.apache.org/jira/browse/HDFS-3107)
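If it helps, a small hedged sketch of how one might check for truncate support at
runtime. The reflection-based check is only an illustration, relying on the fact
that FileSystem#truncate(Path, long) exists from Hadoop 2.7.0 onwards; it is not a
copy of Flink's internal logic:

import java.lang.reflect.Method;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateSupport {
    public static boolean isTruncateSupported(FileSystem fs) {
        try {
            // FileSystem#truncate(Path, long) was added in Hadoop 2.7.0 (HDFS-3107).
            Method truncate = fs.getClass().getMethod("truncate", Path.class, long.class);
            return truncate != null;
        } catch (NoSuchMethodException e) {
            // Older Hadoop: a sink has to fall back to .valid-length metadata.
            return false;
        }
    }
}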
On Wed, Oct 14, 2015 at 5:00 PM, Robert Metzger wrote:
> Ah, I know what's causing this issue.
> In the latest 0.10-SNAPSHOT, we have removed log4j from the fat jar.
>
Ah, I know what's causing this issue.
In the latest 0.10-SNAPSHOT, we have removed log4j from the fat jar.
Can you copy everything from the lib/ folder of your Maven build into the
lib/ folder of your Flink installation?
Log4j is now in a separate jar in the lib/ folder.
What about the curator
Great.
Which classes can it not find at runtime?
I'll try to build and run Flink with exactly the command you've provided.
On Wed, Oct 14, 2015 at 4:49 PM, Gwenhael Pasquiers <
gwenhael.pasqui...@ericsson.com> wrote:
> Hi Robert !
>
>
>
> I’m using "mvn clean install -DskipTests -Dhadoop.version=2.6.0".
The first class that it cannot find is org.apache.log4j.Level.
The org.apache.log4j package is not present in the fat jar I get from the mvn
command, but it is in the one you distributed on your website.
From: Robert Metzger [mailto:rmetz...@apache.org]
Sent: Wednesday, 14 October 2015 16:54
To:
Hi Robert !
I’m using "mvn clean install -DskipTests -Dhadoop.version=2.6.0".
From: Robert Metzger [mailto:rmetz...@apache.org]
Sent: Wednesday, 14 October 2015 16:47
To: user@flink.apache.org
Subject: Re: Building Flink with hadoop 2.6
Hi Gwen,
can you tell us the "mvn" command you're using for building Flink?
Hi Gwen,
can you tell us the "mvn" command you're using for building Flink?
On Wed, Oct 14, 2015 at 4:37 PM, Gwenhael Pasquiers <
gwenhael.pasqui...@ericsson.com> wrote:
> Hi ;
>
>
>
> We need to test some things with Flink and Hadoop 2.6 (the truncate method).
>
>
>
> We’ve set up a build task on our Jenkins and everything seems okay.
Hi Juho,
sorry for the late reply, I was busy with Flink Forward :)
The Flink Kafka Consumer needs both addresses.
Kafka uses the bootstrap servers to connect to the brokers to consume
messages.
The ZooKeeper connection is used to commit the offsets of the consumer
group once a state snapshot has been completed.
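In practice that means passing both addresses in the consumer's Properties; a short
sketch with placeholder hosts, topic and group id (FlinkKafkaConsumer082 is the
0.9/0.10 connector class):

import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaConsumerAddresses {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        // Used to connect to the brokers and consume messages:
        props.setProperty("bootstrap.servers", "kafka-broker:9092");
        // Used to commit the consumer group's offsets once a snapshot completes:
        props.setProperty("zookeeper.connect", "zookeeper-host:2181");
        props.setProperty("group.id", "my-consumer-group");

        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer082<>("my-topic", new SimpleStringSchema(), props));

        stream.print();
        env.execute("kafka-consumer-addresses");
    }
}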
Hi ;
We need to test some things with Flink and Hadoop 2.6 (the truncate method).
We've set up a build task on our Jenkins and everything seems okay.
However, when we replace the original jar from your 0.10-SNAPSHOT distribution
with ours, there are some missing dependencies (log4j, Curator, and maybe
Hi Fabian and Stephan, back to work :)
I finally managed to find the cause of the parallelism problem encountered by my
colleague!
Basically, it was introduced by this API change. Before, I was using
env.setConfiguration() to merge the default params with some custom ones.
Now, after the API change I wa
> On 10 Oct 2015, at 22:59, snntr wrote:
>
> Hey everyone,
>
> I was having the same problem with S3 and found this thread very useful.
> Everything works fine now, when I start Flink from my IDE, but when I run
> the jar in local mode I keep getting
>
> java.lang.IllegalArgumentException: A
If you're using Scala, then you're bound to a maximum of 22 fields in a
tuple, because the Scala library does not provide larger tuples. You could
generate your own case classes which have more than 22 fields, though.
On Oct 14, 2015 11:30 AM, "Ufuk Celebi" wrote:
>
> > On 13 Oct 2015, at 16:
> On 12 Oct 2015, at 22:47, Jerry Peng wrote:
>
> Hello,
>
> I am trying to do some benchmark testing with Flink streaming. When Flink
> reads a message in from Kafka, I want to write a timestamp to Redis. How can
> I modify the existing Kafka consumer code to do this? What would be easies
> On 13 Oct 2015, at 16:06, schul...@informatik.hu-berlin.de wrote:
>
> Hello,
>
> I am currently working on a compilation unit translating AsterixDB's AQL
> into runnable Scala code for Flink's Scala API. During code generation I
> discovered some things that are quite hard to work around. I am
Hi guys!
I'm sorry I abandoned this thread, but I had to give up Flink for some
time. Now I'm back and would like to resurrect it. Flink has
evolved rapidly in this time too, so maybe new features will allow me to do what
I want. By the way, I heard really only good stuff about you fro