Hi Deng,
The jarFiles parameter of `createRemoteEnvironment` is the path of your
custom library jar. If you don't need a custom library, you can omit the
parameter.
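For example, a minimal sketch (jarFiles is a varargs parameter, so it can
simply be left out; host and port are taken from your snippet below):

ExecutionEnvironment env =
    ExecutionEnvironment.createRemoteEnvironment("flink-master", 6123);
// No custom library jars are shipped to the cluster in this case.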
Regards,
Chiwan Park
> On Sep 25, 2015, at 10:48 AM, Deng Jie wrote:
>
> Dear Flink org, I have the same question, like:
Dear Flink org, I have the same question, like:
public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment
        .createRemoteEnvironment("flink-master", 6123, "/home/user/udfs.jar");
    DataSet<String> data = env.readTextFile("hdfs://path/to/file");
    data.writeAsText("hdfs://path/to/result"); // hypothetical sink to complete the truncated snippet
    env.execute();
}
What I actually meant was partition reassignment (
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
).
No topics were deleted.
We added a bunch of new servers and needed to reassign some partitions to
spread the load.
No, I haven't set t
Hi Jakob,
what exactly do you mean by a rebalance of topics? Did the leader of the
partitions change?
Were topics deleted?
Flink's KafkaConsumer does not try to recover from these exceptions. We
rely on Flink's fault tolerance mechanisms to restart the data consumption
(from the last valid offset).
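A minimal sketch of the mechanism we rely on (the 5-second interval is an
arbitrary choice):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Checkpoint every 5 seconds; after a failure the job is restarted and the
// Kafka source resumes from the offsets in the last completed checkpoint.
env.enableCheckpointing(5000);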
We did some rebalancing of topics in our Kafka cluster today. I had a Flink
job running and it crashed when some of the partitions were moved; other
(non-Flink) consumers continued to work.
Should I configure it differently or could this be a bug?
09/24/2015 15:34:31 Source: Custom Source(3/4)
Makes sense. The generation process seems to be inherently faster than the
consumption process (the Flink program).
Without backpressure, the two will run out of sync, and Kafka does not apply
any backpressure (by design).
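If you need the generator to keep pace with the consumer, a crude rate limit
on the generating side is the usual workaround. A hypothetical sketch (the
println stands in for your Kafka producer call; ~200 events/s is taken from
Rico's earlier numbers):

import java.util.concurrent.locks.LockSupport;

public class ThrottledGenerator {
    public static void main(String[] args) {
        final long intervalNanos = 1_000_000_000L / 200; // cap at ~200 events/s
        long seq = 0;
        while (true) {
            System.out.println("event-" + seq++); // stand-in for producer.send(...)
            LockSupport.parkNanos(intervalNanos);
        }
    }
}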
On Thu, Sep 24, 2015 at 4:51 PM, Rico Bergmann wrote:
> The test data is generated in a Flink program running in a separate JVM.
The test data is generated in a Flink program running in a separate JVM. The
generated data is then written to a Kafka topic, from which my program reads
the data ...
> On 24.09.2015 at 14:53, Aljoscha Krettek wrote:
>
> Hi Rico,
> are you generating the data directly in your Flink program or in some
> external queue, such as Kafka?
Hi Rico!
When you say that the program falls behind the unlimited generating source,
I assume you have some unbounded buffering channel (like Kafka) between the
generator and the Flink job. Is that correct? Flink itself backpressures to
the sources, but if the source is Kafka, this does of course
Hi Rico,
are you generating the data directly in your Flink program or in some external
queue, such as Kafka?
Cheers,
Aljoscha
On Thu, 24 Sep 2015 at 13:47 Rico Bergmann wrote:
> And as a side note:
>
> The problem with duplicates also seems to be solved!
>
> Cheers Rico.
>
>
>
> On 24.09.2015 at 12:21, Rico Bergmann wrote:
I'm actually the last developer that touched the HBase connector, but I never
faced those problems with the version specified in the extension pom.
From what I can tell looking at your logs, it seems that there is some
classpath problem (Failed to classload HBaseZeroCopyByteString:
java.lang.IllegalAccessError
I'm really sorry that you are facing this issue.
I saw your message on the HBase user mailing list [1]. Maybe you can follow
up with Ted so that he can help you.
There are only a few Flink users on this mailing list using it with HBase. I
actually think that the problem is more on the HBase side than on the Flink side.
And as a side note:
The problem with duplicates also seems to be solved!
Cheers Rico.
> On 24.09.2015 at 12:21, Rico Bergmann wrote:
>
> I took a first glance.
>
> I ran two test setups. One used a limited test data generator that outputs
> around 200 events per second. In this setting the new implementation keeps up
> with the incoming message rate.
I am really trying to get HBase to work...
Is there maybe a tutorial covering all the config files?
Best regards,
Lydia
> On 23.09.2015 at 17:40, Maximilian Michels wrote:
>
> In the issue, it states that it should be sufficient to append the
> hbase-protocol.jar file to the Hadoop classpath. Flin
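For reference, appending a jar to the Hadoop classpath typically looks like
this (the jar location is an assumption; adjust it to your installation):

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/hbase/lib/hbase-protocol.jar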
I took a first glance.
I ran two test setups. One used a limited test data generator that outputs around
200 events per second. In this setting the new implementation keeps up with the
incoming message rate.
The other setup generated data at the highest possible rate. There the same
Hi, I tried that, but unfortunately it still gets stuck at the second split.
Could it be that I have misconfigured something? In Hadoop? Or in Flink?
The strange thing is that the HBaseWriteExample works great!
Best regards,
Lydia
> On 23.09.2015 at 17:40, Maximilian Michels wrote:
Hi Rico,
you should be able to get it with these steps:
git clone https://github.com/StephanEwen/incubator-flink.git flink
cd flink
git checkout -t origin/windows
This will get you on Stephan's windowing branch. Then you can do a
mvn clean install -DskipTests
to build it.
I will merge his stuff
Hi!
Sounds great. How can I get the source code before it's merged into the master
branch? Unfortunately, I only have two days left to try this out ...
Greets. Rico.
> On 24.09.2015 at 00:57, Stephan Ewen wrote:
>
> Hi Rico!
>
> We have finished the first part of the Window API reworks. Y