Sihua Zhou created FLINK-9661:
-------------------------------------
Summary: TTL state should support time shift after restoring
from a checkpoint (savepoint).
Key: FLINK-9661
URL: https://issues.apache.org/jira/browse/FLINK-9661
Project: Flink
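The idea behind this summary can be sketched as follows: wall-clock time that passes while the job is down would otherwise count against each entry's TTL, so shifting the stored last-access timestamps forward by the downtime preserves the remaining lifetime each entry had at checkpoint time. A minimal illustration in plain Java; the class and method names are hypothetical and not Flink's actual TTL-state API:

```java
// Hypothetical sketch of the time-shift idea from FLINK-9661: shift each
// entry's last-access timestamp forward by the downtime so that remaining
// TTL after restore equals remaining TTL at checkpoint time.
import java.util.HashMap;
import java.util.Map;

public class TtlTimeShiftSketch {
    /** Shift every stored timestamp forward by (restoreTime - checkpointTime). */
    static Map<String, Long> shiftOnRestore(Map<String, Long> lastAccess,
                                            long checkpointTime, long restoreTime) {
        long downtime = restoreTime - checkpointTime;
        Map<String, Long> shifted = new HashMap<>();
        lastAccess.forEach((k, ts) -> shifted.put(k, ts + downtime));
        return shifted;
    }

    static boolean isExpired(long lastAccessTs, long ttlMillis, long now) {
        return now - lastAccessTs >= ttlMillis;
    }

    public static void main(String[] args) {
        Map<String, Long> state = new HashMap<>();
        state.put("key", 1_000L);   // last accessed at t=1000
        long ttl = 5_000L;          // 5s TTL
        long checkpoint = 2_000L;   // checkpoint taken at t=2000
        long restore = 60_000L;     // restored much later

        // Without the shift, the entry looks expired immediately after restore.
        System.out.println(isExpired(state.get("key"), ttl, restore));   // true

        // With the shift, the entry keeps the 4s of TTL it had left at checkpoint time.
        Map<String, Long> shifted = shiftOnRestore(state, checkpoint, restore);
        System.out.println(isExpired(shifted.get("key"), ttl, restore)); // false
    }
}
```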
Leonid Ishimnikov created FLINK-9660:
Summary: Allow passing custom artifacts to Mesos workers
Key: FLINK-9660
URL: https://issues.apache.org/jira/browse/FLINK-9660
Project: Flink
Issue T
Chesnay Schepler created FLINK-9659:
-------------------------------------
Summary: Remove hard-coded sleeps in bucketing sink E2E test
Key: FLINK-9659
URL: https://issues.apache.org/jira/browse/FLINK-9659
Project: Flink
Issue
Chesnay Schepler created FLINK-9658:
-------------------------------------
Summary: Test data output directories are no longer cleaned up
Key: FLINK-9658
URL: https://issues.apache.org/jira/browse/FLINK-9658
Project: Flink
Iss
Looking at the Java/Scala doc for this class [1], it seems it only
supports +1.0 and -1.0 as labels, and there's no mention that you can use
arbitrary positive integers.
I tried your use case and using just +1 and -1 actually works fine.
--
Rong
[1]
https://github.com/apache/flink/blob/master/flink-lib
Chesnay Schepler created FLINK-9657:
-------------------------------------
Summary: Suspicious output from Bucketing sink E2E test
Key: FLINK-9657
URL: https://issues.apache.org/jira/browse/FLINK-9657
Project: Flink
Issue Type
Hi all,
This is just getting stranger… After playing with it for a while, it seems that
if I have a vector whose values are all 0 (i.e. all zeros), it is classified as
-1.0. Any other value in the vector causes it to be classified as 1.0:
=== Predictions
(
Jozef Vilcek created FLINK-9656:
-------------------------------------
Summary: Environment java opts for flink run
Key: FLINK-9656
URL: https://issues.apache.org/jira/browse/FLINK-9656
Project: Flink
Issue Type: Improvement
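This improvement request presumably builds on Flink's existing JVM-options settings. As a hedged example of what is already available (option and variable names as I recall them from the Flink docs and `bin/config.sh`; verify against your version):

```shell
# flink-conf.yaml already supports JVM options for the cluster processes, e.g.:
#   env.java.opts: "-XX:+UseG1GC"
# For the client, bin/config.sh reads FLINK_ENV_JAVA_OPTS from the environment,
# so something like the following affects the JVM started by `flink run`:
export FLINK_ENV_JAVA_OPTS="-Dmy.prop=value"
./bin/flink run ./examples/streaming/WordCount.jar
```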
Chesnay Schepler created FLINK-9655:
-------------------------------------
Summary: Externalized checkpoint E2E test fails on travis
Key: FLINK-9655
URL: https://issues.apache.org/jira/browse/FLINK-9655
Project: Flink
Issue Ty
Hi, Amol
Yes, I think it is. But env.setParallelism(80) sets a global
parallelism for all operators. Whether you set it on individual operators
depends on your job; setting the parallelism on just the source operator is
enough.
Like below, it will be 80 Kafka consumers [also
Hi, Shuyi:
What is the progress of the discussion? We are also looking forward to this
feature.
Thanks.
Shuyi Chen wrote on Fri, Jun 8, 2018 at 3:04 PM:
> Thanks a lot for the comments, Till and Fabian.
>
> The RemoteEnvironment does provide a way to specify jar files at
> construction, but we want the jar files to
Zsolt Donca created FLINK-9654:
-------------------------------------
Summary: Internal error while deserializing custom Scala
TypeSerializer instances
Key: FLINK-9654
URL: https://issues.apache.org/jira/browse/FLINK-9654
Project: Flink
Thanks zhangminglei,
Does this mean that setting env.setParallelism(80) creates 80 Kafka
consumers? And if this is true, can I change env.setParallelism(80) to
any number, i.e. number of partitions = env.setParallelism, or do I need
to restart my job each time I set a new parallelism i
Hi, Amol
As @Sihua said. Also in my case, if the Kafka topic has 80 partitions, I will
set the job's source operator parallelism to 80 as well.
Cheers
Minglei
> On 25 Jun 2018, at 5:39 PM, sihua zhou wrote:
>
> Hi Amol,
>
> I think If you set the parallelism of the source node equal to the number of
> the p
Hi Rong,
As you can see in my test data example, I did change the labels to 8 and
16 instead of 1 and 0.
If SVM always returns +1.0 or -1.0, that would indeed explain where the
1.0 is coming from. But it never gives me -1.0, so there is still something
wrong, as it classifies every
Hi Amol,
I think if you set the parallelism of the source node equal to the number of
partitions of the Kafka topic, you could have one Kafka consumer per
partition in your job. But if the number of partitions of the Kafka topic is
dynamic, the 1:1 relationship might break. I think maybe @Gor
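The 1:1 relationship described above can be illustrated with a round-robin style assignment of partitions to source subtasks. The modulo formula below is a simplified sketch of the idea, not necessarily Flink's exact internal assignment logic:

```java
// Simplified sketch of assigning Kafka partitions to source subtasks.
// With parallelism == partition count, each subtask gets exactly one partition.
public class PartitionAssignmentSketch {
    static int subtaskFor(int partition, int parallelism) {
        return partition % parallelism; // simplified round-robin assignment
    }

    public static void main(String[] args) {
        int partitions = 80, parallelism = 80;
        for (int p = 0; p < partitions; p++) {
            // one partition per subtask when parallelism matches partition count
            if (subtaskFor(p, parallelism) != p) throw new AssertionError();
        }
        // with fewer subtasks than partitions, subtasks read multiple partitions
        System.out.println(subtaskFor(5, 4)); // 1
    }
}
```

This also shows why a dynamic partition count breaks the 1:1 mapping: once partitions outnumber subtasks, the modulo wraps around and some subtasks read more than one partition.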
I have asked the same kind of question on Stack Overflow as well.
Please answer it ASAP:
https://stackoverflow.com/questions/51020018/partition-specific-flink-kafka-consumer
--
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com
*iProgrammer Solutions P
Thanks a lot :)
Piotrek
> On 25 Jun 2018, at 09:29, Fabian Hueske wrote:
>
> Congratulations Piotr!
>
> Good to have you on board :-)
>
> Cheers, Fabian
>
> 2018-06-23 19:25 GMT+02:00 Ufuk Celebi :
>
>> Congrats and welcome Piotr! :-)
>>
>> – Ufuk
>>
>>
>> On Sat, Jun 23, 2018 at 3:54 AM
Hello,
I wrote a streaming program using Kafka and Flink to stream the MongoDB
oplog. I need to maintain an order of streaming within different Kafka
partitions. As global ordering of records is not possible across all
partitions, I need N consumers for N different partitions. Is it possible to
con
Congratulations Piotr!
Good to have you on board :-)
Cheers, Fabian
2018-06-23 19:25 GMT+02:00 Ufuk Celebi :
> Congrats and welcome Piotr! :-)
>
> – Ufuk
>
>
> On Sat, Jun 23, 2018 at 3:54 AM, zhangminglei <18717838...@163.com> wrote:
> > Congrats Piotr!
> >
> > Cheers
> > Minglei
> >> On 2018-6