Hello, thank you very much.
I took a look at the link, but now how can I check the conditions to get
aggregator results?
On Fri, Aug 24, 2018 at 5:27, aitozi () wrote:
> Hi,
>
> Flink CEP still does not support aggregator functions in an
> IterativeCondition. Maybe you need to check the
Hello,
I am developing an application using Flink (v1.4.2) CEP. Is there
any aggregation function to match cumulative amounts or counts in an
IterativeCondition within a period of time for keyed elements?
If a cumulative amount reaches a threshold, fire a result.
Thank you,
Regards
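The cumulative check asked about here is essentially the body of an IterativeCondition's filter(): sum the amounts of the events matched so far (which the CEP context exposes via ctx.getEventsForPattern(...)) plus the current event, and accept once a threshold is reached. A minimal plain-Java sketch of that logic, without the Flink CEP classes (the class and method names below are illustrative, not from the thread):

```java
import java.util.List;

public class CumulativeCheck {
    // Mirrors what an IterativeCondition.filter(value, ctx) body would do:
    // sum the amounts of the previously matched events (as returned by
    // ctx.getEventsForPattern("start")) plus the current event's amount,
    // and fire once the cumulative sum reaches the threshold.
    public static boolean reachesThreshold(List<Double> matchedAmounts,
                                           double currentAmount,
                                           double threshold) {
        double sum = currentAmount;
        for (double a : matchedAmounts) {
            sum += a;
        }
        return sum >= threshold;
    }
}
```

Counts work the same way, with each event contributing 1 instead of its amount.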
you can't use KeyedProcessFunction, then this would be a pity.
> Then you can use MapState, where the map key stores the key of your
> partition.
> But I am not sure if this will achieve the effect you want.
>
> Thanks, vino.
>
> antonio saldivar wrote on Mon, Aug 20, 2018 at 4:32 PM:
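A plain-Java sketch of the per-key running aggregate described above, using a HashMap as a stand-in for the MapState that a KeyedProcessFunction would hold (the class name and threshold logic are illustrative assumptions, not from the thread):

```java
import java.util.HashMap;
import java.util.Map;

public class RunningAggregate {
    // Stand-in for MapState<String, Double> inside a KeyedProcessFunction:
    // one running sum per partition key. processElement adds the amount
    // and reports whether this element pushed the sum over the threshold.
    private final Map<String, Double> sums = new HashMap<>();
    private final double threshold;

    public RunningAggregate(double threshold) {
        this.threshold = threshold;
    }

    public boolean processElement(String key, double amount) {
        double updated = sums.merge(key, amount, Double::sum);
        return updated >= threshold;
    }
}
```

In a real job, a timer would also be needed to clear each key's entry when its time window expires.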
e whether the time of
> each element belongs to a state collection.
> When the trigger fires, the elements in the collection are evaluated.
>
> Thanks, vino.
>
> antonio saldivar wrote on Mon, Aug 20, 2018 at 11:54 AM:
>
>> Thank you for the references
>>
>> I have
our data.
>
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html#process-function-low-level-operations
> [2]:
> https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/stream/state/state.html#working-with-state
>
> Thanks,
send data, there will definitely be some duplication.
>
> Thanks, vino.
>
> antonio saldivar wrote on Fri, Aug 17, 2018 at 12:01 PM:
>
>> Hi Vino
>> thank you for the information. Actually, I am using a trigger alert and a
>> ProcessWindowFunction to send my results, but when my
unction, you can refer to the
> official website.[1]
>
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/stream/operators/windows.html#processwindowfunction
>
> Thanks, vino.
>
> antonio saldivar wrote on Fri, Aug 17, 2018 at 6:24 AM:
>
>> Hello
>>
>
Hello,
I am implementing a data stream where I use sliding windows, but I am stuck
because I need to set values on my object based on some if statements in my
process function and send the object to the next step, but I don't want
results every time a window fires.
If anyone has a good example, please share it.
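The "don't emit on every window firing" part can be sketched as a plain-Java stand-in for a ProcessWindowFunction.process() body: the output collector is only called when the rule actually matches, so most window firings produce nothing (class and method names below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ConditionalEmit {
    // Stand-in for ProcessWindowFunction.process(key, ctx, elements, out):
    // inspect the window's elements, apply the if statements, and add to
    // the output (out.collect in Flink) only when the condition holds.
    public static List<String> process(List<Double> windowElements,
                                       double threshold) {
        List<String> out = new ArrayList<>();  // stand-in for Collector<String>
        double sum = 0;
        for (double v : windowElements) {
            sum += v;
        }
        if (sum >= threshold) {                // emit only when the rule matches
            out.add("ALERT sum=" + sum);
        }
        return out;
    }
}
```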
e it takes time to
>> redistribute the elements.
>> 2. Rebalancing also messes up the order in the Kafka topic partitions,
>> and often makes an event-time window wait longer to trigger if you're
>> using the event-time characteristic.
>>
>> Best Regards,
>
ciency of your serializers, that
> could have a significant impact on your performance.
>
> On Thu, Aug 9, 2018 at 2:14 PM antonio saldivar
> wrote:
>
>> Hello
>>
>> Does anyone know why when I add "rebalance()" to my .map steps is adding
>> a lot of
Hello,
Does anyone know why adding "rebalance()" to my .map steps adds a lot of
latency compared with not having rebalance?
My topic has 44 Kafka partitions and I have 44 Flink task managers.
The execution plan looks like this when I add rebalance, but it is adding a lot
of latency:
kafka-src ->
t and implement its getKey method. In the method,
>> you can access an outer system (such as Zookeeper) to get a dynamic key.
>>
>> It's just an idea, you can try it.
>>
>> Thanks, vino.
>>
>>
>> 2018-08-01 23:46 GMT+08:00 antonio saldivar :
>
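A plain-Java sketch of the dynamic-key idea above: a KeySelector-like class whose getKey reads which field to key on from a mutable setting, standing in for a lookup against an outer system such as Zookeeper (all names here are illustrative assumptions):

```java
import java.util.Map;

public class DynamicKeySelector {
    // Mirrors KeySelector<Map<String, String>, String>: getKey looks up
    // which record field to key on from a mutable, volatile setting --
    // a stand-in for reading the current key field from Zookeeper.
    private volatile String keyField;

    public DynamicKeySelector(String initialField) {
        this.keyField = initialField;
    }

    // In a real job this would be driven by a watch on the outer system.
    public void updateKeyField(String field) {
        this.keyField = field;
    }

    public String getKey(Map<String, String> record) {
        return record.get(keyField);
    }
}
```

Note that changing the key field at runtime redistributes records across parallel instances, so keyed state accumulated under the old field is not carried over.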
Hello,
I am developing a Flink 1.4.2 application, currently with sliding windows
(example below).
I want to ask if there is a way to create the window time dynamically; also,
the key has to change in some use cases and we don't want to create a
specific window for each UC.
I want to send those values
performance in your scenario, since job
> performance can be affected by a number of factors (say, your WindowFunction).
>
> Best, Hequn
>
> On Sat, Jul 21, 2018 at 2:59 AM, antonio saldivar
> wrote:
>
>> Hello
>>
>> I am building an app but for this UC I want to test
time.
Flink version 1.4.2
Thank you
Best Regards
Antonio Saldivar
On Jul 16, 2018 at 18:26, antonio saldivar ()
wrote:
> Hello
>
>
> I am getting this error at runtime when I run my application on an Ambari
> local cluster.
>
> Flink 1.4.2
>
> phoenix
>
> hbase
>
>
> Does any on
Hello,
I am getting this error at runtime when I run my application on an
Ambari local cluster.
Flink 1.4.2
phoenix
hbase
Does anyone have a recommendation to solve this issue?
javax.xml.parsers.FactoryConfigurationError: Provider for class
javax.xml.parsers.DocumentB
Hello,
I am trying to find a way to add the Flink 1.4.2 service to Ambari because it
is not listed in the stack. Does anyone have the steps to add this service
manually?
Thank you,
Best regards
not makes sense in general. But you can do
> that with Flink, Storm, Spark Streaming, or Structured Streaming, and
> compare the latency under the different frameworks.
>
> Cheers
> Minglei
>
> On Jun 26, 2018 at 9:36 PM, antonio saldivar wrote:
>
> Hello Thank you for the feedb
eers
> Minglei
>
>
> > On Jun 26, 2018 at 5:23 AM, antonio saldivar wrote:
> >
> > Hello
> >
> > I am trying to measure the latency of each transaction traveling across
> the system as a DataSource I have a Kafka consumer and I would like to
> measure
; [1]
> https://ci.apache.org/projects/flink/flink-docs-master/monitoring/metrics.html#latency-tracking
>
> On Tue, Jun 26, 2018 at 5:23 AM, antonio saldivar
> wrote:
>
>> Hello
>>
>> I am trying to measure the latency of each transaction traveling across
>> t
Hello,
I am trying to measure the latency of each transaction traveling across the
system. As a DataSource I have a Kafka consumer, and I would like to measure
the time it takes from the Source to the Sink. Does anyone have an example?
Thank you
Best Regards
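One simple way to measure source-to-sink time, sketched in plain Java: stamp each event with the wall-clock time when it enters the pipeline (for example, in the source's deserializer) and compute the elapsed time at the sink. This is an illustrative sketch, not Flink's built-in latency tracking, and wall-clock stamps across machines assume reasonably synchronized clocks:

```java
public class LatencyStamp {
    // Attach this timestamp to the event when it is read at the source.
    public static long stamp() {
        return System.currentTimeMillis();
    }

    // At the sink, the per-event source-to-sink latency is the elapsed
    // wall-clock time since the source stamped the event.
    public static long latencyMillis(long sourceStampMillis) {
        return System.currentTimeMillis() - sourceStampMillis;
    }
}
```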
done as sum of 1s).
>
> Best, Fabian
>
> 2018-06-09 4:00 GMT+02:00 antonio saldivar :
>
>> Hello
>>
>> Has anyone worked this way? I am asking because I have to get the
>> aggregation ( Sum and Count) for multiple windows size (10 mins, 20 mins,
>> 30 mi
Hello
Has anyone work this way? I am asking because I have to get the aggregation
( Sum and Count) for multiple windows size (10 mins, 20 mins, 30 mins)
please let me know if this works properly or is there other good solution.
DataStream data = ...
// append a Long 1 to each record to count it
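The "count as a sum of 1s" idea above can be sketched in plain Java: append a 1 to each record so that a single reduce over (amount, one) pairs yields both the window's sum and its count, and the same aggregation can back the 10-, 20-, and 30-minute windows (class and method names are illustrative):

```java
import java.util.List;

public class SumAndCount {
    // One pass over a window's amounts yields both aggregates: the sum of
    // the values, and the count as the sum of the appended 1s. Returns
    // {sum, count}.
    public static long[] aggregate(List<Long> amounts) {
        long sum = 0;
        long count = 0;
        for (long a : amounts) {
            sum += a;
            count += 1;  // the appended Long 1 per record
        }
        return new long[] { sum, count };
    }
}
```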
Hello,
I am wondering if it is possible to process the following scenario: store
all events by event time in a general window and process elements from a
smaller time frame.
1.- Store elements in a general SlidingWindow (60 mins, 10 mins)
- Rule 1 -> gets the 10-min elements from the ge
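A plain-Java sketch of the general-window idea above: keep one 60-minute sliding window and let each rule evaluate only the elements whose event timestamps fall inside its own, smaller frame at the end of the window (the class name and the {timestamp, value} encoding are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class SubWindow {
    // From the general window's elements (each encoded as {timestampMillis,
    // value}), select only those inside the last `frameMillis` before the
    // window end -- e.g. the last 10 minutes of a 60-minute window.
    public static List<long[]> lastFrame(List<long[]> events,
                                         long windowEndMillis,
                                         long frameMillis) {
        List<long[]> out = new ArrayList<>();
        long cutoff = windowEndMillis - frameMillis;
        for (long[] e : events) {
            if (e[0] > cutoff && e[0] <= windowEndMillis) {
                out.add(e);
            }
        }
        return out;
    }
}
```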