Hi Gyula,
I have another question. So if I cache something on the operator, to keep
it up to date, will I always need to add and connect another stream of
changes to the operator?
Is this right for every case?
Cheers
On Wed, Aug 19, 2015 at 3:21 PM, Welly Tambunan wrote:
> Hi Gyula,
>
> Th
please help
When I run this program on big data it displays this error, but when I run it
on small data there is no error. Why?
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet customer = getCustomerDataSet(env, mask, l, map);
DataSet order = getOrdersDataSet(env, maskorder, l1, maporder);
cus
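The snippet above is cut off before the broadcast variable is actually used, and the error message itself is missing. For reference, here is a minimal sketch of how a DataSet broadcast variable is typically wired (names and data are made up, not taken from the question); note that the broadcast set is materialized in memory on every task, which is one reason a job can behave differently on large data:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import java.util.List;

public class BroadcastSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Stand-ins for the customer and orders data sets from the question.
        DataSet<String> customers = env.fromElements("cust-1", "cust-2");
        DataSet<String> orders = env.fromElements("order-1", "order-2");

        orders.map(new RichMapFunction<String, String>() {
                    private List<String> customerList;

                    @Override
                    public void open(Configuration parameters) {
                        // The whole broadcast set is loaded into memory on each task.
                        customerList = getRuntimeContext().getBroadcastVariable("customers");
                    }

                    @Override
                    public String map(String order) {
                        return order + " (" + customerList.size() + " customers known)";
                    }
                })
                .withBroadcastSet(customers, "customers")
                .print();
    }
}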
What is the maximum JobManager heap size and TaskManager heap size?
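As far as I know there is no Flink-imposed maximum: the JobManager and TaskManager heaps are ordinary JVM heaps, bounded by the memory available on the machine. They are configured in conf/flink-conf.yaml; the keys below are the ones used by the releases of that era (values in MB), so please double-check them against the configuration docs of your version:

# conf/flink-conf.yaml (sketch; adjust to the memory of your machines)
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 4096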
Hey Andreas,
thank you very much!
Cheers,
Hermann
On 19.08.2015 at 15:19, Andreas Fritzler wrote:
Hi Hermann,
there is a docker-compose setup for Flink:
https://github.com/apache/flink/tree/master/flink-contrib/docker-flink
Regards,
Andreas
On Wed, Aug 19, 2015 at 3:11 PM, Hermann Azong wrote:
Hey Chiwan Park,
thank you, I will give it a look!
Cheers,
Hermann
On 19.08.2015 at 15:23, Chiwan Park wrote:
Hi Hermann,
On page 16 of Slim’s slides [1], there is a pre-installed virtual machine based
on VMware. I haven’t run Flink on that machine, but maybe it works.
Regards,
Chiwan Park
[1] http://www.slideshare.net/sbaltagi/apache-flinkcrashcoursebyslimbaltagiandsrinipalthepu
Hi Hermann,
On page 16 of Slim’s slides [1], there is a pre-installed virtual machine based
on VMware. I haven’t run Flink on that machine, but maybe it works.
Regards,
Chiwan Park
[1]
http://www.slideshare.net/sbaltagi/apache-flinkcrashcoursebyslimbaltagiandsrinipalthepu
> On Aug 19, 2015, a
Hi Hermann,
there is a docker-compose setup for Flink:
https://github.com/apache/flink/tree/master/flink-contrib/docker-flink
Regards,
Andreas
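For reference, such a docker-compose setup is usually driven with commands like the following; the service name "taskmanager" and the exact options are assumptions on my part, so check the README in that directory before relying on them:

# from the directory containing the docker-compose.yml
docker-compose up -d                  # start the JobManager and TaskManager containers
docker-compose scale taskmanager=3    # assumed service name: run three TaskManagers
docker-compose down                   # tear the local cluster down again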
On Wed, Aug 19, 2015 at 3:11 PM, Hermann Azong wrote:
> Hey Flinkers,
>
> for testing purposes on cluster, I would like to know if there is a
> virtual
Hey Flinkers,
for testing purposes on a cluster, I would like to know if there is a
virtual machine where Flink already works standalone or on YARN.
Thank you in advance for your answers!
Cheers,
Hermann
Hi Gyula,
That's really helpful. The docs have improved so much since the last time
(0.9).
Thanks a lot !
Cheers
On Wed, Aug 19, 2015 at 3:07 PM, Gyula Fóra wrote:
> Hey,
>
> If it is always better to check the events against a more up-to-date model
> (even if the events we are checking arrive
Hi.
Thanks for the tip. It seems to work...
Greets.
> On 18.08.2015 at 13:56, Stephan Ewen wrote:
>
> Yep, that is a valid bug!
> State is apparently not resolved with the correct classloader.
>
> As a workaround, you can checkpoint byte arrays and serialize/deserialize the
> state into the byte arrays yourself.
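For anyone hitting the same classloader issue: here is a minimal sketch (my reading of the workaround, with made-up helper names) of the serialize/deserialize half. The operator keeps and checkpoints only a byte[], and converts it to and from the real state object inside the user function, where the user-code classloader is the one resolving the classes:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Helpers to turn the real state object into the byte[] that gets checkpointed.
public class StateBytes {

    public static byte[] toBytes(Serializable state) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(state);
        }
        return bos.toByteArray();
    }

    @SuppressWarnings("unchecked")
    public static <T> T fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (T) in.readObject();
        }
    }
}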
Hey,
If it is always better to check the events against a more up-to-date model
(even if the events we are checking arrived before the update) then it is
fine to keep the model outside of the system.
In this case we need to make sure that we can push the updates to the
external system consistently.
Thanks Gyula,
Another question I have...
> ... while external model updates would be *tricky *to keep consistent.
Is that still the case if the operator treats the external model as
read-only? We create another stream that updates the external model
separately.
Could you please elaborate more?
In that case I would apply a map to wrap them in some common type, like an
Either, before the union.
Then in the co-flatmap you can unwrap it.
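A rough sketch of that suggestion, with made-up update types (PriceUpdate, RuleUpdate) and a hand-rolled wrapper standing in for an Either type:

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;

public class UpdateWrapping {

    // Hypothetical update types with different shapes.
    public static class PriceUpdate { public String id; public double price; }
    public static class RuleUpdate  { public String id; public String rule;  }

    // Common wrapper type; exactly one of the two fields is set.
    public static class WrappedUpdate {
        public PriceUpdate price;
        public RuleUpdate rule;
    }

    // Map each stream into the wrapper, then union them into one stream of updates.
    public static DataStream<WrappedUpdate> wrapAndUnion(
            DataStream<PriceUpdate> priceUpdates, DataStream<RuleUpdate> ruleUpdates) {

        DataStream<WrappedUpdate> wrappedPrices =
                priceUpdates.map(new MapFunction<PriceUpdate, WrappedUpdate>() {
                    @Override
                    public WrappedUpdate map(PriceUpdate value) {
                        WrappedUpdate w = new WrappedUpdate();
                        w.price = value;
                        return w;
                    }
                });

        DataStream<WrappedUpdate> wrappedRules =
                ruleUpdates.map(new MapFunction<RuleUpdate, WrappedUpdate>() {
                    @Override
                    public WrappedUpdate map(RuleUpdate value) {
                        WrappedUpdate w = new WrappedUpdate();
                        w.rule = value;
                        return w;
                    }
                });

        // In the co-flatmap, unwrap by checking which field is non-null.
        return wrappedPrices.union(wrappedRules);
    }
}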
On Wed, Aug 19, 2015 at 9:50 AM Welly Tambunan wrote:
> Hi Gyula,
>
> Thanks.
>
> However update1 and update2 have a different type. Based on my
> understa
Hi Gyula,
Thanks.
However, update1 and update2 have different types. Based on my
understanding, I don't think we can use union. How can we handle this?
We like to keep our events strongly typed so the domain language is
captured.
Cheers
On Wed, Aug 19, 2015 at 2:37 PM, Gyula Fóra wrote
Hey,
One input of your co-flatmap would be the model updates and the other input
would be the events to check against the model, if I understand correctly.
This means that if your model updates come from more than one stream, you
need to union them into a single stream before connecting them with the
event stream.
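A compact sketch of that wiring (the class names follow the current DataStream API and older releases name some of them slightly differently; everything else is made up):

import org.apache.flink.streaming.api.datastream.ConnectedStreams;
import org.apache.flink.streaming.api.datastream.DataStream;

public class ModelUpdateWiring {

    // Union the update streams into one stream first, then connect it with the
    // event stream; a CoFlatMapFunction is then applied to the connected streams.
    public static <U, E> ConnectedStreams<U, E> wire(
            DataStream<U> updates1, DataStream<U> updates2, DataStream<E> events) {
        DataStream<U> allUpdates = updates1.union(updates2); // single stream of model updates
        return allUpdates.connect(events);                   // two typed inputs, one operator
    }
}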
Hi Gyula,
Thanks for your response.
However, the model can receive update events from multiple streams. How can
we do that with a co-flatmap, since as far as I can see the connect API only
accepts a single DataStream?
> ... while external model updates would be tricky to keep consistent.
Is that still the case if the Op
Hey!
I think it is safe to say that the best approach in this case is creating a
co-flatmap that will receive updates on one input. The events should
probably be broadcasted in this case so you can check in parallel.
This approach can be used effectively with Flink's checkpoint mechanism,
while external model updates would be tricky to keep consistent.
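A sketch of that approach with hypothetical types and a toy "model" (a map of thresholds), just to make the two inputs concrete; package names are from the current DataStream API and the state handling is deliberately simplified:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;
import java.util.HashMap;
import java.util.Map;

public class ModelCheckingSketch {

    // Hypothetical types: updates carry new model values, events are checked against them.
    public static class ModelUpdate { public String key; public double threshold; }
    public static class Event       { public String key; public double value;     }

    // Input 1: model updates. Input 2: events to check against the current model.
    public static class ModelCheckingCoFlatMap
            implements CoFlatMapFunction<ModelUpdate, Event, String> {

        // Locally held model; in a real job this would be registered as operator state
        // so Flink's checkpoint mechanism snapshots it (the exact API depends on version).
        private final Map<String, Double> model = new HashMap<>();

        @Override
        public void flatMap1(ModelUpdate update, Collector<String> out) {
            model.put(update.key, update.threshold); // apply the update to the local model
        }

        @Override
        public void flatMap2(Event event, Collector<String> out) {
            Double threshold = model.get(event.key);
            if (threshold != null && event.value > threshold) {
                out.collect("violation: " + event.key); // event does not pass the model check
            }
        }
    }

    // Wiring: broadcast the events, as suggested above, so every parallel instance
    // can check them against its part of the model. With parallelism > 1 you would
    // also partition the updates (e.g. by key) so each instance owns part of the model.
    public static DataStream<String> check(DataStream<ModelUpdate> updates, DataStream<Event> events) {
        return updates
                .connect(events.broadcast())
                .flatMap(new ModelCheckingCoFlatMap());
    }
}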