Hi Andrey,
As Paris has explained, this is a known issue and there are ongoing
efforts to solve it.
I can suggest a workaround: manually limit the number of messages sent into the
iteration. You can do this with e.g. a Map operator that limits records per
second and simply sends what i
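For illustration, a rough sketch of such a throttling map function (Guava's
RateLimiter and the concrete rate are only one possible way to do this):

import com.google.common.util.concurrent.RateLimiter;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Sketch: an identity mapper that caps the number of records per second.
public class ThrottlingMapper<T> extends RichMapFunction<T, T> {

    private final double recordsPerSecond;
    private transient RateLimiter rateLimiter;

    public ThrottlingMapper(double recordsPerSecond) {
        this.recordsPerSecond = recordsPerSecond;
    }

    @Override
    public void open(Configuration parameters) {
        rateLimiter = RateLimiter.create(recordsPerSecond);
    }

    @Override
    public T map(T value) {
        rateLimiter.acquire(); // blocks until the next permit is available
        return value;
    }
}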
I had a DB look up to do and used com.google.common.cache.Cache with a 10 sec
timeout:
private static Cache locationCache =
    CacheBuilder.newBuilder()
        .maximumSize(1000)
        .expireAfterAccess(10, TimeUnit.SECONDS)
        .build();
Seemed to help a lot with throughput….
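In case it helps, a rough sketch of how such a cache can be wired into a map
function (lookupFromDb is only a placeholder for the actual database client):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import java.util.concurrent.TimeUnit;

// Sketch: cache DB lookups per key so repeated keys skip the database.
public class CachedLookupMapper extends RichMapFunction<String, String> {

    private transient Cache<String, String> locationCache;

    @Override
    public void open(Configuration parameters) {
        locationCache = CacheBuilder.newBuilder()
                .maximumSize(1000)
                .expireAfterAccess(10, TimeUnit.SECONDS)
                .build();
    }

    @Override
    public String map(String key) throws Exception {
        // get(key, loader) only goes to the database on a cache miss
        return locationCache.get(key, () -> lookupFromDb(key));
    }

    private String lookupFromDb(String key) {
        return "value-for-" + key; // placeholder for the real lookup
    }
}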
Steve
On Apr 3, 2017, at 10:55 AM,
Just to add a scenario.
My current architecture is the following:
I've deployed 4 Ignite nodes on YARN and 5 task managers with 2 GB and 2 slots
each.
As the cache on Ignite I have one record in key/value form (string, object[]).
My throughput without Ignite is 30k/sec; when I add the lookup I get 3k/sec.
My ques
Hi Stephan,
My use case is the following:
Every time a signal event is processed, the window must be fired after the
allowed lateness. If no signal event arrives, the window must close after the
gap, like in a session window.
I’m registering a timer for signal + allowed lateness.
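Roughly, the timer registration I have in mind looks like this, applied on a
keyed stream (the Event type, the isSignal() check and the lateness value are
just placeholders):

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Sketch: register an event-time timer at signal timestamp + allowed lateness.
public class SignalTimerFunction extends ProcessFunction<Event, Event> {

    private static final long ALLOWED_LATENESS_MS = 60_000L; // placeholder value

    @Override
    public void processElement(Event event, Context ctx, Collector<Event> out) throws Exception {
        if (event.isSignal()) {
            // fire once the allowed lateness after the signal has passed
            ctx.timerService().registerEventTimeTimer(ctx.timestamp() + ALLOWED_LATENESS_MS);
        }
        out.collect(event);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Event> out) throws Exception {
        // evaluate and emit the window contents here
    }
}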
Hope yo
Hi Archit,
The problem is that you need to assign the returned `DataStream` from
`stream.assignTimestampsAndWatermarks` to a separate variable, and use that
when instantiating the Kafka 0.10 sink.
The `assignTimestampsAndWatermarks` method returns a new `DataStream` instance
with records that h
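For illustration, a minimal sketch of what that could look like (topic name,
timestamp extractor and variable names are just placeholders):

import java.util.Properties;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaSinkExample {

    public static void addKafkaSink(DataStream<String> stream, Properties producerProps) {
        // assignTimestampsAndWatermarks returns a NEW DataStream carrying the timestamps
        DataStream<String> withTimestamps = stream.assignTimestampsAndWatermarks(
                new AscendingTimestampExtractor<String>() {
                    @Override
                    public long extractAscendingTimestamp(String element) {
                        return Long.parseLong(element); // placeholder extraction
                    }
                });

        // instantiate the Kafka 0.10 sink on the re-timestamped stream,
        // not on the original `stream`
        withTimestamps.addSink(
                new FlinkKafkaProducer010<>("output-topic", new SimpleStringSchema(), producerProps));
    }
}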
Hi,
We would like to present our work “I²: Interactive Real-Time Visualization
for Streaming Data”. This demonstration received the Best Demonstration Award at
the 20th International Conference on Extending Database Technology (EDBT) in
March 2017.
I² is based on Apache Zeppelin and Apache Flink
Hi Gordon
This is the function snippet I am using, but I am getting an invalid
timestamp
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "word");
properties.setProperty("auto.o
Hello Fabio,
What you describe sounds very possible. The easiest way to do it would be
to save your incoming data in HDFS, as you already do if I understand
correctly, and then use the batch ALS algorithm [1] to create your
recommendations from the static data, which you could do at regular intervals