>> I don't know how to range scan over a caching store; probably one has
>> to open two iterators and merge them.

That happens automatically. If you query a cached KTable, it ranges over
the cache and the underlying RocksDB store and merges the results under
the hood.
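Conceptually, the merge that a caching layer performs for such a range scan can be sketched as below. This is a minimal illustration in plain Java: the class name `MergedRangeScan` and the map-overlay approach are assumptions for illustration only — Kafka Streams' actual implementation merges two lazy iterators (cache and store) rather than materializing maps.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch: serve a range scan over a cached store by combining the
// persistent store's view with the in-memory cache, where dirty cache
// entries shadow store entries for the same key.
public class MergedRangeScan {

    public static NavigableMap<String, String> range(
            NavigableMap<String, String> cache,
            NavigableMap<String, String> store,
            String from, String to) {
        NavigableMap<String, String> result = new TreeMap<>();
        // Start from the persistent store's view of the range...
        result.putAll(store.subMap(from, true, to, true));
        // ...then overlay cache entries, which are newer and win.
        result.putAll(cache.subMap(from, true, to, true));
        return result;
    }

    public static void main(String[] args) {
        NavigableMap<String, String> store = new TreeMap<>();
        store.put("a", "1");
        store.put("b", "2");
        store.put("d", "4");

        NavigableMap<String, String> cache = new TreeMap<>();
        cache.put("b", "20"); // dirty update, shadows the store value
        cache.put("c", "30"); // not yet flushed to the store

        System.out.println(range(cache, store, "a", "d"));
        // {a=1, b=20, c=30, d=4}
    }
}
```

Note that this eager overlay is only for clarity; merging two sorted iterators lazily gives the same result without copying the range.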

>> Other than that, I still think even the regular join is broken with
>> caching enabled, right?

Why? To me, the word "broken" implies it is conceptually incorrect; I
don't see that here.

>> I once filed a ticket, because with caching
>> enabled it would return values that haven't been published downstream yet.

For the bug report: I found
https://issues.apache.org/jira/browse/KAFKA-6599. We still need to fix
this, but it is a regular bug like any other, and we should not change
the design because of a bug.

That range() returns values that have not been published downstream yet
when caching is enabled is simply how caching works; it is intended
behavior. I am not sure why you say it's incorrect.


-Matthias


On 3/5/19 1:49 AM, Jan Filipiak wrote:
> 
> 
> On 04.03.2019 19:14, Matthias J. Sax wrote:
>> Thanks Adam,
>>
>> *> Q) Range scans work with caching enabled, too. Thus, there is no
>> functional/correctness requirement to disable caching. I cannot 
>> remember why Jan's proposal added this? It might be an 
>> implementation detail though (maybe just remove it from the KIP?
>> -- might be misleading).
> 
> I don't know how to range scan over a caching store; probably one has
> to open two iterators and merge them.
> 
> Other than that, I still think even the regular join is broken with
> caching enabled, right? I once filed a ticket, because with caching
> enabled it would return values that haven't been published downstream yet.
> 
