Thank you for providing the details. Can you confirm that the HashMap within
the accumulator is stored in RocksDB as a binary object and undergoes
serialization/deserialization during the execution of the aggregate
function?
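
For context, the aggregate function I have in mind looks roughly like the
sketch below (simplified and untested; the String input type and the class
name are just placeholders). The whole HashMap is the accumulator, which is
why I'm wondering whether the entire map is (de)serialized on every access:

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.functions.AggregateFunction;

// Simplified sketch: the accumulator is a plain HashMap<id, count>, so the
// whole map is part of the window state.
public class IdCountAggregate
        implements AggregateFunction<String, Map<String, Long>, Map<String, Long>> {

    @Override
    public Map<String, Long> createAccumulator() {
        return new HashMap<>();
    }

    @Override
    public Map<String, Long> add(String id, Map<String, Long> acc) {
        // Increment the count for this id.
        acc.merge(id, 1L, Long::sum);
        return acc;
    }

    @Override
    public Map<String, Long> getResult(Map<String, Long> acc) {
        return acc;
    }

    @Override
    public Map<String, Long> merge(Map<String, Long> a, Map<String, Long> b) {
        // Merge per-id counts from two partial accumulators.
        b.forEach((id, count) -> a.merge(id, count, Long::sum));
        return a;
    }
}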

Thanks,
Arjun

On Mon, 4 Dec 2023 at 12:24, Xuyang <xyzhong...@163.com> wrote:

> Hi, Arjun.
> > I'm using a HashMap to aggregate the results.
> Do you mean that you define a HashMap in the accumulator? If yes, I think
> the map is stored as a binary object in RocksDB and deserialized like
> this [1].
> If you are using Flink SQL, you can try debugging the classes
> 'WindowOperator' or 'SlicingWindowOperator' to find more details.
>
> [1]
> https://github.com/apache/flink/blob/026bd4be9bafce86ced42d2a07e8b8820f7e6d9d/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/binary/BinaryRowData.java#L380
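>
> As a side note (just a sketch, not tested, and the class and field names
> below are only placeholders): if you are on the Table/SQL API and the map
> in the accumulator can get large, you can declare the field as a MapView
> instead of a plain HashMap. With a state backend such as RocksDB, the
> planner can back the view with MapState, so individual keys are
> serialized/deserialized instead of the whole map on every access.
>
> import org.apache.flink.table.api.dataview.MapView;
> import org.apache.flink.table.functions.AggregateFunction;
>
> // Accumulator POJO (in its own file): the MapView can be state-backed.
> public class CountAccumulator {
>     public MapView<String, Long> counts = new MapView<>();
> }
>
> // Counts how many distinct ids have been seen; illustration only.
> public class DistinctIdCount extends AggregateFunction<Long, CountAccumulator> {
>
>     @Override
>     public CountAccumulator createAccumulator() {
>         return new CountAccumulator();
>     }
>
>     public void accumulate(CountAccumulator acc, String id) throws Exception {
>         Long current = acc.counts.get(id);
>         acc.counts.put(id, current == null ? 1L : current + 1L);
>     }
>
>     @Override
>     public Long getValue(CountAccumulator acc) {
>         long distinct = 0L;
>         try {
>             for (String ignored : acc.counts.keys()) {
>                 distinct++;
>             }
>         } catch (Exception e) {
>             throw new RuntimeException(e);
>         }
>         return distinct;
>     }
> }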
>
>
> --
>     Best!
>     Xuyang
>
>
> At 2023-12-01 15:08:41, "arjun s" <arjunjoice...@gmail.com> wrote:
>
> Hi team,
> I'm new to Flink's window and aggregate functions, and I've configured my
> state backend as RocksDB. Currently, I'm computing the count of each ID
> within a 10-minute duration from the data source. I'm using a HashMap to
> aggregate the results. Now, I'm interested in understanding where the data
> aggregated within the 10-minute window, i.e. the running count per ID, is
> stored. As far as I know, it is stored in RocksDB, but please correct me if
> I'm mistaken. I'm particularly confused about how local objects such as a
> HashMap are stored in RocksDB, and what type of data actually ends up
> there.
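>
> To give a complete picture, the job is wired up roughly like this
> (simplified and untested; the source, the key selector, and the
> 'IdCountAggregate' name are placeholders for my HashMap-based aggregate
> function):
>
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
> import org.apache.flink.streaming.api.windowing.time.Time;
>
> public class IdCountJob {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env =
>                 StreamExecutionEnvironment.getExecutionEnvironment();
>
>         // Count per id over 10-minute tumbling windows; the aggregate
>         // function keeps a HashMap<id, count> as its accumulator.
>         env.fromElements("a", "b", "a")   // placeholder source of ids
>            .keyBy(id -> id)
>            .window(TumblingProcessingTimeWindows.of(Time.minutes(10)))
>            .aggregate(new IdCountAggregate())
>            .print();
>
>         env.execute("id-count-job");
>     }
> }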
>
> Thanks in Advance,
> Arjun S
>
>
