> ... If for any reason Ratis needs to be reset and the transaction ID starts from 0 again, it will cause updateID comparison issues and duplicate object ID issues after that. ...
We could add a new feature to Ratis to let the state machine pass in the starting log index. It seems like a generally useful feature.

Tsz-Wo

On Fri, Jan 5, 2024 at 2:52 AM Sammi Chen <sammic...@apache.org> wrote:

> Attendees: Guohao, Yiyang, Jianghua, Hualong, Yuanben, Hongbin, Kangchen, Ce, ..., Sammi
>
> 1. Shopee
>    1. The 1.4.0 RC0 VOTE is ongoing. Community contributors are welcome to evaluate the RC0 package and vote in the thread.
>
> 2. DiDi
>    1. Doing OM HA performance evaluation. Plan to migrate OM from non-HA to HA if performance meets the requirement.
>    2. Facing the same objectID issue after previously converting from OM HA to non-HA. The current objectID and updateID are strictly tied to the Ratis transaction ID. If for any reason Ratis needs to be reset and the transaction ID starts from 0 again, it will cause updateID comparison issues and duplicate object ID issues after that. One such case: using a backed-up OM RocksDB directory to start new OM instances for a new OM namespace. A Ratis-independent, monotonically increasing ID is one solution.
>    3. Suggest providing an ozone command to manually update the transaction ID in the OM RocksDB, in case the transaction ID is not updated due to potential bugs in OM.
>    4. Need review help on HDDS-9988. SCM UI shows storage usage percentage #5882 <https://github.com/apache/ozone/pull/5882>
>
> 3. Qihoo
>    1. Found a bug in GrpcOmTransport: it doesn't handle the exception well once the OM leader switches.
>    2. Working on an OM block batch allocation proposal to reduce requests from OM to SCM. With this approach, SCM block allocation requests can be reduced by 90%.
>    3. Benchmarked SCM block allocation QPS on a 36-core server: about 20K QPS. Would like to know if there is any way to improve the QPS beyond 20K.
>
> 4. China Unicom
>    1. Starting to use Ozone.
>    2. Found an issue when using the 1.3.0 hadoop3 Ozone client with Spark 3.
>    3. Hadoop 2.7 currently works with Ozone, but Hadoop 2.7 support was dropped some time ago, so compatibility cannot be fully guaranteed in later Ozone versions.
>    4. Can now use distcp to migrate data from Hadoop 2.7 to an Ozone cluster.
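For the "Ratis-independent incrementally increased ID" idea in the DiDi notes, one common pattern is to pack a persisted epoch (bumped on every reset/restart) into the high bits of the ID, so a counter that restarts from zero can never collide with IDs issued before the reset. A minimal illustrative sketch (this is not Ozone's actual implementation; the class name, bit split, and persistence of the epoch are all assumptions):

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch of an ID generator independent of the Ratis
 * transaction ID. The epoch is assumed to be persisted externally and
 * incremented on every restart/reset; it occupies the high bits of the
 * ID, so IDs stay unique even when the in-memory counter restarts at 0.
 */
public class EpochIdGenerator {
  // Low 48 bits hold the per-epoch counter, high bits hold the epoch.
  private static final int COUNTER_BITS = 48;

  private final long epoch;                       // bumped and persisted on restart
  private final AtomicLong counter = new AtomicLong();

  public EpochIdGenerator(long persistedEpoch) {
    this.epoch = persistedEpoch;
  }

  /** Returns a unique, monotonically increasing ID for this epoch. */
  public long nextId() {
    return (epoch << COUNTER_BITS) | counter.incrementAndGet();
  }

  public static void main(String[] args) {
    EpochIdGenerator beforeReset = new EpochIdGenerator(1);
    long a = beforeReset.nextId();
    long b = beforeReset.nextId();

    // Simulate a reset: the counter restarts, but the epoch was bumped.
    EpochIdGenerator afterReset = new EpochIdGenerator(2);
    long c = afterReset.nextId();

    System.out.println(b > a);  // IDs increase within an epoch
    System.out.println(c > b);  // and across a reset, so no duplicates
  }
}
```

Because the epoch only changes on restart and the counter is an AtomicLong, nextId() stays lock-free on the hot path; the only durable write is the epoch bump at startup.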