[jira] [Commented] (IGNITE-24629) Remove NodeFileTree.cacheStorage(String) method
[ https://issues.apache.org/jira/browse/IGNITE-24629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932193#comment-17932193 ] Ignite TC Bot commented on IGNITE-24629: {panel:title=Branch: [pull/11898/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/11898/head] Base: [master] : No new tests found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel} [TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=8335947&buildTypeId=IgniteTests24Java8_RunAll] > Remove NodeFileTree.cacheStorage(String) method > --- > > Key: IGNITE-24629 > URL: https://issues.apache.org/jira/browse/IGNITE-24629 > Project: Ignite > Issue Type: Sub-task >Reporter: Nikolay Izhikov >Assignee: Nikolay Izhikov >Priority: Major > Time Spent: 1h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-24690) Missing synchronization between updating lease information and adding a table processor
Aleksandr Polovtsev created IGNITE-24690: Summary: Missing synchronization between updating lease information and adding a table processor Key: IGNITE-24690 URL: https://issues.apache.org/jira/browse/IGNITE-24690 Project: Ignite Issue Type: Bug Reporter: Aleksandr Polovtsev Assignee: Aleksandr Polovtsev -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24690) Missing synchronization between updating lease information and adding a table processor
[ https://issues.apache.org/jira/browse/IGNITE-24690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtsev updated IGNITE-24690: - Description: When handling a {{PrimaryReplicaChangeCommand}} in {{ZonePartitionRaftListener}}, we update lease information inside the storages. This information is also used to initialize newly added Table Processors. Since there's currently no synchronization between these two processes, a table processor can be initialized with stale lease information. > Missing synchronization between updating lease information and adding a table > processor > --- > > Key: IGNITE-24690 > URL: https://issues.apache.org/jira/browse/IGNITE-24690 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Polovtsev >Assignee: Aleksandr Polovtsev >Priority: Major > Labels: ignite-3 > > When handling a {{PrimaryReplicaChangeCommand}} in > {{ZonePartitionRaftListener}}, we update lease information inside the > storages. This information is also used to initialize newly added Table > Processors. Since there's currently no synchronization between these two > processes, a table processor can be initialized with stale lease information. -- This message was sent by Atlassian Jira (v8.20.10#820010)
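One way to close the race described above is to perform the lease update and the table-processor initialization under the same lock. The following is a minimal sketch of that idea; the names LeaseInfo, TableProcessor and addTableProcessor are illustrative assumptions, not the actual ZonePartitionRaftListener code.
{code:java}
// Minimal sketch (hypothetical names): guard lease updates and table-processor
// registration with a single lock so a new processor never sees a stale lease.
class ZonePartitionListenerSketch {
    /** Placeholder for lease data carried by PrimaryReplicaChangeCommand. */
    record LeaseInfo(long leaseStartTime, String primaryReplicaNodeId) {}

    /** Placeholder for a per-table processor registered with the zone listener. */
    interface TableProcessor {
        void initializeLease(LeaseInfo lease);
    }

    private final Object leaseLock = new Object();
    private LeaseInfo leaseInfo;

    void onPrimaryReplicaChange(LeaseInfo newLease) {
        synchronized (leaseLock) {
            leaseInfo = newLease;
            // ... also propagate the lease to storages and already registered processors ...
        }
    }

    void addTableProcessor(TableProcessor processor) {
        synchronized (leaseLock) {
            // Initialization happens under the same lock as the update above, so the processor
            // observes either the old or the new lease in full, never a stale or torn value.
            if (leaseInfo != null) {
                processor.initializeLease(leaseInfo);
            }
            // ... register the processor with the RAFT listener ...
        }
    }
}
{code}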
[jira] [Created] (IGNITE-24691) ItZoneDataReplicationTest#testDataRebalance(truncateRaftLog=true) is flaky
Aleksandr Polovtsev created IGNITE-24691: Summary: ItZoneDataReplicationTest#testDataRebalance(truncateRaftLog=true) is flaky Key: IGNITE-24691 URL: https://issues.apache.org/jira/browse/IGNITE-24691 Project: Ignite Issue Type: Bug Reporter: Aleksandr Polovtsev Assignee: Aleksandr Polovtsev This test sometimes fails with the following error: {code:java} Caused by: org.apache.ignite.internal.lang.IgniteInternalException: IGN-TX-9 TraceId:f0e5906a-9180-47c6-bc10-7d58d72ebc94 Storage is in the process of rebalance: [table=3, partitionId=0] at app//org.apache.ignite.internal.tx.storage.state.rocksdb.TxStateRocksDbPartitionStorage.createStorageInProgressOfRebalanceException(TxStateRocksDbPartitionStorage.java:608) at app//org.apache.ignite.internal.tx.storage.state.rocksdb.TxStateRocksDbPartitionStorage.throwExceptionDependingOnStorageState(TxStateRocksDbPartitionStorage.java:620) at app//org.apache.ignite.internal.tx.storage.state.rocksdb.TxStateRocksDbPartitionStorage.close(TxStateRocksDbPartitionStorage.java:433) at app//org.apache.ignite.internal.util.IgniteUtils.lambda$closeAll$0(IgniteUtils.java:571) at java.base@17.0.6/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.base@17.0.6/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179) at java.base@17.0.6/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) at java.base@17.0.6/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base@17.0.6/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base@17.0.6/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) at java.base@17.0.6/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) at java.base@17.0.6/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base@17.0.6/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at app//org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:569) at app//org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:592) at app//org.apache.ignite.internal.tx.storage.state.rocksdb.TxStateRocksDbStorage.close(TxStateRocksDbStorage.java:147) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-24697) Log groupId with unsuccessful election message
Roman Puchkovskiy created IGNITE-24697: -- Summary: Log groupId with unsuccessful election message Key: IGNITE-24697 URL: https://issues.apache.org/jira/browse/IGNITE-24697 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Assignee: Roman Puchkovskiy -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24697) Log groupId with unsuccessful election message
[ https://issues.apache.org/jira/browse/IGNITE-24697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-24697: --- Description: Sometimes we see logs like the following: [2025-03-04T08:53:30,203][INFO ][%ikvvot_tko_20001%JRaft-ElectionTimer-11][NodeImpl] Unsuccessful election round number 2 It would be useful to see which group cannot elect a leader. > Log groupId with unsuccessful election message > -- > > Key: IGNITE-24697 > URL: https://issues.apache.org/jira/browse/IGNITE-24697 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Time Spent: 10m > Remaining Estimate: 0h > > Sometimes we see logs like the following: > [2025-03-04T08:53:30,203][INFO > ][%ikvvot_tko_20001%JRaft-ElectionTimer-11][NodeImpl] Unsuccessful election > round number 2 > It would be useful to see which group cannot elect a leader. -- This message was sent by Atlassian Jira (v8.20.10#820010)
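For illustration, the fix can be as small as adding the group id to the existing log statement in NodeImpl. The snippet below is a hedged sketch; LOG, electionRound and groupId stand in for whatever logger and fields the surrounding JRaft code actually uses.
{code:java}
// Hypothetical sketch of the improved message: include the RAFT group id so the operator
// can tell which group repeatedly fails to elect a leader.
LOG.info("Unsuccessful election round number {} [groupId={}].", electionRound, groupId);
{code}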
[jira] [Created] (IGNITE-24698) Add new serializers that comply with the new serialization protocol.
Pavel Pereslegin created IGNITE-24698: - Summary: Add new serializers that comply with the new serialization protocol. Key: IGNITE-24698 URL: https://issues.apache.org/jira/browse/IGNITE-24698 Project: Ignite Issue Type: Improvement Components: sql Reporter: Pavel Pereslegin In IGNITE-24564, dynamic creation of the serializer registry was added (using the `CatalogSerializer` annotation). The target serialization format (v2) should be as follows: ||Field description||type|| |PROTOCOL VERSION|short| |Object type|short| |Object serialization format version|varint| |Object payload|...| In order to migrate to the new serialization data format, we need to do the following: 1. Introduce new decorators for input/output which allow writing/reading objects. {code:Java} interface CatalogObjectDataInput extends IgniteDataInput { // Reads an object. T readObject() throws IOException; } interface CatalogObjectDataOutput extends IgniteDataOutput { // Writes an object. void writeObject(MarshallableEntry object) throws IOException; } {code} 2. Change CatalogObjectSerializer to use them instead of IgniteDataInput/IgniteDataOutput 3. In each serializer container class we need to add a new serializer and annotate it with CatalogSerializer(version = 2, since = "3.1.0") 4. Update unmarshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataInput input = new CatalogObjectDataInputImpl(new IgniteUnsafeDataInput(bytes), serializers)) { short protoVersion = input.readShort(); switch (protoVersion) { case 1: int typeId = input.readShort(); return (UpdateLogEvent) serializers.get(1, typeId).readFrom(input); case 2: return input.readObject(); default: throw new IllegalStateException(format("An object could not be deserialized because it was using " + "a newer version of the serialization protocol [objectVersion={}, supported={}]", protoVersion, PROTOCOL_VERSION)); } } catch (Throwable t) { throw new CatalogMarshallerException(t); } {code} 5. Simplify marshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataOutputImpl output = new CatalogObjectDataOutputImpl(PROTOCOL_VERSION, INITIAL_BUFFER_CAPACITY, serializers)) { output.writeObject(update); return output.array(); } catch (Throwable t) { throw new CatalogMarshallerException(t); } {code} This way we will read data depending on the protocol version, but write in version 2 format and thus ensure a transparent transition from version 1 to version 2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
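Step 3 above mentions the @CatalogSerializer(version = 2, since = "3.1.0") annotation but gives no example of a serializer container class. The sketch below shows the intended shape, assuming the post-step-2 CatalogObjectSerializer signatures; the container class, the descriptor type and the read/write bodies are illustrative assumptions, not real catalog code.
{code:java}
// Hedged sketch of step 3. Only the @CatalogSerializer attributes (version, since) come from
// the ticket text; everything else here is a stand-in for a real catalog descriptor.
class SomeDescriptorSerializers {

    /** Stand-in for a real catalog object. */
    record SomeDescriptor(String name) {}

    @CatalogSerializer(version = 1, since = "3.0.0")
    static class SomeDescriptorSerializerV1 implements CatalogObjectSerializer<SomeDescriptor> {
        @Override
        public SomeDescriptor readFrom(CatalogObjectDataInput input) throws IOException {
            return new SomeDescriptor(input.readUTF()); // v1: reads plain fields only
        }

        @Override
        public void writeTo(SomeDescriptor value, CatalogObjectDataOutput output) throws IOException {
            output.writeUTF(value.name());
        }
    }

    @CatalogSerializer(version = 2, since = "3.1.0")
    static class SomeDescriptorSerializerV2 implements CatalogObjectSerializer<SomeDescriptor> {
        @Override
        public SomeDescriptor readFrom(CatalogObjectDataInput input) throws IOException {
            // v2 serializers may additionally delegate nested objects to input.readObject(),
            // as introduced by the decorators from step 1.
            return new SomeDescriptor(input.readUTF());
        }

        @Override
        public void writeTo(SomeDescriptor value, CatalogObjectDataOutput output) throws IOException {
            output.writeUTF(value.name());
        }
    }
}
{code}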
[jira] [Updated] (IGNITE-24698) Sql. Add new serializers that comply with the new serialization protocol.
[ https://issues.apache.org/jira/browse/IGNITE-24698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-24698: -- Summary: Sql. Add new serializers that comply with the new serialization protocol. (was: Add new serializers that comply with the new serialization protocol.) > Sql. Add new serializers that comply with the new serialization protocol. > - > > Key: IGNITE-24698 > URL: https://issues.apache.org/jira/browse/IGNITE-24698 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Pavel Pereslegin >Priority: Major > Labels: ignite-3 > > In IGNITE-24564, dynamic creation of serializer registry has been added > (using the `CatalogSerializer` annotation). > The target serialization format (v2) should be as follows: > ||Field description||type|| > |PROTOCOL VERSION|short| > |Object type|short| > |Object serialization format version|varint| > |Object payload|...| > In order to migrate to the new serialization data format, we need to do the > following: > 1. Introduce new decorators for input/output which allow to write/read > objects. > {code:Java} > interface CatalogObjectDataInput extends IgniteDataInput { > // Reads an object. > T readObject() throws IOException; > } > interface CatalogObjectDataOutput extends IgniteDataOutput { > // Writes an object. > void writeObject(MarshallableEntry object) throws IOException; > } > {code} > 2. Change CatalogObjectSerializer to use them instead of > IgniteDataInput/IgniteDataOutput > 3. In each serializer container class we need to add a new serializer and > annotate it with CtalogSerializer(version = 2, since = "3.1.0") > 4. Update unmarshal in UpdateLogMarshallerImpl > {code:java} > try (CatalogObjectDataInput input = new > CatalogObjectDataInputImpl(new IgniteUnsafeDataInput(bytes), serializers)) { > short protoVersion = input.readShort(); > switch (protoVersion) { > case 1: > int typeId = input.readShort(); > return (UpdateLogEvent) serializers.get(1, > typeId).readFrom(input); > case 2: > return input.readObject(); > default: > throw new IllegalStateException(format("An object could > not be deserialized because it was using " > + "a newer version of the serialization protocol > [objectVersion={}, supported={}]", protoVersion, PROTOCOL_VERSION)); > } > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > {code} > 5. Simplify marshal in UpdateLogMarshallerImpl > {code:java} > try (CatalogObjectDataOutputImpl output = new > CatalogObjectDataOutputImpl(PROTOCOL_VERSION, INITIAL_BUFFER_CAPACITY, > serializers)) { > output.writeObject(update); > return output.array(); > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > {code} > This way we will read data depending on the protocol version, but write in > version 2 format and thus ensure a transparent transition from version 1 to > version 2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-24688) Add FILTER_CORRELATE rule to HEP push down list
[ https://issues.apache.org/jira/browse/IGNITE-24688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932336#comment-17932336 ] Ignite TC Bot commented on IGNITE-24688: {panel:title=Branch: [pull/11905/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/11905/head] Base: [master] : New Tests (1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1} {color:#8b}Calcite SQL{color} [[tests 1|https://ci2.ignite.apache.org/viewLog.html?buildId=8337057]] * {color:#013220}IgniteCalciteTestSuite: CorrelatedNestedLoopJoinPlannerTest.testFilterPushDown - PASSED{color} {panel} [TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=8336065&buildTypeId=IgniteTests24Java8_RunAll] > Add FILTER_CORRELATE rule to HEP push down list > --- > > Key: IGNITE-24688 > URL: https://issues.apache.org/jira/browse/IGNITE-24688 > Project: Ignite > Issue Type: Improvement >Reporter: Maksim Timonin >Assignee: Maksim Timonin >Priority: Major > Labels: ise > Fix For: 2.18 > > Time Spent: 50m > Remaining Estimate: 0h > > > In the plan below Filter rule is under IgniteTableScan. This leads to memory > overhead. Let's fix it by adding new filter push down rule - Calcite's > FILTER_CORRELATE rule > > {code:java} > IgniteColocatedSortAggregate(group=[{0}], ORDER_COUNT=[COUNT()], > collation=[[0 ASC-nulls-first]]): > IgniteProject(O_ORDERPRIORITY=[$7]) > IgniteFilter(condition=[AND(>=($6, 1998-01-01), <($6, +(1998-01-01, > 3:INTERVAL MONTH)))]) > IgniteCorrelatedNestedLoopJoin(condition=[true], joinType=[inner], > variablesSet=[[$cor0]], variablesSet=[[0]], correlationVariables=[[$cor0]]) > IgniteExchange(distribution=[single]) > IgniteSort(sort0=[$7], dir0=[ASC-nulls-first]) > IgniteTableScan(table=[[PUBLIC, ORDERS]]) > IgniteColocatedHashAggregate(group=[{0}]) > IgniteProject(i=[true]) > IgniteHashIndexSpool(readType=[LAZY], writeType=[EAGER], > searchRow=[[$cor0.O_ORDERKEY, null, null]], condition=[AND(=($0, > $cor0.O_ORDERKEY), <($1, $2))], allowNulls=[false]) > IgniteExchange(distribution=[single]) > IgniteTableScan(table=[[PUBLIC, LINEITEM]], > requiredColumns=[{2, 13, 14}]) > {code} > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
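Calcite already ships this rule as CoreRules.FILTER_CORRELATE. The sketch below shows the plain-Calcite way of placing it into a heuristic (HEP) program; how Ignite's Calcite module actually names and assembles its push-down rule list is an assumption not covered by the ticket.
{code:java}
import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.rules.CoreRules;

// Minimal sketch: run FILTER_CORRELATE in a heuristic phase alongside an existing
// filter push-down rule. The surrounding wiring in Ignite is not shown here.
class FilterCorrelatePushDownSketch {
    static HepPlanner buildPushDownPlanner() {
        HepProgram program = new HepProgramBuilder()
                .addRuleInstance(CoreRules.FILTER_INTO_JOIN)  // an existing push-down rule
                .addRuleInstance(CoreRules.FILTER_CORRELATE)  // the rule proposed by this ticket
                .build();

        return new HepPlanner(program);
    }
}
{code}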
[jira] [Commented] (IGNITE-24692) Sql. Fix correct mappings for primary keys
[ https://issues.apache.org/jira/browse/IGNITE-24692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932316#comment-17932316 ] Evgeny Stanilovsky commented on IGNITE-24692: - [~amashenkov], [~jooger] can you make a review plz ? > Sql. Fix correct mappings for primary keys > -- > > Key: IGNITE-24692 > URL: https://issues.apache.org/jira/browse/IGNITE-24692 > Project: Ignite > Issue Type: Task > Components: sql >Affects Versions: 3.0 >Reporter: Evgeny Stanilovsky >Assignee: Evgeny Stanilovsky >Priority: Major > Labels: ignite-3 > Time Spent: 10m > Remaining Estimate: 0h > > Need to fix columns mapping for selectivity calculations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24553) Refine Data Consistency team-related system properties
[ https://issues.apache.org/jira/browse/IGNITE-24553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Efremov updated IGNITE-24553: - Summary: Refine Data Consistency team-related system properties (was: Refine Data Consistency team related system properties*Description*) > Refine Data Consistency team-related system properties > -- > > Key: IGNITE-24553 > URL: https://issues.apache.org/jira/browse/IGNITE-24553 > Project: Ignite > Issue Type: Task >Reporter: Mikhail Efremov >Assignee: Mikhail Efremov >Priority: Major > Labels: ignite-3 > > *Description* > After https://issues.apache.org/jira/browse/IGNITE-24272 there are system > properties to review in order to decide whether they should become configuration > entries, whether they are test-related, or whether they should be removed in the future. > This ticket covers properties that are related to or were created by Data Consistency team > members. The list of properties is provided in the DoD section below. > *Motivation* > We want to configure an Ignite cluster and its nodes through local or > distributed configuration and leave system properties only for the following > allowed purposes: > # a property is for test-only purposes; > # a property will be removed later and is annotated with a TODO commentary > with a corresponding JIRA ticket. > All other system properties should be converted into configuration entries. > *Definition of Done* > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_DISABLED}} is provided. > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_PRECISION}} is provided. > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_THRESHOLD}} is provided. > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_LAST_EVENTS_COUNT}} is provided. > # Decision on {{RESOURCE_VACUUM_INTERVAL_MILLISECONDS}} is provided. > # Decision on {{LEASE_STATISTICS_PRINT_ONCE_PER_ITERATIONS}} is provided. > # Decision on {{IGNITE_TIMEOUT_WORKER_SLEEP_INTERVAL}} is provided. > # Decision on {{IGNITE_LOG_THROTTLE_INTERVAL_MS}} is provided > # Decision on {{IGNITE_SKIP_REPLICATION_IN_BENCHMARK}} is provided > # Decision on {{IGNITE_SKIP_STORAGE_UPDATE_IN_BENCHMARK}} is provided > # Decision on {{IGNITE_ZONE_BASED_REPLICATION}} is provided -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24699) Sql. Query with ORDER BY returns unsorted result
[ https://issues.apache.org/jira/browse/IGNITE-24699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-24699: -- Fix Version/s: 3.1 > Sql. Query with ORDER BY returns unsorted result > > > Key: IGNITE-24699 > URL: https://issues.apache.org/jira/browse/IGNITE-24699 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Konstantin Orlov >Assignee: Konstantin Orlov >Priority: Major > Labels: ignite-3 > Fix For: 3.1 > > Time Spent: 10m > Remaining Estimate: 0h > > Run this test: > {code:java} > // org.apache.ignite.internal.sql.engine.ItSecondaryIndexTest > @Test > void ensurePartitionStreamsAreMergedCorrectlyWithRegardToProjection() { > assertQuery("SELECT /*+ FORCE_INDEX(" + NAME_CITY_IDX + ") */ name FROM > Developer WHERE id % 2 = 0 ORDER BY name DESC") > .matches(containsIndexScan("PUBLIC", "DEVELOPER", NAME_CITY_IDX)) > .matches(not(containsSubPlan("Sort"))) > .returns("Zimmer") > .returns("Stravinsky") > .returns("Strauss") > .returns("Shubert") > .returns("Rihter") > .returns("Prokofiev") > .returns("O'Halloran") > .returns("Einaudi") > .returns("Chaikovsky") > .returns("Beethoven") > .returns("Arnalds") > .ordered() > .check(); > } {code} > It fails with > {code} > org.opentest4j.AssertionFailedError: Collections are not equal (position 0): > Expected: Stravinsky > Actual: Rihter > {code} > The reason is that the comparator used to merge partition streams is created > based on the collation returned by the relational node, which in turn may be adjusted > with regard to the projection merged into the rel. However, the merge of partitions > happens before the projection, so the comparator must account only for the > {{requiredColumns}} bitset. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24698) Sql. Add new serializers that comply with the new serialization protocol.
[ https://issues.apache.org/jira/browse/IGNITE-24698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-24698: -- Description: In IGNITE-24564, dynamic creation of serializer registry has been added (using the `CatalogSerializer` annotation). The target serialization format (v2) should be as follows: ||Field description||type|| |PROTOCOL VERSION|short| |Object type|short| |Object serialization format version|varint| |Object payload|...| In order to migrate to the new serialization data format, we need to do the following: 1. Introduce new decorators for input/output which allow to write/read objects. {code:Java} interface CatalogObjectDataInput extends IgniteDataInput { // Reads an object. T readObject() throws IOException; } interface CatalogObjectDataOutput extends IgniteDataOutput { // Writes an object. void writeObject(MarshallableEntry object) throws IOException; } {code} 2. Change CatalogObjectSerializer to use them instead of IgniteDataInput/IgniteDataOutput 3. In each serializer container class we need to add a new serializer and annotate it with CtalogSerializer(version = 2, since = "3.1.0") 4. Update unmarshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataInput input = new CatalogObjectDataInputImpl(new IgniteUnsafeDataInput(bytes), serializers)) { short protoVersion = input.readShort(); switch (protoVersion) { case 1: int typeId = input.readShort(); return (UpdateLogEvent) serializers.get(1, typeId).readFrom(input); case 2: return input.readObject(); default: throw new IllegalStateException(format("An object could not be deserialized because it was using a newer" + " version of the serialization protocol [supported={}, actual={}].", PROTOCOL_VERSION, protoVersion)); } } catch (Throwable t) { throw new CatalogMarshallerException(t); } {code} 5. Simplify marshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataOutputImpl output = new CatalogObjectDataOutputImpl(PROTOCOL_VERSION, INITIAL_BUFFER_CAPACITY, serializers)) { output.writeObject(update); return output.array(); } catch (Throwable t) { throw new CatalogMarshallerException(t); } {code} This way we will read data depending on the protocol version, but write in version 2 format and thus ensure a transparent transition from version 1 to version 2. was: In IGNITE-24564, dynamic creation of serializer registry has been added (using the `CatalogSerializer` annotation). The target serialization format (v2) should be as follows: ||Field description||type|| |PROTOCOL VERSION|short| |Object type|short| |Object serialization format version|varint| |Object payload|...| In order to migrate to the new serialization data format, we need to do the following: 1. Introduce new decorators for input/output which allow to write/read objects. {code:Java} interface CatalogObjectDataInput extends IgniteDataInput { // Reads an object. T readObject() throws IOException; } interface CatalogObjectDataOutput extends IgniteDataOutput { // Writes an object. void writeObject(MarshallableEntry object) throws IOException; } {code} 2. Change CatalogObjectSerializer to use them instead of IgniteDataInput/IgniteDataOutput 3. In each serializer container class we need to add a new serializer and annotate it with CtalogSerializer(version = 2, since = "3.1.0") 4. 
Update unmarshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataInput input = new CatalogObjectDataInputImpl(new IgniteUnsafeDataInput(bytes), serializers)) { short protoVersion = input.readShort(); switch (protoVersion) { case 1: int typeId = input.readShort(); return (UpdateLogEvent) serializers.get(1, typeId).readFrom(input); case 2: return input.readObject(); default: throw new IllegalStateException(format("An object could not be deserialized because it was using " + "a newer version of the serialization protocol [objectVersion={}, supported={}]", protoVersion, PROTOCOL_VERSION)); } } catch (Throwable t) { throw new CatalogMarshallerException(t); } {code} 5. Simplify marshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataOutputImpl output = new CatalogObjectDataOutputImpl(PROTOCOL_VERSION, INITIAL_BUFFER_CAPACITY, serializers)) { output.writeObject(update); return output.array(); } catch (Throwable t) { throw new Catalog
[jira] [Created] (IGNITE-24699) Sql. Query with ORDER BY returns unsorted result
Konstantin Orlov created IGNITE-24699: - Summary: Sql. Query with ORDER BY returns unsorted result Key: IGNITE-24699 URL: https://issues.apache.org/jira/browse/IGNITE-24699 Project: Ignite Issue Type: Bug Components: sql Reporter: Konstantin Orlov Run this test: {code:java} // org.apache.ignite.internal.sql.engine.ItSecondaryIndexTest @Test void ensurePartitionStreamsAreMergedCorrectlyWithRegardToProjection() { assertQuery("SELECT /*+ FORCE_INDEX(" + NAME_CITY_IDX + ") */ name FROM Developer WHERE id % 2 = 0 ORDER BY name DESC") .matches(containsIndexScan("PUBLIC", "DEVELOPER", NAME_CITY_IDX)) .matches(not(containsSubPlan("Sort"))) .returns("Zimmer") .returns("Stravinsky") .returns("Strauss") .returns("Shubert") .returns("Rihter") .returns("Prokofiev") .returns("O'Halloran") .returns("Einaudi") .returns("Chaikovsky") .returns("Beethoven") .returns("Arnalds") .ordered() .check(); } {code} It fails with {code} org.opentest4j.AssertionFailedError: Collections are not equal (position 0): Expected: Stravinsky Actual: Rihter {code} The reason is that the comparator used to merge partition streams is created based on the collation returned by the relational node, which in turn may be adjusted with regard to the projection merged into the rel. However, the merge of partitions happens before the projection, so the comparator must account only for the {{requiredColumns}} bitset. -- This message was sent by Atlassian Jira (v8.20.10#820010)
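A toy illustration of the described bug follows (not Ignite code): the partition streams still hold rows in the scan layout defined by {{requiredColumns}}, so the merge comparator must index into that layout rather than into the layout that only exists after the downstream projection. All names and indices below are illustrative.
{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

// Toy sketch: rows produced by the index scan with requiredColumns = {id, name} look like
// [id, name]; only after the projection "SELECT name ..." do they become [name].
class MergeComparatorSketch {
    static final int NAME_IDX_IN_SCAN_ROW = 1;        // correct index for the merge phase
    static final int NAME_IDX_AFTER_PROJECTION = 0;   // valid only after the projection

    static Stream<Object[]> mergePartitions(List<Stream<Object[]>> partitions) {
        // Correct: compare by the column position in the pre-projection (scan) row.
        // Using NAME_IDX_AFTER_PROJECTION here would compare the wrong field (the id),
        // producing the unsorted output reported in the ticket.
        Comparator<Object[]> byNameDesc =
                Comparator.comparing((Object[] row) -> (String) row[NAME_IDX_IN_SCAN_ROW]).reversed();

        // Simplified: a real implementation would do a lazy k-way merge of already sorted
        // partition streams; sorting here just demonstrates which index the comparator needs.
        return partitions.stream().flatMap(s -> s).sorted(byNameDesc);
    }
}
{code}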
[jira] [Commented] (IGNITE-24484) RAFT heartbeat coalescing
[ https://issues.apache.org/jira/browse/IGNITE-24484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932420#comment-17932420 ] Vladislav Pyatkov commented on IGNITE-24484: {code:java}
Benchmark                 (fsync)  (replicaCount)  (useHeartbeatCoalescing)   Mode  Cnt      Score      Error  Units
MultiTableBenchmark.test    false               1                     false  thrpt   20  52238.784 ± 2711.107  ops/s
MultiTableBenchmark.test    false               1                      true  thrpt   20  52083.244 ± 3323.483  ops/s
MultiTableBenchmark.test    false               3                     false  thrpt   20  21211.237 ± 1605.232  ops/s
MultiTableBenchmark.test    false               3                      true  thrpt   20  24562.209 ± 1645.234  ops/s
{code} > RAFT heartbeat coalescing > - > > Key: IGNITE-24484 > URL: https://issues.apache.org/jira/browse/IGNITE-24484 > Project: Ignite > Issue Type: Improvement >Reporter: Vladislav Pyatkov >Assignee: Vladislav Pyatkov >Priority: Major > Labels: ignite-3 > Time Spent: 10m > Remaining Estimate: 0h > > h3. Motivation > RAFT heartbeat exists for any RAFT group. Even in the case where several groups are > deployed on one node, each group generates its own heartbeat > message. This leads to a high load on the network layer and CPU. > h3. Definition of done > Heartbeat messages should be sent with a particular frequency for an Ignite > node (independently of the number of RAFT groups). -- This message was sent by Atlassian Jira (v8.20.10#820010)
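The definition of done above (one heartbeat cadence per node, independent of the number of RAFT groups) can be pictured with the following sketch. All names are illustrative, not JRaft API: groups register with a per-destination-node batcher, and one message listing all group ids goes out per interval.
{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hedged sketch of heartbeat coalescing: instead of every RAFT group timer sending its own
// heartbeat, one batched message per destination node is sent per interval.
class HeartbeatCoalescer {
    private final Map<String, Set<String>> groupsByNode = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void register(String destinationNodeId, String raftGroupId) {
        groupsByNode.computeIfAbsent(destinationNodeId, k -> ConcurrentHashMap.newKeySet()).add(raftGroupId);
    }

    void start(long heartbeatIntervalMs) {
        scheduler.scheduleAtFixedRate(() -> groupsByNode.forEach(this::sendBatchedHeartbeat),
                heartbeatIntervalMs, heartbeatIntervalMs, TimeUnit.MILLISECONDS);
    }

    private void sendBatchedHeartbeat(String nodeId, Set<String> groupIds) {
        // One network message per node per interval, listing all groups led from this node.
        // Sending is stubbed out; a real implementation would go through the messaging service.
        System.out.printf("heartbeat -> %s for groups %s%n", nodeId, groupIds);
    }
}
{code}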
[jira] [Commented] (IGNITE-24669) Set Default SQL Plan History to Zero (for H2 Engine Only)
[ https://issues.apache.org/jira/browse/IGNITE-24669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932503#comment-17932503 ] Ignite TC Bot commented on IGNITE-24669: {panel:title=Branch: [pull/11903/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/11903/head] Base: [master] : New Tests (4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1} {color:#8b}Calcite SQL{color} [[tests 4|https://ci2.ignite.apache.org/viewLog.html?buildId=8336958]] * {color:#013220}IgniteCalciteTestSuite: SqlPlanHistoryIntegrationTest.testDefaultHistorySize[sqlEngine=calcite, isClient=false loc=false, isFullyFetched=false] - PASSED{color} * {color:#013220}IgniteCalciteTestSuite: SqlPlanHistoryIntegrationTest.testNoSqlEngineConfiguration[sqlEngine=h2, isClient=false loc=false, isFullyFetched=false] - PASSED{color} * {color:#013220}IgniteCalciteTestSuite: SqlPlanHistoryIntegrationTest.testDefaultHistorySize[sqlEngine=h2, isClient=false loc=false, isFullyFetched=false] - PASSED{color} * {color:#013220}IgniteCalciteTestSuite: SqlPlanHistoryIntegrationTest.testNoSqlConfiguration[sqlEngine=h2, isClient=false loc=false, isFullyFetched=false] - PASSED{color} {panel} [TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=8337043&buildTypeId=IgniteTests24Java8_RunAll] > Set Default SQL Plan History to Zero (for H2 Engine Only) > - > > Key: IGNITE-24669 > URL: https://issues.apache.org/jira/browse/IGNITE-24669 > Project: Ignite > Issue Type: Task >Reporter: Oleg Valuyskiy >Assignee: Oleg Valuyskiy >Priority: Major > Labels: ise > Time Spent: 10m > Remaining Estimate: 0h > > Due to a persistent performance degradation caused by the implementation of > the SQL plan history system view, we propose setting the default SQL plan > history size to zero for the H2 engine. The default SQL plan history size for > Calcite will remain unchanged at 1000 entries. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24675) Sql. Hash join operation may hang for right and outer join
[ https://issues.apache.org/jira/browse/IGNITE-24675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-24675: Ignite Flags: (was: Docs Required,Release Notes Required) > Sql. Hash join operation may hang for right and outer join > --- > > Key: IGNITE-24675 > URL: https://issues.apache.org/jira/browse/IGNITE-24675 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 3.0 >Reporter: Andrey Mashenkov >Assignee: Andrey Mashenkov >Priority: Critical > Labels: ignite-3 > Fix For: 3.1 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > `RightHashJoin.join()` and `FullOuterHashJoin.join()` may fall into an infinite > loop when processing non-matching rows from the right source. > When the right buffer contains exactly the number of rows requested by the > downstream, the algorithm emits these rows but does not notify the downstream of the > end of data, so on the next request it emits the same rows again, and again... -- This message was sent by Atlassian Jira (v8.20.10#820010)
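An illustrative sketch of the end-of-data condition described above follows; it is not the real RightHashJoin code, and the names (Downstream, drainNonMatchingRightRows) are assumptions about the execution model.
{code:java}
import java.util.Iterator;

// Sketch: after draining non-matching right rows, the downstream must be told there is no
// more data, even when exactly `requested` rows were just emitted.
class RightJoinDrainSketch<RowT> {
    interface Downstream<RowT> {
        void push(RowT row);
        void end();
    }

    void drainNonMatchingRightRows(Iterator<RowT> rightRemaining, int requested, Downstream<RowT> downstream) {
        int emitted = 0;
        while (emitted < requested && rightRemaining.hasNext()) {
            downstream.push(rightRemaining.next());
            emitted++;
        }
        // The buggy variant only signalled end-of-data when fewer rows than requested were
        // emitted, so a buffer of exactly `requested` rows was re-emitted on every request.
        // Checking the iterator directly avoids the infinite loop.
        if (!rightRemaining.hasNext()) {
            downstream.end();
        }
    }
}
{code}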
[jira] [Commented] (IGNITE-24532) Sql. SqlLogicTest contstraint/test_not_null_constraint.test is flaky
[ https://issues.apache.org/jira/browse/IGNITE-24532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932512#comment-17932512 ] Evgeny Stanilovsky commented on IGNITE-24532: - Need to be fixed after [1] [1] https://issues.apache.org/jira/browse/IGNITE-24307 > Sql. SqlLogicTest contstraint/test_not_null_constraint.test is flaky > > > Key: IGNITE-24532 > URL: https://issues.apache.org/jira/browse/IGNITE-24532 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Iurii Gerzhedovich >Priority: Major > Labels: ignite-3 > > There are a few rare fails on TC - > https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/8875145 > > {code:java} > java.lang.AssertionError: Not expected result at: > (test_not_null_constraint.test:163). Statement: MERGE INTO t2 dst USING t1 > src ON dst.id = src.id WHEN MATCHED THEN UPDATE SET val = NULL. Expected: > Column 'VAL' does not allow NULLs > Expected: a string containing "Column 'VAL' does not allow NULLs" > but: was nulljava.lang.AssertionError: Not expected result at: > (test_not_null_constraint.test:163). Statement: MERGE INTO t2 dst USING t1 > src ON dst.id = src.id WHEN MATCHED THEN UPDATE SET val = NULL. Expected: > Column 'VAL' does not allow NULLsExpected: a string containing "Column 'VAL' > does not allow NULLs" but: was null at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at > org.apache.ignite.internal.sql.sqllogic.Statement.execute(Statement.java:123) > at > org.apache.ignite.internal.sql.sqllogic.SqlScriptRunner.run(SqlScriptRunner.java:70) > at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base/java.lang.Thread.run(Thread.java:833) {code} > Need to investigate and fix the issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24532) Sql. SqlLogicTest contstraint/test_not_null_constraint.test is flaky
[ https://issues.apache.org/jira/browse/IGNITE-24532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-24532: Ignite Flags: (was: Docs Required,Release Notes Required) > Sql. SqlLogicTest contstraint/test_not_null_constraint.test is flaky > > > Key: IGNITE-24532 > URL: https://issues.apache.org/jira/browse/IGNITE-24532 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Iurii Gerzhedovich >Priority: Major > Labels: ignite-3 > > There are a few rare fails on TC - > https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunAllTests/8875145 > > {code:java} > java.lang.AssertionError: Not expected result at: > (test_not_null_constraint.test:163). Statement: MERGE INTO t2 dst USING t1 > src ON dst.id = src.id WHEN MATCHED THEN UPDATE SET val = NULL. Expected: > Column 'VAL' does not allow NULLs > Expected: a string containing "Column 'VAL' does not allow NULLs" > but: was nulljava.lang.AssertionError: Not expected result at: > (test_not_null_constraint.test:163). Statement: MERGE INTO t2 dst USING t1 > src ON dst.id = src.id WHEN MATCHED THEN UPDATE SET val = NULL. Expected: > Column 'VAL' does not allow NULLsExpected: a string containing "Column 'VAL' > does not allow NULLs" but: was null at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at > org.apache.ignite.internal.sql.sqllogic.Statement.execute(Statement.java:123) > at > org.apache.ignite.internal.sql.sqllogic.SqlScriptRunner.run(SqlScriptRunner.java:70) > at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base/java.lang.Thread.run(Thread.java:833) {code} > Need to investigate and fix the issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-24477) Sql. Add ability to define executor that will be used for public script execution
[ https://issues.apache.org/jira/browse/IGNITE-24477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin reassigned IGNITE-24477: - Assignee: Pavel Pereslegin > Sql. Add ability to define executor that will be used for public script > execution > - > > Key: IGNITE-24477 > URL: https://issues.apache.org/jira/browse/IGNITE-24477 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Pavel Pereslegin >Assignee: Pavel Pereslegin >Priority: Major > Labels: ignite-3 > > Currently public script execution uses ForkJoinPool for async execution. > {code:java} > if (cursor.hasNextResult()) { > cursor.nextResult().whenCompleteAsync(this::processCursor); > return; > } > {code} > For example, thin client accepts user specified executor for... something > (see IgniteClient.Builder.asyncContinuationExecutor(...)). > It might be worth somehow adding the ability to define a pool for > asynchronous processing in the embedded client. -- This message was sent by Atlassian Jira (v8.20.10#820010)
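The CompletableFuture API already allows supplying an executor for the continuation, so the change could be as small as the sketch below; scriptExecutor is a hypothetical user-provided executor, mirroring the IgniteClient.Builder.asyncContinuationExecutor(...) setting mentioned above.
{code:java}
if (cursor.hasNextResult()) {
    // Passing an Executor to whenCompleteAsync(...) replaces the default ForkJoinPool.commonPool().
    // `scriptExecutor` is a hypothetical user-provided executor for the embedded client.
    cursor.nextResult().whenCompleteAsync(this::processCursor, scriptExecutor);
    return;
}
{code}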
[jira] [Updated] (IGNITE-24698) Add new catalog serializers that conform with the new serialization protocol
[ https://issues.apache.org/jira/browse/IGNITE-24698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-24698: -- Summary: Add new catalog serializers that conform with the new serialization protocol (was: Add new catalog serializers that conform with the new serialization protocol.) > Add new catalog serializers that conform with the new serialization protocol > > > Key: IGNITE-24698 > URL: https://issues.apache.org/jira/browse/IGNITE-24698 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Pavel Pereslegin >Priority: Major > Labels: ignite-3 > > In IGNITE-24564, dynamic creation of serializer registry has been added > (using the `CatalogSerializer` annotation). > The target serialization format (v2) should be as follows: > ||Field description||type|| > |PROTOCOL VERSION|short| > |Object type|short| > |Object serialization format version|varint| > |Object payload|...| > In order to migrate to the new serialization data format, we need to do the > following: > 1. Introduce new decorators for input/output which allow to write/read > objects. > {code:Java} > interface CatalogObjectDataInput extends IgniteDataInput { > // Reads an object. > T readObject() throws IOException; > } > interface CatalogObjectDataOutput extends IgniteDataOutput { > // Writes an object. > void writeObject(MarshallableEntry object) throws IOException; > } > {code} > 2. Change CatalogObjectSerializer to use them instead of > IgniteDataInput/IgniteDataOutput > 3. In each serializer container class we need to add a new serializer and > annotate it with CtalogSerializer(version = 2, since = "3.1.0") > 4. Update unmarshal in UpdateLogMarshallerImpl > {code:java} > try (CatalogObjectDataInputImpl input = new > CatalogObjectDataInputImpl(bytes, serializers)) { > short protoVersion = input.readShort(); > switch (protoVersion) { > case 1: > int typeId = input.readShort(); > return (UpdateLogEvent) serializers.get(1, > typeId).readFrom(input); > case 2: > return input.readObject(); > default: > throw new IllegalStateException(format("An object could > not be deserialized because it was using " > + "a newer version of the serialization protocol > [objectVersion={}, supported={}]", protoVersion, PROTOCOL_VERSION)); > } > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > {code} > 5. Simplify marshal in UpdateLogMarshallerImpl > {code:java} > public byte[] marshall(UpdateLogEvent update) { > try (CatalogObjectDataOutputImpl output = new > CatalogObjectDataOutputImpl(INITIAL_BUFFER_CAPACITY, serializers)) { > output.writeShort(PROTOCOL_VERSION); > output.writeObject(update); > return output.array(); > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > } > {code} > This way we will read data depending on the protocol version, but write in > version 2 format and thus ensure a transparent transition from version 1 to > version 2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24698) Sql. Add new serializers that comply with the new serialization protocol.
[ https://issues.apache.org/jira/browse/IGNITE-24698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-24698: -- Ignite Flags: (was: Docs Required,Release Notes Required) > Sql. Add new serializers that comply with the new serialization protocol. > - > > Key: IGNITE-24698 > URL: https://issues.apache.org/jira/browse/IGNITE-24698 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Pavel Pereslegin >Priority: Major > Labels: ignite-3 > > In IGNITE-24564, dynamic creation of serializer registry has been added > (using the `CatalogSerializer` annotation). > The target serialization format (v2) should be as follows: > ||Field description||type|| > |PROTOCOL VERSION|short| > |Object type|short| > |Object serialization format version|varint| > |Object payload|...| > In order to migrate to the new serialization data format, we need to do the > following: > 1. Introduce new decorators for input/output which allow to write/read > objects. > {code:Java} > interface CatalogObjectDataInput extends IgniteDataInput { > // Reads an object. > T readObject() throws IOException; > } > interface CatalogObjectDataOutput extends IgniteDataOutput { > // Writes an object. > void writeObject(MarshallableEntry object) throws IOException; > } > {code} > 2. Change CatalogObjectSerializer to use them instead of > IgniteDataInput/IgniteDataOutput > 3. In each serializer container class we need to add a new serializer and > annotate it with CtalogSerializer(version = 2, since = "3.1.0") > 4. Update unmarshal in UpdateLogMarshallerImpl > {code:java} > try (CatalogObjectDataInputImpl input = new > CatalogObjectDataInputImpl(bytes, serializers)) { > short protoVersion = input.readShort(); > switch (protoVersion) { > case 1: > int typeId = input.readShort(); > return (UpdateLogEvent) serializers.get(1, > typeId).readFrom(input); > case 2: > return input.readObject(); > default: > throw new IllegalStateException(format("An object could > not be deserialized because it was using " > + "a newer version of the serialization protocol > [objectVersion={}, supported={}]", protoVersion, PROTOCOL_VERSION)); > } > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > {code} > 5. Simplify marshal in UpdateLogMarshallerImpl > {code:java} > public byte[] marshall(UpdateLogEvent update) { > try (CatalogObjectDataOutputImpl output = new > CatalogObjectDataOutputImpl(INITIAL_BUFFER_CAPACITY, serializers)) { > output.writeShort(PROTOCOL_VERSION); > output.writeObject(update); > return output.array(); > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > } > {code} > This way we will read data depending on the protocol version, but write in > version 2 format and thus ensure a transparent transition from version 1 to > version 2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24698) Add new catalog serializers that comply with the new serialization protocol.
[ https://issues.apache.org/jira/browse/IGNITE-24698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-24698: -- Summary: Add new catalog serializers that comply with the new serialization protocol. (was: Sql. Add new serializers that comply with the new serialization protocol.) > Add new catalog serializers that comply with the new serialization protocol. > > > Key: IGNITE-24698 > URL: https://issues.apache.org/jira/browse/IGNITE-24698 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Pavel Pereslegin >Priority: Major > Labels: ignite-3 > > In IGNITE-24564, dynamic creation of serializer registry has been added > (using the `CatalogSerializer` annotation). > The target serialization format (v2) should be as follows: > ||Field description||type|| > |PROTOCOL VERSION|short| > |Object type|short| > |Object serialization format version|varint| > |Object payload|...| > In order to migrate to the new serialization data format, we need to do the > following: > 1. Introduce new decorators for input/output which allow to write/read > objects. > {code:Java} > interface CatalogObjectDataInput extends IgniteDataInput { > // Reads an object. > T readObject() throws IOException; > } > interface CatalogObjectDataOutput extends IgniteDataOutput { > // Writes an object. > void writeObject(MarshallableEntry object) throws IOException; > } > {code} > 2. Change CatalogObjectSerializer to use them instead of > IgniteDataInput/IgniteDataOutput > 3. In each serializer container class we need to add a new serializer and > annotate it with CtalogSerializer(version = 2, since = "3.1.0") > 4. Update unmarshal in UpdateLogMarshallerImpl > {code:java} > try (CatalogObjectDataInputImpl input = new > CatalogObjectDataInputImpl(bytes, serializers)) { > short protoVersion = input.readShort(); > switch (protoVersion) { > case 1: > int typeId = input.readShort(); > return (UpdateLogEvent) serializers.get(1, > typeId).readFrom(input); > case 2: > return input.readObject(); > default: > throw new IllegalStateException(format("An object could > not be deserialized because it was using " > + "a newer version of the serialization protocol > [objectVersion={}, supported={}]", protoVersion, PROTOCOL_VERSION)); > } > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > {code} > 5. Simplify marshal in UpdateLogMarshallerImpl > {code:java} > public byte[] marshall(UpdateLogEvent update) { > try (CatalogObjectDataOutputImpl output = new > CatalogObjectDataOutputImpl(INITIAL_BUFFER_CAPACITY, serializers)) { > output.writeShort(PROTOCOL_VERSION); > output.writeObject(update); > return output.array(); > } catch (Throwable t) { > throw new CatalogMarshallerException(t); > } > } > {code} > This way we will read data depending on the protocol version, but write in > version 2 format and thus ensure a transparent transition from version 1 to > version 2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24695) Assert replication group type on enlistment
[ https://issues.apache.org/jira/browse/IGNITE-24695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-24695: --- Description: enlist() and assignCommitPartition() in ReadWriteTransaction should assert that the replicationGroupId has the correct type: that is, TablePartitionId for per-table partitions and ZonePartitionId for per-zone partitions. > Assert replication group type on enlistment > --- > > Key: IGNITE-24695 > URL: https://issues.apache.org/jira/browse/IGNITE-24695 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > > enlist() and assignCommitPartition() in ReadWriteTransaction should assert > that the replicationGroupId has the correct type: that is, TablePartitionId > for per-table partitions and ZonePartitionId for per-zone partitions. -- This message was sent by Atlassian Jira (v8.20.10#820010)
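A hedged sketch of such an assertion is shown below. TablePartitionId and ZonePartitionId are the types referenced by the ticket; the ReplicationGroupId parameter type and the colocationEnabled flag are assumptions standing in for however the code distinguishes per-zone from per-table partitions.
{code:java}
// Sketch of the proposed check in enlist() / assignCommitPartition(); not the actual code.
private static void assertReplicationGroupType(ReplicationGroupId groupId, boolean colocationEnabled) {
    if (colocationEnabled) {
        assert groupId instanceof ZonePartitionId : "Expected a zone partition id, got " + groupId;
    } else {
        assert groupId instanceof TablePartitionId : "Expected a table partition id, got " + groupId;
    }
}
{code}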
[jira] [Created] (IGNITE-24695) Assert replication group type on enlistment
Roman Puchkovskiy created IGNITE-24695: -- Summary: Assert replication group type on enlistment Key: IGNITE-24695 URL: https://issues.apache.org/jira/browse/IGNITE-24695 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Assignee: Roman Puchkovskiy -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24695) Assert replication group type on enlistment
[ https://issues.apache.org/jira/browse/IGNITE-24695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-24695: --- Epic Link: IGNITE-22115 > Assert replication group type on enlistment > --- > > Key: IGNITE-24695 > URL: https://issues.apache.org/jira/browse/IGNITE-24695 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > > enlist() and assignCommitPartition() in ReadWriteTransaction should assert > that the replicationGroupId has the correct type: that is, TablePartitionId > for per-table partitions and ZonePartitionId for per-zone partitions. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-24696) Make waits in TableManager interruptible
Roman Puchkovskiy created IGNITE-24696: -- Summary: Make waits in TableManager interruptible Key: IGNITE-24696 URL: https://issues.apache.org/jira/browse/IGNITE-24696 Project: Ignite Issue Type: Improvement Reporter: Roman Puchkovskiy Assignee: Roman Puchkovskiy -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24696) Make waits in TableManager interruptible
[ https://issues.apache.org/jira/browse/IGNITE-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-24696: --- Description: Currently, waits are not interruptible (as they use join()), so, if the awaited future never gets completed due to some problem, the executing test just hangs forever making it impossible for JUnit to fail it by timeout. We could use IgniteUtils#getInterruptibly() to make the waits interruptible. > Make waits in TableManager interruptible > > > Key: IGNITE-24696 > URL: https://issues.apache.org/jira/browse/IGNITE-24696 > Project: Ignite > Issue Type: Improvement >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > > Currently, waits are not interruptible (as they use join()), so, if the > awaited future never gets completed due to some problem, the executing test > just hangs forever making it impossible for JUnit to fail it by timeout. > We could use IgniteUtils#getInterruptibly() to make the waits interruptible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
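A minimal sketch of the proposed change follows. IgniteUtils#getInterruptibly is the helper named in the ticket; its exact signature and checked exceptions are assumed here, and Table is only an example result type.
{code:java}
Table awaitTable(CompletableFuture<Table> tableFuture) throws InterruptedException {
    // Instead of tableFuture.join(), which ignores interrupts and can hang a test forever,
    // wait in a way that reacts to Thread.interrupt(), letting JUnit fail the test by timeout.
    return IgniteUtils.getInterruptibly(tableFuture);
}
{code}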
[jira] [Created] (IGNITE-24694) JRaft should be able to replay the log from a non-zero position
Aleksandr Polovtsev created IGNITE-24694: Summary: JRaft should be able to replay the log from a non-zero position Key: IGNITE-24694 URL: https://issues.apache.org/jira/browse/IGNITE-24694 Project: Ignite Issue Type: Improvement Reporter: Aleksandr Polovtsev Assignee: Aleksandr Polovtsev -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24694) JRaft should be able to replay the log from a non-zero position
[ https://issues.apache.org/jira/browse/IGNITE-24694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtsev updated IGNITE-24694: - Description: When a new table processor is added to the `ZonePartitionRaftListener` its storage gets initialized with some information, like the last applied index and Raft group configuration. However, a node can die or be restarted before this information gets flushed onto a persistent storage which means that upon the consecutive startup, this storage will return 0 as its last applied index. Since on startup we use the minimum last applied index across all storages during Raft recovery, this value will also be 0 and JRaft will think that it needs to replay the log from the beginning of time, while actually this came from a storage for an empty table, and its applied index shouldn't even be taken into account. An even bigger problem is that the log might have been truncated and cannot be restored from the 0 index, so the node won't even be able to start. It is proposed to modify JRaft recovery procedure to replay the log from the smallest available index in case the index from the startup information provided by our storages is equal to 0. > JRaft should be able to replay the log from a non-zero position > --- > > Key: IGNITE-24694 > URL: https://issues.apache.org/jira/browse/IGNITE-24694 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtsev >Assignee: Aleksandr Polovtsev >Priority: Major > Labels: ignite-3 > > When a new table processor is added to the `ZonePartitionRaftListener` its > storage gets initialized with some information, like the last applied index > and Raft group configuration. However, a node can die or be restarted before > this information gets flushed onto a persistent storage which means that upon > the consecutive startup, this storage will return 0 as its last applied > index. Since on startup we use the minimum last applied index across all > storages during Raft recovery, this value will also be 0 and JRaft will think > that it needs to replay the log from the beginning of time, while actually > this came from a storage for an empty table, and its applied index shouldn't > even be taken into account. An even bigger problem is that the log might have > been truncated and cannot be restored from the 0 index, so the node won't > even be able to start. > It is proposed to modify JRaft recovery procedure to replay the log from the > smallest available index in case the index from the startup information > provided by our storages is equal to 0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24694) JRaft should be able to replay the log from a non-zero position
[ https://issues.apache.org/jira/browse/IGNITE-24694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtsev updated IGNITE-24694: - Description: When a new table processor is added to the {{ZonePartitionRaftListener}}, its storage gets initialized with some information, like the last applied index and Raft group configuration. However, a node can die or be restarted before this information gets flushed to persistent storage, which means that upon a subsequent startup, this storage will return 0 as its last applied index. Since on startup we use the minimum last applied index across all storages during Raft recovery, this value will also be 0 and JRaft will think that it needs to replay the log from the beginning of time, while in fact this value came from the storage of an empty table, and its applied index shouldn't even be taken into account. An even bigger problem is that the log might have been truncated and cannot be restored from the 0 index, so the node won't even be able to start. It is proposed to modify JRaft recovery procedure to replay the log from the smallest available index in case the index from the startup information provided by our storages is equal to 0. was: When a new table processor is added to the `ZonePartitionRaftListener` its storage gets initialized with some information, like the last applied index and Raft group configuration. However, a node can die or be restarted before this information gets flushed onto a persistent storage which means that upon the consecutive startup, this storage will return 0 as its last applied index. Since on startup we use the minimum last applied index across all storages during Raft recovery, this value will also be 0 and JRaft will think that it needs to replay the log from the beginning of time, while actually this came from a storage for an empty table, and its applied index shouldn't even be taken into account. An even bigger problem is that the log might have been truncated and cannot be restored from the 0 index, so the node won't even be able to start. It is proposed to modify JRaft recovery procedure to replay the log from the smallest available index in case the index from the startup information provided by our storages is equal to 0. > JRaft should be able to replay the log from a non-zero position > --- > > Key: IGNITE-24694 > URL: https://issues.apache.org/jira/browse/IGNITE-24694 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtsev >Assignee: Aleksandr Polovtsev >Priority: Major > Labels: ignite-3 > > When a new table processor is added to the {{ZonePartitionRaftListener}}, its > storage gets initialized with some information, like the last applied index > and Raft group configuration. However, a node can die or be restarted before > this information gets flushed to persistent storage, which means that upon > a subsequent startup, this storage will return 0 as its last applied > index. Since on startup we use the minimum last applied index across all > storages during Raft recovery, this value will also be 0 and JRaft will think > that it needs to replay the log from the beginning of time, while in fact > this value came from the storage of an empty table, and its applied index shouldn't > even be taken into account. An even bigger problem is that the log might have > been truncated and cannot be restored from the 0 index, so the node won't > even be able to start.
> It is proposed to modify JRaft recovery procedure to replay the log from the > smallest available index in case the index from the startup information > provided by our storages is equal to 0. -- This message was sent by Atlassian Jira (v8.20.10#820010)
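The recovery rule proposed above boils down to choosing a different replay start index; an illustrative sketch (not JRaft API) follows.
{code:java}
// If the minimum applied index across storages is 0 (i.e. it came from an uninitialized,
// empty table storage), replay from the first index actually present in the log instead of
// failing on a truncated prefix. Otherwise continue right after the applied index, as today.
static long replayStartIndex(long minAppliedIndexAcrossStorages, long firstLogIndex) {
    if (minAppliedIndexAcrossStorages == 0) {
        return firstLogIndex;
    }
    return minAppliedIndexAcrossStorages + 1;
}
{code}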
[jira] [Created] (IGNITE-24692) Sql. Fix correct mappings for primary keys
Evgeny Stanilovsky created IGNITE-24692: --- Summary: Sql. Fix correct mappings for primary keys Key: IGNITE-24692 URL: https://issues.apache.org/jira/browse/IGNITE-24692 Project: Ignite Issue Type: Task Components: sql Affects Versions: 3.0 Reporter: Evgeny Stanilovsky Assignee: Evgeny Stanilovsky Need to fix columns mapping for selectivity calculations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-24689) Ignite Website: update top banners
[ https://issues.apache.org/jira/browse/IGNITE-24689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erlan Aytpaev resolved IGNITE-24689. Resolution: Fixed > Ignite Website: update top banners > -- > > Key: IGNITE-24689 > URL: https://issues.apache.org/jira/browse/IGNITE-24689 > Project: Ignite > Issue Type: Task > Components: website >Reporter: Erlan Aytpaev >Assignee: Erlan Aytpaev >Priority: Minor > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (IGNITE-24689) Ignite Website: update top banners
[ https://issues.apache.org/jira/browse/IGNITE-24689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erlan Aytpaev closed IGNITE-24689. -- > Ignite Website: update top banners > -- > > Key: IGNITE-24689 > URL: https://issues.apache.org/jira/browse/IGNITE-24689 > Project: Ignite > Issue Type: Task > Components: website >Reporter: Erlan Aytpaev >Assignee: Erlan Aytpaev >Priority: Minor > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-24562) Avoid "Message handling has been too long" message in custom pools
[ https://issues.apache.org/jira/browse/IGNITE-24562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Pligin reassigned IGNITE-24562: Assignee: Ivan Bessonov > Avoid "Message handling has been too long" message in custom pools > -- > > Key: IGNITE-24562 > URL: https://issues.apache.org/jira/browse/IGNITE-24562 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Bessonov >Assignee: Ivan Bessonov >Priority: Major > Labels: ignite-3 > > Such a message only makes sense if we block the network threads. Other pools > are specifically designed to process heavier handlers, and they should not > produce any warnings in case of long processing. -- This message was sent by Atlassian Jira (v8.20.10#820010)
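For illustration only, a sketch of the intended behavior with made-up names (this is not the actual Ignite code): the warning would be emitted only when a long-running handler occupied a network thread, while custom pools stay silent.
{code:java}
// Illustrative sketch; class, method and field names are hypothetical.
final class ProcessingTimeWatcher {
    private static final System.Logger LOG = System.getLogger(ProcessingTimeWatcher.class.getName());

    static void maybeWarn(boolean executedOnNetworkThread, long durationMs, long thresholdMs) {
        // Custom pools are expected to run heavy handlers, so only network threads warn.
        if (executedOnNetworkThread && durationMs > thresholdMs) {
            LOG.log(System.Logger.Level.WARNING,
                    "Message handling has been too long [durationMs=" + durationMs + "]");
        }
    }
}
{code}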
[jira] [Assigned] (IGNITE-24550) Implement throttling metrics
[ https://issues.apache.org/jira/browse/IGNITE-24550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Pligin reassigned IGNITE-24550: Assignee: Ivan Bessonov > Implement throttling metrics > > > Key: IGNITE-24550 > URL: https://issues.apache.org/jira/browse/IGNITE-24550 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Bessonov >Assignee: Ivan Bessonov >Priority: Major > Labels: ignite-3 > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-24548) Compare throttling strategies in Ignite 3
[ https://issues.apache.org/jira/browse/IGNITE-24548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Pligin reassigned IGNITE-24548: Assignee: Ivan Bessonov > Compare throttling strategies in Ignite 3 > - > > Key: IGNITE-24548 > URL: https://issues.apache.org/jira/browse/IGNITE-24548 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Bessonov >Assignee: Ivan Bessonov >Priority: Major > Labels: ignite-3 > > We need to compare target-ratio-based and speed-based throttling > implementations in a number of heavy tests and choose the default. > We could provide an option to change the default through the unmanaged part > of the configuration. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24685) Add JMH benchmark for TPC-H queries
[ https://issues.apache.org/jira/browse/IGNITE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Korotkov updated IGNITE-24685: - Description: Add a helper class to work with the TPC-H queries to allow creating reproducers for problems in the Calcite engine. Add a JMH benchmark for TPC-H queries using this helper. > Add JMH benchmark for TPC-H queries > --- > > Key: IGNITE-24685 > URL: https://issues.apache.org/jira/browse/IGNITE-24685 > Project: Ignite > Issue Type: Task >Reporter: Sergey Korotkov >Priority: Minor > > Add a helper class to work with the TPC-H queries to allow creating reproducers > for problems in the Calcite engine. > Add a JMH benchmark for TPC-H queries using this helper. -- This message was sent by Atlassian Jira (v8.20.10#820010)
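As a rough illustration of what such a benchmark could look like, here is a minimal JMH skeleton; TpchHelper is a placeholder standing in for the proposed helper class, and the stubbed query loading and execution are assumptions, not the real implementation.
{code:java}
// Sketch only: the JMH annotations are standard, TpchHelper is a hypothetical stand-in.
package org.example.tpch;

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class TpchQueryBenchmark {
    /** TPC-H query number to run. */
    @Param({"1", "5", "21"})
    private String queryId;

    private String sql;

    @Setup
    public void loadQuery() {
        sql = TpchHelper.queryText(queryId);
    }

    @Benchmark
    public Object runQuery() {
        // Return the result so JMH does not dead-code-eliminate the call.
        return TpchHelper.execute(sql);
    }

    /** Placeholder for the helper class the ticket proposes. */
    static class TpchHelper {
        static String queryText(String id) {
            return "SELECT 1 /* TPC-H Q" + id + " would be loaded from resources */";
        }

        static Object execute(String sql) {
            return sql; // A real benchmark would run the query through the SQL engine.
        }
    }
}
{code}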
[jira] [Assigned] (IGNITE-24685) Add JMH benchmark for TPC-H queries
[ https://issues.apache.org/jira/browse/IGNITE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Korotkov reassigned IGNITE-24685: Assignee: Sergey Korotkov > Add JMH benchmark for TPC-H queries > --- > > Key: IGNITE-24685 > URL: https://issues.apache.org/jira/browse/IGNITE-24685 > Project: Ignite > Issue Type: Task >Reporter: Sergey Korotkov >Assignee: Sergey Korotkov >Priority: Minor > > Add a helper class to work with the TPC-H queries to allow creating reproducers > for problems in the Calcite engine. > Add a JMH benchmark for TPC-H queries using this helper. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24685) Add JMH benchmark for TPC-H queries
[ https://issues.apache.org/jira/browse/IGNITE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Korotkov updated IGNITE-24685: - Ignite Flags: (was: Docs Required,Release Notes Required) > Add JMH benchmark for TPC-H queries > --- > > Key: IGNITE-24685 > URL: https://issues.apache.org/jira/browse/IGNITE-24685 > Project: Ignite > Issue Type: Task >Reporter: Sergey Korotkov >Assignee: Sergey Korotkov >Priority: Minor > > Add a helper class to work with the TPC-H queries to allow creating reproducers > for problems in the Calcite engine. > Add a JMH benchmark for TPC-H queries using this helper. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-24301) Sql schema. Fix Thin client protocol to use schema names
[ https://issues.apache.org/jira/browse/IGNITE-24301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Iurii Gerzhedovich reassigned IGNITE-24301: --- Assignee: (was: Pavel Tupitsyn) > Sql schema. Fix Thin client protocol to use schema names > > > Key: IGNITE-24301 > URL: https://issues.apache.org/jira/browse/IGNITE-24301 > Project: Ignite > Issue Type: Improvement > Components: thin client >Reporter: Evgeny Stanilovsky >Priority: Major > Labels: ignite-3 > Fix For: 3.1 > > > A table can be unambiguously identified either by id or by a pair of schema name > and table name. > So, the Thin Client protocol should be aware of the schema name a table belongs to. > ClientTables, *ClientTableGetRequestV2* and *ClientTablesGetRequestV2* should > either read/write the table's canonical name (see `QualifiedName.toCanonicalName`) > or transfer the schema name along with the table name. > I'd prefer the second approach with 2 separate Strings, to avoid excessive > parsing and quoting of names. > Check the mention of this issue after ignite-24033 was merged -- This message was sent by Atlassian Jira (v8.20.10#820010)
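A minimal sketch of the preferred option (two separate strings), using plain java.io streams instead of the real thin-client packer, purely to illustrate the design choice of avoiding canonical-name quoting and parsing; all names below are illustrative.
{code:java}
// Illustration only; the real protocol uses the thin-client packer, not DataOutput/DataInput.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

final class TableRefCodec {
    record TableRef(String schemaName, String tableName) { }

    // Writing the schema and table as two separate fields means neither side
    // has to quote or parse a canonical name.
    static void write(DataOutput out, TableRef ref) throws IOException {
        out.writeUTF(ref.schemaName());
        out.writeUTF(ref.tableName());
    }

    static TableRef read(DataInput in) throws IOException {
        return new TableRef(in.readUTF(), in.readUTF());
    }
}
{code}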
[jira] [Updated] (IGNITE-24526) Move PartitionReplicaListener#processRequest to ZonePartitionReplicaListener
[ https://issues.apache.org/jira/browse/IGNITE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-24526: --- Description: TBD, see the first comment > Move PartitionReplicaListener#processRequest to ZonePartitionReplicaListener > > > Key: IGNITE-24526 > URL: https://issues.apache.org/jira/browse/IGNITE-24526 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > > TBD, see the first comment -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-24526) Move PartitionReplicaListener#processRequest to ZonePartitionReplicaListener
[ https://issues.apache.org/jira/browse/IGNITE-24526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17932245#comment-17932245 ] Roman Puchkovskiy commented on IGNITE-24526: It is still to be decided what should be done under this issue. PartitionReplicaListener#processRequest(), apart from logic specific to each type of request, has some generic duties, but they are all specific to table-aware requests whose processing remains in PartitionReplicaListener, so these duties should remain in PRL and should not be implemented in ZonePartitionReplicaListener. What might need to be implemented, however, is the introduction of an interface (like TableReplicaProcessor) with a handleRequest() method (accepting a request, an optional boolean isPrimary, and an optional long leaseStartTime) that would call PRL#processRequest(); handleRequest() would be called from ZPRL after invoking ensureReplicaIsPrimary(). This is to be decided after we decide how we approach IGNITE-24380. > Move PartitionReplicaListener#processRequest to ZonePartitionReplicaListener > > > Key: IGNITE-24526 > URL: https://issues.apache.org/jira/browse/IGNITE-24526 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > -- This message was sent by Atlassian Jira (v8.20.10#820010)
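A possible shape of the interface sketched in the comment, with a tentative signature; this is an assumption drawn from the comment's wording, not committed API, and the request parameter type is a stand-in.
{code:java}
// Tentative sketch; parameter types and nullability follow the comment's wording.
import java.util.concurrent.CompletableFuture;
import org.jetbrains.annotations.Nullable;

interface TableReplicaProcessor {
    /**
     * Delegates to PartitionReplicaListener#processRequest(); intended to be called
     * from ZonePartitionReplicaListener after ensureReplicaIsPrimary().
     *
     * @param request        table-aware replica request (Object stands in for the actual request type).
     * @param isPrimary      whether this replica is primary, if known.
     * @param leaseStartTime lease start time, if applicable.
     */
    CompletableFuture<?> handleRequest(Object request, @Nullable Boolean isPrimary, @Nullable Long leaseStartTime);
}
{code}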
[jira] [Assigned] (IGNITE-24553) Refine Data Consistency team related system properties
[ https://issues.apache.org/jira/browse/IGNITE-24553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Pligin reassigned IGNITE-24553: Assignee: Mikhail Efremov > Refine Data Consistency team related system properties > --- > > Key: IGNITE-24553 > URL: https://issues.apache.org/jira/browse/IGNITE-24553 > Project: Ignite > Issue Type: Task >Reporter: Mikhail Efremov >Assignee: Mikhail Efremov >Priority: Major > Labels: ignite-3 > > *Description* > After https://issues.apache.org/jira/browse/IGNITE-24272 there are system > properties to refine: whether they should be a configuration entry, are > test-related, or are intended to be removed in the future. This ticket contains > properties that are related to or had been created by Data Consistency team > members. The properties list is provided in the DoD section below. > *Motivation* > We want to configure an Ignite cluster and its nodes through local or > distributed configuration and leave system properties only for the following > allowed purposes: > # a property is for test-only purposes; > # a property will be removed later and is annotated with a TODO commentary > with a corresponding JIRA ticket. > All other system properties should be done as configuration entries. > *Definition of Done* > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_DISABLED}} is provided. > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_PRECISION}} is provided. > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_THRESHOLD}} is provided. > # Decision on {{IGNITE_JVM_PAUSE_DETECTOR_LAST_EVENTS_COUNT}} is provided. > # Decision on {{RESOURCE_VACUUM_INTERVAL_MILLISECONDS}} is provided. > # Decision on {{LEASE_STATISTICS_PRINT_ONCE_PER_ITERATIONS}} is provided. > # Decision on {{IGNITE_TIMEOUT_WORKER_SLEEP_INTERVAL}} is provided. > # Decision on {{IGNITE_LOG_THROTTLE_INTERVAL_MS}} is provided. > # Decision on {{IGNITE_SKIP_REPLICATION_IN_BENCHMARK}} is provided. > # Decision on {{IGNITE_SKIP_STORAGE_UPDATE_IN_BENCHMARK}} is provided. > # Decision on {{IGNITE_ZONE_BASED_REPLICATION}} is provided. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-24693) Run QA benchmarks to test deadlock prevention with waiting
Denis Chudov created IGNITE-24693: - Summary: Run QA benchmarks to test deadlock prevention with waiting Key: IGNITE-24693 URL: https://issues.apache.org/jira/browse/IGNITE-24693 Project: Ignite Issue Type: Task Reporter: Denis Chudov The default deadlock prevention policy forces the younger transaction's operation to fail and the older one to wait on a lock conflict. It can be configured so that the younger tx would wait for a small timeout (say, 200 ms) to acquire the lock when there is a lock conflict. We should test this on QA benchmarks to decide whether we should keep the existing defaults or change them. -- This message was sent by Atlassian Jira (v8.20.10#820010)
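For illustration, a tiny sketch of the two behaviors being compared; this is not the Ignite lock manager, and the class and method names are hypothetical.
{code:java}
// Hypothetical sketch: on a lock conflict the younger transaction either fails
// immediately (current default) or waits a short, bounded time for the lock.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

final class YoungerTxConflictPolicy {
    static boolean tryAcquire(Lock contendedLock, long waitTimeoutMs) throws InterruptedException {
        if (waitTimeoutMs <= 0) {
            return false; // current default: fail the younger tx right away
        }
        // proposed variant: wait up to a small timeout, e.g. 200 ms
        return contendedLock.tryLock(waitTimeoutMs, TimeUnit.MILLISECONDS);
    }
}
{code}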
[jira] [Updated] (IGNITE-24685) Add TPC-H helper and JMH benchmark for TPC-H queries
[ https://issues.apache.org/jira/browse/IGNITE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Korotkov updated IGNITE-24685: - Summary: Add TPC-H helper and JMH benchmark for TPC-H queries (was: Add JMH benchmark for TPC-H queries) > Add TPC-H helper and JMH benchmark for TPC-H queries > > > Key: IGNITE-24685 > URL: https://issues.apache.org/jira/browse/IGNITE-24685 > Project: Ignite > Issue Type: Task >Reporter: Sergey Korotkov >Assignee: Sergey Korotkov >Priority: Minor > Labels: ise > > Add a helper class to work with the TPC-H queries to allow creating reproducers > for problems in the Calcite engine. > Add a JMH benchmark for TPC-H queries using this helper. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-24689) Ignite Website: update top banners
Erlan Aytpaev created IGNITE-24689: -- Summary: Ignite Website: update top banners Key: IGNITE-24689 URL: https://issues.apache.org/jira/browse/IGNITE-24689 Project: Ignite Issue Type: Task Components: website Reporter: Erlan Aytpaev Assignee: Erlan Aytpaev -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-24685) Add JMH benchmark for TPC-H queries
[ https://issues.apache.org/jira/browse/IGNITE-24685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Korotkov updated IGNITE-24685: - Labels: ise (was: ) > Add JMH benchmark for TPC-H queries > --- > > Key: IGNITE-24685 > URL: https://issues.apache.org/jira/browse/IGNITE-24685 > Project: Ignite > Issue Type: Task >Reporter: Sergey Korotkov >Assignee: Sergey Korotkov >Priority: Minor > Labels: ise > > Add a helper class to work with the TPC-H queries to allow creating reproducers > for problems in the Calcite engine. > Add a JMH benchmark for TPC-H queries using this helper. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-24699) Sql. Query with ORDER BY returns unsorted result
[ https://issues.apache.org/jira/browse/IGNITE-24699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov reassigned IGNITE-24699: - Assignee: Konstantin Orlov > Sql. Query with ORDER BY returns unsorted result > > > Key: IGNITE-24699 > URL: https://issues.apache.org/jira/browse/IGNITE-24699 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Konstantin Orlov >Assignee: Konstantin Orlov >Priority: Major > Labels: ignite-3 > > Run this test: > {code:java} > // org.apache.ignite.internal.sql.engine.ItSecondaryIndexTest > @Test > void ensurePartitionStreamsAreMergedCorrectlyWithRegardToProjection() { > assertQuery("SELECT /*+ FORCE_INDEX(" + NAME_CITY_IDX + ") */ name FROM > Developer WHERE id % 2 = 0 ORDER BY name DESC") > .matches(containsIndexScan("PUBLIC", "DEVELOPER", NAME_CITY_IDX)) > .matches(not(containsSubPlan("Sort"))) > .returns("Zimmer") > .returns("Stravinsky") > .returns("Strauss") > .returns("Shubert") > .returns("Rihter") > .returns("Prokofiev") > .returns("O'Halloran") > .returns("Einaudi") > .returns("Chaikovsky") > .returns("Beethoven") > .returns("Arnalds") > .ordered() > .check(); > } {code} > It fails with > {code} > org.opentest4j.AssertionFailedError: Collections are not equal (position 0): > Expected: Stravinsky > Actual: Rihter > {code} > The reason is that the comparator used to merge the partition streams is created > based on the collation returned by the relational node, which in turn may be adjusted > with regard to a projection merged into the rel; however, the merge of partitions > happens before the projection, so the comparator must account only for the > {{requiredColumns}} bitset. -- This message was sent by Atlassian Jira (v8.20.10#820010)
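To make the last point concrete, a hypothetical sketch of the kind of remapping involved: the collation's source column indexes have to be translated into positions within the trimmed row that contains only the columns from {{requiredColumns}}, because that is the layout the rows have at merge time, before the projection runs. The names below are illustrative, not the actual engine code.
{code:java}
// Hypothetical illustration; not the actual Calcite/Ignite execution code.
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

final class MergeCollationRemap {
    /**
     * Translates collation keys expressed as source-table column indexes into
     * positions within a row that contains only the columns set in requiredColumns.
     */
    static int[] remap(int[] collationKeys, BitSet requiredColumns) {
        Map<Integer, Integer> sourceToTrimmed = new HashMap<>();
        int pos = 0;
        for (int col = requiredColumns.nextSetBit(0); col >= 0; col = requiredColumns.nextSetBit(col + 1)) {
            sourceToTrimmed.put(col, pos++);
        }

        int[] remapped = new int[collationKeys.length];
        for (int i = 0; i < collationKeys.length; i++) {
            remapped[i] = sourceToTrimmed.get(collationKeys[i]);
        }
        return remapped;
    }
}
{code}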
[jira] [Updated] (IGNITE-24698) Sql. Add new serializers that comply with the new serialization protocol.
[ https://issues.apache.org/jira/browse/IGNITE-24698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-24698: -- Description: In IGNITE-24564, dynamic creation of the serializer registry has been added (using the `CatalogSerializer` annotation). The target serialization format (v2) should be as follows: ||Field description||type|| |PROTOCOL VERSION|short| |Object type|short| |Object serialization format version|varint| |Object payload|...| In order to migrate to the new serialization data format, we need to do the following: 1. Introduce new decorators for input/output which allow writing/reading objects. {code:Java} interface CatalogObjectDataInput extends IgniteDataInput { // Reads an object. T readObject() throws IOException; } interface CatalogObjectDataOutput extends IgniteDataOutput { // Writes an object. void writeObject(MarshallableEntry object) throws IOException; } {code} 2. Change CatalogObjectSerializer to use them instead of IgniteDataInput/IgniteDataOutput. 3. In each serializer container class we need to add a new serializer and annotate it with CatalogSerializer(version = 2, since = "3.1.0") 4. Update unmarshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataInputImpl input = new CatalogObjectDataInputImpl(bytes, serializers)) { short protoVersion = input.readShort(); switch (protoVersion) { case 1: int typeId = input.readShort(); return (UpdateLogEvent) serializers.get(1, typeId).readFrom(input); case 2: return input.readObject(); default: throw new IllegalStateException(format("An object could not be deserialized because it was using " + "a newer version of the serialization protocol [objectVersion={}, supported={}]", protoVersion, PROTOCOL_VERSION)); } } catch (Throwable t) { throw new CatalogMarshallerException(t); } {code} 5. Simplify marshal in UpdateLogMarshallerImpl {code:java} public byte[] marshall(UpdateLogEvent update) { try (CatalogObjectDataOutputImpl output = new CatalogObjectDataOutputImpl(INITIAL_BUFFER_CAPACITY, serializers)) { output.writeShort(PROTOCOL_VERSION); output.writeObject(update); return output.array(); } catch (Throwable t) { throw new CatalogMarshallerException(t); } } {code} This way we will read data depending on the protocol version, but write in version 2 format and thus ensure a transparent transition from version 1 to version 2. was: In IGNITE-24564, dynamic creation of the serializer registry has been added (using the `CatalogSerializer` annotation). The target serialization format (v2) should be as follows: ||Field description||type|| |PROTOCOL VERSION|short| |Object type|short| |Object serialization format version|varint| |Object payload|...| In order to migrate to the new serialization data format, we need to do the following: 1. Introduce new decorators for input/output which allow writing/reading objects. {code:Java} interface CatalogObjectDataInput extends IgniteDataInput { // Reads an object. T readObject() throws IOException; } interface CatalogObjectDataOutput extends IgniteDataOutput { // Writes an object. void writeObject(MarshallableEntry object) throws IOException; } {code} 2. Change CatalogObjectSerializer to use them instead of IgniteDataInput/IgniteDataOutput. 3. In each serializer container class we need to add a new serializer and annotate it with CatalogSerializer(version = 2, since = "3.1.0") 4.
Update unmarshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataInput input = new CatalogObjectDataInputImpl(new IgniteUnsafeDataInput(bytes), serializers)) { short protoVersion = input.readShort(); switch (protoVersion) { case 1: int typeId = input.readShort(); return (UpdateLogEvent) serializers.get(1, typeId).readFrom(input); case 2: return input.readObject(); default: throw new IllegalStateException(format("An object could not be deserialized because it was using a newer" + " version of the serialization protocol [supported={}, actual={}].", PROTOCOL_VERSION, protoVersion)); } } catch (Throwable t) { throw new CatalogMarshallerException(t); } {code} 5. Simplify marshal in UpdateLogMarshallerImpl {code:java} try (CatalogObjectDataOutputImpl output = new CatalogObjectDataOutputImpl(PROTOCOL_VERSION, INITIAL_BUFFER_CAPACITY, serializers)) { output.writeObject(update); return output.array