[jira] [Resolved] (IGNITE-21002) Document the ability to execute SQL multi-statement queries.

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov resolved IGNITE-21002.
---
Resolution: Fixed

Looks like the documentation already contains a section about script execution: 
https://ignite.apache.org/docs/ignite3/latest/developers-guide/sql/sql-api#sql-scripts

> Document the ability to execute SQL multi-statement queries.
> 
>
> Key: IGNITE-21002
> URL: https://issues.apache.org/jira/browse/IGNITE-21002
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to add documentation about the ability to execute SQL scripts.
> One thing to remember to mention is that if the user leaves an open 
> transaction initiated by the script, it will be rolled back after the script 
> completes execution.
> For example:
> {code:sql}
> CREATE TABLE TEST(ID INT);
> START TRANSACTION;
> INSERT INTO TEST VALUES(1);
> {code}
> Since the transaction remains open, all changes made within it will be undone 
> when the script completes.
> Another thing is that {{COMMIT}} does nothing without an open script 
> transaction.
> {code:sql}
> CREATE TABLE TEST(ID INT);
> COMMIT;
> COMMIT;
> {code}
> The script must execute without errors.
> h4. JDBC
> TX control statements are not supported in non-autocommit mode (see 
> IGNITE-21020).
> At the moment, AI3 executes statements only in lazy mode, and this mode does 
> not quite fit the JDBC standard (see IGNITE-21133).
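
A minimal sketch of the rollback behaviour described above, assuming the
IgniteSql#executeScript method covered in the linked SQL API documentation; the
client address and table definition are illustrative:

{code:java}
import org.apache.ignite.client.IgniteClient;

// Minimal sketch, assuming IgniteSql#executeScript; address and table are illustrative.
public class ScriptRollbackSketch {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder().addresses("127.0.0.1:10800").build()) {
            // The transaction opened by the script is never committed, so the
            // INSERT is rolled back once the script finishes.
            client.sql().executeScript(
                    "CREATE TABLE TEST(ID INT PRIMARY KEY);"
                            + "START TRANSACTION;"
                            + "INSERT INTO TEST VALUES(1);");
        }
    }
}
{code}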



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-20453) Sql. Basic Multi Statement Support

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov resolved IGNITE-20453.
---
Resolution: Fixed

> Sql. Basic Multi Statement Support
> --
>
> Key: IGNITE-20453
> URL: https://issues.apache.org/jira/browse/IGNITE-20453
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> h1. Motivation
> A multi-statement query is a collection of SQL statements that can be 
> executed in one request. Supporting multi-statement queries may result in 
> several benefits:
> * It helps to decrease the number of round trips between the application and 
> the database server, which positively affects performance (though this can 
> also be achieved with batching)
> * It may significantly improve UX: during maintenance, a user may submit an 
> entire migration/initialization script to the database server without having 
> to split it into independent statements by hand
> * In a distributed system, some features (like shared mutable state, or system 
> and user-defined variables) are easier to introduce for multi-statement 
> execution only, rather than for the general case
> Most popular RDBMSs, such as Oracle, MySQL, and PostgreSQL, already 
> support multi-statement execution.
> Let's support multi-statement queries in Apache Ignite 3 to ease the 
> migration to Ignite and improve UX by providing a familiar and convenient way 
> of working with the database.
> h1. Requirements
> # It should be possible to start a new transaction and commit it from a script
> # If there is no explicit active transaction (either passed as a parameter to 
> the API call or started from the script), then every statement should be 
> wrapped in its own transaction
> # It should not be possible to commit a transaction passed as a parameter to 
> the API call
> # It should not be possible to start another transaction if there is an 
> active transaction
> # It should not be possible to start a transaction by executing a tx management 
> statement in single-statement mode
> # The execution of a multi-statement query should emulate serial execution 
> of all statements in the order they appear in the script, as if the 
> statements had been executed one after another, serially, rather than 
> concurrently
> # A multi-statement query should make progress even if no one consumes the result
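
The requirements above can be illustrated by the following minimal sketch,
assuming the IgniteSql#executeScript API; the client address and table name are
illustrative:

{code:java}
import org.apache.ignite.client.IgniteClient;

// Minimal sketch, assuming IgniteSql#executeScript; address and table are illustrative.
public class MultiStatementSketch {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder().addresses("127.0.0.1:10800").build()) {
            // Requirement 1: a transaction is started and committed inside the script.
            client.sql().executeScript(
                    "CREATE TABLE T(ID INT PRIMARY KEY);"
                            + "START TRANSACTION;"
                            + "INSERT INTO T VALUES(1);"
                            + "COMMIT;");

            // Requirement 2: no explicit transaction is active, so each statement
            // below runs in its own implicit transaction.
            client.sql().executeScript(
                    "INSERT INTO T VALUES(2);"
                            + "INSERT INTO T VALUES(3);");
        }
    }
}
{code}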



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-20473) Catalog service improvements

2025-02-28 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov resolved IGNITE-20473.
---
Resolution: Done

> Catalog service improvements
> 
>
> Key: IGNITE-20473
> URL: https://issues.apache.org/jira/browse/IGNITE-20473
> Project: Ignite
>  Issue Type: Epic
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> Umbrella ticket for tech-debt and improvements after IGNITE-19502



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24666) Suboptimal method Outbox#flush

2025-02-28 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931566#comment-17931566
 ] 

Ignite TC Bot commented on IGNITE-24666:


{panel:title=Branch: [pull/11901/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11901/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=8330781&buildTypeId=IgniteTests24Java8_RunAll]

> Suboptimal method Outbox#flush
> --
>
> Key: IGNITE-24666
> URL: https://issues.apache.org/jira/browse/IGNITE-24666
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.18
>
> Attachments: Снимок экрана 2025-02-27 в 19.43.30.png, Снимок экрана 
> 2025-02-27 в 19.45.21.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This method uses Streams and loops over the same collection twice. It is 
> possible to optimize its memory usage: for example, in the attached profile 
> this method is responsible for 10% of allocations, and after the optimization 
> it accounts for only 1% (see attachment 2).
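
For illustration only (this is not the actual Outbox#flush code), the kind of
rewrite implied above: collapsing a Stream-based pass plus a second loop into a
single plain loop removes the intermediate list and the Stream machinery, which
is typically where the extra allocations come from. The Item type and method
names below are hypothetical.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only; the Item type and method names are hypothetical.
class FlushSketch {
    static class Item {
        final boolean ready;
        final String payload;

        Item(boolean ready, String payload) {
            this.ready = ready;
            this.payload = payload;
        }
    }

    // Two-pass variant: the Stream filter allocates an intermediate list,
    // then a second loop iterates over it.
    static List<String> flushTwoPass(List<Item> items) {
        List<Item> ready = items.stream().filter(i -> i.ready).collect(Collectors.toList());
        List<String> out = new ArrayList<>(ready.size());
        for (Item item : ready) {
            out.add(item.payload);
        }
        return out;
    }

    // Single-pass variant: one plain loop, no intermediate collection and no
    // Stream machinery.
    static List<String> flushSinglePass(List<Item> items) {
        List<String> out = new ArrayList<>(items.size());
        for (Item item : items) {
            if (item.ready) {
                out.add(item.payload);
            }
        }
        return out;
    }
}
{code}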



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Iurii Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iurii Gerzhedovich reassigned IGNITE-24676:
---

Assignee: Iurii Gerzhedovich

> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Iurii Gerzhedovich
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to the 
> productization of temporal types.
> The first phase is to review the existing test coverage against the test plan 
> (presented below) and add the missing tests. The goal is to identify all issues 
> related to temporal types. All found problems (as well as already filed 
> ones) must be linked to this epic.
> The second phase will include fixing all attached issues, as well as amending 
> the documentation with known limitations for problems that we are not going 
> to fix in the near future (for instance, the type `TIME WITH TIME ZONE` is 
> not supported and we have no plan to support it any time soon, therefore this 
> must be mentioned as a known limitation).
> Note: the phases do not necessarily have to be executed sequentially; critical 
> issues may be fixed asap.
> The temporal type hierarchy is as follows:
>  * All temporal types
>  ** Datetime types
>  *** DATE
>  *** TIME [WITHOUT TIME ZONE]
>  *** TIME WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP [WITHOUT TIME ZONE]
>  *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
>  ** Interval types
>  *** YEAR TO MONTH intervals
>  *** DAY TO SECOND intervals
> The test plan is as follows:
>  * For all temporal types check different values (literals, dyn params, table 
> columns):
>  ** check boundaries
>  ** check different precisions for fraction of second
>  ** for datetime types check leap year/month/second
>  ** for literals check supported formats
>  ** for table columns check support for defaults; boundaries check; different 
> precision for fraction of second
>  ** for table columns check support in indexes (both SORTED and HASH)
>  * For all temporal types check operations:
>  ** check type coercion for all allowed operations
>  ** below operations must be checked with similar types and types of 
> different precision:
>  *** comparison
>  *** arithmetic
>  ** check conversion between different types (aka CAST operator)
>  *** for conversion from character string to temporal type check conversion 
> from all allowed formats
>  *** for conversion to character string check that result satisfies the 
> format described in SQL standard
>  ** check built-in functions
>  *** make sure all functions required by the SQL standard are present and work 
> as expected
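
For illustration only, a couple of checks from the test plan above, written in
the sql(...) helper style used elsewhere in this digest; the table, column names,
and boundary values are examples, not normative.

{code:java}
// Illustrative only: a boundary/precision check and a CAST check from the test plan.
sql("CREATE TABLE t_temporal(id INT PRIMARY KEY, d DATE, ts TIMESTAMP(3))");

// Boundary value and fractional-second precision for table columns (leap-day timestamp).
sql("INSERT INTO t_temporal VALUES (1, DATE '9999-12-31', TIMESTAMP '2024-02-29 23:59:59.999')");

// Conversion from character strings to temporal types (CAST operator).
sql("SELECT CAST('2024-02-29' AS DATE), CAST('12:34:56.123' AS TIME)");
{code}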



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-24642) Calcite does not trim tables in case of multiple joins

2025-02-28 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin resolved IGNITE-24642.
-
  Reviewer: Aleksey Plekhanov
Resolution: Fixed

[~alex_pl] thanks for the review, merged to master.

> Calcite does not trim tables in case of multiple joins
> 
>
> Key: IGNITE-24642
> URL: https://issues.apache.org/jira/browse/IGNITE-24642
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.18
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> IgnitePlanner#trimUnusedFields does not trim fields when the query contains two 
> or more joins. 
> The comment that describes this logic looks odd, as the trimming is applied on 
> PROJECT_PUSH_DOWN to IgniteLogicalTableScan and does not produce new leaves.
> Let's remove this condition, as it affects performance.
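
For illustration only (hypothetical tables and columns, in the sql(...) helper
style used in this digest): a query with two joins that references only a few
columns, so the underlying scans could be trimmed to just those fields instead
of reading every column.

{code:java}
// Hypothetical schema; only t1.a, t3.c and the join keys are needed, so the
// scans of t1, t2 and t3 should be trimmed to those columns.
sql("SELECT t1.a, t3.c "
    + "FROM t1 "
    + "JOIN t2 ON t1.id = t2.t1_id "
    + "JOIN t3 ON t2.id = t3.t2_id");
{code}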



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24655) Document security recommendations for AI3

2025-02-28 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-24655:

Component/s: documentation

> Document security recommendations for AI3
> -
>
> Key: IGNITE-24655
> URL: https://issues.apache.org/jira/browse/IGNITE-24655
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Igor Gusev
>Assignee: Igor Gusev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to add recommendations for users on how to run secure clusters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24655) Document security recommendations for AI3

2025-02-28 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-24655:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Document security recommendations for AI3
> -
>
> Key: IGNITE-24655
> URL: https://issues.apache.org/jira/browse/IGNITE-24655
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Gusev
>Assignee: Igor Gusev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to add recommendations for users on how to run secure clusters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21873) Sql. The insertion fails if the execution node buffer size is set to 1

2025-02-28 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-21873:
--
Description: 
This looks like a degenerate case, but it may still be worth addressing the 
issue.

To reproduce this issue, set {{Commons.IN_BUFFER_SIZE = 1}}
and execute a DML query, for example:
{code:java}
sql("CREATE TABLE test(id INT PRIMARY KEY)");
sql("INSERT INTO test VALUES (0), (1) ");
{code}

Result:
{noformat}
Caused by: java.lang.AssertionError
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.request(ModifyNode.java:130)
 ~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.Outbox.flush(Outbox.java:326) 
~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.Outbox.push(Outbox.java:166) 
~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.tryEnd(ModifyNode.java:206)
 ~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.lambda$flushTuples$1(ModifyNode.java:282)
 ~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionContext.lambda$execute$0(ExecutionContext.java:325)
 ~[main/:?]
{noformat}

The problem can be solved by swapping the following lines in the 
{{ModifyNode#tryEnd}} method.
{code:java}
downstream().push(context().rowHandler().factory(MODIFY_RESULT).create(updatedRows));

requested = 0; // must come before the 'push()' call
{code}


Also, we should check whether all other execution nodes work correctly with 
Commons.IN_BUFFER_SIZE = 1.



  was:
This looks like a degenerate case, but it may still be worth addressing the 
issue.

To reproduce this issue, set {{Commons.IN_BUFFER_SIZE = 1}}
and execute a DML query, for example:
{code:java}
sql("CREATE TABLE test(id INT PRIMARY KEY)");
sql("INSERT INTO test VALUES (0), (1) ");
{code}

Result:
{noformat}
Caused by: java.lang.AssertionError
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.request(ModifyNode.java:130)
 ~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.Outbox.flush(Outbox.java:326) 
~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.Outbox.push(Outbox.java:166) 
~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.tryEnd(ModifyNode.java:206)
 ~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.lambda$flushTuples$1(ModifyNode.java:282)
 ~[main/:?]
at 
org.apache.ignite.internal.sql.engine.exec.ExecutionContext.lambda$execute$0(ExecutionContext.java:325)
 ~[main/:?]
{noformat}

The problem can be solved by swapping the following lines in the 
{{ModifyNode#tryEnd}} method.
{code:java}
downstream().push(context().rowHandler().factory(MODIFY_RESULT).create(updatedRows));

requested = 0; // must come before the 'push()' call
{code}




> Sql. The insertion fails if the execution node buffer size is set to 1
> --
>
> Key: IGNITE-21873
> URL: https://issues.apache.org/jira/browse/IGNITE-21873
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Pavel Pereslegin
>Priority: Minor
>  Labels: ignite-3
>
> This looks like a degenerate case, but it may still be worth addressing the 
> issue.
> To reproduce this issue, set {{Commons.IN_BUFFER_SIZE = 1}}
> and execute a DML query, for example:
> {code:java}
> sql("CREATE TABLE test(id INT PRIMARY KEY)");
> sql("INSERT INTO test VALUES (0), (1) ");
> {code}
> Result:
> {noformat}
> Caused by: java.lang.AssertionError
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.request(ModifyNode.java:130)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.Outbox.flush(Outbox.java:326) 
> ~[main/:?]
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.Outbox.push(Outbox.java:166) 
> ~[main/:?]
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.tryEnd(ModifyNode.java:206)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.sql.engine.exec.rel.ModifyNode.lambda$flushTuples$1(ModifyNode.java:282)
>  ~[main/:?]
>   at 
> org.apache.ignite.internal.sql.engine.exec.ExecutionContext.lambda$execute$0(ExecutionContext.java:325)
>  ~[main/:?]
> {noformat}
> The problem can be solved by swapping the following lines in the 
> {{ModifyNode#tryEnd}} method.
> {code:java}
> downstream().push(context().rowHandler().factory(MODIFY_RESULT).create(updatedRows));
> requested = 0; // must come before the 'push()' call
> {code}
> Also, we should check whether all other execution nodes work correctly with 
> Commons.IN_BUFFER_SIZE = 1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24467) HA: Filtered nodes are not applied after automatic reset

2025-02-28 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-24467:
-
Epic Link: IGNITE-23438

> HA: Filtered nodes are not applied after automatic reset
> 
>
> Key: IGNITE-24467
> URL: https://issues.apache.org/jira/browse/IGNITE-24467
> Project: Ignite
>  Issue Type: Bug
>Reporter:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> See 
> {{ItHighAvailablePartitionsRecoveryByFilterUpdateTest.testResetAfterChangeFilters}}
>  - the test that covers this scenario.
> *Precondition*
>  - Create an HA zone with a filter that allows nodes A, B and C.
>  - Make sure {{partitionDistributionResetTimeout}} is high enough not to 
> trigger before the following actions happen
>  - Stop nodes B and C
>  - Change zone filter to allow nodes D, E and F. These new nodes should be up 
> and running
>  - Change {{partitionDistributionResetTimeout}} to a smaller value or 0 to 
> trigger automatic reset
> *Result*
> The partition remains on node A
>  
> *Expected result*
> The partition is moved to D, E and F as per the filter
> *Implementation details*
> The zone filter change creates a new pending assignment = (D,E,F). But the 
> automatic reset, which is triggered after {{partitionDistributionResetTimeout}}, 
> changes pending to (force, A) and planned to (), thus losing any information 
> about nodes D, E and F.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24673) ReplicaImpl does not remove placement driver listeners on shutdown

2025-02-28 Thread Vyacheslav Koptilin (Jira)
Vyacheslav Koptilin created IGNITE-24673:


 Summary: ReplicaImpl does not remove placement driver listeners on 
shutdown
 Key: IGNITE-24673
 URL: https://issues.apache.org/jira/browse/IGNITE-24673
 Project: Ignite
  Issue Type: Bug
Reporter: Vyacheslav Koptilin
Assignee: Vyacheslav Koptilin


The implementation of `ReplicaImpl` uses method references to add/remove 
placement driver listeners:
{noformat}
public ReplicaImpl(...) {
    ...
    placementDriver.listen(PrimaryReplicaEvent.PRIMARY_REPLICA_ELECTED,
            this::registerFailoverCallback);
    placementDriver.listen(PrimaryReplicaEvent.PRIMARY_REPLICA_EXPIRED,
            this::unregisterFailoverCallback);
}

public CompletableFuture shutdown() {
    placementDriver.removeListener(PrimaryReplicaEvent.PRIMARY_REPLICA_ELECTED,
            this::registerFailoverCallback);
    placementDriver.removeListener(PrimaryReplicaEvent.PRIMARY_REPLICA_EXPIRED,
            this::unregisterFailoverCallback);
    ...
}{noformat}
Using method references here means these event listeners are never actually 
removed: the `this::method` expression evaluated in `shutdown()` is a different 
object from the one passed to `listen()` (and is not guaranteed to be equal to 
it), so `removeListener` does not match anything.

We need to introduce internal fields to hold the registered listener instances 
and pass the same instances on removal.
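
A minimal self-contained sketch of the proposed fix; PlacementDriver and the
event/listener types below are simplified stand-ins for the real Ignite 3
internals, not the actual API.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Simplified stand-in for the real placement driver API.
interface PlacementDriver {
    void listen(String event, Function<Object, CompletableFuture<Boolean>> listener);
    void removeListener(String event, Function<Object, CompletableFuture<Boolean>> listener);
}

class ReplicaImplSketch {
    private static final String ELECTED = "PRIMARY_REPLICA_ELECTED";
    private static final String EXPIRED = "PRIMARY_REPLICA_EXPIRED";

    private final PlacementDriver placementDriver;

    // Keep the exact listener instances: a fresh `this::method` expression is not
    // guaranteed to be equal to a previously created one, so removal by a new
    // method reference would not match the registered listener.
    private final Function<Object, CompletableFuture<Boolean>> electedListener = this::registerFailoverCallback;
    private final Function<Object, CompletableFuture<Boolean>> expiredListener = this::unregisterFailoverCallback;

    ReplicaImplSketch(PlacementDriver placementDriver) {
        this.placementDriver = placementDriver;
        placementDriver.listen(ELECTED, electedListener);
        placementDriver.listen(EXPIRED, expiredListener);
    }

    CompletableFuture<Void> shutdown() {
        // The same instances are passed, so removal succeeds.
        placementDriver.removeListener(ELECTED, electedListener);
        placementDriver.removeListener(EXPIRED, expiredListener);
        return CompletableFuture.completedFuture(null);
    }

    private CompletableFuture<Boolean> registerFailoverCallback(Object parameters) {
        return CompletableFuture.completedFuture(false);
    }

    private CompletableFuture<Boolean> unregisterFailoverCallback(Object parameters) {
        return CompletableFuture.completedFuture(false);
    }
}
{code}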



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24264) Remove Google & Yandex Analytics from the Ignite Website

2025-02-28 Thread Alexey Alexandrov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931436#comment-17931436
 ] 

Alexey Alexandrov commented on IGNITE-24264:


Hi, sure! We are going to remove it, and thank you for your pull request.

We will also remove other scripts in accordance with Apache policies.
 
 

> Remove Google  & Yandex Analytics from the Ignite Website
> -
>
> Key: IGNITE-24264
> URL: https://issues.apache.org/jira/browse/IGNITE-24264
> Project: Ignite
>  Issue Type: Task
>Reporter: Niall Pemberton
>Priority: Major
>
> Hi Ignite Team
> The ASF {_}*Privacy Policy*{_}[1][2] does not permit the use of _*Google 
> Analytics*_ on any ASF websites and the ASF Infra team will soon enforce a 
> {_}*Content Security Policy*{_}(CSP) that will block access to external 
> trackers:
>  * [https://lists.apache.org/thread/w34sd92v4rz3j28hyddmt5tbprbdq6lc]
> Please could you remove Google Analytics from the Ignite website (I will 
> submit a PR shortly to do that)?
>  * [https://lists.apache.org/thread/blrf8lmbm7jrtn6pgktgqbcg5hs5h3bd]
> The ASF hosts its own _*Matomo*_ instance to provide projects with analytics 
> and you can request a tracking id for your project by sending a mail to 
> *privacy AT apache.org.*
>  * 
> [https://privacy.apache.org/faq/committers.html#can-i-use-web-analytics-matomo]
> Additionally I would recommend reviewing any external resources loaded by 
> your website. The Content Security Policy will prevent any resources being 
> loaded from 3rd Party providers that the ASF does not have a Data Processing 
> Agreement (DPA) with. On the 1st February Infra will begin a temporary 
> "brownout" when the CSP will be turned on for a short period. This will allow 
> projects to check which parts, if any, of their websites will stop working. 
> The Privacy FAQ answers a number of questions about which external providers 
> are permitted or not:
>  * [https://privacy.apache.org/faq/committers.html]
> Thanks
> Niall
> [1] [https://privacy.apache.org/policies/website-policy.html]
> [2] 
> [https://privacy.apache.org/faq/committers.html#can-i-use-google-analytics]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24675) Sql. Hash join operation may hang for right and outer join.

2025-02-28 Thread Andrey Mashenkov (Jira)
Andrey Mashenkov created IGNITE-24675:
-

 Summary: Sql. Hash join operation may hang for right and outer 
join.
 Key: IGNITE-24675
 URL: https://issues.apache.org/jira/browse/IGNITE-24675
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0
Reporter: Andrey Mashenkov
Assignee: Andrey Mashenkov
 Fix For: 3.1


`RightHashJoin.join()` and `FullOuterHashJoin.join()` may fall into an infinite 
loop when processing non-matching rows from the right source.

When the right buffer contains exactly as many rows as were requested by the 
downstream, the algorithm emits these rows but does not notify the downstream of 
end-of-data. On the next request it emits the same rows again, and again...
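
An illustrative sketch of the signalling hazard only (this is not the actual
Ignite operator code; the Downstream interface and emitter are hypothetical): in
a request-N protocol, completion must be reported whenever the buffered rows are
drained, even when exactly `requested` rows were produced, otherwise the
downstream keeps asking for more.

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch; the Downstream interface and the emitter are hypothetical.
class NonMatchedRightRowsEmitter<RowT> {
    interface Downstream<RowT> {
        void push(RowT row);
        void end();
    }

    private final Deque<RowT> rightBuffer = new ArrayDeque<>();

    void onRequest(int requested, Downstream<RowT> downstream) {
        int produced = 0;
        while (produced < requested && !rightBuffer.isEmpty()) {
            downstream.push(rightBuffer.poll());
            produced++;
        }

        // Report completion whenever the buffer is drained. Signalling end-of-data
        // only when produced < requested would stay silent in the case where the
        // buffer held exactly `requested` rows.
        if (rightBuffer.isEmpty()) {
            downstream.end();
        }
    }
}
{code}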



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24264) Remove Google & Yandex Analytics from the Ignite Website

2025-02-28 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-24264:
-
Issue Type: Improvement  (was: Task)

> Remove Google  & Yandex Analytics from the Ignite Website
> -
>
> Key: IGNITE-24264
> URL: https://issues.apache.org/jira/browse/IGNITE-24264
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Niall Pemberton
>Priority: Major
>
> Hi Ignite Team
> The ASF {_}*Privacy Policy*{_}[1][2] does not permit the use of _*Google 
> Analytics*_ on any ASF websites and the ASF Infra team will soon enforce a 
> {_}*Content Security Policy*{_}(CSP) that will block access to external 
> trackers:
>  * [https://lists.apache.org/thread/w34sd92v4rz3j28hyddmt5tbprbdq6lc]
> Please could you remove Google Analytics from the Ignite website (I will 
> submit a PR shortly to do that)?
>  * [https://lists.apache.org/thread/blrf8lmbm7jrtn6pgktgqbcg5hs5h3bd]
> The ASF hosts its own _*Matomo*_ instance to provide projects with analytics 
> and you can request a tracking id for your project by sending a mail to 
> *privacy AT apache.org.*
>  * 
> [https://privacy.apache.org/faq/committers.html#can-i-use-web-analytics-matomo]
> Additionally I would recommend reviewing any external resources loaded by 
> your website. The Content Security Policy will prevent any resources being 
> loaded from 3rd Party providers that the ASF does not have a Data Processing 
> Agreement (DPA) with. On the 1st February Infra will begin a temporary 
> "brownout" when the CSP will be turned on for a short period. This will allow 
> projects to check which parts, if any, of their websites will stop working. 
> The Privacy FAQ answers a number of questions about which external providers 
> are permitted or not:
>  * [https://privacy.apache.org/faq/committers.html]
> Thanks
> Niall
> [1] [https://privacy.apache.org/policies/website-policy.html]
> [2] 
> [https://privacy.apache.org/faq/committers.html#can-i-use-google-analytics]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24612) .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky

2025-02-28 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931412#comment-17931412
 ] 

Pavel Tupitsyn commented on IGNITE-24612:
-

100+ green runs: 
https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests?branch=pull%2F5313&page=1

> .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky
> ---
>
> Key: IGNITE-24612
> URL: https://issues.apache.org/jira/browse/IGNITE-24612
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, streaming, thin client
>Affects Versions: 3.0
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code}
> Apache.Ignite.Table.DataStreamerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.MarshallerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.IgniteException : 
> org.apache.ignite.lang.MarshallerException: IGN-MARSHALLING-1 
> TraceId:b466dc18-4fea-48c1-b966-8ac2769ec49b
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:123)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.marshal(RecordBinaryViewImpl.java:436)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.mapToBinary(RecordBinaryViewImpl.java:545)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.lambda$updateAll$35(RecordBinaryViewImpl.java:614)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView.java:144)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:144)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:134)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.doOperation(AbstractTableView.java:112)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.updateAll(RecordBinaryViewImpl.java:613)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$1(ClientStreamerBatchSendRequest.java:59)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$2(ClientStreamerBatchSendRequest.java:56)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.process(ClientStreamerBatchSendRequest.java:53)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:844)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperationInternal(ClientInboundMessageHandler.java:897)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$processOperation$4(ClientInboundMessageHandler.java:633)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>   at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.base/java.nio.Buffer.checkIndex(Buffer.java:743)
>   at java.base/java.nio.HeapByteBuffer.get(HeapByteBuffer.java:169)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleParser.longValue(BinaryTupleParser.java:245)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleReader.longValue(BinaryTupleReader.java:183)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.object(MutableTupleBinaryTupleAdapter.java:511)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.value(MutableTupleBinaryTupleAdapter.java:146)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.validateTuple(TupleMarshallerImpl.java:326)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:101)
>   ... 23 more
>at 
> Apache.Ignite.Internal.Table.DataStreamer.StreamDataA

[jira] [Assigned] (IGNITE-24678) Sql. Introduce heuristic to exclude NLJ when HJ may be applied

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov reassigned IGNITE-24678:
-

Assignee: Konstantin Orlov

> Sql. Introduce heuristic to exclude NLJ when HJ may be applied
> --
>
> Key: IGNITE-24678
> URL: https://issues.apache.org/jira/browse/IGNITE-24678
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Currently, we have very primitive statistics which include only the table size. 
> Moreover, they are gathered with some sort of throttling, preventing statistics 
> for the same table from being updated more often than once per minute.
> The problem arises when a heavy query is executed immediately after all data 
> has been uploaded to a table (which is essentially every benchmark scenario): 
> the first insert triggers gathering of table stats, resulting in a table size 
> close to 1 being cached in the statistics manager. During the planning phase, 
> the cost-based optimizer makes wrong choices due to the misleading statistics. 
> The most expensive one is choosing NestedLoopJoin over HashJoin. For instance, 
> query 5 from the TPC-H suite at scale factor 0.1, which normally completes 
> under 1 second (373 ms on my laptop), takes tens of minutes to complete with 
> the wrong join algorithm (it didn't finish in 15 min, so I killed it).
> To mitigate the issue, we may introduce a heuristic to avoid using NLJ for 
> joins that can be executed with HJ.
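
For illustration only (hypothetical tables, in the sql(...) helper style used in
this digest): the heuristic would apply to equi-joins, which a hash join can
execute, while non-equi joins would still need a nested-loop join.

{code:java}
// Equi-join: hash join is applicable, so the NestedLoopJoin alternative could be pruned.
sql("SELECT o.id FROM orders o JOIN customers c ON o.customer_id = c.id");

// Non-equi join: hash join does not apply, so nested-loop join must remain available.
sql("SELECT o.id FROM orders o JOIN customers c ON o.amount > c.credit_limit");
{code}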



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24679) Remove from mappings cache when zone primary replica expires

2025-02-28 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-24679:
--

 Summary: Remove from mappings cache when zone primary replica 
expires
 Key: IGNITE-24679
 URL: https://issues.apache.org/jira/browse/IGNITE-24679
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-24647) Add Gradle Wrapper into TC Bot to prevent build error caused by incorrect gradle version

2025-02-28 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin resolved IGNITE-24647.
-
  Reviewer: Maksim Timonin
Resolution: Fixed

[~apopovprodby] thanks for the patch, merged to master

> Add Gradle Wrapper into TC Bot to prevent build error caused by incorrect 
> gradle version
> 
>
> Key: IGNITE-24647
> URL: https://issues.apache.org/jira/browse/IGNITE-24647
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksandr Popov
>Assignee: Aleksandr Popov
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24647) Add Gradle Wrapper into TC Bot to prevent build error caused by incorrect gradle version

2025-02-28 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-24647:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Add Gradle Wrapper into TC Bot to prevent build error caused by incorrect 
> gradle version
> 
>
> Key: IGNITE-24647
> URL: https://issues.apache.org/jira/browse/IGNITE-24647
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksandr Popov
>Assignee: Aleksandr Popov
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Iurii Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iurii Gerzhedovich reassigned IGNITE-24676:
---

Assignee: Iurii Gerzhedovich

> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Iurii Gerzhedovich
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to the 
> productization of temporal types.
> The first phase is to review the existing test coverage against the test plan 
> (presented below) and add the missing tests. The goal is to identify all issues 
> related to temporal types. All found problems (as well as already filed 
> ones) must be linked to this epic.
> The second phase will include fixing all attached issues, as well as amending 
> the documentation with known limitations for problems that we are not going 
> to fix in the near future (for instance, the type `TIME WITH TIME ZONE` is 
> not supported and we have no plan to support it any time soon, therefore this 
> must be mentioned as a known limitation).
> Note: the phases do not necessarily have to be executed sequentially; critical 
> issues may be fixed asap.
> The temporal type hierarchy is as follows:
>  * All temporal types
>  ** Datetime types
>  *** DATE
>  *** TIME [WITHOUT TIME ZONE]
>  *** TIME WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP [WITHOUT TIME ZONE]
>  *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
>  ** Interval types
>  *** YEAR TO MONTH intervals
>  *** DAY TO SECOND intervals
> The test plan is as follows:
>  * For all temporal types check different values (literals, dyn params, table 
> columns):
>  ** check boundaries
>  ** check different precisions for fraction of second
>  ** for datetime types check leap year/month/second
>  ** for literals check supported formats
>  ** for table columns check support for defaults; boundaries check; different 
> precision for fraction of second
>  * For all temporal types check operations:
>  ** check type coercion for all allowed operations
>  ** below operations must be checked with similar types and types of 
> different precision:
>  *** comparison
>  *** arithmetic
>  ** check conversion between different types (aka CAST operator)
>  *** for conversion from character string to temporal type check conversion 
> from all allowed formats
>  ** check built-in functions
>  *** make sure all functions required by the SQL standard are present and work 
> as expected



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24612) .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky

2025-02-28 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-24612:

Release Note: .NET: Fixed race condition on schema update in data streamer.

> .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky
> ---
>
> Key: IGNITE-24612
> URL: https://issues.apache.org/jira/browse/IGNITE-24612
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, streaming, thin client
>Affects Versions: 3.0
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> Apache.Ignite.Table.DataStreamerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.MarshallerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.IgniteException : 
> org.apache.ignite.lang.MarshallerException: IGN-MARSHALLING-1 
> TraceId:b466dc18-4fea-48c1-b966-8ac2769ec49b
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:123)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.marshal(RecordBinaryViewImpl.java:436)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.mapToBinary(RecordBinaryViewImpl.java:545)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.lambda$updateAll$35(RecordBinaryViewImpl.java:614)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView.java:144)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:144)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:134)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.doOperation(AbstractTableView.java:112)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.updateAll(RecordBinaryViewImpl.java:613)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$1(ClientStreamerBatchSendRequest.java:59)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$2(ClientStreamerBatchSendRequest.java:56)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.process(ClientStreamerBatchSendRequest.java:53)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:844)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperationInternal(ClientInboundMessageHandler.java:897)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$processOperation$4(ClientInboundMessageHandler.java:633)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>   at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.base/java.nio.Buffer.checkIndex(Buffer.java:743)
>   at java.base/java.nio.HeapByteBuffer.get(HeapByteBuffer.java:169)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleParser.longValue(BinaryTupleParser.java:245)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleReader.longValue(BinaryTupleReader.java:183)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.object(MutableTupleBinaryTupleAdapter.java:511)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.value(MutableTupleBinaryTupleAdapter.java:146)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.validateTuple(TupleMarshallerImpl.java:326)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:101)
>   ... 23 more
>at 
> Apache.Ignite.Internal.Table.DataStreamer.StreamDataAsync[T](IAsyncEnumerable`1
>  data, Table table, IRecordSerializerHandler`1 writer, DataStreamerOptions 
> o

[jira] [Updated] (IGNITE-24655) Document security recommendations for AI3

2025-02-28 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-24655:

Fix Version/s: 3.1

> Document security recommendations for AI3
> -
>
> Key: IGNITE-24655
> URL: https://issues.apache.org/jira/browse/IGNITE-24655
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Igor Gusev
>Assignee: Igor Gusev
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to add recommendations for users on how to run secure clusters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24655) Document security recommendations for AI3

2025-02-28 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931525#comment-17931525
 ] 

Pavel Tupitsyn commented on IGNITE-24655:
-

Merged to main: 
https://github.com/apache/ignite-3/commit/d0300ee9f0d67bb1c4994f2e1b30a51d92a2d636

> Document security recommendations for AI3
> -
>
> Key: IGNITE-24655
> URL: https://issues.apache.org/jira/browse/IGNITE-24655
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Gusev
>Assignee: Igor Gusev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to add recommendations for users on how to run secure clusters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to the 
productization of temporal types.

The first phase is to review the existing test coverage against the test plan 
(presented below) and add the missing tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending the 
documentation with known limitations for problems that we are not going to fix in 
the near future (for instance, the type `TIME WITH TIME ZONE` is not supported and 
we have no plan to support it any time soon, therefore this must be mentioned as a 
known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal type hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 ** for table columns check support in indexes (both SORTED and HASH)
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 *** for conversion to character string check that the result satisfies the format 
described in the SQL standard
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work as 
expected

  was:
This is an umbrella ticket to keep track of all work related to the 
productization of temporal types.

The first phase is to review the existing test coverage against the test plan 
(presented below) and add the missing tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending the 
documentation with known limitations for problems that we are not going to fix in 
the near future (for instance, the type `TIME WITH TIME ZONE` is not supported and 
we have no plan to support it any time soon, therefore this must be mentioned as a 
known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal type hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 ** for table columns check support in indexes (both SORTED and HASH)
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work as 
expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Is

[jira] [Assigned] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Iurii Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iurii Gerzhedovich reassigned IGNITE-24676:
---

Assignee: (was: Iurii Gerzhedovich)

> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to the 
> productization of temporal types.
> The first phase is to review the existing test coverage against the test plan 
> (presented below) and add the missing tests. The goal is to identify all issues 
> related to temporal types. All found problems (as well as already filed 
> ones) must be linked to this epic.
> The second phase will include fixing all attached issues, as well as amending 
> the documentation with known limitations for problems that we are not going 
> to fix in the near future (for instance, the type `TIME WITH TIME ZONE` is 
> not supported and we have no plan to support it any time soon, therefore this 
> must be mentioned as a known limitation).
> Note: the phases do not necessarily have to be executed sequentially; critical 
> issues may be fixed asap.
> The temporal type hierarchy is as follows:
>  * All temporal types
>  ** Datetime types
>  *** DATE
>  *** TIME [WITHOUT TIME ZONE]
>  *** TIME WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP [WITHOUT TIME ZONE]
>  *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
>  ** Interval types
>  *** YEAR TO MONTH intervals
>  *** DAY TO SECOND intervals
> The test plan is as follows:
>  * For all temporal types check different values (literals, dyn params, table 
> columns):
>  ** check boundaries
>  ** check different precisions for fraction of second
>  ** for datetime types check leap year/month/second
>  ** for literals check supported formats
>  ** for table columns check support for defaults; boundaries check; different 
> precision for fraction of second
>  ** for table columns check support in indexes (both SORTED and HASH)
>  * For all temporal types check operations:
>  ** check type coercion for all allowed operations
>  ** below operations must be checked with similar types and types of 
> different precision:
>  *** comparison
>  *** arithmetic
>  ** check conversion between different types (aka CAST operator)
>  *** for conversion from character string to temporal type check conversion 
> from all allowed formats
>  *** for conversion to character string check that result satisfies the 
> format described in SQL standard
>  ** check built-in functions
>  *** make sure all functions required by the SQL standard are present and work 
> as expected



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Iurii Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iurii Gerzhedovich reassigned IGNITE-24676:
---

Assignee: (was: Iurii Gerzhedovich)

> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to the 
> productization of temporal types.
> The first phase is to review the existing test coverage against the test plan 
> (presented below) and add the missing tests. The goal is to identify all issues 
> related to temporal types. All found problems (as well as already filed 
> ones) must be linked to this epic.
> The second phase will include fixing all attached issues, as well as amending 
> the documentation with known limitations for problems that we are not going 
> to fix in the near future (for instance, the type `TIME WITH TIME ZONE` is 
> not supported and we have no plan to support it any time soon, therefore this 
> must be mentioned as a known limitation).
> Note: the phases do not necessarily have to be executed sequentially; critical 
> issues may be fixed asap.
> The temporal type hierarchy is as follows:
>  * All temporal types
>  ** Datetime types
>  *** DATE
>  *** TIME [WITHOUT TIME ZONE]
>  *** TIME WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP [WITHOUT TIME ZONE]
>  *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
>  *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
>  ** Interval types
>  *** YEAR TO MONTH intervals
>  *** DAY TO SECOND intervals
> The test plan is as follows:
>  * For all temporal types check different values (literals, dyn params, table 
> columns):
>  ** check boundaries
>  ** check different precisions for fraction of second
>  ** for datetime types check leap year/month/second
>  ** for literals check supported formats
>  ** for table columns check support for defaults; boundaries check; different 
> precision for fraction of second
>  ** for table columns check support in indexes (both SORTED and HASH)
>  * For all temporal types check operations:
>  ** check type coercion for all allowed operations
>  ** below operations must be checked with similar types and types of 
> different precision:
>  *** comparison
>  *** arithmetic
>  ** check conversion between different types (aka CAST operator)
>  *** for conversion from character string to temporal type check conversion 
> from all allowed formats
>  *** for conversion to character string check that result satisfies the 
> format described in SQL standard
>  ** check built-in functions
>  *** make sure all functions required by the SQL standard are present and work 
> as expected



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24682) PageReadWriteManager#write is unused

2025-02-28 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-24682:
--

 Summary: PageReadWriteManager#write is unused
 Key: IGNITE-24682
 URL: https://issues.apache.org/jira/browse/IGNITE-24682
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov


This is probably a design flaw. Instead of it we use 
{{org.apache.ignite.internal.pagememory.persistence.WriteDirtyPage}}.

We should either use this method in the write manager, if that is the correct 
approach, or remove it from the interface.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24614) Prevent throwing safe time advance exception on stop

2025-02-28 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-24614:
---
Description: 
h3. Motivation
A node might throw a {{TrackerClosedException}} if it was going to advance safe 
time but did not have time to do so before stopping:
{noformat}
[2025-02-24T15:45:26,966][ERROR][org.apache.ignite.internal.benchmark.MultiTableBenchmark.test-jmh-worker-4][ReplicaManager]
 Could not advance safe time for 429_part_22 to {}
 java.util.concurrent.CompletionException: 
org.apache.ignite.internal.util.TrackerClosedException
at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:636)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
 ~[?:?]
at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2194)
 ~[?:?]
at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$cleanupWaitersOnClose$2(PendingComparableValuesTracker.java:192)
 ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.cleanupWaitersOnClose(PendingComparableValuesTracker.java:192)
 ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.util.PendingComparableValuesTracker.close(PendingComparableValuesTracker.java:166)
 ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.close(ClusterTimeImpl.java:142)
 ~[ignite-metastorage-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.util.IgniteUtils.lambda$closeAllManually$1(IgniteUtils.java:617)
 ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
 ~[?:?]
at 
java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
 ~[?:?]
at 
java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1024)
 ~[?:?]
at 
java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) 
~[?:?]
at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
 ~[?:?]
at 
java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
 ~[?:?]
at 
java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
 ~[?:?]
at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
~[?:?]
at 
java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
 ~[?:?]
at 
org.apache.ignite.internal.util.IgniteUtils.closeAllManually(IgniteUtils.java:615)
 ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.util.IgniteUtils.closeAllManually(IgniteUtils.java:649)
 ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.stopAsync(MetaStorageManagerImpl.java:772)
 ~[ignite-metastorage-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.util.IgniteUtils.lambda$stopAsync$6(IgniteUtils.java:1206)
 ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
 ~[?:?]
at 
java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
 ~[?:?]
at 
java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
 ~[?:?]
at 
java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) 
~[?:?]
at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
 ~[?:?]
at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) 
~[?:?]
at 
java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
 ~[?:?]
at 
java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
 ~[?:?]
at 
org.apache.ignite.internal.util.IgniteUtils.stopAsync(IgniteUtils.java:1212) 
~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.util.IgniteUtils.stopAsync(IgniteUtils.java:1254) 
~[ignite-core-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.app.LifecycleManager.initiateAllComponentsStop(LifecycleManager.java:178)
 ~[ignite-runner-3.1.0-SNAPSHOT.jar:?]
at 
org.apache.ignite.internal.app.LifecycleManager.stopNode(LifecycleManager.java:152)
 ~[ignite-runner-3.1.0-SNAPS

[jira] [Updated] (IGNITE-24675) Sql. Hash join operation may hands for right and outer join

2025-02-28 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-24675:
--
Summary: Sql. Hash join operation may hands for right and outer join  (was: 
Sql. Hash join operation may hands for right and outer join.)

> Sql. Hash join operation may hands for right and outer join
> ---
>
> Key: IGNITE-24675
> URL: https://issues.apache.org/jira/browse/IGNITE-24675
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Critical
>  Labels: ignite-3
> Fix For: 3.1
>
>
> `RightHashJoin.join()` and `FullOuterHashJoin.join()` may fall into an 
> infinite loop when processing non-matching rows from the right source.
> When the right buffer contains exactly as many rows as were requested by the 
> downstream, the algorithm emits these rows but does not notify the downstream 
> about the end of data, so on the next request it emits the same rows again, 
> and again...



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24675) Sql. Hash join operation may hangs for right and outer join

2025-02-28 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-24675:
--
Summary: Sql. Hash join operation may hangs for right and outer join  (was: 
Sql. Hash join operation may hands for right and outer join)

> Sql. Hash join operation may hangs for right and outer join
> ---
>
> Key: IGNITE-24675
> URL: https://issues.apache.org/jira/browse/IGNITE-24675
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0
>Reporter: Andrey Mashenkov
>Assignee: Andrey Mashenkov
>Priority: Critical
>  Labels: ignite-3
> Fix For: 3.1
>
>
> `RightHashJoin.join()` and `FullOuterHashJoin.join()` may fall into an 
> infinite loop when processing non-matching rows from the right source.
> When the right buffer contains exactly as many rows as were requested by the 
> downstream, the algorithm emits these rows but does not notify the downstream 
> about the end of data, so on the next request it emits the same rows again, 
> and again...
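
For illustration only, a minimal sketch (not the actual RightHashJoin/FullOuterHashJoin code; the names and shapes below are made up) of how draining the buffered non-matched right rows can still signal end-of-data when the emitted batch exactly matches the requested amount:

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Hypothetical drainer for non-matched right rows. The bug class described
// above is emitting exactly `requested` rows and returning without ever
// signalling end-of-data, so the downstream asks again and receives the same
// rows forever.
final class RightRowsDrainer<RowT> {
    private final Queue<RowT> nonMatchedRightRows = new ArrayDeque<>();

    RightRowsDrainer(Iterable<RowT> rows) {
        rows.forEach(nonMatchedRightRows::add);
    }

    /** Emits up to {@code requested} rows; returns {@code true} once end-of-data has been signalled. */
    boolean drain(int requested, Consumer<RowT> downstream, Runnable endOfData) {
        int emitted = 0;
        while (emitted < requested && !nonMatchedRightRows.isEmpty()) {
            downstream.accept(nonMatchedRightRows.poll());
            emitted++;
        }
        // The crucial part: even when exactly `requested` rows were emitted,
        // check whether the buffer is now empty and notify the downstream
        // about the end of data instead of waiting for the next request.
        if (nonMatchedRightRows.isEmpty()) {
            endOfData.run();
            return true;
        }
        return false;
    }
}
{code}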



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24680) DB API Driver 3: missing _version.txt file when trying to install from pip

2025-02-28 Thread Igor Sapego (Jira)
Igor Sapego created IGNITE-24680:


 Summary: DB API Driver 3: missing _version.txt file when trying to 
install from pip
 Key: IGNITE-24680
 URL: https://issues.apache.org/jira/browse/IGNITE-24680
 Project: Ignite
  Issue Type: Bug
  Components: platforms, python, thin client
Affects Versions: 3.0
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 3.1


After pip install pyignite-dbapi

I tried to import pyignite_dbapi but it threw an error:

{noformat}
Python 3.13.2 (tags/v3.13.2:4f8bb39, Feb  4 2025, 15:23:48) [MSC v.1942 64 bit 
(AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyignite_dbapi
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    import pyignite_dbapi
  File 
"\AppData\Local\Programs\Python\Python313\Lib\site-packages\pyignite_dbapi\__init__.py",
 line 17, in <module>
    __version__ = pkgutil.get_data(__name__, "_version.txt").decode
  ^^
  File "\AppData\Local\Programs\Python\Python313\Lib\pkgutil.py", line 
453, in get_data
    return loader.get_data(resource_name)
           ~~~^^^
  File "<frozen importlib._bootstrap_external>", line 1217, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 
'\\AppData\\Local\\Programs\\Python\\Python313\\Lib\\site-packages\\pyignite_dbapi\\_version.txt'
{noformat}

*Workaround*: Manually add _version.txt in the mentioned path

{noformat}
>>> import pyignite_dbapi
>>> addr = ['127.0.0.1:10800']
>>> conn = pyignite_dbapi.connect(address=addr, timeout=10)
>>> cursor = conn.cursor()
>>> cursor.execute('SELECT 1')
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24607) Reconsider the need of system property SKIP_REBALANCE_TRIGGERS_RECOVERY

2025-02-28 Thread Mikhail Efremov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931499#comment-17931499
 ] 

Mikhail Efremov commented on IGNITE-24607:
--

# We should try removing the flag so that the {{false}} logic becomes the 
default, and check that the dependent compaction tests are still fine.
# We should add a test that checks the recovery logic with {{metastore#invoke}} 
inhibition on a node restart.
# We should also check and explain why all tests currently pass if we remove 
the flag and keep the {{true}} branch (i.e. without recovery rebalance on 
start).

> Reconsider the need of system property SKIP_REBALANCE_TRIGGERS_RECOVERY
> ---
>
> Key: IGNITE-24607
> URL: https://issues.apache.org/jira/browse/IGNITE-24607
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Motivation*
> There is a system property with TODO
> {code:java}
> // TODO: IGNITE-23561 Remove it
> @TestOnly
> public static final String SKIP_REBALANCE_TRIGGERS_RECOVERY = 
> "IGNITE_SKIP_REBALANCE_TRIGGERS_RECOVERY";{code}
> which means whether we should skip the rebalancing on node recovery. It is 
> used only in tests.
>  
> *Definition of done*
> Either the TODO or this property itself is removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24681) Remove "calculateCrc" flag from persistence code

2025-02-28 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-24681:
--

 Summary: Remove "calculateCrc" flag from persistence code
 Key: IGNITE-24681
 URL: https://issues.apache.org/jira/browse/IGNITE-24681
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov
 Fix For: 3.1


This value is *always* true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add the missing tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending 
the documentation with known limitations for problems that we are not going to 
fix in the near future (for instance, the type `TIME WITH TIME ZONE` is not 
supported and we have no plans to support it any time soon, therefore this must 
be mentioned as a known limitation).

Note: the phases do not have to be executed strictly sequentially; critical 
issues may be fixed ASAP.

The temporal type hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 ** for table columns check support in indexes (both SORTED and HASH)
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 *** for conversion to character string check that result satisfies the format 
described in SQL standard
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work 
as expected

  was:
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 ** for table columns check support in indexes (both SORTED and HASH)
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 *** for conversion to character string check that result is satisfy the format 
described in SQL standard
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
>  

[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 ** for table columns check support in indexes (both SORTED and HASH)
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected

  was:
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Iurii Gerzhedovich
>Priority: Major
>  Labels: i

[jira] [Assigned] (IGNITE-24614) Prevent throwing safe time advance exception on stop

2025-02-28 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin reassigned IGNITE-24614:


Assignee: Vladislav Pyatkov

> Prevent throwing safe time advance exception on stop
> 
>
> Key: IGNITE-24614
> URL: https://issues.apache.org/jira/browse/IGNITE-24614
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> A node might throw a {{TrackerClosedException}} if it was going to advance 
> safe time but did not have time to do it before stopping:
> {noformat}
> [2025-02-24T15:45:26,966][ERROR][org.apache.ignite.internal.benchmark.MultiTableBenchmark.test-jmh-worker-4][ReplicaManager]
>  Could not advance safe time for 429_part_22 to {}
>  java.util.concurrent.CompletionException: 
> org.apache.ignite.internal.util.TrackerClosedException
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332)
>  ~[?:?]
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347)
>  ~[?:?]
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:636)
>  ~[?:?]
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
>  ~[?:?]
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2194)
>  ~[?:?]
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$cleanupWaitersOnClose$2(PendingComparableValuesTracker.java:192)
>  ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
>   at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.cleanupWaitersOnClose(PendingComparableValuesTracker.java:192)
>  ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.close(PendingComparableValuesTracker.java:166)
>  ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.close(ClusterTimeImpl.java:142)
>  ~[ignite-metastorage-3.1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.util.IgniteUtils.lambda$closeAllManually$1(IgniteUtils.java:617)
>  ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
>   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>  ~[?:?]
>   at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
>  ~[?:?]
>   at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:1024)
>  ~[?:?]
>   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>  ~[?:?]
>   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>  ~[?:?]
>   at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
>  ~[?:?]
>   at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
>  ~[?:?]
>   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>  ~[?:?]
>   at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
>  ~[?:?]
>   at 
> org.apache.ignite.internal.util.IgniteUtils.closeAllManually(IgniteUtils.java:615)
>  ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.util.IgniteUtils.closeAllManually(IgniteUtils.java:649)
>  ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.stopAsync(MetaStorageManagerImpl.java:772)
>  ~[ignite-metastorage-3.1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.util.IgniteUtils.lambda$stopAsync$6(IgniteUtils.java:1206)
>  ~[ignite-core-3.1.0-SNAPSHOT.jar:?]
>   at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
>  ~[?:?]
>   at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
>  ~[?:?]
>   at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
>  ~[?:?]
>   at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
>  ~[?:?]
>   at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
>  ~[?:?]
>   at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
>  ~[?:?]
>   at 
> java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
>  ~[?:?]
>   at 
> java.base/java.util.stream.ReferenceP

[jira] [Assigned] (IGNITE-24681) Remove "calculateCrc" flag from persistence code

2025-02-28 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov reassigned IGNITE-24681:
--

Assignee: Ivan Bessonov

> Remove "calculateCrc" flag from persistence code
> 
>
> Key: IGNITE-24681
> URL: https://issues.apache.org/jira/browse/IGNITE-24681
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>
> This value is *always* true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24612) .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky

2025-02-28 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931417#comment-17931417
 ] 

Igor Sapego commented on IGNITE-24612:
--

Looks good to me.

> .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky
> ---
>
> Key: IGNITE-24612
> URL: https://issues.apache.org/jira/browse/IGNITE-24612
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, streaming, thin client
>Affects Versions: 3.0
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code}
> Apache.Ignite.Table.DataStreamerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.MarshallerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.IgniteException : 
> org.apache.ignite.lang.MarshallerException: IGN-MARSHALLING-1 
> TraceId:b466dc18-4fea-48c1-b966-8ac2769ec49b
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:123)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.marshal(RecordBinaryViewImpl.java:436)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.mapToBinary(RecordBinaryViewImpl.java:545)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.lambda$updateAll$35(RecordBinaryViewImpl.java:614)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView.java:144)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:144)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:134)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.doOperation(AbstractTableView.java:112)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.updateAll(RecordBinaryViewImpl.java:613)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$1(ClientStreamerBatchSendRequest.java:59)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$2(ClientStreamerBatchSendRequest.java:56)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.process(ClientStreamerBatchSendRequest.java:53)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:844)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperationInternal(ClientInboundMessageHandler.java:897)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$processOperation$4(ClientInboundMessageHandler.java:633)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>   at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.base/java.nio.Buffer.checkIndex(Buffer.java:743)
>   at java.base/java.nio.HeapByteBuffer.get(HeapByteBuffer.java:169)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleParser.longValue(BinaryTupleParser.java:245)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleReader.longValue(BinaryTupleReader.java:183)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.object(MutableTupleBinaryTupleAdapter.java:511)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.value(MutableTupleBinaryTupleAdapter.java:146)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.validateTuple(TupleMarshallerImpl.java:326)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:101)
>   ... 23 more
>at 
> Apache.Ignite.Internal.Table.DataStreamer.StreamDataAsync[T](IAsyncEnumerable`1
>  data, Table table, IRecordSerializerHandler`1 writer, DataStreamerOptions 
> options, Ca

[jira] [Commented] (IGNITE-24505) ClientHandlerModule does not log exception in initChannel

2025-02-28 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931467#comment-17931467
 ] 

Pavel Tupitsyn commented on IGNITE-24505:
-

Merged to main: 
https://github.com/apache/ignite-3/commit/fd4d88ba724b5ac7cdec24c58d1096a0f4b85e91

> ClientHandlerModule does not log exception in initChannel
> -
>
> Key: IGNITE-24505
> URL: https://issues.apache.org/jira/browse/IGNITE-24505
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When an exception happens in *startEndpoint* in *ChannelInitializer*, it is 
> not logged and the client connection gets dropped silently. This makes 
> diagnosing the problem very difficult.
> https://github.com/apache/ignite-3/blob/78a3bf2e355949bbb0d2c95672bb82d58616742f/modules/client-handler/src/main/java/org/apache/ignite/client/handler/ClientHandlerModule.java#L296
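
As a general Netty pattern (a sketch only, not the actual Ignite fix), whatever fails during channel initialization can be logged before the connection is torn down:

{code:java}
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;

// Sketch: log initialization failures instead of dropping the connection silently.
class LoggingChannelInitializer extends ChannelInitializer<SocketChannel> {
    private static final System.Logger LOG = System.getLogger(LoggingChannelInitializer.class.getName());

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        try {
            // ... configure the pipeline here: SSL handler, codec, protocol handler, etc.
        } catch (Exception e) {
            LOG.log(System.Logger.Level.ERROR,
                    "Failed to initialize client connection " + ch.remoteAddress(), e);
            throw e; // Still fail the channel, but leave a trace in the log.
        }
    }
}
{code}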



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24406) .NET: Compute examples in README are outdated

2025-02-28 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931468#comment-17931468
 ] 

Pavel Tupitsyn commented on IGNITE-24406:
-

Merged to main: 
https://github.com/apache/ignite-3/commit/e4df8562e3216efc2086d3c053be35581739bee7

> .NET: Compute examples in README are outdated
> -
>
> Key: IGNITE-24406
> URL: https://issues.apache.org/jira/browse/IGNITE-24406
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some examples in README are outdated - the API is now different.
> https://github.com/apache/ignite-3/blob/main/modules/platforms/dotnet/README.md
> This readme goes into the NuGet package description: 
> https://www.nuget.org/packages/Apache.Ignite/3.0.0#readme-body-tab



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24677) DB API Driver 3: Add a macOS support

2025-02-28 Thread Igor Sapego (Jira)
Igor Sapego created IGNITE-24677:


 Summary: DB API Driver 3: Add a macOS support
 Key: IGNITE-24677
 URL: https://issues.apache.org/jira/browse/IGNITE-24677
 Project: Ignite
  Issue Type: Improvement
  Components: python, thin client
Affects Versions: 3.0
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 3.1


Currently, we only support Windows and Linux. Let's implement support for 
macOS, including tests on macOS agents and building wheels for macOS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-24374) Move PrimaryReplicaChangeCommand processing to ZonePartitionRaftListener

2025-02-28 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin resolved IGNITE-24374.
--
Resolution: Duplicate

Was fixed within https://issues.apache.org/jira/browse/IGNITE-24375

> Move PrimaryReplicaChangeCommand processing to ZonePartitionRaftListener
> 
>
> Key: IGNITE-24374
> URL: https://issues.apache.org/jira/browse/IGNITE-24374
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> PrimaryReplicaChangeCommand is a command that propagates information about 
> the elected primary replica into the corresponding raft group, in order to 
> check whether an Update(All)Command misses the primary or not. Without such 
> verification our full tx protocol won't work properly. 
> As with many other commands, PrimaryReplicaChangeCommand updates DataStorage



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-24381) Introduce ZonePlacementDriverDecorator and switch all internal API's to it.

2025-02-28 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin resolved IGNITE-24381.
--
Resolution: Invalid

No longer needed as we do propagate ZonePartitionId instead of TablePartitionId.

> Introduce ZonePlacementDriverDecorator and switch all internal API's to it.
> ---
>
> Key: IGNITE-24381
> URL: https://issues.apache.org/jira/browse/IGNITE-24381
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-24574) Implement required catalog version selection for WriteIntentSwitch requests handling

2025-02-28 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy reassigned IGNITE-24574:
--

Assignee: Roman Puchkovskiy

> Implement required catalog version selection for WriteIntentSwitch requests 
> handling
> 
>
> Key: IGNITE-24574
> URL: https://issues.apache.org/jira/browse/IGNITE-24574
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
>
> WriteIntentSwitchReplicaRequest handlers try to detect the required catalog 
> version to build WriteIntentSwitchCommand instances. It is not yet clear how 
> to guarantee that the chosen catalog version is not removed by the catalog 
> compactor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24674) ItIdempotentCommandCacheTest.testIdempotentInvoke failed with an assertion error

2025-02-28 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-24674:


 Summary: ItIdempotentCommandCacheTest.testIdempotentInvoke failed 
with an assertion error
 Key: IGNITE-24674
 URL: https://issues.apache.org/jira/browse/IGNITE-24674
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


{code:java}
org.opentest4j.AssertionFailedError: expected: <true> but was: <false>
  at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
  at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
  at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
  at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
  at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)
  at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:183)
  at app//org.apache.ignite.internal.metastorage.impl.ItIdempotentCommandCacheTest.testIdempotentInvoke(ItIdempotentCommandCacheTest.java:356)
{code}
TC link 
https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleMetastorageClient/8911747



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24388) Adjust BuildIndexReplicaRequest to be TableAware one

2025-02-28 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931454#comment-17931454
 ] 

Roman Puchkovskiy commented on IGNITE-24388:


The patch looks good to me

> Adjust BuildIndexReplicaRequest to be TableAware one
> 
>
> Key: IGNITE-24388
> URL: https://issues.apache.org/jira/browse/IGNITE-24388
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Need to adapt the processing of the `BuildIndexReplicaRequest` so that it is 
> aware of the colocation track.
> There are two possible options:
>  - move the processing to the `ZonePartitionReplicaListener`. This approach 
> requires moving `indexMetaStorage` and `txRwOperationTracker` to the zone 
> listener.
>    Note that in this case, `txRwOperationTracker` might be shared between 
> `PartitionReplicaListener` and `ZonePartitionReplicaListener`. 
>  - the processing should stay in the `PartitionReplicaListener` and so the 
> `BuildIndexReplicaRequest` should extend the `TableAware` interface.
>  
> Therefore, it is worth trying the first approach unless some significant 
> obstacle is encountered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-24676:
-

 Summary: Productization of Temporal Types
 Key: IGNITE-24676
 URL: https://issues.apache.org/jira/browse/IGNITE-24676
 Project: Ignite
  Issue Type: Epic
  Components: sql
Reporter: Konstantin Orlov


* This is an umbrella ticket to keep track of all work related to 
productization of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 * 
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check supported for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 * 
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 * 
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check supported for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected

  was:
* This is an umbrella ticket to keep track of all work related to 
productization of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 * 
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check supported for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to 
> productizatio

[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check supported for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected

  was:
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 * 
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 * 
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check supported for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to 
> productization of t

[jira] [Commented] (IGNITE-24406) .NET: Compute examples in README are outdated

2025-02-28 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931456#comment-17931456
 ] 

Igor Sapego commented on IGNITE-24406:
--

Looks good to me.

> .NET: Compute examples in README are outdated
> -
>
> Key: IGNITE-24406
> URL: https://issues.apache.org/jira/browse/IGNITE-24406
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some examples in README are outdated - the API is now different.
> https://github.com/apache/ignite-3/blob/main/modules/platforms/dotnet/README.md
> This readme goes into the NuGet package description: 
> https://www.nuget.org/packages/Apache.Ignite/3.0.0#readme-body-tab



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected

  was:
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to a temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

Second phase will include fixing all attached issues, as well as amending 
documentation with known limitation in case of problem that we are not going to 
fix in the nearest future (for instance, a type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefor this must 
be mentioned as known limitation).

Note: phases not necessary should be executed sequentially; critical issues may 
be fixed asap.

A temporal types hierarchy is as follow:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

Test plan is as follow:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in function
 *** make sure all required by SQL standard function are presented and work as 
expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to 
> productization of temporal types

[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending 
the documentation with known limitations for problems that we are not going to 
fix in the near future (for instance, the type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefore this must 
be mentioned as a known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal types hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work 
as expected

  was:
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending 
the documentation with known limitations for problems that we are not going to 
fix in the near future (for instance, the type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefore this must 
be mentioned as a known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal types hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work 
as expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to 
> productization of temporal types. 

[jira] [Created] (IGNITE-24678) Sql. Introduce heuristic to exclude NLJ when HJ may be applied

2025-02-28 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-24678:
-

 Summary: Sql. Introduce heuristic to exclude NLJ when HJ may be 
applied
 Key: IGNITE-24678
 URL: https://issues.apache.org/jira/browse/IGNITE-24678
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Konstantin Orlov


Currently, we have very primitive statistics which include only the table size. 
Moreover, they are gathered with some throttling that prevents updating 
statistics for the same table more often than once per minute.

The problem arises when a heavy query is executed immediately after all data has 
been uploaded to a table (which is actually every benchmark scenario): the 
first insert triggers gathering of table stats, resulting in a table size close 
to 1 being cached in the statistics manager. During the planning phase, the 
cost-based optimizer makes wrong choices due to the misleading statistics. The 
most expensive one is choosing NestedLoopJoin over HashJoin. For instance, 
query 5 from the TPC-H suite at scale factor 0.1, which normally completes in 
under 1 second (373ms on my laptop), takes tens of minutes to complete with the 
wrong join algorithm (it didn't finish in 15 min, so I killed it).

To mitigate the issue, we may introduce a heuristic to avoid using NLJ for joins 
that can be executed with HJ.
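
For context, a hash join applies only to joins with at least one equality 
predicate, so such a heuristic would only prune NLJ for equi-joins like the 
first (hypothetical) query below; non-equi joins still require NLJ:

{code:sql}
-- equi-join: HashJoin is applicable, so NestedLoopJoin can be excluded by the heuristic
SELECT o.id, c.name
  FROM orders o
  JOIN customers c ON o.customer_id = c.id;

-- non-equi join: HashJoin is not applicable, NestedLoopJoin must remain available
SELECT o.id, c.name
  FROM orders o
  JOIN customers c ON o.total > c.credit_limit;
{code}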



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending 
the documentation with known limitations for problems that we are not going to 
fix in the near future (for instance, the type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefore this must 
be mentioned as a known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal types hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work 
as expected

  was:
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending 
the documentation with known limitations for problems that we are not going to 
fix in the near future (for instance, the type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefore this must 
be mentioned as a known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal types hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension

 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):

 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work 
as expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to 
> productization of temporal ty

[jira] [Updated] (IGNITE-24388) Adjust BuildIndexReplicaRequest to be TableAware one

2025-02-28 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-24388:
-
Fix Version/s: 3.1

> Adjust BuildIndexReplicaRequest to be TableAware one
> 
>
> Key: IGNITE-24388
> URL: https://issues.apache.org/jira/browse/IGNITE-24388
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Need to adapt the processing of the `BuildIndexReplicaRequest` to be aware of 
> the colocation track.
> There are two possible options:
>  - move the processing to the `ZonePartitionReplicaListener`. This approach 
> requires moving `indexMetaStorage` and `txRwOperationTracker` to the zone 
> listener.
>    Note that in this case, `txRwOperationTracker` might be shared between 
> `PartitionReplicaListener` and `ZonePartitionReplicaListener`. 
>  - the processing should stay in the `PartitionReplicaListener` and so the 
> `BuildIndexReplicaRequest` should extend the `TableAware` interface.
>  
> Therefore, it is worth trying the first approach unless some significant 
> obstacle is encountered.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24612) .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky

2025-02-28 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931424#comment-17931424
 ] 

Pavel Tupitsyn commented on IGNITE-24612:
-

Merged to main: 
[2e360cc80da94056fc96d1a935deede3de2b7ac1|https://github.com/apache/ignite-3/commit/2e360cc80da94056fc96d1a935deede3de2b7ac1]

> .NET: Thin 3.0: TestSchemaUpdateWhileStreaming is flaky
> ---
>
> Key: IGNITE-24612
> URL: https://issues.apache.org/jira/browse/IGNITE-24612
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, streaming, thin client
>Affects Versions: 3.0
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> Apache.Ignite.Table.DataStreamerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.MarshallerException : Exception of type 
> 'Apache.Ignite.MarshallerException' was thrown.
>   > Apache.Ignite.IgniteException : 
> org.apache.ignite.lang.MarshallerException: IGN-MARSHALLING-1 
> TraceId:b466dc18-4fea-48c1-b966-8ac2769ec49b
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:123)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.marshal(RecordBinaryViewImpl.java:436)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.mapToBinary(RecordBinaryViewImpl.java:545)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.lambda$updateAll$35(RecordBinaryViewImpl.java:614)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView.java:144)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:144)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:134)
>   at 
> org.apache.ignite.internal.table.AbstractTableView.doOperation(AbstractTableView.java:112)
>   at 
> org.apache.ignite.internal.table.RecordBinaryViewImpl.updateAll(RecordBinaryViewImpl.java:613)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$1(ClientStreamerBatchSendRequest.java:59)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.lambda$process$2(ClientStreamerBatchSendRequest.java:56)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1187)
>   at 
> java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2309)
>   at 
> org.apache.ignite.client.handler.requests.table.ClientStreamerBatchSendRequest.process(ClientStreamerBatchSendRequest.java:53)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperation(ClientInboundMessageHandler.java:844)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.processOperationInternal(ClientInboundMessageHandler.java:897)
>   at 
> org.apache.ignite.client.handler.ClientInboundMessageHandler.lambda$processOperation$4(ClientInboundMessageHandler.java:633)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>   at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: java.lang.IndexOutOfBoundsException
>   at java.base/java.nio.Buffer.checkIndex(Buffer.java:743)
>   at java.base/java.nio.HeapByteBuffer.get(HeapByteBuffer.java:169)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleParser.longValue(BinaryTupleParser.java:245)
>   at 
> org.apache.ignite.internal.binarytuple.BinaryTupleReader.longValue(BinaryTupleReader.java:183)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.object(MutableTupleBinaryTupleAdapter.java:511)
>   at 
> org.apache.ignite.internal.client.table.MutableTupleBinaryTupleAdapter.value(MutableTupleBinaryTupleAdapter.java:146)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.validateTuple(TupleMarshallerImpl.java:326)
>   at 
> org.apache.ignite.internal.schema.marshaller.TupleMarshallerImpl.marshal(TupleMarshallerImpl.java:101)
>   ... 23 more
>at 
> Apache.Ignite.Internal.Table.DataStreame

[jira] [Commented] (IGNITE-24540) Creating 1000 tables with 200 columns throws "The primary replica has changed"

2025-02-28 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931428#comment-17931428
 ] 

Alexander Lapin commented on IGNITE-24540:
--

[~vdmitrienko] Could you please provide logs?

> Creating 1000 tables with 200 columns throws "The primary replica has changed"
> --
>
> Key: IGNITE-24540
> URL: https://issues.apache.org/jira/browse/IGNITE-24540
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 3.0, 3.0.0-beta1
> Environment: 3 nodes (each node is CMG, each node 
> {color:#067d17}"-Xms4096m"{color}, {color:#067d17}"-Xmx4096m"{color}), each 
> on separate host. Each host vCPU: 4, Memory: 32GB.
>Reporter: Vladimir Dmitrienko
>Priority: Major
>  Labels: ignite-3
> Attachments: node_1_logs.zip, node_2_logs.zip, node_3_logs.zip, 
> test.log
>
>
> *Steps to reproduce:*
>  # Start 3 nodes (each node is CMG, each node 
> {color:#067d17}"-Xms4096m"{color}, {color:#067d17}"-Xmx4096m"{color}), each 
> on separate host. Each host vCPU: 4, Memory: 32GB.
>  # Create 50 tables with 200 columns in 1 thread.
>  # Assert 50 tables are present in system view.
>  # Insert 1 row into each.
>  # Assert row contents are correct.
>  # Repeat steps 2-5 until the number of tables reaches 1000.
> *Expected:*
> 1000 tables are created.
> *Actual:*
> An exception is thrown during the creation of tables 400 - 449, at step 5.
> {code:java}
> java.sql.SQLException: 
> The primary replica has changed [
> expectedLeaseholderName=TablesAmountCapacityMultiNodeTest_cluster_2, 
> currentLeaseholderName=null, 
> expectedLeaseholderId=1522cfcd-0400-4d23-9015-31dfbad90780, 
> currentLeaseholderId=null, 
> expectedEnlistmentConsistencyToken=114017056457556334, 
> currentEnlistmentConsistencyToken=null
> ]{code}
> Based on the information in the exception, it appears to have been thrown 
> from the {{PartitionReplicaListener#ensureReplicaIsPrimary}} method.
> *Notes:*
> The issue might be related to: 
> https://issues.apache.org/jira/browse/IGNITE-20911
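
A rough reproduction sketch of the steps above, assuming the Ignite 3 thin 
client SQL API; the address, table names, and column types are placeholders, 
and error handling, assertions, and multithreading are omitted:

{code:java}
import org.apache.ignite.client.IgniteClient;

public class ManyWideTablesRepro {
    public static void main(String[] args) throws Exception {
        try (IgniteClient client = IgniteClient.builder()
                .addresses("127.0.0.1:10800")
                .build()) {
            for (int tableIdx = 0; tableIdx < 1000; tableIdx++) {
                // id column + 199 payload columns = 200 columns per table.
                StringBuilder ddl = new StringBuilder("CREATE TABLE t_" + tableIdx + " (id INT PRIMARY KEY");
                for (int col = 0; col < 199; col++) {
                    ddl.append(", c").append(col).append(" VARCHAR");
                }
                ddl.append(")");

                client.sql().execute(null, ddl.toString()).close();

                // One row per table, as in the reproduction steps.
                client.sql().execute(null, "INSERT INTO t_" + tableIdx + " (id) VALUES (1)").close();
            }
        }
    }
}
{code}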



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24666) Suboptimal method Outbox#flush

2025-02-28 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931441#comment-17931441
 ] 

Ignite TC Bot commented on IGNITE-24666:


{panel:title=Branch: [pull/11901/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11901/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=8330781&buildTypeId=IgniteTests24Java8_RunAll]

> Suboptimal method Outbox#flush
> --
>
> Key: IGNITE-24666
> URL: https://issues.apache.org/jira/browse/IGNITE-24666
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Attachments: Снимок экрана 2025-02-27 в 19.43.30.png, Снимок экрана 
> 2025-02-27 в 19.45.21.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This method uses Streams and loops over the same collection twice. It's 
> possible to optimize the memory usage: for example, in the attachment this 
> method is responsible for 10% of allocations. After the optimization it 
> allocates only 1% (see attachment 2).
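
Not the actual Outbox code (it is not shown in this ticket), but the general 
shape of such an optimization is replacing two stream passes over the same 
collection with a single explicit loop over a pre-sized list:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

class FlushSketch {
    interface Row {
        boolean ready();

        String payload();
    }

    // Before: two passes over the same collection plus intermediate stream allocations.
    static List<String> flushTwoPasses(List<Row> rows) {
        boolean anyReady = rows.stream().anyMatch(Row::ready); // pass 1

        if (!anyReady) {
            return List.of();
        }

        return rows.stream().filter(Row::ready).map(Row::payload).collect(Collectors.toList()); // pass 2
    }

    // After: a single pass, one pre-sized list, no intermediate stream objects.
    static List<String> flushSinglePass(List<Row> rows) {
        List<String> out = new ArrayList<>(rows.size());

        for (Row row : rows) {
            if (row.ready()) {
                out.add(row.payload());
            }
        }

        return out;
    }
}
{code}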



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24666) Suboptimal method Outbox#flush

2025-02-28 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-24666:

Fix Version/s: 2.18

> Suboptimal method Outbox#flush
> --
>
> Key: IGNITE-24666
> URL: https://issues.apache.org/jira/browse/IGNITE-24666
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.18
>
> Attachments: Снимок экрана 2025-02-27 в 19.43.30.png, Снимок экрана 
> 2025-02-27 в 19.45.21.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This method uses Streams and loops over the same collection twice. It's 
> possible to optimize the memory usage: for example, in the attachment this 
> method is responsible for 10% of allocations. After the optimization it 
> allocates only 1% (see attachment 2).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24540) Creating 1000 tables with 200 columns throws "The primary replica has changed"

2025-02-28 Thread Vladimir Dmitrienko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931429#comment-17931429
 ] 

Vladimir Dmitrienko commented on IGNITE-24540:
--

[~alapin] You can find them in Attachments.

> Creating 1000 tables with 200 columns throws "The primary replica has changed"
> --
>
> Key: IGNITE-24540
> URL: https://issues.apache.org/jira/browse/IGNITE-24540
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 3.0, 3.0.0-beta1
> Environment: 3 nodes (each node is CMG, each node 
> {color:#067d17}"-Xms4096m"{color}, {color:#067d17}"-Xmx4096m"{color}), each 
> on separate host. Each host vCPU: 4, Memory: 32GB.
>Reporter: Vladimir Dmitrienko
>Priority: Major
>  Labels: ignite-3
> Attachments: node_1_logs.zip, node_2_logs.zip, node_3_logs.zip, 
> test.log
>
>
> *Steps to reproduce:*
>  # Start 3 nodes (each node is CMG, each node 
> {color:#067d17}"-Xms4096m"{color}, {color:#067d17}"-Xmx4096m"{color}), each 
> on separate host. Each host vCPU: 4, Memory: 32GB.
>  # Create 50 tables with 200 columns in 1 thread.
>  # Assert 50 tables are present in system view.
>  # Insert 1 row into each.
>  # Assert row contents are correct.
>  # Repeat steps 2-5 until the number of tables reaches 1000.
> *Expected:*
> 1000 tables are created.
> *Actual:*
> An exception is thrown during the creation of tables 400 - 449, at step 5.
> {code:java}
> java.sql.SQLException: 
> The primary replica has changed [
> expectedLeaseholderName=TablesAmountCapacityMultiNodeTest_cluster_2, 
> currentLeaseholderName=null, 
> expectedLeaseholderId=1522cfcd-0400-4d23-9015-31dfbad90780, 
> currentLeaseholderId=null, 
> expectedEnlistmentConsistencyToken=114017056457556334, 
> currentEnlistmentConsistencyToken=null
> ]{code}
> Based on the information in the exception, it appears to have been thrown 
> from the {{PartitionReplicaListener#ensureReplicaIsPrimary}} method.
> *Notes:*
> The issue might be related to: 
> https://issues.apache.org/jira/browse/IGNITE-20911



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24674) ItIdempotentCommandCacheTest.testIdempotentInvoke failed with an assertion error

2025-02-28 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-24674:
-
Labels: ignite-3  (was: )

> ItIdempotentCommandCacheTest.testIdempotentInvoke failed with an assertion 
> error
> 
>
> Key: IGNITE-24674
> URL: https://issues.apache.org/jira/browse/IGNITE-24674
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> {code:java}
> org.opentest4j.AssertionFailedError: expected:  but was:   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)  
> at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)  at 
> app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)  at 
> app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:183)  at 
> app//org.apache.ignite.internal.metastorage.impl.ItIdempotentCommandCacheTest.testIdempotentInvoke(ItIdempotentCommandCacheTest.java:356)
>  {code}
> TC link 
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleMetastorageClient/8911747



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24674) ItIdempotentCommandCacheTest.testIdempotentInvoke failed with an assertion error

2025-02-28 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-24674:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> ItIdempotentCommandCacheTest.testIdempotentInvoke failed with an assertion 
> error
> 
>
> Key: IGNITE-24674
> URL: https://issues.apache.org/jira/browse/IGNITE-24674
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>
> {code:java}
> org.opentest4j.AssertionFailedError: expected:  but was:   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>   at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>   at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)  
> at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)  at 
> app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:31)  at 
> app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:183)  at 
> app//org.apache.ignite.internal.metastorage.impl.ItIdempotentCommandCacheTest.testIdempotentInvoke(ItIdempotentCommandCacheTest.java:356)
>  {code}
> TC link 
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleMetastorageClient/8911747



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24676) Productization of Temporal Types

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24676:
--
Description: 
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending 
the documentation with known limitations for problems that we are not going to 
fix in the near future (for instance, the type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefore this must 
be mentioned as a known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal types hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work 
as expected

  was:
This is an umbrella ticket to keep track of all work related to productization 
of temporal types. 

The first phase is to review existing test coverage according to a test plan 
(presented below) and add absent tests. The goal is to identify all issues 
related to temporal types. All found problems (as well as already filed ones) 
must be linked to this epic.

The second phase will include fixing all attached issues, as well as amending 
the documentation with known limitations for problems that we are not going to 
fix in the near future (for instance, the type `TIME WITH TIME ZONE` is not 
supported and we have no plan to support it any time soon, therefore this must 
be mentioned as a known limitation).

Note: the phases do not necessarily have to be executed sequentially; critical 
issues may be fixed asap.

The temporal types hierarchy is as follows:
 * All temporal types
 ** Datetime types
 *** DATE
 *** TIME [WITHOUT TIME ZONE]
 *** TIME WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP [WITHOUT TIME ZONE]
 *** TIMESTAMP WITH TIME ZONE // not supported; known limitation
 *** TIMESTAMP WITH LOCAL TIME ZONE // not defined by SQL standard; extension
 ** Interval types
 *** YEAR TO MONTH intervals
 *** DAY TO SECOND intervals

The test plan is as follows:
 * For all temporal types check different values (literals, dyn params, table 
columns):
 ** check boundaries
 ** check different precisions for fraction of second
 ** for datetime types check leap year/month/second
 ** for literals check supported formats
 ** for table columns check support for defaults; boundaries check; different 
precision for fraction of second
 * For all temporal types check operations:
 ** check type coercion for all allowed operations
 ** below operations must be checked with similar types and types of different 
precision:
 *** comparison
 *** arithmetic
 ** check conversion between different types (aka CAST operator)
 *** for conversion from character string to temporal type check conversion 
from all allowed formats
 ** check built-in functions
 *** make sure all functions required by the SQL standard are present and work 
as expected


> Productization of Temporal Types
> 
>
> Key: IGNITE-24676
> URL: https://issues.apache.org/jira/browse/IGNITE-24676
> Project: Ignite
>  Issue Type: Epic
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> This is an umbrella ticket to keep track of all work related to 
> productization of temporal types. 

[jira] [Commented] (IGNITE-24505) ClientHandlerModule does not log exception in initChannel

2025-02-28 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931457#comment-17931457
 ] 

Igor Sapego commented on IGNITE-24505:
--

Looks good to me.

> ClientHandlerModule does not log exception in initChannel
> -
>
> Key: IGNITE-24505
> URL: https://issues.apache.org/jira/browse/IGNITE-24505
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When an exception happens in *startEndpoint* in *ChannelInitializer*, it is 
> not logged and the client connection is dropped silently. This makes 
> understanding the problem very difficult.
> https://github.com/apache/ignite-3/blob/78a3bf2e355949bbb0d2c95672bb82d58616742f/modules/client-handler/src/main/java/org/apache/ignite/client/handler/ClientHandlerModule.java#L296
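
A minimal sketch of the usual remedy (not necessarily the fix that was merged 
for this ticket): catch and log failures inside the Netty ChannelInitializer 
before the connection is closed. The logger and pipeline setup are placeholders:

{code:java}
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import java.util.logging.Level;
import java.util.logging.Logger;

class LoggingChannelInitializer extends ChannelInitializer<SocketChannel> {
    private static final Logger LOG = Logger.getLogger(LoggingChannelInitializer.class.getName());

    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        try {
            // Pipeline setup that may throw (handler construction, SSL context, etc.).
            configurePipeline(ch);
        } catch (Exception e) {
            // Log before the channel is closed, otherwise the connection is dropped silently.
            LOG.log(Level.SEVERE, "Failed to initialize client channel " + ch.remoteAddress(), e);
            ch.close();
            throw e;
        }
    }

    private void configurePipeline(SocketChannel ch) {
        // Placeholder for the real handler registration.
    }
}
{code}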



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-24035) Improvement proposal: Hide CMG and Metastore Group from cluster init operation

2025-02-28 Thread Stanislav Lukyanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-24035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17931490#comment-17931490
 ] 

Stanislav Lukyanov commented on IGNITE-24035:
-

[~aleksandr.pakhomov] [~sanpwc] [~igusev] Could you please review the patch?

> Improvement proposal: Hide CMG and Metastore Group from cluster init operation
> --
>
> Key: IGNITE-24035
> URL: https://issues.apache.org/jira/browse/IGNITE-24035
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr
>Assignee: Stanislav Lukyanov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h1. Motivation
> Imagine you are trying to launch an Ignite node and play with it. You've 
> downloaded the distribution and installed it with standard packaging tools. 
> Then you launched the node and entered the CLI. What's next?
> Exploring the CLI, you realize that the node you've launched is
> not initialized. You run the {{cluster init}} command and see the following:
> {code:java}
> [node1]> cluster init
> Missing required options: '--name=', '--metastorage-group= name>'
> USAGE
>  cluster init [OPTIONS]
> DESCRIPTION
> Initializes an Ignite cluster
> OPTIONS * - required option
> * --name=   Human-readable name of the cluster
> * --metastorage-group=[,...]
>  Metastorage group nodes (use comma-separated 
> list of node names '--metastorage-group node1, node2' to specify
>more than one node) that will host the Meta 
> Storage. If the --cluster-management-group option is omitted, the
>same nodes will also host the Cluster 
> Management Group.
>   --cluster-management-group=[,...]
>  Names of nodes (use comma-separated list of node 
> names '--cluster-management-group node1, node2' to specify more
>than one node) that will host the Cluster 
> Management Group. If omitted, then --metastorage-group values will
>also supply the nodes for the Cluster 
> Management Group.
>   --config=  Cluster configuration that will be applied 
> during the cluster initialization
>   --config-files=[,...]
>  Path to cluster configuration files (use 
> comma-separated list of paths '--config-files path1, path2' to specify
>more than one file)
>   --url= URL of cluster endpoint. It can be any node 
> URL.If not set, then the default URL from the profile settings will
>be used
>   -h, --help Show help for the specified command
>   -v, --verbose  Show additional information: logs, REST calls. 
> This flag is useful for debugging. Specify multiple options to
>increase verbosity for REST calls. Single 
> option shows request and response, second option (-vv) shows headers,
>third one (-vvv) shows body
> {code}
> Run: {{cluster init --name=i_want_it_work --metastorage-group=node1}}
> Here is the question: what is the "Meta Storage" and why do we need it? This 
> is something about metadata, ok. 
> But what metadata? Why should I, as a user, care about it? Should the 
> metastorage group be somehow isolated from 
> others? Should it be installed on hardware with special requirements? So 
> many questions.
> It would be much better to have a simple command that initializes 
> the cluster with default settings 
> (especially for the case with one node, where the choice is obvious).
> Run: {{cluster init --name=i_want_it_work}} should be enough. Even --name 
> could be optional.
> h1. Solution
> Make the MS and CMG node parameters optional.
> If both are provided, the behavior is the same as now.
> If only one is provided, the other one uses the same value. Note: this is 
> different from the current behavior where CMG can default to MS but MS cannot 
> default to CMG because MS isn't optional.
> If neither is provided, both are set to the same list of nodes picked 
> automatically.
> The automatic MS/CMG node choice is as follows:
>  * If cluster size <= 3, all nodes are picked.
>  * If cluster size = 4, only 3 nodes are picked. This is to avoid an even 
> number of nodes which may create split brain issues.
>  * If cluster size is >= 5, only 5 nodes are picked. We don't pick more nodes 
> because it may create unnecessary performance overhead while 5 MS/CMG create 
> sufficient failure tolerance in most use cases.
> The nodes are always picked in alphabetical order of node names. This is to 
> ensure that, given the same set of node names, w
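
A small Java sketch of the node-selection rule described in the Solution 
section above (class and method names are illustrative, not the actual CLI 
code):

{code:java}
import java.util.List;
import java.util.stream.Collectors;

class DefaultGroupSelector {
    /** Picks default MS/CMG nodes: all nodes if <= 3, 3 of 4, otherwise 5, in alphabetical order. */
    static List<String> pickDefaultGroup(List<String> allNodeNames) {
        List<String> sorted = allNodeNames.stream().sorted().collect(Collectors.toList());

        int size = sorted.size();
        int picked;

        if (size <= 3) {
            picked = size; // small clusters: every node hosts MS/CMG
        } else if (size == 4) {
            picked = 3;    // avoid an even-sized group, which is prone to split brain
        } else {
            picked = 5;    // enough fault tolerance without unnecessary overhead
        }

        return sorted.subList(0, picked);
    }
}
{code}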

[jira] [Updated] (IGNITE-24678) Sql. Introduce heuristic to exclude NLJ when HJ may be applied

2025-02-28 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-24678:
--
Fix Version/s: 3.1

> Sql. Introduce heuristic to exclude NLJ when HJ may be applied
> --
>
> Key: IGNITE-24678
> URL: https://issues.apache.org/jira/browse/IGNITE-24678
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, we have very primitive statistics which include only the table 
> size. Moreover, they are gathered with some throttling that prevents updating 
> statistics for the same table more often than once per minute.
> The problem arises when a heavy query is executed immediately after all data 
> has been uploaded to a table (which is actually every benchmark scenario): 
> the first insert triggers gathering of table stats, resulting in a table size 
> close to 1 being cached in the statistics manager. During the planning phase, 
> the cost-based optimizer makes wrong choices due to the misleading statistics. 
> The most expensive one is choosing NestedLoopJoin over HashJoin. For instance, 
> query 5 from the TPC-H suite at scale factor 0.1, which normally completes in 
> under 1 second (373ms on my laptop), takes tens of minutes to complete with 
> the wrong join algorithm (it didn't finish in 15 min, so I killed it).
> To mitigate the issue, we may introduce a heuristic to avoid using NLJ for 
> joins that can be executed with HJ.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-24564) Sql. Prepare catalog serializers to move to the new protocol version

2025-02-28 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-24564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-24564:
--
Description: 
In order to migrate to the new serialization protocol.

1. Need to introduce annotation class

{code:java}
@Target(ElementType.TYPE)
@Retention(RUNTIME)
public @interface CatalogSerializer {
/**
 * Returns serializer version.
 */
short version();

/**
 * Returns the type of the object being serialized.
 */
MarshallableEntryType type();

/**
 * The product version starting from which the serializer is used.
 */
String since();
}
{code}

2. Annotate all existing serializers with @CatalogSerializer(version=1, 
since="3.0.0", type=​)

3. Move all existing serializers to a separate package (this will require adding 
missing methods to access private fields of descriptors).

4. Implement serializer registry building.
{code:Java}
interface CatalogEntrySerializerProvider {
// New method is used to obtain serializer of specific version.
CatalogObjectSerializer get(int version, int typeId);

// Used to obtain the newest serializer version.
int latestSerializerVersion(int typeId);
}
{code}

During initialization the registry must scan the serializer folder and build a 
registry of available serializers.
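
For illustration only, a sketch of how such a registry could be assembled from 
annotated classes once they are discovered; the class below is hypothetical and 
relies only on the @CatalogSerializer annotation and the MarshallableEntryType 
referenced above:

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class SerializerRegistryBuilder {
    /** Groups serializer classes by entry type and version; scanning the serializer folder itself is out of scope here. */
    static Map<MarshallableEntryType, TreeMap<Short, Class<?>>> build(List<Class<?>> serializerClasses) {
        Map<MarshallableEntryType, TreeMap<Short, Class<?>>> registry = new HashMap<>();

        for (Class<?> cls : serializerClasses) {
            CatalogSerializer meta = cls.getAnnotation(CatalogSerializer.class);

            if (meta == null) {
                continue; // not a catalog serializer
            }

            registry.computeIfAbsent(meta.type(), t -> new TreeMap<>()).put(meta.version(), cls);
        }

        return registry;
    }
}
{code}

With such a map, latestSerializerVersion(typeId) would correspond to lastKey() 
of the per-type map, and get(version, typeId) to a plain lookup.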

  was:
In order to migrate to the new serialization protocol.

1. Need to introduce annotation class

{code:java}
@Target(ElementType.TYPE)
@Retention(RUNTIME)
public @interface CatalogSerializer {
/**
 * Returns serializer version.
 */
short version();

/**
 * Returns the type of the object being serialized.
 */
MarshallableEntryType type();

/**
 * The product version starting from which the serializer is used.
 */
String since();
}
{code}

2. Annotate all existing serializers with @CatalogSerializer(version=1, 
since="3.0.0", type=​)

3. Move all existing serializers to a separate package (this will require adding 
missing methods to access private fields of descriptors).

4. Implement serializer registry building.
{code:Java}
interface CatalogEntrySerializerProvider {
// New method is used to obtain serializer of specific version.
CatalogObjectSerializer get(int version, int typeId);

// Used to obtain the newest serializer version.
int activeSerializerVersion(int typeId);
}
{code}

During initialization the registry must scan the serializer folder and build a 
registry of available serializers.


> Sql. Prepare catalog serializers to move to the new protocol version
> 
>
> Key: IGNITE-24564
> URL: https://issues.apache.org/jira/browse/IGNITE-24564
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to migrate to the new serialization protocol.
> 1. Need to introduce annotation class
> {code:java}
> @Target(ElementType.TYPE)
> @Retention(RUNTIME)
> public @interface CatalogSerializer {
> /**
>  * Returns serializer version.
>  */
> short version();
> /**
>  * Returns the type of the object being serialized.
>  */
> MarshallableEntryType type();
> /**
>  * The product version starting from which the serializer is used.
>  */
> String since();
> }
> {code}
> 2. Annotate all existing serializers with @CatalogSerializer(version=1, 
> since="3.0.0", type=​)
> 3. Move all existing serializers to a separate package (this will require 
> adding missing methods to access private fields of descriptors).
> 4. Implement serializer registry building.
> {code:Java}
> interface CatalogEntrySerializerProvider {
> // New method is used to obtain serializer of specific version.
> CatalogObjectSerializer get(int version, int typeId);
> // Used to obtain the newest serializer version.
> int latestSerializerVersion(int typeId);
> }
> {code}
> During initialization the registry must scan the serializer folder and build 
> a registry of available serializers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)