[jira] [Assigned] (IGNITE-20681) Remove limit on write intent switch attempts

2024-01-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov reassigned IGNITE-20681:
--

Assignee:  Kirill Sizov

> Remove limit on write intent switch attempts
> 
>
> Key: IGNITE-20681
> URL: https://issues.apache.org/jira/browse/IGNITE-20681
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> {{WriteIntentSwitchProcessor#ATTEMPTS_TO_SWITCH_WI}} is not actually needed 
> and can be removed. After that, the code of durable cleanup can be refactored 
> a bit in order to unify the logic.
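For illustration only, a minimal sketch of the intended direction: retry the write intent switch until it succeeds instead of giving up after a fixed number of attempts. The {{attemptSwitch}} method, the delay, and the executor are placeholders, not the actual WriteIntentSwitchProcessor API.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class UnboundedSwitchRetrySketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /** Retries the switch until it succeeds instead of counting attempts down to a limit. */
    CompletableFuture<Void> switchWriteIntentDurably() {
        return attemptSwitch().thenCompose(done -> {
            if (done) {
                return CompletableFuture.<Void>completedFuture(null);
            }

            // Not done yet: schedule another attempt after a short delay.
            CompletableFuture<Void> next = new CompletableFuture<>();

            scheduler.schedule(() -> switchWriteIntentDurably().whenComplete((v, e) -> {
                if (e != null) {
                    next.completeExceptionally(e);
                } else {
                    next.complete(v);
                }
            }), 100, TimeUnit.MILLISECONDS);

            return next;
        });
    }

    /** Placeholder for a single write intent switch attempt. */
    private CompletableFuture<Boolean> attemptSwitch() {
        return CompletableFuture.completedFuture(true);
    }
}
{code}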



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21269) Remove ClusterNodeResolver

2024-01-16 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-21269:
--

 Summary: Remove ClusterNodeResolver
 Key: IGNITE-21269
 URL: https://issues.apache.org/jira/browse/IGNITE-21269
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2


After IGNITE-21232 is implemented, ClusterNodeResolver seems redundant and we 
should probably remove it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21234) Acquired checkpoint read lock waits for scheduled checkpoint write unlock sometimes

2024-01-16 Thread Kirill Tkalenko (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807130#comment-17807130
 ] 

Kirill Tkalenko commented on IGNITE-21234:
--

Looks good.

> Acquired checkpoint read lock waits for scheduled checkpoint write unlock 
> sometimes
> ---
>
> Key: IGNITE-21234
> URL: https://issues.apache.org/jira/browse/IGNITE-21234
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In a situation where we have "too many dirty pages" we trigger a checkpoint and 
> wait until it starts. This can take seconds because we have to flush 
> free-lists before acquiring the checkpoint write lock, which can cause severe dips 
> in performance for no good reason.
> I suggest introducing two modes for triggering checkpoints when we have too 
> many dirty pages: a soft threshold and a hard threshold.
>  * soft - trigger a checkpoint, but don't wait for it to start; just continue all 
> operations as usual. Make it the current threshold - 75% of any existing 
> memory segment must be dirty.
>  * hard - trigger a checkpoint and wait until it starts, the way it behaves 
> right now. Make it higher than the current threshold - 90% of any existing memory 
> segment must be dirty.
> Maybe we should use different values for the thresholds; that should be discussed 
> during the review.
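For illustration, a minimal sketch of the proposed two-threshold behavior; the class, method, and field names are placeholders rather than the actual checkpointer API, and the percentages are the ones suggested above.

{code:java}
class DirtyPagesCheckpointTriggerSketch {
    /** Soft threshold: trigger a checkpoint but keep processing operations (suggested 75%). */
    private static final double SOFT_THRESHOLD = 0.75;

    /** Hard threshold: trigger a checkpoint and wait until it starts (suggested 90%). */
    private static final double HARD_THRESHOLD = 0.90;

    void onDirtyPagesRatioChanged(double dirtyRatioOfWorstSegment) throws InterruptedException {
        if (dirtyRatioOfWorstSegment >= HARD_THRESHOLD) {
            // Far too many dirty pages: behave the way it does now and block until the checkpoint starts.
            triggerCheckpoint().awaitCheckpointStart();
        } else if (dirtyRatioOfWorstSegment >= SOFT_THRESHOLD) {
            // Dirty, but not critically: trigger the checkpoint and continue without waiting.
            triggerCheckpoint();
        }
    }

    private CheckpointProgress triggerCheckpoint() {
        // Placeholder: schedule a checkpoint and return a handle to its progress.
        return new CheckpointProgress();
    }

    static class CheckpointProgress {
        void awaitCheckpointStart() throws InterruptedException {
            // Placeholder: wait until the checkpoint has actually started.
        }
    }
}
{code}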



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21270) Use consistentId instead of ClusterNode in Compute API

2024-01-16 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-21270:
--

 Summary: Use consistentId instead of ClusterNode in Compute API
 Key: IGNITE-21270
 URL: https://issues.apache.org/jira/browse/IGNITE-21270
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21270) Use consistentId instead of ClusterNode in Compute API

2024-01-16 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21270:
---
Description: 
Currently, methods of the Compute API accept ClusterNode type to identify nodes 
on which a job is to be executed. This seems redundant as we only use 
consistentId from those objects.

If we replace ClusterNode with String (representing consistentId), we will not 
lose expressivity, but we'll be able to move ClusterNode and TopologyService 
out of our public API, and having a smaller public API seems to be a good thing.
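For illustration, a hedged sketch of the signature change being discussed; the interfaces and parameter shapes below are simplified placeholders, not the exact Compute API.

{code:java}
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Minimal stand-in so the sketch is self-contained; today the real ClusterNode is part of the public API.
interface ClusterNode {
    String consistentId();
}

// Before: callers must pass ClusterNode objects, typically obtained from TopologyService.
interface ComputeBefore {
    <R> CompletableFuture<R> executeAsync(Set<ClusterNode> nodes, String jobClassName, Object... args);
}

// After: callers pass consistent IDs; the nodes are resolved internally at the very beginning.
interface ComputeAfter {
    <R> CompletableFuture<R> executeAsync(Set<String> nodeConsistentIds, String jobClassName, Object... args);
}
{code}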

> Use consistentId instead of ClusterNode in Compute API
> --
>
> Key: IGNITE-21270
> URL: https://issues.apache.org/jira/browse/IGNITE-21270
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, methods of the Compute API accept ClusterNode type to identify 
> nodes on which a job is to be executed. This seems redundant as we only use 
> consistentId from those objects.
> If we replace ClusterNode with String (representing consistentId), we will 
> not lose expressivity, but we'll be able to move ClusterNode and 
> TopologyService out of our public API, and having a smaller public API seems to be a 
> good thing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21270) Use consistentId instead of ClusterNode in Compute API

2024-01-16 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21270:
---
Description: 
Currently, methods of the Compute API accept ClusterNode type to identify nodes 
on which a job is to be executed. This seems redundant as we only use 
consistentId from those objects.

If we replace ClusterNode with String (representing consistentId), we will not 
lose API capabilities (we'll just resolve the nodes internally at the very 
beginning), but we'll be able to move ClusterNode and TopologyService out of our 
public API, and having a smaller public API seems to be a good thing.

  was:
Currently, methods of the Compute API accept ClusterNode type to identify nodes 
on which a job is to be executed. This seems redundant as we only use 
consistentId from those objects.

If we replace ClusterNode with String (representing consistentId), we will not 
lose expressivity, but we'll be able to move ClusterNode and TopologyService 
out of our public API, and having a smaller public API seems to be a good thing.


> Use consistentId instead of ClusterNode in Compute API
> --
>
> Key: IGNITE-21270
> URL: https://issues.apache.org/jira/browse/IGNITE-21270
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, methods of the Compute API accept ClusterNode type to identify 
> nodes on which a job is to be executed. This seems redundant as we only use 
> consistentId from those objects.
> If we replace ClusterNode with String (representing consistentId), we will 
> not lose API capabilities (we'll just resolve the nodes internally at the 
> very beginning), but we'll be able to move ClusterNode and TopologyService 
> out of our public API, and having a smaller public API seems to be a good thing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21271) C++ test transaction_error is flaky on TC

2024-01-16 Thread Igor Sapego (Jira)
Igor Sapego created IGNITE-21271:


 Summary: C++ test transaction_error is flaky on TC
 Key: IGNITE-21271
 URL: https://issues.apache.org/jira/browse/IGNITE-21271
 Project: Ignite
  Issue Type: Bug
  Components: thin client
Reporter: Igor Sapego
Assignee: Igor Sapego
 Fix For: 3.0.0-beta2


The following test is flaky: 
https://ci.ignite.apache.org/test/5785530357000715945?currentProjectId=ApacheIgnite3xGradle_Test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21248) HeapUnboundedLockManager lacks abandoned locks handling

2024-01-16 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21248:
-
Epic Link: IGNITE-21174

> HeapUnboundedLockManager lacks abandoned locks handling
> ---
>
> Key: IGNITE-21248
> URL: https://issues.apache.org/jira/browse/IGNITE-21248
> Project: Ignite
>  Issue Type: Task
>Reporter:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> {{HeapLockManager}} notifies {{OrphanDetector}} of a lock conflict to check 
> whether the lock holder is still alive and immediately fail the request if it 
> is not (done in IGNITE-21147).
> {{HeapUnboundedLockManager}} does not have similar changes; it does not check 
> the response from {{OrphanDetector}}.
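For illustration, a rough sketch of the missing check, under the assumption that the lock manager can ask the orphan detector whether the conflicting lock holder is still alive; the {{checkTxAlive}} method and the stub types are placeholders, not the actual interfaces.

{code:java}
import java.util.UUID;
import java.util.concurrent.CompletableFuture;

class AbandonedLockCheckSketch {
    private final OrphanDetectorStub orphanDetector = new OrphanDetectorStub();

    /** On a lock conflict, verify that the current holder is alive and fail the waiter immediately if it is not. */
    CompletableFuture<Void> onLockConflict(UUID holderTxId) {
        return orphanDetector.checkTxAlive(holderTxId).thenCompose(alive -> {
            if (!alive) {
                // The holder is abandoned: fail the request right away instead of waiting forever.
                return CompletableFuture.<Void>failedFuture(
                        new IllegalStateException("Lock holder " + holderTxId + " is abandoned"));
            }

            return CompletableFuture.<Void>completedFuture(null);
        });
    }

    /** Illustrative stand-in for the orphan detector. */
    static class OrphanDetectorStub {
        CompletableFuture<Boolean> checkTxAlive(UUID txId) {
            return CompletableFuture.completedFuture(true);
        }
    }
}
{code}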



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21272) ItIgniteNodeRestartTest#metastorageRecoveryTest is flaky

2024-01-16 Thread Mirza Aliev (Jira)
Mirza Aliev created IGNITE-21272:


 Summary: ItIgniteNodeRestartTest#metastorageRecoveryTest is flaky
 Key: IGNITE-21272
 URL: https://issues.apache.org/jira/browse/IGNITE-21272
 Project: Ignite
  Issue Type: Bug
Reporter: Mirza Aliev
 Attachments: _Integration_Tests_Module_Runner_21223.log

ItIgniteNodeRestartTest#metastorageRecoveryTest has started to fail with

 
{noformat}
java.lang.AssertionError: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:425514ac-f6c5-4d9e-a200-d825b9b87150 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:425514ac-f6c5-4d9e-a200-d825b9b87150 Unable to start [node=iinrt_mrt_1]
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:78)
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:35)
  at org.hamcrest.TypeSafeMatcher.matches(TypeSafeMatcher.java:67)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:10)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:560)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:574)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.metastorageRecoveryTest(ItIgniteNodeRestartTest.java:836)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
  at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
  at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
  at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
  at 
org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
  at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.platform.engine.support.hierarchical.NodeTest

[jira] [Updated] (IGNITE-21272) ItIgniteNodeRestartTest#metastorageRecoveryTest is flaky

2024-01-16 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-21272:
-
Description: 
ItIgniteNodeRestartTest#metastorageRecoveryTest has started to fail with:
{noformat}
java.lang.AssertionError: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:425514ac-f6c5-4d9e-a200-d825b9b87150 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:425514ac-f6c5-4d9e-a200-d825b9b87150 Unable to start [node=iinrt_mrt_1]
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:78)
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:35)
  at org.hamcrest.TypeSafeMatcher.matches(TypeSafeMatcher.java:67)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:10)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:560)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:574)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.metastorageRecoveryTest(ItIgniteNodeRestartTest.java:836)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
  at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
  at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
  at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
  at 
org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
  at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
  at 
org.junit.platform.engine.support.hierarchical.Sam

[jira] [Updated] (IGNITE-21272) ItIgniteNodeRestartTest#metastorageRecoveryTest is flaky

2024-01-16 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-21272:
-
Description: 
{{ItIgniteNodeRestartTest#metastorageRecoveryTest}} has started to fail with:
{noformat}
java.lang.AssertionError: java.util.concurrent.ExecutionException: 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:425514ac-f6c5-4d9e-a200-d825b9b87150 
org.apache.ignite.lang.IgniteException: IGN-CMN-65535 
TraceId:425514ac-f6c5-4d9e-a200-d825b9b87150 Unable to start [node=iinrt_mrt_1]
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:78)
  at 
org.apache.ignite.internal.testframework.matchers.CompletableFutureMatcher.matchesSafely(CompletableFutureMatcher.java:35)
  at org.hamcrest.TypeSafeMatcher.matches(TypeSafeMatcher.java:67)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:10)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:6)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:560)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.startNode(ItIgniteNodeRestartTest.java:574)
  at 
org.apache.ignite.internal.runner.app.ItIgniteNodeRestartTest.metastorageRecoveryTest(ItIgniteNodeRestartTest.java:836)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
  at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
  at 
org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
  at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
  at 
org.junit.jupiter.engine.extension.SameThreadTimeoutInvocation.proceed(SameThreadTimeoutInvocation.java:45)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
  at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:94)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
  at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
  at 
org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
  at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
  at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
  at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
  at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
  at 
org.junit.platform.engine.support.hierarchical

[jira] [Assigned] (IGNITE-20267) Infinite loop of SocketException

2024-01-16 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego reassigned IGNITE-20267:


Assignee: (was: Igor Sapego)

> Infinite loop of SocketException
> ---
>
> Key: IGNITE-20267
> URL: https://issues.apache.org/jira/browse/IGNITE-20267
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 2.15
>Reporter: Sebastian Fabisz
>Priority: Major
>
> Some of our Ignite instances are experiencing an infinite loop of the same error:
> {{ERROR 2023-07-27 08:26:44,876 
> [grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%] 
> o.a.i.s.c.t.TcpCommunicationSpi traceId="" spanId="" - Failed to process 
> selector key [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker 
> [super=AbstractNioClientWorker [idx=2, bytesRcvd=21528, bytesSent=15345, 
> bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker 
> [name=grid-nio-worker-tcp-comm-2, igniteInstanceName=TcpCommunicationSpi, 
> finished=false, heartbeatTs=1690442803865, hashCode=2102759141, 
> interrupted=false, 
> runner=grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%]]], 
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], 
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], 
> inRecovery=null, outRecovery=null, closeSocket=true, 
> outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@69a257d1,
>  super=GridNioSessionImpl [locAddr=\{removed}, rmtAddr=\{removed}, 
> createTime=1690249023154, closeTime=0, bytesSent=18, bytesRcvd=3, 
> bytesSent0=0, bytesRcvd0=0, sndSchedTime=1690442567813, 
> lastSndTime=1690249023154, lastRcvTime=1690442567813, readsPaused=false, 
> filterChain=FilterChain[filters=[GridNioCodecFilter 
> [parser=o.a.i.i.util.nio.GridDirectParser@1fff7116, directMode=true], 
> GridConnectionBytesVerifyFilter], accepted=true, markedForClose=false]]] 
> java.net.SocketException: Connection reset at 
> java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394)
>  at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:411) 
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1351)
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeys(GridNioServer.java:2575)
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2271)
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1910)
>  at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) at 
> java.base/java.lang.Thread.run(Thread.java:833)}}
> Each error contains the same message except for the heartbeatTs field.
> This error repeats approximately every second. Not all Ignite instances 
> are affected. We have figured out that the problem is caused by the Nessus security 
> scanner. It walks over all boxes and runs some security checks. It looks like 
> one of the security checks (which can be an HTTP request) causes Ignite to fall into 
> an infinite loop of errors. We think that Nessus opens a connection to Ignite, 
> then the connection is closed by Nessus, but Ignite won't kill the socket.
>  
> We have already updated Ignite to the latest version.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20267) Infinite loop of SocketException

2024-01-16 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego updated IGNITE-20267:
-
Component/s: (was: thin client)

> Infinite loop of SocketException
> ---
>
> Key: IGNITE-20267
> URL: https://issues.apache.org/jira/browse/IGNITE-20267
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.15
>Reporter: Sebastian Fabisz
>Priority: Major
>
> Some of our Ignite instances are experiencing an infinite loop of the same error:
> {{ERROR 2023-07-27 08:26:44,876 
> [grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%] 
> o.a.i.s.c.t.TcpCommunicationSpi traceId="" spanId="" - Failed to process 
> selector key [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker 
> [super=AbstractNioClientWorker [idx=2, bytesRcvd=21528, bytesSent=15345, 
> bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker 
> [name=grid-nio-worker-tcp-comm-2, igniteInstanceName=TcpCommunicationSpi, 
> finished=false, heartbeatTs=1690442803865, hashCode=2102759141, 
> interrupted=false, 
> runner=grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%]]], 
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], 
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], 
> inRecovery=null, outRecovery=null, closeSocket=true, 
> outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@69a257d1,
>  super=GridNioSessionImpl [locAddr=\{removed}, rmtAddr=\{removed}, 
> createTime=1690249023154, closeTime=0, bytesSent=18, bytesRcvd=3, 
> bytesSent0=0, bytesRcvd0=0, sndSchedTime=1690442567813, 
> lastSndTime=1690249023154, lastRcvTime=1690442567813, readsPaused=false, 
> filterChain=FilterChain[filters=[GridNioCodecFilter 
> [parser=o.a.i.i.util.nio.GridDirectParser@1fff7116, directMode=true], 
> GridConnectionBytesVerifyFilter], accepted=true, markedForClose=false]]] 
> java.net.SocketException: Connection reset at 
> java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394)
>  at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:411) 
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processRead(GridNioServer.java:1351)
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeys(GridNioServer.java:2575)
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2271)
>  at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1910)
>  at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) at 
> java.base/java.lang.Thread.run(Thread.java:833)}}
> Each error contains the same message except for the heartbeatTs field.
> This error repeats approximately every second. Not all Ignite instances 
> are affected. We have figured out that the problem is caused by the Nessus security 
> scanner. It walks over all boxes and runs some security checks. It looks like 
> one of the security checks (which can be an HTTP request) causes Ignite to fall into 
> an infinite loop of errors. We think that Nessus opens a connection to Ignite, 
> then the connection is closed by Nessus, but Ignite won't kill the socket.
>  
> We have already updated Ignite to the latest version.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21231) HeapLockManager#locks method does not provide all acquired locks

2024-01-16 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-21231:


Assignee:  Kirill Sizov

> HeapLockManager#locks method does not provide all acquired locks
> -
>
> Key: IGNITE-21231
> URL: https://issues.apache.org/jira/browse/IGNITE-21231
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> If a lock key does not use the context id (_LockKey#contextId_ is _null_), it 
> does not appear in the iterator. Here is a test that demonstrates incorrect 
> behavior:
> {code:title=AbstractLockManagerTest.java}
> @Test
> public void simpleTest() {
> UUID txId1 = TestTransactionIds.newTransactionId();
> LockKey key = new LockKey(0);
> lockManager.acquire(txId1, key, S).join();
> assertTrue(lockManager.locks(txId1).hasNext());
> }
> {code}
> h3. Definition of done
> Despite the fact that the method is used only in tests, it has to work 
> correctly. All locks should be in the lock iterator.
> h3. Issue details.
> The real issue is way more serious than described in the motivation section.
> HeapLockManager contains HeapUnboundedLockManager:
> {code:title=HeapLockManager.java}
>   public HeapLockManager() {
> this(new WaitDieDeadlockPreventionPolicy(), SLOTS, SLOTS, new 
> HeapUnboundedLockManager());
> }
> {code}
> And this is what {{HeapLockManager.acquire}} looks like:
> {code:title=HeapLockManager.java}
>  public CompletableFuture acquire(UUID txId, LockKey lockKey, LockMode 
> lockMode) {
> if (lockKey.contextId() == null) { // Treat this lock as a hierarchy 
> lock.
> return parentLockManager.acquire(txId, lockKey, lockMode);
> }
>   ...
> // the rest of the body is omitted
> }
> {code}
> So if a lock key lacks a context id, it is forwarded to parentLockManager. 
> Unfortunately, this is the only place where the forwarding is used; other 
> methods, such as release, do not check the context id.
> Imagine the following code:
> {code}
>   LockKey key = new LockKey(0);
>   lockManager.acquire(txId1, key, S).join();
>   lockManager.release(txId1, key, S);
> {code}
> In this case the lock is present only in the parentLockManager and is not 
> released by {{lockManager.release}}.
> This test will fail:
> {code:title=HeapLockManagerTest.java}
> @Test
> public void simpleTest() {
>     LockKey key = new LockKey(0);
> 
>     UUID txId1 = TestTransactionIds.newTransactionId();
>     // The key has no context id, so the lock is taken in parentLockManager...
>     lockManager.acquire(txId1, key, X).join();
>     // ...which is why this manager looks empty even though an X lock is held.
>     assertTrue(lockManager.isEmpty());
>     // release() is not forwarded to parentLockManager, so the X lock is never released.
>     lockManager.release(txId1, key, X);
> 
>     UUID txId2 = TestTransactionIds.newTransactionId();
>     // This acquire conflicts with the leaked X lock and cannot complete.
>     lockManager.acquire(txId2, key, X).join();
> }
> {code}
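One possible direction for a fix, shown only as a fragment in the same style as the snippets above: forward the other operations (here, release) to the parent manager when the key has no context id, so that hierarchy locks taken via acquire can also be released. This is an illustration, not the actual HeapLockManager code.

{code:java}
public void release(UUID txId, LockKey lockKey, LockMode lockMode) {
    if (lockKey.contextId() == null) {
        // Mirror the acquire() path: hierarchy locks live in the parent lock manager.
        parentLockManager.release(txId, lockKey, lockMode);

        return;
    }

    // the rest of the body is omitted
}
{code}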



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21202) Use node ID instead of node name to identify primary node in client primary replica tracker

2024-01-16 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21202:

Fix Version/s: 3.0.0-beta2

> Use node ID instead of node name to identify primary node in client primary 
> replica tracker
> ---
>
> Key: IGNITE-21202
> URL: https://issues.apache.org/jira/browse/IGNITE-21202
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> h3. Motivation
> Recently, we changed the process of granting leases. This process uses the 
> node ID as a leaseholder identifier. The other components should also follow 
> this consistently.
> h3. Definition of done
>  # Here we are using the deprecated property, but we should use the 
> leaseholder ID.
> {code:java}
> updatePrimaryReplica(tablePartitionId, primaryReplicaEvent.startTime(), 
> primaryReplicaEvent.leaseholder()); {code}
>  # The leaseholder property should be removed from the event parameters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21202) Use node ID instead of node name to identify primary node in client primary replica tracker

2024-01-16 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21202:

Component/s: thin client

> Use node ID instead of node name to identify primary node in client primary 
> replica tracker
> ---
>
> Key: IGNITE-21202
> URL: https://issues.apache.org/jira/browse/IGNITE-21202
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> h3. Motivation
> Recently, we changed the process of granting leases. This process uses the 
> node ID as a leaseholder identifier. The other components should also follow 
> this consistently.
> h3. Definition of done
>  # Here we are using the deprecated property, but we should use the 
> leaseholder ID.
> {code:java}
> updatePrimaryReplica(tablePartitionId, primaryReplicaEvent.startTime(), 
> primaryReplicaEvent.leaseholder()); {code}
>  # The leaseholder property should be removed from the event parameters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21202) Use node ID instead of node name to identify primary node in client primary replica tracker

2024-01-16 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21202:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Use node ID instead of node name to identify primary node in client primary 
> replica tracker
> ---
>
> Key: IGNITE-21202
> URL: https://issues.apache.org/jira/browse/IGNITE-21202
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> h3. Motivation
> Recently, we changed the process of granting leases. This process uses the 
> node ID as a leaseholder identifier. The other components should also follow 
> this consistently.
> h3. Definition of done
>  # Here we are using the deprecated property, but we should use the 
> leaseholder ID.
> {code:java}
> updatePrimaryReplica(tablePartitionId, primaryReplicaEvent.startTime(), 
> primaryReplicaEvent.leaseholder()); {code}
>  # The leaseholder property should be removed from the event parameters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21202) Use node ID instead of node name to identify primary node in client primary replica tracker

2024-01-16 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807189#comment-17807189
 ] 

Pavel Tupitsyn commented on IGNITE-21202:
-

As discussed privately with [~v.pyatkov], there is no difference between using 
*nodeId* or *consistentId* for partition awareness:
* It is just a way to identify a node among active connections
* If a node restarts, the partition assignment change will be detected by 
*ClientPrimaryReplicaTracker* and sent to the client
* We use a "best effort" mechanism. The client can miss the primary replica in many 
cases (not all node addresses are known, the connection is not yet established, the 
assignment is out of date), and the server is still required to handle the request correctly.
* Partition awareness does not apply to explicit tx scenarios (all requests go 
to the tx coordinator)

Therefore, it does not make sense to rework all 3 clients to use different IDs. 
I'll remove "deprecated" and close as "won't fix".

> Use node ID instead of node name to identify primary node in client primary 
> replica tracker
> ---
>
> Key: IGNITE-21202
> URL: https://issues.apache.org/jira/browse/IGNITE-21202
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> h3. Motivation
> Recently, we changed the process of granting leases. This process uses the 
> node ID as a leaseholder identifier. The other components should also follow 
> this consistently.
> h3. Definition of done
>  # Here we are using the deprecated property, but we should use the 
> leaseholder ID.
> {code:java}
> updatePrimaryReplica(tablePartitionId, primaryReplicaEvent.startTime(), 
> primaryReplicaEvent.leaseholder()); {code}
>  # The leaseholder property should be removed from the event parameters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21171) Calcite engine. Field nullability flag lost for data types with precision or scale

2024-01-16 Thread Yury Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807192#comment-17807192
 ] 

Yury Gerzhedovich commented on IGNITE-21171:


[~alex_pl] Why is this an issue? By default, a column can contain null values.

> Calcite engine. Field nullability flag lost for data types with precision or 
> scale
> ---
>
> Key: IGNITE-21171
> URL: https://issues.apache.org/jira/browse/IGNITE-21171
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: ise
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Reproducer:
> {code:java}
> CREATE TABLE test(id INT PRIMARY KEY, val DECIMAL(10,2));
> INSERT INTO test(id, val) VALUES (0, NULL); {code}
> Fails with: {{Column 'VAL' has no default value and does not allow NULLs}}
> But it works if the {{val}} data type is {{DECIMAL}} or {{INT}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21202) Use node ID instead of node name to identify primary node in client primary replica tracker

2024-01-16 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807193#comment-17807193
 ] 

Vladislav Pyatkov commented on IGNITE-21202:


LGTM

> Use node ID instead of node name to identify primary node in client primary 
> replica tracker
> ---
>
> Key: IGNITE-21202
> URL: https://issues.apache.org/jira/browse/IGNITE-21202
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Recently, we changed the process of granting leases. This process uses the 
> node ID as a leaseholder identifier. The other components should also follow 
> this consistently.
> h3. Definition of done
>  # Here we are using the deprecated property, but we should use the 
> leaseholder ID.
> {code:java}
> updatePrimaryReplica(tablePartitionId, primaryReplicaEvent.startTime(), 
> primaryReplicaEvent.leaseholder()); {code}
>  # The leaseholder property should be removed from the event parameters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-20098) Move exception classes related to distribution zones to an appropriate package/module

2024-01-16 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov reassigned IGNITE-20098:
-

Assignee: Andrey Mashenkov

> Move exception classes related to distribution zones to an appropriate 
> package/module
> -
>
> Key: IGNITE-20098
> URL: https://issues.apache.org/jira/browse/IGNITE-20098
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vyacheslav Koptilin
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> The following exception classes were moved to the `core` module due to the 
> CatalogService feature:
>  - DistributionZoneNotFoundException
>  - DistributionZoneBindTableException
>  - DistributionZoneAlreadyExistsException
> It seems to me that this is not the correct way to handle dependencies 
> between the `catalog` and `distribution zone` modules. All exceptions should be 
> moved to one place; IMHO, it should be the `distribution zone` module.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21171) Calcite engine. Field nullability flag lost for data types with precision or scale

2024-01-16 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807199#comment-17807199
 ] 

Aleksey Plekhanov commented on IGNITE-21171:


[~jooger] it should allow null values, but it isn't allowed now for types 
with scale.

> Calcite engine. Field nullability flag lost for data types with precision or 
> scale
> ---
>
> Key: IGNITE-21171
> URL: https://issues.apache.org/jira/browse/IGNITE-21171
> Project: Ignite
>  Issue Type: Bug
>Reporter: Aleksey Plekhanov
>Assignee: Aleksey Plekhanov
>Priority: Major
>  Labels: ise
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Reproducer:
> {code:java}
> CREATE TABLE test(id INT PRIMARY KEY, val DECIMAL(10,2));
> INSERT INTO test(id, val) VALUES (0, NULL); {code}
> Fails with: {{Column 'VAL' has no default value and does not allow NULLs}}
> But it works if the {{val}} data type is {{DECIMAL}} or {{INT}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21249) Upgrade gradle to 8.5+

2024-01-16 Thread Artem Egorov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Egorov reassigned IGNITE-21249:
-

Assignee: Artem Egorov

> Upgrade gradle to 8.5+
> --
>
> Key: IGNITE-21249
> URL: https://issues.apache.org/jira/browse/IGNITE-21249
> Project: Ignite
>  Issue Type: Improvement
>  Components: build
>Reporter: Artem Egorov
>Assignee: Artem Egorov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, Gradle 7.6.2 is incompatible with JDK 21.
> To be able to build the project with JDK 21, Gradle must be updated to at least 
> version 8.5 (according to 
> https://docs.gradle.org/current/userguide/compatibility.html).
> It is also necessary to update some plugins and directives in the code so 
> that the project can be built.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21228) Data not available through JDBC after inserting using KV for a while

2024-01-16 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-21228:


Assignee: Vyacheslav Koptilin

> Data not available through JDBC after inserting using KV for a while
> 
>
> Key: IGNITE-21228
> URL: https://issues.apache.org/jira/browse/IGNITE-21228
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 3.0
>Reporter: Alexander Belyak
>Assignee: Vyacheslav Koptilin
>Priority: Major
>
> # Create a table using the Java API
>  # Insert 100 rows using the synchronous KV Java API
>  # Connect using JDBC and try to select the inserted data
> Expected result: all rows are available
> Actual result: no rows are available for a few hundred milliseconds 
> (700-1000 ms on my laptop).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21228) Data not available through JDBC after inserting using KV for a while

2024-01-16 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-21228:
-
Labels: ignite-3  (was: )

> Data not available through JDBC after inserting using KV for a while
> 
>
> Key: IGNITE-21228
> URL: https://issues.apache.org/jira/browse/IGNITE-21228
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 3.0
>Reporter: Alexander Belyak
>Assignee: Vyacheslav Koptilin
>Priority: Major
>  Labels: ignite-3
>
> # Create a table using the Java API
>  # Insert 100 rows using the synchronous KV Java API
>  # Connect using JDBC and try to select the inserted data
> Expected result: all rows are available
> Actual result: no rows are available for a few hundred milliseconds 
> (700-1000 ms on my laptop).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21228) Data not available through JDBC after inserting using KV for a while

2024-01-16 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin reassigned IGNITE-21228:


Assignee: (was: Vyacheslav Koptilin)

> Data not available through JDBC after inserting using KV for a while
> 
>
> Key: IGNITE-21228
> URL: https://issues.apache.org/jira/browse/IGNITE-21228
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Affects Versions: 3.0
>Reporter: Alexander Belyak
>Priority: Major
>  Labels: ignite-3
>
> # Create a table using the Java API
>  # Insert 100 rows using the synchronous KV Java API
>  # Connect using JDBC and try to select the inserted data
> Expected result: all rows are available
> Actual result: no rows are available for a few hundred milliseconds 
> (700-1000 ms on my laptop).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21253) Implement a counter for number of rebalancing tables inside the zone

2024-01-16 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-21253:
-
Labels: ignite-3  (was: )

> Implement a counter for number of rebalancing tables inside the zone 
> -
>
> Key: IGNITE-21253
> URL: https://issues.apache.org/jira/browse/IGNITE-21253
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to 
> [comment|https://issues.apache.org/jira/browse/IGNITE-18991?focusedCommentId=17806657&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17806657]
>  we need to switch zone assignments only when all of the zone's tables finish their 
> rebalances.
> To implement this behaviour, we need a metastorage counter of zone tables 
> that is decreased on every successful table rebalance.
> *Definition of done*
> - A counter of zone tables is created on rebalance start and decreased with 
> every successful table rebalance
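A minimal, self-contained sketch of the counter semantics described above, using an in-memory map as a stand-in; the real counter would live in the metastorage and be updated with conditional (compare-and-swap) operations.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class ZoneRebalanceCounterSketch {
    /** zoneId -> number of tables whose rebalance has not finished yet. */
    private final Map<Integer, AtomicInteger> pendingTablesPerZone = new ConcurrentHashMap<>();

    /** Called when a zone rebalance starts: remember how many tables have to finish. */
    void onZoneRebalanceStarted(int zoneId, int tableCount) {
        pendingTablesPerZone.put(zoneId, new AtomicInteger(tableCount));
    }

    /** Called on every successful table rebalance; returns true when zone assignments may be switched. */
    boolean onTableRebalanceDone(int zoneId) {
        AtomicInteger counter = pendingTablesPerZone.get(zoneId);

        return counter != null && counter.decrementAndGet() == 0;
    }
}
{code}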



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21254) Avoid new table creation in the zone with ongoing rebalance

2024-01-16 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-21254:
-
Labels: ignite-3  (was: )

> Avoid new table creation in the zone with ongoing rebalance
> ---
>
> Key: IGNITE-21254
> URL: https://issues.apache.org/jira/browse/IGNITE-21254
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to 
> [comment|https://issues.apache.org/jira/browse/IGNITE-18991?focusedCommentId=17806657&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17806657]
>  we need to avoid creating new tables in a zone that has an ongoing 
> rebalance.
> This guard is needed to avoid races at rebalance finish and to keep the 
> counter values from IGNITE-21253 valid.
> *Definition of done*
> - Avoid table creation if the zone's pending key is not empty.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21252) Partition RAFT client must use pending and stable assignments as a list of peers during rebalance

2024-01-16 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-21252:
-
Labels: ignite-3  (was: )

> Partition RAFT client must use pending and stable assignments as a list of 
> peers during rebalance
> -
>
> Key: IGNITE-21252
> URL: https://issues.apache.org/jira/browse/IGNITE-21252
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> *Motivation*
> According to 
> [comment|https://issues.apache.org/jira/browse/IGNITE-18991?focusedCommentId=17806657&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17806657]
>  during an ongoing rebalance we need to use the union of pending and stable 
> assignments as the list of peers for partition RAFT clients.
> This strategy will protect us in the case when some tables of the zone have 
> already been rebalanced and should use the new stable assignments, while others 
> still have an ongoing rebalance.
> *Definition of done*
> - On rebalance start, the RAFT clients for all table partitions of this 
> zone are updated to use pending+stable peers
> - When the rebalance is done, the RAFT clients switch to the stable assignments only
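A small, self-contained sketch of the peer-selection rule described above, using plain sets of node names; the real code operates on assignment objects and Raft group configuration, so all names here are placeholders.

{code:java}
import java.util.HashSet;
import java.util.Set;

class RebalancePeersSketch {
    /** During an ongoing rebalance, the partition RAFT client should see the union of stable and pending peers. */
    static Set<String> peersFor(Set<String> stableAssignments, Set<String> pendingAssignments) {
        if (pendingAssignments.isEmpty()) {
            // No rebalance in progress: use the stable assignments only.
            return stableAssignments;
        }

        Set<String> peers = new HashSet<>(stableAssignments);
        peers.addAll(pendingAssignments);

        return peers;
    }
}
{code}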



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21273) Document newly-added thread pools in README files

2024-01-16 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-21273:
--

 Summary: Document newly-added thread pools in README files
 Key: IGNITE-21273
 URL: https://issues.apache.org/jira/browse/IGNITE-21273
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21273) Document newly-added thread pools in README files

2024-01-16 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy reassigned IGNITE-21273:
--

Assignee: Roman Puchkovskiy

> Document newly-added thread pools in README files
> -
>
> Key: IGNITE-21273
> URL: https://issues.apache.org/jira/browse/IGNITE-21273
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21202) Use node ID instead of node name to identify primary node in client primary replica tracker

2024-01-16 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807230#comment-17807230
 ] 

Pavel Tupitsyn commented on IGNITE-21202:
-

Un-deprecate merged to main: b768928fed89c6529aba6c64391d6c2e35648cdf

> Use node ID instead of node name to identify primary node in client primary 
> replica tracker
> ---
>
> Key: IGNITE-21202
> URL: https://issues.apache.org/jira/browse/IGNITE-21202
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Vladislav Pyatkov
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
> Recently, we changed the process of granting leases. This process uses the 
> node ID as a leaseholder identifier. The other components should also follow 
> this consistently.
> h3. Definition of done
>  # Here we are using the deprecated property, but we should use the 
> leaseholder ID.
> {code:java}
> updatePrimaryReplica(tablePartitionId, primaryReplicaEvent.startTime(), 
> primaryReplicaEvent.leaseholder()); {code}
>  # The leaseholder property should be removed from the event parameters.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21261) Fix exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater

2024-01-16 Thread Alexey Gidaspov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806816#comment-17806816
 ] 

Alexey Gidaspov edited comment on IGNITE-21261 at 1/16/24 1:50 PM:
---

https://tc2.sbt-ignite-dev.ru/viewLog.html?buildId=7706734&buildTypeId=IgniteExtensions_Tests_RunAllTests&tab=dependencies#_expand=block_bt1193-7706734&hpos=0&vpos=1580


was (Author: agidaspov):
https://tc2.sbt-ignite-dev.ru/viewLog.html?tab=dependencies&depsTab=snapshot&buildId=7705635&buildTypeId=IgniteExtensions_Tests_RunAllTests&fromSakuraUI=true#_expand=block_bt1193-7705635&hpos=&vpos=

> Fix exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater
> -
>
> Key: IGNITE-21261
> URL: https://issues.apache.org/jira/browse/IGNITE-21261
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Gidaspov
>Assignee: Alexey Gidaspov
>Priority: Major
>  Labels: ise
>
> The 'Unknown topic' exception is never thrown in KafkaToIgniteMetadataUpdater 
> since the Kafka library was upgraded to 3.4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21261) Fix exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater

2024-01-16 Thread Taras Ledkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807268#comment-17807268
 ] 

Taras Ledkov commented on IGNITE-21261:
---

[~agidaspov], the patch is OK with me. Merged. Thanks for the contribution.

> Fix exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater
> -
>
> Key: IGNITE-21261
> URL: https://issues.apache.org/jira/browse/IGNITE-21261
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Gidaspov
>Assignee: Alexey Gidaspov
>Priority: Major
>  Labels: ise
>
> The exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater 
> since the Kafka library was upgraded to 3.4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21273) Document newly-added thread pools in README files

2024-01-16 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21273:
---
Description: 
* the partition-operations thread pool was added recently
 * the inbound and outbound message threads are not documented either

They should be mentioned in the README files.

> Document newly-added thread pools in README files
> -
>
> Key: IGNITE-21273
> URL: https://issues.apache.org/jira/browse/IGNITE-21273
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * the partition-operations thread pool was added recently
>  * the inbound and outbound message threads are not documented either
> They should be mentioned in the README files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-18853) Introduce thread types to thread pools

2024-01-16 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-18853:
---
Labels: ignite-3 storage-threading threading  (was: ignite-3 
storage-threading)

> Introduce thread types to thread pools
> --
>
> Key: IGNITE-18853
> URL: https://issues.apache.org/jira/browse/IGNITE-18853
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3, storage-threading, threading
>
> Like in Ignite 2.x, we need to have custom thread classes with custom 
> properties.
> Currently, custom thread types are only used in networking, presumably for 
> event loops. That's not enough, and here's why.
> Given the wide adoption of async code, developers struggle to understand 
> which thread executes the actual operation. For example, a "thenCompose" or 
> "whenComplete" closure is executed in whatever thread completes the future, 
> and quite often it's not the thread that we want.
> Also, we shouldn't use the default fork-join pool; we should force most 
> operations onto our own pools.
> To make everything clearer, we have to mark threads with at least the 
> following categories:
>  * can perform storage reads
>  * can perform storage writes
>  * can perform network IO operations
>  * can be safely blocked
>  * etc.
> Once we know for sure that a thread fits the operation, we can execute it. 
> Ideally, that should be an assertion rather than runtime logic.
> This will also help us find existing bugs and bottlenecks.
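
A minimal sketch of what such thread categories could look like, assuming a custom thread class and an assertion helper (names are illustrative, not the actual Ignite 3 classes):

{code:java}
import java.util.Set;

// Illustrative only: threads carry the categories listed above and an assertion
// verifies that the current thread is allowed to perform a given operation.
enum ThreadOperation {
    STORAGE_READ, STORAGE_WRITE, NETWORK_IO, BLOCKING
}

class TaggedThread extends Thread {
    private final Set<ThreadOperation> allowedOperations;

    TaggedThread(Runnable target, String name, Set<ThreadOperation> allowedOperations) {
        super(target, name);
        this.allowedOperations = Set.copyOf(allowedOperations);
    }

    boolean allows(ThreadOperation operation) {
        return allowedOperations.contains(operation);
    }

    /** Fails (with assertions enabled) when the current thread must not perform the operation. */
    static void assertCurrentThreadAllows(ThreadOperation operation) {
        Thread current = Thread.currentThread();

        assert !(current instanceof TaggedThread) || ((TaggedThread) current).allows(operation)
                : current.getName() + " must not perform " + operation;
    }
}
{code}

A storage write path could then start with {{TaggedThread.assertCurrentThreadAllows(ThreadOperation.STORAGE_WRITE)}}, so a wrongly scheduled continuation fails an assertion in tests instead of misbehaving silently.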



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21151) MVCC caching removal

2024-01-16 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807389#comment-17807389
 ] 

Ignite TC Bot commented on IGNITE-21151:


{panel:title=Branch: [pull/11140/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11140/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7705754&buildTypeId=IgniteTests24Java8_RunAll]

> MVCC caching removal
> 
>
> Key: IGNITE-21151
> URL: https://issues.apache.org/jira/browse/IGNITE-21151
> Project: Ignite
>  Issue Type: Sub-task
>  Components: mvcc
>Reporter: Ilya Shishkov
>Assignee: Ilya Shishkov
>Priority: Minor
>  Labels: ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remove MvccCachingManager and corresponding code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21274) Add docs on implementing events in java thin client

2024-01-16 Thread Julia Bakulina (Jira)
Julia Bakulina created IGNITE-21274:
---

 Summary: Add docs on implementing events in java thin client
 Key: IGNITE-21274
 URL: https://issues.apache.org/jira/browse/IGNITE-21274
 Project: Ignite
  Issue Type: Improvement
  Components: documentation, thin client
Reporter: Julia Bakulina






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21274) Add docs on implementing events in java thin client

2024-01-16 Thread Julia Bakulina (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julia Bakulina updated IGNITE-21274:

Fix Version/s: 2.17

> Add docs on implementing events in java thin client
> ---
>
> Key: IGNITE-21274
> URL: https://issues.apache.org/jira/browse/IGNITE-21274
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation, thin client
>Reporter: Julia Bakulina
>Priority: Major
>  Labels: important, ise
> Fix For: 2.17
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21274) Add docs on implementing events in java thin client

2024-01-16 Thread Julia Bakulina (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julia Bakulina updated IGNITE-21274:

Description: Documentation is required for the Java thin client events feature.

> Add docs on implementing events in java thin client
> ---
>
> Key: IGNITE-21274
> URL: https://issues.apache.org/jira/browse/IGNITE-21274
> Project: Ignite
>  Issue Type: Improvement
>  Components: documentation, thin client
>Reporter: Julia Bakulina
>Priority: Major
>  Labels: important, ise
> Fix For: 2.17
>
>
> Documentation is required for the Java thin client events feature.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21275) Up to 5x difference in performance between SQL API and key-value API

2024-01-16 Thread Ivan Artiukhov (Jira)
Ivan Artiukhov created IGNITE-21275:
---

 Summary: Up to 5x difference in performance between SQL API and 
key-value API
 Key: IGNITE-21275
 URL: https://issues.apache.org/jira/browse/IGNITE-21275
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Ivan Artiukhov
 Attachments: 1240-sql-insert.png, 1240-sql-select.png, 
1242-kv-get.png, 1242-kv-put.png

AI3 rev. ca21384f85e8c779258cb3b21f54b6c30a7071e4 (Jan 16 2024)

Compare two benchmark runs:
 * a benchmark which uses KeyValueView to perform single {{put()}} and 
{{{}get(){}}}: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
 
 * a benchmark which performs {{INSERT}} and {{SELECT}} via {{Statement}} 
objects by using Ignite SQL API: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
 

h1. Run 1, PUT/INSERT

Insert N unique entries into a single-node cluster from a single-threaded 
client. 
h2. KeyValueView

N = 25
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.37 -s {code}
!1242-kv-put.png!
h2. SQL API

N = 15000

 
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.47 -s {code}
!1240-sql-insert.png!

 
h1. Run 2, GET/SELECT

Get N entries inserted on Run 1.
h2. KeyValueView

N = 25

 
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=25 -p recordcount=25 -p warmupops=5 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.37 -s{code}
!1242-kv-get.png!

 
h2. SQL API

N = 15
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.47 -s {code}
!1240-sql-select.png!
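
For reference, a minimal sketch of the two code paths being compared (not the YCSB bindings themselves; the table and column names are illustrative, the table is assumed to exist, and the client API shape follows the 3.0 beta and may differ in detail):

{code:java}
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.sql.Session;
import org.apache.ignite.table.KeyValueView;
import org.apache.ignite.table.Tuple;

public class KvVsSqlSketch {
    public static void main(String[] args) {
        try (IgniteClient client = IgniteClient.builder().addresses("192.168.1.37:10800").build()) {
            // Key-value path: a single put/get goes straight through the table API.
            KeyValueView<Tuple, Tuple> kv = client.tables().table("usertable").keyValueView();

            kv.put(null, Tuple.create().set("ycsb_key", "user1"),
                    Tuple.create().set("field0", "value0"));
            Tuple row = kv.get(null, Tuple.create().set("ycsb_key", "user1"));
            System.out.println(row);

            // SQL path: the same logical operations additionally pass through parsing,
            // validation and planning of a statement.
            try (Session session = client.sql().createSession()) {
                session.execute(null, "INSERT INTO usertable (ycsb_key, field0) VALUES (?, ?)",
                        "user2", "value0");
                session.execute(null, "SELECT field0 FROM usertable WHERE ycsb_key = ?", "user1");
            }
        }
    }
}
{code}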

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21275) Up to 5x difference in performance between SQL API and key-value API

2024-01-16 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-21275:

Attachment: 1241-jdbc-insert.png

> Up to 5x difference in performance between SQL API and key-value API
> 
>
> Key: IGNITE-21275
> URL: https://issues.apache.org/jira/browse/IGNITE-21275
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3
> Attachments: 1240-sql-insert.png, 1240-sql-select.png, 
> 1241-jdbc-insert.png, 1242-kv-get.png, 1242-kv-put.png
>
>
> AI3 rev. ca21384f85e8c779258cb3b21f54b6c30a7071e4 (Jan 16 2024)
> Compare two benchmark runs:
>  * a benchmark which uses KeyValueView to perform single {{put()}} and 
> {{{}get(){}}}: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
>  
>  * a benchmark which performs {{INSERT}} and {{SELECT}} via {{Statement}} 
> objects by using Ignite SQL API: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> h1. Run 1, PUT/INSERT
> Insert N unique entries into a single-node cluster from a single-threaded 
> client. 
> h2. KeyValueView
> N = 25
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.37 -s 
> {code}
> !1242-kv-put.png!
> h2. SQL API
> N = 15000
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.47 -s 
> {code}
> !1240-sql-insert.png!
>  
> h1. Run 2, GET/SELECT
> Get N entries inserted on Run 1.
> h2. KeyValueView
> N = 25
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=25 -p recordcount=25 -p warmupops=5 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.37 -s{code}
> !1242-kv-get.png!
>  
> h2. SQL API
> N = 15
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.47 -s {code}
> !1240-sql-select.png!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21275) Up to 5x difference in performance between SQL API and key-value API

2024-01-16 Thread Ivan Artiukhov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807450#comment-17807450
 ] 

Ivan Artiukhov commented on IGNITE-21275:
-

JDBC benchmark: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteJdbcClient.java]
 
h2. JDBC INSERT
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteJdbcClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.19 -s {code}
!1241-jdbc-insert.png!
h2. JDBC SELECT
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteJdbcClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.19 -s {code}
!1241-jdbc-select.png!
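
For context, a rough sketch of what the JDBC benchmark does per operation (plain JDBC; the table and column names are illustrative, and the linked YCSB binding is the authoritative source):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // The Ignite 3 JDBC driver is assumed to be on the classpath.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.1.19:10800")) {
            // INSERT path.
            try (PreparedStatement insert =
                    conn.prepareStatement("INSERT INTO usertable (ycsb_key, field0) VALUES (?, ?)")) {
                insert.setString(1, "user1");
                insert.setString(2, "value0");
                insert.executeUpdate();
            }
            // SELECT path.
            try (PreparedStatement select =
                    conn.prepareStatement("SELECT field0 FROM usertable WHERE ycsb_key = ?")) {
                select.setString(1, "user1");
                try (ResultSet rs = select.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("field0"));
                    }
                }
            }
        }
    }
}
{code}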

 

> Up to 5x difference in performance between SQL API and key-value API
> 
>
> Key: IGNITE-21275
> URL: https://issues.apache.org/jira/browse/IGNITE-21275
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3
> Attachments: 1240-sql-insert.png, 1240-sql-select.png, 
> 1241-jdbc-insert.png, 1241-jdbc-select.png, 1242-kv-get.png, 1242-kv-put.png
>
>
> AI3 rev. ca21384f85e8c779258cb3b21f54b6c30a7071e4 (Jan 16 2024)
> Compare two benchmark runs:
>  * a benchmark which uses KeyValueView to perform single {{put()}} and 
> {{{}get(){}}}: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
>  
>  * a benchmark which performs {{INSERT}} and {{SELECT}} via {{Statement}} 
> objects by using Ignite SQL API: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> h1. Run 1, PUT/INSERT
> Insert N unique entries into a single-node cluster from a single-threaded 
> client. 
> h2. KeyValueView
> N = 25
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.37 -s 
> {code}
> !1242-kv-put.png!
> h2. SQL API
> N = 15000
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.47 -s 
> {code}
> !1240-sql-insert.png!
>  
> h1. Run 2, GET/SELECT
> Get N entries inserted on Run 1.
> h2. KeyValueView
> N = 25
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=25 -p recordcount=25 -p warmupops=5 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.37 -s{code}
> !1242-kv-get.png!
>  
> h2. SQL API
> N = 15
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.47 -s {code}
> !1240-sql-select.png!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21275) Up to 5x difference in performance between SQL API and key-value API

2024-01-16 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-21275:

Attachment: 1241-jdbc-select.png

> Up to 5x difference in performance between SQL API and key-value API
> 
>
> Key: IGNITE-21275
> URL: https://issues.apache.org/jira/browse/IGNITE-21275
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3
> Attachments: 1240-sql-insert.png, 1240-sql-select.png, 
> 1241-jdbc-insert.png, 1241-jdbc-select.png, 1242-kv-get.png, 1242-kv-put.png
>
>
> AI3 rev. ca21384f85e8c779258cb3b21f54b6c30a7071e4 (Jan 16 2024)
> Compare two benchmark runs:
>  * a benchmark which uses KeyValueView to perform single {{put()}} and 
> {{{}get(){}}}: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
>  
>  * a benchmark which performs {{INSERT}} and {{SELECT}} via {{Statement}} 
> objects by using Ignite SQL API: 
> [https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
>  
> h1. Run 1, PUT/INSERT
> Insert N unique entries into a single-node cluster from a single-threaded 
> client. 
> h2. KeyValueView
> N = 25
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.37 -s 
> {code}
> !1242-kv-put.png!
> h2. SQL API
> N = 15000
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
> measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.47 -s 
> {code}
> !1240-sql-insert.png!
>  
> h1. Run 2, GET/SELECT
> Get N entries inserted on Run 1.
> h2. KeyValueView
> N = 25
>  
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=25 -p recordcount=25 -p warmupops=5 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.37 -s{code}
> !1242-kv-get.png!
>  
> h2. SQL API
> N = 15
> {code:java}
> Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
> /opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
> operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
> dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
> hosts=192.168.1.47 -s {code}
> !1240-sql-select.png!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21275) Up to 5x difference in performance between SQL API and key-value API

2024-01-16 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-21275:

Description: 
h1. Build under test

AI3 rev. ca21384f85e8c779258cb3b21f54b6c30a7071e4 (Jan 16 2024)
h1. Setup

A single-node Ignite 3 cluster with default config. 
h1. Benchmark

Compare two benchmark runs:
 * a benchmark which uses KeyValueView to perform single {{put()}} and 
{{{}get(){}}}: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
 
 * a benchmark which performs {{INSERT}} and {{SELECT}} via {{Statement}} 
objects by using Ignite SQL API: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
 

h1. Run 1, PUT/INSERT

Insert N unique entries into a single-node cluster from a single-threaded 
client. 
h2. KeyValueView

N = 25
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.37 -s {code}
!1242-kv-put.png!
h2. SQL API

N = 15000

 
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.47 -s {code}
!1240-sql-insert.png!

 
h1. Run 2, GET/SELECT

Get N entries inserted on Run 1.
h2. KeyValueView

N = 25

 
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=25 -p recordcount=25 -p warmupops=5 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.37 -s{code}
!1242-kv-get.png!

 
h2. SQL API

N = 15
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.47 -s {code}
!1240-sql-select.png!

 

  was:
AI3 rev. ca21384f85e8c779258cb3b21f54b6c30a7071e4 (Jan 16 2024)

Compare two benchmark runs:
 * a benchmark which uses KeyValueView to perform single {{put()}} and 
{{{}get(){}}}: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
 
 * a benchmark which performs {{INSERT}} and {{SELECT}} via {{Statement}} 
objects by using Ignite SQL API: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.11/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteSqlClient.java]
 

h1. Run 1, PUT/INSERT

Insert N unique entries into a single-node cluster from a single-threaded 
client. 
h2. KeyValueView

N = 25
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.37 -s {code}
!1242-kv-put.png!
h2. SQL API

N = 15000

 
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=15 -p warmupops=15000 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.47 -s {code}
!1240-sql-insert.png!

 
h1. Run 2, GET/SELECT

Get N entries inserted on Run 1.
h2. KeyValueView

N = 25

 
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=25 -p recordcount=25 -p warmupops=5 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.37 -s{code}
!1242-kv-get.png!

 
h2. SQL API

N = 15
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteSqlClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=15 -p recordcount=15 -p warmupops=15000 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.47 -s {code}
!1240-sql-select.png!

 


> Up to 5x difference in performance between SQL API and key-value API
> 
>
> Key: IGNITE-21275
> URL: https://issues.apache.org/jira/browse/IGNITE-21275
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Major
>  Labels: ignite-3
> Attachments: 1240-sql-insert.png, 1240-sql-select.png, 
> 1241-jdbc

[jira] [Updated] (IGNITE-20881) Add ability to enforce an index to be used

2024-01-16 Thread Andrey Novikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov updated IGNITE-20881:

Description: 
*What to do:*
 * Extend the criteria API with an index hint

 * Generate SQL with the hint

 * Add a test that verifies that the passed index is used in the query

The Ignite 2.x documentation can be used as a reference: 
[https://ignite.apache.org/docs/latest/SQL/sql-calcite#force_index-no_index]

  was:
*What to do:*
 * Extend criteria api with index hint

 * Generate SQL with hint

 * Add a test that verifies that the passed index is used in the query

As reference ignite 2.x implementation can be used 
[https://ignite.apache.org/docs/latest/SQL/sql-calcite#force_index-no_index]


> Add ability to enforce an index to be used
> --
>
> Key: IGNITE-20881
> URL: https://issues.apache.org/jira/browse/IGNITE-20881
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Novikov
>Assignee: Andrey Novikov
>Priority: Major
>  Labels: ignite-3
>
> *What to do:*
>  * Extend the criteria API with an index hint
>  * Generate SQL with the hint
>  * Add a test that verifies that the passed index is used in the query
> The Ignite 2.x documentation can be used as a reference: 
> [https://ignite.apache.org/docs/latest/SQL/sql-calcite#force_index-no_index]
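
For illustration, the kind of SQL the criteria API could generate, using the FORCE_INDEX hint syntax from the linked Ignite 2.x documentation (table and index names are made up):

{code:java}
// Sketch only: SQL that the criteria API could emit when an index hint is requested.
SELECT /*+ FORCE_INDEX(PERSON_NAME_IDX) */ *
FROM person
WHERE name = ?
{code}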



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20881) Add ability to enforce an index to be used

2024-01-16 Thread Andrey Novikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov updated IGNITE-20881:

Description: 
*What to do:*
 * Extend the criteria API with an index hint

 * Generate SQL with the hint

 * Add a test that verifies that the passed index is used in the query

The Ignite 2.x implementation can be used as a reference: 
[https://ignite.apache.org/docs/latest/SQL/sql-calcite#force_index-no_index]

> Add ability to enforce an index to be used
> --
>
> Key: IGNITE-20881
> URL: https://issues.apache.org/jira/browse/IGNITE-20881
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Novikov
>Assignee: Andrey Novikov
>Priority: Major
>  Labels: ignite-3
>
> *What to do:*
>  * Extend the criteria API with an index hint
>  * Generate SQL with the hint
>  * Add a test that verifies that the passed index is used in the query
> The Ignite 2.x implementation can be used as a reference: 
> [https://ignite.apache.org/docs/latest/SQL/sql-calcite#force_index-no_index]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21151) MVCC caching removal

2024-01-16 Thread Ilya Shishkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807579#comment-17807579
 ] 

Ilya Shishkov commented on IGNITE-21151:


[~av], can you take a look, please?

> MVCC caching removal
> 
>
> Key: IGNITE-21151
> URL: https://issues.apache.org/jira/browse/IGNITE-21151
> Project: Ignite
>  Issue Type: Sub-task
>  Components: mvcc
>Reporter: Ilya Shishkov
>Assignee: Ilya Shishkov
>Priority: Minor
>  Labels: ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remove MvccCachingManager and corresponding code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-18853) Introduce thread types to thread pools

2024-01-16 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy reassigned IGNITE-18853:
--

Assignee: Roman Puchkovskiy

> Introduce thread types to thread pools
> --
>
> Key: IGNITE-18853
> URL: https://issues.apache.org/jira/browse/IGNITE-18853
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3, storage-threading, threading
>
> Like in Ignite 2.x, we need to have custom thread classes with custom 
> properties.
> Currently, custom thread types are only used in networking, presumably for 
> event loops. That's not enough, and here's why.
> Given the wide adoption of async code, developers struggle to understand 
> which thread executes the actual operation. For example, a "thenCompose" or 
> "whenComplete" closure is executed in whatever thread completes the future, 
> and quite often it's not the thread that we want.
> Also, we shouldn't use the default fork-join pool; we should force most 
> operations onto our own pools.
> To make everything clearer, we have to mark threads with at least the 
> following categories:
>  * can perform storage reads
>  * can perform storage writes
>  * can perform network IO operations
>  * can be safely blocked
>  * etc.
> Once we know for sure that a thread fits the operation, we can execute it. 
> Ideally, that should be an assertion rather than runtime logic.
> This will also help us find existing bugs and bottlenecks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21276) Check nodeId within ensureReplicaIsPrimary

2024-01-16 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-21276:


 Summary: Check nodeId within ensureReplicaIsPrimary
 Key: IGNITE-21276
 URL: https://issues.apache.org/jira/browse/IGNITE-21276
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Extract partition pruning information from scan operations

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Affects Version/s: 3.0.0-beta2

> Sql. Partition pruning. Extract partition pruning information from scan 
> operations
> --
>
> Key: IGNITE-21277
> URL: https://issues.apache.org/jira/browse/IGNITE-21277
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> In order to prune unnecessary partitions, we need to obtain information about 
> the possible "values" of colocation key columns from the filter expressions 
> of every scan operator (such as IgniteTableScan, IgniteIndexScan and 
> IgniteSystemViewScan - or simply all subclasses of 
> ProjectableFilterableTableScan) prior to statement execution. This can be 
> accomplished by traversing the expression tree of the scan's filter and 
> collecting expressions with colocation key columns (this data is called 
> partition pruning metadata for simplicity).
> 1. Implement a component that takes a physical plan, analyses the filter 
> expression of every scan operator and creates (if possible) an expression 
> that includes all colocation key columns. (The PartitionExtractor from the 
> patch can be used as a reference implementation.)
> Basic example:
>  
> {code:java}
> Statement: 
> SELECT * FROM t WHERE pk = 7 OR pk = 42
> Partition metadata: 
> t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
> primary key, || denotes OR operation 
> {code}
>  
> If some colocation key columns are missing from the filter, then partition 
> pruning is not possible for such an operation.
> Expression types to analyze:
>  * AND
>  * EQUALS
>  * IS_FALSE
>  * IS_NOT_DISTINCT_FROM
>  * IS_NOT_FALSE
>  * IS_NOT_TRUE
>  * IS_TRUE
>  * NOT
>  * OR
>  * SEARCH (operation that tests whether a value is included in a certain 
> range)
> 2. Update QueryPlan to include partition pruning metadata for every scan 
> operator (source_id = ).
> —
> *Additional examples - partition pruning is possible*
> Dynamic parameters:
> {code:java}
> SELECT * FROM t WHERE pk = ?1 
> Partition pruning metadata: t = [ pk = ?1 ]
> {code}
> Colocation columns reside inside a nested expression:
> {code:java}
> SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
> Partition pruning metadata: t = [ pk = 2 ]
> {code}
> Multiple keys:
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
> Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
> {code}
> Complex expression with multiple keys:
> {code:java}
> SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
> col_col2 = 100)
> Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 
> = 4, col_col2 = 100) ]
> {code}
> Multiple tables, assuming that filter b_id = 42 is pushed into scan b, 
> because a_id = b_id:
> {code:java}
> SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
> Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
> {code}
> ---
> *Partition pruning is not possible*
> Columns named col* are not part of colocation key:
> {code:java}
> SELECT * FROM t WHERE col1 = 10 
> Partition pruning metadata: [] // (empty) because filter does not use 
> colocation key columns.
> {code}
> {code:java}
> SELECT * FROM t WHERE col1 = col2 OR pk = 42 
> // Pruning is not possible because we need to scan all partitions to figure 
> out which tuples have ‘col1 = col2’
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
> // Although first expression uses all colocation key columns the second one 
> only uses some.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
> // Empty partition pruning metadata:  need to scan all partitions to figure 
> out which tuples have col_c1 = 1 OR col_c2 = 2.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
> // Empty partition pruning metadata: need to scan all partitions to figure 
> out which tuples have ‘col_col1 = col_col2’
> Partition pruning metadata: [] 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21277) Sql. Partition pruning. Extract partition pruning information from scan operations

2024-01-16 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-21277:
-

 Summary: Sql. Partition pruning. Extract partition pruning 
information from scan operations
 Key: IGNITE-21277
 URL: https://issues.apache.org/jira/browse/IGNITE-21277
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Maksim Zhuravkov


In order to prune unnecessary partitions, we need to obtain information about 
the possible "values" of colocation key columns from the filter expressions 
of every scan operator (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply all subclasses of 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing the expression tree of the scan's filter and 
collecting expressions with colocation key columns (this data is called 
partition pruning metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expression of every scan operator and creates (if possible) an expression that 
includes all colocation key columns. (The PartitionExtractor from the patch can 
be used as a reference implementation.)

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from the filter, then partition 
pruning is not possible for such an operation.

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
Partition pruning metadata: [] // (empty) because filter does not use 
colocation key columns.
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// Pruning is not possible because we need to scan all partitions to figure out 
which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although first expression uses all colocation key columns the second one 
only uses some.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// Empty partition pruning metadata:  need to scan all partitions to figure out 
which tuples have col_c1 = 1 OR col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// Empty partition pruning metadata: need to scan all partitions to figure out 
which tuples have ‘col_col1 = col_col2’
Partition pruning metadata: [] 
{code}
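
As a rough illustration of the extraction step described above, here is a simplified sketch over Calcite row expressions (it is not the PartitionExtractor from the referenced patch and only handles EQUALS, AND and OR; SEARCH, IS_TRUE and the other listed kinds are omitted for brevity):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.calcite.rex.RexCall;
import org.apache.calcite.rex.RexDynamicParam;
import org.apache.calcite.rex.RexInputRef;
import org.apache.calcite.rex.RexLiteral;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.sql.SqlKind;

final class PruningMetadataSketch {
    /**
     * Returns one "colocation column index -> pinned value" map per OR branch of the
     * scan filter, or an empty list when pruning is impossible.
     */
    static List<Map<Integer, RexNode>> extract(RexNode filter, Set<Integer> colocationColumns) {
        List<RexNode> branches = filter.getKind() == SqlKind.OR
                ? ((RexCall) filter).getOperands()
                : List.of(filter);

        List<Map<Integer, RexNode>> metadata = new ArrayList<>();

        for (RexNode branch : branches) {
            Map<Integer, RexNode> bindings = new HashMap<>();
            collectEquals(branch, colocationColumns, bindings);

            // Every OR branch must pin all colocation columns, otherwise give up.
            if (!bindings.keySet().containsAll(colocationColumns)) {
                return List.of();
            }
            metadata.add(bindings);
        }
        return metadata;
    }

    private static void collectEquals(RexNode node, Set<Integer> cols, Map<Integer, RexNode> out) {
        if (node.getKind() == SqlKind.AND) {
            for (RexNode operand : ((RexCall) node).getOperands()) {
                collectEquals(operand, cols, out);
            }
        } else if (node.getKind() == SqlKind.EQUALS) {
            List<RexNode> operands = ((RexCall) node).getOperands();
            RexNode column = operands.get(0);
            RexNode value = operands.get(1);

            // Accept "column = literal" and "column = ?N" for colocation key columns.
            if (column instanceof RexInputRef
                    && (value instanceof RexLiteral || value instanceof RexDynamicParam)
                    && cols.contains(((RexInputRef) column).getIndex())) {
                out.put(((RexInputRef) column).getIndex(), value);
            }
        }
    }
}
{code}

The resulting metadata would then be recorded per scan source_id in the QueryPlan, matching item 2 above.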




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Extract partition pruning information from scan operations

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Description: 
In order to prune unnecessary partitions, we need to obtain information about 
the possible "values" of colocation key columns from the filter expressions 
of every scan operator (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply all subclasses of 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing the expression tree of the scan's filter and 
collecting expressions with colocation key columns (this data is called 
partition pruning metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expression of every scan operator and creates (if possible) an expression that 
includes all colocation key columns. (The PartitionExtractor from the patch can 
be used as a reference implementation.)

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from the filter, then partition 
pruning is not possible for such an operation.

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
Partition pruning metadata: [] // (empty) because filter does not use 
colocation key columns.
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// Pruning is not possible because we need to scan all partitions to figure out 
which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although first expression uses all colocation key columns the second one 
only uses some.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// Empty partition pruning metadata:  need to scan all partitions to figure out 
which tuples have col_c1 = 1 OR col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// Empty partition pruning metadata: need to scan all partitions to figure out 
which tuples have ‘col_col1 = col_col2’
Partition pruning metadata: [] 
{code}


  was:
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operators (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply subclasses of all 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing an expression tree of scan's filter and collecting 
expressions with colocation key columns (This data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan and analyses filter 
expressions of every scan operator and creates (if possible) an expression that 
includes all colocated columns. (The PartitionExtractor from patch can be used 
a reference implementation).

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from then filter, then partition 
pruning is not possible for such operation.

Expression types to analyze:
 * AND

[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Extract partition pruning information from scan operations

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Description: 
In order to prune unnecessary partitions, we need to obtain information about 
the possible "values" of colocation key columns from the filter expressions 
of every scan operator (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply all subclasses of 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing the expression tree of the scan's filter and 
collecting expressions with colocation key columns (this data is called 
partition pruning metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expression of every scan operator and creates (if possible) an expression that 
includes all colocation key columns. (The PartitionExtractor from the patch can 
be used as a reference implementation.)

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from the filter, then partition 
pruning is not possible for such an operation.

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Additional examples - partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
Partition pruning metadata: [] // (empty) because filter does not use 
colocation key columns.
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// Pruning is not possible because we need to scan all partitions to figure out 
which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although first expression uses all colocation key columns the second one 
only uses some.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// Empty partition pruning metadata:  need to scan all partitions to figure out 
which tuples have col_c1 = 1 OR col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// Empty partition pruning metadata: need to scan all partitions to figure out 
which tuples have ‘col_col1 = col_col2’
Partition pruning metadata: [] 
{code}


  was:
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operators (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply subclasses of all 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing an expression tree of scan's filter and collecting 
expressions with colocation key columns (This data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan and analyses filter 
expressions of every scan operator and creates (if possible) an expression that 
includes all colocated columns. (The PartitionExtractor from patch can be used 
a reference implementation).

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from then filter, then partition 
pruning is not possible for such operation.

Expression ty

[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Summary: Sql. Partition pruning. Port partitionExtractor from AI2.  (was: 
Sql. Partition pruning. Extract partition pruning information from scan 
operations)

> Sql. Partition pruning. Port partitionExtractor from AI2.
> -
>
> Key: IGNITE-21277
> URL: https://issues.apache.org/jira/browse/IGNITE-21277
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> In order to prune unnecessary partitions, we need to obtain information about 
> the possible "values" of colocation key columns from the filter expressions 
> of every scan operator (such as IgniteTableScan, IgniteIndexScan and 
> IgniteSystemViewScan - or simply all subclasses of 
> ProjectableFilterableTableScan) prior to statement execution. This can be 
> accomplished by traversing the expression tree of the scan's filter and 
> collecting expressions with colocation key columns (this data is called 
> partition pruning metadata for simplicity).
> 1. Implement a component that takes a physical plan, analyses the filter 
> expression of every scan operator and creates (if possible) an expression 
> that includes all colocation key columns. (The PartitionExtractor from the 
> patch can be used as a reference implementation.)
> Basic example:
>  
> {code:java}
> Statement: 
> SELECT * FROM t WHERE pk = 7 OR pk = 42
> Partition metadata: 
> t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
> primary key, || denotes OR operation 
> {code}
>  
> If some colocation key columns are missing from the filter, then partition 
> pruning is not possible for such an operation.
> Expression types to analyze:
>  * AND
>  * EQUALS
>  * IS_FALSE
>  * IS_NOT_DISTINCT_FROM
>  * IS_NOT_FALSE
>  * IS_NOT_TRUE
>  * IS_TRUE
>  * NOT
>  * OR
>  * SEARCH (operation that tests whether a value is included in a certain 
> range)
> 2. Update QueryPlan to include partition pruning metadata for every scan 
> operator (source_id = ).
> —
> *Additional examples - partition pruning is possible*
> Dynamic parameters:
> {code:java}
> SELECT * FROM t WHERE pk = ?1 
> Partition pruning metadata: t = [ pk = ?1 ]
> {code}
> Colocation columns reside inside a nested expression:
> {code:java}
> SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
> Partition pruning metadata: t = [ pk = 2 ]
> {code}
> Multiple keys:
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
> Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
> {code}
> Complex expression with multiple keys:
> {code:java}
> SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
> col_col2 = 100)
> Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 
> = 4, col_col2 = 100) ]
> {code}
> Multiple tables, assuming that filter b_id = 42 is pushed into scan b, 
> because a_id = b_id:
> {code:java}
> SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
> Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
> {code}
> ---
> *Additional examples - partition pruning is not possible*
> Columns named col* are not part of colocation key:
> {code:java}
> SELECT * FROM t WHERE col1 = 10 
> Partition pruning metadata: [] // (empty) because filter does not use 
> colocation key columns.
> {code}
> {code:java}
> SELECT * FROM t WHERE col1 = col2 OR pk = 42 
> // Pruning is not possible because we need to scan all partitions to figure 
> out which tuples have ‘col1 = col2’
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
> // Although first expression uses all colocation key columns the second one 
> only uses some.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
> // Empty partition pruning metadata:  need to scan all partitions to figure 
> out which tuples have col_c1 = 1 OR col_c2 = 2.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
> // Empty partition pruning metadata: need to scan all partitions to figure 
> out which tuples have ‘col_col1 = col_col2’
> Partition pruning metadata: [] 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Description: 
In order to prune unnecessary partitions, we need to obtain information about 
the possible "values" of colocation key columns from the filter expressions 
of every scan operator (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply all subclasses of 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing the expression tree of the scan's filter and 
collecting expressions with colocation key columns (this data is called 
partition pruning metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expression of every scan operator and creates (if possible) an expression that 
includes all colocation key columns. (The PartitionExtractor from the patch 
(https://github.com/apache/ignite/pull/10928/files) can be used as a reference 
implementation.)

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from the filter, then partition 
pruning is not possible for such an operation.

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Additional examples - partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
Partition pruning metadata: [] // (empty) because filter does not use 
colocation key columns.
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// Pruning is not possible because we need to scan all partitions to figure out 
which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although first expression uses all colocation key columns the second one 
only uses some.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// Empty partition pruning metadata:  need to scan all partitions to figure out 
which tuples have col_c1 = 1 OR col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// Empty partition pruning metadata: need to scan all partitions to figure out 
which tuples have ‘col_col1 = col_col2’
Partition pruning metadata: [] 
{code}


  was:
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operators (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply subclasses of all 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing an expression tree of scan's filter and collecting 
expressions with colocation key columns (This data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan and analyses filter 
expressions of every scan operator and creates (if possible) an expression that 
includes all colocated columns. (The PartitionExtractor from patch can be used 
a reference implementation).

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from then filter, then partition 
pruni

[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Description: 
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operator (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply all subclasses of 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing the expression tree of the scan's filter and collecting 
expressions with colocation key columns (this data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expression of every scan operator, and creates (if possible) an expression that 
includes all colocation key columns. (The PartitionExtractor from the patch 
https://github.com/apache/ignite/pull/10928/files can be used as a reference 
implementation.)

Basic example:

{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns are missing from the filter, then partition 
pruning is not possible for such an operation.

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Additional examples - partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
Partition pruning metadata: [] // (empty) because filter does not use 
colocation key columns.
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// Pruning is not possible because we need to scan all partitions to figure out 
which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although first expression uses all colocation key columns the second one 
only uses some.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// Empty partition pruning metadata:  need to scan all partitions to figure out 
which tuples have col_c1 = 1 OR col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// Empty partition pruning metadata: need to scan all partitions to figure out 
which tuples have ‘col_col1 = col_col2’
Partition pruning metadata: [] 
{code}


  was:
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operators (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply subclasses of all 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing an expression tree of scan's filter and collecting 
expressions with colocation key columns (this data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan and analyses filter 
expressions of every scan operator and creates (if possible) an expression that 
includes all colocated columns. (The PartitionExtractor from patch 
(https://github.com/apache/ignite/pull/10928/files) can be used a reference 
implementation).

Basic example:

 
{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If some colocation key columns 

[jira] [Created] (IGNITE-21278) Add FORCE_INDEX/NO_INDEX hints for calcite engine

2024-01-16 Thread Andrey Novikov (Jira)
Andrey Novikov created IGNITE-21278:
---

 Summary: Add FORCE_INDEX/NO_INDEX hints for calcite engine
 Key: IGNITE-21278
 URL: https://issues.apache.org/jira/browse/IGNITE-21278
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Andrey Novikov


As part of hints for Calcite, we could try to implement simple hints like 
FORCE_INDEX/NO_INDEX.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21278) Add FORCE_INDEX/NO_INDEX hints for calcite engine

2024-01-16 Thread Andrey Novikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov updated IGNITE-21278:

Labels: ignite-3  (was: )

> Add FORCE_INDEX/NO_INDEX hints for calcite engine
> -
>
> Key: IGNITE-21278
> URL: https://issues.apache.org/jira/browse/IGNITE-21278
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Novikov
>Priority: Major
>  Labels: ignite-3
>
> As part of hints for Calcite, we could try to implement simple hint like 
> FORCE_INDEX/NO_INDEX.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Description: 
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operator (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply all subclasses of 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing the expression tree of the scan's filter and collecting 
expressions with colocation key columns (this data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expression of every scan operator, and creates (if possible) an expression that 
includes all colocation key columns. (The PartitionExtractor from the patch 
https://github.com/apache/ignite/pull/10928/files can be used as a reference 
implementation.)

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

Basic examples:

{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 

Statement: 
SELECT * FROM t WHERE pk = 7 OR col1 = 1
Partition metadata: [] // Empty, because col1 is not part of a colocation key.

Statement: 
SELECT * FROM t_colo_key1_colo_key2 WHERE colo_key1= 42
Partition metadata: [] // Empty, because colo_key2 is missing 
{code}

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Additional examples - partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
// Filter does not use colocation key columns.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// We need to scan all partitions to figure out which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although the first expression uses all colocation key columns, the second one only uses some.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// We need to scan all partitions to figure out which tuples have col_c1 = 1 OR 
col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// We need to scan all partitions to figure out which tuples have ‘col_col1 = 
col_col2’
Partition pruning metadata: [] 
{code}


  was:
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operators (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply subclasses of all 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing an expression tree of scan's filter and collecting 
expressions with colocation key columns (this data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan and analyses filter 
expressions of every scan operator and creates (if possible) an expression that 
includes all colocated columns. (The PartitionExtractor from patch 
(https://github.com/apache/ignite/pull/10928/files) can be used a reference 
implementation).

Basic example:

{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 
{code}
 

If 

[jira] [Updated] (IGNITE-21278) Add FORCE_INDEX/NO_INDEX hints for calcite engine

2024-01-16 Thread Andrey Novikov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov updated IGNITE-21278:

Description: 
As part of hints for Calcite, we could try to implement simple hints like 
FORCE_INDEX/NO_INDEX.

The Ignite 2.x documentation can be used as a reference: 
[https://ignite.apache.org/docs/latest/SQL/sql-calcite#force_index-no_index]
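
Following the Ignite 2.x syntax referenced above, the hints would presumably look 
like this (table and index names are illustrative):

{code:java}
SELECT /*+ FORCE_INDEX(IDX_PERSON_NAME) */ * FROM person WHERE name = 'John';

SELECT /*+ NO_INDEX(IDX_PERSON_NAME) */ * FROM person WHERE name = 'John';
{code}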

  was:As part of hints for Calcite, we could try to implement simple hint like 
FORCE_INDEX/NO_INDEX.


> Add FORCE_INDEX/NO_INDEX hints for calcite engine
> -
>
> Key: IGNITE-21278
> URL: https://issues.apache.org/jira/browse/IGNITE-21278
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Novikov
>Priority: Major
>  Labels: ignite-3
>
> As part of hints for Calcite, we could try to implement simple hint like 
> FORCE_INDEX/NO_INDEX.
> As reference ignite 2.x documentation can be used 
> [https://ignite.apache.org/docs/latest/SQL/sql-calcite#force_index-no_index]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Description: 
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operator prior to statement execution. This can be accomplished 
by traversing the expression tree of the scan's filter and collecting expressions 
with colocation key columns (this data is called partition pruning metadata for 
simplicity).

1. Implement a component that takes a physical plan, analyses the filter 
expression of every scan operator, and creates (if possible) an expression that 
includes all colocation key columns. (The PartitionExtractor from the patch 
https://github.com/apache/ignite/pull/10928/files can be used as a reference 
implementation.)

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

Basic examples:

{code:java}
Statement: 
SELECT * FROM t WHERE pk = 7 OR pk = 42

Partition metadata: 
t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
primary key, || denotes OR operation 

Statement: 
SELECT * FROM t WHERE pk = 7 OR col1 = 1
Partition metadata: [] // Empty, because col1 is not part of a colocation key.

Statement: 
SELECT * FROM t_colo_key1_colo_key2 WHERE colo_key1= 42
Partition metadata: [] // Empty, because colo_key2 is missing 
{code}

—

*Additional examples - partition pruning is possible*

Dynamic parameters:

{code:java}
SELECT * FROM t WHERE pk = ?1 
Partition pruning metadata: t = [ pk = ?1 ]
{code}

Colocation columns reside inside a nested expression:

{code:java}
SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
Partition pruning metadata: t = [ pk = 2 ]
{code}

Multiple keys:

{code:java}
SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
{code}

Complex expression with multiple keys:

{code:java}
SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
col_col2 = 100)
Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 = 
4, col_col2 = 100) ]
{code}

Multiple tables, assuming that filter b_id = 42 is pushed into scan b, because 
a_id = b_id:

{code:java}
SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
{code}

---

*Additional examples - partition pruning is not possible*

Columns named col* are not part of colocation key:

{code:java}
SELECT * FROM t WHERE col1 = 10 
// Filter does not use colocation key columns.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col1 = col2 OR pk = 42 
// We need to scan all partitions to figure out which tuples have ‘col1 = col2’
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
// Although the first expression uses all colocation key columns, the second one only uses some.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
// We need to scan all partitions to figure out which tuples have col_c1 = 1 OR 
col_c2 = 2.
Partition pruning metadata: [] 
{code}

{code:java}
SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
// We need to scan all partitions to figure out which tuples have ‘col_col1 = 
col_col2’
Partition pruning metadata: [] 
{code}


  was:
In order to prune unnecessary partitions we need to obtain information that 
includes possible "values" of colocation key columns from filter expressions 
for every scan operators (such as IgniteTableScan, IgniteIndexScan and 
IgniteSystemViewScan - or simply subclasses of all 
ProjectableFilterableTableScan) prior to statement execution. This can be 
accomplished by traversing an expression tree of scan's filter and collecting 
expressions with colocation key columns (this data is called partition pruning 
metadata for simplicity).

1. Implement a component that takes a physical plan and analyses filter 
expressions of every scan operator and creates (if possible) an expression that 
includes all colocated columns. (The PartitionExtractor from patch 
(https://github.com/apache/ignite/pull/10928/files) can be used a reference 
implementation).

Expression types to analyze:
 * AND
 * EQUALS
 * IS_FALSE
 * IS_NOT_DISTINCT_FROM
 * IS_NOT_FALSE
 * IS_NOT_TRUE
 * IS_TRUE
 * NOT
 * OR
 * SEARCH (operation that tests whether a value is included in a certain range)

2. Update QueryPlan to include partition pruning metadata for every scan 
operator (source_id = ).

Basic examples:

{code:java}
Statement: 

[jira] [Created] (IGNITE-21279) Sql. Partition pruning. Integrate static partition pruning into READ statements execution pipeline

2024-01-16 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-21279:
-

 Summary: Sql. Partition pruning. Integrate static partition 
pruning into READ statements execution pipeline
 Key: IGNITE-21279
 URL: https://issues.apache.org/jira/browse/IGNITE-21279
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 3.0.0-beta2
Reporter: Maksim Zhuravkov


Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against the current execution context 
to prune partitions that scan operations won't touch.

1. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned.
2. Support table scans, system view scans, and index scans.
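
A rough sketch of the evaluation step under these assumptions (all names below are 
hypothetical, not actual Ignite 3 APIs; the affinity function is passed in as an 
opaque mapping from a colocation key to a partition id):

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.ToIntFunction;

final class ReadPruningSketch {
    /** Marker for a value taken from a dynamic parameter, e.g. "pk = ?0". */
    record DynamicParam(int index) {}

    /**
     * Resolves the partitions a scan has to touch. Empty metadata means
     * "no pruning possible": all partitions are enlisted.
     */
    static Set<Integer> partitionsToEnlist(
            List<Map<String, Object>> pruningMetadata,  // e.g. [ {pk=7}, {pk=?0} ]
            Object[] dynamicParams,                     // statement parameters, by index
            ToIntFunction<Object[]> affinity,           // colocation key -> partition id
            List<String> colocationColumns,
            int totalPartitions) {

        Set<Integer> result = new HashSet<>();
        if (pruningMetadata.isEmpty()) {
            for (int p = 0; p < totalPartitions; p++) {
                result.add(p);
            }
            return result;
        }
        for (Map<String, Object> conjunct : pruningMetadata) {
            // Build the colocation key in column order, substituting dynamic parameter values.
            Object[] key = new Object[colocationColumns.size()];
            for (int i = 0; i < key.length; i++) {
                Object v = conjunct.get(colocationColumns.get(i));
                key[i] = (v instanceof DynamicParam) ? dynamicParams[((DynamicParam) v).index()] : v;
            }
            result.add(affinity.applyAsInt(key));
        }
        return result;
    }
}
{code}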





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21279) Sql. Partition pruning. Integrate static partition pruning into READ statements execution pipeline

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21279:
--
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that scan operations won't touch.

1. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned.
2. Support table scans, system view scans, and index scans.

After this issue is resolved, partition pruning should work for SELECT queries.

  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate
against current execution context to prune partitions that scan operations 
won't touch.

1. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned.
2. Support table scans, system view scans, and index scans.

After this issue is resolved, partition pruning should work for SELECT queries.


> Sql. Partition pruning. Integrate static partition pruning into READ 
> statements execution pipeline
> --
>
> Key: IGNITE-21279
> URL: https://issues.apache.org/jira/browse/IGNITE-21279
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that scan operations won't touch.
> 1. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned.
> 2. Support table scans, system view scans, and index scans.
> After this issue is resolved, partition pruning should work for SELECT 
> queries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21279) Sql. Partition pruning. Integrate static partition pruning into READ statements execution pipeline

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21279:
--
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate
against current execution context to prune partitions that scan operations 
won't touch.

1. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned.
2. Support table scans, system view scans, and index scans.

After this issue is resolved, partition pruning should work for SELECT queries.

  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate
against current execution context to prune partitions that scan operations 
won't touch.

1. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned.
2. Support table scans, system view scans, and index scans.




> Sql. Partition pruning. Integrate static partition pruning into READ 
> statements execution pipeline
> --
>
> Key: IGNITE-21279
> URL: https://issues.apache.org/jira/browse/IGNITE-21279
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate
> against current execution context to prune partitions that scan operations 
> won't touch.
> 1. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned.
> 2. Support table scans, system view scans, and index scans.
> After this issue is resolved, partition pruning should work for SELECT 
> queries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21277) Sql. Partition pruning. Port partitionExtractor from AI2.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21277:
--
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. Partition pruning. Port partitionExtractor from AI2.
> -
>
> Key: IGNITE-21277
> URL: https://issues.apache.org/jira/browse/IGNITE-21277
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> In order to prune unnecessary partitions we need to obtain information that 
> includes possible "values" of colocation key columns from filter expressions 
> for every scan operators prior to statement execution. This can be 
> accomplished by traversing an expression tree of scan's filter and collecting 
> expressions with colocation key columns (this data is called partition 
> pruning metadata for simplicity).
> 1. Implement a component that takes a physical plan and analyses filter 
> expressions of every scan operator and creates (if possible) an expression 
> that includes all colocated columns. (The PartitionExtractor from patch 
> (https://github.com/apache/ignite/pull/10928/files) can be used a reference 
> implementation).
> Expression types to analyze:
>  * AND
>  * EQUALS
>  * IS_FALSE
>  * IS_NOT_DISTINCT_FROM
>  * IS_NOT_FALSE
>  * IS_NOT_TRUE
>  * IS_TRUE
>  * NOT
>  * OR
>  * SEARCH (operation that tests whether a value is included in a certain 
> range)
> 2. Update QueryPlan to include partition pruning metadata for every scan 
> operator (source_id = ).
> Basic examples:
> {code:java}
> Statement: 
> SELECT * FROM t WHERE pk = 7 OR pk = 42
> Partition metadata: 
> t's source_id = [pk=7 || pk = 42] // Assuming colocation key is equal to 
> primary key, || denotes OR operation 
> Statement: 
> SELECT * FROM t WHERE pk = 7 OR col1 = 1
> Partition metadata: [] // Empty, because col1 is not part of a colocation key.
> Statement: 
> SELECT * FROM t_colo_key1_colo_key2 WHERE colo_key1= 42
> Partition metadata: [] // Empty, because colo_key2 is missing 
> {code}
> —
> *Additional examples - partition pruning is possible*
> Dynamic parameters:
> {code:java}
> SELECT * FROM t WHERE pk = ?1 
> Partition pruning metadata: t = [ pk = ?1 ]
> {code}
> Colocation columns reside inside a nested expression:
> {code:java}
> SELECT * FROM t WHERE col1 = col2 AND (col2 = 100 AND pk = 2) 
> Partition pruning metadata: t = [ pk = 2 ]
> {code}
> Multiple keys:
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 AND col_c2 = 2 
> Partition pruning metadata:  t = [ (col_c1 = 1, col_c2 = 2) ]
> {code}
> Complex expression with multiple keys:
> {code:java}
> SELECT * FROM t WHERE (col_col1 = 100 and col_col2 = 4) OR (col_col1 = 4 and 
> col_col2 = 100)
> Partition pruning metadata: t = [ (col_col1 = 100, col_col2 = 4) || (col_col1 
> = 4, col_col2 = 100) ]
> {code}
> Multiple tables, assuming that filter b_id = 42 is pushed into scan b, 
> because a_id = b_id:
> {code:java}
> SELECT * FROM a JOIN b WHERE a_id = b_id AND a_id = 42 
> Partition pruning metadata: a= [ a_id=42 ], b=[ b_id=42 ]
> {code}
> ---
> *Additional examples - partition pruning is not possible*
> Columns named col* are not part of colocation key:
> {code:java}
> SELECT * FROM t WHERE col1 = 10 
> // Filter does not use colocation key columns.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col1 = col2 OR pk = 42 
> // We need to scan all partitions to figure out which tuples have ‘col1 = 
> col2’
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = 10 AND col_col2 OR col_col1 = 42
> // Although first expression uses all colocation key columns the second one 
> only uses some.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_c1 = 1 OR col_c2 = 2 
> // We need to scan all partitions to figure out which tuples have col_c1 = 1 
> OR col_c2 = 2.
> Partition pruning metadata: [] 
> {code}
> {code:java}
> SELECT * FROM t WHERE col_col1 = col_col2 OR col_col2 = 42 
> // We need to scan all partitions to figure out which tuples have ‘col_col1 = 
> col_col2’
> Partition pruning metadata: [] 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21279) Sql. Partition pruning. Integrate static partition pruning into READ statements execution pipeline

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21279:
--
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. Partition pruning. Integrate static partition pruning into READ 
> statements execution pipeline
> --
>
> Key: IGNITE-21279
> URL: https://issues.apache.org/jira/browse/IGNITE-21279
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that scan operations won't touch.
> 1. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned.
> 2. Support table scans, system view scans, and index scans.
> After this issue is resolved, partition pruning should work for SELECT 
> queries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21280) Add DATA_PAGE_FRAGMENTED_UPDATE_RECORD WAL record type placeholder

2024-01-16 Thread Vyacheslav Koptilin (Jira)
Vyacheslav Koptilin created IGNITE-21280:


 Summary: Add DATA_PAGE_FRAGMENTED_UPDATE_RECORD WAL record type 
placeholder
 Key: IGNITE-21280
 URL: https://issues.apache.org/jira/browse/IGNITE-21280
 Project: Ignite
  Issue Type: Improvement
Reporter: Vyacheslav Koptilin
Assignee: Vyacheslav Koptilin


Reserve a new WAL record type for the encrypted DATA_PAGE_FRAGMENTED_UPDATE_RECORD record.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-16 Thread Maksim Zhuravkov (Jira)
Maksim Zhuravkov created IGNITE-21281:
-

 Summary: Sql. Partition pruning. Integrate static partition 
pruning into MODIFY statements execution pipeline.
 Key: IGNITE-21281
 URL: https://issues.apache.org/jira/browse/IGNITE-21281
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 3.0.0-beta2
Reporter: Maksim Zhuravkov


Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse the fragment tree to analyze inputs of DML operations:
  - If a Modify operation accepts a Scan operation as an input, we do not need to 
do anything, since both operations are collocated and this case is covered by X.
  - For operations that accept Values, we need to consider both Values and 
Projection operators, since SQL's VALUES accepts DEFAULT expressions.
2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.
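
A rough sketch of the VALUES case under these assumptions (names are hypothetical, 
not Ignite 3 internals; rows whose colocation key depends on expressions resolved 
later, e.g. DEFAULT in a Projection, would fall back to no pruning):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.ToIntFunction;

final class DmlPruningSketch {
    /**
     * Groups the literal rows of an INSERT ... VALUES by target partition so that
     * only the partitions that actually receive rows need to be enlisted.
     */
    static Map<Integer, List<Object[]>> groupRowsByPartition(
            List<Object[]> rows,                 // VALUES rows, colocation key columns first
            int colocationKeyLength,
            ToIntFunction<Object[]> affinity) {  // colocation key -> partition id

        Map<Integer, List<Object[]>> byPartition = new HashMap<>();
        for (Object[] row : rows) {
            Object[] key = new Object[colocationKeyLength];
            System.arraycopy(row, 0, key, 0, colocationKeyLength);
            byPartition.computeIfAbsent(affinity.applyAsInt(key), p -> new ArrayList<>()).add(row);
        }
        return byPartition;
    }
}
{code}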




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21281:
--
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input, we do not need to 
do anything - since both operations are collocated and this case is covered by 
https://issues.apache.org/jira/browse/IGNITE-21279). 
  - For operations that accept Values, we need to consider both Value and 
Projection operators, since SQL's VALUES accepts DEFAULT expression.

2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.


  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input, we do not need to 
do anything - since both operations are collocated and this case is coverted by 
X). 
  - For operations that accept Values, we need to consider both Value and 
Projection operators, since SQL's VALUES accepts DEFAULT expression.

2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.



> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that modify operations won't touch.
> 1. Traverse fragment tree to analyze inputs of DML operations:
>   - If Modify operation accepts Scan operation as an input, we do not need to 
> do anything - since both operations are collocated and this case is covered 
> by https://issues.apache.org/jira/browse/IGNITE-21279). 
>   - For operations that accept Values, we need to consider both Value and 
> Projection operators, since SQL's VALUES accepts DEFAULT expression.
> 2. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned/modified.
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21281:
--
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input, we do not need to 
do anything - since both operations are collocated and this case is covered by 
X.
  - For operations that accept Values, we need to consider both Value and 
Projection operators, since SQL's VALUES accepts DEFAULT expression.

2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.


  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input, we do not need to 
do anything - since both operations are collocated and this case is coverted by 
X). 
  - For operations that accept Values, we need to consider both Value and 
Projection operators, since SQL's VALUES accepts DEFAULT expression.
2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.



> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that modify operations won't touch.
> 1. Traverse fragment tree to analyze inputs of DML operations:
>   - If Modify operation accepts Scan operation as an input, we do not need to 
> do anything - since both operations are collocated and this case is coverted 
> by X). 
>   - For operations that accept Values, we need to consider both Value and 
> Projection operators, since SQL's VALUES accepts DEFAULT expression.
> 2. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned/modified.
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21281:
--
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input, we do not need to 
do anything - since both operations are collocated and this case is covered by 
https://issues.apache.org/jira/browse/IGNITE-21279). 
  - For operations that accept Values, we need to consider values of colocation 
key columns of both Value and Projection operators, since SQL's VALUES accepts 
DEFAULT expression.

2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.


  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input, we do not need to 
do anything - since both operations are collocated and this case is covered by 
https://issues.apache.org/jira/browse/IGNITE-21279). 
  - For operations that accept Values, we need to consider both Value and 
Projection operators, since SQL's VALUES accepts DEFAULT expression.

2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.



> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that modify operations won't touch.
> 1. Traverse fragment tree to analyze inputs of DML operations:
>   - If Modify operation accepts Scan operation as an input, we do not need to 
> do anything - since both operations are collocated and this case is covered 
> by https://issues.apache.org/jira/browse/IGNITE-21279). 
>   - For operations that accept Values, we need to consider values of 
> colocation key columns of both Value and Projection operators, since SQL's 
> VALUES accepts DEFAULT expression.
> 2. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned/modified.
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21281:
--
Labels: ignite-3  (was: )

> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that modify operations won't touch.
> 1. Traverse fragment tree to analyze inputs of DML operations:
>   - If Modify operation accepts Scan operation as an input (UPDATE), we do 
> not need to do anything when both operations are collocated and this case is 
> covered by https://issues.apache.org/jira/browse/IGNITE-21279). 
>   - For operations that accept INSERT/ MERGE, we need to consider values of 
> colocation key columns of both Value and Projection operators, since SQL's 
> VALUES accepts DEFAULT expression.
> 2. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned/modified.
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21281) Sql. Partition pruning. Integrate static partition pruning into MODIFY statements execution pipeline.

2024-01-16 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-21281:
--
Description: 
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input (UPDATE), we do not 
need to do anything when both operations are collocated and this case is 
covered by https://issues.apache.org/jira/browse/IGNITE-21279). 
  - For operations that accept INSERT/ MERGE, we need to consider values of 
colocation key columns of both Value and Projection operators, since SQL's 
VALUES accepts DEFAULT expression.

2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.


  was:
Given partition pruning information for each scan operator of a QueryPlan, we 
can evaluate a partition pruning predicate against statement's execution 
context to prune partitions that modify operations won't touch.

1. Traverse fragment tree to analyze inputs of DML operations:
  - If Modify operation accepts Scan operation as an input, we do not need to 
do anything - since both operations are collocated and this case is covered by 
https://issues.apache.org/jira/browse/IGNITE-21279). 
  - For operations that accept Values, we need to consider values of colocation 
key columns of both Value and Projection operators, since SQL's VALUES accepts 
DEFAULT expression.

2. Use affinity function and statement's execution context to evaluate 
partition pruning predicates for each scan operator, so enlist is only called 
for partitions that should be scanned/modified.

After this issue is resolved, partition pruning should work for INSERT, UPDATE, 
MERGE, and DELETE statements.



> Sql. Partition pruning. Integrate static partition pruning into MODIFY 
> statements execution pipeline.
> -
>
> Key: IGNITE-21281
> URL: https://issues.apache.org/jira/browse/IGNITE-21281
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta2
>Reporter: Maksim Zhuravkov
>Priority: Major
>
> Given partition pruning information for each scan operator of a QueryPlan, we 
> can evaluate a partition pruning predicate against statement's execution 
> context to prune partitions that modify operations won't touch.
> 1. Traverse fragment tree to analyze inputs of DML operations:
>   - If Modify operation accepts Scan operation as an input (UPDATE), we do 
> not need to do anything when both operations are collocated and this case is 
> covered by https://issues.apache.org/jira/browse/IGNITE-21279). 
>   - For operations that accept INSERT/ MERGE, we need to consider values of 
> colocation key columns of both Value and Projection operators, since SQL's 
> VALUES accepts DEFAULT expression.
> 2. Use affinity function and statement's execution context to evaluate 
> partition pruning predicates for each scan operator, so enlist is only called 
> for partitions that should be scanned/modified.
> After this issue is resolved, partition pruning should work for INSERT, 
> UPDATE, MERGE, and DELETE statements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)