(doris) branch master updated: [improvement](inverted index) Change inverted index field_name from column_name to id in format v2 (#36470)

2024-06-20 Thread jianliangqi
This is an automated email from the ASF dual-hosted git repository.

jianliangqi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 6960be77e96 [improvement](inverted index) Change inverted index 
field_name from column_name to id in format v2 (#36470)
6960be77e96 is described below

commit 6960be77e963941343b79998948316b81e97
Author: qiye 
AuthorDate: Thu Jun 20 15:07:30 2024 +0800

[improvement](inverted index) Change inverted index field_name from 
column_name to id in format v2 (#36470)

Currently, when writing a Lucene index, the field of the document is
column_name, so the column name is bound to the index field. Since
version 1.2, data file storage has been keyed by column_unique_id
instead of column_name, allowing columns to be renamed. Because of this
mismatch, existing inverted index data cannot be used after Doris
changes a column name. Column names also support Unicode characters,
which may cause other problems when indexing non-ASCII characters.
After consideration, it was decided to change the field name from
column_name to column_unique_id in format V2, while format V1 continues
to use column_name.

`field_name` is the name of the inverted index document's field:
1. for inverted_index_storage_format_v1, field_name is the `column_name`
in Doris
2. for inverted_index_storage_format_v2
2.1 for a normal column, field_name is the `column_unique_id` in Doris
2.2 for a variant column, field_name is the
`parent_column_unique_id.sub_column_name` in Doris
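The rules above can be sketched as follows (a hypothetical Python
rendering for illustration only; the real logic lives in the BE's C++
inverted index writer, and these helper names are not from the patch):

```python
def index_field_name(storage_format, column_name, column_unique_id,
                     parent_unique_id=None, sub_column_name=None):
    """Pick the Lucene field name per the rules above (illustrative only)."""
    if storage_format == "v1":
        # V1 keeps the old behavior: the field is bound to the column name.
        return column_name
    if parent_unique_id is not None:
        # V2 variant sub-column: parent unique id plus sub-column name.
        return f"{parent_unique_id}.{sub_column_name}"
    # V2 normal column: the stable unique id survives column renames.
    return str(column_unique_id)
```

Because the V2 field name no longer contains the user-visible column
name, renaming a column leaves existing index data usable.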
---
 be/src/olap/accept_null_predicate.h|  2 +-
 be/src/olap/column_predicate.h |  2 +-
 be/src/olap/comparison_predicate.h |  2 +-
 be/src/olap/field.h|  8 +++
 be/src/olap/in_list_predicate.h|  2 +-
 be/src/olap/match_predicate.cpp|  2 +-
 be/src/olap/match_predicate.h  |  2 +-
 be/src/olap/null_predicate.cpp |  2 +-
 be/src/olap/null_predicate.h   |  2 +-
 .../rowset/segment_v2/inverted_index_file_writer.h |  1 +
 .../rowset/segment_v2/inverted_index_writer.cpp| 15 -
 be/src/olap/rowset/segment_v2/segment_iterator.cpp | 25 --
 be/src/olap/rowset/segment_v2/segment_iterator.h   |  2 +-
 be/src/olap/shared_predicate.h |  2 +-
 be/src/vec/core/columns_with_type_and_name.h   | 12 +--
 be/src/vec/exprs/vcompound_pred.h  |  2 +-
 be/src/vec/exprs/vectorized_fn_call.cpp|  2 +-
 be/src/vec/exprs/vectorized_fn_call.h  |  2 +-
 be/src/vec/exprs/vexpr.h   |  2 +-
 be/src/vec/exprs/vexpr_context.cpp |  2 +-
 be/src/vec/exprs/vexpr_context.h   |  2 +-
 be/src/vec/functions/array/function_array_index.h  |  2 +-
 be/src/vec/functions/function.h| 15 +++--
 23 files changed, 81 insertions(+), 29 deletions(-)

diff --git a/be/src/olap/accept_null_predicate.h 
b/be/src/olap/accept_null_predicate.h
index c9fe651f802..89d26e2684c 100644
--- a/be/src/olap/accept_null_predicate.h
+++ b/be/src/olap/accept_null_predicate.h
@@ -51,7 +51,7 @@ public:
 return _nested->evaluate(iterator, num_rows, roaring);
 }
 
-Status evaluate(const vectorized::NameAndTypePair& name_with_type,
+Status evaluate(const vectorized::IndexFieldNameAndTypePair& name_with_type,
 InvertedIndexIterator* iterator, uint32_t num_rows,
 roaring::Roaring* bitmap) const override {
 return _nested->evaluate(name_with_type, iterator, num_rows, bitmap);
diff --git a/be/src/olap/column_predicate.h b/be/src/olap/column_predicate.h
index b6b419f8ccf..d5b5abe1501 100644
--- a/be/src/olap/column_predicate.h
+++ b/be/src/olap/column_predicate.h
@@ -176,7 +176,7 @@ public:
 roaring::Roaring* roaring) const = 0;
 
 //evaluate predicate on inverted
-virtual Status evaluate(const vectorized::NameAndTypePair& name_with_type,
+virtual Status evaluate(const vectorized::IndexFieldNameAndTypePair& name_with_type,
 InvertedIndexIterator* iterator, uint32_t num_rows,
 roaring::Roaring* bitmap) const {
 return Status::NotSupported(
diff --git a/be/src/olap/comparison_predicate.h 
b/be/src/olap/comparison_predicate.h
index 24a35a3ba15..685d70f1e0b 100644
--- a/be/src/olap/comparison_predicate.h
+++ b/be/src/olap/comparison_predicate.h
@@ -67,7 +67,7 @@ public:
bitmap);
 }
 
-Status evaluate(const vectorized::NameAndTypePair& name_with_type,
+Status evaluate(const vectorized::IndexFieldNameAndTypePair& name_with_type,
 

(doris) branch master updated: [Fix]Fix insert select missing audit log when connect follower FE (#36472)

2024-06-20 Thread wangbo
This is an automated email from the ASF dual-hosted git repository.

wangbo pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new a2fb08c5944 [Fix]Fix insert select missing audit log when connect 
follower FE (#36472)
a2fb08c5944 is described below

commit a2fb08c59448866a6518196ea0ce4924948e91c1
Author: wangbo 
AuthorDate: Thu Jun 20 15:16:10 2024 +0800

[Fix]Fix insert select missing audit log when connect follower FE (#36472)

## Proposed changes
pick #36454

Fix: when an ```insert select``` is executed on a Follower FE, the
audit log can be missing query statistics.
This happens because the ```audit log``` is written by the FE the
client connects to, but the request is forwarded to the master FE,
which becomes the coord FE; BEs report query statistics to the coord
FE, so the connected Follower never receives them and its audit log
lacks the statistics.
We add a new field marking the FE the client connected to, so BEs
report query statistics to that connected FE.
Besides, this refactors the FE's WorkloadRuntimeStatusMgr to make the
logic clearer and adds some logging in the BE.
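The essence of the fix can be sketched as follows (a hypothetical
simplification; the actual change threads the new Thrift field
`current_connect_fe` from the request into `QueryContext`, and this
helper name is illustrative):

```python
def stats_report_target(params):
    """Return the FE address a BE should report query statistics to.

    Before the fix, BEs always reported to the coordinator address; with
    the new current_connect_fe field they report to the FE the client is
    actually connected to, which is the one writing the audit log.
    """
    return params.get("current_connect_fe") or params["coord_addr"]
```

For a query forwarded from a Follower to the master, the two addresses
differ, which is exactly the case the old code got wrong.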
---
 be/src/runtime/fragment_mgr.cpp|   8 +-
 be/src/runtime/query_context.cpp   |  18 +--
 be/src/runtime/query_context.h |   4 +-
 be/src/runtime/runtime_query_statistics_mgr.cpp|  34 --
 .../apache/doris/planner/StreamLoadPlanner.java|   1 +
 .../java/org/apache/doris/qe/ConnectContext.java   |   9 ++
 .../main/java/org/apache/doris/qe/Coordinator.java |  14 +++
 .../WorkloadRuntimeStatusMgr.java  | 125 ++---
 gensrc/thrift/PaloInternalService.thrift   |   3 +
 9 files changed, 130 insertions(+), 86 deletions(-)

diff --git a/be/src/runtime/fragment_mgr.cpp b/be/src/runtime/fragment_mgr.cpp
index 1172b5b889b..9271f78fe56 100644
--- a/be/src/runtime/fragment_mgr.cpp
+++ b/be/src/runtime/fragment_mgr.cpp
@@ -607,12 +607,14 @@ Status FragmentMgr::_get_query_ctx(const Params& params, 
TUniqueId query_id, boo
 LOG(INFO) << "query_id: " << print_id(query_id) << ", coord_addr: " << 
params.coord
   << ", total fragment num on current host: " << 
params.fragment_num_on_host
   << ", fe process uuid: " << 
params.query_options.fe_process_uuid
-  << ", query type: " << params.query_options.query_type;
+  << ", query type: " << params.query_options.query_type
+  << ", report audit fe:" << params.current_connect_fe;
 
 // This may be a first fragment request of the query.
 // Create the query fragments context.
-query_ctx = QueryContext::create_shared(query_id, _exec_env, 
params.query_options,
-params.coord, pipeline, 
params.is_nereids);
+query_ctx =
+QueryContext::create_shared(query_id, _exec_env, 
params.query_options, params.coord,
+pipeline, params.is_nereids, 
params.current_connect_fe);
 SCOPED_SWITCH_THREAD_MEM_TRACKER_LIMITER(query_ctx->query_mem_tracker);
 RETURN_IF_ERROR(DescriptorTbl::create(&(query_ctx->obj_pool), 
params.desc_tbl,
   &(query_ctx->desc_tbl)));
diff --git a/be/src/runtime/query_context.cpp b/be/src/runtime/query_context.cpp
index 2dafb8dd3ec..429c4f80563 100644
--- a/be/src/runtime/query_context.cpp
+++ b/be/src/runtime/query_context.cpp
@@ -57,7 +57,7 @@ public:
 
 QueryContext::QueryContext(TUniqueId query_id, ExecEnv* exec_env,
const TQueryOptions& query_options, TNetworkAddress 
coord_addr,
-   bool is_pipeline, bool is_nereids)
+   bool is_pipeline, bool is_nereids, TNetworkAddress 
current_connect_fe)
 : _timeout_second(-1),
   _query_id(query_id),
   _exec_env(exec_env),
@@ -81,10 +81,13 @@ QueryContext::QueryContext(TUniqueId query_id, ExecEnv* 
exec_env,
 DCHECK_EQ(is_query_type_valid, true);
 
 this->coord_addr = coord_addr;
-// external query has no coord_addr
+// current_connect_fe is used for report query statistics
+this->current_connect_fe = current_connect_fe;
+// external query has no current_connect_fe
 if (query_options.query_type != TQueryType::EXTERNAL) {
-bool is_coord_addr_valid = !this->coord_addr.hostname.empty() && this->coord_addr.port != 0;
-DCHECK_EQ(is_coord_addr_valid, true);
+bool is_report_fe_addr_valid =
+!this->current_connect_fe.hostname.empty() && this->current_connect_fe.port != 0;
+DCHECK_EQ(is_report_fe_addr_valid, true);
 }
 
 register_memory_statistics();
@@ -284,7 +287,7 @@ void QueryContext::set_pipeline_context(
 
 void QueryContext::register_query_statistics(

(doris) branch branch-2.0 updated: [Fix]Fix insert select missing audit log when connect follower FE (#36454)

2024-06-20 Thread wangbo
This is an automated email from the ASF dual-hosted git repository.

wangbo pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new fb789f1 [Fix]Fix insert select missing audit log when connect 
follower FE (#36454)
fb789f1 is described below

commit fb789f1f5719c1de6354f4ae19f5f71b8e2f
Author: wangbo 
AuthorDate: Thu Jun 20 15:16:14 2024 +0800

[Fix]Fix insert select missing audit log when connect follower FE (#36454)

## Proposed changes
Fix: when an ```insert select``` is executed on a Follower FE, the
audit log can be missing query statistics.
This happens because the ```audit log``` is written by the FE the
client connects to, but the request is forwarded to the master FE,
which becomes the coord FE; BEs report query statistics to the coord
FE, so the connected Follower never receives them and its audit log
lacks the statistics.
We add a new field marking the FE the client connected to, so BEs
report query statistics to that connected FE.
Besides, this refactors the FE's WorkloadRuntimeStatusMgr to make the
logic clearer and adds some logging in the BE.
---
 be/src/runtime/fragment_mgr.cpp|   4 +-
 be/src/runtime/query_context.h |   7 +-
 be/src/runtime/runtime_query_statistics_mgr.cpp|  34 --
 .../apache/doris/planner/StreamLoadPlanner.java|   2 +
 .../java/org/apache/doris/qe/ConnectContext.java   |   9 ++
 .../main/java/org/apache/doris/qe/Coordinator.java |  14 +++
 .../WorkloadRuntimeStatusMgr.java  | 121 +++--
 gensrc/thrift/PaloInternalService.thrift   |   3 +
 8 files changed, 119 insertions(+), 75 deletions(-)

diff --git a/be/src/runtime/fragment_mgr.cpp b/be/src/runtime/fragment_mgr.cpp
index 1529d66def2..66538529c3f 100644
--- a/be/src/runtime/fragment_mgr.cpp
+++ b/be/src/runtime/fragment_mgr.cpp
@@ -692,9 +692,11 @@ Status FragmentMgr::_get_query_ctx(const Params& params, 
TUniqueId query_id, boo
 }
 
 query_ctx->coord_addr = params.coord;
+query_ctx->current_connect_fe = params.current_connect_fe;
 LOG(INFO) << "query_id: " << UniqueId(query_ctx->query_id.hi, 
query_ctx->query_id.lo)
   << " coord_addr " << query_ctx->coord_addr
-  << " total fragment num on current host: " << 
params.fragment_num_on_host;
+  << " total fragment num on current host: " << 
params.fragment_num_on_host
+  << " report audit fe:" << params.current_connect_fe;
 query_ctx->query_globals = params.query_globals;
 
 if (params.__isset.resource_info) {
diff --git a/be/src/runtime/query_context.h b/be/src/runtime/query_context.h
index 8746483df4c..e47e09e5921 100644
--- a/be/src/runtime/query_context.h
+++ b/be/src/runtime/query_context.h
@@ -182,7 +182,7 @@ public:
 
 void register_query_statistics(std::shared_ptr qs) {
 
_exec_env->runtime_query_statistics_mgr()->register_query_statistics(print_id(query_id),
 qs,
- 
coord_addr);
+ 
current_connect_fe);
 }
 
 std::shared_ptr get_query_statistics() {
@@ -198,7 +198,7 @@ public:
 if (_exec_env &&
 _exec_env->runtime_query_statistics_mgr()) { // for ut 
FragmentMgrTest.normal
 
_exec_env->runtime_query_statistics_mgr()->register_query_statistics(
-query_id_str, qs, coord_addr);
+query_id_str, qs, current_connect_fe);
 }
 } else {
 LOG(INFO) << " query " << query_id_str << " get memory query 
statistics failed ";
@@ -212,7 +212,7 @@ public:
 if (_exec_env &&
 _exec_env->runtime_query_statistics_mgr()) { // for ut 
FragmentMgrTest.normal
 
_exec_env->runtime_query_statistics_mgr()->register_query_statistics(
-print_id(query_id), _cpu_statistics, coord_addr);
+print_id(query_id), _cpu_statistics, 
current_connect_fe);
 }
 }
 }
@@ -226,6 +226,7 @@ public:
 std::string user;
 std::string group;
 TNetworkAddress coord_addr;
+TNetworkAddress current_connect_fe;
 TQueryGlobals query_globals;
 
 /// In the current implementation, for multiple fragments executed by a 
query on the same BE node,
diff --git a/be/src/runtime/runtime_query_statistics_mgr.cpp 
b/be/src/runtime/runtime_query_statistics_mgr.cpp
index 5c40ea61763..0ed8cbeb79c 100644
--- a/be/src/runtime/runtime_query_statistics_mgr.cpp
+++ b/be/src/runtime/runtime_query_statistics_mgr.cpp
@@ -83,8 +83,8 @@ void 
RuntimeQueryStatiticsMgr::report_runtime_query_statistics() {
 
 if (!coord_status.ok()) {
   

(doris) branch branch-2.1 updated: [Fix]Fix insert select missing audit log when connect follower FE (#36481)

2024-06-20 Thread wangbo
This is an automated email from the ASF dual-hosted git repository.

wangbo pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 88e02c836d1 [Fix]Fix insert select missing audit log when connect 
follower FE (#36481)
88e02c836d1 is described below

commit 88e02c836d1c7b96c63524776da57f8aacbaa600
Author: wangbo 
AuthorDate: Thu Jun 20 15:16:16 2024 +0800

[Fix]Fix insert select missing audit log when connect follower FE (#36481)

## Proposed changes

pick #36472
---
 be/src/runtime/fragment_mgr.cpp|   5 +-
 be/src/runtime/query_context.cpp   |  18 +--
 be/src/runtime/query_context.h |   3 +-
 be/src/runtime/runtime_query_statistics_mgr.cpp|  34 --
 .../apache/doris/planner/StreamLoadPlanner.java|   2 +
 .../java/org/apache/doris/qe/ConnectContext.java   |   9 ++
 .../main/java/org/apache/doris/qe/Coordinator.java |  14 +++
 .../WorkloadRuntimeStatusMgr.java  | 125 ++---
 gensrc/thrift/PaloInternalService.thrift   |   3 +
 9 files changed, 128 insertions(+), 85 deletions(-)

diff --git a/be/src/runtime/fragment_mgr.cpp b/be/src/runtime/fragment_mgr.cpp
index 63079933ca1..a7808cb6d56 100644
--- a/be/src/runtime/fragment_mgr.cpp
+++ b/be/src/runtime/fragment_mgr.cpp
@@ -643,13 +643,14 @@ Status FragmentMgr::_get_query_ctx(const Params& params, 
TUniqueId query_id, boo
 LOG(INFO) << "query_id: " << print_id(query_id) << ", coord_addr: " << 
params.coord
   << ", total fragment num on current host: " << 
params.fragment_num_on_host
   << ", fe process uuid: " << 
params.query_options.fe_process_uuid
-  << ", query type: " << params.query_options.query_type;
+  << ", query type: " << params.query_options.query_type
+  << ", report audit fe:" << params.current_connect_fe;
 
 // This may be a first fragment request of the query.
 // Create the query fragments context.
 query_ctx = QueryContext::create_shared(query_id, 
params.fragment_num_on_host, _exec_env,
 params.query_options, 
params.coord, pipeline,
-params.is_nereids);
+params.is_nereids, 
params.current_connect_fe);
 SCOPED_SWITCH_THREAD_MEM_TRACKER_LIMITER(query_ctx->query_mem_tracker);
 RETURN_IF_ERROR(DescriptorTbl::create(&(query_ctx->obj_pool), 
params.desc_tbl,
   &(query_ctx->desc_tbl)));
diff --git a/be/src/runtime/query_context.cpp b/be/src/runtime/query_context.cpp
index f9cc9757fe3..bbcdc3b4771 100644
--- a/be/src/runtime/query_context.cpp
+++ b/be/src/runtime/query_context.cpp
@@ -43,7 +43,7 @@ public:
 
 QueryContext::QueryContext(TUniqueId query_id, int total_fragment_num, 
ExecEnv* exec_env,
const TQueryOptions& query_options, TNetworkAddress 
coord_addr,
-   bool is_pipeline, bool is_nereids)
+   bool is_pipeline, bool is_nereids, TNetworkAddress 
current_connect_fe)
 : fragment_num(total_fragment_num),
   timeout_second(-1),
   _query_id(query_id),
@@ -70,10 +70,13 @@ QueryContext::QueryContext(TUniqueId query_id, int 
total_fragment_num, ExecEnv*
 DCHECK_EQ(is_query_type_valid, true);
 
 this->coord_addr = coord_addr;
-// external query has no coord_addr
+// current_connect_fe is used for report query statistics
+this->current_connect_fe = current_connect_fe;
+// external query has no current_connect_fe
 if (query_options.query_type != TQueryType::EXTERNAL) {
-bool is_coord_addr_valid = !this->coord_addr.hostname.empty() && this->coord_addr.port != 0;
-DCHECK_EQ(is_coord_addr_valid, true);
+bool is_report_fe_addr_valid =
+!this->current_connect_fe.hostname.empty() && this->current_connect_fe.port != 0;
+DCHECK_EQ(is_report_fe_addr_valid, true);
 }
 
 register_memory_statistics();
@@ -265,7 +268,7 @@ void QueryContext::set_pipeline_context(
 
 void QueryContext::register_query_statistics(std::shared_ptr 
qs) {
 _exec_env->runtime_query_statistics_mgr()->register_query_statistics(
-print_id(_query_id), qs, coord_addr, _query_options.query_type);
+print_id(_query_id), qs, current_connect_fe, 
_query_options.query_type);
 }
 
 std::shared_ptr QueryContext::get_query_statistics() {
@@ -279,7 +282,7 @@ void QueryContext::register_memory_statistics() {
 std::string query_id = print_id(_query_id);
 if (qs) {
 
_exec_env->runtime_query_statistics_mgr()->register_query_statistics(
-query_id, qs, coord_addr, _query_options.query_ty

(doris) branch branch-2.0 updated: [fix](planner) fix no data issue when use datetimev1/datetimev2 & datev2 as function coalesce's parameter in legacy planner (#36583)

2024-06-20 Thread lide
This is an automated email from the ASF dual-hosted git repository.

lide pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new bb9d532e653 [fix](planner) fix no data issue when use datetimev1/datetimev2 & datev2 as function coalesce's parameter in legacy planner (#36583)
bb9d532e653 is described below

commit bb9d532e653a131e7a3883062f1bfb9ab382623a
Author: Yulei-Yang 
AuthorDate: Thu Jun 20 15:22:54 2024 +0800

[fix](planner) fix no data issue when use datetimev1/datetimev2 & datev2 as function coalesce's parameter in legacy planner (#36583)
---
 gensrc/script/doris_builtins_functions.py  |   2 +
 .../conditional_functions/test_coalesce_new.groovy | 101 +
 2 files changed, 103 insertions(+)
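The patch adds two coalesce signatures that accept mixed
datetime/datev2 arguments. A hypothetical sketch of why that matters
(a simplification of the legacy planner's signature matching, not the
actual Java code): without an explicit mixed signature, a call like
coalesce(datetime_col, datev2_literal) found no usable match and the
predicate returned no rows.

```python
# Simplified signature table; the two mixed entries are what the patch adds.
SIGNATURES = [
    ("DATETIME",   ("DATETIME",)),
    ("DATETIMEV2", ("DATETIME", "DATEV2")),    # added by this patch
    ("DATETIMEV2", ("DATETIMEV2",)),
    ("DATETIMEV2", ("DATETIMEV2", "DATEV2")),  # added by this patch
]

def coalesce_return_type(arg_types):
    """Return the first signature whose accepted types cover all arguments."""
    for ret, accepted in SIGNATURES:
        if all(t in accepted for t in arg_types):
            return ret
    return None  # no match: the legacy planner produced a broken plan
```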

diff --git a/gensrc/script/doris_builtins_functions.py 
b/gensrc/script/doris_builtins_functions.py
index 9fa38174be1..ea09c5473c1 100644
--- a/gensrc/script/doris_builtins_functions.py
+++ b/gensrc/script/doris_builtins_functions.py
@@ -1452,8 +1452,10 @@ visible_functions = {
 [['coalesce'], 'FLOAT', ['FLOAT', '...'], 'CUSTOM'],
 [['coalesce'], 'DOUBLE', ['DOUBLE', '...'], 'CUSTOM'],
 [['coalesce'], 'DATETIME', ['DATETIME', '...'], 'CUSTOM'],
+[['coalesce'], 'DATETIMEV2', ['DATETIME', 'DATEV2', '...'], 'CUSTOM'],
 [['coalesce'], 'DATE', ['DATE', '...'], 'CUSTOM'],
 [['coalesce'], 'DATETIMEV2', ['DATETIMEV2', '...'], 'CUSTOM'],
+[['coalesce'], 'DATETIMEV2', ['DATETIMEV2', 'DATEV2', '...'], 'CUSTOM'],
 [['coalesce'], 'DATEV2', ['DATEV2', '...'], 'CUSTOM'],
 [['coalesce'], 'DECIMALV2', ['DECIMALV2', '...'], 'CUSTOM'],
 [['coalesce'], 'DECIMAL32', ['DECIMAL32', '...'], 'CUSTOM'],
diff --git 
a/regression-test/suites/query_p0/sql_functions/conditional_functions/test_coalesce_new.groovy
 
b/regression-test/suites/query_p0/sql_functions/conditional_functions/test_coalesce_new.groovy
new file mode 100644
index 000..194849a3c63
--- /dev/null
+++ 
b/regression-test/suites/query_p0/sql_functions/conditional_functions/test_coalesce_new.groovy
@@ -0,0 +1,101 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+suite("test_coalesce_new") {
+// test parameter:datetime, datev2
+sql """
+admin set frontend config ("enable_date_conversion"="false")
+"""
+sql """
+admin set frontend config ("disable_datev1"="false")
+"""
+sql """
+drop table if exists test_cls
+"""
+
+sql """
+CREATE TABLE `test_cls` (
+`id` int(11) NOT NULL COMMENT '',
+`name` varchar(32) NOT NULL COMMENT '',
+`dt` datetime NOT NULL
+) ENGINE=OLAP
+UNIQUE KEY(`id`)
+DISTRIBUTED BY HASH(`id`) BUCKETS 2
+PROPERTIES(
+"replication_allocation" = "tag.location.default: 1"
+);  
+"""
+
+sql """
+insert into test_cls values (1,'Alice','2023-06-01 
12:00:00'),(2,'Bob','2023-06-02 12:00:00'),(3,'Carl','2023-05-01 14:00:00')
+"""
+
+sql """
+SET enable_nereids_planner=false
+"""
+def result1 = try_sql """
+select dt from test_cls where coalesce (dt, 
str_to_date(concat('202306', '01'), '%Y%m%d')) >= '2023-06-01'
+"""
+assertEquals(result1.size(), 2);
+
+
+// test parameter:datetimev2, datev2
+sql """
+admin set frontend config ("enable_date_conversion"="true")
+"""
+sql """
+admin set frontend config ("disable_datev1"="true")
+"""
+sql """
+drop table if exists test_cls_dtv2
+"""
+
+sql """
+CREATE TABLE `test_cls_dtv2` (
+`id` int(11) NOT NULL COMMENT '',
+`name` varchar(32) NOT NULL COMMENT '',
+`dt` datetime NOT NULL
+) ENGINE=OLAP
+UNIQUE KEY(`id`)
+DISTRIBUTED BY HASH(`id`) BUCKETS 2
+PROPERTIES(
+"replication_allocation" = "tag.location.default: 1"
+);  
+"""
+
+sql """
+insert

(doris) branch 2.0.10-decimal-patch updated: Fix insert select missing audit log when connect follower FE (#36597)

2024-06-20 Thread wangbo
This is an automated email from the ASF dual-hosted git repository.

wangbo pushed a commit to branch 2.0.10-decimal-patch
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/2.0.10-decimal-patch by this 
push:
 new 25fc5d24826 Fix insert select missing audit log when connect follower 
FE (#36597)
25fc5d24826 is described below

commit 25fc5d24826481ce080c7b36ed2ea8f6cc97a883
Author: wangbo 
AuthorDate: Thu Jun 20 15:23:24 2024 +0800

Fix insert select missing audit log when connect follower FE (#36597)

## Proposed changes

pick #36454
---
 be/src/runtime/fragment_mgr.cpp|   4 +-
 be/src/runtime/query_context.h |   7 +-
 be/src/runtime/runtime_query_statistics_mgr.cpp|  34 --
 .../apache/doris/planner/StreamLoadPlanner.java|   2 +
 .../java/org/apache/doris/qe/ConnectContext.java   |   9 ++
 .../main/java/org/apache/doris/qe/Coordinator.java |  14 +++
 .../WorkloadRuntimeStatusMgr.java  | 121 +++--
 gensrc/thrift/PaloInternalService.thrift   |   3 +
 8 files changed, 119 insertions(+), 75 deletions(-)

diff --git a/be/src/runtime/fragment_mgr.cpp b/be/src/runtime/fragment_mgr.cpp
index 1529d66def2..66538529c3f 100644
--- a/be/src/runtime/fragment_mgr.cpp
+++ b/be/src/runtime/fragment_mgr.cpp
@@ -692,9 +692,11 @@ Status FragmentMgr::_get_query_ctx(const Params& params, 
TUniqueId query_id, boo
 }
 
 query_ctx->coord_addr = params.coord;
+query_ctx->current_connect_fe = params.current_connect_fe;
 LOG(INFO) << "query_id: " << UniqueId(query_ctx->query_id.hi, 
query_ctx->query_id.lo)
   << " coord_addr " << query_ctx->coord_addr
-  << " total fragment num on current host: " << 
params.fragment_num_on_host;
+  << " total fragment num on current host: " << 
params.fragment_num_on_host
+  << " report audit fe:" << params.current_connect_fe;
 query_ctx->query_globals = params.query_globals;
 
 if (params.__isset.resource_info) {
diff --git a/be/src/runtime/query_context.h b/be/src/runtime/query_context.h
index 8746483df4c..e47e09e5921 100644
--- a/be/src/runtime/query_context.h
+++ b/be/src/runtime/query_context.h
@@ -182,7 +182,7 @@ public:
 
 void register_query_statistics(std::shared_ptr qs) {
 
_exec_env->runtime_query_statistics_mgr()->register_query_statistics(print_id(query_id),
 qs,
- 
coord_addr);
+ 
current_connect_fe);
 }
 
 std::shared_ptr get_query_statistics() {
@@ -198,7 +198,7 @@ public:
 if (_exec_env &&
 _exec_env->runtime_query_statistics_mgr()) { // for ut 
FragmentMgrTest.normal
 
_exec_env->runtime_query_statistics_mgr()->register_query_statistics(
-query_id_str, qs, coord_addr);
+query_id_str, qs, current_connect_fe);
 }
 } else {
 LOG(INFO) << " query " << query_id_str << " get memory query 
statistics failed ";
@@ -212,7 +212,7 @@ public:
 if (_exec_env &&
 _exec_env->runtime_query_statistics_mgr()) { // for ut 
FragmentMgrTest.normal
 
_exec_env->runtime_query_statistics_mgr()->register_query_statistics(
-print_id(query_id), _cpu_statistics, coord_addr);
+print_id(query_id), _cpu_statistics, 
current_connect_fe);
 }
 }
 }
@@ -226,6 +226,7 @@ public:
 std::string user;
 std::string group;
 TNetworkAddress coord_addr;
+TNetworkAddress current_connect_fe;
 TQueryGlobals query_globals;
 
 /// In the current implementation, for multiple fragments executed by a 
query on the same BE node,
diff --git a/be/src/runtime/runtime_query_statistics_mgr.cpp 
b/be/src/runtime/runtime_query_statistics_mgr.cpp
index 5c40ea61763..0ed8cbeb79c 100644
--- a/be/src/runtime/runtime_query_statistics_mgr.cpp
+++ b/be/src/runtime/runtime_query_statistics_mgr.cpp
@@ -83,8 +83,8 @@ void 
RuntimeQueryStatiticsMgr::report_runtime_query_statistics() {
 
 if (!coord_status.ok()) {
 std::stringstream ss;
-LOG(WARNING) << "could not get client " << add_str
- << " when report workload runtime stats, reason is "
+LOG(WARNING) << "[report_query_statistics]could not get client " 
<< add_str
+ << " when report workload runtime stats, reason:"
  << coord_status.to_string();
 continue;
 }
@@ -103,26 +103,38 @@ void 
RuntimeQueryStatiticsMgr::report_runtime_query_statistics() {
 coord->reportExecStatus(res, params);
 rpc_result[add

(doris) branch master updated (a2fb08c5944 -> 09e5d3530eb)

2024-06-20 Thread zouxinyi
This is an automated email from the ASF dual-hosted git repository.

zouxinyi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from a2fb08c5944 [Fix]Fix insert select missing audit log when connect 
follower FE (#36472)
 add 09e5d3530eb [revert](memory) Revert fix jdk17 and jemalloc hook not 
compatible on some env (#35694)

No new revisions were added by this update.

Summary of changes:
 be/CMakeLists.txt   |  5 --
 be/cmake/thirdparty.cmake   |  7 ++-
 be/src/http/action/jeprofile_actions.cpp|  4 --
 be/src/http/default_path_handlers.cpp   |  4 --
 be/src/runtime/CMakeLists.txt   |  2 +-
 be/src/runtime/thread_context.h |  2 +-
 be/src/util/mem_info.cpp|  4 --
 be/src/util/mem_info.h  | 11 
 build.sh| 37 +++
 cloud/CMakeLists.txt|  3 -
 cloud/src/common/CMakeLists.txt |  2 +-
 regression-test/pipeline/performance/compile.sh |  1 -
 thirdparty/build-thirdparty.sh  | 84 +++--
 13 files changed, 22 insertions(+), 144 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch master updated (09e5d3530eb -> cdcb02a6e8d)

2024-06-20 Thread starocean999
This is an automated email from the ASF dual-hosted git repository.

starocean999 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 09e5d3530eb [revert](memory) Revert fix jdk17 and jemalloc hook not 
compatible on some env (#35694)
 add cdcb02a6e8d [fix](Nereids) should check it is Slot before check it is 
DELETE_SIGN (#36564)

No new revisions were added by this update.

Summary of changes:
 .../rules/analysis/LogicalResultSinkToShortCircuitPointQuery.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
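The fix is an instance-of-before-cast guard. A hypothetical Python
rendering of the corrected check (the real code is the Java Nereids
rule above; these class names are illustrative, though
`__DORIS_DELETE_SIGN__` is Doris's hidden delete-marker column):

```python
class Expression:
    pass

class Slot(Expression):
    def __init__(self, name):
        self.name = name

DELETE_SIGN = "__DORIS_DELETE_SIGN__"

def is_delete_sign(expr):
    # Fixed order: confirm the expression is a Slot first, then compare
    # the name; reading a name off a non-Slot expression failed before.
    return isinstance(expr, Slot) and expr.name == DELETE_SIGN
```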


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch master updated: [function](signature) add datev2 signature for from_days function (#36505)

2024-06-20 Thread zhangstar333
This is an automated email from the ASF dual-hosted git repository.

zhangstar333 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new f7307875ae2 [function](signature) add datev2 signature for from_days 
function (#36505)
f7307875ae2 is described below

commit f7307875ae213ce20c310d62cd773378e906f7c8
Author: zhangstar333 <87313068+zhangstar...@users.noreply.github.com>
AuthorDate: Thu Jun 20 15:47:22 2024 +0800

[function](signature) add datev2 signature for from_days function (#36505)

## Proposed changes
add datev2 signature for from_days function
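from_days converts a day count (MySQL-style, days counted from year 0)
into a date; this patch only changes the returned type from DATE to
DATEV2. A sketch of the semantics, assuming MySQL's day numbering (not
code from the patch):

```python
from datetime import date, timedelta

def from_days(n):
    """MySQL-style FROM_DAYS sketch: day n counted from year 0.

    Python's proleptic calendar starts at 0001-01-01 (ordinal 1), which
    is day 366 in MySQL's numbering, hence the offset below.
    """
    return date(1, 1, 1) + timedelta(days=n - 366)
```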
---
 .../expressions/functions/executable/DateTimeExtractAndTransform.java | 4 ++--
 .../doris/nereids/trees/expressions/functions/scalar/FromDays.java| 4 ++--
 gensrc/script/doris_builtins_functions.py | 1 +
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java
index b6960d4384b..754c13e43d1 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java
@@ -389,7 +389,7 @@ public class DateTimeExtractAndTransform {
 /**
  * from_days.
  */
-@ExecFunction(name = "from_days", argTypes = {"INT"}, returnType = "DATE")
+@ExecFunction(name = "from_days", argTypes = {"INT"}, returnType = "DATEV2")
 public static Expression fromDays(IntegerLiteral n) {
 // doris treat AD as ordinary year but java LocalDateTime treat it 
as lunar year.
 LocalDateTime res = LocalDateTime.of(0, 1, 1, 0, 0, 0)
@@ -397,7 +397,7 @@ public class DateTimeExtractAndTransform {
 if (res.isBefore(LocalDateTime.of(0, 3, 1, 0, 0, 0))) {
 res = res.plusDays(-1);
 }
-return DateLiteral.fromJavaDateType(res);
+return DateV2Literal.fromJavaDateType(res);
 }
 
 @ExecFunction(name = "last_day", argTypes = {"DATE"}, returnType = "DATE")
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/scalar/FromDays.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/scalar/FromDays.java
index 7adf680c0ef..a2b5a420c34 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/scalar/FromDays.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/scalar/FromDays.java
@@ -23,7 +23,7 @@ import 
org.apache.doris.nereids.trees.expressions.functions.AlwaysNullable;
 import 
org.apache.doris.nereids.trees.expressions.functions.ExplicitlyCastableSignature;
 import org.apache.doris.nereids.trees.expressions.shape.UnaryExpression;
 import org.apache.doris.nereids.trees.expressions.visitor.ExpressionVisitor;
-import org.apache.doris.nereids.types.DateType;
+import org.apache.doris.nereids.types.DateV2Type;
 import org.apache.doris.nereids.types.IntegerType;
 
 import com.google.common.base.Preconditions;
@@ -38,7 +38,7 @@ public class FromDays extends ScalarFunction
 implements UnaryExpression, ExplicitlyCastableSignature, AlwaysNullable {
 
 public static final List<FunctionSignature> SIGNATURES = ImmutableList.of(
-FunctionSignature.ret(DateType.INSTANCE).args(IntegerType.INSTANCE)
+FunctionSignature.ret(DateV2Type.INSTANCE).args(IntegerType.INSTANCE)
 );
 
 /**
diff --git a/gensrc/script/doris_builtins_functions.py b/gensrc/script/doris_builtins_functions.py
index ee801bc7a1b..38c9f8ac886 100644
--- a/gensrc/script/doris_builtins_functions.py
+++ b/gensrc/script/doris_builtins_functions.py
@@ -922,6 +922,7 @@ visible_functions = {
 [['utc_timestamp'], 'DATETIME', [], 'ALWAYS_NOT_NULLABLE'],
 [['timestamp'], 'DATETIME', ['DATETIME'], 'ALWAYS_NULLABLE'],
 
+[['from_days'], 'DATEV2', ['INT'], 'ALWAYS_NULLABLE'],
 [['from_days'], 'DATE', ['INT'], 'ALWAYS_NULLABLE'],
 [['last_day'], 'DATE', ['DATETIME'], 'ALWAYS_NULLABLE'],
 [['last_day'], 'DATE', ['DATE'], 'ALWAYS_NULLABLE'],


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch branch-2.1 updated: [cherry-pick](branch2.1) fix week/yearweek function get wrong result (#36538)

2024-06-20 Thread zhangstar333
This is an automated email from the ASF dual-hosted git repository.

zhangstar333 pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 1a242b8ae09 [cherry-pick](branch2.1) fix week/yearweek function get wrong result (#36538)
1a242b8ae09 is described below

commit 1a242b8ae09cfd9ed02109729b0a94a00dd308a1
Author: zhangstar333 <87313068+zhangstar...@users.noreply.github.com>
AuthorDate: Thu Jun 20 15:48:19 2024 +0800

    [cherry-pick](branch2.1) fix week/yearweek function get wrong result (#36538)

## Proposed changes
cherry-pick from master #36000 #36159
---
 .../executable/DateTimeExtractAndTransform.java| 30 ++-
 .../functions/DateTimeExtractAndTransformTest.java | 59 ++
 .../suites/nereids_syntax_p0/explain.groovy|  6 +++
 3 files changed, 93 insertions(+), 2 deletions(-)

diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java
index e7a92354440..b6960d4384b 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/executable/DateTimeExtractAndTransform.java
@@ -687,7 +687,10 @@ public class DateTimeExtractAndTransform {
 return week(date.toJavaDateType(), mode.getIntValue());
 }
 
-private static Expression week(LocalDateTime localDateTime, int mode) {
+/**
+ * the impl of function week(date/datetime, mode)
+ */
+public static Expression week(LocalDateTime localDateTime, int mode) {
 switch (mode) {
 case 0: {
 return new TinyIntLiteral(
@@ -697,6 +700,13 @@ public class DateTimeExtractAndTransform {
 return new TinyIntLiteral((byte) localDateTime.get(WeekFields.ISO.weekOfYear()));
 }
 case 2: {
+// https://dev.mysql.com/doc/refman/8.4/en/date-and-time-functions.html#function_week
+// mode 2 weeks start on Sunday. Special case for 0000-01-01: it falls on a SATURDAY,
+// so the computed week of 52 would belong to the previous year, which is meaningless.
+if (checkIsSpecificDate(localDateTime)) {
+return new TinyIntLiteral((byte) 1);
+}
 return new TinyIntLiteral(
 (byte) localDateTime.get(WeekFields.of(DayOfWeek.SUNDAY, 7).weekOfWeekBasedYear()));
 }
@@ -757,9 +767,15 @@ public class DateTimeExtractAndTransform {
 return yearWeek(dateTime.toJavaDateType(), 0);
 }
 
-private static Expression yearWeek(LocalDateTime localDateTime, int mode) {
+/**
+ * the impl of function yearWeek(date/datetime, mode)
+ */
+public static Expression yearWeek(LocalDateTime localDateTime, int mode) {
 switch (mode) {
 case 0: {
+if (checkIsSpecificDate(localDateTime)) {
+return new IntegerLiteral(1);
+}
 return new IntegerLiteral(
 localDateTime.get(WeekFields.of(DayOfWeek.SUNDAY, 7).weekBasedYear()) * 100
 + localDateTime.get(
@@ -770,6 +786,9 @@ public class DateTimeExtractAndTransform {
 + localDateTime.get(WeekFields.ISO.weekOfWeekBasedYear()));
 }
 case 2: {
+if (checkIsSpecificDate(localDateTime)) {
+return new IntegerLiteral(1);
+}
 return new IntegerLiteral(
 localDateTime.get(WeekFields.of(DayOfWeek.SUNDAY, 7).weekBasedYear()) * 100
 + localDateTime.get(
@@ -810,6 +829,13 @@ public class DateTimeExtractAndTransform {
 }
 }
 
+/**
+ * 0000-01-01 is a special date which sometimes needs to be handled alone.
+ */
+private static boolean checkIsSpecificDate(LocalDateTime localDateTime) {
+return localDateTime.getYear() == 0 && localDateTime.getMonthValue() == 1 && localDateTime.getDayOfMonth() == 1;
+}
+
 @ExecFunction(name = "weekofyear", argTypes = {"DATETIMEV2"}, returnType = "TINYINT")
 public static Expression weekOfYear(DateTimeV2Literal dateTime) {
 return new TinyIntLiteral((byte) dateTime.toJavaDateType().get(WeekFields.ISO.weekOfWeekBasedYear()));
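The mode handling in this patch maps directly onto `java.time.temporal.WeekFields`. Below is a small sketch of the two week definitions involved; it is illustrative only (the method names are ours), showing why 0000-01-01 needs a special case:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.WeekFields;

public class WeekModeSketch {
    // ISO weeks (mode 1): Monday-first, week 1 holds at least 4 days.
    static int isoWeek(LocalDate d) {
        return d.get(WeekFields.ISO.weekOfWeekBasedYear());
    }

    // Sunday-first weeks with minimalDays = 7 (mode 2): week 1 is the
    // first full Sunday-to-Saturday week of the year.
    static int sundayFullWeek(LocalDate d) {
        return d.get(WeekFields.of(DayOfWeek.SUNDAY, 7).weekOfWeekBasedYear());
    }

    public static void main(String[] args) {
        // 2024-06-20 is a Thursday in ISO week 25
        if (isoWeek(LocalDate.of(2024, 6, 20)) != 25) throw new AssertionError();
        // 0000-01-01 really is a Saturday in Java's proleptic calendar,
        // so Sunday-first week arithmetic pushes it into the previous
        // week-based year, which is why the patch special-cases it
        if (LocalDate.of(0, 1, 1).getDayOfWeek() != DayOfWeek.SATURDAY) throw new AssertionError();
        System.out.println("ok");
    }
}
```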
diff --git a/fe/fe-core/src/test/java/org/apache/doris/nereids/trees/expressions/functions/DateTimeExtractAndTransformTest.java b/fe/fe-core/src/test/java/org/apache/doris/nereids/trees/expressions/functions/DateTimeExtractAndTransformTest.java
new fi

(doris) branch branch-2.0 updated: [fix](variable) modify @@auto_commit column type to BIGINT (#36584)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 132b9d615f8 [fix](variable) modify @@auto_commit column type to BIGINT (#36584)
132b9d615f8 is described below

commit 132b9d615f804984a134d716112edd0a2f95c84c
Author: Mingyu Chen 
AuthorDate: Thu Jun 20 16:02:27 2024 +0800

[fix](variable) modify @@auto_commit column type to BIGINT (#36584)

bp #33887 #33282
---
 .../java/org/apache/doris/qe/SessionVariable.java  |  11 ++-
 .../main/java/org/apache/doris/qe/VariableMgr.java | 109 +
 .../java/org/apache/doris/qe/VariableMgrTest.java  |  38 +++
 3 files changed, 113 insertions(+), 45 deletions(-)

diff --git a/fe/fe-core/src/main/java/org/apache/doris/qe/SessionVariable.java 
b/fe/fe-core/src/main/java/org/apache/doris/qe/SessionVariable.java
index cee326f0f99..0de84bfa16f 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/qe/SessionVariable.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/qe/SessionVariable.java
@@ -554,7 +554,9 @@ public class SessionVariable implements Serializable, Writable {
 public String resourceGroup = "";
 
 // this is used to make mysql client happy
-@VariableMgr.VarAttr(name = AUTO_COMMIT)
+// autocommit is actually a boolean value, but @@autocommit is of type BIGINT.
+// So we need to set convertBoolToLongMethod to make "select @@autocommit" happy.
+@VariableMgr.VarAttr(name = AUTO_COMMIT, convertBoolToLongMethod = "convertBoolToLong")
 public boolean autoCommit = true;
 
 // this is used to make c3p0 library happy
@@ -1669,10 +1671,6 @@ public class SessionVariable implements Serializable, Writable {
 return enableJoinReorderBasedCost;
 }
 
-public boolean isAutoCommit() {
-return autoCommit;
-}
-
 public boolean isTxReadonly() {
 return txReadonly;
 }
@@ -2470,6 +2468,9 @@ public class SessionVariable implements Serializable, Writable {
 }
 }
 
+public long convertBoolToLong(Boolean val) {
+return val ? 1 : 0;
+}
 
 public boolean isEnableFileCache() {
 return enableFileCache;
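The `convertBoolToLongMethod` attribute is resolved by name via reflection when `fillValue` reads the variable. A minimal sketch of that lookup pattern follows; the class and field here are stand-ins for illustration, not the real Doris types:

```java
import java.lang.reflect.Method;

public class VarConvertSketch {
    public static class SessionVars {
        // boolean internally, but MySQL clients expect @@autocommit as BIGINT
        public boolean autoCommit = true;

        public long convertBoolToLong(Boolean val) {
            return val ? 1 : 0;
        }
    }

    public static void main(String[] args) throws Exception {
        SessionVars vars = new SessionVars();
        // VariableMgr does the equivalent with the method name taken from
        // the VarAttr annotation (attr.convertBoolToLongMethod())
        Method m = SessionVars.class.getMethod("convertBoolToLong", Boolean.class);
        long val = (Long) m.invoke(vars, vars.autoCommit);
        if (val != 1L) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Resolving the converter by name keeps the annotation declarative: the variable stays a plain `boolean` field while the protocol layer reports it as BIGINT.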
diff --git a/fe/fe-core/src/main/java/org/apache/doris/qe/VariableMgr.java 
b/fe/fe-core/src/main/java/org/apache/doris/qe/VariableMgr.java
index 85224c6aef7..b8fd6810778 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/qe/VariableMgr.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/qe/VariableMgr.java
@@ -36,6 +36,7 @@ import org.apache.doris.nereids.trees.expressions.literal.Literal;
 import org.apache.doris.persist.GlobalVarPersistInfo;
 
 import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
 import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.ImmutableSortedMap;
 import com.google.common.collect.Lists;
@@ -489,47 +490,61 @@ public class VariableMgr {
 }
 
 private static void fillValue(Object obj, Field field, VariableExpr desc) {
-try {
-switch (field.getType().getSimpleName()) {
-case "boolean":
-desc.setType(Type.BOOLEAN);
-desc.setBoolValue(field.getBoolean(obj));
-break;
-case "byte":
-desc.setType(Type.TINYINT);
-desc.setIntValue(field.getByte(obj));
-break;
-case "short":
-desc.setType(Type.SMALLINT);
-desc.setIntValue(field.getShort(obj));
-break;
-case "int":
-desc.setType(Type.INT);
-desc.setIntValue(field.getInt(obj));
-break;
-case "long":
-desc.setType(Type.BIGINT);
-desc.setIntValue(field.getLong(obj));
-break;
-case "float":
-desc.setType(Type.FLOAT);
-desc.setFloatValue(field.getFloat(obj));
-break;
-case "double":
-desc.setType(Type.DOUBLE);
-desc.setFloatValue(field.getDouble(obj));
-break;
-case "String":
-desc.setType(Type.VARCHAR);
-desc.setStringValue((String) field.get(obj));
-break;
-default:
-desc.setType(Type.VARCHAR);
-desc.setStringValue("");
-break;
+VarAttr attr = field.getAnnotation(VarAttr.class);
+if (!Strings.isNullOrEmpty(attr.convertBoolToLongMethod())) {
+try {
+Preconditions.checkArgument(obj instanceof SessionVariable);
+long val = (Long) 
SessionVariable.class.ge

(doris) branch master updated (f7307875ae2 -> 294437c62ad)

2024-06-20 Thread gavinchou
This is an automated email from the ASF dual-hosted git repository.

gavinchou pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from f7307875ae2 [function](signature) add datev2 signature for from_days function (#36505)
 add 294437c62ad [fix](mow) Fix missing mow flag when streamload commit txn from BE introduced by #36237 (#36496)

No new revisions were added by this update.

Summary of changes:
 fe/fe-core/src/main/java/org/apache/doris/load/StreamLoadHandler.java | 1 +
 .../src/main/java/org/apache/doris/qe/InsertStreamTxnExecutor.java| 2 ++
 .../suites/schema_change_p0/test_schema_change_unique.groovy  | 4 
 3 files changed, 7 insertions(+)





(doris) branch branch-2.0 updated: [fix](short circurt) fix return default value issue #34186 (#36570)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 9a889ecc349 [fix](short circurt) fix return default value issue #34186 (#36570)
9a889ecc349 is described below

commit 9a889ecc349439f9eb00edd31f931463a29af666
Author: lw112 <131352377+felixw...@users.noreply.github.com>
AuthorDate: Thu Jun 20 17:48:51 2024 +0800

[fix](short circurt) fix return default value issue #34186 (#36570)
---
 .../vec/data_types/serde/data_type_nullable_serde.cpp |  10 +-
 .../test_compaction_uniq_keys_row_store.out   |   8 
 .../compaction/test_vertical_compaction_agg_keys.out  |   1 +
 .../compaction/test_vertical_compaction_uniq_keys.out |   1 +
 .../insert_into_table/partial_update_seq_col.out  | Bin 1412 -> 1416 bytes
 regression-test/data/point_query_p0/test_rowstore.out |   6 ++
 .../test_partial_update_insert_seq_col.out| Bin 1412 -> 1416 bytes
 .../partial_update/test_partial_update_seq_col.out| Bin 1411 -> 1415 bytes
 .../test_partial_update_seq_col_delete.out| Bin 1526 -> 1530 bytes
 .../suites/point_query_p0/test_rowstore.groovy|   9 +
 10 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/be/src/vec/data_types/serde/data_type_nullable_serde.cpp b/be/src/vec/data_types/serde/data_type_nullable_serde.cpp
index b96ef441026..d87ca7afff4 100644
--- a/be/src/vec/data_types/serde/data_type_nullable_serde.cpp
+++ b/be/src/vec/data_types/serde/data_type_nullable_serde.cpp
@@ -225,13 +225,13 @@ void DataTypeNullableSerDe::write_one_cell_to_jsonb(const IColumn& column, Jsonb
 Arena* mem_pool, int32_t col_id,
 int row_num) const {
 auto& nullable_col = assert_cast<const ColumnNullable&>(column);
+result.writeKey(col_id);
 if (nullable_col.is_null_at(row_num)) {
-// do not insert to jsonb
-return;
+result.writeNull();
+} else {
+nested_serde->write_one_cell_to_jsonb(nullable_col.get_nested_column(), result, mem_pool,
+  col_id, row_num);
 }
-result.writeKey(col_id);
-nested_serde->write_one_cell_to_jsonb(nullable_col.get_nested_column(), result, mem_pool,
-  col_id, row_num);
 }
 
 void DataTypeNullableSerDe::read_one_cell_from_jsonb(IColumn& column, const JsonbValue* arg) const {
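The behavioral point of this fix: a key that is simply omitted cannot be told apart from a column that was never stored, so readers fell back to default values instead of NULL (visible in the `.out` changes below, where `1970-01-01T00:00` defaults become `\N`). A toy Java encoder showing the before/after semantics; this is purely illustrative and not the JSONB row format:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NullCellSketch {
    // Encode a row as colId -> value. The old behavior skipped null cells
    // entirely; the new behavior writes the key with an explicit null.
    static Map<Integer, String> encode(List<String> cells, boolean explicitNulls) {
        Map<Integer, String> out = new HashMap<>();
        for (int colId = 0; colId < cells.size(); colId++) {
            String v = cells.get(colId);
            if (v == null && !explicitNulls) {
                continue; // old: key absent, reader substitutes a default
            }
            out.put(colId, v); // new: key present, value may be null
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> row = Arrays.asList("Beijing", null);
        // old encoding: column 1 looks like it was never stored
        if (encode(row, false).containsKey(1)) throw new AssertionError();
        // new encoding: column 1 is present and explicitly NULL
        Map<Integer, String> fixed = encode(row, true);
        if (!fixed.containsKey(1) || fixed.get(1) != null) throw new AssertionError();
        System.out.println("ok");
    }
}
```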
diff --git a/regression-test/data/compaction/test_compaction_uniq_keys_row_store.out b/regression-test/data/compaction/test_compaction_uniq_keys_row_store.out
index cedf0dbe9bd..7c163c62d33 100644
--- a/regression-test/data/compaction/test_compaction_uniq_keys_row_store.out
+++ b/regression-test/data/compaction/test_compaction_uniq_keys_row_store.out
@@ -18,10 +18,10 @@
 3  2017-10-01  2017-10-01  2017-10-01T11:11:11.026 
2017-10-01T11:11:11.016 Beijing 10  1   2020-01-04T00:00
2020-01-04T00:002017-10-01T11:11:11.110 2017-10-01T11:11:11.150111  
2020-01-04T00:001   33  21
 
 -- !point_select --
-3  2017-10-01  2017-10-01  2017-10-01T11:11:11.027 
2017-10-01T11:11:11.017 Beijing 10  1   1970-01-01T00:00
1970-01-01T00:001970-01-01T00:00:00.111 1970-01-01T00:00
2020-01-05T00:001   34  20
+3  2017-10-01  2017-10-01  2017-10-01T11:11:11.027 
2017-10-01T11:11:11.017 Beijing 10  1   \N  \N  \N  \N  
2020-01-05T00:001   34  20
 
 -- !point_select --
-4  2017-10-01  2017-10-01  2017-10-01T11:11:11.028 
2017-10-01T11:11:11.018 Beijing 10  1   1970-01-01T00:00
1970-01-01T00:001970-01-01T00:00:00.111 1970-01-01T00:00
2020-01-05T00:001   34  20
+4  2017-10-01  2017-10-01  2017-10-01T11:11:11.028 
2017-10-01T11:11:11.018 Beijing 10  1   \N  \N  \N  \N  
2020-01-05T00:001   34  20
 
 -- !point_select --
 1  2017-10-01  2017-10-01  2017-10-01T11:11:11.021 
2017-10-01T11:11:11.011 Beijing 10  1   2020-01-01T00:00
2020-01-01T00:002017-10-01T11:11:11.170 2017-10-01T11:11:11.110111  
2020-01-01T00:001   30  20
@@ -42,8 +42,8 @@
 3  2017-10-01  2017-10-01  2017-10-01T11:11:11.026 
2017-10-01T11:11:11.016 Beijing 10  1   2020-01-04T00:00
2020-01-04T00:002017-10-01T11:11:11.110 2017-10-01T11:11:11.150111  
2020-01-04T00:001   33  21
 
 -- !point_select --
-3  2017-10-01  2017-10-01  2017-10-01T11:11:11.027 
2017-10-01T11:11:11.017 Beijing 10  1   1970-01-01T00:00
1970-01-01T00:001970-01-01T00:00:00.111 1970-01-01T00:00
2020-01-

(doris) branch branch-2.1 updated: [branch-2.1](auto-partition) fix auto partition expr change unexpected (#36345) (#36514)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 6df1a9ab753 [branch-2.1](auto-partition) fix auto partition expr change unexpected (#36345) (#36514)
6df1a9ab753 is described below

commit 6df1a9ab753e1680bb2f001c2ee6a1df01749ca9
Author: zclllyybb 
AuthorDate: Thu Jun 20 17:50:31 2024 +0800

    [branch-2.1](auto-partition) fix auto partition expr change unexpected (#36345) (#36514)

pick #36345
---
 .../org/apache/doris/catalog/PartitionInfo.java|  3 ++-
 .../doris/analysis/PartitionPruneTestBase.java |  4 ++-
 .../doris/analysis/RangePartitionPruneTest.java|  3 ---
 .../test_date_function_prune.groovy| 31 ++
 4 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java
index c899a4e8917..434812b07d3 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java
@@ -249,8 +249,9 @@ public class PartitionInfo implements Writable {
 return isAutoCreatePartitions;
 }
 
+// forbid changing metadata.
 public ArrayList getPartitionExprs() {
-return this.partitionExprs;
+return Expr.cloneList(this.partitionExprs);
 }
 
 public void checkPartitionItemListsMatch(List list1, List list2) throws DdlException {
diff --git a/fe/fe-core/src/test/java/org/apache/doris/analysis/PartitionPruneTestBase.java b/fe/fe-core/src/test/java/org/apache/doris/analysis/PartitionPruneTestBase.java
index 162a81ccb88..8a9d9787731 100644
--- a/fe/fe-core/src/test/java/org/apache/doris/analysis/PartitionPruneTestBase.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/analysis/PartitionPruneTestBase.java
@@ -34,7 +34,9 @@ public abstract class PartitionPruneTestBase extends TestWithFeService {
 }
 
 private void assertExplainContains(String sql, String subString) throws Exception {
-Assert.assertTrue(String.format("sql=%s, expectResult=%s", sql, subString),
+Assert.assertTrue(
+String.format("sql=%s, expectResult=%s, but got %s", sql, subString,
+getSQLPlanOrErrorMsg("explain " + sql)),
 getSQLPlanOrErrorMsg("explain " + sql).contains(subString));
 }
 
diff --git a/fe/fe-core/src/test/java/org/apache/doris/analysis/RangePartitionPruneTest.java b/fe/fe-core/src/test/java/org/apache/doris/analysis/RangePartitionPruneTest.java
index 4cd7f8d2049..7bce2526df0 100644
--- a/fe/fe-core/src/test/java/org/apache/doris/analysis/RangePartitionPruneTest.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/analysis/RangePartitionPruneTest.java
@@ -206,9 +206,6 @@ public class RangePartitionPruneTest extends PartitionPruneTestBase {
 "partitions=6/8");
 addCase("select /*+ SET_VAR(enable_nereids_planner=false) */ * from test.test_to_date_trunc where event_day= \"2023-08-07 11:00:00\" ",
 "partitions=1/2");
-addCase("select /*+ SET_VAR(enable_nereids_planner=false) */ * from test.test_to_date_trunc where date_trunc(event_day, \"day\")= \"2023-08-07 11:00:00\" ",
-"partitions=1/2");
-
 }
 
 
diff --git a/regression-test/suites/nereids_rules_p0/partition_prune/test_date_function_prune.groovy b/regression-test/suites/nereids_rules_p0/partition_prune/test_date_function_prune.groovy
index c126206eba0..c6f122e3c87 100644
--- a/regression-test/suites/nereids_rules_p0/partition_prune/test_date_function_prune.groovy
+++ b/regression-test/suites/nereids_rules_p0/partition_prune/test_date_function_prune.groovy
@@ -91,4 +91,35 @@ suite("test_date_function_prune") {
 sql "select * from dp where date_time > str_to_date('2020-01-02','%Y-%m-%d')"
 contains("partitions=2/3 (p2,p3)")
 }
+
+sql "drop table if exists test_to_date_trunc"
+sql """
+CREATE TABLE test_to_date_trunc(
+event_day DATETIME NOT NULL
+)
+DUPLICATE KEY(event_day)
+AUTO PARTITION BY range (date_trunc(event_day, "day")) (
+PARTITION `p20230807` values [(20230807 ), (20230808 )),
+PARTITION `p20020106` values [(20020106 ), (20020107 ))
+)
+DISTRIBUTED BY HASH(event_day) BUCKETS 4
+PROPERTIES("replication_num" = "1");
+"""
+explain {
+sql """ select /*+ SET_VAR(enable_nereids_planner=false) */ * from test_to_date_trunc where date_trunc(event_day, "day")= "2023-08-07 11:00:00" """
+contains("partitions=0/2")
+}
+explain {
+sql """ select * from test_to_date_trunc where date_trunc(event_day, "day")= "2023-08-07 11:00:00" """
+  

(doris) branch branch-2.1 updated: [branch-2.1](auto-partition) Fix auto partition load failure in multi replica (#36586)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new bd47d5a6816 [branch-2.1](auto-partition) Fix auto partition load failure in multi replica (#36586)
bd47d5a6816 is described below

commit bd47d5a68164e26a09247baa3d749b5c8865c715
Author: zclllyybb 
AuthorDate: Thu Jun 20 17:51:18 2024 +0800

    [branch-2.1](auto-partition) Fix auto partition load failure in multi replica (#36586)

this pr
1. picked #35630, which was reverted #36098 before.
2. picked #36344 from master

these two pr fixed existing bug about auto partition load.

-

Co-authored-by: Kaijie Chen 
---
 be/src/exec/tablet_info.cpp|  17 +--
 be/src/runtime/load_channel.cpp|  28 -
 be/src/runtime/load_channel.h  |  11 +-
 be/src/runtime/load_channel_mgr.cpp|   8 --
 be/src/runtime/load_stream.cpp |   2 +-
 be/src/runtime/load_stream.h   |   4 +
 be/src/runtime/tablets_channel.cpp |  55 +++--
 be/src/runtime/tablets_channel.h   |  10 +-
 be/src/vec/sink/load_stream_map_pool.cpp   |  11 +-
 be/src/vec/sink/load_stream_map_pool.h |   4 +-
 be/src/vec/sink/load_stream_stub.cpp   |  13 +-
 be/src/vec/sink/load_stream_stub.h |  10 +-
 be/src/vec/sink/writer/vtablet_writer.cpp  | 136 +++--
 be/src/vec/sink/writer/vtablet_writer.h|  67 ++
 be/src/vec/sink/writer/vtablet_writer_v2.cpp   |  60 ++---
 be/src/vec/sink/writer/vtablet_writer_v2.h |   2 +
 .../apache/doris/catalog/ListPartitionItem.java|   2 +-
 .../org/apache/doris/catalog/PartitionKey.java |   7 ++
 .../apache/doris/catalog/RangePartitionItem.java   |   6 +-
 .../apache/doris/datasource/InternalCatalog.java   |   4 +-
 .../org/apache/doris/planner/OlapTableSink.java| 111 -
 .../apache/doris/service/FrontendServiceImpl.java  |  14 +--
 gensrc/proto/internal_service.proto|   3 +
 gensrc/thrift/Descriptors.thrift   |   1 +
 .../sql/two_instance_correctness.out   |   4 +
 .../test_auto_range_partition.groovy   |   3 +-
 .../auto_partition/sql/multi_thread_load.groovy|   2 +-
 .../sql/two_instance_correctness.groovy|  45 +++
 28 files changed, 492 insertions(+), 148 deletions(-)

diff --git a/be/src/exec/tablet_info.cpp b/be/src/exec/tablet_info.cpp
index 62ff0b2fcce..e32e9c9efcf 100644
--- a/be/src/exec/tablet_info.cpp
+++ b/be/src/exec/tablet_info.cpp
@@ -388,18 +388,21 @@ Status VOlapTablePartitionParam::init() {
 // for both auto/non-auto partition table.
 _is_in_partition = _part_type == TPartitionType::type::LIST_PARTITIONED;
 
-// initial partitions
+// initial partitions. if we meet dummy partitions that exist only to open BE nodes, do not generate lookup keys for them
 for (const auto& t_part : _t_param.partitions) {
 VOlapTablePartition* part = nullptr;
 RETURN_IF_ERROR(generate_partition_from(t_part, part));
 _partitions.emplace_back(part);
-if (_is_in_partition) {
-for (auto& in_key : part->in_keys) {
-_partitions_map->emplace(std::tuple {in_key.first, in_key.second, false}, part);
+
+if (!_t_param.partitions_is_fake) {
+if (_is_in_partition) {
+for (auto& in_key : part->in_keys) {
+_partitions_map->emplace(std::tuple {in_key.first, in_key.second, false}, part);
+}
+} else {
+_partitions_map->emplace(
+std::tuple {part->end_key.first, part->end_key.second, false}, part);
 }
-} else {
-_partitions_map->emplace(std::tuple {part->end_key.first, part->end_key.second, false},
- part);
 }
 }
 
diff --git a/be/src/runtime/load_channel.cpp b/be/src/runtime/load_channel.cpp
index 146575feac9..3d8c8e1dbf3 100644
--- a/be/src/runtime/load_channel.cpp
+++ b/be/src/runtime/load_channel.cpp
@@ -33,11 +33,11 @@ namespace doris {
 bvar::Adder g_loadchannel_cnt("loadchannel_cnt");
 
 LoadChannel::LoadChannel(const UniqueId& load_id, int64_t timeout_s, bool is_high_priority,
- const std::string& sender_ip, int64_t backend_id, bool enable_profile)
+ std::string sender_ip, int64_t backend_id, bool enable_profile)
 : _load_id(load_id),
   _timeout_s(timeout_s),
   _is_high_priority(is_high_priority),
-  _sender_ip(sender_ip),
+  _sender_ip(std::move(sender_ip)),
   _backend_id(backend_id),
   _enable_profile(e

(doris) branch branch-2.0 updated: [branch-2.0](colocate group) fix colocate group always exclude the same host #33823 (#36503)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 6b31a7db6c5 [branch-2.0](colocate group) fix colocate group always exclude the same host #33823 (#36503)
6b31a7db6c5 is described below

commit 6b31a7db6c54c6ea28d7e65604ae74d63d58f6b0
Author: yujun 
AuthorDate: Thu Jun 20 17:52:19 2024 +0800

    [branch-2.0](colocate group) fix colocate group always exclude the same host #33823 (#36503)
---
 .../clone/ColocateTableCheckerAndBalancer.java | 25 ++-
 .../doris/cluster/DecommissionBackendTest.java | 86 +-
 .../apache/doris/utframe/TestWithFeService.java|  6 +-
 3 files changed, 110 insertions(+), 7 deletions(-)

diff --git a/fe/fe-core/src/main/java/org/apache/doris/clone/ColocateTableCheckerAndBalancer.java b/fe/fe-core/src/main/java/org/apache/doris/clone/ColocateTableCheckerAndBalancer.java
index 456701213ba..1a085ab8104 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/clone/ColocateTableCheckerAndBalancer.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/clone/ColocateTableCheckerAndBalancer.java
@@ -859,6 +859,8 @@ public class ColocateTableCheckerAndBalancer extends MasterDaemon {
 
 int targetSeqIndex = -1;
 long minDataSizeDiff = Long.MAX_VALUE;
+boolean destBeContainsAllBuckets = true;
+boolean theSameHostContainsAllBuckets = true;
 for (int seqIndex : seqIndexes) {
 // the bucket index.
 // eg: 0 / 3 = 0, so that the bucket index of the 4th backend id in flatBackendsPerBucketSeq is 0.
@@ -866,9 +868,15 @@ public class ColocateTableCheckerAndBalancer extends MasterDaemon {
 List backendsSet = backendsPerBucketSeq.get(bucketIndex);
 List hostsSet = hostsPerBucketSeq.get(bucketIndex);
 // the replicas of a tablet can not locate in same Backend or same host
-if (backendsSet.contains(destBeId) || hostsSet.contains(destBe.getHost())) {
+if (backendsSet.contains(destBeId)) {
 continue;
 }
+destBeContainsAllBuckets = false;
+
+if (!Config.allow_replica_on_same_host && hostsSet.contains(destBe.getHost())) {
+continue;
+}
+theSameHostContainsAllBuckets = false;
 
 Preconditions.checkState(backendsSet.contains(srcBeId), srcBeId);
 long bucketDataSize =
@@ -895,8 +903,19 @@ public class ColocateTableCheckerAndBalancer extends MasterDaemon {
 
 if (targetSeqIndex < 0) {
 // we use next node as dst node
-LOG.info("unable to replace backend {} with backend {} in colocate group {}",
-srcBeId, destBeId, groupId);
+String failedReason;
+if (destBeContainsAllBuckets) {
+failedReason = "dest be contains all the same buckets";
+} else if (theSameHostContainsAllBuckets) {
+failedReason = "dest be's host contains all the same buckets "
++ "and Config.allow_replica_on_same_host=false";
+} else {
+failedReason = "dest be has no fit path, maybe disk usage is exceeds "
++ "Config.storage_high_watermark_usage_percent";
+}
+LOG.info("unable to replace backend {} with dest backend {} in colocate group {}, "
++ "failed reason: {}",
+srcBeId, destBeId, groupId, failedReason);
 continue;
 }
 
diff --git a/fe/fe-core/src/test/java/org/apache/doris/cluster/DecommissionBackendTest.java b/fe/fe-core/src/test/java/org/apache/doris/cluster/DecommissionBackendTest.java
index e689723cdf8..79216f28c40 100644
--- a/fe/fe-core/src/test/java/org/apache/doris/cluster/DecommissionBackendTest.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/cluster/DecommissionBackendTest.java
@@ -22,6 +22,10 @@ import org.apache.doris.catalog.Database;
 import org.apache.doris.catalog.Env;
 import org.apache.doris.catalog.MaterializedIndex;
 import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Replica;
+import org.apache.doris.catalog.Tablet;
+import org.apache.doris.clone.RebalancerTestUtil;
 import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.Config;
 import org.apache.doris.common.FeConstants;
@@ -39,7 +43,7 @@ import java.util.List;
 public class DecommissionBac

(doris) branch branch-2.0 updated: [fix](in expr) fix error result when in expr has null value and left expr is 0 #36024 (#36585)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 0c1247f580a [fix](in expr) fix error result when in expr has null value and left expr is 0 #36024 (#36585)
0c1247f580a is described below

commit 0c1247f580ac2d968632428408b45ec3b0ccd60b
Author: Mryange <59914473+mrya...@users.noreply.github.com>
AuthorDate: Thu Jun 20 17:54:03 2024 +0800

    [fix](in expr) fix error result when in expr has null value and left expr is 0 #36024 (#36585)
---
 be/src/vec/functions/in.h  | 13 +---
 .../data/nereids_p0/sql_functions/test_in_expr.out |  3 +++
 .../nereids_p0/sql_functions/test_in_expr.groovy   | 23 ++
 3 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/be/src/vec/functions/in.h b/be/src/vec/functions/in.h
index 18a5f86e1af..de6e72d0747 100644
--- a/be/src/vec/functions/in.h
+++ b/be/src/vec/functions/in.h
@@ -272,8 +272,9 @@ private:
 continue;
 }
 
-std::unique_ptr hybrid_set(
-create_set(context->get_arg_type(0)->type, set_columns.size()));
+std::vector set_datas;
+// To comply with the SQL standard, IN() returns NULL not only if the expression on the left hand side is NULL,
+// but also if no match is found in the list and one of the expressions in the list is NULL.
 bool null_in_set = false;
 
 for (const auto& set_column : set_columns) {
@@ -281,9 +282,15 @@ private:
 if (set_data.data == nullptr) {
 null_in_set = true;
 } else {
-hybrid_set->insert((void*)(set_data.data), set_data.size);
+set_datas.push_back(set_data);
 }
 }
+std::unique_ptr hybrid_set(
+create_set(context->get_arg_type(0)->type, set_datas.size()));
+for (auto& set_data : set_datas) {
+hybrid_set->insert((void*)(set_data.data), set_data.size);
+}
+
 vec_res[i] = negative ^ hybrid_set->find((void*)ref_data.data, ref_data.size);
 if (null_in_set) {
 vec_null_map_to[i] = negative == vec_res[i];
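The comment in the patch states the rule precisely: `x IN (...)` is NULL when the left side is NULL, or when no element matches and the list contains a NULL. A scalar sketch of that three-valued logic, using Java `Boolean` with `null` standing in for SQL NULL (illustrative only, not the vectorized Doris code):

```java
import java.util.Arrays;
import java.util.List;

public class SqlInSketch {
    // Three-valued IN: Boolean.TRUE, Boolean.FALSE, or null (SQL NULL)
    static Boolean sqlIn(Integer left, List<Integer> set) {
        if (left == null) {
            return null; // NULL on the left is always NULL
        }
        boolean nullInSet = false;
        for (Integer v : set) {
            if (v == null) {
                nullInSet = true;
            } else if (v.equals(left)) {
                return Boolean.TRUE;
            }
        }
        // no match: NULL if the list held a NULL, otherwise FALSE
        return nullInSet ? null : Boolean.FALSE;
    }

    public static void main(String[] args) {
        // the regression case: 0 IN (c1, NULL) with c1 = NULL must be NULL
        if (sqlIn(0, Arrays.asList((Integer) null, null)) != null) throw new AssertionError();
        if (!Boolean.TRUE.equals(sqlIn(1, Arrays.asList(1, null)))) throw new AssertionError();
        if (!Boolean.FALSE.equals(sqlIn(2, Arrays.asList(1, 3)))) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The bug arose because sizing the hybrid set from `set_columns.size()` counted the NULL entries; deferring construction until the non-null values are collected keeps the set size and the null flag consistent.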
diff --git a/regression-test/data/nereids_p0/sql_functions/test_in_expr.out b/regression-test/data/nereids_p0/sql_functions/test_in_expr.out
index 31d6bb5b1ac..4881e63f223 100644
--- a/regression-test/data/nereids_p0/sql_functions/test_in_expr.out
+++ b/regression-test/data/nereids_p0/sql_functions/test_in_expr.out
@@ -53,3 +53,6 @@ a
 b
 d
 
+-- !select --
+\N
+
diff --git a/regression-test/suites/nereids_p0/sql_functions/test_in_expr.groovy b/regression-test/suites/nereids_p0/sql_functions/test_in_expr.groovy
index 8f0d3015cab..b6c1b7a7d9a 100644
--- a/regression-test/suites/nereids_p0/sql_functions/test_in_expr.groovy
+++ b/regression-test/suites/nereids_p0/sql_functions/test_in_expr.groovy
@@ -115,4 +115,27 @@ suite("test_in_expr", "query") {
 sql """DROP TABLE IF EXISTS ${nullTableName}"""
 sql """DROP TABLE IF EXISTS ${notNullTableName}"""
 
+sql """DROP TABLE IF EXISTS table_with_null"""
+
+sql """
+  CREATE TABLE IF NOT EXISTS table_with_null (
+  `id` INT ,
+  `c1` INT
+) ENGINE=OLAP
+DUPLICATE KEY(`id`)
+DISTRIBUTED BY HASH(`id`) BUCKETS 1
+PROPERTIES (
+"replication_allocation" = "tag.location.default: 1",
+"storage_format" = "V2"
+);
+"""
+
+sql """ insert into table_with_null values(1, null); """
+
+qt_select """ select 0 in (c1, null) from table_with_null;"""
+
+
+
+
+
 }





(doris) branch branch-2.0 updated: [chore](be) Support config max message size for be thrift server #36467 (#36591)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 950e667c5ff [chore](be) Support config max message size for be thrift 
server #36467 (#36591)
950e667c5ff is described below

commit 950e667c5ff07a0da7d7307a8ea6aa3a184cd181
Author: walter 
AuthorDate: Thu Jun 20 17:54:49 2024 +0800

[chore](be) Support config max message size for be thrift server #36467 
(#36591)
---
 be/src/common/config.cpp   |  3 ++
 be/src/common/config.h |  3 ++
 be/src/runtime/snapshot_loader.cpp |  2 +-
 be/src/util/thrift_server.cpp  | 66 +-
 4 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/be/src/common/config.cpp b/be/src/common/config.cpp
index 34181f4d256..604535825fb 100644
--- a/be/src/common/config.cpp
+++ b/be/src/common/config.cpp
@@ -240,6 +240,9 @@ DEFINE_mInt32(thrift_connect_timeout_seconds, "3");
 DEFINE_mInt32(fetch_rpc_timeout_seconds, "30");
 // default thrift client retry interval (in milliseconds)
 DEFINE_mInt64(thrift_client_retry_interval_ms, "1000");
+// max message size of thrift request
+// default: 100 * 1024 * 1024
+DEFINE_mInt64(thrift_max_message_size, "104857600");
 // max row count number for single scan range, used in segmentv1
 DEFINE_mInt32(doris_scan_range_row_count, "524288");
 // max bytes number for single scan range, used in segmentv2
diff --git a/be/src/common/config.h b/be/src/common/config.h
index 7665b4866dd..6e7f2ff490a 100644
--- a/be/src/common/config.h
+++ b/be/src/common/config.h
@@ -285,6 +285,9 @@ DECLARE_mInt32(thrift_connect_timeout_seconds);
 DECLARE_mInt32(fetch_rpc_timeout_seconds);
 // default thrift client retry interval (in milliseconds)
 DECLARE_mInt64(thrift_client_retry_interval_ms);
+// max message size of thrift request
+// default: 100 * 1024 * 1024
+DECLARE_mInt64(thrift_max_message_size);
 // max row count number for single scan range, used in segmentv1
 DECLARE_mInt32(doris_scan_range_row_count);
 // max bytes number for single scan range, used in segmentv2
diff --git a/be/src/runtime/snapshot_loader.cpp 
b/be/src/runtime/snapshot_loader.cpp
index da22a7c9167..7c2c68de3dd 100644
--- a/be/src/runtime/snapshot_loader.cpp
+++ b/be/src/runtime/snapshot_loader.cpp
@@ -93,7 +93,7 @@ Status SnapshotLoader::init(TStorageBackendType::type type, 
const std::string& l
 RETURN_IF_ERROR(io::BrokerFileSystem::create(_broker_addr, _prop, 
&fs));
 _remote_fs = std::move(fs);
 } else {
-return Status::InternalError("Unknown storage tpye: {}", type);
+return Status::InternalError("Unknown storage type: {}", type);
 }
 return Status::OK();
 }
diff --git a/be/src/util/thrift_server.cpp b/be/src/util/thrift_server.cpp
index 3bd25ab61f3..2d753c58918 100644
--- a/be/src/util/thrift_server.cpp
+++ b/be/src/util/thrift_server.cpp
@@ -34,6 +34,7 @@
 // IWYU pragma: no_include 
 #include  // IWYU pragma: keep
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -59,6 +60,28 @@ 
DEFINE_GAUGE_METRIC_PROTOTYPE_3ARG(thrift_current_connections, MetricUnit::CONNE
 DEFINE_COUNTER_METRIC_PROTOTYPE_3ARG(thrift_connections_total, 
MetricUnit::CONNECTIONS,
  "Total connections made over the lifetime 
of this server");
 
+// Nonblocking Server socket implementation of TNonblockingServerTransport. 
Wrapper around a unix
+// socket listen and accept calls.
+class ImprovedNonblockingServerSocket : public 
apache::thrift::transport::TNonblockingServerSocket {
+using TConfiguration = apache::thrift::TConfiguration;
+using TSocket = apache::thrift::transport::TSocket;
+
+public:
+// Constructor.
+ImprovedNonblockingServerSocket(int port)
+: TNonblockingServerSocket(port),
+  
config(std::make_shared(config::thrift_max_message_size)) {}
+~ImprovedNonblockingServerSocket() override = default;
+
+protected:
+std::shared_ptr createSocket(THRIFT_SOCKET clientSocket) override 
{
+return std::make_shared(clientSocket, config);
+}
+
+private:
+std::shared_ptr config;
+};
+
 // Helper class that starts a server in a separate thread, and handles
 // the inter-thread communication to monitor whether it started
 // correctly.
@@ -69,26 +92,26 @@ public:
 : _thrift_server(thrift_server), _signal_fired(false) {}
 
 // friendly to code style
-virtual ~ThriftServerEventProcessor() {}
+~ThriftServerEventProcessor() override = default;
 
 // Called by TNonBlockingServer when server has acquired its resources and 
is ready to
 // serve, and signals to StartAndWaitForServer that start-up is finished.
 // From TServerEventHandler.
-virtual void preServe();
+void preServe() override;
 
 // Called when a client connects; we create per-client state and call an

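The `ImprovedNonblockingServerSocket` above illustrates a common pattern: subclass the server socket and override its socket factory so every accepted connection shares one configuration object carrying the message-size cap. A language-neutral sketch of the same idea (Python; the names mirror the diff but are otherwise illustrative, not the real Thrift C++ API):

```python
THRIFT_MAX_MESSAGE_SIZE = 104857600  # the default above, i.e. 100 * 1024 * 1024


class TConfiguration:
    def __init__(self, max_message_size):
        self.max_message_size = max_message_size


class ServerSocket:
    def create_socket(self, client_fd):
        # Base factory: wrap an accepted fd without any shared config.
        return {"fd": client_fd, "config": None}


class ImprovedServerSocket(ServerSocket):
    def __init__(self):
        # One shared config object; every accepted socket references it.
        self.config = TConfiguration(THRIFT_MAX_MESSAGE_SIZE)

    def create_socket(self, client_fd):
        sock = super().create_socket(client_fd)
        sock["config"] = self.config
        return sock
```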
(doris) branch branch-2.0 updated: [chore](be) Improve ingesting binlog error checking #36487 (#36593)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 94da4c928ae [chore](be) Improve ingesting binlog error checking #36487 
(#36593)
94da4c928ae is described below

commit 94da4c928ae2350d7906fecd44a52492dc532bdf
Author: walter 
AuthorDate: Thu Jun 20 17:55:29 2024 +0800

[chore](be) Improve ingesting binlog error checking #36487 (#36593)
---
 be/src/service/backend_service.cpp | 23 +++
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/be/src/service/backend_service.cpp 
b/be/src/service/backend_service.cpp
index 745a47d89c0..2221eea5428 100644
--- a/be/src/service/backend_service.cpp
+++ b/be/src/service/backend_service.cpp
@@ -156,10 +156,25 @@ void _ingest_binlog(IngestBinlogArg* arg) {
 }
 
 std::vector binlog_info_parts = strings::Split(binlog_info, 
":");
-// TODO(Drogon): check binlog info content is right
-DCHECK(binlog_info_parts.size() == 2);
-const std::string& remote_rowset_id = binlog_info_parts[0];
-int64_t num_segments = std::stoll(binlog_info_parts[1]);
+if (binlog_info_parts.size() != 2) {
+status = Status::RuntimeError("failed to parse binlog info into 2 
parts: {}", binlog_info);
+LOG(WARNING) << "failed to get binlog info from " << 
get_binlog_info_url
+ << ", status=" << status.to_string();
+status.to_thrift(&tstatus);
+return;
+}
+std::string remote_rowset_id = std::move(binlog_info_parts[0]);
+int64_t num_segments = -1;
+try {
+num_segments = std::stoll(binlog_info_parts[1]);
+} catch (std::exception& e) {
+status = Status::RuntimeError("failed to parse num segments from 
binlog info {}: {}",
+  binlog_info, e.what());
+LOG(WARNING) << "failed to get binlog info from " << 
get_binlog_info_url
+ << ", status=" << status;
+status.to_thrift(&tstatus);
+return;
+}
 
 // Step 4: get rowset meta
 auto get_rowset_meta_url = fmt::format(

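The hardened checks above replace a bare `DCHECK` with runtime validation of the `remote_rowset_id:num_segments` string. A small sketch of the same validation logic (Python, hypothetical helper; the real code reports a Thrift status instead of raising):

```python
def parse_binlog_info(binlog_info: str):
    """Split 'remote_rowset_id:num_segments' and validate both parts."""
    parts = binlog_info.split(":")
    if len(parts) != 2:
        raise ValueError(
            f"failed to parse binlog info into 2 parts: {binlog_info}")
    rowset_id = parts[0]
    try:
        num_segments = int(parts[1])
    except ValueError as e:
        raise ValueError(
            f"failed to parse num segments from binlog info {binlog_info}: {e}")
    return rowset_id, num_segments
```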




(doris) branch branch-2.1 updated: [fix](nereids)NullSafeEqualToEqual rule should keep <=> unchanged if it has none-literal child (#36523)

2024-06-20 Thread morrysnow
This is an automated email from the ASF dual-hosted git repository.

morrysnow pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 64a94e883de [fix](nereids)NullSafeEqualToEqual rule should keep <=> 
unchanged if it has none-literal child (#36523)
64a94e883de is described below

commit 64a94e883de82748a3ef400b7487012f3d9b30a9
Author: starocean999 <40539150+starocean...@users.noreply.github.com>
AuthorDate: Thu Jun 20 17:55:36 2024 +0800

[fix](nereids)NullSafeEqualToEqual rule should keep <=> unchanged if it has 
none-literal child (#36523)

pick from master #36521

convert:
expr <=> null to expr is null
null <=> null to true
null <=> 1 to false
literal <=> literal to literal = literal ( 1 <=> 2 to 1 = 2 )
others are unchanged.
---
 .../expression/rules/NullSafeEqualToEqual.java | 26 +++
 .../expression/rules/NullSafeEqualToEqualTest.java | 38 +-
 2 files changed, 41 insertions(+), 23 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
index e8eedb1e198..16c4663a1ed 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
@@ -24,15 +24,16 @@ import 
org.apache.doris.nereids.trees.expressions.Expression;
 import org.apache.doris.nereids.trees.expressions.IsNull;
 import org.apache.doris.nereids.trees.expressions.NullSafeEqual;
 import org.apache.doris.nereids.trees.expressions.literal.BooleanLiteral;
-import org.apache.doris.nereids.trees.expressions.literal.NullLiteral;
 
 import com.google.common.collect.ImmutableList;
 
 import java.util.List;
 
 /**
- * convert "<=>" to "=", if any side is not nullable
  * convert "A <=> null" to "A is null"
+ * null <=> null : true
+ * null <=> 1 : false
+ * 1 <=> 2 : 1 = 2
  */
 public class NullSafeEqualToEqual implements ExpressionPatternRuleFactory {
 public static final NullSafeEqualToEqual INSTANCE = new 
NullSafeEqualToEqual();
@@ -45,19 +46,14 @@ public class NullSafeEqualToEqual implements 
ExpressionPatternRuleFactory {
 }
 
 private static Expression rewrite(NullSafeEqual nullSafeEqual) {
-if (nullSafeEqual.left() instanceof NullLiteral) {
-if (nullSafeEqual.right().nullable()) {
-return new IsNull(nullSafeEqual.right());
-} else {
-return BooleanLiteral.FALSE;
-}
-} else if (nullSafeEqual.right() instanceof NullLiteral) {
-if (nullSafeEqual.left().nullable()) {
-return new IsNull(nullSafeEqual.left());
-} else {
-return BooleanLiteral.FALSE;
-}
-} else if (!nullSafeEqual.left().nullable() && 
!nullSafeEqual.right().nullable()) {
+// because the nullable info hasn't been finalized yet, the 
optimization is limited
+if (nullSafeEqual.left().isNullLiteral() && 
nullSafeEqual.right().isNullLiteral()) {
+return BooleanLiteral.TRUE;
+} else if (nullSafeEqual.left().isNullLiteral()) {
+return nullSafeEqual.right().isLiteral() ? BooleanLiteral.FALSE : 
new IsNull(nullSafeEqual.right());
+} else if (nullSafeEqual.right().isNullLiteral()) {
+return nullSafeEqual.left().isLiteral() ? BooleanLiteral.FALSE : 
new IsNull(nullSafeEqual.left());
+} else if (nullSafeEqual.left().isLiteral() && 
nullSafeEqual.right().isLiteral()) {
 return new EqualTo(nullSafeEqual.left(), nullSafeEqual.right());
 }
 return nullSafeEqual;
diff --git 
a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqualTest.java
 
b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqualTest.java
index db1186738da..8da25e92e7e 100644
--- 
a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqualTest.java
+++ 
b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqualTest.java
@@ -24,6 +24,7 @@ import org.apache.doris.nereids.trees.expressions.IsNull;
 import org.apache.doris.nereids.trees.expressions.NullSafeEqual;
 import org.apache.doris.nereids.trees.expressions.SlotReference;
 import org.apache.doris.nereids.trees.expressions.literal.BooleanLiteral;
+import org.apache.doris.nereids.trees.expressions.literal.IntegerLiteral;
 import org.apache.doris.nereids.trees.expressions.literal.NullLiteral;
 import org.apache.doris.nereids.types.StringType;
 
@@ -32,7 +33,7 @@ import org.junit.jupiter.api.Test;
 
 class NullSafeEqualToEqualTest extends Express

(doris) branch branch-2.1 updated: [cherry-pick] (branch-2.1)fix variant index (#36577)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new fbcf63e1f5b [cherry-pick] (branch-2.1)fix variant index  (#36577)
fbcf63e1f5b is described below

commit fbcf63e1f5b97b20936c87e250986ddc33b20554
Author: Sun Chenyang 
AuthorDate: Thu Jun 20 17:57:26 2024 +0800

[cherry-pick] (branch-2.1)fix variant index  (#36577)

pick from master #36163
---
 .../olap/rowset/segment_v2/inverted_index_writer.h |   2 +-
 be/src/olap/rowset/segment_v2/segment_writer.cpp   |   4 +-
 .../rowset/segment_v2/vertical_segment_writer.cpp  |   4 +-
 be/src/olap/tablet_schema.cpp  |  12 ++-
 be/src/olap/tablet_schema.h|   5 +-
 be/src/olap/task/index_builder.cpp |   2 +-
 be/src/vec/common/schema_util.cpp  |   2 +-
 .../test_variant_index_format_v1.out   |  10 ++
 .../test_variant_index_format_v1.groovy| 105 +
 9 files changed, 135 insertions(+), 11 deletions(-)

diff --git a/be/src/olap/rowset/segment_v2/inverted_index_writer.h 
b/be/src/olap/rowset/segment_v2/inverted_index_writer.h
index 06bc960bc33..3b4e5ba2709 100644
--- a/be/src/olap/rowset/segment_v2/inverted_index_writer.h
+++ b/be/src/olap/rowset/segment_v2/inverted_index_writer.h
@@ -75,7 +75,7 @@ public:
 
 // check if the column is valid for inverted index, some columns
 // are generated from variant, but not all of them are supported
-static bool check_column_valid(const TabletColumn& column) {
+static bool check_support_inverted_index(const TabletColumn& column) {
 // bellow types are not supported in inverted index for extracted 
columns
 static std::set invalid_types = {
 FieldType::OLAP_FIELD_TYPE_DOUBLE,
diff --git a/be/src/olap/rowset/segment_v2/segment_writer.cpp 
b/be/src/olap/rowset/segment_v2/segment_writer.cpp
index 33f4e863824..7665aec1372 100644
--- a/be/src/olap/rowset/segment_v2/segment_writer.cpp
+++ b/be/src/olap/rowset/segment_v2/segment_writer.cpp
@@ -218,9 +218,7 @@ Status SegmentWriter::init(const std::vector& 
col_ids, bool has_key) {
 }
 // indexes for this column
 opts.indexes = 
std::move(_tablet_schema->get_indexes_for_column(column));
-if (!InvertedIndexColumnWriter::check_column_valid(column)) {
-// skip inverted index if invalid
-opts.indexes.clear();
+if (!InvertedIndexColumnWriter::check_support_inverted_index(column)) {
 opts.need_zone_map = false;
 opts.need_bloom_filter = false;
 opts.need_bitmap_index = false;
diff --git a/be/src/olap/rowset/segment_v2/vertical_segment_writer.cpp 
b/be/src/olap/rowset/segment_v2/vertical_segment_writer.cpp
index 394f5bae184..15b3688585c 100644
--- a/be/src/olap/rowset/segment_v2/vertical_segment_writer.cpp
+++ b/be/src/olap/rowset/segment_v2/vertical_segment_writer.cpp
@@ -171,9 +171,7 @@ Status 
VerticalSegmentWriter::_create_column_writer(uint32_t cid, const TabletCo
 }
 // indexes for this column
 opts.indexes = _tablet_schema->get_indexes_for_column(column);
-if (!InvertedIndexColumnWriter::check_column_valid(column)) {
-// skip inverted index if invalid
-opts.indexes.clear();
+if (!InvertedIndexColumnWriter::check_support_inverted_index(column)) {
 opts.need_zone_map = false;
 opts.need_bloom_filter = false;
 opts.need_bitmap_index = false;
diff --git a/be/src/olap/tablet_schema.cpp b/be/src/olap/tablet_schema.cpp
index 0418f4c6334..290e5a6bc25 100644
--- a/be/src/olap/tablet_schema.cpp
+++ b/be/src/olap/tablet_schema.cpp
@@ -1275,6 +1275,10 @@ const TabletColumn& TabletSchema::column(const 
std::string& field_name) const {
 std::vector TabletSchema::get_indexes_for_column(
 const TabletColumn& col) const {
 std::vector indexes_for_column;
+// Some columns (Float, Double, JSONB ...) from the variant do not support 
index, but they are listed in TabltetIndex.
+if 
(!segment_v2::InvertedIndexColumnWriter::check_support_inverted_index(col)) {
+return indexes_for_column;
+}
 int32_t col_unique_id = col.is_extracted_column() ? col.parent_unique_id() 
: col.unique_id();
 const std::string& suffix_path =
 col.has_path_info() ? 
escape_for_path_name(col.path_info_ptr()->get_path()) : "";
@@ -1346,7 +1350,13 @@ const TabletIndex* 
TabletSchema::get_inverted_index(int32_t col_unique_id,
 return nullptr;
 }
 
-const TabletIndex* TabletSchema::get_inverted_index(const TabletColumn& col) 
const {
+const TabletIndex* TabletSchema::get_inverted_index(const TabletColumn& col,
+bool check_valid) const {
+// With check_valid set to true by default
+// Some colu

(doris) branch master updated: [fix](nereids)NullSafeEqualToEqual rule should keep <=> unchanged if it has none-literal child (#36521)

2024-06-20 Thread starocean999
This is an automated email from the ASF dual-hosted git repository.

starocean999 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c7ec37fde7 [fix](nereids)NullSafeEqualToEqual rule should keep <=> 
unchanged if it has none-literal child (#36521)
9c7ec37fde7 is described below

commit 9c7ec37fde7e7a9631846ee1cd712ee7db3a7a9a
Author: starocean999 <40539150+starocean...@users.noreply.github.com>
AuthorDate: Thu Jun 20 18:05:19 2024 +0800

[fix](nereids)NullSafeEqualToEqual rule should keep <=> unchanged if it has 
none-literal child (#36521)

convert:
 expr <=> null to expr is null
 null <=> null to true
 null <=> 1 to false
 literal <=> literal to literal = literal ( 1 <=> 2 to 1 = 2 )
others are unchanged.
---
 .../rules/expression/ExpressionOptimization.java   |  2 ++
 .../expression/rules/NullSafeEqualToEqual.java | 24 +-
 .../expression/rules/NullSafeEqualToEqualTest.java | 38 +-
 3 files changed, 41 insertions(+), 23 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/ExpressionOptimization.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/ExpressionOptimization.java
index 828592bbba3..abf57057601 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/ExpressionOptimization.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/ExpressionOptimization.java
@@ -23,6 +23,7 @@ import 
org.apache.doris.nereids.rules.expression.rules.DateFunctionRewrite;
 import org.apache.doris.nereids.rules.expression.rules.DistinctPredicatesRule;
 import org.apache.doris.nereids.rules.expression.rules.ExtractCommonFactorRule;
 import org.apache.doris.nereids.rules.expression.rules.LikeToEqualRewrite;
+import org.apache.doris.nereids.rules.expression.rules.NullSafeEqualToEqual;
 import org.apache.doris.nereids.rules.expression.rules.OrToIn;
 import 
org.apache.doris.nereids.rules.expression.rules.SimplifyComparisonPredicate;
 import 
org.apache.doris.nereids.rules.expression.rules.SimplifyDecimalV3Comparison;
@@ -51,6 +52,7 @@ public class ExpressionOptimization extends ExpressionRewrite 
{
 ArrayContainToArrayOverlap.INSTANCE,
 CaseWhenToIf.INSTANCE,
 TopnToMax.INSTANCE,
+NullSafeEqualToEqual.INSTANCE,
 LikeToEqualRewrite.INSTANCE
 )
 );
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
index dda109a42e0..16c4663a1ed 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/expression/rules/NullSafeEqualToEqual.java
@@ -24,17 +24,16 @@ import 
org.apache.doris.nereids.trees.expressions.Expression;
 import org.apache.doris.nereids.trees.expressions.IsNull;
 import org.apache.doris.nereids.trees.expressions.NullSafeEqual;
 import org.apache.doris.nereids.trees.expressions.literal.BooleanLiteral;
-import org.apache.doris.nereids.trees.expressions.literal.NullLiteral;
 
 import com.google.common.collect.ImmutableList;
 
 import java.util.List;
 
 /**
- * convert "<=>" to "=", if both sides are not nullable
  * convert "A <=> null" to "A is null"
  * null <=> null : true
  * null <=> 1 : false
+ * 1 <=> 2 : 1 = 2
  */
 public class NullSafeEqualToEqual implements ExpressionPatternRuleFactory {
 public static final NullSafeEqualToEqual INSTANCE = new 
NullSafeEqualToEqual();
@@ -47,19 +46,14 @@ public class NullSafeEqualToEqual implements 
ExpressionPatternRuleFactory {
 }
 
 private static Expression rewrite(NullSafeEqual nullSafeEqual) {
-if (nullSafeEqual.left() instanceof NullLiteral) {
-if (nullSafeEqual.right().nullable()) {
-return new IsNull(nullSafeEqual.right());
-} else {
-return BooleanLiteral.FALSE;
-}
-} else if (nullSafeEqual.right() instanceof NullLiteral) {
-if (nullSafeEqual.left().nullable()) {
-return new IsNull(nullSafeEqual.left());
-} else {
-return BooleanLiteral.FALSE;
-}
-} else if (!nullSafeEqual.left().nullable() && 
!nullSafeEqual.right().nullable()) {
+// because the nullable info hasn't been finalized yet, the 
optimization is limited
+if (nullSafeEqual.left().isNullLiteral() && 
nullSafeEqual.right().isNullLiteral()) {
+return BooleanLiteral.TRUE;
+} else if (nullSafeEqual.left().isNullLiteral()) {
+return nullSafeEqual.right().isLiteral() ? BooleanLiteral.FALSE : 
new IsNull(nullSafeEqual.right());
+

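The rewrite cases in the commit message form a small decision table. A sketch in Python (expressions modeled as plain values: `None` for a NULL literal, ints for other literals, strings for non-literal slots; tuples stand in for the rewritten expressions):

```python
def is_null_literal(e):
    return e is None

def is_literal(e):
    return e is None or isinstance(e, int)

def rewrite_null_safe_equal(left, right):
    """Mirror the NullSafeEqualToEqual cases: only literal children can be
    folded, because slot nullability is not finalized at this point."""
    if is_null_literal(left) and is_null_literal(right):
        return True                           # null <=> null  ->  true
    if is_null_literal(left):
        return False if is_literal(right) else ("is_null", right)
    if is_null_literal(right):
        return False if is_literal(left) else ("is_null", left)
    if is_literal(left) and is_literal(right):
        return ("=", left, right)             # 1 <=> 2  ->  1 = 2
    return ("<=>", left, right)               # non-literal child: unchanged
```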
(doris) branch branch-2.1 updated: [Enhancement](multi-catalog) Add more error msgs for wrong data types in orc and parquet reader. (#36580)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new f7f7b2b7386 [Enhancement](multi-catalog) Add more error msgs for wrong 
data types in orc and parquet reader. (#36580)
f7f7b2b7386 is described below

commit f7f7b2b7386a4d630ad2f04ec0961e6bf01378a5
Author: Qi Chen 
AuthorDate: Thu Jun 20 18:10:25 2024 +0800

[Enhancement](multi-catalog) Add more error msgs for wrong data types in 
orc and parquet reader. (#36580)

Backport #36417
---
 be/src/vec/exec/format/orc/vorc_reader.h  |  9 ++---
 be/src/vec/exec/format/parquet/vparquet_column_reader.cpp | 12 +---
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/be/src/vec/exec/format/orc/vorc_reader.h 
b/be/src/vec/exec/format/orc/vorc_reader.h
index c790d78123f..77eec261b01 100644
--- a/be/src/vec/exec/format/orc/vorc_reader.h
+++ b/be/src/vec/exec/format/orc/vorc_reader.h
@@ -313,7 +313,8 @@ private:
 SCOPED_RAW_TIMER(&_statistics.decode_value_time);
 OrcColumnType* data = dynamic_cast(cvb);
 if (data == nullptr) {
-return Status::InternalError("Wrong data type for colum '{}'", 
col_name);
+return Status::InternalError("Wrong data type for column '{}', 
expected {}", col_name,
+ cvb->toString());
 }
 auto* cvb_data = data->data.data();
 auto& column_data = 
static_cast&>(*data_column).get_data();
@@ -355,7 +356,8 @@ private:
orc::ColumnVectorBatch* cvb, size_t 
num_values) {
 OrcColumnType* data = dynamic_cast(cvb);
 if (data == nullptr) {
-return Status::InternalError("Wrong data type for colum '{}'", 
col_name);
+return Status::InternalError("Wrong data type for column '{}', 
expected {}", col_name,
+ cvb->toString());
 }
 if (_decimal_scale_params_index >= _decimal_scale_params.size()) {
 DecimalScaleParams temp_scale_params;
@@ -443,7 +445,8 @@ private:
 SCOPED_RAW_TIMER(&_statistics.decode_value_time);
 auto* data = dynamic_cast(cvb);
 if (data == nullptr) {
-return Status::InternalError("Wrong data type for colum '{}'", 
col_name);
+return Status::InternalError("Wrong data type for column '{}', 
expected {}", col_name,
+ cvb->toString());
 }
 date_day_offset_dict& date_dict = date_day_offset_dict::get();
 auto& column_data = 
static_cast&>(*data_column).get_data();
diff --git a/be/src/vec/exec/format/parquet/vparquet_column_reader.cpp 
b/be/src/vec/exec/format/parquet/vparquet_column_reader.cpp
index 85d03daebc5..4efa6c60e47 100644
--- a/be/src/vec/exec/format/parquet/vparquet_column_reader.cpp
+++ b/be/src/vec/exec/format/parquet/vparquet_column_reader.cpp
@@ -594,7 +594,9 @@ Status ArrayColumnReader::read_column_data(ColumnPtr& 
doris_column, DataTypePtr&
 data_column = doris_column->assume_mutable();
 }
 if (remove_nullable(type)->get_type_id() != TypeIndex::Array) {
-return Status::Corruption("Wrong data type for column '{}'", 
_field_schema->name);
+return Status::Corruption(
+"Wrong data type for column '{}', expected Array type, actual 
type id {}.",
+_field_schema->name, remove_nullable(type)->get_type_id());
 }
 
 ColumnPtr& element_column = 
static_cast(*data_column).get_data_ptr();
@@ -643,7 +645,9 @@ Status MapColumnReader::read_column_data(ColumnPtr& 
doris_column, DataTypePtr& t
 data_column = doris_column->assume_mutable();
 }
 if (remove_nullable(type)->get_type_id() != TypeIndex::Map) {
-return Status::Corruption("Wrong data type for column '{}'", 
_field_schema->name);
+return Status::Corruption(
+"Wrong data type for column '{}', expected Map type, actual 
type id {}.",
+_field_schema->name, remove_nullable(type)->get_type_id());
 }
 
 auto& map = static_cast(*data_column);
@@ -710,7 +714,9 @@ Status StructColumnReader::read_column_data(ColumnPtr& 
doris_column, DataTypePtr
 data_column = doris_column->assume_mutable();
 }
 if (remove_nullable(type)->get_type_id() != TypeIndex::Struct) {
-return Status::Corruption("Wrong data type for column '{}'", 
_field_schema->name);
+return Status::Corruption(
+"Wrong data type for column '{}', expected Struct type, actual 
type id {}.",
+_field_schema->name, remove_nullable(type)->get_type_id());
 }
 
 auto& doris_struct = static_cast(*data_column);

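The pattern in the change above is simple but worth naming: on a type mismatch, report both the expected type and the actual one, not just the column name. A minimal sketch (Python, hypothetical helper):

```python
def check_column_type(col_name, expected_type, actual_type):
    """Raise a descriptive error when a reader meets an unexpected type."""
    if expected_type != actual_type:
        raise TypeError(
            f"Wrong data type for column '{col_name}', "
            f"expected {expected_type} type, actual type {actual_type}.")
```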


(doris) branch master updated (9c7ec37fde7 -> 83931814436)

2024-06-20 Thread morrysnow
This is an automated email from the ASF dual-hosted git repository.

morrysnow pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 9c7ec37fde7 [fix](nereids)NullSafeEqualToEqual rule should keep <=> 
unchanged if it has none-literal child (#36521)
 add 83931814436 [enhance](mtmv)when calculating the availability of MTMV, 
no longer consider refresh state (#36507)

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/doris/mtmv/MTMVRewriteUtil.java  |  4 ++--
 .../java/org/apache/doris/mtmv/MTMVRewriteUtilTest.java   | 15 ++-
 2 files changed, 16 insertions(+), 3 deletions(-)





(doris) branch branch-2.1 updated: [fix](auth)Auth support case insensitive (#36381) (#36557)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 22d37ba3fe6 [fix](auth)Auth support case insensitive (#36381) (#36557)
22d37ba3fe6 is described below

commit 22d37ba3fe64fe0d319548dcd21d771b72cdcd39
Author: zhangdong <493738...@qq.com>
AuthorDate: Thu Jun 20 18:31:30 2024 +0800

[fix](auth)Auth support case insensitive (#36381) (#36557)

pick from: #36381
---
 .../main/java/org/apache/doris/catalog/Env.java|  4 ++
 .../doris/mysql/privilege/TablePrivEntry.java  |  3 +-
 .../org/apache/doris/mysql/privilege/AuthTest.java | 50 ++
 3 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/Env.java 
b/fe/fe-core/src/main/java/org/apache/doris/catalog/Env.java
index 4209eeaa532..09cb46fdf27 100755
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/Env.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/Env.java
@@ -5921,6 +5921,10 @@ public class Env {
 return GlobalVariable.lowerCaseTableNames == 2;
 }
 
+public static boolean isTableNamesCaseSensitive() {
+return GlobalVariable.lowerCaseTableNames == 0;
+}
+
 private static void getTableMeta(OlapTable olapTable, TGetMetaDBMeta 
dbMeta) {
 if (LOG.isDebugEnabled()) {
 LOG.debug("get table meta. table: {}", olapTable.getName());
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/mysql/privilege/TablePrivEntry.java 
b/fe/fe-core/src/main/java/org/apache/doris/mysql/privilege/TablePrivEntry.java
index c89104cde1c..27693bbf6a3 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/mysql/privilege/TablePrivEntry.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/mysql/privilege/TablePrivEntry.java
@@ -17,6 +17,7 @@
 
 package org.apache.doris.mysql.privilege;
 
+import org.apache.doris.catalog.Env;
 import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.CaseSensibility;
 import org.apache.doris.common.PatternMatcher;
@@ -58,7 +59,7 @@ public class TablePrivEntry extends DbPrivEntry {
 ctl, CaseSensibility.CATALOG.getCaseSensibility(), 
ctl.equals(ANY_CTL));
 
 PatternMatcher tblPattern = PatternMatcher.createFlatPattern(
-tbl, CaseSensibility.TABLE.getCaseSensibility(), 
tbl.equals(ANY_TBL));
+tbl, Env.isTableNamesCaseSensitive(), tbl.equals(ANY_TBL));
 
 if (privs.containsNodePriv() || privs.containsResourcePriv()) {
 throw new AnalysisException("Table privilege can not contains 
global or resource privileges: " + privs);
diff --git 
a/fe/fe-core/src/test/java/org/apache/doris/mysql/privilege/AuthTest.java 
b/fe/fe-core/src/test/java/org/apache/doris/mysql/privilege/AuthTest.java
index 43737066748..baedd1483d4 100644
--- a/fe/fe-core/src/test/java/org/apache/doris/mysql/privilege/AuthTest.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/mysql/privilege/AuthTest.java
@@ -2373,6 +2373,56 @@ public class AuthTest {
 revoke(revokeStmt);
 }
 
+@Test
+public void testTableNamesCaseSensitive() throws UserException {
+new Expectations() {
+{
+Env.isTableNamesCaseSensitive();
+minTimes = 0;
+result = true;
+}
+};
+UserIdentity userIdentity = new UserIdentity("sensitiveUser", "%");
+createUser(userIdentity);
+// `load_priv` and `select_priv` can not `show create view`
+GrantStmt grantStmt = new GrantStmt(userIdentity, null, new 
TablePattern("sensitivedb", "sensitiveTable"),
+Lists.newArrayList(new 
AccessPrivilegeWithCols(AccessPrivilege.SELECT_PRIV)));
+grant(grantStmt);
+Assert.assertTrue(accessManager
+.checkTblPriv(userIdentity, 
InternalCatalog.INTERNAL_CATALOG_NAME, "sensitivedb", "sensitiveTable",
+PrivPredicate.SELECT));
+
+Assert.assertFalse(accessManager
+.checkTblPriv(userIdentity, 
InternalCatalog.INTERNAL_CATALOG_NAME, "sensitivedb", "sensitivetable",
+PrivPredicate.SELECT));
+dropUser(userIdentity);
+}
+
+@Test
+public void testTableNamesCaseInsensitive() throws UserException {
+new Expectations() {
+{
+Env.isTableNamesCaseSensitive();
+minTimes = 0;
+result = false;
+}
+};
+UserIdentity userIdentity = new UserIdentity("sensitiveUser1", "%");
+createUser(userIdentity);
+// `load_priv` and `select_priv` can not `show create view`
+GrantStmt grantStmt = new GrantStmt(userIdentity, null, new 
TablePattern("sensitivedb1", "sensitiveTable"),
+Lists.newArrayList(new 
AccessPr

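The fix above keys table-name matching off `GlobalVariable.lowerCaseTableNames` (MySQL-style `lower_case_table_names`): only mode 0 is case-sensitive. A simplified sketch of the comparison (Python; the real code goes through `PatternMatcher` and also handles wildcards):

```python
def table_names_case_sensitive(lower_case_table_names: int) -> bool:
    # Mirrors Env.isTableNamesCaseSensitive(): only mode 0 compares
    # table names case-sensitively.
    return lower_case_table_names == 0

def table_priv_matches(granted: str, requested: str,
                       lower_case_table_names: int) -> bool:
    if table_names_case_sensitive(lower_case_table_names):
        return granted == requested
    return granted.lower() == requested.lower()
```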
(doris) branch branch-2.1 updated (22d37ba3fe6 -> ac0f6e75d26)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a change to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


from 22d37ba3fe6 [fix](auth)Auth support case insensitive (#36381) (#36557)
 add ac0f6e75d26 [bugfix](iceberg)Read error when timestamp does not have 
time zone for 2.1 (#36435)

No new revisions were added by this update.

Summary of changes:
 .../docker-compose/iceberg/iceberg.yaml.tpl|  1 +
 .../docker-compose/iceberg/spark-defaults.conf | 34 ++
 .../docker-compose/iceberg/spark-defaults.conf.tpl | 11 ---
 .../docker-compose/iceberg/spark-init.sql  |  5 +++-
 docker/thirdparties/run-thirdparties-docker.sh |  2 --
 .../org/apache/doris/common/util/TimeUtils.java|  6 +++-
 .../doris/datasource/iceberg/IcebergUtils.java |  6 +++-
 .../iceberg/test_iceberg_filter.out| 13 +
 .../iceberg/test_iceberg_filter.groovy |  7 +
 9 files changed, 69 insertions(+), 16 deletions(-)
 create mode 100644 
docker/thirdparties/docker-compose/iceberg/spark-defaults.conf
 delete mode 100644 
docker/thirdparties/docker-compose/iceberg/spark-defaults.conf.tpl


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch branch-2.1 updated: [branch-2.1][fix](jdbc catalog) fix jdbc mysql client match jsonb type (#36180)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 3ee259fc003 [branch-2.1][fix](jdbc catalog) fix jdbc mysql client 
match jsonb type (#36180)
3ee259fc003 is described below

commit 3ee259fc0035d0242960c27fc56c4db787268ad0
Author: zy-kkk 
AuthorDate: Thu Jun 20 18:33:27 2024 +0800

[branch-2.1][fix](jdbc catalog) fix jdbc mysql client match jsonb type 
(#36180)

bp #36177
---
 .../java/org/apache/doris/datasource/jdbc/client/JdbcMySQLClient.java| 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/datasource/jdbc/client/JdbcMySQLClient.java
 
b/fe/fe-core/src/main/java/org/apache/doris/datasource/jdbc/client/JdbcMySQLClient.java
index d48746ae3a6..efb69d8003f 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/datasource/jdbc/client/JdbcMySQLClient.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/datasource/jdbc/client/JdbcMySQLClient.java
@@ -380,6 +380,7 @@ public class JdbcMySQLClient extends JdbcClient {
 case "STRING":
 case "TEXT":
 case "JSON":
+case "JSONB":
 return ScalarType.createStringType();
 case "HLL":
 return ScalarType.createHllType();

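The one-line change above maps MySQL's `JSONB` type name to Doris's string type, alongside `JSON`. A minimal sketch of this kind of name-based type mapping (the class and method names here are illustrative stand-ins, not the real `JdbcMySQLClient` API):

```java
public class TypeMapping {
    // Sketch of a JDBC type-name -> Doris type mapping mirroring the fix
    // above: the JSONB alias now falls through to a string type, like JSON.
    // Illustrative only; the actual client returns ScalarType objects.
    static String mysqlTypeToDorisType(String mysqlType) {
        switch (mysqlType.toUpperCase()) {
            case "STRING":
            case "TEXT":
            case "JSON":
            case "JSONB": // the alias this commit adds
                return "STRING";
            case "HLL":
                return "HLL";
            default:
                return "UNSUPPORTED";
        }
    }
}
```

Before the fix, a MySQL column reported as `JSONB` fell into the default branch and was treated as unsupported; adding the fall-through case is all that is needed.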




(doris) branch branch-2.1 updated: [fix](auth)ldap set passwd need forward to master (#36436) (#36598)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 838af130015 [fix](auth)ldap set passwd need forward to master (#36436) 
(#36598)
838af130015 is described below

commit 838af13001574407ff08e1a78e556ceeae41090f
Author: zhangdong <493738...@qq.com>
AuthorDate: Thu Jun 20 18:35:37 2024 +0800

[fix](auth)ldap set passwd need forward to master (#36436) (#36598)

pick from master: #36436
---
 fe/fe-core/src/main/java/org/apache/doris/analysis/SetStmt.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fe/fe-core/src/main/java/org/apache/doris/analysis/SetStmt.java 
b/fe/fe-core/src/main/java/org/apache/doris/analysis/SetStmt.java
index 5e0d0f9105c..3c6d938026f 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/analysis/SetStmt.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/SetStmt.java
@@ -91,7 +91,7 @@ public class SetStmt extends StatementBase {
 public RedirectStatus getRedirectStatus() {
 if (setVars != null) {
 for (SetVar var : setVars) {
-if (var instanceof SetPassVar) {
+if (var instanceof SetPassVar || var instanceof 
SetLdapPassVar) {
 return RedirectStatus.FORWARD_WITH_SYNC;
 } else if (var.getType() == SetType.GLOBAL) {
 return RedirectStatus.FORWARD_WITH_SYNC;

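The fix above adds `SetLdapPassVar` to the forward check so LDAP password changes also execute on the master FE, which owns persistent auth state. A minimal sketch of the decision logic, using simplified stand-in types rather than the actual `SetStmt` classes:

```java
import java.util.List;

public class RedirectSketch {
    // Simplified stand-ins for the SetVar hierarchy in SetStmt.
    interface SetVar {}
    static class SetPassVar implements SetVar {}
    static class SetLdapPassVar implements SetVar {}

    // Any statement that mutates persistent auth state (a regular or an
    // LDAP password) must be forwarded to the master FE, since only the
    // master can write the change to the metadata log.
    static boolean mustForwardToMaster(List<? extends SetVar> vars) {
        for (SetVar var : vars) {
            if (var instanceof SetPassVar || var instanceof SetLdapPassVar) {
                return true;
            }
        }
        return false;
    }
}
```

The bug was simply that the LDAP variant was missing from the instanceof check, so LDAP password updates ran on a follower and were lost.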




(doris) branch master updated: [test](jdbc catalog) reopen and fix db2 catalog test case (#35966)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new e373cd4efad [test](jdbc catalog) reopen and fix db2 catalog test case 
(#35966)
e373cd4efad is described below

commit e373cd4efad8982bcd7ea31806b981b3f22cabd2
Author: zy-kkk 
AuthorDate: Thu Jun 20 18:37:11 2024 +0800

[test](jdbc catalog) reopen and fix db2 catalog test case (#35966)

Change the download address of db2 image, reopen the test and fix the
Result close problem in the test
---
 .../thirdparties/docker-compose/db2/db2.yaml.tpl   |   4 +-
 docker/thirdparties/run-thirdparties-docker.sh |   2 +-
 .../pipeline/external/conf/regression-conf.groovy  |   1 +
 .../jdbc/test_db2_jdbc_catalog.groovy  | 446 ++---
 4 files changed, 227 insertions(+), 226 deletions(-)

diff --git a/docker/thirdparties/docker-compose/db2/db2.yaml.tpl 
b/docker/thirdparties/docker-compose/db2/db2.yaml.tpl
index a3f2e778ae6..9967e4013bf 100644
--- a/docker/thirdparties/docker-compose/db2/db2.yaml.tpl
+++ b/docker/thirdparties/docker-compose/db2/db2.yaml.tpl
@@ -19,12 +19,12 @@ version: '3'
 
 services:
   doris--db2_11:
-image: icr.io/db2_community/db2
+image: icr.io/db2_community/db2:11.5.9.0
 ports:
   - ${DOCKER_DB2_EXTERNAL_PORT}:5
 privileged: true
 healthcheck:
-  test: ["CMD-SHELL", "su - db2inst1 -c \"db2 connect to doris && db2 
'select 1 from sysibm.sysdummy1'\""]
+  test: ["CMD-SHELL", "su - db2inst1 -c \"source ~/.bash_profile; db2 
connect to doris && db2 'select 1 from sysibm.sysdummy1'\""]
   interval: 20s
   timeout: 60s
   retries: 10
diff --git a/docker/thirdparties/run-thirdparties-docker.sh 
b/docker/thirdparties/run-thirdparties-docker.sh
index 0615241523a..012f840cb9f 100755
--- a/docker/thirdparties/run-thirdparties-docker.sh
+++ b/docker/thirdparties/run-thirdparties-docker.sh
@@ -59,7 +59,7 @@ eval set -- "${OPTS}"
 
 if [[ "$#" == 1 ]]; then
 # default
-
COMPONENTS="mysql,es,hive2,hive3,pg,oracle,sqlserver,clickhouse,mariadb,iceberg"
+
COMPONENTS="mysql,es,hive2,hive3,pg,oracle,sqlserver,clickhouse,mariadb,iceberg,db2"
 else
 while true; do
 case "$1" in
diff --git a/regression-test/pipeline/external/conf/regression-conf.groovy 
b/regression-test/pipeline/external/conf/regression-conf.groovy
index 2c163b07989..7683a3e5b9e 100644
--- a/regression-test/pipeline/external/conf/regression-conf.groovy
+++ b/regression-test/pipeline/external/conf/regression-conf.groovy
@@ -152,6 +152,7 @@ hdfs_port=8020
 oracle_11_port=1521
 sqlserver_2022_port=1433
 clickhouse_22_port=8123
+db2_11_port=5
 
 // trino-connector catalog test config
 enableTrinoConnectorTest = true
diff --git 
a/regression-test/suites/external_table_p0/jdbc/test_db2_jdbc_catalog.groovy 
b/regression-test/suites/external_table_p0/jdbc/test_db2_jdbc_catalog.groovy
index a334d394c9b..ee930bbf6c3 100644
--- a/regression-test/suites/external_table_p0/jdbc/test_db2_jdbc_catalog.groovy
+++ b/regression-test/suites/external_table_p0/jdbc/test_db2_jdbc_catalog.groovy
@@ -29,249 +29,249 @@ suite("test_db2_jdbc_catalog", 
"p0,external,db2,external_docker,external_docker_
 String bucket = getS3BucketName()
 String driver_url = 
"https://${bucket}.${s3_endpoint}/regression/jdbc_driver/jcc-11.5.8.0.jar";
 if (enabled != null && enabled.equalsIgnoreCase("true")) {
-// String catalog_name = "db2_jdbc_catalog";
-// String internal_db_name = "regression_test_jdbc_catalog_p0";
-// String ex_db_name = "DORIS_TEST";
-// String db2_port = context.config.otherConfigs.get("db2_11_port");
-// String sample_table = "SAMPLE_TABLE";
+String catalog_name = "db2_jdbc_catalog";
+String internal_db_name = "regression_test_jdbc_catalog_p0";
+String ex_db_name = "DORIS_TEST";
+String db2_port = context.config.otherConfigs.get("db2_11_port");
+String sample_table = "SAMPLE_TABLE";
 
-// try {
-// db2_docker "CREATE SCHEMA doris_test;"
-// db2_docker "CREATE SCHEMA test;"
-// db2_docker """CREATE TABLE doris_test.sample_table (
-// id_column INT GENERATED ALWAYS AS IDENTITY,
-// numeric_column NUMERIC,
-// decimal_column DECIMAL(31, 10),
-// decfloat_column DECFLOAT,
-// float_column FLOAT,
-// real_column REAL,
-// double_column DOUBLE,
-// double_precision_column DOUBLE PRECISION,
-// smallint_column SMALLINT,
-// int_column INT,
-// bigint_column BIGINT,
-// varchar_column VARCHAR(255),
-// varcharphic_column VARGRAPHIC(50),
-// long_varchar_column L

Error while running notifications feature from refs/heads/master:.asf.yaml in doris-website!

2024-06-20 Thread Apache Infrastructure


An error occurred while running notifications feature in .asf.yaml!:
Invalid notification target 'comm...@foo.apache.org'. Must be a valid 
@doris.apache.org list!





(doris-website) branch master updated: [Improve]Add best practices for doris-kafka-connector (#724)

2024-06-20 Thread luzhijing
This is an automated email from the ASF dual-hosted git repository.

luzhijing pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 46e2b4b591 [Improve]Add best practices for doris-kafka-connector (#724)
46e2b4b591 is described below

commit 46e2b4b591e0125f063d3f8c5c46775caf97e830
Author: wudongliang <46414265+donglian...@users.noreply.github.com>
AuthorDate: Thu Jun 20 19:00:52 2024 +0800

[Improve]Add best practices for doris-kafka-connector (#724)
---
 docs/ecosystem/doris-kafka-connector.md| 83 +-
 .../current/ecosystem/doris-kafka-connector.md | 80 +
 .../version-2.0/ecosystem/doris-kafka-connector.md | 80 +
 .../version-2.1/ecosystem/doris-kafka-connector.md | 80 +
 .../version-2.0/ecosystem/doris-kafka-connector.md | 83 +-
 .../version-2.1/ecosystem/doris-kafka-connector.md | 83 +-
 6 files changed, 486 insertions(+), 3 deletions(-)

diff --git a/docs/ecosystem/doris-kafka-connector.md 
b/docs/ecosystem/doris-kafka-connector.md
index 120f089eec..c3d254f797 100644
--- a/docs/ecosystem/doris-kafka-connector.md
+++ b/docs/ecosystem/doris-kafka-connector.md
@@ -246,4 +246,85 @@ Doris-kafka-connector uses logical or primitive type 
mapping to resolve the colu
 | io.debezium.time.MicroTimestamp | DATETIME  |
 | io.debezium.time.NanoTimestamp  | DATETIME  |
 | io.debezium.time.ZonedTimestamp | DATETIME  |
-| io.debezium.data.VariableScaleDecimal   | DOUBLE|
\ No newline at end of file
+| io.debezium.data.VariableScaleDecimal   | DOUBLE|
+
+
+## Best Practices
+### Load Json serialized data
+```
+curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" 
-X POST -d '{ 
+  "name":"doris-json-test", 
+  "config":{ 
+"connector.class":"org.apache.doris.kafka.connector.DorisSinkConnector", 
+"topics":"json_topic", 
+"tasks.max":"10",
+"doris.topic2table.map": "json_topic:json_tab", 
+"buffer.count.records":"10", 
+"buffer.flush.time":"120", 
+"buffer.size.bytes":"1000", 
+"doris.urls":"127.0.0.1", 
+"doris.user":"root", 
+"doris.password":"", 
+"doris.http.port":"8030", 
+"doris.query.port":"9030", 
+"doris.database":"test", 
+"load.model":"stream_load",
+"key.converter":"org.apache.kafka.connect.json.JsonConverter",
+"value.converter":"org.apache.kafka.connect.json.JsonConverter"
+  } 
+}'
+```
+
+### Load Avro serialized data
+```
+curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" 
-X POST -d '{ 
+  "name":"doris-avro-test", 
+  "config":{ 
+"connector.class":"org.apache.doris.kafka.connector.DorisSinkConnector", 
+"topics":"avro_topic", 
+"tasks.max":"10",
+"doris.topic2table.map": "avro_topic:avro_tab", 
+"buffer.count.records":"10", 
+"buffer.flush.time":"120", 
+"buffer.size.bytes":"1000", 
+"doris.urls":"127.0.0.1", 
+"doris.user":"root", 
+"doris.password":"", 
+"doris.http.port":"8030", 
+"doris.query.port":"9030", 
+"doris.database":"test", 
+"load.model":"stream_load",
+"key.converter":"io.confluent.connect.avro.AvroConverter",
+"key.converter.schema.registry.url":"http://127.0.0.1:8081";,
+"value.converter":"io.confluent.connect.avro.AvroConverter",
+"value.converter.schema.registry.url":"http://127.0.0.1:8081";
+  } 
+}'
+```
+
+### Load Protobuf serialized data
+```
+curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" 
-X POST -d '{ 
+  "name":"doris-protobuf-test", 
+  "config":{ 
+"connector.class":"org.apache.doris.kafka.connector.DorisSinkConnector", 
+"topics":"proto_topic", 
+"tasks.max":"10",
+"doris.topic2table.map": "proto_topic:proto_tab", 
+"buffer.count.records":"10", 
+"buffer.flush.time":"120", 
+"buffer.size.bytes":"1000", 
+"doris.urls":"127.0.0.1", 
+"doris.user":"root", 
+"doris.password":"", 
+"doris.http.port":"8030", 
+"doris.query.port":"9030", 
+"doris.database":"test", 
+"load.model":"stream_load",
+"key.converter":"io.confluent.connect.protobuf.ProtobufConverter",
+"key.converter.schema.registry.url":"http://127.0.0.1:8081";,
+"value.converter":"io.confluent.connect.protobuf.ProtobufConverter",
+"value.converter.schema.registry.url":"http://127.0.0.1:8081";
+  } 
+}'
+```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
index 319c393356..7eb53a5215 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/dori






(doris-website) branch master updated: [doc](standard-deployment) Fix Doc Spelling Mistake (#735)

2024-06-20 Thread luzhijing
This is an automated email from the ASF dual-hosted git repository.

luzhijing pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 438fd151e4 [doc](standard-deployment) Fix Doc Spelling Mistake (#735)
438fd151e4 is described below

commit 438fd151e4c83dc588f3f00f95a79b9140ebd1a8
Author: Wanghuan <499218...@qq.com>
AuthorDate: Thu Jun 20 19:02:08 2024 +0800

[doc](standard-deployment) Fix Doc Spelling Mistake (#735)

Co-authored-by: KassieZ <139741991+kass...@users.noreply.github.com>
Co-authored-by: wanghuan 
---
 .../cluster-deployment/standard-deployment.md  | 10 +-
 .../cluster-deployment/standard-deployment.md  | 22 +++---
 .../cluster-deployment/standard-deployment.md  | 20 ++--
 .../cluster-deployment/standard-deployment.md  | 20 ++--
 .../cluster-deployment/standard-deployment.md  | 22 +++---
 .../cluster-deployment/standard-deployment.md  | 22 +++---
 6 files changed, 58 insertions(+), 58 deletions(-)

diff --git a/docs/install/cluster-deployment/standard-deployment.md 
b/docs/install/cluster-deployment/standard-deployment.md
index 62526282ad..94c0bd2364 100644
--- a/docs/install/cluster-deployment/standard-deployment.md
+++ b/docs/install/cluster-deployment/standard-deployment.md
@@ -201,16 +201,16 @@ echo never > /sys/kernel/mm/transparent_hugepage/defrag
 Doris instances communicate directly over the network, requiring the following 
ports for normal operation. Administrators can adjust Doris ports according to 
their environment:
 
 | Instance | Port   | Default Port | Communication Direction   
  | Description  |
-|  | -- |  | 
--- | 
 |
+|  | -- |  
|-| 
 |
 | BE   | be_port| 9060 | FE --> BE 
  | thrift server port on BE, receiving requests from FE |
 | BE   | webserver_port | 8040 | BE <--> BE
  | http server port on BE   |
 | BE   | heartbeat_service_port | 9050 | FE --> BE 
  | heartbeat service port (thrift) on BE, receiving heartbeats from FE |
-| BE   | brpc_port  | 8060 | FE <--> BEBE <--> BE  
  | brpc port on BE, used for communication between BEs  |
-| FE   | http_port  | 8030 | FE <--> FEClient <--> FE  
  | http server port on FE   |
-| FE   | rpc_port   | 9020 | BE --> FEFE <--> FE   
  | thrift server port on FE, configuration of each FE should be consistent |
+| BE   | brpc_port  | 8060 | FE <--> BE,BE <--> BE 
  | brpc port on BE, used for communication between BEs  |
+| FE   | http_port  | 8030 | FE <--> FE,Client <--> FE 
  | http server port on FE   |
+| FE   | rpc_port   | 9020 | BE --> FE,FE <--> FE  
  | thrift server port on FE, configuration of each FE should be consistent |
 | FE   | query_port | 9030 | Client <--> FE
  | MySQL server port on FE  |
 | FE   | edit_log_port  | 9010 | FE <--> FE
  | port on FE for bdbje communication   |
-| Broker   | broker_ipc_port| 8000 | FE --> Broker BE --> 
Broker | thrift server on Broker, receiving requests  |
+| Broker   | broker_ipc_port| 8000 | FE --> Broker,BE --> 
Broker | thrift server on Broker, receiving requests  |
 
 ### Plan the nodes
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md
index 2045fe826f..c22872ab48 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md
@@ -210,17 +210,17 @@ echo never > /sys/kernel/mm/transparent_hugepage/defrag
 
 Doris 各个实例直接通过网络进行通讯,其正常运行需要网络环境提供以下的端口。管理员可以根据实际环境自行调整 Doris 的端口:
 
-| 实例名称 | 端口名称   | 默认端口 | 通信方向| 说明  
   |
-|  | -- |  | --- | 
-

(doris) branch master updated (e373cd4efad -> a09613f5d29)

2024-06-20 Thread dataroaring
This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from e373cd4efad [test](jdbc catalog) reopen and fix db2 catalog test case 
(#35966)
 add a09613f5d29 [fix](group commit) Fix the incorrect group commit count 
in log; fix the core in get_first_block (#36408)

No new revisions were added by this update.

Summary of changes:
 be/src/pipeline/exec/group_commit_block_sink_operator.cpp | 1 -
 be/src/runtime/group_commit_mgr.cpp   | 3 ++-
 2 files changed, 2 insertions(+), 2 deletions(-)





(doris) branch branch-2.1 updated: [branch-2.1](doris compose) fix docker start failed (#36534)

2024-06-20 Thread dataroaring
This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 26b1ef428a0 [branch-2.1](doris compose) fix docker start failed 
(#36534)
26b1ef428a0 is described below

commit 26b1ef428a03bd559d868855ccec4f2eb6d05edb
Author: yujun 
AuthorDate: Thu Jun 20 20:14:17 2024 +0800

[branch-2.1](doris compose) fix docker start failed (#36534)
---
 docker/runtime/doris-compose/Dockerfile  |  32 +++--
 docker/runtime/doris-compose/Readme.md   |  10 +-
 docker/runtime/doris-compose/cluster.py  |  26 +++-
 docker/runtime/doris-compose/command.py  | 148 +++
 docker/runtime/doris-compose/resource/init_be.sh |   4 +-
 docker/runtime/doris-compose/utils.py|  28 +++--
 6 files changed, 198 insertions(+), 50 deletions(-)

diff --git a/docker/runtime/doris-compose/Dockerfile 
b/docker/runtime/doris-compose/Dockerfile
index 2306bf67cd2..73561e6410e 100644
--- a/docker/runtime/doris-compose/Dockerfile
+++ b/docker/runtime/doris-compose/Dockerfile
@@ -16,14 +16,30 @@
 # specific language governing permissions and limitations
 # under the License.
 
+ START ARG 
+
+# docker build cmd example:
+# docker build -f docker/runtime/doris-compose/Dockerfile -t 
: .
+
 # choose a base image
-FROM openjdk:8u342-jdk
+ARG JDK_IMAGE=openjdk:17-jdk-slim
+#ARG JDK_IMAGE=openjdk:8u342-jdk
+
+ END ARG 
+
+FROM ${JDK_IMAGE}
 
-ARG OUT_DIRECTORY=output
+RUN 

(doris) branch branch-2.1 updated: [chore](be) Improve ingesting binlog error checking (#36596)

2024-06-20 Thread dataroaring
This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new b3dcfae8647 [chore](be) Improve ingesting binlog error checking 
(#36596)
b3dcfae8647 is described below

commit b3dcfae8647e497b634e53433cbfd3507d808d63
Author: walter 
AuthorDate: Thu Jun 20 20:15:26 2024 +0800

[chore](be) Improve ingesting binlog error checking (#36596)

Cherry-pick #36487
---
 be/src/service/backend_service.cpp | 23 +++
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/be/src/service/backend_service.cpp 
b/be/src/service/backend_service.cpp
index 6a46cf38408..c4ccaa7281b 100644
--- a/be/src/service/backend_service.cpp
+++ b/be/src/service/backend_service.cpp
@@ -160,10 +160,25 @@ void _ingest_binlog(IngestBinlogArg* arg) {
 }
 
 std::vector<std::string> binlog_info_parts = strings::Split(binlog_info, ":");
-// TODO(Drogon): check binlog info content is right
-DCHECK(binlog_info_parts.size() == 2);
-const std::string& remote_rowset_id = binlog_info_parts[0];
-int64_t num_segments = std::stoll(binlog_info_parts[1]);
+if (binlog_info_parts.size() != 2) {
+status = Status::RuntimeError("failed to parse binlog info into 2 
parts: {}", binlog_info);
+LOG(WARNING) << "failed to get binlog info from " << 
get_binlog_info_url
+ << ", status=" << status.to_string();
+status.to_thrift(&tstatus);
+return;
+}
+std::string remote_rowset_id = std::move(binlog_info_parts[0]);
+int64_t num_segments = -1;
+try {
+num_segments = std::stoll(binlog_info_parts[1]);
+} catch (std::exception& e) {
+status = Status::RuntimeError("failed to parse num segments from 
binlog info {}: {}",
+  binlog_info, e.what());
+LOG(WARNING) << "failed to get binlog info from " << 
get_binlog_info_url
+ << ", status=" << status;
+status.to_thrift(&tstatus);
+return;
+}
 
 // Step 4: get rowset meta
 auto get_rowset_meta_url = fmt::format(

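The patch above replaces a bare `DCHECK` with explicit validation of the `rowset_id:num_segments` string: split into exactly two parts, then a guarded numeric parse. The same defensive-parsing pattern, sketched in plain Java (illustrative names, not the actual BE code):

```java
public class BinlogInfoParser {
    // Parsed form of a "rowset_id:num_segments" binlog-info string.
    static final class BinlogInfo {
        final String rowsetId;
        final long numSegments;
        BinlogInfo(String rowsetId, long numSegments) {
            this.rowsetId = rowsetId;
            this.numSegments = numSegments;
        }
    }

    // Mirrors the checks the patch adds: reject anything that does not
    // split into exactly two parts, and turn a failed numeric parse into
    // a descriptive error instead of crashing on malformed input.
    static BinlogInfo parse(String binlogInfo) {
        String[] parts = binlogInfo.split(":", -1);
        if (parts.length != 2) {
            throw new IllegalArgumentException(
                    "failed to parse binlog info into 2 parts: " + binlogInfo);
        }
        try {
            return new BinlogInfo(parts[0], Long.parseLong(parts[1]));
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                    "failed to parse num segments from binlog info: " + binlogInfo, e);
        }
    }
}
```

The point of the change is that malformed peer input now surfaces as a returned error status instead of an assertion failure (or an uncaught `std::stoll` exception) in a release build.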




(doris) branch branch-2.1 updated: [chore](be) Support config max message size for be thrift server (#36595)

2024-06-20 Thread dataroaring
This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new a79b56ac23d [chore](be) Support config max message size for be thrift 
server (#36595)
a79b56ac23d is described below

commit a79b56ac23d9003f4b32c1f5adb78de1a7758bf9
Author: walter 
AuthorDate: Thu Jun 20 20:15:43 2024 +0800

[chore](be) Support config max message size for be thrift server (#36595)

Cherry-pick #36467
---
 be/src/common/config.cpp   |  3 ++
 be/src/common/config.h |  3 ++
 be/src/runtime/snapshot_loader.cpp |  2 +-
 be/src/util/thrift_server.cpp  | 67 +-
 4 files changed, 52 insertions(+), 23 deletions(-)

diff --git a/be/src/common/config.cpp b/be/src/common/config.cpp
index a8abee2f4cd..ba173b0d03f 100644
--- a/be/src/common/config.cpp
+++ b/be/src/common/config.cpp
@@ -248,6 +248,9 @@ DEFINE_mInt32(thrift_connect_timeout_seconds, "3");
 DEFINE_mInt32(fetch_rpc_timeout_seconds, "30");
 // default thrift client retry interval (in milliseconds)
 DEFINE_mInt64(thrift_client_retry_interval_ms, "1000");
+// max message size of thrift request
+// default: 100 * 1024 * 1024
+DEFINE_mInt64(thrift_max_message_size, "104857600");
 // max row count number for single scan range, used in segmentv1
 DEFINE_mInt32(doris_scan_range_row_count, "524288");
 // max bytes number for single scan range, used in segmentv2
diff --git a/be/src/common/config.h b/be/src/common/config.h
index 865d23000f5..5c60ffae258 100644
--- a/be/src/common/config.h
+++ b/be/src/common/config.h
@@ -294,6 +294,9 @@ DECLARE_mInt32(thrift_connect_timeout_seconds);
 DECLARE_mInt32(fetch_rpc_timeout_seconds);
 // default thrift client retry interval (in milliseconds)
 DECLARE_mInt64(thrift_client_retry_interval_ms);
+// max message size of thrift request
+// default: 100 * 1024 * 1024
+DECLARE_mInt64(thrift_max_message_size);
 // max row count number for single scan range, used in segmentv1
 DECLARE_mInt32(doris_scan_range_row_count);
 // max bytes number for single scan range, used in segmentv2
diff --git a/be/src/runtime/snapshot_loader.cpp 
b/be/src/runtime/snapshot_loader.cpp
index f064bd798f7..cab8edb1927 100644
--- a/be/src/runtime/snapshot_loader.cpp
+++ b/be/src/runtime/snapshot_loader.cpp
@@ -117,7 +117,7 @@ Status SnapshotLoader::init(TStorageBackendType::type type, 
const std::string& l
 RETURN_IF_ERROR(io::BrokerFileSystem::create(_broker_addr, _prop, 
&fs));
 _remote_fs = std::move(fs);
 } else {
-return Status::InternalError("Unknown storage tpye: {}", type);
+return Status::InternalError("Unknown storage type: {}", type);
 }
 return Status::OK();
 }
diff --git a/be/src/util/thrift_server.cpp b/be/src/util/thrift_server.cpp
index 06e59963130..7844f7daa1e 100644
--- a/be/src/util/thrift_server.cpp
+++ b/be/src/util/thrift_server.cpp
@@ -34,10 +34,12 @@
 // IWYU pragma: no_include 
 #include  // IWYU pragma: keep
 #include 
+#include 
 #include 
 #include 
 #include 
 
+#include "common/config.h"
 #include "service/backend_options.h"
 #include "util/doris_metrics.h"
 
@@ -58,6 +60,28 @@ 
DEFINE_GAUGE_METRIC_PROTOTYPE_3ARG(thrift_current_connections, MetricUnit::CONNE
 DEFINE_COUNTER_METRIC_PROTOTYPE_3ARG(thrift_connections_total, 
MetricUnit::CONNECTIONS,
  "Total connections made over the lifetime 
of this server");
 
+// Nonblocking Server socket implementation of TNonblockingServerTransport.
+// Wrapper around a unix socket listen and accept calls.
+class ImprovedNonblockingServerSocket : public apache::thrift::transport::TNonblockingServerSocket {
+    using TConfiguration = apache::thrift::TConfiguration;
+    using TSocket = apache::thrift::transport::TSocket;
+
+public:
+    // Constructor.
+    ImprovedNonblockingServerSocket(int port)
+            : TNonblockingServerSocket(port),
+              config(std::make_shared<TConfiguration>(config::thrift_max_message_size)) {}
+    ~ImprovedNonblockingServerSocket() override = default;
+
+protected:
+    std::shared_ptr<TSocket> createSocket(THRIFT_SOCKET clientSocket) override {
+        return std::make_shared<TSocket>(clientSocket, config);
+    }
+
+private:
+    std::shared_ptr<TConfiguration> config;
+};
+
 // Helper class that starts a server in a separate thread, and handles
 // the inter-thread communication to monitor whether it started
 // correctly.
@@ -68,26 +92,26 @@ public:
 : _thrift_server(thrift_server), _signal_fired(false) {}
 
 // friendly to code style
-virtual ~ThriftServerEventProcessor() {}
+~ThriftServerEventProcessor() override = default;
 
 // Called by TNonBlockingServer when server has acquired its resources and 
is ready to
 // serve, and signals to StartAndWaitForServer that start-up is finished.
 // From TServerEventHandler.
-virtual v

(doris) branch master updated (a09613f5d29 -> 7bb1944599f)

2024-06-20 Thread morrysnow
This is an automated email from the ASF dual-hosted git repository.

morrysnow pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from a09613f5d29 [fix](group commit) Fix the incorrect group commit count 
in log; fix the core in get_first_block (#36408)
 add 7bb1944599f [enhance](mtmv)reduce the behavior of triggering the mtmv 
state to change to schema_change (#36513)

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/doris/alter/Alter.java| 16 +++-
 .../org/apache/doris/analysis/AddColumnClause.java |  5 ++
 .../apache/doris/analysis/AddColumnsClause.java|  5 ++
 .../apache/doris/analysis/AddPartitionClause.java  |  5 ++
 .../doris/analysis/AddPartitionLikeClause.java |  5 ++
 .../org/apache/doris/analysis/AddRollupClause.java |  5 ++
 .../org/apache/doris/analysis/AlterClause.java |  4 +
 .../apache/doris/analysis/AlterTableClause.java|  2 +
 .../apache/doris/analysis/BuildIndexClause.java|  5 ++
 .../apache/doris/analysis/ColumnRenameClause.java  |  5 ++
 .../apache/doris/analysis/CreateIndexClause.java   |  5 ++
 .../apache/doris/analysis/DropColumnClause.java|  5 ++
 .../org/apache/doris/analysis/DropIndexClause.java |  5 ++
 .../apache/doris/analysis/DropPartitionClause.java |  5 ++
 .../analysis/DropPartitionFromIndexClause.java |  5 ++
 .../apache/doris/analysis/DropRollupClause.java|  5 ++
 .../apache/doris/analysis/EnableFeatureClause.java |  5 ++
 .../apache/doris/analysis/ModifyColumnClause.java  |  5 ++
 .../doris/analysis/ModifyColumnCommentClause.java  |  5 ++
 .../doris/analysis/ModifyDistributionClause.java   |  5 ++
 .../apache/doris/analysis/ModifyEngineClause.java  |  5 ++
 .../doris/analysis/ModifyPartitionClause.java  |  5 ++
 .../doris/analysis/ModifyTableCommentClause.java   |  5 ++
 .../analysis/ModifyTablePropertiesClause.java  |  5 ++
 .../doris/analysis/PartitionRenameClause.java  |  5 ++
 .../doris/analysis/ReorderColumnsClause.java   |  5 ++
 .../doris/analysis/ReplacePartitionClause.java |  5 ++
 .../apache/doris/analysis/ReplaceTableClause.java  |  5 ++
 .../apache/doris/analysis/RollupRenameClause.java  |  5 ++
 .../apache/doris/analysis/TableRenameClause.java   |  5 ++
 regression-test/data/mtmv_p0/test_base_mtmv.out| 39 --
 .../suites/mtmv_p0/test_base_mtmv.groovy   | 91 ++
 32 files changed, 263 insertions(+), 24 deletions(-)





(doris) branch branch-2.1 updated: [Fix](Variant) forbit create variant as key #36555 (#36578)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new c28c243c986 [Fix](Variant) forbit create variant as key #36555 (#36578)
c28c243c986 is described below

commit c28c243c986692814367e0bc7c07fdeccdee59d0
Author: lihangyu <15605149...@163.com>
AuthorDate: Thu Jun 20 20:33:48 2024 +0800

[Fix](Variant) forbit create variant as key #36555 (#36578)
---
 .../plans/commands/info/ColumnDefinition.java  |  3 ++
 regression-test/suites/variant_p0/load.groovy  | 54 +++---
 2 files changed, 40 insertions(+), 17 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/plans/commands/info/ColumnDefinition.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/plans/commands/info/ColumnDefinition.java
index 806aa7cd2aa..77d6040c216 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/plans/commands/info/ColumnDefinition.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/plans/commands/info/ColumnDefinition.java
@@ -226,6 +226,9 @@ public class ColumnDefinition {
 } else if (type.isJsonType()) {
 throw new AnalysisException(
 "JsonType type should not be used in key column[" + 
getName() + "].");
+} else if (type.isVariantType()) {
+throw new AnalysisException(
+"Variant type should not be used in key column[" + 
getName() + "].");
 } else if (type.isMapType()) {
 throw new AnalysisException("Map can only be used in the 
non-key column of"
 + " the duplicate table at present.");
diff --git a/regression-test/suites/variant_p0/load.groovy 
b/regression-test/suites/variant_p0/load.groovy
index 899f7218b8e..572f7ce8ffc 100644
--- a/regression-test/suites/variant_p0/load.groovy
+++ b/regression-test/suites/variant_p0/load.groovy
@@ -255,23 +255,6 @@ suite("regression_test_variant", "nonConcurrent"){
 // b? 7.111  [123,{"xx":1}]  {"b":{"c":456,"e":7.111}}   456
 qt_sql_30 "select v['b']['e'], v['a'], v['b'], v['b']['c'] from 
jsonb_values where cast(v['b']['e'] as double) > 1;"
 
-test {
-sql "select v['a'] from ${table_name} group by v['a']"
-exception("errCode = 2, detailMessage = Doris hll, bitmap, array, 
map, struct, jsonb, variant column must use with specific function, and don't 
support filter, group by or order by")
-}
-
-test {
-sql """
-create table var(
-`content` variant
-)distributed by hash(`content`) buckets 8
-properties(
-  "replication_allocation" = "tag.location.default: 1"
-);
-"""
-exception("errCode = 2, detailMessage = Hash distribution info 
should not contain variant columns")
-}
-
 // 13. sparse columns
 table_name = "sparse_columns"
 create_table table_name
@@ -440,6 +423,43 @@ suite("regression_test_variant", "nonConcurrent"){
 qt_sql_records3 """SELECT value FROM records WHERE   value['text99'] 
MATCH_ALL '来 广州 但是嗯嗯 还 不能 在'  OR (  value['text47'] MATCH_ALL '你 觉得 超 好看 的 动' ) 
OR (  value['text43'] MATCH_ALL ' 楼主 拒绝 了 一个 女生 我 傻逼 吗手' )  LIMIT 0, 100"""
 qt_sql_records4 """SELECT value FROM records WHERE  value['id16'] = 
'39960' AND (  value['text59'] = '非 明显 是 一 付 很 嫌') AND (  value['text99'] = '来 
广州 但是嗯嗯 还 不能 在 ')  """
 qt_sql_records5 """SELECT value FROM records WHERE  value['text3'] 
MATCH_ALL '伊心 是 来 搞笑 的'  LIMIT 0, 100"""
+
+test {
+sql "select v['a'] from ${table_name} group by v['a']"
+exception("errCode = 2, detailMessage = Doris hll, bitmap, array, 
map, struct, jsonb, variant column must use with specific function, and don't 
support filter, group by or order by")
+}
+
+test {
+sql """
+create table var(
+`key` int,
+`content` variant
+)
+DUPLICATE KEY(`key`)
+distributed by hash(`content`) buckets 8
+properties(
+  "replication_allocation" = "tag.location.default: 1"
+);
+"""
+exception("errCode = 2, detailMessage = Hash distribution info 
should not contain variant columns")
+}
+
+ test {
+sql """
+CREATE TABLE `var_as_key` (
+  `key` int NULL,
+  `var` variant NULL
+) ENGINE=OLAP
+DUPLICATE KEY(`key`, `var`)
+COMMENT 'OLAP'
+DISTRIBUTED BY RANDOM BUCKETS 1
+PROPERTIES (
+"replication_allocation" = "tag.location.default: 1"
+);
+""" 
+exc

(doris) branch branch-2.1 updated: [fix](connection) kill connection when meeting Write mysql packet failed error #36559 (#36616)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 3febac1d916 [fix](connection) kill connection when meeting Write mysql 
packet failed error #36559 (#36616)
3febac1d916 is described below

commit 3febac1d916c0219d91c61b8c65feba2b64383fa
Author: Mingyu Chen 
AuthorDate: Thu Jun 20 22:27:01 2024 +0800

[fix](connection) kill connection when meeting Write mysql packet failed 
error #36559 (#36616)

bp #36559
---
 .../apache/doris/common/ConnectionException.java   | 35 ++
 .../java/org/apache/doris/mysql/MysqlChannel.java  |  7 +++--
 .../java/org/apache/doris/qe/ConnectProcessor.java | 17 +++
 .../org/apache/doris/qe/MysqlConnectProcessor.java |  5 ++--
 .../arrowflight/FlightSqlConnectProcessor.java |  3 +-
 5 files changed, 57 insertions(+), 10 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/common/ConnectionException.java 
b/fe/fe-core/src/main/java/org/apache/doris/common/ConnectionException.java
new file mode 100644
index 000..3f1de2ae2b8
--- /dev/null
+++ b/fe/fe-core/src/main/java/org/apache/doris/common/ConnectionException.java
@@ -0,0 +1,35 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.common;
+
+import java.io.IOException;
+
+/**
+ * This is a special exception.
+ * If this exception is thrown, it means that the connection to the server is 
abnormal.
+ * We need to kill the connection actively.
+ */
+public class ConnectionException extends IOException {
+public ConnectionException(String message) {
+super(message);
+}
+
+public ConnectionException(String message, Throwable cause) {
+super(message, cause);
+}
+}
diff --git a/fe/fe-core/src/main/java/org/apache/doris/mysql/MysqlChannel.java 
b/fe/fe-core/src/main/java/org/apache/doris/mysql/MysqlChannel.java
index d22ba393699..392b0587585 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/mysql/MysqlChannel.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/mysql/MysqlChannel.java
@@ -17,6 +17,7 @@
 
 package org.apache.doris.mysql;
 
+import org.apache.doris.common.ConnectionException;
 import org.apache.doris.common.util.NetUtils;
 import org.apache.doris.qe.ConnectContext;
 import org.apache.doris.qe.ConnectProcessor;
@@ -401,11 +402,13 @@ public class MysqlChannel implements BytesChannel {
 protected void realNetSend(ByteBuffer buffer) throws IOException {
 buffer = encryptData(buffer);
 long bufLen = buffer.remaining();
+long start = System.currentTimeMillis();
 long writeLen = Channels.writeBlocking(conn.getSinkChannel(), buffer, 
context.getNetWriteTimeout(),
 TimeUnit.SECONDS);
 if (bufLen != writeLen) {
-throw new IOException("Write mysql packet failed.[write=" + 
writeLen
-+ ", needToWrite=" + bufLen + "]");
+long duration = System.currentTimeMillis() - start;
+throw new ConnectionException("Write mysql packet failed.[write=" 
+ writeLen
++ ", needToWrite=" + bufLen + "], duration: " + duration + 
" ms");
 }
 Channels.flushBlocking(conn.getSinkChannel(), 
context.getNetWriteTimeout(), TimeUnit.SECONDS);
 isSend = true;
diff --git a/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectProcessor.java 
b/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectProcessor.java
index d54b708e818..51911c0 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectProcessor.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectProcessor.java
@@ -32,6 +32,7 @@ import org.apache.doris.catalog.Env;
 import org.apache.doris.catalog.TableIf;
 import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.Config;
+import org.apache.doris.common.ConnectionException;
 import org.apache.doris.common.DdlException;
 import org.apache.doris.common.ErrorCode;
 import org.apache.doris.common.NotImplementedException;
@@ -198,9 +199,11 @@ public abstract class ConnectPro
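The point of the new exception type is catch-clause dispatch: a `ConnectionException` signals "the connection itself is broken, kill it", while any other `IOException` is an ordinary failure. A minimal sketch of that pattern (simplified, hypothetical `send`/`handle` methods, not the actual `ConnectProcessor` logic):

```java
import java.io.IOException;

public class KillOnConnectionError {
    // Subclassing IOException means existing 'throws IOException' signatures still compile.
    static class ConnectionException extends IOException {
        ConnectionException(String message) {
            super(message);
        }
    }

    static void send(boolean channelBroken) throws IOException {
        if (channelBroken) {
            // Mirrors realNetSend(): a short write means the channel is unusable.
            throw new ConnectionException("Write mysql packet failed.");
        }
    }

    static String handle(boolean channelBroken) {
        try {
            send(channelBroken);
            return "ok";
        } catch (ConnectionException e) {
            // Connection-level failure: tear down the session actively.
            return "kill connection";
        } catch (IOException e) {
            // Ordinary I/O failure: report it, keep the connection alive.
            return "report error";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(true));  // kill connection
        System.out.println(handle(false)); // ok
    }
}
```

Ordering matters: the more specific `ConnectionException` clause must come before the general `IOException` clause, or the compiler rejects it as unreachable.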

(doris) branch branch-2.0 updated: [fix](index compaction)Change index_id from int32 to int64 to avoid overflow (#36625)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new b568702d6a9 [fix](index compaction)Change index_id from int32 to int64 
to avoid overflow (#36625)
b568702d6a9 is described below

commit b568702d6a9c852585305591aea1f12d116ae3be
Author: qiye 
AuthorDate: Thu Jun 20 22:39:47 2024 +0800

[fix](index compaction)Change index_id from int32 to int64 to avoid 
overflow (#36625)

master is fixed by #30145
---
 be/src/olap/rowset/segment_v2/inverted_index_compaction.cpp | 2 +-
 be/src/olap/rowset/segment_v2/inverted_index_compaction.h   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/be/src/olap/rowset/segment_v2/inverted_index_compaction.cpp 
b/be/src/olap/rowset/segment_v2/inverted_index_compaction.cpp
index b04edd6eb83..38d24c14c5b 100644
--- a/be/src/olap/rowset/segment_v2/inverted_index_compaction.cpp
+++ b/be/src/olap/rowset/segment_v2/inverted_index_compaction.cpp
@@ -24,7 +24,7 @@
 #include "util/debug_points.h"
 
 namespace doris::segment_v2 {
-Status compact_column(int32_t index_id, int src_segment_num, int 
dest_segment_num,
+Status compact_column(int64_t index_id, int src_segment_num, int 
dest_segment_num,
   std::vector src_index_files,
   std::vector dest_index_files, const 
io::FileSystemSPtr& fs,
   std::string index_writer_path, std::string tablet_path,
diff --git a/be/src/olap/rowset/segment_v2/inverted_index_compaction.h 
b/be/src/olap/rowset/segment_v2/inverted_index_compaction.h
index f615192b199..bfcf1b1b616 100644
--- a/be/src/olap/rowset/segment_v2/inverted_index_compaction.h
+++ b/be/src/olap/rowset/segment_v2/inverted_index_compaction.h
@@ -25,7 +25,7 @@
 namespace doris {
 
 namespace segment_v2 {
-Status compact_column(int32_t index_id, int src_segment_num, int 
dest_segment_num,
+Status compact_column(int64_t index_id, int src_segment_num, int 
dest_segment_num,
   std::vector src_index_files,
   std::vector dest_index_files, const 
io::FileSystemSPtr& fs,
   std::string index_writer_path, std::string tablet_path,
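The patch above widens `index_id` from `int32_t` to `int64_t`; the failure mode it avoids is the standard narrowing wrap-around when a 64-bit id passes through a 32-bit parameter. A minimal Java illustration (hypothetical id value, not taken from Doris):

```java
public class IndexIdOverflow {
    public static void main(String[] args) {
        long indexId = 3_000_000_000L;  // a valid id that exceeds Integer.MAX_VALUE
        int truncated = (int) indexId;  // narrowing cast keeps only the low 32 bits
        System.out.println(truncated);  // negative: no longer a valid id

        // Keeping the parameter 64-bit preserves the value end to end.
        long preserved = indexId;
        System.out.println(preserved == indexId); // true
    }
}
```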





(doris) branch branch-2.0 updated: [fix](topn-opt) remove redundant check for fetch phase (#36631)

2024-06-20 Thread kxiao
This is an automated email from the ASF dual-hosted git repository.

kxiao pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 920b5d13a9a [fix](topn-opt) remove redundant check for fetch phase 
(#36631)
920b5d13a9a is described below

commit 920b5d13a9a3425398defce73d374e69887e1911
Author: lihangyu <15605149...@163.com>
AuthorDate: Thu Jun 20 23:08:23 2024 +0800

[fix](topn-opt) remove redundant check for fetch phase (#36631)
---
 be/src/exec/rowid_fetcher.cpp | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/be/src/exec/rowid_fetcher.cpp b/be/src/exec/rowid_fetcher.cpp
index b986830888e..c5f775846fb 100644
--- a/be/src/exec/rowid_fetcher.cpp
+++ b/be/src/exec/rowid_fetcher.cpp
@@ -230,10 +230,6 @@ Status RowIDFetcher::fetch(const vectorized::ColumnPtr& 
column_row_ids,
 std::vector rows_locs;
 rows_locs.reserve(rows_locs.size());
 RETURN_IF_ERROR(_merge_rpc_results(mget_req, resps, cntls, res_block, 
&rows_locs));
-if (rows_locs.size() != res_block->rows() || rows_locs.size() != 
column_row_ids->size()) {
-return Status::InternalError("Miss matched return row loc count {}, 
expected {}, input {}",
- rows_locs.size(), res_block->rows(), 
column_row_ids->size());
-}
 // Final sort by row_ids sequence, since row_ids is already sorted if need
 std::map positions;
 for (size_t i = 0; i < rows_locs.size(); ++i) {
@@ -250,11 +246,12 @@ Status RowIDFetcher::fetch(const vectorized::ColumnPtr& 
column_row_ids,
 reinterpret_cast(column_row_ids->get_data_at(i).data);
 permutation.push_back(positions[*location]);
 }
-size_t num_rows = res_block->rows();
 for (size_t i = 0; i < res_block->columns(); ++i) {
 res_block->get_by_position(i).column =
-res_block->get_by_position(i).column->permute(permutation, 
num_rows);
+res_block->get_by_position(i).column->permute(permutation, 
permutation.size());
 }
+// Check row consistency
+RETURN_IF_CATCH_EXCEPTION(res_block->check_number_of_rows());
 // shrink for char type
 std::vector char_type_idx;
 for (size_t i = 0; i < _fetch_option.desc->slots().size(); i++) {
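The final reorder step in the diff above (`column->permute(permutation, permutation.size())`) is a plain permutation apply: output row `i` is fetched row `permutation[i]`. A small sketch of the idea with plain arrays instead of columns (hypothetical data, not the Doris API):

```java
import java.util.Arrays;

public class PermuteDemo {
    // out[i] = in[perm[i]]: put RPC-returned rows back into requested row-id order.
    static String[] permute(String[] in, int[] perm) {
        String[] out = new String[perm.length];
        for (int i = 0; i < perm.length; i++) {
            out[i] = in[perm[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        String[] fetched = {"rowC", "rowA", "rowB"}; // order the RPCs happened to return
        int[] permutation = {1, 2, 0};               // position of each requested row id
        System.out.println(Arrays.toString(permute(fetched, permutation)));
        // [rowA, rowB, rowC]
    }
}
```

Because every output column is permuted with the same index array, the row counts cannot diverge afterwards, which is why a single `check_number_of_rows()` at the end replaces the earlier per-source length comparison.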





(doris) branch branch-2.1 updated: [improve](fe) Support to config max msg/frame size of the thrift server (#36594)

2024-06-20 Thread dataroaring
This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 58cc1dca7f5 [improve](fe) Support to config max msg/frame size of the 
thrift server (#36594)
58cc1dca7f5 is described below

commit 58cc1dca7f562c20536b191331ef2c38afe8debf
Author: walter 
AuthorDate: Fri Jun 21 00:15:15 2024 +0800

[improve](fe) Support to config max msg/frame size of the thrift server 
(#36594)

Cherry-pick #35845
---
 .../main/java/org/apache/doris/common/Config.java  | 10 
 .../java/org/apache/doris/common/ThriftServer.java | 61 ++
 2 files changed, 60 insertions(+), 11 deletions(-)

diff --git a/fe/fe-common/src/main/java/org/apache/doris/common/Config.java 
b/fe/fe-common/src/main/java/org/apache/doris/common/Config.java
index f8ff3cd5d47..6adf03c56cd 100644
--- a/fe/fe-common/src/main/java/org/apache/doris/common/Config.java
+++ b/fe/fe-common/src/main/java/org/apache/doris/common/Config.java
@@ -405,6 +405,16 @@ public class Config extends ConfigBase {
 "The connection timeout of thrift client, in milliseconds. 0 means 
no timeout."})
 public static int thrift_client_timeout_ms = 0;
 
+// The default value is inherited from org.apache.thrift.TConfiguration
+@ConfField(description = {"thrift server 接收请求大小的上限",
+"The maximum size of a (received) message of the thrift server, in 
bytes"})
+public static int thrift_max_message_size = 100 * 1024 * 1024;
+
+// The default value is inherited from org.apache.thrift.TConfiguration
+@ConfField(description = {"thrift server transport 接收的每帧数据大小的上限",
+"The limits of the size of one frame of thrift server transport"})
+public static int thrift_max_frame_size = 16384000;
+
 @ConfField(description = {"thrift server 的 backlog 数量。"
 + "如果调大这个值,则需同时调整 /proc/sys/net/core/somaxconn 的值",
 "The backlog number of thrift server. "
diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/ThriftServer.java 
b/fe/fe-core/src/main/java/org/apache/doris/common/ThriftServer.java
index 2396dc95074..f18dbb378a1 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/common/ThriftServer.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/common/ThriftServer.java
@@ -23,6 +23,7 @@ import org.apache.doris.thrift.TNetworkAddress;
 import com.google.common.collect.Sets;
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Logger;
+import org.apache.thrift.TConfiguration;
 import org.apache.thrift.TProcessor;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.server.TServer;
@@ -31,10 +32,13 @@ import org.apache.thrift.server.TThreadPoolServer;
 import org.apache.thrift.server.TThreadedSelectorServer;
 import org.apache.thrift.transport.TNonblockingServerSocket;
 import org.apache.thrift.transport.TServerSocket;
+import org.apache.thrift.transport.TSocket;
 import org.apache.thrift.transport.TTransportException;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
+import java.net.ServerSocket;
+import java.net.Socket;
 import java.util.Set;
 import java.util.concurrent.ThreadPoolExecutor;
 
@@ -98,8 +102,9 @@ public class ThriftServer {
 
 private void createThreadedServer() throws TTransportException {
 TThreadedSelectorServer.Args args = new TThreadedSelectorServer.Args(
-new TNonblockingServerSocket(port, 
Config.thrift_client_timeout_ms)).protocolFactory(
-new TBinaryProtocol.Factory()).processor(processor);
+new TNonblockingServerSocket(port, 
Config.thrift_client_timeout_ms))
+.protocolFactory(new TBinaryProtocol.Factory())
+.processor(processor);
 ThreadPoolExecutor threadPoolExecutor = 
ThreadPoolManager.newDaemonCacheThreadPool(
 Config.thrift_server_max_worker_threads, "thrift-server-pool", 
true);
 args.executorService(threadPoolExecutor);
@@ -111,19 +116,19 @@ public class ThriftServer {
 
 if (FrontendOptions.isBindIPV6()) {
 socketTransportArgs = new TServerSocket.ServerSocketTransportArgs()
-.bindAddr(new InetSocketAddress("::0", port))
-.clientTimeout(Config.thrift_client_timeout_ms)
-.backlog(Config.thrift_backlog_num);
+.bindAddr(new InetSocketAddress("::0", port))
+.clientTimeout(Config.thrift_client_timeout_ms)
+.backlog(Config.thrift_backlog_num);
 } else {
 socketTransportArgs = new TServerSocket.ServerSocketTransportArgs()
-.bindAddr(new InetSocketAddress("0.0.0.0", port))
-.clientTimeout(Config.thrift_client_timeout_ms)
-.backlog(Config.thrift_backlog_num);
+   

(doris) branch branch-2.0 updated: [Pick 2.0](inverted index) fix wrong opt for pk no need read data (#36633)

2024-06-20 Thread airborne
This is an automated email from the ASF dual-hosted git repository.

airborne pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new 1909c454e04 [Pick 2.0](inverted index) fix wrong opt for pk no need 
read data (#36633)
1909c454e04 is described below

commit 1909c454e04a851c2d12429933b215b70c9eb61d
Author: airborne12 
AuthorDate: Fri Jun 21 00:57:00 2024 +0800

[Pick 2.0](inverted index) fix wrong opt for pk no need read data (#36633)

## Proposed changes

Pick from #36618
---
 be/src/olap/rowset/segment_v2/segment_iterator.cpp |  3 +
 .../test_pk_no_need_read_data.out  | 13 +
 .../test_pk_no_need_read_data.groovy   | 66 ++
 3 files changed, 82 insertions(+)

diff --git a/be/src/olap/rowset/segment_v2/segment_iterator.cpp 
b/be/src/olap/rowset/segment_v2/segment_iterator.cpp
index 977ca340ff8..d57f24ef5ca 100644
--- a/be/src/olap/rowset/segment_v2/segment_iterator.cpp
+++ b/be/src/olap/rowset/segment_v2/segment_iterator.cpp
@@ -2555,6 +2555,9 @@ bool SegmentIterator::_no_need_read_key_data(ColumnId 
cid, vectorized::MutableCo
 if (cids.contains(cid)) {
 return false;
 }
+if 
(_column_pred_in_remaining_vconjunct.contains(_opts.tablet_schema->column(cid).name()))
 {
+return false;
+}
 
 if (column->is_nullable()) {
 auto* nullable_col_ptr = 
reinterpret_cast(column.get());
diff --git 
a/regression-test/data/inverted_index_p0/test_pk_no_need_read_data.out 
b/regression-test/data/inverted_index_p0/test_pk_no_need_read_data.out
new file mode 100644
index 000..b38181b1845
--- /dev/null
+++ b/regression-test/data/inverted_index_p0/test_pk_no_need_read_data.out
@@ -0,0 +1,13 @@
+-- This file is automatically generated. You should know what you did if you 
want to edit this
+-- !select_0 --
+1
+
+-- !select_1 --
+1
+
+-- !select_2 --
+1
+
+-- !select_3 --
+1
+
diff --git 
a/regression-test/suites/inverted_index_p0/test_pk_no_need_read_data.groovy 
b/regression-test/suites/inverted_index_p0/test_pk_no_need_read_data.groovy
new file mode 100644
index 000..4aa969debda
--- /dev/null
+++ b/regression-test/suites/inverted_index_p0/test_pk_no_need_read_data.groovy
@@ -0,0 +1,66 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+suite("test_pk_no_need_read_data", "p0"){
+def table1 = "test_pk_no_need_read_data"
+
+sql "drop table if exists ${table1}"
+
+sql """
+   CREATE TABLE IF NOT EXISTS `${table1}` (
+  `date` date NULL COMMENT "",
+  `city` varchar(20) NULL COMMENT "",
+  `addr` varchar(20) NULL COMMENT "",
+  `name` varchar(20) NULL COMMENT "",
+  `compy` varchar(20) NULL COMMENT "",
+  `n` int NULL COMMENT "",
+  INDEX idx_city(city) USING INVERTED,
+  INDEX idx_addr(addr) USING INVERTED PROPERTIES("parser"="english"),
+  INDEX idx_n(n) USING INVERTED
+) ENGINE=OLAP
+DUPLICATE KEY(`date`)
+COMMENT "OLAP"
+DISTRIBUTED BY HASH(`date`) BUCKETS 1
+PROPERTIES (
+"replication_allocation" = "tag.location.default: 1",
+"in_memory" = "false",
+"storage_format" = "V2"
+)
+"""
+
+sql """insert into ${table1} values
+('2017-10-01',null,'addr qie3','yy','lj',100),
+('2018-10-01',null,'hehe',null,'lala',200),
+('2019-10-01','beijing','addr xuanwu','wugui',null,300),
+('2020-10-01','beijing','addr fengtai','fengtai1','fengtai2',null),
+('2021-10-01','beijing','addr chaoyang','wangjing','donghuqu',500),
+('2022-10-01','shanghai','hehe',null,'haha',null),
+('2023-10-01','tengxun','qie','addr gg','lj',null),
+('2024-10-01','tengxun2','qie',null,'lj',800)
+"""
+
+// case1: enable count on index
+sql "set enable_count_on_index_pushdown = true"
+
+qt_select_0 "SELECT COUNT() FROM ${table1} WHERE date='2017-10-01'"
+qt_select_1 "SELECT COUNT() FROM ${table1} WHERE year(date)='2017'"
+
+// case1: disable count on index
+sql "set enable_count_on_index_pushdown = false"
+
+qt_select_2 "SELECT COUNT() FROM ${ta

(doris) branch branch-2.1 updated: [Pick 2.1](inverted index) fix wrong opt for pk no need read data (#36634)

2024-06-20 Thread airborne
This is an automated email from the ASF dual-hosted git repository.

airborne pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 8105dc7de87 [Pick 2.1](inverted index) fix wrong opt for pk no need 
read data (#36634)
8105dc7de87 is described below

commit 8105dc7de87daebc623d37fd2b1501bb4a93316c
Author: airborne12 
AuthorDate: Fri Jun 21 00:57:23 2024 +0800

[Pick 2.1](inverted index) fix wrong opt for pk no need read data (#36634)

## Proposed changes

Pick from #36618
---
 be/src/olap/rowset/segment_v2/segment_iterator.cpp |  3 +
 .../test_pk_no_need_read_data.out  | 13 +
 .../test_pk_no_need_read_data.groovy   | 66 ++
 3 files changed, 82 insertions(+)

diff --git a/be/src/olap/rowset/segment_v2/segment_iterator.cpp 
b/be/src/olap/rowset/segment_v2/segment_iterator.cpp
index f93d6264058..614604494ae 100644
--- a/be/src/olap/rowset/segment_v2/segment_iterator.cpp
+++ b/be/src/olap/rowset/segment_v2/segment_iterator.cpp
@@ -2774,6 +2774,9 @@ bool SegmentIterator::_no_need_read_key_data(ColumnId 
cid, vectorized::MutableCo
 if (cids.contains(cid)) {
 return false;
 }
+if 
(_column_pred_in_remaining_vconjunct.contains(_opts.tablet_schema->column(cid).name()))
 {
+return false;
+}
 
 if (column->is_nullable()) {
 auto* nullable_col_ptr = 
reinterpret_cast(column.get());
diff --git 
a/regression-test/data/inverted_index_p0/test_pk_no_need_read_data.out 
b/regression-test/data/inverted_index_p0/test_pk_no_need_read_data.out
new file mode 100644
index 000..b38181b1845
--- /dev/null
+++ b/regression-test/data/inverted_index_p0/test_pk_no_need_read_data.out
@@ -0,0 +1,13 @@
+-- This file is automatically generated. You should know what you did if you 
want to edit this
+-- !select_0 --
+1
+
+-- !select_1 --
+1
+
+-- !select_2 --
+1
+
+-- !select_3 --
+1
+
diff --git 
a/regression-test/suites/inverted_index_p0/test_pk_no_need_read_data.groovy 
b/regression-test/suites/inverted_index_p0/test_pk_no_need_read_data.groovy
new file mode 100644
index 000..4aa969debda
--- /dev/null
+++ b/regression-test/suites/inverted_index_p0/test_pk_no_need_read_data.groovy
@@ -0,0 +1,66 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+suite("test_pk_no_need_read_data", "p0"){
+def table1 = "test_pk_no_need_read_data"
+
+sql "drop table if exists ${table1}"
+
+sql """
+   CREATE TABLE IF NOT EXISTS `${table1}` (
+  `date` date NULL COMMENT "",
+  `city` varchar(20) NULL COMMENT "",
+  `addr` varchar(20) NULL COMMENT "",
+  `name` varchar(20) NULL COMMENT "",
+  `compy` varchar(20) NULL COMMENT "",
+  `n` int NULL COMMENT "",
+  INDEX idx_city(city) USING INVERTED,
+  INDEX idx_addr(addr) USING INVERTED PROPERTIES("parser"="english"),
+  INDEX idx_n(n) USING INVERTED
+) ENGINE=OLAP
+DUPLICATE KEY(`date`)
+COMMENT "OLAP"
+DISTRIBUTED BY HASH(`date`) BUCKETS 1
+PROPERTIES (
+"replication_allocation" = "tag.location.default: 1",
+"in_memory" = "false",
+"storage_format" = "V2"
+)
+"""
+
+sql """insert into ${table1} values
+('2017-10-01',null,'addr qie3','yy','lj',100),
+('2018-10-01',null,'hehe',null,'lala',200),
+('2019-10-01','beijing','addr xuanwu','wugui',null,300),
+('2020-10-01','beijing','addr fengtai','fengtai1','fengtai2',null),
+('2021-10-01','beijing','addr chaoyang','wangjing','donghuqu',500),
+('2022-10-01','shanghai','hehe',null,'haha',null),
+('2023-10-01','tengxun','qie','addr gg','lj',null),
+('2024-10-01','tengxun2','qie',null,'lj',800)
+"""
+
+// case1: enable count on index
+sql "set enable_count_on_index_pushdown = true"
+
+qt_select_0 "SELECT COUNT() FROM ${table1} WHERE date='2017-10-01'"
+qt_select_1 "SELECT COUNT() FROM ${table1} WHERE year(date)='2017'"
+
+// case1: disable count on index
+sql "set enable_count_on_index_pushdown = false"
+
+qt_select_2 "SELECT COUNT() FROM ${ta

(doris) branch master updated: [fix](split) FileSystemCacheKey are always different in overload equals (#36432)

2024-06-20 Thread ashingau
This is an automated email from the ASF dual-hosted git repository.

ashingau pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new ed89565fcbe [fix](split) FileSystemCacheKey are always different in 
overload equals (#36432)
ed89565fcbe is described below

commit ed89565fcbe3fa712a35a0fc30114d3ff65fe076
Author: Ashin Gau 
AuthorDate: Fri Jun 21 08:56:50 2024 +0800

[fix](split) FileSystemCacheKey are always different in overload equals 
(#36432)

## Proposed changes

## Fixed bugs introduced by #33937
1. `FileSystemCacheKey.equals()` compared properties with `==`, so a new
file system was created for every partition
2. `dfsFileSystem` was not synchronized, so more file systems were
created than needed
3. `jobConf.iterator()` produced more than 2000 key-value pairs
---
 .../doris/datasource/hive/HiveMetaStoreCache.java  | 20 +--
 .../java/org/apache/doris/fs/FileSystemCache.java  | 40 +-
 .../org/apache/doris/fs/remote/S3FileSystem.java   | 24 +++--
 .../apache/doris/fs/remote/dfs/DFSFileSystem.java  | 35 +--
 4 files changed, 70 insertions(+), 49 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HiveMetaStoreCache.java
 
b/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HiveMetaStoreCache.java
index b76b4675dee..f402d27cf6d 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HiveMetaStoreCache.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HiveMetaStoreCache.java
@@ -349,11 +349,11 @@ public class HiveMetaStoreCache {
 List partitionValues,
 String bindBrokerName) throws UserException {
 FileCacheValue result = new FileCacheValue();
-Map properties = new HashMap<>();
-jobConf.iterator().forEachRemaining(e -> properties.put(e.getKey(), 
e.getValue()));
 RemoteFileSystem fs = 
Env.getCurrentEnv().getExtMetaCacheMgr().getFsCache().getRemoteFileSystem(
 new 
FileSystemCache.FileSystemCacheKey(LocationPath.getFSIdentity(
-location, bindBrokerName), properties, 
bindBrokerName));
+location, bindBrokerName),
+catalog.getCatalogProperty().getProperties(),
+bindBrokerName, jobConf));
 result.setSplittable(HiveUtil.isSplittable(fs, inputFormat, location));
 // For Tez engine, it may generate subdirectoies for "union" query.
 // So there may be files and directories in the table directory at the 
same time. eg:
@@ -781,12 +781,12 @@ public class HiveMetaStoreCache {
 return Collections.emptyList();
 }
 String acidVersionPath = new Path(baseOrDeltaPath, 
"_orc_acid_version").toUri().toString();
-Map properties = new HashMap<>();
-jobConf.iterator().forEachRemaining(e -> 
properties.put(e.getKey(), e.getValue()));
 RemoteFileSystem fs = 
Env.getCurrentEnv().getExtMetaCacheMgr().getFsCache().getRemoteFileSystem(
 new FileSystemCache.FileSystemCacheKey(
 
LocationPath.getFSIdentity(baseOrDeltaPath.toUri().toString(),
-bindBrokerName), properties, 
bindBrokerName));
+bindBrokerName),
+
catalog.getCatalogProperty().getProperties(),
+bindBrokerName, jobConf));
 Status status = fs.exists(acidVersionPath);
 if (status != Status.OK) {
 if (status.getErrCode() == ErrCode.NOT_FOUND) {
@@ -806,12 +806,10 @@ public class HiveMetaStoreCache {
 List deleteDeltas = new ArrayList<>();
 for (AcidUtils.ParsedDelta delta : 
directory.getCurrentDirectories()) {
 String location = delta.getPath().toString();
-Map properties = new HashMap<>();
-jobConf.iterator().forEachRemaining(e -> 
properties.put(e.getKey(), e.getValue()));
 RemoteFileSystem fs = 
Env.getCurrentEnv().getExtMetaCacheMgr().getFsCache().getRemoteFileSystem(
 new FileSystemCache.FileSystemCacheKey(
 LocationPath.getFSIdentity(location, 
bindBrokerName),
-properties, bindBrokerName));
+
catalog.getCatalogProperty().getProperties(), bindBrokerName, jobConf));
 List remoteFiles = new ArrayList<>();
 Status status = fs.listFiles(location, false, remoteFiles);
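The first bug fixed above is the classic reference-vs-value comparison: two `Map` instances with identical entries are `equals()` but never `==`, so a cache key comparing its properties with `==` misses the cache on every lookup and rebuilds the file system. A minimal sketch (hypothetical `CacheKey` class, not the Doris one):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class CacheKeyDemo {
    static final class CacheKey {
        final String fsIdentity;
        final Map<String, String> properties;

        CacheKey(String fsIdentity, Map<String, String> properties) {
            this.fsIdentity = fsIdentity;
            this.properties = properties;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof CacheKey)) return false;
            CacheKey other = (CacheKey) o;
            // properties.equals(...) compares contents; '==' would compare references
            // and fail for maps rebuilt on every call.
            return fsIdentity.equals(other.fsIdentity)
                    && properties.equals(other.properties);
        }

        @Override
        public int hashCode() {
            return Objects.hash(fsIdentity, properties);
        }
    }

    public static void main(String[] args) {
        Map<String, String> a = new HashMap<>();
        a.put("fs.defaultFS", "hdfs://ns1");
        Map<String, String> b = new HashMap<>(a); // same content, different object

        System.out.println(a == b);       // false: distinct references
        System.out.println(a.equals(b));  // true: equal contents
        System.out.println(new CacheKey("hdfs", a).equals(new CacheKey("hdfs", b))); // true
    }
}
```

The related change of passing `catalog.getCatalogProperty().getProperties()` instead of dumping `jobConf.iterator()` also keeps the key small, so `equals()`/`hashCode()` no longer walk 2000+ entries per lookup.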
  

(doris) branch master updated: Revert "[Improvement](sink) optimization for parallel result sink (#3… (#36628)

2024-06-20 Thread gabriellee
This is an automated email from the ASF dual-hosted git repository.

gabriellee pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 46457bcecd4 Revert "[Improvement](sink) optimization for parallel 
result sink (#3… (#36628)
46457bcecd4 is described below

commit 46457bcecd4dcd1080310546ddf618ff1b4fb0f5
Author: Pxl 
AuthorDate: Fri Jun 21 09:51:42 2024 +0800

Revert "[Improvement](sink) optimization for parallel result sink (#3… 
(#36628)

…6305)"

This reverts commit fdb5891c3eccefad7a354436dfb0eae82da5bd6e.
---
 be/src/pipeline/exec/result_file_sink_operator.cpp |   5 +-
 be/src/pipeline/exec/result_file_sink_operator.h   |   2 +-
 be/src/pipeline/exec/result_sink_operator.cpp  |  13 +-
 be/src/pipeline/exec/result_sink_operator.h|   2 +-
 be/src/pipeline/local_exchange/local_exchanger.cpp |  22 +-
 be/src/runtime/buffer_control_block.cpp| 258 +++--
 be/src/runtime/buffer_control_block.h  |  33 ++-
 be/src/runtime/result_buffer_mgr.cpp   |   6 +-
 be/src/runtime/result_buffer_mgr.h |   3 +-
 be/src/runtime/result_writer.h |   2 +-
 be/src/service/point_query_executor.cpp|  14 +-
 be/src/service/point_query_executor.h  |   2 +-
 be/src/vec/sink/varrow_flight_result_writer.cpp|   4 +-
 be/src/vec/sink/varrow_flight_result_writer.h  |   2 +-
 be/src/vec/sink/vmysql_result_writer.cpp   |   4 +-
 be/src/vec/sink/vmysql_result_writer.h |   2 +-
 be/src/vec/sink/writer/async_result_writer.cpp |   2 +-
 .../sink/writer/iceberg/viceberg_table_writer.cpp  |   2 +-
 .../sink/writer/iceberg/viceberg_table_writer.h|   2 +-
 be/src/vec/sink/writer/vfile_result_writer.cpp |   5 +-
 be/src/vec/sink/writer/vfile_result_writer.h   |   2 +-
 be/src/vec/sink/writer/vhive_table_writer.cpp  |   2 +-
 be/src/vec/sink/writer/vhive_table_writer.h|   4 +-
 be/src/vec/sink/writer/vjdbc_table_writer.cpp  |   2 +-
 be/src/vec/sink/writer/vjdbc_table_writer.h|   2 +-
 be/src/vec/sink/writer/vmysql_table_writer.cpp |   2 +-
 be/src/vec/sink/writer/vmysql_table_writer.h   |   2 +-
 be/src/vec/sink/writer/vodbc_table_writer.cpp  |   2 +-
 be/src/vec/sink/writer/vodbc_table_writer.h|   2 +-
 be/src/vec/sink/writer/vtablet_writer.cpp  |   4 +-
 be/src/vec/sink/writer/vtablet_writer.h|   2 +-
 be/src/vec/sink/writer/vtablet_writer_v2.cpp   |   4 +-
 be/src/vec/sink/writer/vtablet_writer_v2.h |   2 +-
 .../serde/data_type_serde_mysql_test.cpp   |   2 +-
 34 files changed, 213 insertions(+), 206 deletions(-)

diff --git a/be/src/pipeline/exec/result_file_sink_operator.cpp 
b/be/src/pipeline/exec/result_file_sink_operator.cpp
index 029bea7494e..0cd14899f52 100644
--- a/be/src/pipeline/exec/result_file_sink_operator.cpp
+++ b/be/src/pipeline/exec/result_file_sink_operator.cpp
@@ -99,8 +99,7 @@ Status ResultFileSinkLocalState::init(RuntimeState* state, 
LocalSinkStateInfo& i
 if (p._is_top_sink) {
 // create sender
 RETURN_IF_ERROR(state->exec_env()->result_mgr()->create_sender(
-state->fragment_instance_id(), p._buf_size, &_sender, 
state->execution_timeout(),
-state->batch_size()));
+state->fragment_instance_id(), p._buf_size, &_sender, 
state->execution_timeout()));
 // create writer
 _writer.reset(new (std::nothrow) vectorized::VFileResultWriter(
 p._file_opts.get(), p._storage_type, 
state->fragment_instance_id(),
@@ -176,7 +175,7 @@ Status ResultFileSinkLocalState::close(RuntimeState* state, 
Status exec_status)
 // close sender, this is normal path end
 if (_sender) {
 _sender->update_return_rows(_writer == nullptr ? 0 : 
_writer->get_written_rows());
-RETURN_IF_ERROR(_sender->close(state->fragment_instance_id(), 
final_status));
+RETURN_IF_ERROR(_sender->close(final_status));
 }
 state->exec_env()->result_mgr()->cancel_at_time(
 time(nullptr) + config::result_buffer_cancelled_interval_time,
diff --git a/be/src/pipeline/exec/result_file_sink_operator.h 
b/be/src/pipeline/exec/result_file_sink_operator.h
index 7623dae7fea..4fa31f615ce 100644
--- a/be/src/pipeline/exec/result_file_sink_operator.h
+++ b/be/src/pipeline/exec/result_file_sink_operator.h
@@ -107,7 +107,7 @@ private:
 
 // Owned by the RuntimeState.
 RowDescriptor _output_row_descriptor;
-int _buf_size = 4096; // Allocated from _pool
+int _buf_size = 1024; // Allocated from _pool
 bool _is_top_sink = true;
 std::string _header;
 std::string _header_type;
diff --git a/be/src/pipeline/exec/result_sink_operator.cpp 
b/be/src/pipeline/exec/result_sink_operator.cpp
index 378fea18eea..24c5

(doris) branch master updated (46457bcecd4 -> 1a1baa31d7a)

2024-06-20 Thread hellostephen

hellostephen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 46457bcecd4 Revert "[Improvement](sink) optimization for parallel 
result sink (#3… (#36628)
 add 1a1baa31d7a [api](cache) Add HTTP API to clear data cache (#36599)

No new revisions were added by this update.

Summary of changes:
 .../{health_action.cpp => clear_cache_action.cpp} | 19 ++-
 .../action/{health_action.h => clear_cache_action.h}  |  7 +++
 be/src/runtime/memory/cache_manager.cpp   |  7 +++
 be/src/runtime/memory/cache_manager.h |  1 +
 be/src/service/http_service.cpp   |  6 ++
 5 files changed, 23 insertions(+), 17 deletions(-)
 copy be/src/http/action/{health_action.cpp => clear_cache_action.cpp} (72%)
 copy be/src/http/action/{health_action.h => clear_cache_action.h} (87%)


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch master updated (1a1baa31d7a -> 5f9242c6a3f)

2024-06-20 Thread jianliangqi

jianliangqi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 1a1baa31d7a [api](cache) Add HTTP API to clear data cache (#36599)
 add 5f9242c6a3f [fix](inverted index) implementation of match_regexp 
function without index (#36471)

No new revisions were added by this update.

Summary of changes:
 be/src/vec/functions/match.cpp | 92 +-
 be/src/vec/functions/match.h   |  5 +-
 ...st_index_delete.out => test_no_index_match.out} | 10 +--
 ...ch_regexp.groovy => test_no_index_match.groovy} | 63 ---
 4 files changed, 127 insertions(+), 43 deletions(-)
 copy regression-test/data/inverted_index_p0/{test_index_delete.out => 
test_no_index_match.out} (89%)
 copy regression-test/suites/inverted_index_p0/{test_index_match_regexp.groovy 
=> test_no_index_match.groovy} (62%)





(doris) branch master updated: [fix](inverted index) fixed in_list condition not indexed on pipelinex (#36565)

2024-06-20 Thread jianliangqi

jianliangqi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new d545eb3865c [fix](inverted index) fixed in_list condition not indexed 
on pipelinex (#36565)
d545eb3865c is described below

commit d545eb3865c77d16304c7a0d56003f487d33a5f3
Author: zzzxl <33418555+zzzxl1...@users.noreply.github.com>
AuthorDate: Fri Jun 21 10:14:00 2024 +0800

[fix](inverted index) fixed in_list condition not indexed on pipelinex 
(#36565)
---
 be/src/exec/olap_utils.h   |  4 +-
 be/src/olap/rowset/segment_v2/segment_iterator.cpp |  9 +++
 be/src/pipeline/exec/scan_operator.cpp | 93 +++---
 be/src/pipeline/exec/scan_operator.h   | 23 --
 .../test_index_inlist_fault_injection.out  | 19 +
 .../test_index_inlist_fault_injection.groovy   | 93 ++
 6 files changed, 203 insertions(+), 38 deletions(-)

diff --git a/be/src/exec/olap_utils.h b/be/src/exec/olap_utils.h
index d1a1be81f5d..ddf8562fea1 100644
--- a/be/src/exec/olap_utils.h
+++ b/be/src/exec/olap_utils.h
@@ -117,9 +117,9 @@ inline SQLFilterOp to_olap_filter_type(const std::string& 
function_name, bool op
 return opposite ? FILTER_NOT_IN : FILTER_IN;
 } else if (function_name == "ne") {
 return opposite ? FILTER_IN : FILTER_NOT_IN;
-} else if (function_name == "in_list") {
+} else if (function_name == "in") {
 return opposite ? FILTER_NOT_IN : FILTER_IN;
-} else if (function_name == "not_in_list") {
+} else if (function_name == "not_in") {
 return opposite ? FILTER_IN : FILTER_NOT_IN;
 } else {
 DCHECK(false) << "Function Name: " << function_name;
diff --git a/be/src/olap/rowset/segment_v2/segment_iterator.cpp 
b/be/src/olap/rowset/segment_v2/segment_iterator.cpp
index 37df15d6939..f0c3f8f4920 100644
--- a/be/src/olap/rowset/segment_v2/segment_iterator.cpp
+++ b/be/src/olap/rowset/segment_v2/segment_iterator.cpp
@@ -2403,6 +2403,15 @@ Status 
SegmentIterator::_next_batch_internal(vectorized::Block* block) {
 return Status::EndOfFile("no more data in segment");
 }
 
+DBUG_EXECUTE_IF("segment_iterator._rowid_result_for_index", {
+for (auto& iter : _rowid_result_for_index) {
+if (iter.second.first) {
+return Status::Error(
+"_rowid_result_for_index exists true");
+}
+}
+})
+
 if (!_is_need_vec_eval && !_is_need_short_eval && !_is_need_expr_eval) {
 if (_non_predicate_columns.empty()) {
 return Status::InternalError("_non_predicate_columns is empty");
diff --git a/be/src/pipeline/exec/scan_operator.cpp 
b/be/src/pipeline/exec/scan_operator.cpp
index 161a79fb7c1..21f87c68d5d 100644
--- a/be/src/pipeline/exec/scan_operator.cpp
+++ b/be/src/pipeline/exec/scan_operator.cpp
@@ -994,8 +994,10 @@ void 
ScanLocalState::_normalize_compound_predicate(
 auto compound_fn_name = expr->fn().name.function_name;
 auto children_num = expr->children().size();
 for (auto i = 0; i < children_num; ++i) {
-auto child_expr = expr->children()[i].get();
-if (TExprNodeType::BINARY_PRED == child_expr->node_type()) {
+auto* child_expr = expr->children()[i].get();
+if (TExprNodeType::BINARY_PRED == child_expr->node_type() ||
+TExprNodeType::IN_PRED == child_expr->node_type() ||
+TExprNodeType::MATCH_PRED == child_expr->node_type()) {
 SlotDescriptor* slot = nullptr;
 ColumnValueRangeType* range_on_slot = nullptr;
 if (_is_predicate_acting_on_slot(child_expr, 
in_predicate_checker, &slot,
@@ -1010,30 +1012,16 @@ void 
ScanLocalState::_normalize_compound_predicate(
 value_range.mark_runtime_filter_predicate(
 _is_runtime_filter_predicate);
 }};
-
static_cast(_normalize_binary_in_compound_predicate(
-child_expr, expr_ctx, slot, 
value_range, pdt));
-},
-active_range);
-
-_compound_value_ranges.emplace_back(active_range);
-}
-} else if (TExprNodeType::MATCH_PRED == child_expr->node_type()) {
-SlotDescriptor* slot = nullptr;
-ColumnValueRangeType* range_on_slot = nullptr;
-if (_is_predicate_acting_on_slot(child_expr, 
in_predicate_checker, &slot,
- &range_on_slot) ||
-_is_predicate_acting_on_slot(child_expr, 
eq_predicate_checker, &slot,
- &ran

(doris) branch master updated: [feature](mtmv) Support to use nondeterministic function when create async mv (#36111)

2024-06-20 Thread morrysnow

morrysnow pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 35ebef62362 [feature](mtmv) Support to use nondeterministic function 
when create async mv (#36111)
35ebef62362 is described below

commit 35ebef62362334fce5d2b901b9f47fd3517abee2
Author: seawinde <149132972+seawi...@users.noreply.github.com>
AuthorDate: Fri Jun 21 10:31:58 2024 +0800

[feature](mtmv) Support to use nondeterministic function when create async 
mv (#36111)

Support using current_date() when creating an async materialized view by
adding 'enable_nondeterministic_function' = 'true' to the properties when
creating the materialized view. `enable_nondeterministic_function` defaults
to false.

Here is an example; it will succeed:

>CREATE MATERIALIZED VIEW mv_name
>BUILD DEFERRED REFRESH AUTO ON MANUAL
>DISTRIBUTED BY RANDOM BUCKETS 2
>PROPERTIES (
>'replication_num' = '1',
>'enable_nondeterministic_function' = 'true'
>)
>AS
>   SELECT *, unix_timestamp(k3, '%Y-%m-%d %H:%i-%s') from ${tableName} 
where current_date() > k3;

Note:
unix_timestamp is nondeterministic when it has no params. It is
deterministic when given params, which format column k3 as a date.
another example, it will success

>CREATE MATERIALIZED VIEW mv_name
>BUILD DEFERRED REFRESH AUTO ON MANUAL
>DISTRIBUTED BY RANDOM BUCKETS 2
>PROPERTIES (
>'replication_num' = '1',
>'enable_nondeterministic_function' = 'true'
>)
>AS
>   SELECT *, unix_timestamp() from ${tableName} where current_date() > 
k3;

Though unix_timestamp() is nondeterministic, the creation succeeds because
'enable_nondeterministic_function' = 'true' is set in the properties.
---
 .../apache/doris/common/util/PropertyAnalyzer.java |   3 +
 .../org/apache/doris/mtmv/MTMVPropertyUtil.java|   7 +-
 .../exploration/mv/MaterializedViewUtils.java  |  13 ++
 .../expressions/functions/ExpressionTrait.java |  22 
 .../expressions/functions/Nondeterministic.java|  11 +-
 .../functions/scalar/UnixTimestamp.java|   9 +-
 .../trees/plans/commands/info/CreateMTMVInfo.java  |  25 ++--
 .../visitor/NondeterministicFunctionCollector.java |  21 ++--
 .../doris/nereids/trees/plans/PlanVisitorTest.java |  99 +++
 ...enable_date_non_deterministic_function_mtmv.out |  11 ++
 ...ble_date_non_deterministic_function_mtmv.groovy | 136 +
 11 files changed, 307 insertions(+), 50 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/common/util/PropertyAnalyzer.java 
b/fe/fe-core/src/main/java/org/apache/doris/common/util/PropertyAnalyzer.java
index 69869188c77..6f087d14f4c 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/common/util/PropertyAnalyzer.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/common/util/PropertyAnalyzer.java
@@ -178,6 +178,9 @@ public class PropertyAnalyzer {
 public static final String 
PROPERTIES_ENABLE_DUPLICATE_WITHOUT_KEYS_BY_DEFAULT =
 "enable_duplicate_without_keys_by_default";
 public static final String PROPERTIES_GRACE_PERIOD = "grace_period";
+
+public static final String PROPERTIES_ENABLE_NONDETERMINISTIC_FUNCTION =
+"enable_nondeterministic_function";
 public static final String PROPERTIES_EXCLUDED_TRIGGER_TABLES = 
"excluded_trigger_tables";
 public static final String PROPERTIES_REFRESH_PARTITION_NUM = 
"refresh_partition_num";
 public static final String PROPERTIES_WORKLOAD_GROUP = "workload_group";
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/mtmv/MTMVPropertyUtil.java 
b/fe/fe-core/src/main/java/org/apache/doris/mtmv/MTMVPropertyUtil.java
index a9df9b87d72..12287183886 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/mtmv/MTMVPropertyUtil.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/mtmv/MTMVPropertyUtil.java
@@ -30,14 +30,15 @@ import java.util.Optional;
 import java.util.Set;
 
 public class MTMVPropertyUtil {
-public static final Set mvPropertyKeys = Sets.newHashSet(
+public static final Set MV_PROPERTY_KEYS = Sets.newHashSet(
 PropertyAnalyzer.PROPERTIES_GRACE_PERIOD,
 PropertyAnalyzer.PROPERTIES_EXCLUDED_TRIGGER_TABLES,
 PropertyAnalyzer.PROPERTIES_REFRESH_PARTITION_NUM,
 PropertyAnalyzer.PROPERTIES_WORKLOAD_GROUP,
 PropertyAnalyzer.PROPERTIES_PARTITION_SYNC_LIMIT,
 PropertyAnalyzer.PROPERTIES_PARTITION_TIME_UNIT,
-PropertyAnalyzer.PROPERTIES_PARTITION_DATE_FORMAT
+PropertyAnalyzer.PROPERTIES_PARTITION_DATE_FORMAT,
+PropertyAnalyzer.PROPERTIES_ENABLE_NONDETERMINISTIC_FUNCTION
 );
 
 public st

(doris) branch master updated (35ebef62362 -> 60dfe0c64dd)

2024-06-20 Thread lijibing

lijibing pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 35ebef62362 [feature](mtmv) Support to use nondeterministic function 
when create async mv (#36111)
 add 60dfe0c64dd [fix](regression)Disable auto analyze for related cases, 
do not need to enable it for pipeline test cases. (#36604)

No new revisions were added by this update.

Summary of changes:
 .../suites/nereids_p0/stats/column_stats.groovy|   1 -
 ...lyze_stats_triggered_by_update_row_count.groovy |   4 -
 ...triggered_by_update_row_count_streamload.groovy |   4 -
 .../suites/statistics/test_analyze_mtmv.groovy | 910 ++---
 .../suites/statistics/test_update_rows_mv.groovy   |   2 -
 5 files changed, 453 insertions(+), 468 deletions(-)





(doris) branch master updated (60dfe0c64dd -> 3b35d2b5d4c)

2024-06-20 Thread panxiaolei

panxiaolei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 60dfe0c64dd [fix](regression)Disable auto analyze for related cases, 
do not need to enable it for pipeline test cases. (#36604)
 add 3b35d2b5d4c [Bug](materialized-view) forbid agg function with order by 
elements on create mv (#36614)

No new revisions were added by this update.

Summary of changes:
 .../doris/analysis/CreateMaterializedViewStmt.java|  3 +++
 .../order_by/order_by.groovy} | 19 ---
 2 files changed, 7 insertions(+), 15 deletions(-)
 copy 
regression-test/suites/mv_p0/{multi_slot_k1p2ap3p/multi_slot_k1p2ap3p.groovy => 
agg_state/order_by/order_by.groovy} (68%)





(doris) branch master updated (3b35d2b5d4c -> faa2c17a1f8)

2024-06-20 Thread dataroaring

dataroaring pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 3b35d2b5d4c [Bug](materialized-view) forbid agg function with order by 
elements on create mv (#36614)
 add faa2c17a1f8 [Enhancement](group commit) Use async group commit rpc 
call (#36499)

No new revisions were added by this update.

Summary of changes:
 be/src/service/internal_service.cpp | 19 +--
 1 file changed, 5 insertions(+), 14 deletions(-)





(doris) branch master updated: [fix](mtmv) Fix track partition column fail when date_trunc in group by (#36175)

2024-06-20 Thread morrysnow

morrysnow pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 4c8e66b45ef [fix](mtmv) Fix track partition column fail when 
date_trunc in group by (#36175)
4c8e66b45ef is described below

commit 4c8e66b45effc13635828caffd6a280b88453076
Author: seawinde <149132972+seawi...@users.noreply.github.com>
AuthorDate: Fri Jun 21 11:20:34 2024 +0800

[fix](mtmv) Fix track partition column fail when date_trunc in group by 
(#36175)

This was introduced by #35562.

After the PR above, creating a partition materialized view as follows
would fail with the message:
Unable to find a suitable base table for partitioning

CREATE MATERIALIZED VIEW mvName
BUILD IMMEDIATE REFRESH AUTO ON MANUAL
PARTITION BY (date_trunc(month_alias, 'month'))
DISTRIBUTED BY RANDOM BUCKETS 2
PROPERTIES (
  'replication_num' = '1'
)
AS
SELECT date_trunc(`k2`,'day') AS month_alias, k3, count(*)
FROM tableName GROUP BY date_trunc(`k2`,'day'), k3;

This PR supports creating a partition materialized view when `date_trunc`
appears in the group by clause.
---
 .../exploration/mv/MaterializedViewUtils.java  | 164 +
 .../exploration/mv/MaterializedViewUtilsTest.java  |  93 
 .../data/mtmv_p0/test_rollup_partition_mtmv.out|  60 ++--
 .../mtmv_p0/test_rollup_partition_mtmv.groovy  | 137 -
 4 files changed, 369 insertions(+), 85 deletions(-)

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/exploration/mv/MaterializedViewUtils.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/exploration/mv/MaterializedViewUtils.java
index 49e6e7ffc4e..c86584016c2 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/exploration/mv/MaterializedViewUtils.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/exploration/mv/MaterializedViewUtils.java
@@ -63,6 +63,7 @@ import com.google.common.collect.ImmutableMultimap;
 import com.google.common.collect.ImmutableSet;
 import com.google.common.collect.Maps;
 import com.google.common.collect.Multimap;
+import com.google.common.collect.Sets;
 
 import java.util.ArrayList;
 import java.util.BitSet;
@@ -312,58 +313,12 @@ public class MaterializedViewUtils {
 
 @Override
 public Void visitLogicalProject(LogicalProject 
project, IncrementCheckerContext context) {
-NamedExpression mvPartitionColumn = context.getMvPartitionColumn();
 List output = project.getOutput();
-if (context.getMvPartitionColumn().isColumnFromTable()) {
-return visit(project, context);
-}
-for (Slot projectSlot : output) {
-if (!projectSlot.equals(mvPartitionColumn.toSlot())) {
-continue;
-}
-if (projectSlot.isColumnFromTable()) {
-context.setMvPartitionColumn(projectSlot);
-} else {
-// should be only use date_trunc
-Expression shuttledExpression =
-
ExpressionUtils.shuttleExpressionWithLineage(projectSlot, project, new 
BitSet());
-// merge date_trunc
-shuttledExpression = new 
ExpressionNormalization().rewrite(shuttledExpression,
-new 
ExpressionRewriteContext(context.getCascadesContext()));
-
-List expressions = 
shuttledExpression.collectToList(Expression.class::isInstance);
-for (Expression expression : expressions) {
-if (SUPPORT_EXPRESSION_TYPES.stream().noneMatch(
-supportExpression -> 
supportExpression.isAssignableFrom(expression.getClass( {
-context.addFailReason(
-String.format("partition column use 
invalid implicit expression, invalid "
-+ "expression is %s", 
expression));
-return null;
-}
-}
-List dataTruncExpressions =
-
shuttledExpression.collectToList(DateTrunc.class::isInstance);
-if (dataTruncExpressions.size() != 1) {
-// mv time unit level is little then query
-context.addFailReason("partition column time unit 
level should be "
-+ "greater than sql select column");
-return null;
-}
-Optional columnExpr =
-
shuttledExpression.getArgument(0).collectFirst(Slot.class::isInstance);
-if (!col

(doris) branch master updated (86fc14e6bb8 -> 5e009b5abd3)

2024-06-20 Thread morrysnow

morrysnow pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 86fc14e6bb8 [refactor](variant) refactor sub path push down on variant 
type (#36478)
 add 5e009b5abd3 [fix](mtmv) Fix data wrong if base table add new partition 
when query rewrite by partition rolled up mv (#36414)

No new revisions were added by this update.

Summary of changes:
 .../mv/AbstractMaterializedViewRule.java   |  24 ++-
 .../nereids/rules/exploration/mv/StructInfo.java   |  18 ++-
 .../plans/commands/UpdateMvByPartitionCommand.java |  52 +++---
 .../nereids_rules_p0/mv/partition_mv_rewrite.out   |  42 +
 .../mv/partition_mv_rewrite.groovy | 180 +++--
 5 files changed, 277 insertions(+), 39 deletions(-)





(doris) branch master updated (5e009b5abd3 -> 025b12de5df)

2024-06-20 Thread morrysnow

morrysnow pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 5e009b5abd3 [fix](mtmv) Fix data wrong if base table add new partition 
when query rewrite by partition rolled up mv (#36414)
 add 025b12de5df [enhance](mtmv)show create materialized view (#36188)

No new revisions were added by this update.

Summary of changes:
 .../antlr4/org/apache/doris/nereids/DorisParser.g4 |   1 +
 fe/fe-core/src/main/cup/sql_parser.cup |   4 +
 ...ShowRollupStmt.java => ShowCreateMTMVStmt.java} |  65 ++-
 .../apache/doris/analysis/ShowCreateTableStmt.java |   6 +
 .../main/java/org/apache/doris/catalog/Env.java| 545 -
 .../doris/nereids/parser/LogicalPlanBuilder.java   |   9 +
 .../apache/doris/nereids/trees/plans/PlanType.java |   1 +
 ...MTMVCommand.java => ShowCreateMTMVCommand.java} |  19 +-
 ...elMTMVTaskInfo.java => ShowCreateMTMVInfo.java} |  51 +-
 .../trees/plans/visitor/CommandVisitor.java|   5 +
 .../java/org/apache/doris/qe/ShowExecutor.java |  21 +
 .../java/org/apache/doris/qe/StmtExecutor.java |   9 +
 .../test_alter_distribution_type_mtmv.groovy   |   2 +-
 .../suites/mtmv_p0/test_bloom_filter_mtmv.groovy   |   6 +-
 .../suites/mtmv_p0/test_build_mtmv.groovy  |   8 +
 .../suites/mtmv_p0/test_compression_mtmv.groovy|   2 +-
 .../suites/mtmv_p0/test_show_create_mtmv.groovy| 104 
 17 files changed, 560 insertions(+), 298 deletions(-)
 copy fe/fe-core/src/main/java/org/apache/doris/analysis/{ShowRollupStmt.java 
=> ShowCreateMTMVStmt.java} (57%)
 copy 
fe/fe-core/src/main/java/org/apache/doris/nereids/trees/plans/commands/{ResumeMTMVCommand.java
 => ShowCreateMTMVCommand.java} (67%)
 copy 
fe/fe-core/src/main/java/org/apache/doris/nereids/trees/plans/commands/info/{CancelMTMVTaskInfo.java
 => ShowCreateMTMVInfo.java} (69%)
 create mode 100644 regression-test/suites/mtmv_p0/test_show_create_mtmv.groovy





(doris-website) branch master updated: [fix](docs) tvf add `resource` property (#688)

2024-06-20 Thread morningman

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 984252359d [fix](docs) tvf add `resource` property (#688)
984252359d is described below

commit 984252359def988c11029fece0d2aa2621a8391c
Author: Tiewei Fang <43782773+bepppo...@users.noreply.github.com>
AuthorDate: Fri Jun 21 11:58:55 2024 +0800

[fix](docs) tvf add `resource` property (#688)

related: https://github.com/apache/doris/pull/35139
---
 docs/sql-manual/sql-functions/table-functions/hdfs.md| 1 +
 docs/sql-manual/sql-functions/table-functions/s3.md  | 1 +
 .../current/sql-manual/sql-functions/table-functions/hdfs.md | 1 +
 .../current/sql-manual/sql-functions/table-functions/s3.md   | 1 +
 .../version-2.1/sql-manual/sql-functions/table-functions/hdfs.md | 1 +
 .../version-2.1/sql-manual/sql-functions/table-functions/s3.md   | 1 +
 .../version-2.1/sql-manual/sql-functions/table-functions/hdfs.md | 1 +
 .../version-2.1/sql-manual/sql-functions/table-functions/s3.md   | 1 +
 8 files changed, 8 insertions(+)

diff --git a/docs/sql-manual/sql-functions/table-functions/hdfs.md 
b/docs/sql-manual/sql-functions/table-functions/hdfs.md
index 3b73028086..7a281e06c6 100644
--- a/docs/sql-manual/sql-functions/table-functions/hdfs.md
+++ b/docs/sql-manual/sql-functions/table-functions/hdfs.md
@@ -91,6 +91,7 @@ File format parameters:
 other kinds of parameters:
 
 - `path_partition_keys`: (optional) Specifies the column names carried in the 
file path. For example, if the file path is 
/path/to/city=beijing/date="2023-07-09", you should fill in 
`path_partition_keys="city,date"`. It will automatically read the corresponding 
column names and values from the path during load process.
+- `resource`:(optional)Specify the resource name. Hdfs Tvf can use the 
existing Hdfs resource to directly access Hdfs. You can refer to the method for 
creating an Hdfs resource: 
[CREATE-RESOURCE](../../sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md).
 This property is supported starting from version 2.1.4 .
 
 ### Examples
 
diff --git a/docs/sql-manual/sql-functions/table-functions/s3.md 
b/docs/sql-manual/sql-functions/table-functions/s3.md
index 6027f61141..5d7e25816c 100644
--- a/docs/sql-manual/sql-functions/table-functions/s3.md
+++ b/docs/sql-manual/sql-functions/table-functions/s3.md
@@ -99,6 +99,7 @@ The following 2 parameters are used for loading in csv format
 other parameter:
 
 - `path_partition_keys`: (optional) Specifies the column names carried in the 
file path. For example, if the file path is 
/path/to/city=beijing/date="2023-07-09", you should fill in 
`path_partition_keys="city,date"`. It will automatically read the corresponding 
column names and values from the path during load process.
+- `resource`:(optional)Specify the resource name. S3 tvf can use the existing 
S3 resource to directly access S3. You can refer to the method for creating an 
S3 resource: 
[CREATE-RESOURCE](../../sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md).
 This property is supported starting from version 2.1.4.
 
 ### Example
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/hdfs.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/hdfs.md
index 9e65320f3d..4ea148cf58 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/hdfs.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/hdfs.md
@@ -91,6 +91,7 @@ hdfs(
 
 其他参数:
 - 
`path_partition_keys`:(选填)指定文件路径中携带的分区列名,例如/path/to/city=beijing/date="2023-07-09",
 则填写`path_partition_keys="city,date"`,将会自动从路径中读取相应列名和列值进行导入。
+- `resource`:(选填)指定resource名,hdfs tvf 可以利用已有的 hdfs resource 来直接访问hdfs。创建 hdfs 
resource 的方法可以参照 
[CREATE-RESOURCE](../../sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE.md)。该功能自
 2.1.4 版本开始支持。
 
 ### Examples
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/s3.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/s3.md
index b3dbc2e12e..8a91595401 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/s3.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/table-functions/s3.md
@@ -99,6 +99,7 @@ S3 tvf中的每一个参数都是一个 `"key"="value"` 对。
 
 其他参数:
 - 
`path_partition_keys`:(选填)指定文件路径中携带的分区列名,例如/path/to/city=beijing/date="2023-07-09",
 则填写`path_partition_keys="city,date"`,将会自动从路径中读取相应列名和列值进行导入。
+- `resource`:(选填)指定resource名,s3 tvf 可以利用已有的 s3 resource 来直接访问s3。创建 s3 resource 
的方法可

Error while running notifications feature from refs/heads/master:.asf.yaml in doris-website!

2024-06-20 Thread Apache Infrastructure


An error occurred while running notifications feature in .asf.yaml!:
Invalid notification target 'comm...@foo.apache.org'. Must be a valid 
@doris.apache.org list!





(doris) branch master updated (025b12de5df -> 1b8214d41c5)

2024-06-20 Thread eldenmoon

eldenmoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 025b12de5df [enhance](mtmv)show create materialized view (#36188)
 add 1b8214d41c5 [Fix](Variant) create table should not automatically add 
variant to keys (#36609)

No new revisions were added by this update.

Summary of changes:
 be/src/olap/rowset/segment_v2/vertical_segment_writer.cpp   |  1 -
 .../nereids/trees/plans/commands/info/CreateTableInfo.java  |  2 +-
 regression-test/data/variant_p0/load.out|  6 +-
 regression-test/suites/variant_p0/load.groovy   | 13 +
 4 files changed, 19 insertions(+), 3 deletions(-)





(doris) branch master updated: [fix](ubsan) Set default value for enable_unique_key_merge_on_write (#36624)

2024-06-20 Thread dataroaring

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 34c5625f418 [fix](ubsan) Set default value for 
enable_unique_key_merge_on_write (#36624)
34c5625f418 is described below

commit 34c5625f418e5059e2199230506dad170f0ff589
Author: Lightman <31928846+lchangli...@users.noreply.github.com>
AuthorDate: Fri Jun 21 12:19:53 2024 +0800

[fix](ubsan) Set default value for enable_unique_key_merge_on_write (#36624)

Fix an undefined behavior problem. The UBSan report was:
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior

/home/zcp/repo_center/doris_master/doris/be/src/olap/rowset/segment_v2/vertical_segment_writer.cpp:559:19
in


/home/zcp/repo_center/doris_master/doris/be/src/olap/schema_change.cpp:1301:19:
runtime error: load of value 192, which is not a valid value for type
'bool'.
---
 be/src/olap/schema_change.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/be/src/olap/schema_change.h b/be/src/olap/schema_change.h
index ae4093063fd..eb0f046270d 100644
--- a/be/src/olap/schema_change.h
+++ b/be/src/olap/schema_change.h
@@ -269,7 +269,7 @@ struct AlterMaterializedViewParam {
 
 struct SchemaChangeParams {
 AlterTabletType alter_tablet_type;
-bool enable_unique_key_merge_on_write;
+bool enable_unique_key_merge_on_write = false;
 std::vector ref_rowset_readers;
 DeleteHandler* delete_handler = nullptr;
 std::unordered_map 
materialized_params_map;


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch master updated (34c5625f418 -> 147a621065e)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 34c5625f418 [fix](ubsan) Set default value for 
enable_unique_key_merge_on_write (#36624)
 add 147a621065e [fix](connection) kill connection when meeting Write mysql 
packet failed error (#36559)

No new revisions were added by this update.

Summary of changes:
 ...LoadException.java => ConnectionException.java} | 19 +-
 .../java/org/apache/doris/mysql/MysqlChannel.java  |  7 +--
 .../java/org/apache/doris/qe/ConnectProcessor.java | 23 --
 .../org/apache/doris/qe/MysqlConnectProcessor.java |  5 +++--
 .../arrowflight/FlightSqlConnectProcessor.java |  3 ++-
 5 files changed, 37 insertions(+), 20 deletions(-)
 copy fe/fe-core/src/main/java/org/apache/doris/common/{LoadException.java => 
ConnectionException.java} (67%)
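The idea behind the connection fix above, sketched in Python (illustrative only; the actual change is in the FE's Java `MysqlChannel`/`ConnectProcessor` code): when writing a MySQL result packet to the client fails, the server should kill the connection rather than leave it half-alive in an out-of-sync state.

```python
class Connection:
    """Minimal model of a server-side client connection."""

    def __init__(self, channel):
        self.channel = channel
        self.killed = False

    def send_packet(self, data: bytes):
        try:
            self.channel.write(data)
        except OSError as e:
            # Writing the MySQL packet failed: the client stream can no
            # longer be trusted to be in sync, so kill the connection
            # instead of continuing to serve it.
            self.kill()
            raise ConnectionError("Write mysql packet failed") from e

    def kill(self):
        self.killed = True


class FailingChannel:
    """A channel whose writes always fail, to exercise the error path."""

    def write(self, data):
        raise OSError("broken pipe")


conn = Connection(FailingChannel())
try:
    conn.send_packet(b"\x00")
except ConnectionError:
    pass
```

After the failed write, `conn.killed` is true and callers see a `ConnectionError`, mirroring the commit's intent of terminating the session on a write failure.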


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch master updated: [feat](Nereids) Optimize Sum Literal Rewriting by Excluding Single Instances (#35559)

2024-06-20 Thread xiejiann
This is an automated email from the ASF dual-hosted git repository.

xiejiann pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 9b5a7646238 [feat](Nereids) Optimize Sum Literal Rewriting by 
Excluding Single Instances (#35559)
9b5a7646238 is described below

commit 9b5a764623873f3ec3165e9d8eca3980cb67fcd7
Author: 谢健 
AuthorDate: Fri Jun 21 13:11:58 2024 +0800

[feat](Nereids) Optimize Sum Literal Rewriting by Excluding Single 
Instances (#35559)

## Proposed changes

This PR changes the method removeOneSumLiteral to improve the
performance of sum literal rewriting in SQL queries. The modification
ensures that sum literals appearing only once, such as in expressions
like select count(id1 + 1), count(id2 + 1) from t, are not rewritten.
---
 .../nereids/rules/rewrite/SumLiteralRewrite.java   | 25 +++--
 .../rules/rewrite/SumLiteralRewriteTest.java   | 31 ++
 2 files changed, 54 insertions(+), 2 deletions(-)
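The skip-single-instance check can be sketched in Python (names are illustrative; the real implementation works on Nereids expression trees and slot/pair types):

```python
from collections import Counter


def remove_one_sum_literal(sum_literal_map):
    """Keep only entries whose underlying expression occurs more than once.

    sum_literal_map maps an output slot to the (expression, literal) pair
    extracted from an aggregate over `expr + literal`. Rewriting is only
    profitable when the same expression appears in several aggregates, so
    expressions seen exactly once are left alone.
    """
    counts = Counter(expr for expr, _lit in sum_literal_map.values())
    return {slot: (expr, lit)
            for slot, (expr, lit) in sum_literal_map.items()
            if counts[expr] > 1}


# select count(id1 + 1), count(id2 + 1) from t: each expr occurs once -> skip
assert remove_one_sum_literal({"a": ("id1", 1), "b": ("id2", 1)}) == {}

# sum(id1 + 1), sum(id1 + 2): id1 occurs twice -> both entries are kept
assert remove_one_sum_literal({"a": ("id1", 1), "b": ("id1", 2)}) == {
    "a": ("id1", 1), "b": ("id1", 2)}
```

This mirrors the two passes in the Java diff: one pass counts occurrences per expression, a second pass copies over only entries whose expression count exceeds one.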

diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewrite.java
 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewrite.java
index c99071a714e..dcc64ce2c1d 100644
--- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewrite.java
+++ 
b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewrite.java
@@ -44,6 +44,7 @@ import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.Objects;
 import java.util.Set;
 
@@ -64,13 +65,33 @@ public class SumLiteralRewrite extends 
OneRewriteRuleFactory {
 }
 sumLiteralMap.put(pel.first, pel.second);
 }
-if (sumLiteralMap.isEmpty()) {
+Map> 
validSumLiteralMap =
+removeOneSumLiteral(sumLiteralMap);
+if (validSumLiteralMap.isEmpty()) {
 return null;
 }
-return rewriteSumLiteral(agg, sumLiteralMap);
+return rewriteSumLiteral(agg, validSumLiteralMap);
 }).toRule(RuleType.SUM_LITERAL_REWRITE);
 }
 
+// when there is only one sum literal, like select count(id1 + 1), count(id2 + 1) from t, we don't rewrite it.
+private Map> removeOneSumLiteral(
+Map> sumLiteralMap) {
+Map countSum = new HashMap<>();
+for (Entry> e : 
sumLiteralMap.entrySet()) {
+Expression expr = e.getValue().first.expr;
+countSum.merge(expr, 1, Integer::sum);
+}
+Map> validSumLiteralMap = new 
HashMap<>();
+for (Entry> e : 
sumLiteralMap.entrySet()) {
+Expression expr = e.getValue().first.expr;
+if (countSum.get(expr) > 1) {
+validSumLiteralMap.put(e.getKey(), e.getValue());
+}
+}
+return validSumLiteralMap;
+}
+
 private Plan rewriteSumLiteral(
 LogicalAggregate agg, Map> sumLiteralMap) {
 Set newAggOutput = new HashSet<>();
diff --git 
a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewriteTest.java
 
b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewriteTest.java
index cb2cc77627e..19ea7b864fb 100644
--- 
a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewriteTest.java
+++ 
b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/rewrite/SumLiteralRewriteTest.java
@@ -112,4 +112,35 @@ class SumLiteralRewriteTest implements 
MemoPatternMatchSupported {
 .printlnTree()
 .matches(logicalAggregate().when(p -> p.getOutputs().size() == 
4));
 }
+
+@Test
+void testSumOnce() {
+Slot slot1 = scan1.getOutput().get(0);
+Alias add1 = new Alias(new Sum(false, true, new Add(slot1, 
Literal.of(1;
+LogicalAggregate agg = new LogicalAggregate<>(
+ImmutableList.of(scan1.getOutput().get(0)), 
ImmutableList.of(add1), scan1);
+PlanChecker.from(MemoTestUtils.createConnectContext(), agg)
+.applyTopDown(ImmutableList.of(new 
SumLiteralRewrite().build()))
+.printlnTree()
+.matches(logicalAggregate().when(p -> p.getOutputs().size() == 
1));
+
+Slot slot2 = new Alias(scan1.getOutput().get(0)).toSlot();
+Alias add2 = new Alias(new Sum(false, true, new Add(slot2, 
Literal.of(2;
+agg = new LogicalAggregate<>(
+ImmutableList.of(scan1.getOutput().get(0)), 
ImmutableList.of(add1, add2), scan1);
+PlanChecker.from(MemoTestUtils.createConnectContext(), agg)
+.applyTopDown(ImmutableList.of(new 
SumLiteralRewrit

(doris) branch master updated: [Fix](Nereids) fix leading with different be instance number (#36613)

2024-06-20 Thread dataroaring
This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new f8a308fd138 [Fix](Nereids) fix leading with different be instance 
number (#36613)
f8a308fd138 is described below

commit f8a308fd138db7dee87540320ae4e87b0ead6f5e
Author: LiBinfeng <46676950+libinfeng...@users.noreply.github.com>
AuthorDate: Fri Jun 21 13:22:27 2024 +0800

[Fix](Nereids) fix leading with different be instance number (#36613)

Problem:
When testing leading explain shape plans with different BE instance
counts, the physical distribute plan would differ depending on the
number of BEs.
Solution:
Disable showing of PhysicalDistribute nodes in the fix_leading cases.
---
 .../data/nereids_p0/hint/fix_leading.out   | 63 +-
 .../suites/nereids_p0/hint/fix_leading.groovy  |  2 +-
 2 files changed, 26 insertions(+), 39 deletions(-)

diff --git a/regression-test/data/nereids_p0/hint/fix_leading.out 
b/regression-test/data/nereids_p0/hint/fix_leading.out
index 7acd1523337..372ffad30a3 100644
--- a/regression-test/data/nereids_p0/hint/fix_leading.out
+++ b/regression-test/data/nereids_p0/hint/fix_leading.out
@@ -1,18 +1,14 @@
 -- This file is automatically generated. You should know what you did if you 
want to edit this
 -- !select1 --
 PhysicalResultSink
---PhysicalDistribute[DistributionSpecGather]
-hashJoin[INNER_JOIN] hashCondition=((t1.c1 = t3.c3) and (t1.c1 = t4.c4)) 
otherCondition=()
---NestedLoopJoin[CROSS_JOIN]
-PhysicalOlapScan[t1]
-PhysicalDistribute[DistributionSpecReplicated]
---filter((t2.c2 = t2.c2))
-PhysicalOlapScan[t2]
---PhysicalDistribute[DistributionSpecHash]
-hashJoin[INNER_JOIN] hashCondition=((t3.c3 = t4.c4)) otherCondition=()
---PhysicalOlapScan[t3]
---PhysicalDistribute[DistributionSpecHash]
-PhysicalOlapScan[t4]
+--hashJoin[INNER_JOIN] hashCondition=((t1.c1 = t3.c3) and (t1.c1 = t4.c4)) 
otherCondition=()
+NestedLoopJoin[CROSS_JOIN]
+--PhysicalOlapScan[t1]
+--filter((t2.c2 = t2.c2))
+PhysicalOlapScan[t2]
+hashJoin[INNER_JOIN] hashCondition=((t3.c3 = t4.c4)) otherCondition=()
+--PhysicalOlapScan[t3]
+--PhysicalOlapScan[t4]
 
 Hint log:
 Used: leading({ t1 t2 } { t3 t4 } )
@@ -237,14 +233,11 @@ PhysicalResultSink
 --hashAgg[GLOBAL]
 hashAgg[LOCAL]
 --NestedLoopJoin[RIGHT_OUTER_JOIN](c3 > 500)
-PhysicalDistribute[DistributionSpecGather]
---NestedLoopJoin[LEFT_OUTER_JOIN](c1 < 200)(c1 > 500)
-PhysicalOlapScan[t1]
-PhysicalDistribute[DistributionSpecReplicated]
---filter((t2.c2 > 500))
-PhysicalOlapScan[t2]
-PhysicalDistribute[DistributionSpecGather]
---PhysicalOlapScan[t3]
+NestedLoopJoin[LEFT_OUTER_JOIN](c1 < 200)(c1 > 500)
+--PhysicalOlapScan[t1]
+--filter((t2.c2 > 500))
+PhysicalOlapScan[t2]
+PhysicalOlapScan[t3]
 
 Hint log:
 Used: leading(t1 t2 t3 )
@@ -254,24 +247,18 @@ SyntaxError:
 -- !select6_1 --
 PhysicalResultSink
 --hashAgg[GLOBAL]
-PhysicalDistribute[DistributionSpecGather]
---hashAgg[LOCAL]
-hashJoin[INNER_JOIN] hashCondition=((t1.c1 = t6.c6)) otherCondition=()
---hashJoin[INNER_JOIN] hashCondition=((t1.c1 = t2.c2) and (t1.c1 = 
t3.c3) and (t1.c1 = t4.c4) and (t1.c1 = t5.c5)) otherCondition=()
-PhysicalOlapScan[t1]
-PhysicalDistribute[DistributionSpecHash]
---hashJoin[INNER_JOIN] hashCondition=((t2.c2 = t4.c4) and (t2.c2 = 
t5.c5) and (t3.c3 = t4.c4) and (t3.c3 = t5.c5)) otherCondition=()
-hashJoin[INNER_JOIN] hashCondition=((t2.c2 = t3.c3)) 
otherCondition=()
---PhysicalOlapScan[t2]
---PhysicalDistribute[DistributionSpecHash]
-PhysicalOlapScan[t3]
-PhysicalDistribute[DistributionSpecHash]
---hashJoin[INNER_JOIN] hashCondition=((t4.c4 = t5.c5)) 
otherCondition=()
-PhysicalOlapScan[t4]
-PhysicalDistribute[DistributionSpecHash]
---PhysicalOlapScan[t5]
---PhysicalDistribute[DistributionSpecHash]
-PhysicalOlapScan[t6]
+hashAgg[LOCAL]
+--hashJoin[INNER_JOIN] hashCondition=((t1.c1 = t6.c6)) otherCondition=()
+hashJoin[INNER_JOIN] hashCondition=((t1.c1 = t2.c2) and (t1.c1 = 
t3.c3) and (t1.c1 = t4.c4) and (t1.c1 = t5.c5)) otherCondition=()
+--PhysicalOlapScan[t1]
+--hashJoin[INNER_JOIN] hashCondition=((t2.c2 = t4.c4) and (t2.c2 = 
t5.c5) and (t3.c3 = t4.c4) and (t3.c3 = t5.c5)) otherCondition=()
+hashJoin[INNER_JOIN] hashCondition=((t2.c2 = t3.c3)) 
otherCondition=()
+--PhysicalOlapScan[t2]
+--PhysicalOlapScan[t3]
+hashJoin[INNER

(doris) branch master updated (f8a308fd138 -> 2341e326225)

2024-06-20 Thread dataroaring
This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from f8a308fd138 [Fix](Nereids) fix leading with different be instance 
number (#36613)
 add 2341e326225 [fix](be) Check MD5 of the downloaded files before 
ingesting binlog (#36621)

No new revisions were added by this update.

Summary of changes:
 be/src/http/action/download_binlog_action.cpp |  9 +++-
 be/src/service/backend_service.cpp| 60 ---
 2 files changed, 61 insertions(+), 8 deletions(-)
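The idea of the binlog fix — verify a downloaded file's MD5 against the expected digest before ingesting it — can be sketched as follows (hypothetical helper names; the real check lives in the C++ `backend_service.cpp`):

```python
import hashlib


def md5_of_file(path, chunk_size=1 << 20):
    """Compute the hex MD5 digest of a file, reading in chunks so large
    binlog files do not have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def ingest_if_valid(path, expected_md5):
    """Refuse to ingest a downloaded file whose digest does not match."""
    actual = md5_of_file(path)
    if actual != expected_md5:
        raise IOError(
            f"MD5 mismatch for {path}: expected {expected_md5}, got {actual}")
    return True  # proceed to ingest the binlog file
```

The design point is simply that the integrity check happens before ingestion, so a corrupted or truncated download fails loudly instead of being replayed into the tablet.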


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch master updated: [Chore](GA) Set the maintainer for the FE ENV file (#36650)

2024-06-20 Thread kirs
This is an automated email from the ASF dual-hosted git repository.

kirs pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
 new 54cc7962807 [Chore](GA) Set the maintainer for the FE ENV file (#36650)
54cc7962807 is described below

commit 54cc7962807f907534225835b4ac376882816820
Author: Calvin Kirs 
AuthorDate: Fri Jun 21 14:03:09 2024 +0800

[Chore](GA) Set the maintainer for the FE ENV file (#36650)

## Proposed changes
We need to set a maintainer for the FE ENV file, which is a critical
file for starting FE. We have experienced multiple resource leaks due
to incorrect modifications, so someone familiar with this file should
be its maintainer. Any changes to this file must be reviewed and
approved by the maintainer before they can be merged.
---
 tools/maintainers/maintainers.json | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/maintainers/maintainers.json 
b/tools/maintainers/maintainers.json
index 981b63021cd..d0bfad2f81c 100644
--- a/tools/maintainers/maintainers.json
+++ b/tools/maintainers/maintainers.json
@@ -7,6 +7,14 @@
  "gavinchou",
  "dataroaring"
]
+},
+{
+  "path": "fe/fe-core/src/main/java/org/apache/doris/catalog/Env.java",
+  "maintainers": [
+"CalvinKirs",
+"morningman",
+"dataroaring"
+  ]
 }
-  ] 
+  ]
 }


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org



(doris) branch branch-2.1 updated: [fix](eq_for_null) fix incorrect logic in function eq_for_null #36004 (#36124)

2024-06-20 Thread morningman
This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new c8f2a3f9522 [fix](eq_for_null) fix incorrect logic in function 
eq_for_null #36004 (#36124)
c8f2a3f9522 is described below

commit c8f2a3f9522641d56e27759c4f3f42d00defea71
Author: zhiqiang 
AuthorDate: Fri Jun 21 14:31:21 2024 +0800

[fix](eq_for_null) fix incorrect logic in function eq_for_null #36004 
(#36124)

cherry pick from #36004
cherry pick from #36164
---
 be/src/vec/functions/comparison_equal_for_null.cpp | 139 +++--
 be/src/vec/functions/function.cpp  |   2 +-
 be/test/vec/function/function_eq_for_null_test.cpp | 647 +
 3 files changed, 748 insertions(+), 40 deletions(-)
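The semantics that eq_for_null must implement — MySQL's null-safe equals, `<=>` — can be modeled row-wise in Python (a simplified sketch; the BE code is vectorized and specializes const/nullable column combinations, as the diff below shows):

```python
def eq_for_null(left, right):
    """Null-safe equals (<=>): NULL <=> NULL is true, NULL <=> x is false,
    otherwise ordinary equality. Unlike `=`, it never returns NULL."""
    if left is None and right is None:
        return True
    if left is None or right is None:
        return False
    return left == right


def eq_for_null_column(lefts, rights):
    # Column-wise version. The real implementation avoids this per-row
    # branch by combining the two columns' null maps and an equality
    # bitmap; this sketch keeps only the per-row semantics.
    return [eq_for_null(l, r) for l, r in zip(lefts, rights)]


assert eq_for_null(None, None) is True
assert eq_for_null(None, 3) is False
assert eq_for_null_column([1, None, 2], [1, None, 3]) == [True, True, False]
```

The branches in the patch (both sides only-null, one side only-null with the other nullable or not) are fast paths for exactly these three cases over whole columns.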

diff --git a/be/src/vec/functions/comparison_equal_for_null.cpp 
b/be/src/vec/functions/comparison_equal_for_null.cpp
index b3e618551e7..cca941840e8 100644
--- a/be/src/vec/functions/comparison_equal_for_null.cpp
+++ b/be/src/vec/functions/comparison_equal_for_null.cpp
@@ -29,6 +29,7 @@
 #include "vec/columns/column_const.h"
 #include "vec/columns/column_nullable.h"
 #include "vec/columns/column_vector.h"
+#include "vec/columns/columns_number.h"
 #include "vec/common/assert_cast.h"
 #include "vec/core/block.h"
 #include "vec/core/column_numbers.h"
@@ -38,6 +39,7 @@
 #include "vec/data_types/data_type_nullable.h"
 #include "vec/data_types/data_type_number.h"
 #include "vec/functions/function.h"
+#include "vec/functions/function_helpers.h"
 #include "vec/functions/simple_function_factory.h"
 
 namespace doris {
@@ -66,69 +68,119 @@ public:
 size_t result, size_t input_rows_count) const override 
{
 ColumnWithTypeAndName& col_left = block.get_by_position(arguments[0]);
 ColumnWithTypeAndName& col_right = block.get_by_position(arguments[1]);
+
+const bool left_const = is_column_const(*col_left.column);
+const bool right_const = is_column_const(*col_right.column);
 bool left_only_null = col_left.column->only_null();
 bool right_only_null = col_right.column->only_null();
+
 if (left_only_null && right_only_null) {
+// TODO: return ColumnConst after 
function.cpp::default_implementation_for_constant_arguments supports it.
 auto result_column = ColumnVector::create(input_rows_count, 
1);
 block.get_by_position(result).column = std::move(result_column);
 return Status::OK();
 } else if (left_only_null) {
 auto right_type_nullable = col_right.type->is_nullable();
 if (!right_type_nullable) {
+// right_column is not nullable, so result is all false.
 block.get_by_position(result).column =
 ColumnVector::create(input_rows_count, 0);
 } else {
-auto const* nullable_right_col =
-assert_cast(col_right.column.get());
-block.get_by_position(result).column =
-
nullable_right_col->get_null_map_column().clone_resized(input_rows_count);
+// right_column is nullable
+const ColumnNullable* nullable_right_col = nullptr;
+if (right_const) {
+nullable_right_col = assert_cast(
+&(assert_cast(col_right.column.get())
+  ->get_data_column()));
+// Actually, when we reach here, the result can only be 
all false (all not null).
+// Since if right column is const, and it is all null, we 
will be short-circuited
+// to (left_only_null && right_only_null) branch. So here 
the right column is all not null.
+block.get_by_position(result).column = ColumnUInt8::create(
+input_rows_count,
+
nullable_right_col->get_null_map_column().get_data()[0]);
+} else {
+nullable_right_col = assert_cast(col_right.column.get());
+// left column is all null, so result has same nullmap 
with right column.
+block.get_by_position(result).column =
+nullable_right_col->get_null_map_column().clone();
+}
 }
 return Status::OK();
 } else if (right_only_null) {
 auto left_type_nullable = col_left.type->is_nullable();
 if (!left_type_nullable) {
+// right column is all null but left column is not nullable, so result is all false.
 block.get_by_position(result).column =
 ColumnVector::create(input_rows_count, 
(UInt8)0);
 } else {
-auto const* nullable_left_col =
-

(doris) branch master updated (54cc7962807 -> f1c943966f9)

2024-06-20 Thread jacktengg
This is an automated email from the ASF dual-hosted git repository.

jacktengg pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


from 54cc7962807 [Chore](GA) Set the maintainer for the FE ENV file (#36650)
 add f1c943966f9 [fix](regression) fix outfile test case failure (#36592)

No new revisions were added by this update.

Summary of changes:
 .../decimalv3/test_decimal256_outfile_csv.groovy   | 31 +-
 1 file changed, 13 insertions(+), 18 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org