[jira] [Created] (KUDU-3578) De-flaking effort

2024-05-17 Thread Marton Greber (Jira)
Marton Greber created KUDU-3578:
---

 Summary: De-flaking effort
 Key: KUDU-3578
 URL: https://issues.apache.org/jira/browse/KUDU-3578
 Project: Kudu
  Issue Type: Improvement
Reporter: Marton Greber
Assignee: Bakai Ádám


We have quite a number of flaky tests: http://dist-test.cloudera.org:8080/
This makes verifying patches a tedious process.
The idea is to use the above dashboard to work through the flaky tests.
This should be an umbrella Jira.
If you wish to contribute, please create a sub-task for each test de-flaking 
effort.

Thanks!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KUDU-3579) auto_leader_rebalancer-test

2024-05-17 Thread Marton Greber (Jira)
Marton Greber created KUDU-3579:
---

 Summary: auto_leader_rebalancer-test
 Key: KUDU-3579
 URL: https://issues.apache.org/jira/browse/KUDU-3579
 Project: Kudu
  Issue Type: Sub-task
Reporter: Marton Greber


As of now there are 14 references to SleepFor in the test code; such 
fixed-duration waits are highly error-prone.
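A common de-flaking pattern in Kudu's test code is to replace a fixed 
SleepFor() with a bounded retry via the ASSERT_EVENTUALLY macro from 
kudu/util/test_util.h.  A minimal sketch; the NumLeaderRebalanceRuns() helper 
is hypothetical, just to illustrate the shape of the change:
{noformat}
// Before: sleep a fixed second and hope the rebalancer has run by then;
// flaky on loaded or slow machines.
SleepFor(MonoDelta::FromSeconds(1));
ASSERT_EQ(1, NumLeaderRebalanceRuns());  // hypothetical helper

// After: poll the same assertion until it passes or the timeout expires.
ASSERT_EVENTUALLY([&] {
  ASSERT_EQ(1, NumLeaderRebalanceRuns());  // hypothetical helper
});
{noformat}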




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KUDU-3578) De-flaking effort

2024-05-17 Thread Marton Greber (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Greber updated KUDU-3578:

Description: 
We have quite a number of flaky tests: http://dist-test.cloudera.org:8080/
This makes verifying patches a tedious process.
The idea is to use the above dashboard to work through the flaky tests.
This should be an umbrella Jira.
If you wish to contribute, please create a sub-task for each test de-flaking 
effort.
Testing remarks: 
Use the --stress_cpu_threads flag to simulate testing under load.
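For example, a flake can often be reproduced locally by running the test 
binary under artificial CPU load (the gtest filter below is just a 
placeholder):
{noformat}
$ ./bin/auto_leader_rebalancer-test --stress_cpu_threads=16 \
    --gtest_filter='*LeaderRebalance*'
{noformat}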

Thanks!

  was:
We have quite a number of flaky tests: http://dist-test.cloudera.org:8080/
This makes verifying patches a tedious process.
The idea is to use the above dashboard to work through the flaky tests.
This should be an umbrella Jira.
If you wish to contribute, please create a sub-task for each test de-flaking 
effort.

Thanks!


> De-flaking effort
> -
>
> Key: KUDU-3578
> URL: https://issues.apache.org/jira/browse/KUDU-3578
> Project: Kudu
>  Issue Type: Improvement
>Reporter: Marton Greber
>Assignee: Bakai Ádám
>Priority: Major
>  Labels: flaky-test
>
> We have quite a number of flaky tests: http://dist-test.cloudera.org:8080/
> This makes verifying patches a tedious process.
> The idea is to use the above dashboard to work through the flaky tests.
> This should be an umbrella Jira.
> If you wish to contribute, please create a sub-task for each test de-flaking 
> effort.
> Testing remarks: 
> Use the --stress_cpu_threads flag to simulate testing under load.
> Thanks!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KUDU-3574) MasterAuthzITest.TestAuthzListTablesConcurrentRename fails from time to time

2024-05-17 Thread Marton Greber (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Greber updated KUDU-3574:

Parent: KUDU-3578
Issue Type: Sub-task  (was: Bug)

> MasterAuthzITest.TestAuthzListTablesConcurrentRename fails from time to time
> 
>
> Key: KUDU-3574
> URL: https://issues.apache.org/jira/browse/KUDU-3574
> Project: Kudu
>  Issue Type: Sub-task
>Affects Versions: 1.17.0
>Reporter: Alexey Serbin
>Priority: Major
> Attachments: master_authz-itest.6.txt.xz
>
>
> The {{MasterAuthzITest.TestAuthzListTablesConcurrentRename}} scenario 
> sometimes fails with errors like the one below:
> {noformat}
> src/kudu/integration-tests/master_authz-itest.cc:913: Failure
> Expected equality of these values:
>   1
>   tables.size()  
> Which is: 2 
> {noformat}
> The log is attached.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KUDU-3573) TestNewOpsDontGetScheduledDuringUnregister sometimes fails

2024-05-17 Thread Marton Greber (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Greber updated KUDU-3573:

Parent: KUDU-3578
Issue Type: Sub-task  (was: Bug)

> TestNewOpsDontGetScheduledDuringUnregister sometimes fails
> -
>
> Key: KUDU-3573
> URL: https://issues.apache.org/jira/browse/KUDU-3573
> Project: Kudu
>  Issue Type: Sub-task
>Affects Versions: 1.17.0
>Reporter: Alexey Serbin
>Priority: Major
> Attachments: maintenance_manager-test.txt.xz
>
>
> The {{MaintenanceManagerTest.TestNewOpsDontGetScheduledDuringUnregister}} 
> scenario fails from time to time with output like the one below:
> {noformat}
> src/kudu/util/maintenance_manager-test.cc:468: Failure
> Expected: (op1.DurationHistogram()->TotalCount()) <= (2), actual: 3 vs 2
> {noformat}
> Full output produced by the test scenario is attached.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KUDU-3571) AutoIncrementingItest.BootstrapNoWalsNoData fails sometimes

2024-05-17 Thread Marton Greber (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Greber updated KUDU-3571:

Parent: KUDU-3578
Issue Type: Sub-task  (was: Bug)

> AutoIncrementingItest.BootstrapNoWalsNoData fails sometimes
> ---
>
> Key: KUDU-3571
> URL: https://issues.apache.org/jira/browse/KUDU-3571
> Project: Kudu
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 1.17.0
>Reporter: Alexey Serbin
>Priority: Major
> Attachments: auto_incrementing-itest.txt.xz
>
>
> The {{AutoIncrementingItest.BootstrapNoWalsNoData}} scenario fails from time 
> to time with one of its assertions triggered; see below.  The full log is 
> attached.
> {noformat}
> /root/Projects/kudu/src/kudu/tserver/tablet_server-test-base.cc:362: Failure
> Failed
> Bad status: Invalid argument: Index 0 does not reference a valid sidecar
> /root/Projects/kudu/src/kudu/integration-tests/auto_incrementing-itest.cc:446:
>  Failure
> Expected equality of these values:
>   200
>   results.size()
> Which is: 0
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KUDU-3194) testReadDataFrameAtSnapshot(org.apache.kudu.spark.kudu.DefaultSourceTest) sometimes fails

2024-05-17 Thread Marton Greber (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Greber updated KUDU-3194:

Parent: KUDU-3578
Issue Type: Sub-task  (was: Bug)

> testReadDataFrameAtSnapshot(org.apache.kudu.spark.kudu.DefaultSourceTest) 
> sometimes fails
> -
>
> Key: KUDU-3194
> URL: https://issues.apache.org/jira/browse/KUDU-3194
> Project: Kudu
>  Issue Type: Sub-task
>  Components: client, test
>Affects Versions: 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0
>Reporter: Alexey Serbin
>Priority: Major
> Attachments: test-output-20201125.txt.xz, test-output.txt.xz
>
>
> The test scenario sometimes fails.
> {noformat}  
> Time: 55.485
> There was 1 failure:
> 1) testReadDataFrameAtSnapshot(org.apache.kudu.spark.kudu.DefaultSourceTest)
> java.lang.AssertionError: expected:<100> but was:<99>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:633)
>   at 
> org.apache.kudu.spark.kudu.DefaultSourceTest.testReadDataFrameAtSnapshot(DefaultSourceTest.scala:784)
> FAILURES!!!
> Tests run: 30,  Failures: 1
> {noformat}
> The full log is attached (RELEASE build); the relevant stack trace looks like 
> the following:
> {noformat}
> 23:53:48.683 [ERROR - main] (RetryRule.java:219) 
> org.apache.kudu.spark.kudu.DefaultSourceTest.testReadDataFrameAtSnapshot: 
> failed attempt 1
> java.lang.AssertionError: expected:<100> but was:<99> 
>   
>   at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.jar:4.13] 
>   
>   at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.jar:4.13]   
>   
>   at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.jar:4.13]
>   
>   at org.junit.Assert.assertEquals(Assert.java:633) ~[junit-4.13.jar:4.13]
>   
>   at 
> org.apache.kudu.spark.kudu.DefaultSourceTest.testReadDataFrameAtSnapshot(DefaultSourceTest.scala:784)
>  ~[test/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_141] 
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_141]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_141]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_141]  
>   
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  ~[junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  ~[junit-4.13.jar:4.13]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  ~[junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  ~[junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> ~[junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> ~[junit-4.13.jar:4.13]
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) 
> ~[junit-4.13.jar:4.13]
>   at 
> org.apache.kudu.test.junit.RetryRule$RetryStatement.doOneAttempt(RetryRule.java:217)
>  [kudu-test-utils-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
>   at 
> org.apache.kudu.test.junit.RetryRule$RetryStatement.evaluate(RetryRule.java:234)
>  [kudu-test-utils-1.13.0-SNAPSHOT.jar:1.13.0-SNAPSHOT]
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner.run(ParentRunner.java

[jira] [Updated] (KUDU-3559) AutoRebalancerTest.TestMaxMovesPerServer is flaky

2024-05-17 Thread Marton Greber (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Greber updated KUDU-3559:

Parent: KUDU-3578
Issue Type: Sub-task  (was: Bug)

> AutoRebalancerTest.TestMaxMovesPerServer is flaky
> -
>
> Key: KUDU-3559
> URL: https://issues.apache.org/jira/browse/KUDU-3559
> Project: Kudu
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 1.17.0
>Reporter: Alexey Serbin
>Priority: Major
> Attachments: auto_rebalancer-test.log.xz
>
>
> The {{AutoRebalancerTest.TestMaxMovesPerServer}} scenario is flaky, sometimes 
> failing with an error like the one below.  The full log is attached.
> {noformat}
> src/kudu/master/auto_rebalancer-test.cc:196: Failure
> Expected equality of these values:
>   0
>   NumMovesScheduled(leader_idx, BalanceThreadType::LEADER_REBALANCE)
> Which is: 1
> src/kudu/util/test_util.cc:395: Failure
> Failed
> Timed out waiting for assertion to pass.
> src/kudu/master/auto_rebalancer-test.cc:575: Failure
> Expected: CheckNoLeaderMovesScheduled() doesn't generate new fatal failures 
> in the current thread.
>   Actual: it does.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KUDU-3524) The TestScannerKeepAlivePeriodicallyCrossServers scenario fails with SIGABRT

2024-05-17 Thread Marton Greber (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Greber resolved KUDU-3524.
-
Fix Version/s: 1.18
   Resolution: Fixed

> The TestScannerKeepAlivePeriodicallyCrossServers scenario fails with SIGABRT
> 
>
> Key: KUDU-3524
> URL: https://issues.apache.org/jira/browse/KUDU-3524
> Project: Kudu
>  Issue Type: Bug
>Reporter: Alexey Serbin
>Priority: Major
> Fix For: 1.18
>
>
> The newly added test scenario 
> {{TestScannerKeepAlivePeriodicallyCrossServers}} fails with SIGABRT when run 
> as follows on macOS (though it's probably not macOS-specific) in a DEBUG 
> build:
> {noformat}
> ./bin/client-test --stress_cpu_threads=32 
> --gtest_filter='*TestScannerKeepAlivePeriodicallyCrossServers*'
> {noformat}
> The error message and the stack trace are below:
> {noformat}
> F20231113 12:21:13.431455 41195482 thread_restrictions.cc:79] Check failed: 
> LoadTLS()->wait_allowed Waiting is not allowed to be used on this thread to 
> prevent server-wide latency aberrations and deadlocks. Thread 41195482 (name: 
> "rpc reactor", category: "reactor")
> *** Check failure stack trace: ***
> Process 77090 stopped
> * thread #335, name = 'rpc reactor-41195482', stop reason = signal SIGABRT
> frame #0: 0x7fff205b890e libsystem_kernel.dylib`__pthread_kill + 10
> libsystem_kernel.dylib`__pthread_kill:
> ->  0x7fff205b890e <+10>: jae    0x7fff205b8918 ; <+20>
> 0x7fff205b8910 <+12>: movq   %rax, %rdi
> 0x7fff205b8913 <+15>: jmp    0x7fff205b2ab9 ; cerror_nocancel
> 0x7fff205b8918 <+20>: retq   
> Target 0: (client-test) stopped.
> (lldb) bt
> * thread #335, name = 'rpc reactor-41195482', stop reason = signal SIGABRT
>   * frame #0: 0x7fff205b890e libsystem_kernel.dylib`__pthread_kill + 10
> frame #1: 0x7fff205e75bd libsystem_pthread.dylib`pthread_kill + 263
> frame #2: 0x7fff2053c406 libsystem_c.dylib`abort + 125
> frame #3: 0x00010f64ebd8 
> libglog.1.dylib`google::LogMessage::SendToLog() [inlined] 
> google::LogMessage::Fail() at logging.cc:1946:3 [opt]
> frame #4: 0x00010f64ebd2 
> libglog.1.dylib`google::LogMessage::SendToLog(this=0x70001a95e108) at 
> logging.cc:1920:5 [opt]
> frame #5: 0x00010f64f47a 
> libglog.1.dylib`google::LogMessage::Flush(this=0x70001a95e108) at 
> logging.cc:1777:5 [opt]
> frame #6: 0x00010f65428f 
> libglog.1.dylib`google::LogMessageFatal::~LogMessageFatal(this=0x70001a95e108)
>  at logging.cc:2557:5 [opt]
> frame #7: 0x00010f650349 
> libglog.1.dylib`google::LogMessageFatal::~LogMessageFatal(this=) 
> at logging.cc:2556:37 [opt]
> frame #8: 0x00010e545473 
> libkudu_util.dylib`kudu::ThreadRestrictions::AssertWaitAllowed() at 
> thread_restrictions.cc:79:3
> frame #9: 0x00010013ebb9 
> client-test`kudu::CountDownLatch::Wait(this=0x70001a95e2a0) const at 
> countdown_latch.h:74:5
> frame #10: 0x00010a1749f5 
> libkrpc.dylib`kudu::Notification::WaitForNotification(this=0x70001a95e2a0)
>  const at notification.h:127:12
> frame #11: 0x00010a1748e9 
> libkrpc.dylib`kudu::rpc::Proxy::SyncRequest(this=0x00011317e9b8, 
> method="ScannerKeepAlive", req=0x70001a95e428, resp=0x70001a95e408, 
> controller=0x70001a95e458) at proxy.cc:259:8
> frame #12: 0x00010697220f 
> libtserver_service_proto.dylib`kudu::tserver::TabletServerServiceProxy::ScannerKeepAlive(this=0x00011317e9b8,
>  req=0x70001a95e428, resp=0x70001a95e408, 
> controller=0x70001a95e458) at tserver_service.proxy.cc:98:10
> frame #13: 0x00010525c5b6 
> libkudu_client.dylib`kudu::client::KuduScanner::Data::KeepAlive(this=0x00011290c700)
>  at scanner-internal.cc:664:3
> frame #14: 0x000105269e76 
> libkudu_client.dylib`kudu::client::KuduScanner::Data::StartKeepAlivePeriodically(this=0x000112899858)::$_0::operator()()
>  const at scanner-internal.cc:112:16
> frame #15: 0x000105269e30 
> libkudu_client.dylib`decltype(__f=0x000112899858)::$_0&>(fp)()) 
> std::__1::__invoke  long long, 
> std::__1::shared_ptr)::$_0&>(kudu::client::KuduScanner::Data::StartKeepAlivePeriodically(unsigned
>  long long, std::__1::shared_ptr)::$_0&) at 
> type_traits:3694:1
> frame #16: 0x000105269dd1 libkudu_client.dylib`void 
> std::__1::__invoke_void_return_wrapper true>::__call(kudu::client::KuduScanner::Data::StartKeepAlivePeriodically(unsigned
>  long long, std::__1::shared_ptr)::$_0&) at 
> __functional_base:348:9
> frame #17: 0x000105269d9d 
> libkudu_client.dylib`std::__1::__function::__alloc_func  long long, std::__1::shared_ptr)::$_0, 
> std::__1::allocator  long long, std::__1::shared_ptr)::$_0>, void 
> ()>::operator(this=0x000112

[jira] [Updated] (KUDU-3577) Dropping a nullable column from a table with per-range hash partitions makes the table unusable

2024-05-17 Thread Alexey Serbin (Jira)


 [ 
https://issues.apache.org/jira/browse/KUDU-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Serbin updated KUDU-3577:

Description: 
For particular table schemas with per-range hash schemas, dropping a nullable 
column might make the table unusable.  A workaround exists: just add the 
dropped column back using the {{kudu table add_column}} CLI tool.  For example, 
for the reproduction scenario below, use the following command to restore 
access to the table's data:
{noformat}
$ kudu table add_column $M test city string
{noformat}

As for the reproduction scenario, see below for the sequence of {{kudu}} CLI 
commands.

Set environment variable for the Kudu cluster's RPC endpoint:
{noformat}
$ export M=
{noformat}

Create a table with two range partitions.  It's crucial that the {{city}} 
column is nullable.
{noformat}
$ kudu table create $M '{ "table_name": "test", "schema": { "columns": [ { 
"column_name": "id", "column_type": "INT64" }, { "column_name": "name", 
"column_type": "STRING" }, { "column_name": "age", "column_type": "INT32" }, { 
"column_name": "city", "column_type": "STRING", "is_nullable": true } ], 
"key_column_names": ["id", "name", "age"] }, "partition": { "hash_partitions": 
[ {"columns": ["id"], "num_buckets": 4, "seed": 1}, {"columns": ["name"], 
"num_buckets": 4, "seed": 2} ], "range_partition": { "columns": ["age"], 
"range_bounds": [ { "lower_bound": {"bound_type": "inclusive", "bound_values": 
["30"]}, "upper_bound": {"bound_type": "exclusive", "bound_values": ["60"]} }, 
{ "lower_bound": {"bound_type": "inclusive", "bound_values": ["60"]}, 
"upper_bound": {"bound_type": "exclusive", "bound_values": ["90"]} } ] } }, 
"num_replicas": 1 }'
{noformat}

Add an extra range partition with custom hash schema:
{noformat}
$ kudu table add_range_partition $M test '[90]' '[120]' --hash_schema 
'{"hash_schema": [ {"columns": ["id"], "num_buckets": 3, "seed": 5}, 
{"columns": ["name"], "num_buckets": 3, "seed": 6} ]}'
{noformat}

Check the updated partitioning info:
{noformat}
$ kudu table describe $M test
TABLE test (
id INT64 NOT NULL,
name STRING NOT NULL,
age INT32 NOT NULL,
city STRING NULLABLE,
PRIMARY KEY (id, name, age)
)
HASH (id) PARTITIONS 4 SEED 1,
HASH (name) PARTITIONS 4 SEED 2,
RANGE (age) (
PARTITION 30 <= VALUES < 60,
PARTITION 60 <= VALUES < 90,
PARTITION 90 <= VALUES < 120 HASH(id) PARTITIONS 3 HASH(name) PARTITIONS 3
)
OWNER root
REPLICAS 1
COMMENT 
{noformat}

Drop the {{city}} column:
{noformat}
$ kudu table delete_column $M test city
{noformat}

Now try to run {{kudu table describe}} against the table once the {{city}} 
column is dropped.  It errors out with {{Invalid argument}}:
{noformat}
$ kudu table describe $M test
Invalid argument: Invalid split row type UNKNOWN
{noformat}

A similar issue manifests itself when trying to run {{kudu table scan}} against 
the table:
{noformat}
$ kudu table scan $M test
Invalid argument: Invalid split row type UNKNOWN
{noformat}

  was:
See the reproduction scenario using the {{kudu}} CLI tools below.

Set environment variable for the Kudu cluster's RPC endpoint:
{noformat}
$ export M=
{noformat}

Create a table with two range partitions.  It's crucial that the {{city}} 
column is nullable.
{noformat}
$ kudu table create $M '{ "table_name": "test", "schema": { "columns": [ { 
"column_name": "id", "column_type": "INT64" }, { "column_name": "name", 
"column_type": "STRING" }, { "column_name": "age", "column_type": "INT32" }, { 
"column_name": "city", "column_type": "STRING", "is_nullable": true } ], 
"key_column_names": ["id", "name", "age"] }, "partition": { "hash_partitions": 
[ {"columns": ["id"], "num_buckets": 4, "seed": 1}, {"columns": ["name"], 
"num_buckets": 4, "seed": 2} ], "range_partition": { "columns": ["age"], 
"range_bounds": [ { "lower_bound": {"bound_type": "inclusive", "bound_values": 
["30"]}, "upper_bound": {"bound_type": "exclusive", "bound_values": ["60"]} }, 
{ "lower_bound": {"bound_type": "inclusive", "bound_values": ["60"]}, 
"upper_bound": {"bound_type": "exclusive", "bound_values": ["90"]} } ] } }, 
"num_replicas": 1 }'
{noformat}

Add an extra range partition with custom hash schema:
{noformat}
$ kudu table add_range_partition $M test '[90]' '[120]' --hash_schema 
'{"hash_schema": [ {"columns": ["id"], "num_buckets": 3, "seed": 5}, 
{"columns": ["name"], "num_buckets": 3, "seed": 6} ]}'
{noformat}

Check the updated partitioning info:
{noformat}
$ kudu table describe $M test
TABLE test (
id INT64 NOT NULL,
name STRING NOT NULL,
age INT32 NOT NULL,
city STRING NULLABLE,
PRIMARY KEY (id, name, age)
)
HASH (id) PARTITIONS 4 SEED 1,
HASH (name) PARTITIONS 4 SEED 2,
RANGE (age) (
PARTITION 30 <= VALUES < 60,
PARTITION 60 <= VALUES < 90,
PARTITION 90 <= VALUES < 120 HASH(id) PARTITIONS 3 HASH(name) PARTITIONS 3
)
OWNER root
REPLICAS 1
COMMENT 
{noformat}

Drop th