That IncompatibleClassChangeError means the HBase jars on your classpath were built against Hadoop 2 (where HdfsFileStatus was a class; in Hadoop 3 it became an interface). Read BUILDING.md on how to rebuild HBase with Hadoop 3.
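
Roughly, something like the following (a sketch only; BUILDING.md has the authoritative steps, and -Dhadoop.profile=3.0 is the standard property for selecting Hadoop 3 when building HBase 2.x):

    # Rebuild HBase against Hadoop 3, then rebuild Phoenix against the result
    git clone https://github.com/apache/hbase
    cd hbase
    mvn clean install -DskipTests -Dhadoop.profile=3.0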

On Thu, Sep 30, 2021 at 8:42 AM Simon Mottram <simon.mott...@cucumber.co.nz>
wrote:

> Cheers, thanks for that. I get a few errors, so it doesn't build; they are
> at the bottom of this message.
>
> I noticed from BUILDING.md that I should try mvn install separately, which I
> also did, followed by mvn package, but got the same error.
>
> S
>
> [ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 28.412 s <<< FAILURE! - in org.apache.phoenix.hbase.index.write.recovery.TestPerRegionIndexWriteCache
> [ERROR] org.apache.phoenix.hbase.index.write.recovery.TestPerRegionIndexWriteCache.testMultipleRegions  Time elapsed: 2.205 s  <<< ERROR!
> java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
>         at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:536)
>         at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$400(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$8.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:616)
>         at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$8.doCall(FanOutOneBlockAsyncDFSOutputHelper.java:611)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.createOutput(FanOutOneBlockAsyncDFSOutputHelper.java:624)
>         at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:53)
>         at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:180)
>         at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:166)
>         at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:113)
>         at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669)
>         at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:130)
>         at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:841)
>         at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:548)
>         at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:489)
>         at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160)
>         at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62)
>         at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:296)
>         at org.apache.phoenix.hbase.index.write.recovery.TestPerRegionIndexWriteCache.setUp(TestPerRegionIndexWriteCache.java:109)
>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>         at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>         at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>         at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>         at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
>         at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
>         at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
>         at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
>         at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
>
> [ERROR] org.apache.phoenix.hbase.index.write.recovery.TestPerRegionIndexWriteCache.testAddRemoveSingleRegion  Time elapsed: 0.033 s  <<< ERROR!
> java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
>         [stack trace identical to testMultipleRegions above]
>
> [ERROR] org.apache.phoenix.hbase.index.write.recovery.TestPerRegionIndexWriteCache.testMultipleAddsForSingleRegion  Time elapsed: 0.032 s  <<< ERROR!
> java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.hdfs.protocol.HdfsFileStatus, but class was expected
>         [stack trace identical to testMultipleRegions above]
>
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Errors:
> [ERROR]   TestPerRegionIndexWriteCache.setUp:109 » IncompatibleClassChange Found interfa...
> [ERROR]   TestPerRegionIndexWriteCache.setUp:109 » IncompatibleClassChange Found interfa...
> [ERROR]   TestPerRegionIndexWriteCache.setUp:109 » IncompatibleClassChange Found interfa...
> [INFO]
> [ERROR] Tests run: 1898, Failures: 0, Errors: 3, Skipped: 6
> [INFO]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary for Apache Phoenix 5.2.0-SNAPSHOT:
> [INFO]
> [INFO] Apache Phoenix ..................................... SUCCESS [02:51 min]
> [INFO] Phoenix Hbase 2.4.1 compatibility .................. SUCCESS [07:03 min]
> [INFO] Phoenix Hbase 2.4.0 compatibility .................. SUCCESS [ 37.760 s]
> [INFO] Phoenix Hbase 2.3.0 compatibility .................. SUCCESS [01:40 min]
> [INFO] Phoenix Hbase 2.2.5 compatibility .................. SUCCESS [01:03 min]
> [INFO] Phoenix Hbase 2.1.6 compatibility .................. SUCCESS [ 39.114 s]
> [INFO] Phoenix Core ....................................... FAILURE [12:08 min]
> [INFO] Phoenix - Pherf .................................... SKIPPED
> [INFO] Phoenix - Tracing Web Application .................. SKIPPED
> [INFO] Phoenix Client Parent .............................. SKIPPED
> [INFO] Phoenix Client ..................................... SKIPPED
> [INFO] Phoenix Client Embedded ............................ SKIPPED
> [INFO] Phoenix Server JAR ................................. SKIPPED
> [INFO] Phoenix Assembly ................................... SKIPPED
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO]
> ------------------------------------------------------------------------
> ------------------------------
> *From:* Istvan Toth <st...@cloudera.com>
> *Sent:* 30 September 2021 5:40 PM
> *To:* user@phoenix.apache.org <user@phoenix.apache.org>
> *Subject:* Re: Urgent problem with Phoenix function LAST_VALUE and joins
>
> git clone https://github.com/apache/phoenix
> mvn clean package
> bin/phoenix_sandbox.py
> Use the host and port it displays to start sqlline.py.
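>
> For example, if the sandbox reports localhost:2181 as its ZooKeeper quorum (an illustrative value; use whatever it actually prints):
>
> bin/sqlline.py localhost:2181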
>
> On Thu, Sep 30, 2021 at 4:21 AM Simon Mottram <
> simon.mott...@cucumber.co.nz> wrote:
>
> Sounds like a plan thanks.
>
> Is there any doco to help with getting set up? I assume the phoenix dev
> channel is the place to ask questions.
>
> Cheers
>
> Simon
> ------------------------------
> *From:* Istvan Toth <st...@cloudera.com>
> *Sent:* 29 September 2021 7:37 PM
> *To:* user@phoenix.apache.org <user@phoenix.apache.org>
> *Subject:* Re: Urgent problem with Phoenix function LAST_VALUE and joins
>
> Please try to reproduce with the latest released Phoenix version, or git
> master HEAD before opening the JIRA.
> You can use bin/phoenix_sandbox.py in the source repo/distribution to
> quickly start a test environment without installing Phoenix in a real
> cluster.
>
>
> On Wed, Sep 29, 2021 at 12:36 AM Simon Mottram <
> simon.mott...@cucumber.co.nz> wrote:
>
> OK, should I file a bug for this in your JIRA?
>
> I will try to set up a more reproducible series of steps.
>
> Cheers
>
> S
> ------------------------------
> *From:* Kadir Ozdemir <kozde...@salesforce.com>
> *Sent:* 29 September 2021 11:32 AM
> *To:* Geoffrey Jacoby <gjac...@salesforce.com>
> *Cc:* user <user@phoenix.apache.org>
> *Subject:* Re: Urgent problem with Phoenix function LAST_VALUE and joins
>
> Geoffrey,
>
> Thank you for pointing this out. I mixed up the release numbers. Yes, the
> paging feature cannot be the reason here. My comment about disabling the
> paging feature should be ignored.
>
> Kadir
>
> On Tue, Sep 28, 2021 at 7:44 AM Geoffrey Jacoby <gjac...@salesforce.com>
> wrote:
>
> Kadir,
>
> The server-side paging changes are in Phoenix 5.1, which is at feature
> parity with 4.16 for HBase 1.x, not 5.0 (which was released in 2018 and is
> roughly equivalent to Phoenix 4.14).
>
> So if Simon's environment is Phoenix 5.0, it can't be affected by the
> server-side changes. (Though I see that he's running HBase 2.2, which I
> thought wasn't supported until Phoenix 5.1?)
>
> Geoffrey
>
> On Tue, Sep 28, 2021 at 4:07 AM Simon Mottram <
> simon.mott...@cucumber.co.nz> wrote:
>
> Hi thanks for the quick response
>
> I'm going to appear thick now, brace yourself. By client, do you mean the
> query server running in EMR, which does have an hbase-site.xml, or my Java
> API, where we would set it as a property? We don't have an hbase-site.xml
> there (unless it's embedded in the thin client jar).
>
> Cheers
>
> S
> ------------------------------
> *From:* Kadir Ozdemir <kozde...@salesforce.com>
> *Sent:* 28 September 2021 7:45 PM
> *To:* user <user@phoenix.apache.org>
> *Subject:* Re: Urgent problem with Phoenix function LAST_VALUE and joins
>
> In Phoenix 5.0 we introduced a new server-side paging feature to eliminate
> timeouts due to long-running server-side computations such as aggregations
> and joins. I wonder if this new feature caused your issue. If so, the
> workaround is to disable it by setting phoenix.server.paging.enabled to
> false in hbase-site.xml. This is a client-side config param, so you just
> need to restart your client. Hope this will fix your issue.
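>
> For illustration, the property would look like this in the hbase-site.xml your client reads:
>
> <property>
>   <name>phoenix.server.paging.enabled</name>
>   <value>false</value>
> </property>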
>
> On Mon, Sep 27, 2021 at 9:23 PM Simon Mottram <
> simon.mott...@cucumber.co.nz> wrote:
>
> Hi
>
> Got my fingers crossed that there's a workaround for this, as it really is a
> big problem for us.
>
> We are using:
>
> Amazon EMR
>
> Release label: emr-6.1.0
> Hadoop distribution: Amazon
> Applications: HBase 2.2.5, Hive 3.1.2, Phoenix 5.0.0, Pig 0.17.0
>
> Thin Client version:
> phoenix-5.0.0-HBase-2.0-thin-client.jar
>
> We get the following error when doing a LAST_VALUE aggregation where
>
>   1.  A JOIN is empty
>   2.  The column is INTEGER or DATETIME
>
> Remote driver error: IllegalArgumentException: offset (25) + length (4)
> exceed the capacity of the array: 25
>
> The query that breaks is:
>
> SELECT
>     "BIOMATERIAL_NAME",
>     FIRST_VALUE("PLANT_FRUIT_COUNT") WITHIN GROUP (
>         ORDER BY OBSERVATION_DATE DESC) AS "Plant Fruit Count"
> FROM
>     VARIABLE_VALUES_QA.OBSERVATION
> JOIN VARIABLE_VALUES_QA.OBSERVATION_TAG_INDEX
>     ON DOCID = OBSERVATION_VALUE_ID
>     AND TAGNAME = 'TRIAL_ID'
>     AND TAGVALUE = 'f62dd8e0-d2ea-4d9a-9ab6-2049601bb9fe'
> GROUP BY
>     "BIOMATERIAL_NAME"
> LIMIT 10 OFFSET 0;
>
> I can refactor this using EXISTS but get the same error; presumably the
> driver knows to treat them the same:
>
> SELECT
>     "BIOMATERIAL_NAME",
>     FIRST_VALUE("PLANT_FRUIT_COUNT") WITHIN GROUP (
>         ORDER BY OBSERVATION_DATE DESC) AS "Plant Fruit Count"
> FROM
>     VARIABLE_VALUES_QA.OBSERVATION
> WHERE
>     EXISTS (
>         SELECT
>             DOCID
>         FROM
>             VARIABLE_VALUES_QA.OBSERVATION_TAG_INDEX
>         WHERE
>             DOCID = OBSERVATION_VALUE_ID
>             AND TAGNAME = 'TRIAL_ID'
>             AND TAGVALUE = 'f62dd8e0-d2ea-4d9a-9ab6-2049601bb9fe')
> GROUP BY
>     "BIOMATERIAL_NAME"
> LIMIT 10 OFFSET 0;
>
> If we remove the external reference we get no error, regardless of whether
> there are any hits or not. These all work.
>
> There are no hits for this query:
>
> SELECT
>     "BIOMATERIAL_NAME",
>     FIRST_VALUE("PLANT_FRUIT_COUNT") WITHIN GROUP (
>         ORDER BY OBSERVATION_DATE DESC) AS "Plant Fruit Count"
> FROM
>     VARIABLE_VALUES_QA.OBSERVATION
> WHERE
>     BIOMATERIAL_TYPE = 'aardvark'
> GROUP BY
>     "BIOMATERIAL_NAME"
> LIMIT 10 OFFSET 0;
>
> Lots of hits for this query:
>
> SELECT
>     "BIOMATERIAL_NAME",
>     FIRST_VALUE("PLANT_FRUIT_COUNT") WITHIN GROUP (
>         ORDER BY OBSERVATION_DATE DESC) AS "Plant Fruit Count"
> FROM
>     VARIABLE_VALUES_QA.OBSERVATION
> GROUP BY
>     "BIOMATERIAL_NAME"
> LIMIT 10 OFFSET 0;
>
> I've tried weird things like comparing EXISTS to TRUE to try to force it
> into a normal BOOLEAN value; same IllegalArgumentException:
>
> SELECT
>     "BIOMATERIAL_NAME",
>     FIRST_VALUE("PLANT_FRUIT_COUNT") WITHIN GROUP (
>         ORDER BY OBSERVATION_DATE DESC) AS "Plant Fruit Count"
> FROM
>     VARIABLE_VALUES_QA.OBSERVATION
> WHERE
>     (EXISTS (
>         SELECT
>             DOCID
>         FROM
>             VARIABLE_VALUES_QA.OBSERVATION_TAG_INDEX
>         WHERE
>             DOCID = OBSERVATION_VALUE_ID
>             AND TAGNAME = 'TRIAL_ID'
>             AND TAGVALUE = 'f62dd8e0-d2ea-4d9a-9ab6-2049601bb9fe')) = TRUE
> GROUP BY
>     "BIOMATERIAL_NAME"
> LIMIT 10 OFFSET 0;
>
> And are you prepared for this one, which throws the exact same error:
>
> SELECT
>     "BIOMATERIAL_NAME",
>     FIRST_VALUE("PLANT_FRUIT_COUNT") WITHIN GROUP (
>         ORDER BY OBSERVATION_DATE DESC) AS "Plant Fruit Count"
> FROM
>     VARIABLE_VALUES_QA.OBSERVATION
> WHERE
>     TRUE
> GROUP BY
>     "BIOMATERIAL_NAME"
> LIMIT 10 OFFSET 0;
>
> Change FIRST_VALUE to AVG and it works fine.
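>
> That is, a sketch of the last query with AVG substituted (AVG is a plain
> aggregate, so the WITHIN GROUP clause goes away):
>
> SELECT
>     "BIOMATERIAL_NAME",
>     AVG("PLANT_FRUIT_COUNT") AS "Plant Fruit Count"
> FROM
>     VARIABLE_VALUES_QA.OBSERVATION
> GROUP BY
>     "BIOMATERIAL_NAME"
> LIMIT 10 OFFSET 0;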
>
> --
> *István Tóth* | Staff Software Engineer
> st...@cloudera.com
> ------------------------------
>
>
>
> --
> *István Tóth* | Staff Software Engineer
> st...@cloudera.com
> ------------------------------
>


-- 
*István Tóth* | Staff Software Engineer
st...@cloudera.com
------------------------------
