Hi Josh,
Thank you for your advice; it did work.
I built the client-spark jar with the patch you referred to applied against
the CDH code, and it succeeded. Then I ran some code in "local" mode, and the
result is correct. But when it comes to "yarn-client" mode, som
Hi,
The error "java.sql.SQLException: No suitable driver found..." is typically
thrown when the worker nodes can't find Phoenix on the class path.
I'm not certain that passing those values using '--conf' actually works or
not with Spark. I tend to set them in my 'spark-defaults.conf' in the Spark
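For example, the relevant entries would look something like this (the jar
path is illustrative and depends on your installation):

spark.driver.extraClassPath   /opt/phoenix/phoenix-client.jar
spark.executor.extraClassPath /opt/phoenix/phoenix-client.jar

With those set, both the driver and the executors can resolve
org.apache.phoenix.jdbc.PhoenixDriver.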
Hi,
I am running HBase on HDP 2.3.2 with the following parameters:
Hadoop Version: 2.7.1.2.3.2.0-2950, revision=5cc60e0003e33aa98205f18bccaeaf36cb193c1c
Zookeeper Quorum: sandbox.hortonworks.com:2181
Zookeeper Base Path: /hbase-unsecure
HBase Version: 1.1.2.2.3.2.0-2950, revision=58355eb3c88bded7
Did you run the thin server (the HTTP server that proxies to Phoenix)?
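If it is running, the thin JDBC URL would look something like this (8765 is
the Query Server's default port):

jdbc:phoenix:thin:url=http://sandbox.hortonworks.com:8765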
On Jan 5, 2016, 11:15 PM, wrote:
> Hi,
>
> I am running HBase on HDP 2.3.2 with the following parameters:
>
> Hadoop Version 2.7.1.2.3.2.0-2950,
> revision=5cc60e0003e33aa98205f18bccaeaf36cb193c1c
> Zookeeper Quorum sandbox.hor
Hi guys,
I have been testing out Phoenix local indexes and I'm facing an issue
after restarting the entire HBase cluster.
*Scenario:* I'm using Phoenix 4.4 and HBase 1.1.1. My test cluster contains
10 machines, and the main table contains 300 pre-split regions, which implies
300 regions on local i
Hello Thomas,
Thank you! That was what was missing! :)
I noticed that dbConnection.commit(); is not supported.
Is there another way to commit? The inserted values are not
persisted.
Thank you,
-- C
From: Thomas Decaux
To: user@phoenix.apache.org,
Date: 01/05/2016 05:17 PM
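For reference, a minimal sketch of the usual Phoenix JDBC write pattern with
the thick driver (table name and connection URL are illustrative; the thin
client would use a jdbc:phoenix:thin: URL instead):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PhoenixCommitSketch {
    public static void main(String[] args) throws SQLException {
        // Phoenix connections default to autoCommit=false; upserts are
        // batched on the client until commit() flushes them to HBase.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
            try (PreparedStatement ps =
                     conn.prepareStatement("UPSERT INTO MY_TABLE VALUES (?, ?)")) {
                ps.setString(1, "k1");
                ps.setString(2, "v1");
                ps.executeUpdate();
            }
            conn.commit(); // without this (or autoCommit=true), nothing persists
        }
    }
}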
We have a table with a BIGINT[] column. Since Phoenix doesn't support
indexing this data type, our queries do a full table scan when we have
to filter on this field.
What are the alternate approaches? Tried looking into views, but no luck.
Appreciate your time.
Kumar
There is some limited indexing you can do on an array by creating a
functional index for a particular array element. For example:
CREATE TABLE T (K VARCHAR PRIMARY KEY, A BIGINT ARRAY);
CREATE INDEX IDX ON T (A[3]);
In this case, the following query would use the index:
SELECT K FROM T WHERE A[3] = 1
Thanks, James, for the response. Our use case is that the array holds all the
accounts for a particular customer, so the table and query are:
CREATE TABLE T (ID VARCHAR PRIMARY KEY, A BIGINT ARRAY);
Find-by-account is the use case:
SELECT ID FROM T WHERE ? = ANY(A);
On Tue, Jan 5, 2016 at 3:34 PM
Hi,
I am using Phoenix 4.4. I am trying to integrate my MapReduce job with
Phoenix following this doc: https://phoenix.apache.org/phoenix_mr.html
My Phoenix table has around 1000 columns. I have some hesitation regarding
using *COLUMN_INDEX* for setting its value rather than *NAME*, as per the
followi
If the "finding customers that have a particular account" is a common
query, you might consider modifying your schema by pulling the account into
an optional/nullable row key column, like this:
CREATE TABLE T (CID VARCHAR NOT NULL, AID BIGINT, V1 VARCHAR, V2 VARCHAR
CONSTRAINT pk PRIMARY KEY (
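With that schema there is one row per (customer, account) pair, so the
find-by-account lookup becomes a filter on a primary-key column (the account
id below is illustrative):

SELECT CID FROM T WHERE AID = 12345;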
Hi team,
Any idea on this issue? We are stuck on it and need to provide a solution
before Jan 8th.
Any suggestions or guidance, please.
Thanks,
Durga Prasad
On Thu, Dec 31, 2015 at 12:14 PM, Ns G wrote:
> Hi There,
>
> Here is my JDBC connection string.
>
> My Hbase Cluster is
With JDBC, both will already work:
pstmt.setString("STOCK_NAME", stockName);
pstmt.setString(1, stockName);
On Tuesday, January 5, 2016, anil gupta wrote:
> Hi,
>
> I am using Phoenix4.4. I am trying to integrate my MapReduce job with
> Phoenix following this doc: https://phoenix.apache.org/p
Hi Durga,
Can you kinit using the same keytab/principal from the node where you are
trying to run this program? Is your program able to read the keytab file?
Can you try to run the program from the same node that is running sqlline?
Also, don't pass the jdbcjaas.conf this time.
This line seems to provide mo
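For example, verifying the keytab from a shell on that node (keytab path and
principal are illustrative):

kinit -kt /etc/security/keytabs/app.keytab app@EXAMPLE.COM
klist

If kinit succeeds there, the same keytab and principal should work for the
program.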