Hi,
We are using HDP 2.3.2 (Phoenix 4.4 and HBase 1.1). We created a secondary
index on an already existing table and paused all writes to the primary table.
Then we invoked IndexTool to populate the secondary index table. We have tried
the same steps many times but we keep getting the following error (we have al
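As a point of reference, the IndexTool step described above is normally driven as a MapReduce job; a minimal sketch of launching it through Hadoop's ToolRunner follows. The table, index, and output-path names are placeholders, and the flags should be verified against the IndexTool version shipped in your HDP build.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.index.IndexTool;

public class RebuildSecondaryIndex {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath so the job can reach ZooKeeper/HBase.
        Configuration conf = HBaseConfiguration.create();

        // Equivalent to: hbase org.apache.phoenix.mapreduce.index.IndexTool <flags>
        // MY_TABLE, MY_TABLE_IDX, and the HDFS output path are hypothetical placeholders.
        int rc = ToolRunner.run(conf, new IndexTool(), new String[] {
                "--data-table",  "MY_TABLE",
                "--index-table", "MY_TABLE_IDX",
                "--output-path", "/tmp/MY_TABLE_IDX_HFILES"
        });
        System.exit(rc);
    }
}
```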
I have everything running inside a single Docker container:
https://github.com/CheyenneForbes/docker-apache-phoenix/blob/master/Dockerfile
Could you look at my Dockerfile and tell me what steps are missing to get
the logging?
Regards,
Cheyenne O. Forbes
On Thu, May 25, 2017 at 3:05 PM, Josh El
The log4j.properties which you have configured to be on the HBase
RegionServer classpath. I don't know how you configured your system.
On 5/25/17 2:02 PM, Cheyenne Forbes wrote:
Which one of the files? I found 4
/usr/local/hbase-1.2.5/conf/log4j.properties
/usr/local/apache-phoenix-4.10.0-HBa
Which one of the files? I found 4
/usr/local/hbase-1.2.5/conf/log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/sandbox-log4j.properties
/usr/local/apache-phoenix-4.10.0-HBase-1.2-bin/bin/config/log4j.properties
Verify that HBase's log4j.properties configuration is set to print log
messages for your class at INFO (check the rootLogger level, the log threshold,
and the logger level for your class or package).
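As a rough illustration of the logging being discussed (the package, class, and messages below are made up, and the Phoenix UDF plumbing itself is omitted), the UDF code obtains a log4j Logger and emits at INFO; the RegionServer's log4j.properties then has to allow INFO for that class or package before anything appears in the RegionServer log:

```java
package com.example.udf;

import org.apache.log4j.Logger;

// Hypothetical helper used by a UDF; the actual Phoenix ScalarFunction subclass is omitted.
public class MyUdfHelper {
    // Logger named after the class, so it can be targeted from log4j.properties with
    //   log4j.logger.com.example.udf=INFO
    // (and the rootLogger / threshold must not filter INFO out).
    private static final Logger LOG = Logger.getLogger(MyUdfHelper.class);

    public static String normalize(String input) {
        LOG.info("normalize() called with input=" + input);
        String result = (input == null) ? null : input.trim().toLowerCase();
        LOG.info("normalize() returning " + result);
        return result;
    }
}
```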
On 5/24/17 11:02 AM, Cheyenne Forbes wrote:
I want to output the steps of execution of my UDF but I can't find the
logs, I s
Hi Megan,
Did you happen to restart Squirrel and/or re-connect to Phoenix after
making the change? The steps you took (Squirrel notwithstanding) should
have sufficiently fixed the issue you described.
Another sanity check would be to make sure you didn't have any
mis-typing of the configur
Which is more efficient on a heavy-usage platform?
1. Join 8 tables with billions of rows
2. Select the "primary row" from a table, then run multiple select
queries on the other tables using each primary key returned from the first
table, in a for loop on the client side
Regards,
Cheyen
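To make the two options above concrete, here is a rough sketch of each access pattern through the Phoenix JDBC driver. The table names, columns, and ZooKeeper quorum are hypothetical; the sketch only shows the shape of the two approaches, not which one wins.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class JoinVsClientSideLoop {
    public static void main(String[] args) throws Exception {
        // Hypothetical ZooKeeper quorum; adjust to your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {

            // Option 1: let Phoenix perform the join on the server side.
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT p.ID, d.DETAIL FROM PRIMARY_TABLE p " +
                     "JOIN DETAIL_TABLE d ON d.PRIMARY_ID = p.ID " +
                     "WHERE p.STATUS = 'ACTIVE'")) {
                while (rs.next()) {
                    // consume joined rows
                }
            }

            // Option 2: fetch the primary keys first, then issue one point lookup
            // per key from the client in a loop (N+1 round trips).
            try (Statement st = conn.createStatement();
                 ResultSet keys = st.executeQuery(
                     "SELECT ID FROM PRIMARY_TABLE WHERE STATUS = 'ACTIVE'");
                 PreparedStatement lookup = conn.prepareStatement(
                     "SELECT DETAIL FROM DETAIL_TABLE WHERE PRIMARY_ID = ?")) {
                while (keys.next()) {
                    lookup.setLong(1, keys.getLong(1));
                    try (ResultSet rs = lookup.executeQuery()) {
                        while (rs.next()) {
                            // consume per-key rows
                        }
                    }
                }
            }
        }
    }
}
```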
Hi,
There are performance reports on the Phoenix website -
http://phoenix-bin.github.io/client/performance/latest.htm
What cluster capacity (RAM, cores, CPUs, number of RegionServers) was used
for testing these?
Sorry if this is indeed mentioned on the website and I missed it.
Thanks
Chaitanya
--
Hi,
I am observing a similar behavior. I am doing a POC of Phoenix since our query
workloads are a mix of point lookups and aggregations.
As far as I can see, Phoenix performs well on point lookups based on either the
PK or a secondary index. But when it comes to aggregations, it has to do a full scan
and its sl
The next release of Phoenix (v4.11.0) will support HBase 1.3.1 (see
PHOENIX-3603). No timeline has been decided for the release yet, but you
may expect some updates in the next 1-2 months.
On Thu, May 25, 2017 at 3:32 AM, Anirudha Jadhav wrote:
> hi,
>
> just checking in, any idea what kind of a