Re: Permissions Question

2015-07-07 Thread Gabriel Reid
Hi Zack, There are two options that I know of, and I think both of them should work. The first is that you can supply a custom output directory to the bulk loader using the -o parameter (see http://phoenix.apache.org/bulk_dataload.html). In this way you can ensure that the output directory doesn…

[ANNOUNCE] YCSB 0.2.0 Release

2015-07-07 Thread Sean Busbey
On behalf of the development community, I am pleased to announce the release of YCSB version 0.2.0. Highlights: * Apache Cassandra 2.0 CQL support * Apache HBase 1.0 support * Apache Accumulo 1.6 support * MongoDB - support for all production versions released since 2011 * Tarantool 1.6 support * …

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
It might not be real cache expiration (which should not be considered a bug), since your increase of the cache time-to-live didn't solve the problem. So the problem might be that the cache had not been sent over to that server at all, which would be a bug, and most likely it would be because the…

Re: Could not find hash cache for joinId

2015-07-07 Thread Alex Kamil
Maryann, is this patch only for the client? As we saw the error in the region server logs, it seems that the server-side cache has expired as well. By "start a new process doing the same query" do you mean start two client instances and run the query from one and then from the other client? Thanks, Alex On Tu…

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
My question was actually: if the problem appears on your cluster, will it go away if you just start a new process doing the same query? I do have a patch, but it only fixes the problem I am assuming here, and it might be something else. Thanks, Maryann On Tue, Jul 7, 2015 at 12:59 PM, Alex Kamil wrot…

Re: Could not find hash cache for joinId

2015-07-07 Thread Alex Kamil
A patch would be great; we saw that this problem goes away in standalone mode but reappears on the cluster. On Tue, Jul 7, 2015 at 12:56 PM, Alex Kamil wrote: > sure, sounds good > > On Tue, Jul 7, 2015 at 10:57 AM, Maryann Xue > wrote: > >> Hi Alex, >> >> I suspect it's related to using cached…

Re: Could not find hash cache for joinId

2015-07-07 Thread Alex Kamil
Sure, sounds good. On Tue, Jul 7, 2015 at 10:57 AM, Maryann Xue wrote: > Hi Alex, > > I suspect it's related to using cached region locations that might have > been invalid. A simple way to verify this is try starting a new java > process doing this query and see if the problem goes away. > > > T…

Re: HBase + Phoenix for CDR

2015-07-07 Thread James Taylor
Actually, we support deletes on tables with IMMUTABLE_ROWS=true (I believe as of the 4.2 release), as long as you're only filtering on columns contained in the index. On Tuesday, July 7, 2015, Vladimir Rodionov wrote: > Phoenix grammar contains examples of usage. For example, create table: > ht…
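
As a rough sketch of the constraint James describes (table and column names here are hypothetical, not taken from the thread), a delete against an IMMUTABLE_ROWS=true table is accepted as long as the WHERE clause only references columns covered by the primary key or index:

    -- CDR is assumed to be declared with IMMUTABLE_ROWS=true,
    -- with ANUMBER part of the primary key (or a covering index)
    DELETE FROM CDR WHERE ANUMBER = '38640123456';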

Re: HBase + Phoenix for CDR

2015-07-07 Thread Vladimir Rodionov
The Phoenix grammar contains usage examples; for example, CREATE TABLE: https://phoenix.apache.org/language/index.html#create_table You cannot specify TTL per record. I suggest using a one-year TTL for the whole table plus additional logic inside your application to filter expired rows out. When you set I…
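
A minimal sketch of that suggestion, using a hypothetical CDR table (names and columns are illustrative only): TTL is an HBase property set for the whole table in seconds, IMMUTABLE_ROWS is a Phoenix table property, and per-record expiry is left to application-side filtering.

    -- one-year TTL for the whole table; HBase expires rows older than that
    CREATE TABLE IF NOT EXISTS CDR (
        ANUMBER  VARCHAR NOT NULL,
        CALL_TS  DATE    NOT NULL,
        DURATION INTEGER,
        CONSTRAINT PK PRIMARY KEY (ANUMBER, CALL_TS)
    ) IMMUTABLE_ROWS=true, TTL=31536000;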

Delete statement reports row deleted when no rows deleted

2015-07-07 Thread Michael McAllister
All, We have just noticed that when you issue the same delete statement twice, the delete statement reports rows deleted the second time, when there are in fact no rows left to delete. I have created a test case and attached it with a log of the results I am seeing. Note specifically in th…
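
A sketch of the kind of test case being described (the actual attachment is not reproduced here; the table name is made up). The second DELETE still reports a row deleted even though the first one already removed it:

    CREATE TABLE DEL_TEST (ID INTEGER PRIMARY KEY, VAL VARCHAR);
    UPSERT INTO DEL_TEST VALUES (1, 'a');
    DELETE FROM DEL_TEST WHERE ID = 1;  -- reports 1 row affected, as expected
    DELETE FROM DEL_TEST WHERE ID = 1;  -- also reports 1 row affected, though nothing is left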

Re: Could not find hash cache for joinId

2015-07-07 Thread Maryann Xue
Hi Alex, I suspect it's related to using cached region locations that might have been invalid. A simple way to verify this is to try starting a new Java process running this query and see if the problem goes away. Thanks, Maryann On Mon, Jul 6, 2015 at 10:56 PM, Maryann Xue wrote: > Thanks a lot f…

RE: Permissions Question

2015-07-07 Thread Riesland, Zack
Thanks Krishna, The HFiles are stored in, for example, /tmp/daa6119d-f49e-485e-a6fe-1405d9c3f2a4/. ‘tmp’ is owned by ‘hdfs’ in group ‘hdfs’. ‘daa6119d-f49e-485e-a6fe-1405d9c3f2a4’ is owned by my script user (‘user1’ for example) in group ‘hdfs’. I cannot run the script as ‘hbase’, and the name…

Re: create a view on existing production table ?

2015-07-07 Thread Anil Gupta
Yup, that's right. SQL takes away the flexibility of NoSQL. I have been battling with this tradeoff for a while. ;) Sent from my iPhone > On Jul 7, 2015, at 6:46 AM, Sergey Malov wrote: > > Thanks, Anil, that’s what I thought initially. I was a bit confused with what > James wrote, that I can cre…

Re: create a view on existing production table ?

2015-07-07 Thread Sergey Malov
Thanks, Anil, that’s what I thought initially. I was a bit confused by what James wrote, that I can create a view, but it wouldn’t be feasible. Essentially, it seems, I need to impose a strict schema on the HBase table for Phoenix to work on it directly, which defeats the purpose of a schema-less db…
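
For context, a hedged sketch of what mapping an existing HBase table as a Phoenix view looks like (the table name "t1", the column family "cf" and the qualifiers are made up). Every column Phoenix should see has to be declared explicitly, which is the strict schema being discussed:

    -- read-only view over an existing HBase table; only declared columns are visible
    CREATE VIEW "t1" (
        pk VARCHAR PRIMARY KEY,
        "cf"."col1" VARCHAR,
        "cf"."col2" UNSIGNED_LONG
    );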

Re: Can't UPSERT into a VIEW?

2015-07-07 Thread Martin Pernollet
Hi and thanks again James. It is a small table (100k rows, 1650 cols split into 2 column families, one with 50 cols, the other with 1600 cols), running on a 5-node cluster. *I don't understand the purpose of computing another rowkey at table creation, as the table already contains keys?* Is C…

Re: HBase + Phoenix for CDR

2015-07-07 Thread Matjaž Trtnik
Vlad and Eli, thanks for your answers and comments. 1. Normally I query by the whole Anumber, meaning country code + operator ID + user number, but as you suggested I could just reverse everything, and it should work well as long as I also reverse the number entered by the user. 2. What’s the suggested number of Reg…
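
As a rough illustration of point 1 (values are made up, reusing the illustrative CDR schema sketched earlier), the number could be stored reversed and then also reversed at query time, so lookups stay point queries on the leading key column:

    -- store the reversed Anumber as the leading key column ...
    UPSERT INTO CDR (ANUMBER, CALL_TS, DURATION)
        VALUES (REVERSE('38640123456'), CURRENT_DATE(), 120);
    -- ... and reverse the user-supplied number at query time as well
    SELECT DURATION FROM CDR WHERE ANUMBER = REVERSE('38640123456');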