Re: Global Secondary Index: ERROR 2008 (INT10): Unable to find cached index metadata. (PHOENIX-1718)

2016-01-15 Thread anil gupta
Hi James, Thanks for your reply. My problem was resolved by setting phoenix.coprocessor.maxServerCacheTimeToLiveMs to 3 minutes and phoenix.upsert.batch.size to 10. I think I can increase phoenix.upsert.batch.size to a higher value, but I haven't had the opportunity to try that out yet. Thanks, Anil Gupta
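
For anyone reproducing Anil's fix, here is a minimal sketch of where the two properties could be applied. The property names and values are taken from the thread; the JDBC URL is a placeholder, and whether the batch size takes effect as a per-connection property (rather than via the client's hbase-site.xml) is an assumption.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class UpsertBatchConfig {
    public static void main(String[] args) throws Exception {
        // Client-side override for the upsert batch size (value from Anil's post).
        Properties props = new Properties();
        props.setProperty("phoenix.upsert.batch.size", "10");

        // NOTE: phoenix.coprocessor.maxServerCacheTimeToLiveMs = 180000 (3 minutes)
        // is read server-side, so it belongs in hbase-site.xml on each region
        // server (picked up on restart); it cannot be set from client code here.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            // ... UPSERTs against the table with the global secondary index ...
        }
    }
}
```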

Re: Announcing phoenix-for-cloudera 4.6.0

2016-01-15 Thread Krishna
On the 4.5-HBase-1.0-cdh5 branch, I set the CDH version to 5.5.1 in the pom, and building the package produces the following errors. Repo: https://github.com/chiastic-security/phoenix-for-cloudera [ERROR] ~/phoenix_related/phoenix-for-cloudera/phoenix-core/src/main/java/org/apache/phoenix/trace/util/Tracing.

Re: Announcing phoenix-for-cloudera 4.6.0

2016-01-15 Thread Krishna
Thanks Andrew. Are binaries available for CDH 5.5.x? On Tue, Nov 3, 2015 at 9:10 AM, Andrew Purtell wrote: > Today I pushed a new branch '4.6-HBase-1.0-cdh5' and the tag > 'v4.6.0-cdh5.4.5' (58fcfa6) to > https://github.com/chiastic-security/phoenix-for-cloudera. This is the > Phoenix 4.6.0 release

Re: Bulk load same data twice and storage doubled

2016-01-15 Thread Krishna
Did you run a compaction after bulk loading twice? On Friday, January 15, 2016, sac...@outlook.com wrote: > hi: when I bulk load the same data twice, the storage doubles. I added VERSIONS 1 when I created the table, but I cannot find it in the HBase table describe
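
If it helps, a sketch of triggering the major compaction Krishna suggests from the HBase 1.x Java API (the table name is a placeholder); the same thing can be done with major_compact in the HBase shell.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MajorCompactTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // Request (asynchronously) a major compaction; the rewrite drops cell
            // versions beyond the column family's VERSIONS setting, reclaiming the
            // space doubled by loading the same data twice.
            admin.majorCompact(TableName.valueOf("MY_TABLE"));
        }
    }
}
```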

Bulk load same data twice and storage doubled

2016-01-15 Thread sac...@outlook.com
hi: when I bulk load the same data twice, the storage doubles. I added VERSIONS 1 when I created the table, but I cannot find it in the HBase table describe. Whereas when I set VERSIONS 10 when creating the table, VERSIONS 10 does show in the HBase table describe
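
A sketch of the DDL in question, assuming Phoenix's support for passing HBase column-family properties such as VERSIONS through CREATE TABLE (table and column names here are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTableWithVersions {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // VERSIONS is forwarded to the HBase column family descriptor; with
            // VERSIONS=1, a major compaction keeps only the newest cell per key,
            // so re-loading identical data should not double storage permanently.
            stmt.execute("CREATE TABLE IF NOT EXISTS T ("
                + " PK VARCHAR PRIMARY KEY,"
                + " V VARCHAR"
                + ") VERSIONS=1");
            // Verify afterwards with `describe 'T'` in the HBase shell.
        }
    }
}
```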

Re: Telco HBase POC

2016-01-15 Thread Pariksheet Barapatre
Hi Willem, Looking at your use case, Phoenix would be a handy client. A few notes from my experience: 1. Use bulk load rather than psql.py, and load larger (merged) files instead of small files (see the sketch below). 2. Increase the HBase block cache. 3. Turn off HBase auto compaction. 4. Select the primary key correctly. 5. Don't use
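
On point 1, a sketch of driving Phoenix's MapReduce bulk loader programmatically; org.apache.phoenix.mapreduce.CsvBulkLoadTool is the class behind the documented `hadoop jar` invocation, and the table name and HDFS path here are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

public class BulkLoadCsv {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Writes HFiles via MapReduce and hands them to HBase directly, avoiding
        // the row-by-row UPSERT path that psql.py takes; merge small CSVs first
        // so each mapper gets a reasonably large input split.
        int exitCode = ToolRunner.run(conf, new CsvBulkLoadTool(), new String[] {
            "--table", "USAGE",
            "--input", "/data/usage/merged.csv"
        });
        System.exit(exitCode);
    }
}
```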

Re: Telco HBase POC

2016-01-15 Thread Pedro Gandola
Hi Willem, Just to give you my short experience as a Phoenix user: I'm using Phoenix 4.4 on top of an HBase cluster where I keep 3 billion entries. In our use case Phoenix is doing very well, and it saved us a lot of code complexity and time. If you guys have already decided that HBase is the way to go

Telco HBase POC

2016-01-15 Thread Willem Conradie
Hi, I am currently consulting at a client with the following requirements. They want to make detailed data-usage CDRs available so that customers can verify their data usage against the websites they visited. In short, this can be seen as an itemised bill for data usage. The data is curren
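
Purely as an illustration of the itemised-bill access pattern (nothing in Willem's post specifies a schema), a hypothetical Phoenix table whose row key leads with the subscriber, so one customer's CDRs for a billing period become a single range scan:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CdrSchemaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // Hypothetical schema: MSISDN first so per-customer lookups are range
            // scans; EVENT_TIME second so a billing period is a contiguous slice.
            stmt.execute("CREATE TABLE IF NOT EXISTS CDR_USAGE ("
                + " MSISDN VARCHAR NOT NULL,"
                + " EVENT_TIME DATE NOT NULL,"
                + " SITE VARCHAR,"
                + " BYTES_UP BIGINT,"
                + " BYTES_DOWN BIGINT,"
                + " CONSTRAINT PK PRIMARY KEY (MSISDN, EVENT_TIME)"
                + ")");
            // Itemised bill for one customer and one period would then be e.g.:
            //   SELECT SITE, BYTES_UP + BYTES_DOWN FROM CDR_USAGE
            //   WHERE MSISDN = ? AND EVENT_TIME >= ? AND EVENT_TIME < ?
        }
    }
}
```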