Noam,
I followed these instructions to get Phoenix 4.5 working with CDH 5.4.2:
http://stackoverflow.com/a/31934434/165130
Alex
On Mon, Oct 26, 2015 at 2:13 PM, Bulvik, Noam wrote:
> Hi,
>
> Does anyone have a working Phoenix version for CDH 5.4.x? If so, which version
> of Phoenix and which version
d automate the
>> above process, doing it asynchronously where the data type change wouldn't
>> take effect until the entire process is complete.
>>
>> Thanks,
>> James
>>
>> On Mon, Oct 19, 2015 at 10:29 AM, ALEX K wrote:
>>
>>> Is it possible to change data type of column in Apache Phoenix without
>>> losing HBase data?
>>>
>>
>>
Interesting comparison of Impala/Kudu vs. HBase/Phoenix (section 6.3):
http://getkudu.io/kudu.pdf
These changes worked for me on CDH 5.4.4 and Phoenix 4.5.1-HBase-1.0:
http://stackoverflow.com/a/31934434/165130
On Sat, Aug 22, 2015 at 10:17 AM, Ns G wrote:
> Hi Lukas,
>
> I have shared the changes I made in one of the previous emails. Did you try
> them?
>
> Thanks,
> Satya
> On 22-Aug-2015 6:
yes
On Mon, Sep 7, 2015 at 4:40 AM, Serega Sheypak
wrote:
> Hi, so you hold phoenix java.sql.Connection for each thread as
> thread-local variable and don't get any problems, correct?
>
> 2015-09-07 6:43 GMT+02:00 ALEX K :
>
>> Serega,
>>
>> I haven'
Serega,
I haven't seen any issues so far with this approach (keeping connections
open in a thread-local, one connection per thread).
We send a stream of Kafka messages to HBase in the following way:
- each "saver" thread initializes two connections: one for upserts and
one for ALTER statements (for multi
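The thread-local arrangement described above can be sketched as follows. `PhoenixConn` is a hypothetical stand-in for `java.sql.Connection` so the example runs without a cluster; in real code the `ThreadLocal` initializer would call `DriverManager.getConnection("jdbc:phoenix:<zk-quorum>")` instead.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadLocalConnDemo {
    // Placeholder for java.sql.Connection (hypothetical, for illustration only)
    static class PhoenixConn {}

    // Each thread lazily creates its own connection on first get() and
    // reuses it on every subsequent get() from that thread.
    static final ThreadLocal<PhoenixConn> CONN =
        ThreadLocal.withInitial(PhoenixConn::new);

    public static void main(String[] args) throws Exception {
        Set<PhoenixConn> seen = ConcurrentHashMap.newKeySet();
        Runnable worker = () -> {
            PhoenixConn a = CONN.get();
            PhoenixConn b = CONN.get();
            // Same object on repeated calls within one thread
            if (a != b) throw new AssertionError("not thread-local");
            seen.add(a);
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Two threads -> two distinct per-thread connections
        System.out.println(seen.size());
    }
}
```

Since Phoenix connections are cheap to create but not thread-safe, one connection per thread avoids locking without a pool.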
I'm using the same solution Samarth suggested (commit batching); it brings
the latency per single-row upsert down from 50 ms to 5 ms (averaged over
the batch).
On Wed, Aug 19, 2015 at 7:11 PM, Samarth Jain
wrote:
> You can do this via phoenix by doing something like this:
>
> try (Connection conn =
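Samarth's snippet is cut off above, so here is a minimal sketch of the commit-batching idea only: turn off auto-commit, issue many upserts, and commit once per batch so the flush cost is amortized. `FakeConn` is a hypothetical stand-in for a Phoenix `java.sql.Connection` so the example runs without a cluster; with real Phoenix you would call `conn.setAutoCommit(false)`, execute `UPSERT INTO ... VALUES (?, ?)` via a `PreparedStatement` in a loop, and call `conn.commit()` every `batchSize` rows.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchCommitDemo {
    // Hypothetical stand-in for java.sql.Connection (illustration only)
    static class FakeConn {
        boolean autoCommit = true;
        List<String> pending = new ArrayList<>();
        int commits = 0;
        void upsert(String row) {
            pending.add(row);
            if (autoCommit) commit();  // one flush per row: the slow path
        }
        void commit() { pending.clear(); commits++; }
    }

    static int commitsFor(int rows, int batchSize) {
        FakeConn conn = new FakeConn();
        conn.autoCommit = false;              // buffer upserts client-side
        for (int i = 0; i < rows; i++) {
            conn.upsert("row-" + i);
            if ((i + 1) % batchSize == 0) {
                conn.commit();                // flush one full batch
            }
        }
        if (!conn.pending.isEmpty()) {
            conn.commit();                    // flush the trailing partial batch
        }
        return conn.commits;
    }

    public static void main(String[] args) {
        // 1000 rows with batches of 100 -> 10 flushes instead of 1000,
        // which is where the per-row latency drop comes from
        System.out.println(commitsFor(1000, 100));
    }
}
```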