Thanks Sergey.
Let me check this
Regards
Sanooj
On 22 Oct 2016 1:55 a.m., "Sergey Soldatov"
wrote:
> Hi Sanooj,
>
> You may take a look at BaseResultIterators.getIterators() and
> BaseResultIterators.getParallelScans()
>
> Thanks,
> Sergey
>
> On Fri, Oct 2
Hi all
Can anyone provide some information as to which part of the Phoenix code
we need to check to see how parallel execution is performed?
Thanks again
Sanooj
On 20 Oct 2016 11:31 a.m., "Sanooj Padmakumar" wrote:
> Hi James,
>
> We are loading data from Phoenix ta
scan of
a larger region.
As you mentioned, Phoenix does this for all its queries. Can you please
provide pointers to the Phoenix code where this happens?
Thanks for the prompt response.
Thanks
Sanooj Padmakumar
On Wed, Oct 19, 2016 at 11:22 PM, James Taylor
wrote:
> Hi Sanooj,
> I'
Hi All
We are loading data from our HBase table into memory. For this we
provide a start row and an end row and scan the HBase regions. Is there a way
we can scan a big region in parallel to speed up the whole process? Any
help/pointers on this would be of great help.
--
Thanks,
Sanooj
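A minimal sketch of one way to do this by hand with the plain HBase client: split the [startRow, stopRow) range into slices and scan each slice on its own thread. The table name, row boundaries, and slice count below are placeholders, and Bytes.split can return null for ranges it cannot subdivide; conceptually this is close to what Phoenix's getParallelScans() does with its guideposts.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ParallelRegionScan {
    private static final int SLICES = 8; // placeholder degree of parallelism

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        byte[] startRow = Bytes.toBytes("row-aaa"); // placeholder boundaries
        byte[] stopRow = Bytes.toBytes("row-zzz");
        // Returns SLICES + 1 boundary keys, including startRow and stopRow.
        byte[][] bounds = Bytes.split(startRow, stopRow, SLICES - 1);

        ExecutorService pool = Executors.newFixedThreadPool(SLICES);
        List<Future<Long>> futures = new ArrayList<>();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            for (int i = 0; i < bounds.length - 1; i++) {
                final byte[] lo = bounds[i];
                final byte[] hi = bounds[i + 1];
                // Connection is thread-safe; Table is not, so get one per task.
                futures.add(pool.submit(() -> {
                    long rows = 0;
                    try (Table table = conn.getTable(TableName.valueOf("MY_TABLE"));
                         ResultScanner scanner = table.getScanner(new Scan(lo, hi))) {
                        for (Result r : scanner) {
                            rows++; // replace with the real per-row processing
                        }
                    }
                    return rows;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) {
                total += f.get();
            }
            System.out.println("Rows scanned: " + total);
        } finally {
            pool.shutdown();
        }
    }
}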
Thanks for the confirmation Anil.
On Fri, Oct 7, 2016 at 11:22 PM, anil gupta wrote:
> I don't think that feature is supported yet in the bulk load tool.
>
> On Thu, Oct 6, 2016 at 9:55 PM, Sanooj Padmakumar
> wrote:
>
>> Hi All,
>>
>> Can we populate dynamic column
Sanooj
On Fri, Oct 7, 2016 at 8:32 PM, James Taylor wrote:
> Hi Sanjooj,
> What version of Phoenix? Would you mind filing a JIRA with steps to
> reproduce the issue?
> Thanks,
> James
>
>
> On Friday, October 7, 2016, Sanooj Padmakumar wrote:
>
>> Hi All
>>
>>
Hi All
We get a mutation-state-related error when we try altering a table to which
views have been added. We always have to drop the views before doing the
ALTER. Is there a way we can avoid this?
Thanks
Sanooj
Hi All,
Can we also populate dynamic columns while bulk loading data (
https://phoenix.apache.org/bulk_dataload.html) into HBase using Phoenix?
It didn't work when we tried, hence posting it here. Thanks in advance
--
Thanks,
Sanooj Padmakumar
72317624237,"tag":[],"qualifier"
But as I said, with a very small number of rows in the SELECT query, the
MR job works just fine and the data is populated correctly. Are there any
Phoenix/HBase parameters that I should look into?
Thanks
Sanooj Padmakumar
On Wed, Aug 24, 2016 at 11:26 AM, S
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:1017)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:444)
... 25 more
--
Thanks,
Sanooj Padmakumar
Thanks James, this helps
On 9 Aug 2016 22:58, "James Taylor" wrote:
> Make sure to set autoCommit on before issuing the DELETE. Otherwise the
> client needs to hold onto all the row keys of the rows being deleted.
>
> On Tuesday, August 9, 2016, Sanooj Padmakumar wrote:
>
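A minimal sketch of what James's advice looks like through the Phoenix JDBC driver; the URL, table, and WHERE clause are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AutoCommitDelete {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            // Set autoCommit BEFORE the DELETE so rows are deleted as they are
            // scanned, instead of buffering every row key on the client.
            conn.setAutoCommit(true);
            try (Statement stmt = conn.createStatement()) {
                int deleted = stmt.executeUpdate(
                        "DELETE FROM MY_TABLE WHERE CREATED_DATE < CURRENT_DATE() - 30");
                System.out.println("Deleted " + deleted + " rows");
            }
        }
    }
}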
of
data
Thanks!
--
Thanks,
Sanooj Padmakumar
, May 9, 2016 at 12:26 PM, Sanooj Padmakumar
> wrote:
>
>> Hi
>>
>> I am getting this exception while running a Phoenix MR job. Is there
>> anything I can do on the MR side to fix this? Thanks in advance
>>
>>
>> Error: java.lang.RuntimeException:
>> or
Thanks,
Sanooj Padmakumar
(MetaDataEndpointImpl.java:1156)
... 10 more
Any input on this will be extremely helpful.
--
Thanks,
Sanooj Padmakumar
Setting the below configuration in the MR job made it work:
conf.set("hbase.coprocessor.region.classes", "");
Thanks everyone!
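A sketch of where that line sits in an MR driver (the job name is a placeholder); clearing the property presumably keeps the MR client and tasks from trying to load the server-side region coprocessor classes picked up from hbase-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

public class JobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Clear any region coprocessor classes inherited from hbase-site.xml
        // before the Job snapshots this configuration.
        conf.set("hbase.coprocessor.region.classes", "");
        Job job = Job.getInstance(conf, "phoenix-mr-job");
        // ... mapper/reducer/input/output setup as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}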
On Tue, Apr 26, 2016 at 8:53 AM, Sanooj Padmakumar
wrote:
> Apologies if it's against convention to re-open an old discussion.
>
> The e
ask you make the
> > HFiles readable to all during creation.
> >
> > I believe that the alternate solutions listed on the jira ticket
> > (running the tool as the hbase user or using the alternate HBase
> > coprocessor for loading HFiles) won't have this drawb
Thanks
Sanooj Padmakumar
On Mon, Apr 25, 2016 at 1:59 PM, Ankit Singhal
wrote:
> Sanooj,
> It is not necessary that output can only be written to a table when using
> MR; you can have your own custom reducer with an appropriate OutputFormat set
> in the driver.
>
> Similar solutions with
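A sketch along the lines Ankit describes: read from Phoenix with PhoenixMapReduceUtil, but write plain text files to HDFS instead of another table. The STOCKS table, its columns, and the output path are placeholders; a map-only job is used here for brevity, but a custom reducer works the same way.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

public class PhoenixToHdfsJob {

    public static class StockWritable implements DBWritable, Writable {
        String symbol;
        double price;

        public void readFields(ResultSet rs) throws SQLException {
            symbol = rs.getString("SYMBOL");
            price = rs.getDouble("PRICE");
        }

        public void write(PreparedStatement ps) throws SQLException {
            // unused: this job only reads from Phoenix
        }

        public void readFields(DataInput in) throws IOException { }

        public void write(DataOutput out) throws IOException { }
    }

    public static class ExportMapper
            extends Mapper<NullWritable, StockWritable, Text, NullWritable> {
        protected void map(NullWritable key, StockWritable row, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text(row.symbol + "\t" + row.price), NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "phoenix-export");
        job.setJarByClass(PhoenixToHdfsJob.class);
        // Read side: Phoenix handles the scan and its parallelism.
        PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCKS",
                "SELECT SYMBOL, PRICE FROM STOCKS");
        job.setMapperClass(ExportMapper.class);
        job.setNumReduceTasks(0);
        // Write side: plain text files instead of another table.
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileOutputFormat.setOutputPath(job, new Path("/tmp/stocks-export"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}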
ix_mr.html) but this always requires the
output to be written to a table.
Any inputs ?
--
Thanks,
Sanooj Padmakumar
? Ideally you don't
>> need to renew the ticket since the Phoenix driver gets the required
>> information (principal name and keytab path) from the JDBC connection
>> string and performs User.login itself.
>>
>> Thanks,
>> Sergey
>>
>>
any Kerberos tgt)]
On Wed, Mar 16, 2016 at 8:35 PM, Sanooj Padmakumar
wrote:
> Hi Anil
>
> Thanks for your reply.
>
> We do not do anything explicitly in the code to do the ticket renewal;
> what we do is run a cron job for the user whose ticket has to be
> renewed. B
connection URL for getting the Phoenix connection:
jdbc:phoenix:<zookeeper quorum>:<port>:/hbase:<principal>:<keytab path>
This, along with the entries in hbase-site.xml and core-site.xml, is passed to
the connection object.
Thanks
Sanooj Padmakumar
On Tue, Mar 15, 2016 at 12:04 AM, anil gupta wrote:
> Hi,
>
> At my previous jo
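Spelled out with that URL pattern, a minimal connection sketch; the hosts, realm, and paths below are illustrative placeholders, not real cluster values.

import java.sql.Connection;
import java.sql.DriverManager;

public class SecurePhoenixConnect {
    public static void main(String[] args) throws Exception {
        // jdbc:phoenix:<zookeeper quorum>:<port>:<zk root>:<principal>:<keytab>
        String url = "jdbc:phoenix:zk1.example.com,zk2.example.com:2181:/hbase"
                + ":svc_user@EXAMPLE.COM:/etc/security/keytabs/svc_user.keytab";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}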
I don't know how to decode the value to a normal string. What's the
> codeset?
>
--
Thanks,
Sanooj Padmakumar
Hi
We have a REST-style microservice application fetching data from HBase
using Phoenix. The cluster is Kerberos-secured and we run a cron to renew
the Kerberos ticket on the machine where the microservice is deployed.
But it always needs a restart of the microservice Java process to get the
kerbe
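One alternative to an external kinit cron, sketched below with placeholder principal and keytab values: log in from the keytab inside the JVM and re-login periodically with Hadoop's UserGroupInformation, so the long-running process never depends on a ticket cache it read only at startup.

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabRelogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "svc_user@EXAMPLE.COM", "/etc/security/keytabs/svc_user.keytab");

        // Re-login from inside the JVM; an external kinit refreshes only the
        // ticket cache, which a running Hadoop client typically does not re-read.
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try {
                UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
            } catch (IOException e) {
                e.printStackTrace(); // placeholder error handling
            }
        }, 1, 1, TimeUnit.HOURS);
    }
}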
'X',1,'Y'
> will be :
> ('X', 0x00) (0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01), ('Y')
>
>
> Thanks,
> Sergey
>
> On Mon, Feb 22, 2016 at 8:43 AM, Sanooj Padmakumar
> wrote:
> > Hi,
> >
> > I have a HBase table created usin
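A sketch of decoding a key laid out the way Sergey describes, assuming the three columns are VARCHAR, UNSIGNED_LONG, and VARCHAR. UNSIGNED_LONG is plain big-endian, which matches the bytes shown above; a signed BIGINT has its sign bit flipped, in which case Phoenix's PLong.INSTANCE.toObject(...) would be needed instead of Bytes.toLong.

import org.apache.hadoop.hbase.util.Bytes;

public class DecodeRowKey {
    public static void main(String[] args) {
        // 'X' + 0x00 separator, an 8-byte long, then the trailing 'Y'.
        byte[] rowKey = {'X', 0x00, 0, 0, 0, 0, 0, 0, 0, 1, 'Y'};

        int sep = 0;
        while (rowKey[sep] != 0) sep++;               // find the 0x00 separator
        String first = Bytes.toString(rowKey, 0, sep);
        long second = Bytes.toLong(rowKey, sep + 1);  // fixed width, no separator
        String third = Bytes.toString(rowKey, sep + 1 + 8,
                rowKey.length - (sep + 1 + 8));

        System.out.println(first + ", " + second + ", " + third);
    }
}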
Thanks,
Sanooj Padmakumar
> as a command line parameter.
>
> - Gabriel
>
> On Tue, Nov 17, 2015 at 8:16 PM, Sanooj Padmakumar
> wrote:
> > Hi Gabriel
> >
> > Thank you so much
> >
> > I set the below property and it works now. I hope this is the correct
> > thing to d
> > tries=11,
> > retries=35, started=88315 ms ago, cancelled=false, msg=row '' on table
> > 'TABLE1' at region=TABLE1,<<>>>, seqNum=26
> >
> > Is there any setting I should make in order to make the program work in a
> > Kerberos-secured environment?
> >
> > Please note, our DEV environment doesn't use Kerberos and things are
> working
> > just fine
> >
> > --
> > Thanks in advance,
> > Sanooj Padmakumar
>
--
Thanks,
Sanooj Padmakumar
environment ?
Please note, our DEV environment doesn't use Kerberos and things are
working just fine
--
Thanks in advance,
Sanooj Padmakumar