> ...individuals have spent a significant amount of
> time and effort in stabilizing secondary indexes in the past 1-2 years,
> not to mention others spending time on a local index implementation.
> Judging Phoenix in its entirety based on an arbitrarily old version
> of Phoenix is disingenuous.
I think this is an unavoidable problem in some sense, if global indexes are
used. Essentially global indexes create a graph of dependent region
servers due to index rpc calls from one RS to another. Any single failure
is bound to affect the entire graph, which under reasonable load becomes
the entire cluster.
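To make that dependency graph concrete, here is a minimal sketch (the table, columns, and ZooKeeper quorum are hypothetical, not from this thread) of a data table with two global indexes; each global index is its own HBase table, so committing an upsert makes the data-table region server call out to whichever servers host the index regions.

import java.sql.DriverManager

object GlobalIndexWriteFanout {
  def main(args: Array[String]): Unit = {
    // Hypothetical ZooKeeper quorum; adjust for your cluster.
    val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")
    val stmt = conn.createStatement()

    // A data table plus two *global* indexes. Each global index is a separate
    // HBase table with its own regions, typically hosted on other region servers.
    stmt.execute("CREATE TABLE IF NOT EXISTS ORDERS (ID BIGINT PRIMARY KEY, CUSTOMER_ID BIGINT, STATUS VARCHAR)")
    stmt.execute("CREATE INDEX IF NOT EXISTS ORDERS_BY_CUSTOMER ON ORDERS (CUSTOMER_ID)")
    stmt.execute("CREATE INDEX IF NOT EXISTS ORDERS_BY_STATUS ON ORDERS (STATUS)")

    // On commit, the region server owning the ORDERS region issues RPCs to the
    // servers hosting the index regions to keep them in sync; this is the
    // server-to-server dependency graph described above.
    stmt.execute("UPSERT INTO ORDERS VALUES (1, 42, 'OPEN')")
    conn.commit()
    conn.close()
  }
}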
Another observation with Phoenix global indexes - at very large volumes of
writes, a single region server failure cascades to the entire cluster very
quickly.
On Sat, Oct 27, 2018, 4:50 AM Nicolas Paris
wrote:
> Hi
>
> I am benchmarking phoenix to better understand its strengths and
> weaknesses.
Thanks for the information Rajesh Babu, this is really helpful!
On Jun 29, 2017 11:00 PM, "rajeshb...@apache.org"
wrote:
> Yes Neelesh, at present we need to touch all the regions, and there is a JIRA
> for the optimization [1]
> <https://issues.apache.org/jira/browse/PHOENI
"rajeshb...@apache.org"
wrote:
Slides 9 and 10 give details on how the read path works:
https://www.slideshare.net/rajeshbabuchintaguntla/local-secondary-indexes-in-apache-phoenix
Let us know if you need more information.
Thanks,
Rajeshbabu.
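As a rough illustration of the co-located alternative, here is a minimal sketch (reusing the hypothetical ORDERS table from the earlier sketch): a local index keeps its rows in shadow column families of the data table, and EXPLAIN shows how the optimizer reads it.

import java.sql.DriverManager

object LocalIndexReadPath {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181") // hypothetical quorum
    val stmt = conn.createStatement()

    // From 4.8.0 onwards, local index rows live in shadow column families of the
    // data table itself, so index data and data regions are always co-located.
    stmt.execute("CREATE LOCAL INDEX IF NOT EXISTS ORDERS_STATUS_LIDX ON ORDERS (STATUS)")

    // EXPLAIN shows whether the local index is used; since index rows are stored
    // per data region, a query on the indexed column has to consult every region
    // (the "touch all the regions" point made earlier in this thread).
    val rs = stmt.executeQuery("EXPLAIN SELECT ID FROM ORDERS WHERE STATUS = 'OPEN'")
    while (rs.next()) println(rs.getString(1))
    conn.close()
  }
}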
On Fri, Jun 30, 2017 at 4:20 AM, Neelesh wrote:
Hi,
The documentation says - "From 4.8.0 onwards we are storing all local
index data in the separate shadow column families in the same data table".
It is not quite clear to me how the read path works with local indexes. Is
there any document that has some details on how it works? PHOENIX-173 is
explicit about not using local indexes yet.
I was interested in seeing if anyone in the community has experienced
similar issues around global indexes.
On Mon, Dec 5, 2016 at 2:39 PM, James Taylor wrote:
> Have you tried local indexes?
>
> On Mon, Dec 5, 2016 at 2:35 PM Neelesh wrote:
Hello,
When a region server is under stress (hotspotting, large replication load,
call queue sizes hitting the limit, other processes competing with HBase,
etc.), we experience latency spikes for all regions hosted by that region
server. This is somewhat expected in the plain HBase world. However, with
global indexes these latency spikes are not contained to that one region
server.
> ...to provide a more recent release.
>
> Thanks,
> James
>
> On Sat, Nov 26, 2016 at 10:23 AM Neelesh wrote:
>
>> Hi All,
>> we are using phoenix 4.4 with HBase 1.1.2 (HortonWorks distribution).
>> We're struggling with the following error on pretty much all
Hi All,
we are using phoenix 4.4 with HBase 1.1.2 (HortonWorks distribution).
We're struggling with the following error on pretty much all our region
servers. The indexes are global, and the data table has more than 100B rows.
2016-11-26 12:15:41,250 INFO
[RW.default.writeRpcServer.handler=40,queu
in" wrote:
Hi Neelesh,
The saveToPhoenix method uses the MapReduce PhoenixOutputFormat under the
hood, which is a wrapper over the JDBC driver. It's likely not as efficient
as the CSVBulkLoader, although there are performance improvements over a
simple JDBC client, as the writes are spread out across the Spark workers.
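A minimal sketch of the write path being discussed, assuming Phoenix 4.x's phoenix-spark module and the hypothetical ORDERS table and ZooKeeper quorum from the earlier sketches; it goes through PhoenixOutputFormat (batched upserts over the JDBC driver from each executor) rather than HFiles.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.phoenix.spark._   // adds saveToPhoenix to RDDs of tuples

object SaveToPhoenixSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("phoenix-spark-sketch"))

    // Each tuple maps positionally onto the column list given to saveToPhoenix.
    val rows = sc.parallelize(Seq((1L, 42L, "OPEN"), (2L, 43L, "SHIPPED")))

    // This goes through PhoenixOutputFormat, i.e. UPSERTs over the JDBC driver
    // from each executor, not HFiles. HFile-based loading is a separate
    // MapReduce job (org.apache.phoenix.mapreduce.CsvBulkLoadTool).
    rows.saveToPhoenix(
      "ORDERS",                            // hypothetical table
      Seq("ID", "CUSTOMER_ID", "STATUS"),
      zkUrl = Some("zk-host:2181"))        // hypothetical ZooKeeper quorum

    sc.stop()
  }
}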
Hi,
Does phoenix-spark's saveToPhoenix use the JDBC driver internally, or
does it do something similar to CSVBulkLoader using HFiles?
Thanks!
Also, was your change to phoenix.upsert.batch.size on the client or on the
region server or both?
On Wed, Feb 17, 2016 at 2:57 PM, Neelesh wrote:
> Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiveMs,
> but haven't tried playing with phoenix.upsert.batch.size.
9UY0h2FKuo8RfAPN
>
> Please let us know if the problem still persists.
>
> On Wed, Feb 17, 2016 at 12:02 PM, Neelesh wrote:
>
>> We've been running phoenix 4.4 client for a while now with HBase 1.1.2.
>> Once in a while, while UPSERTing records (on a table with 2 global indexes),
We've been running the Phoenix 4.4 client for a while now with HBase 1.1.2.
Once in a while, while UPSERTing records (on a table with 2 global indexes),
we see the following error. I found
https://issues.apache.org/jira/browse/PHOENIX-1718 and upped both values in
that JIRA to 360. This still does not help.
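For reference, a minimal sketch of where these two settings are commonly applied, using the property names referenced in this thread (worth verifying against your Phoenix version's QueryServices constants) and the hypothetical ORDERS table and quorum from the earlier sketches: the upsert batch size is a client connection property, while the server cache TTL from PHOENIX-1718 is, as far as I understand, a region server setting.

import java.sql.DriverManager
import java.util.Properties

object UpsertBatchTuning {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Client-side batch size, using the property name referenced in this thread;
    // check the QueryServices constants of your Phoenix version for the exact key.
    props.setProperty("phoenix.upsert.batch.size", "500")

    // phoenix.coprocessor.maxServerCacheTimeToLiveMs (from PHOENIX-1718) is a
    // server-side setting and belongs in hbase-site.xml on the region servers,
    // not in client connection properties.
    val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props) // hypothetical quorum
    val ps = conn.prepareStatement("UPSERT INTO ORDERS VALUES (?, ?, ?)")
    ps.setLong(1, 3L)
    ps.setLong(2, 44L)
    ps.setString(3, "OPEN")
    ps.executeUpdate()
    conn.commit()
    conn.close()
  }
}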