t; data consistency between the tables created for each matrix.
>
>
>
> *From:* Pedro Boado [mailto:pedro.bo...@gmail.com]
> *Sent:* Friday, September 27, 2019 10:53 AM
> *To:* user@phoenix.apache.org
> *Subject:* Re: Materialized views in Hbase/Phoenix
>
>
>
the mean value of each column when all rows are grouped by a certain row
> property).
>
>
>
> Precomputing seems much more efficient.
>
>
>
> *From:* Pedro Boado [mailto:pedro.bo...@gmail.com]
> *Sent:* Friday, September 27, 2019 9:27 AM
> *To:* user@phoenix.apache.org
ded to scale to that degree.
>
>
>
> If one of the tables fails to write, we need some kind of a rollback
> mechanism, which is why I was considering a transaction. We cannot be in a
> partial state where some of the ‘views’ are written and some aren’t.
>
>
>
>
>
For just a few million rows I would go for an RDBMS and not Phoenix / HBase.
You don't really need transactions to control completion; just write a flag
(an empty COMPLETED file, for instance) as the final step in your job.
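As a minimal sketch of that flag-file idea - assuming the job output lives on
HDFS; the path and class name are illustrative, not from this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CompletionFlag {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical output directory; the empty _COMPLETED file is only
        // written once every table has been loaded successfully.
        Path flag = new Path("/jobs/matrix-load/2019-09-27/_COMPLETED");
        if (!fs.createNewFile(flag)) {
            throw new IllegalStateException("Flag already exists: " + flag);
        }
        // Readers treat the output as consistent only once the flag exists.
    }
}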
On Fri, 27 Sep 2019, 15:03 Gautham Acharya,
wrote:
> Thanks Anil.
>
>
>
>
My former employer has been running, for the last 3 years, thousands of
queries per second (with millisecond response times) scanning thousands of
rows in tables with a few billion rows without further issues. Combined
with an additional write load of a few thousand writes per second. But it
didn't u
Hi,
Indexes in Phoenix are implemented using an additional HBase table, and the
index key fields are serialized as the HBase table key.
So the same limitations apply to VARBINARY and VARCHAR when used as index
columns: they can only be used as the last column in the index key.
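A short sketch of what that looks like in practice - table, column, and
quorum names below are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexKeyOrder {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS events ("
                    + " id BIGINT NOT NULL PRIMARY KEY,"
                    + " ts DATE,"
                    + " tag VARCHAR)");
            // The VARCHAR column goes last in the index key, mirroring the
            // limitation on table keys described above.
            st.execute("CREATE INDEX IF NOT EXISTS events_ts_idx ON events (ts, tag)");
        }
    }
}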
Cheers,
Pedro.
On Mon, 13
What type of queries are being thrown at the cluster? What's the average
row size? 5M rows seems a tiny table size. 30ms is OK for scans over a few
thousand records, but maybe not for full table scans.
Are you connecting to Phoenix from a Java app? Just add it to your JVM
classpath... Depending on how you're running it, it can be added in one way
or another.
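For example, a minimal thick-client smoke test - assuming phoenix-client.jar
is already on the classpath; the quorum string is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // e.g. java -cp app.jar:phoenix-client.jar PhoenixSmokeTest
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}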
If, for instance, it is a Spring Boot app: java -jar app.war -cp
folder_containing_additional_classpath_resources
Or just include it as part of y
Hi,
Column mapping is stored in the SYSTEM.CATALOG table. There is only one
column mapping strategy, using between 1 and 4 bytes to represent the
column number. Regardless of the encoded column size, the column name
lookup strategy remains the same.
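As an illustration, column metadata can be read back through SQL - the
connection URL and table name below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CatalogLookup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT COLUMN_NAME, COLUMN_FAMILY FROM SYSTEM.CATALOG"
                             + " WHERE TABLE_NAME = ?")) {
            ps.setString(1, "MY_TABLE"); // hypothetical table
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " / " + rs.getString(2));
                }
            }
        }
    }
}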
Hope it helps,
Pedro.
On Wed, 26 Dec 2018, 23:00 S
Hi,
change the CDH version for the dependencies in pom.xml and recompile (mvn
clean package; you'll find your parcels under the phoenix-server module).
But do this at your own risk! Potentially a number of IT tests won't pass.
Saying that both run the same HBase 1.2 is quite imprecise; Cloudera keeps
applying
Have you tried disabling column name mapping, either globally or on a
per-table basis? Column names are stored in every cell, so there is no
direct workaround other than disabling it.
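For reference, a sketch of the per-table variant - assuming Phoenix 4.10+,
where COLUMN_ENCODED_BYTES=0 disables the mapping for one table; names are
illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DisableColumnMapping {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement st = conn.createStatement()) {
            // COLUMN_ENCODED_BYTES=0 keeps the original column names in each
            // cell instead of encoded qualifiers; for the global switch, check
            // the phoenix.default.column.encoded.bytes.attrib property in your
            // version's docs.
            st.execute("CREATE TABLE IF NOT EXISTS readable_t ("
                    + " id BIGINT NOT NULL PRIMARY KEY,"
                    + " val DOUBLE)"
                    + " COLUMN_ENCODED_BYTES=0");
        }
    }
}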
On Wed, 14 Nov 2018, 15:34 talluri abhishek wrote:
> Hi All,
>
> We are upgrading from Phoenix 4.7 to 4.14 and observed that data
Yes, but the first release supporting CDH will be delayed to some point in
the next couple of months.
On Sun, 21 Oct 2018, 09:48 Bulvik, Noam, wrote:
> Hi
>
>
>
> Do you plan to issue Phoenix 5.x parcel based on CDH6 like there was
> phoenix 4.x parcels based on CDH 5.x?
>
>
>
> Regards,
>
>
>
>
Are you reaching any of the ulimits for the user running your application?
On Wed, 10 Oct 2018, 17:00 Hemal Parekh, wrote:
> We have an analytical application running concurrent phoenix queries
> against Hortonworks HDP 2.6 cluster. Application uses phoenix JDBC
> connection to run queries. Ofte
Does updating statistics on the table help?
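Something like this, from JDBC - UPDATE STATISTICS is standard Phoenix SQL,
though the connection URL and table name here are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RefreshStats {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement st = conn.createStatement()) {
            // Recollects guideposts for the table and its indexes.
            st.execute("UPDATE STATISTICS MY_TABLE ALL");
        }
    }
}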
On Fri, 7 Sep 2018, 13:51 Azharuddin Shaikh,
wrote:
> Hi All,
>
> We have upgraded the Phoenix version from 4.8 to 4.12 to resolve a duplicate
> count issue, but we are now facing an issue with restoration of tables on
> Phoenix 4.12. Our HBase versi
.
Finally, have you checked that all RSs receive the same traffic?
On Thu, 12 Jul 2018, 23:10 Pedro Boado, wrote:
> I believe it's related to your client code - In our use case we do easily
> 15k writes/sec in a cluster lower specced than yours.
>
> Check that your jdbc connection h
I believe it's related to your client code - in our use case we easily do
15k writes/sec in a cluster lower-specced than yours.
Check that your JDBC connection has autocommit off so Phoenix can batch
writes, and that the table has a reasonable UPDATE_CACHE_FREQUENCY ( more than
6 ).
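A minimal sketch of that write pattern - batch size, table, and URL are
assumptions, not from the thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedWriter {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
            conn.setAutoCommit(false); // let Phoenix buffer mutations client-side
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPSERT INTO metrics (id, val) VALUES (?, ?)")) {
                for (long i = 0; i < 100_000; i++) {
                    ps.setLong(1, i);
                    ps.setDouble(2, Math.random());
                    ps.executeUpdate();
                    if (i % 1_000 == 999) {
                        conn.commit(); // flush one batch of mutations
                    }
                }
            }
            conn.commit(); // flush the tail
        }
    }
}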
On Thu, 12 J
Can you set log4j to DEBUG? That will give you a hint about what's going on
on the server.
On Thu, 14 Jun 2018, 18:40 Susheel Kumar Gadalay,
wrote:
> Can someone please help me to resolve this.
>
> Thanks
> Susheel Kumar
>
> On Tuesday, June 12, 2018, Susheel Kumar Gadalay
> wrote:
> > Hi,
> >
> >
I guess this thread is not about Kafka Streams, but what Josh suggested is
basically my last-resort plan for building Kafka streams, as you'll be
constrained by the HBase/Phoenix upsert rate - you'll be doing 5x the
number of upserts.
In my experience, Kafka Streams is not bad at all at doing this kind of
Just to rule it out: might you need the Java Cryptography Extension (JCE)
unlimited strength policy files installed to deal with the cipher
algorithms in your keytabs?
On Thu, 12 Apr 2018, 20:47 Yan Koyfman, wrote:
> We are attempting to create a connection to PQS (Phoenix 4.13.1) in a
> Kerberized Hbase cluster, but have
Flavio, I get the same behaviour: a count(*) over 180M records needs a
couple of minutes to complete for a table with 10 regions and 4 RSs serving
it.
Why are you evaluating robustness in terms of full scans? As Anil said, I
wouldn't expect a NoSQL database to run quick counts on hundreds of
millions or
Hi all,
Do you know of any integration approach to stream documents from Phoenix to
Solr in a similar way to what Lily HBase Indexer does?
Thanks!
Maybe the warnings are not the cause but a consequence (GC calls
finalize(), not the other way around).
Any details on memory usage? G1? Non-full vs full GC ratio, average freed
memory... Did any GC run in the 3rd RS?
Memory percentage assigned to memstores? You have an average memory
assigned
, "Mujtaba Chohan" wrote:
> Just to remove one variable, can you repeat the same test after truncating
> Phoenix Stats table? (either truncate SYSTEM.STATS from HBase shell or use
> sql: delete from SYSTEM.STATS)
>
> On Mon, Jan 29, 2018 at 4:36 PM, Pedro Boado
> wrote:
&
te took 77 msec.
Checking table TABLE4 state took 142 msec.
...
Any other ideas, maybe?
On 29 Jan 2018 01:55, "James Taylor" wrote:
> Did you do an rs.next() on the first query? Sounds related to HConnection
> establishment. Also, least expensive query is SELECT 1 FROM T LIIMI
Hi all,
I'm running into issues with a Java Spring Boot app that ends up querying a
Phoenix cluster (from outside the cluster) through the non-thin client.
Basically this application has high latency - around 2 to 4 seconds - for
the first query per primary key to each region of a table with 180
ache.org
> Cc: d...@hbase.apache.org; d...@phoenix.apache.org; u...@hbase.apache.org
> Subject: Re: [ANNOUNCE] Apache Phoenix 4.13.2 for CDH 5.11.2 released
>
> On Sat, Jan 20, 2018 at 12:29 PM Pedro Boado
> wrote:
>
> > The Apache Phoenix team is pleased to announce the immediate availa
The Apache Phoenix team is pleased to announce the immediate availability
of the 4.13.2 release for CDH 5.11.2. Apache Phoenix enables SQL-based OLTP
and operational analytics for Apache Hadoop using Apache HBase as its
backing store and providing integration with other projects in the Apache
ecosy
Regards
> Sumanta
>
>
> -Pedro Boado wrote: -----
> To: user@phoenix.apache.org
> From: Pedro Boado
> Date: 01/17/2018 04:04PM
> Subject: Re: Phoenix 4.13 on Hortonworks
>
> Hi,
>
> Afaik Hortonworks already includes Apache Phoenix as part of the platform,
Hi,
Afaik Hortonworks already includes Apache Phoenix as part of the platform,
doesn't it?
Cheers.
On 17 Jan 2018 10:30, "Sumanta Gh" wrote:
> I am eager to learn if anyone has installed Phoenix 4.13 on Hortonworks
> HDP cluster.
> Please let me know the version number of HDP that was used.
>
We haven't made a public release *yet*. Once it's done it will be published
on the download page.
Thanks.
On 20 Dec 2017 08:54, "Bulvik, Noam" wrote:
*Noam *
Hi Noam,
thanks for your feedback. PHOENIX-4454 and PHOENIX-4453 were opened for
looking into these issues and a fix for both has already been applied to
the git branch.
I'll publish a new dev release of the parcel in the next couple of days in
the same repo as the previous one.
Cheers.
On 6 De
>
> On 18 Dec 2017 00:49, "Ethan" wrote:
>
>> SYSTEM.MUTEX should come with Phoenix, so you should have it even though
>> it sometimes doesn't show up. To truncate that table you may try a DELETE
>> statement in sqlline.
>>
>>
>> On December 17,
from HBase
shell?
On Sun, Dec 17, 2017 at 11:24 PM, Pedro Boado wrote:
> You can do that through the hbase shell doing a
>
> hbase(main):011:0> truncate 'SYSTEM.MUTEX'
>
>
>
>
> On 17 December 2017 at 22:01, Flavio Pompermaier
> wrote:
>
>> I'
ommandHandler.java:38)
>> at sqlline.SqlLine.dispatch(SqlLine.java:809)
>> at sqlline.SqlLine.initArgs(SqlLine.java:588)
>> at sqlline.SqlLine.begin(SqlLine.java:661)
>> at sqlline.SqlLine.start(SqlLine.java:398)
>> at sqlline.SqlLine.main(SqlLine.java:291)
>> sqlline version 1.2.0
>>
>> How can I repair my installation? I can't find any log nor anything
>> strange in the SYSTEM.CATALOG HBase table..
>>
>> Thanks in advance,
>> Flavio
>>
>>
>
>
> --
> Flavio Pompermaier
> Development Department
>
> OKKAM S.r.l.
> Tel. +(39) 0461 041809
>
--
Un saludo.
Pedro Boado.
when doing the UPSERT -
even by setting it through some obscure hidden JDBC property?
I want to avoid by all means doing a checkAndPut, as the volume of changes
is going to be quite big.
--
Un saludo.
Pedro Boado.
software.com]
> *Sent:* Monday, November 27, 2017 10:51 AM
> *To:* user@phoenix.apache.org
> *Subject:* Re: [ANNOUNCE] Apache Phoenix 4.13 released
>
>
>
> You mean CDH5.9 and 5.10? And also HBASE 17587?
>
>
>
> On Mon, Nov 27, 2017 at 12:37 AM, Pedro Boado
> wrote:
&
Yes to all - I meant CDH 5.9 and 5.10, and also HBASE-17587.
On 27 Nov 2017 08:50, "Kumar Palaniappan"
wrote:
> You mean CDH5.9 and 5.10? And also HBASE 17587?
>
> On Mon, Nov 27, 2017 at 12:37 AM, Pedro Boado
> wrote:
>
>> My branch is based on 4.x-HBase. But I
3HBase1.2?
>>>>
>>>> On Sun, Nov 19, 2017 at 1:21 PM, James Taylor
>>>> wrote:
>>>>
>>>>> Hi Kumar,
>>>>> I started a discussion [1][2] on the dev list to find an RM for the
>>>>> HBase 1.2 (and HBase 1.1)
> <http://www.linkedin.com/in/kumarpalaniappan>
>
> On Nov 19, 2017, at 3:43 PM, Pedro Boado wrote:
>
> As I have volunteered to maintain a CDH-compatible release for Phoenix, and
> as CDH 5.x is for now based on HBase 1.2, it is in my interest to keep
> releas
e were
> no plans for a release. Subsequently we've heard from a few folks that they
> needed it, and Pedro Boado volunteered to do CDH compatible release
> (see PHOENIX-4372) which requires an up to date HBase 1.2 based release.
>
> So I've volunteered to do one more Phoe
reation of
>>> official Cloudera parcels (at least from Phoenix side)...?
>>>
>>> On Tue, Oct 31, 2017 at 8:09 AM, Flavio Pompermaier <
>>> pomperma...@okkam.it> wrote:
>>>
>>>> Anyone from Phoenix...?
>>>>
>>>> O
For creating a CDH parcel repository the only thing needed is a web server
where the parcels and the manifest.json are published. But we need one.
I'm in, of course. Who can help onboard these changes, publish them, etc.,
and get users to push changes to the project? How do you do this in
Phoeni
because we also need Phoenix on CDH. Maybe I
> could write some documentation about its installation and usage, on the
> README or on the official Phoenix site. Let's set up an unofficial (but
> working) repo of Phoenix parcels!
>
> On Fri, Oct 27, 2017 at 9:12 AM, Pedro Bo
>
> Thanks,
> James
>
> On Thu, Oct 26, 2017 at 2:43 PM, Pedro Boado
> wrote:
>
>> Sorry, it's provided "as is". Try a "mvn clean package -DskipTests=true".
>>
>> And grab the parcel from phoenix-parcel/target
>>
>>
Sorry, it's provided "as is". Try a "mvn clean package -DskipTests=true".
And grab the parcel from phoenix-parcel/target
On 26 Oct 2017 22:21, "Flavio Pompermaier" wrote:
> And how do you use the parcel? Where is it generated? Any documentation
> about th
I've done it for Phoenix 4.11 and CDH 5.11.2, based on previous work from
chiastic-security.
https://github.com/pboado/phoenix-for-cloudera/tree/4.11-cdh5.11.2
All integration tests are running, and I've added a parcel module for
parcel generation on rhel6.
Contributions are welcome for supporting o
--
Un saludo.
Pedro Boado.
bug that client is sending all rpc with index priority). If
> you see it, remove controller factory property on client side.
>
> Thanks,
> Sergey
>
> On Fri, Aug 18, 2017 at 4:46 AM, Pedro Boado
> wrote:
>
>> Hi all,
>>
>> We have two HBase 1.0 clusters
Hi all,
We have two HBase 1.0 clusters running the same process in parallel -
effectively keeping the same data in both Phoenix tables.
This process feeds data into Phoenix 4.5 via HFile, and once the data is
loaded a Spark process deletes a few thousand rows from the tables -
secondary indexing is di
Hi guys,
we are planning a migration from our current version of Apache Phoenix
(4.5) to 4.9. We checked the upgrade guide and we are aware that Phoenix
maintains two-versions-back compatibility, but... is it really necessary to
stop at 4.7 to have a safe migration?
We don't have any secondary
Hi guys,
We are trying to populate a Phoenix table based on a 1:1 projection of
another table with around 15,000,000,000 records via an UPSERT SELECT in
the Phoenix client. We've noticed very poor performance (I suspect the
client is using a single-threaded approach) and lots of issues with client
Hi,
we're just having in production an
org.apache.phoenix.schema.StaleRegionBoundaryCacheException:
ERROR 1108 (XCL08): Cache of region boundaries are out of date.
and we don't find a lot of information about the error apart from
https://issues.apache.org/jira/browse/PHOENIX-2599
The error occurre
It doesn't make a lot of sense having quotes in an integer column, does it?
Maybe removing these quotes from the source would solve the problem.
On 30 Mar 2017 18:43, "anil gupta" wrote:
> Hi Brian,
>
> It seems like Phoenix is not liking ''(single quotes) in an integer
> column. IMO, it will be
Hi all,
I have a quick question. We are still running on Phoenix 4.5 (I know, it's
not my fault) and we're trying to set up a read-only user on a Phoenix
table. The minimum set of permissions to get access through sqlline is
grant 'readonlyuser' , 'RXC', 'SYSTEM.CATALOG'
grant 'readonlyuser' , 'RX
Hi.
I don't think it's weird. That column is the PK and you've upserted the
same key value twice, so the first one is inserted and the second one is
updated.
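A quick way to see this - a hedged sketch with made-up table and connection
names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UpsertIsInsertOrUpdate {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS t (k VARCHAR PRIMARY KEY, v VARCHAR)");
            st.execute("UPSERT INTO t VALUES ('k1', 'first')");  // insert
            st.execute("UPSERT INTO t VALUES ('k1', 'second')"); // update, same key
            conn.commit();
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*), MAX(v) FROM t")) {
                rs.next();
                // Prints "1 row(s), v=second": one row, last write wins.
                System.out.println(rs.getLong(1) + " row(s), v=" + rs.getString(2));
            }
        }
    }
}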
Regards.
On 7 Feb 2017 04:59, "Dhaval Modi" wrote:
> Hi All,
>
> I am facing abnormal scenarios with ROW_TIMESTAMP.
>
> I created table in Phoenix
is
support we are stuck at 4.8.2 until we upgrade our cluster to HBase 1.1/1.2.
Is there any plan to support HBase 1.0 again in this (or a newer) version?
Thanks for the great work!
Regards.
--
Un saludo.
Pedro Boado.