>
Got it, thanks.
> On 10/22/19 5:08 PM, jesse wrote:
> > It is properly restored; we double-checked.
> >
> > We worked around the issue by restarting the query server.
> >
> > But it seems like a bad bug.
> >
> >
> >
> >
> >
>
ence in the restored table?
>
> On Fri, Oct 4, 2019 at 1:52 PM jesse wrote:
>
>> Let's say there is a running cluster A with a table "books": the
>> SYSTEM.SEQUENCE current value is 5000, the cache size is 100, the increment
>> is 1, and the latest book has sequence id 4800.
>>
>
Let's say there is a running cluster A with a table "books": the
SYSTEM.SEQUENCE current value is 5000, the cache size is 100, the increment
is 1, and the latest book has sequence id 4800.
Now the cluster A snapshot is backed up and restored into cluster B;
SYSTEM.SEQUENCE and the books table are properly restored. When we
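For context, a minimal sketch of the schema being described, assuming
standard Phoenix sequence DDL; the connection string and all names are
illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class BooksSetup {
        public static void main(String[] args) throws Exception {
            // With CACHE 100, each connection reserves a block of 100 values,
            // so the value persisted in SYSTEM.SEQUENCE (5000) can legitimately
            // run ahead of the last id actually handed out (4800).
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE SEQUENCE book_seq START WITH 1 INCREMENT BY 1 CACHE 100");
                stmt.execute("CREATE TABLE books (id BIGINT PRIMARY KEY, title VARCHAR)");
                stmt.execute("UPSERT INTO books VALUES (NEXT VALUE FOR book_seq, 'a title')");
                conn.commit();
            }
        }
    }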
Josh, in your sample project pom.xml file, the following build dependency
is not needed:

<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-server-client</artifactId>
  <version>4.7.0-HBase-1.1-SNAPSHOT</version>
</dependency>
On Thu, Sep 19, 2019, 10:53 AM jesse wrote:
> A) phoenix-4.14.2-HBase-1.4-thin-client.jar
>
> Just A) is good enough, Josh.
On Thu, Sep 19, 2019, 8:54 AM jesse wrote:
> You confused me more. If I write a Java program that hits the PQS HTTP
> endpoint for Phoenix reads and writes, should I depend on
>
> A) phoenix-4.14.2-HBase-1.4-thin-client.jar
>
> B) phoenix-queryserver-client-4.14.2-HBase-1.4.jar
>
>
> a shaded jar is created, with the
> human-readable name "thin-client" to make it very clear to you that this
> is the jar to use.
>
> The Maven build shows how all of this works.
>
> On 9/18/19 8:04 PM, jesse wrote:
> > It seems it is just the sqllinewrap
It seems it is just the sqlline wrapper client; such a confusing name...
On Wed, Sep 18, 2019, 4:46 PM jesse wrote:
> For queries via PQS, we are using phoenix-4.14.2-HBase-1.4-thin-client.jar
>
> Then what is the purpose and usage
> of phoenix-queryserver-client-4.14.2-HBase-1.4.jar?
>
> Thanks
>
For queries via PQS, we are using phoenix-4.14.2-HBase-1.4-thin-client.jar
Then what is the purpose and usage
of phoenix-queryserver-client-4.14.2-HBase-1.4.jar?
Thanks
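To make the jar distinction concrete, here is a minimal sketch of
connecting through PQS with only the thin-client jar on the classpath;
the host, port, and query are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ThinClientSketch {
        public static void main(String[] args) throws Exception {
            // Older JDBC setups may need the driver loaded explicitly:
            Class.forName("org.apache.phoenix.queryserver.client.Driver");
            // Thin-client URL pointing at a PQS instance; PROTOBUF is the
            // usual serialization for recent PQS versions.
            String url = "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }

The fat client, by contrast, takes the ZK quorum in its URL
(jdbc:phoenix:zk-host:2181) and talks to HBase directly.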
f sync
>
> Below is the SQL to update table stats:
>
>     UPDATE STATISTICS <table_name>
>
> By default the above executes asynchronously, so it may take some time to
> complete depending on table size.
>
> On Tue, Aug 20, 2019, 6:34 AM jesse wrote:
>
>> And the table is simple and ha
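For reference, a hedged sketch of issuing the stats statement quoted above
over a standard Phoenix JDBC connection; the connection string and table
name are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RefreshStats {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // Rebuild the stats guideposts for the named table.
                stmt.execute("UPDATE STATISTICS my_table");
            }
        }
    }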
And the table is simple and has no index set up.
On Mon, Aug 19, 2019, 6:03 PM jesse wrote:
> We ran into some trouble; maybe someone could shed some light on this.
>
> The table has primary key columns c1, c2, and c3.
> The table is set with SALT_BUCKETS = 12. Now it has 14 regions.
>
> The table ha
it returns results
What the heck is going wrong? The system used to work fine.
On Mon, Aug 19, 2019, 5:33 PM James Taylor wrote:
> It’ll start with 12 regions, but those regions may split as they’re
> written to.
>
> On Mon, Aug 19, 2019 at 4:34 PM jesse wrote:
>
>> I have a table with SALT_BUCKETS = 12, but it has 14 regions; is this
>> right?
>>
>> Thanks
>>
>>
>>
I have a table with SALT_BUCKETS = 12, but it has 14 regions; is this right?
Thanks
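For reference, a minimal sketch of the salted table being discussed; all
names are illustrative. Salting pre-splits the table into 12 regions, but
HBase can split those regions further as they fill, which is why the region
count can exceed the bucket count:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SaltedTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // SALT_BUCKETS fixes the number of salt prefixes (and the
                // initial region count), not the region count forever:
                // HBase may split busy regions later, e.g. 12 -> 14.
                stmt.execute("CREATE TABLE t (c1 VARCHAR NOT NULL, c2 VARCHAR NOT NULL,"
                        + " c3 VARCHAR NOT NULL, v BIGINT,"
                        + " CONSTRAINT pk PRIMARY KEY (c1, c2, c3)) SALT_BUCKETS = 12");
            }
        }
    }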
> On Wed, Aug 7, 2019 at 9:14 PM jesse wrote:
>
>> Thank you all, very helpful information.
>>
>> 1) For a server-side ELB, what is the PQS health check URL path?
>>
>> 2) Does the Phoenix client library support client-side load balancing? i.e.,
>> the client gets a lis
QueryServer uses the ZK quorum to get everything it needs
> > - If you need to balance traffic across multiple PQS instances - then yes,
> > but again - it's up to you. Multiple PQS instances are not required just
> > because you have multiple HBase masters.
> >
> > On Wed, Aug 7, 2019 at 12:
Our cluster used to have one HBase master; now a secondary has been added. For
Phoenix, what changes should we make?
- Do we have to install new HBase libraries on the new HBase master node?
- Do we need to install a new query server on the HBase master?
- What configuration changes should we make?
- Do w
> Conclusion: With the current status of Phoenix, I would never use it again.
>
>
>
> Regards
>
> Martin
>
>
>
>
>
>
>
> *From:* jesse [mailto:chat2je...@gmail.com]
> *Sent:* Saturday, June 22, 2019 20:04
> *To:* user@phoenix.apache.org
I stumbled on this post:
https://medium.com/@vkrava4/fighting-with-apache-phoenix-secondary-indexes-163495bcb361
and the bug:
https://issues.apache.org/jira/browse/PHOENIX-5287
I had a similarly frustrating experience with Phoenix. In addition to
various performance issues, you can foun
It seems the writes take a long time and the system slows down substantially
under request load.
However, the official HBase doc mentions the soft limit is 32 MB.
STATS (as there are safeguards to prevent re-creating
> statistics too frequently).
>
> There have been some bugs in the past that resulted from invalid stats
> guideposts.
>
> On 6/19/19 3:25 PM, jesse wrote:
> > 1) hbase clone-snapshot into my_table
> > 2) sqlline
1) hbase clone-snapshot into my_table
2) sqlline.py zk:port console to create my_table.
Very straightforward.
On Wed, Jun 19, 2019, 11:40 AM anil gupta wrote:
> Sounds strange.
> What steps did you follow to restore the snapshot of the Phoenix table?
>
> On Tue, Jun 18, 2019 at 9:34 PM
hi:
When my table is restored via hbase clone-snapshot:
1) The sqlline.py console shows the proper number of records: select count(*)
from my_table.
2) select my_column from my_table limit 1 works fine.
However, select * from my_table limit 1; returns no rows.
Do I need to perform some extra ope
You just have to make sure you don't have schema changes during snapshots.
On Fri, Feb 12, 2016 at 6:24 PM Gaurav Agarwal
wrote:
> We can take or restore snapshots of the Phoenix tables from HBase.
> HBase also provides an export/import feature.
>
> Thanks
> On Feb 13, 2016 3:15 AM, "Nick Di
Yes, lots of people do, including folks at Salesforce. You need to set up your
own query-tuning infrastructure to make sure it runs OK.
On Tue, Jan 26, 2016, 8:26 AM John Lilley wrote:
> Does anyone ever use Phoenix on standalone HBase for production? Is it
> advisable?
>
>
>
> *John Lilley*
>
>
>
I think he means it's not a terribly expensive process - it is basically
just a fancy query proxy. If you are running a cluster any larger than 3
nodes you should seriously consider running at least a second or third
HMaster. When they are in standby mode they don't do very much - just watch
ZK for
I think with that version of Phoenix you should have that class.
1. Can you grep the jar contents and ensure the class (IndexedWALEditCodec)
is there?
2. Can you check the hbase classpath to ensure the jar is getting picked
up? (bin/hbase classpath)
On Sat, Nov 28, 2015, 6:10 PM Saba Varatharajap
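A quick hedged way to check both points from a JVM that uses the same
classpath: the codec class name below is the real one; everything else is
illustrative.

    public class CodecCheck {
        public static void main(String[] args) {
            // Run with the output of `bin/hbase classpath` on the classpath;
            // this reports whether the codec class is resolvable.
            String name = "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
            try {
                Class.forName(name);
                System.out.println("Found: " + name);
            } catch (ClassNotFoundException e) {
                System.out.println("Not found: " + name);
            }
        }
    }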
Great post, awesome to see the optimization going in.
Would be cool to see if we could roll in some of the stuff talked about at
the last meetup too :)
On Sun, Nov 8, 2015, 11:27 AM James Taylor wrote:
> Thanks, Juan. I fixed the typo.
>
> On Sun, Nov 8, 2015 at 11:21 AM, Samarth Jain
> wrote:
Correct. So you have to make sure that you have enough memory to handle the
fetchSize * concurrent requests.
On Tue, Oct 6, 2015 at 10:34 AM Sumit Nigam wrote:
> Thanks Samarth and Jesse.
>
> So, in effect setting the batch size (say, stmt.setFetchSize()) ensures
> that only that
So HBase (and by extension, Phoenix) does not do true "streaming" of rows -
rows are copied into memory from the HFiles and then eventually copied
en masse onto the wire. On the client they are pulled off in chunks and
paged through by the client scanner. You can control the batch size (amount
of ro
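A small hedged sketch of controlling that chunked paging from JDBC; the
connection string, fetch size, and query are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchSizeSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // Rows are buffered per round trip; memory needed scales with
                // fetchSize * concurrent requests.
                stmt.setFetchSize(1000); // illustrative chunk size
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // process rows as they are paged in
                    }
                }
            }
        }
    }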
Along the same lines as HBase, it seems fine to discontinue old code lines
until such point as there is someone willing to maintain a given line - a
new RM. Essentially, it's the same as an RM stepping down from managing a
release and no releases happening until someone cares enough to make a new
one.
e time. For instance, a query like:

    SELECT * FROM EXAMPLE WHERE m.c0 = 'a' AND m.c1 = 'b'

will leverage the index on both columns. However, if you are just querying
each column separately, then your solution (b) will be better.
Does that make sense?
--
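To illustrate the point, a hedged sketch; the index name, and the assumption
that m.c0 and m.c1 are indexed together, are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CompositeIndexSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                // One composite index covering both columns.
                stmt.execute("CREATE INDEX idx_c0_c1 ON EXAMPLE (m.c0, m.c1)");
                // A conjunctive filter on both leading columns can use it:
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT * FROM EXAMPLE WHERE m.c0 = 'a' AND m.c1 = 'b'")) {
                    while (rs.next()) {
                        // ...
                    }
                }
            }
        }
    }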
You could understand it just by reading the code, or running the tests, or
running an HBase minicluster in the JVM or in standalone mode or in
pseudo-distributed mode or in a fully distributed setup.
What are you trying to achieve?
In the area you are interested in, have you:
- read the docs
- r
It looks like you are using two different metrics files on the classpath of
the server. You can only have one (quirk of Hadoop's metrics2 system). The
configurations for the phoenix sinks should be in the
hadoop-metrics2-hbase.properties
file since HBase will load the metrics system before the phoe
And it looks like you already figured that out :)
On Tue, Jan 6, 2015, 9:43 AM Jesse Yates wrote:
> You wouldn't even need another table, just a single VARCHAR[] column to
> keep the column names. It's ideal to keep it in the same row (possibly in
> another cf if you expect it to b
hings
like annotations.a0, .a1, .a2, etc)
The downside is that you then need to do two queries to get all the
columns, but until we implement the cf.* logic for dynamic columns, that's
the best you can do.
- jesse
On Tue, Jan 6, 2015, 9:23 AM Sumanta Gh wrote:
> Thanks Nicolas for replyi
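A hedged sketch of that two-query pattern; the schema is entirely
hypothetical, and the dynamic-column syntax in the comment follows the
documented SELECT form:

    import java.sql.Array;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DynamicColumnsSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE anno (id VARCHAR PRIMARY KEY, col_names VARCHAR[])");
                // Query 1 of 2: read the stored dynamic column names.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT col_names FROM anno WHERE id = 'row1'")) {
                    while (rs.next()) {
                        Array names = rs.getArray(1);
                        // Query 2 of 2 would declare those names as dynamic columns:
                        //   SELECT * FROM anno(a0 VARCHAR, a1 VARCHAR) WHERE id = 'row1'
                        System.out.println(names);
                    }
                }
            }
        }
    }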
The phoenix indexes are only kept up to date when writes are made through
the phoenix client.
A more "out there" option would be to write your own indexer plugin (the
actual name escapes me right now) that does similar writes when the Phoenix
plugin wouldn't do an update (e.g. a non-Phoenix client wr
You absolutely can use snapshots with Phoenix.
You would need to snapshot both the Phoenix metadata table and the table
you want to snapshot.
Then on restore, you restore both those tables to new tables, point Phoenix
there, and get the data you need.
Missing pieces:
1) I'm not sure there is a wa
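A hedged sketch of that procedure using the HBase Admin API; snapshot and
table names are illustrative, and SYSTEM.CATALOG is the Phoenix metadata
table:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class SnapshotSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Snapshot both the Phoenix metadata table and the data table.
                admin.snapshot("catalog_snap", TableName.valueOf("SYSTEM.CATALOG"));
                admin.snapshot("my_table_snap", TableName.valueOf("MY_TABLE"));
                // On the restore side, clone to new tables and point Phoenix there.
                admin.cloneSnapshot("my_table_snap", TableName.valueOf("MY_TABLE_COPY"));
            }
        }
    }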
, 2014 10:59 PM, "Flavio Pompermaier" wrote:
> So Hoya is a different thing from HBase on Hadoop 2?
> On Sep 25, 2014 11:23 PM, "Jesse Yates" wrote:
>
>> Mostly it's the use of Hadoop metrics2 that has to change, as they change
>> the backing methods. If you are
Mostly it's the use of Hadoop metrics2 that has to change, as they change
the backing methods. If you are interested, take a look at the HBase
compatibility layer
<https://github.com/apache/hbase/tree/master/hbase-hadoop-compat/> to
see most of where HBase shims things.
---
Jesse
I don't think we should, though we could continue it in the 4.X line
for compatibility's sake.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Mon, Sep 22, 2014 at 9:31 AM, James Taylor
wrote:
> +1 to doing the same for hbase-testing-util. Thanks for the analysis,
>
th it!
---
Jesse Yates
@jesse_yates
jyates.github.com
On Mon, Sep 15, 2014 at 11:57 AM, Krishna wrote:
> Hi, Is anyone aware of Phoenix meetups coming up in the next couple of
> months in Bay Area?
>
> Thanks
>
>
>
>
ou really have a concern with?
---
Jesse Yates
@jesse_yates
jyates.github.com
On Thu, Sep 4, 2014 at 1:34 AM, su...@certusnet.com.cn <
su...@certusnet.com.cn> wrote:
> I know that disabling tracing would require removing the phoenix metrics2
> configuration
> and bouncing th
Filed https://issues.apache.org/jira/browse/PHOENIX-1234 and attached what
I think is the fix.
With this patch you still need to ensure that "hbase.zookeeper.quorum" gets
set to just the hostnames, not the host:port combination, as per the HBase
ref guide.
---
J
> rs2.example.com:...,rs3.example.com:...,
> rs4.example.com:...,rs5.example.com:...
port=...
which will not create a correct connection.
---
Jesse Yates
@jesse_yates
jyates.github.com
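For reference, a hedged sketch of the intended configuration: bare hostnames
in the quorum key and the client port in its own key. Both keys are standard
HBase configuration; the values are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuorumConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Bare hostnames only in the quorum; the port goes in its own key.
            conf.set("hbase.zookeeper.quorum", "rs1.example.com,rs2.example.com,rs3.example.com");
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            System.out.println(conf.get("hbase.zookeeper.quorum"));
        }
    }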
On Wed, Sep 3, 2014 at 1:22 PM, Jeffrey Zhong
wrote:
>
> I think the “hbase.zo
file JIRA
---
Jesse Yates
@jesse_yates
jyates.github.com
On Wed, Sep 3, 2014 at 1:05 PM, Mujtaba Chohan wrote:
> Phoenix connection URL should be of this form
> jdbc:phoenix:zookeeper2,zookeeper1,zookeeper3:2181
>
>
>
> On Wed, Sep 3, 2014 at 12:11 PM, Jesse Yates
>
It looks like the connection string that the tracing module is using isn't
configured correctly. Is 2181 the client port on which you are running
zookeeper?
@James Taylor - phoenix can connect to multiple ZK nodes this way, right?
---
Jesse Yates
@jesse_yates
jyates.githu
a#L142>.
When it receives a metric (really, just a conversion of a span to a Hadoop
metrics2 metric), it will create the table as needed.
Hope that helps!
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Aug 26, 2014 at 7:21 PM, Dan Di Spaltro
wrote:
> I'
) and you can pull the files you need directly from there for the
moment.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Fri, Aug 22, 2014 at 2:35 AM, su...@certusnet.com.cn <
su...@certusnet.com.cn> wrote:
> Hi all,
> I got the v4.1.0-rc0 phoenix release f
to go.
I imagine this is also what various distributors are doing for their
forks.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Aug 19, 2014 at 3:36 PM, Russell Jurney
wrote:
> First of all, I apologize if you feel like I was picking on you. I was not
> t
Yup, that looks like an issue to me :-/
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Aug 19, 2014 at 2:06 PM, Russell Jurney
wrote:
> Running against any version would be ok, but it does not work. I get this
> error:
>
> 2014-08-19 14:03:
s not true, that means you should probably file a
jira.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Aug 19, 2014 at 11:36 AM, Russell Jurney
wrote:
> That's really bad. That means... CDH 5.x can't run Phoenix? How can this be
> fixed? I'm not sure what to do.
That seems correct. I'm not sure where the issue is either. It seems like
the property isn't in the correct config files (also, you don't need it on
the master configs, but it won't hurt).
Is the property there when you dump the config from the RS's UI page?
--
n/../lib/zookeeper/*:/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/../lib/zookeeper/lib/*:
> *
>
> this is the result I got for the HBase classpath command... and this is the
> "/opt/cloudera/parcels/CDH/lib/hbase/lib/" path where I executed the code...
>
>
> On Mon,
> System.out.println("True");
> return true;
> }
> System.out.println("Not Found");
> return false;
> }
>
> }
>
>
> I am not sure this is how you want me to execute the code... If I am wrong
:03 AM, "Saravanan A" wrote:
> Hi Jesse,
>
> I ran the following code to test the existence of the classes you asked me
> to check. I initialized the two constants to the following values.
>
> ===
> public static final String
0.98.4+ (as pointed out in the section "Advanced
Setup - Removing Index Deadlocks (0.98.4+)"). However, it should still be
fine to have in older versions.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Fri, Aug 8, 2014 at 2:18 AM, Saravanan A
wrote:
It just uses the standard Phoenix connection to do the writes, so whatever
Phoenix can do, it can also do.
---
Jesse Yates
@jesse_yates
jyates.github.com
On Wed, Jul 2, 2014 at 12:32 PM, Rob Anderson
wrote:
> This is great! As a recovering Oracle DBA, I long for the sw
ike in
the Hadoop2 impl).
Happy to push the code somewhere so people can take a look... or they can
just wait a couple weeks :)
---
Jesse Yates
@jesse_yates
jyates.github.com
On Tue, Jul 1, 2014 at 10:53 AM, James Taylor
wrote:
> How about if we make it a hadoop2 only feat
t implementation (as a metrics sink) that writes them to a Phoenix
table so they can be analyzed later. Because of the changes between Hadoop1
and 2, there is some reflection funkiness that is necessary to support
both, but I just haven't finished the hadoop1 side of it.
---