Yes, Job is correct: support for subqueries is in progress but not yet
implemented. In the meantime, you'll need to run this as two separate
queries:
select max(time) from session;
select * from session where time = the_max;
Performance-wise, there will not be much difference for this query.
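As an illustrative sketch only, the two queries could be chained from JDBC with a bind variable; the connection URL is a placeholder and the time column is assumed to be a TIMESTAMP:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.sql.Timestamp;

    public class MaxSessionTime {
        public static void main(String[] args) throws Exception {
            // "localhost" is a placeholder for your ZooKeeper quorum.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                Timestamp maxTime;
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT MAX(time) FROM session")) {
                    rs.next();
                    maxTime = rs.getTimestamp(1);
                }
                // Feed the result of the first query into the second as a bind variable.
                try (PreparedStatement ps =
                         conn.prepareStatement("SELECT * FROM session WHERE time = ?")) {
                    ps.setTimestamp(1, maxTime);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getObject(1));
                        }
                    }
                }
            }
        }
    }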
Thank
We've finished the conversion from our incubator home to our new top
level project home. Please make sure to adjust your URLs to the git
and svn repo as documented here: http://phoenix.apache.org/source.html
Everyone who was subscribed to the incubator user list is now
subscribed to our TLP user l
Sounds like you have a mix of the old GitHub phoenix jar and the new Apache
phoenix jar (maybe on your client class path), as the class
com.salesforce.phoenix.schema.TableAlreadyExistsException doesn't exist in
the Apache jars.
James
On Thu, Jun 5, 2014 at 12:36 PM, Russell Jurney
wrote:
> Sud
Hi Justin,
What Jeffrey said is accurate. It would not be difficult to add a new
built-in function for this purpose, but it's ambiguous as far as which
KeyValue you'd use to get the timestamp as there is no "row"
timestamp. Would you filter all columns against the timestamp? If all
KeyValues for a
Can you give an example of what you mean? LIKE is for strings, but you
can cast an Integer/Double to a string.
Thanks,
James
On Sun, Jun 8, 2014 at 9:02 PM, Ramya S wrote:
>
>
> Does the LIKE operator in phoenix support Integer/Double column values ?
>
>
>
> Thanks
> Ramya.S
>
>
Very nice! Thanks for letting us know about this - very useful.
James
On Sun, Jun 8, 2014 at 8:05 PM, Pham Phuong Tu wrote:
> Hi guys,
>
> I just wrote a restful service for query, caching, log slow query for
> Phoenix. No more time spend with ssh & phoenix shell. Run your query via
> url.
> htt
phoenix handles type cast of Integer/Double to a string internally?
>
>
>
> Thanks
>
> Ramya.S
>
>
>
>
> From: James Taylor [mailto:jamestay...@apache.org]
> Sent: Mon 6/9/2014 9:48 AM
> To: user
> Subject: Re: LIKE operator in Ph
TEGER and VARCHAR for ASH_ID
>
> Please help to solve...
>
>
>
> Thanks
> Ramya.S
>
>
> ____
>
> From: James Taylor [mailto:jamestay...@apache.org]
> Sent: Mon 6/9/2014 10:46 AM
> To: user
> Subject: Re: LIKE operator in Phoeni
Try this as a workaround:
select * from account_service_history where TO_CHAR(ash_id) LIKE '217%'
Thanks,
James
On Mon, Jun 9, 2014 at 10:28 AM, James Taylor wrote:
> Please file a JIRA here: https://issues.apache.org/jira/browse/PHOENIX
> and include the query plus y
The default column family (i.e. the name of the column family used for
your table when one is not explicitly specified) was changed from _0
to 0 between 2.2 and 3.0/4.0. You can override this in your CREATE
TABLE statement through the DEFAULT_COLUMN_FAMILY property. The
upgrade script modifies this
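For illustration, a minimal sketch of overriding the default column family in DDL; the table and column names are made up, and '_0' is shown only because it was the pre-3.0 default:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DefaultCfExample {
        public static void main(String[] args) throws Exception {
            // "localhost" is a placeholder for your ZooKeeper quorum.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                // Keep the pre-3.0 column family name so existing data remains visible.
                stmt.execute("CREATE TABLE IF NOT EXISTS example_table ("
                           + "  id BIGINT NOT NULL PRIMARY KEY,"
                           + "  val VARCHAR"
                           + ") DEFAULT_COLUMN_FAMILY = '_0'");
            }
        }
    }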
ur existing data first to CDH5 and try out a
> few things on HBase 0.96.
>
>
> On Sun, Jun 29, 2014 at 9:50 AM, James Taylor
> wrote:
>>
>> The default column family (i.e. the name of the column family used for
>> your table when one is not explicitly specified) was cha
Hi,
I was going to suggest the above (i.e. using bind variables). There's
currently no way to specify a binary constant on the command line. You
could enhance the ENCODE built-in function to take a VARCHAR and
return a VARBINARY. Or you could modify the grammar to interpret
escaped characters corre
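A minimal sketch of the bind-variable approach, assuming a hypothetical table with a VARBINARY column; the bytes are passed directly through JDBC, so no command-line escaping is involved:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BinaryBindExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical table: CREATE TABLE bin_table (k VARCHAR PRIMARY KEY, payload VARBINARY)
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 PreparedStatement ps = conn.prepareStatement(
                         "UPSERT INTO bin_table (k, payload) VALUES (?, ?)")) {
                ps.setString(1, "row1");
                // The binary constant is supplied as bytes rather than as SQL text.
                ps.setBytes(2, new byte[] {0x01, 0x02, (byte) 0xFF});
                ps.executeUpdate();
                conn.commit();
            }
        }
    }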
the table
>> is salted, and load the data accordingly.
>>
>> Can you think in any other scenario?
>>
>> Roberto.
>>
>> On 28/05/2014 6:01 PM, "Roberto Gastaldelli"
>> wrote:
>>
>>
>>
.SqlLine$Commands.sql(SqlLine.java:3584)
> at sqlline.SqlLine.dispatch(SqlLine.java:821)
> at sqlline.SqlLine.begin(SqlLine.java:699)
> at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
> at sqlline.SqlLine.main(SqlLine.java:424)
>
>
>
> On Sun, Jun 29, 2014 at
Increase the client-side Phoenix timeout (phoenix.query.timeoutMs) and
the server-side HBase timeout (hbase.regionserver.lease.period).
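A sketch of raising the client-side timeout via connection properties (the timeout value and URL are placeholders); the server-side setting belongs in the region servers' hbase-site.xml, not in client code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class TimeoutExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Client-side Phoenix query timeout, in milliseconds (example: 10 minutes).
            props.setProperty("phoenix.query.timeoutMs", "600000");
            // hbase.regionserver.lease.period must be set on the servers' hbase-site.xml.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
                // run the long-running statement with this connection
            }
        }
    }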
Thanks,
James
On Fri, Jun 20, 2014 at 6:30 PM, Andrew wrote:
> Using Phoenix 4 & the bundled SqlLine client I am attempting the following
> long-running command
Russell,
What version of Phoenix are you using? If you can produce a unit test
for this, then we'll get a fix for you in our next patch release.
Thanks,
James
On Fri, Jun 20, 2014 at 12:22 AM, Jody Landreneau
wrote:
> for your item 3, I am trying the following that seems to work from the
> SQuirr
HBase has many security features that you define on a per table, per
user basis. Though these aren't surfaced in SQL by Phoenix (we have
open JIRAs if someone is interested in doing this), the security
defined at the HBase level would be honored by Phoenix (as Phoenix
goes through the standard HBas
Hi,
What version of Phoenix are you currently using? I remember a bug
along these lines that was fixed, but I thought it made it into
3.0/4.0. Does the problem occur on the latest on the 3.0/4.0 branch?
The index selection is done in QueryOptimizer, but I doubt the bug
would be there. Might be in
How about if we make it a hadoop2 only feature?
On Tuesday, July 1, 2014, Jesse Yates wrote:
> I was working on a patch to support using Cloudera's HTrace (
> https://github.com/cloudera/htrace) library for phoenix queries. There
> was a preliminary patch for the 2.X series, but it never got com
ands (just like in the
> Hadoop2 impl).
>
> Happy to push the code somewhere so people can take a look... or they just
> wait a couple weeks :)
>
> ---
> Jesse Yates
> @jesse_yates
> jyates.github.com
>
>
> On Tue, Jul 1, 2014 at 10:53 AM, James T
ks,
>
> Eli
>
>
> On Tue, Jul 1, 2014 at 11:13 AM, James Taylor wrote:
>
>> Seems like an excellent feature to have in 4.1 which I'm hoping we can
>> do by end of the month. I'd rather the feature make it in and only
>> support hadoop2 than h
Please file a JIRA and attach a reproducible test case.
Thanks,
James
On Wed, Jul 2, 2014 at 4:29 AM, Pham Phuong Tu wrote:
> Hi guys,
>
> I have one big problem with Phoenix is some time, range query like: >, <,
> <=, >= return missing one or more result,
>
> E.g:
> SELECT count(1) AS total, hou
If the table already exists, then CREATE TABLE IF NOT EXISTS is a noop.
Drop the table first and then create initially with the COMPRESSION='SNAPPY'
property.
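As a hedged sketch (the table name and columns are placeholders), the drop-and-recreate sequence might look like this; note that dropping the table also deletes its data:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SnappyTableExample {
        public static void main(String[] args) throws Exception {
            // "localhost" is a placeholder for your ZooKeeper quorum.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("DROP TABLE IF EXISTS example_table");  // removes the table and its data
                stmt.execute("CREATE TABLE example_table ("
                           + "  id BIGINT NOT NULL PRIMARY KEY,"
                           + "  val VARCHAR"
                           + ") COMPRESSION = 'SNAPPY'");
            }
        }
    }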
Thanks,
James
On Wed, Jul 2, 2014 at 11:29 AM, puneet wrote:
> Hi Team,
>
> I need snappy compression to be used for the Hbase table b
>
>
>
> On Thu, Jul 3, 2014 at 6:00 PM, James Taylor wrote:
>>
>> It's best to go through the JDBC APIs - any particular reason you're
>> bypassing them
Looks like a bug - it should not be necessary to have an IS NOT NULL
filter. Please file a JIRA.
Thanks,
James
On Fri, Jul 4, 2014 at 2:50 PM, puneet wrote:
> Seems , it is only happening for Phoenix 4.0.0 and not for Phoenix 3.0.0
>
>
> On Friday 04 July 2014 05:54 PM, puneet wrote:
>
> Hi T
Hi Gagan,
Sorry you're running into problems. You may have hit a bug in skip
scan. The skip scan filter acts as a finite state machine. If you can
isolate the row *before* this state and the incoming KeyValue that
causes this issue, then we'll have the information we need to fix it.
If you could pa
Hi,
All bug fixes are applied to both the 3.0 and 4.0 branches, so the fix
should appear in our upcoming 3.1/4.1 release. If you can track down
the JIRA, I can double check for you.
Thanks,
James
On Tue, Jul 15, 2014 at 7:20 PM, ashish tapdiya wrote:
> Hi,
>
> I am using Phoenix secondary index an
g/mod_mbox/phoenix-dev/201406.mbox/%3CJIRA.12723505.1403649977329.41328.1403649985092@arcas%3E
>
> When is 3.1 gonna come out.
>
> Thanks and appreciate the help.
> ~Ashish
>
>
> On Tue, Jul 15, 2014 at 12:24 PM, James Taylor
> wrote:
>>
>> Hi,
>> All bug fix
Thanks for the positive feedback, Mike. I wouldn't counter this
argument, I'd agree with it. The more Cloudera customers that ask for
Phoenix to be included in the distribution, the more likely it is to
be included. Some good news, though - Cloudera just released 5.1 which
should work fine with Pho
Hi Steve,
Thanks for reporting this - it looks like a bug. I've filed this JIRA
for it: https://issues.apache.org/jira/browse/PHOENIX-1102
As a work around while a fix is being made, try naming AA.NUM2 and
BB.NUM3 with the same column name: AA.NUM2 and BB.NUM2. I suspect that
will work around this
Hi Abe,
No, nothing required beyond what HBase requires to upgrade from 0.94 to 0.98.
Thanks,
James
On Mon, Jul 21, 2014 at 6:17 AM, Abe Weinograd wrote:
> Is there anything specific we need to do to upgrade from 3.0 to 4.0 other
> than upgrade server and client jars?
>
> Thanks,
> Abe
SQL statements are executed synchronously in Phoenix, but you can
always spawn your own thread and make it asynchronous. Note that
connections are not thread-safe, so you'd need to do this in its own
connection.
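For illustration, a minimal sketch of running a statement asynchronously; the table names and URL are placeholders, and the task opens its own connection since Phoenix connections are not thread-safe:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class AsyncUpsertExample {
        public static void main(String[] args) throws Exception {
            ExecutorService executor = Executors.newSingleThreadExecutor();
            Future<Integer> pending = executor.submit(() -> {
                // A separate connection per task; connections must not be shared across threads.
                try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                     Statement stmt = conn.createStatement()) {
                    int rows = stmt.executeUpdate("UPSERT INTO target_table SELECT * FROM staging_table");
                    conn.commit();
                    return rows;
                }
            });
            System.out.println("Rows written: " + pending.get()); // block only when the result is needed
            executor.shutdown();
        }
    }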
Thanks,
James
On Sun, Jul 20, 2014 at 7:14 PM, Rahul Mittal wrote:
> Does phoenix s
Hi Steve,
Just wanted to let you know that this issue has been fixed (thanks to
Anoop). The fix will appear in our next release which is planned in a
few weeks.
Thanks,
James
On Sat, Jul 19, 2014 at 7:45 AM, James Taylor wrote:
> Hi Steve,
> Thanks for reporting this - it looks like a bug
Hi Michael,
You need to re-bind the variables with the id of the last row you get
back from the SELECT statement (the 10th row in this case). Also, you
only need to compare the PK column in the WHERE clause, so you
could do something like this:
SELECT id, interfaceid FROM moninorlog WHERE
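A hedged sketch of that re-binding loop, assuming (for illustration only) that ID is a numeric leading PK column of MONINORLOG and that pages are 10 rows:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PagingExample {
        public static void main(String[] args) throws Exception {
            String sql = "SELECT id, interfaceid FROM moninorlog WHERE id > ? ORDER BY id LIMIT 10";
            long lastId = 0L; // a value smaller than any real id
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                while (true) {
                    ps.setLong(1, lastId);
                    int rowsInPage = 0;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong(1); // remember the PK of the last row seen
                            rowsInPage++;
                        }
                    }
                    if (rowsInPage < 10) {
                        break; // fewer rows than the page size means the last page was reached
                    }
                }
            }
        }
    }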
Hi Abe,
FWIW, there's an improvement in place
(https://issues.apache.org/jira/browse/PHOENIX-539) for our upcoming
next release that doesn't cause the first call to next() to pull over
everything. Instead, it is done in chunks.
As far as what you can do now, I'd recommend putting a LIMIT clause on
Yes, we're planning on cutting an RC early next week.
Thanks,
James
On Fri, Jul 25, 2014 at 12:36 PM, Abe Weinograd wrote:
> Thanks James. That's very helpful.
>
> 4.1 is being released soon?
>
> Thanks,
> Abe
>
>
> On Fri, Jul 25, 2014 at 3:34 PM, James T
Yes, we have code at Salesforce that does exactly that. You really
only need to cache 1) the row number ordinal and 2) the PK column
values (and only if there's an ORDER BY in your query, as otherwise
you can page through the results using the paging mechanism
described in this thread). Then you
For an example of this flow, see QueryMoreIT:
https://github.com/apache/phoenix/blob/master/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
On Fri, Jul 25, 2014 at 12:45 PM, James Taylor wrote:
> Yes, we have code at Salesforce that does exactly that. You really
> onl
Looks like you may be the first, Mike. If you try it, would you mind
reporting back how it works?
Thanks,
James
On Mon, Jul 28, 2014 at 10:52 AM, Alex Kamil wrote:
> Mike, I'm on cdh4, but generally the extra steps are rebuilding phoenix with
> hadoop and hbase jars from cdh, I posted steps here
(Unknown Source)
>at com.onseven.dbvis.g.B.D.ā(Z:1413)
>at com.onseven.dbvis.g.B.F$A.call(Z:1474)
>at java.util.concurrent.FutureTask.run(Unknown Source)
>at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>a
Hi Deepak,
I removed the dev and commit list in my reply. The dev list is mainly
for those interested in technical discussions about how to implement
and/or fix features in Phoenix. The commit list is a list you can
subscribe to if you want to be notified of commits to the source repo
of Phoenix.
ould you verify that you're
> using right Phoenix 4.0 client? You could use sqlline.py under bin folder
> to test connection.
>
> Thanks,
> -Jeffrey
>
> On 7/28/14, 3:28 PM, "James Taylor" wrote:
>
>>Thanks for reporting back, Mike. That's disconcer
Hi Bob,
Phoenix doesn't support HBase 0.96. You'll need to either:
- upgrade to HDP 2.1
- fix PHOENIX-848
Thanks,
James
On Wed, Jul 30, 2014 at 10:32 AM, Russell, Bob wrote:
> Nicolas,
>
> Error message for both 3.0 and 4.0 below. Hopefully, there is something
> simple to get this going witho
Not a known issue as far as I know. Please file a JIRA. If you can try
against the latest in the 3.0 branch to see if the problem is fixed
already, that would be much appreciated.
Thanks,
James
On Wed, Jul 30, 2014 at 11:20 AM, Abe Weinograd wrote:
> I am testing some queries in Squirrel. Becaus
Hi Michael,
Take a look at Paged Queries here:
http://phoenix.apache.org/paged.html as well as this email thread:
http://s.apache.org/Dct. Phoenix does not support the OFFSET keyword
in SQL and likely never will as it cannot be implemented efficiently.
Thanks,
James
On Mon, Aug 4, 2014 at 8:36 PM,
Hi Bob,
I'm not sure how Sqoop is treating date/time values from Oracle, but
Phoenix uses an 8 byte long (see
http://phoenix.apache.org/language/datatypes.html). Try using
UNSIGNED_DATE in your Phoenix schema. The regular types in Phoenix
flip the sign bit so that we can support negative time value
Hi Faisal,
Yes you can use a built-in Boolean function as you've shown in your query.
You can also omit the =TRUE part like this:
SELECT name
FROM profileTable
WHERE name LIKE 'Ale%' AND *myFunc*(name);
How this is processed depends on whether or not the table is salted and
the name is the leadin
Thanks for confirming the workaround, Ashish. I filed this JIRA:
https://issues.apache.org/jira/browse/PHOENIX-1159.
James
On Sun, Aug 10, 2014 at 3:35 PM, ashish tapdiya wrote:
> Gabriel,
>
> Suggested work around of disabling secondary works like a charm. Thanks for
> your help.
>
> ~Ashis
Are you running against a secure cluster? If so, you'd need to compile
Phoenix yourself, as the jars in our distribution are for a non-secure
cluster.
On Mon, Aug 11, 2014 at 10:29 AM, Jesse Yates wrote:
> That seems correct. I'm not sure where the issue is either. It seems like
> the property isn'
A secondary index will only be maintained if you go through Phoenix APIs
when you update your data table. Create a table over your HBase table
instead of a view and use Phoenix UPSERT and DELETE statements to update
your data instead of HBase APIs and your mutable secondary index will be
maintaine
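A minimal sketch of that flow; all names below are hypothetical, with "cf" standing in for the existing HBase column family:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MutableIndexExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                // Map the HBase table as a Phoenix TABLE (not a VIEW) so writes go through Phoenix.
                stmt.execute("CREATE TABLE IF NOT EXISTS \"my_hbase_table\" ("
                           + "  pk VARCHAR PRIMARY KEY,"
                           + "  \"cf\".\"val\" VARCHAR)");
                stmt.execute("CREATE INDEX IF NOT EXISTS my_idx ON \"my_hbase_table\" (\"cf\".\"val\")");
                // Writing through Phoenix keeps the index in sync; direct HBase Puts would not.
                stmt.executeUpdate("UPSERT INTO \"my_hbase_table\" VALUES ('row1', 'abc')");
                conn.commit();
            }
        }
    }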
The dependencies on HBase 0.98.4 are *compile time* dependencies. Is it
necessary for you to compile against CDH 5.1 or just run against it?
On Tuesday, August 19, 2014, Russell Jurney
wrote:
> Thats really bad. That means... CDH 5.x can't run Phoenix? How can this be
> fixed? I'm not sure what
Sorry you're having so much trouble, Russell. The hbase-site.xml overrides
values from hbase-default.xml, so I wouldn't worry about not finding
that file. Just set what you need in hbase-site.xml and make sure it's
on the classpath.
Thanks,
James
On Thu, Aug 21, 2014 at 9:40 PM, anil g
Good point, Anil. That's been a cause of issues in the past as well. Thanks,
James
On Thu, Aug 21, 2014 at 9:42 PM, anil gupta wrote:
> Another naive suggestion:
> Is it possible that maybe you have many/conflicting phoenix-client or
> hbase jar files in squirrel classpath?
>
>
> On Thu, Aug 2
Hi Jan,
Yes, this works as designed. Would you mind filing a JIRA for us to enhance
our multi tenant docs, as it sounds like it's unclear?
Without creating a view, you won't be able to add tenant specific columns
or indexes (i.e. evolve each tenant's schema independently). You can, of
course, crea
I think Ravi committed this as part of a different JIRA. Ravi?
Thanks,
James
On Mon, Aug 25, 2014 at 9:03 PM, Randy Martin wrote:
> It looks like JIRA issue PHOENIX-898 was originally tracking this, but it
> looks like this issue has been reverted in 4.1.0 RC 0 and 1. Can anyone
> from the Phoe
Thanks, JM. It'd be great to have support for Phoenix 4.1 once it's
officially released (hopefully in a few days if the RC holds up).
On Tue, Aug 26, 2014 at 4:46 PM, Jean-Marc Spaggiari
wrote:
> I faced this and also, BigTop doesn't compile against Phoenix 4.0.1. And
> Phoenix 4.0 has an hbase-
-Marc Spaggiari
wrote:
> Hi James,
>
> I can see 4.0.1 and 4.0, but not 4.1. Which branch will be used for 4.1?
> Will it be from 4.0.1?
>
> Thanks,
>
> JM
>
>
> 2014-08-26 19:54 GMT-04:00 James Taylor :
>
>> Thanks, JM. It'd be great to have support for Pho
Hi JM,
Let me make sure I understand your use case. You have 156M rows worth
of data in the form CustID (BIGINT), URL (VARCHAR). You have a CSV
file with the data. Is CustID already unique in the CSV file? If not,
won't you run into issues trying to load the data, as you'll be
overwriting row value
Sounds like you may have an out-of-sync issue with your
SYSTEM.CATALOG. What version of Phoenix were you using before you
tried the 4.1 RC? Is this the 4.1 RC1? Did you upgrade from an earlier
Phoenix version, as the IS_VIEW_REFERENCED column was added in the 3.0/4.0
release, I believe? If you upgraded, d
You can try a few things:
- salt your table by tacking on a SALT_BUCKETS=n where n is related to
the size of your cluster. Perhaps start with 16.
- lead your primary key constraint with "core desc" if this is your
primary means of accessing this table.
- add a secondary index over "core desc" if th
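As a hedged illustration of those suggestions, the schema and names below are entirely hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SaltedTableExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                // Salted table with the "core" column leading the PK in descending order.
                stmt.execute("CREATE TABLE IF NOT EXISTS usage_stats ("
                           + "  core BIGINT NOT NULL,"
                           + "  host VARCHAR NOT NULL,"
                           + "  val DOUBLE,"
                           + "  CONSTRAINT pk PRIMARY KEY (core DESC, host)"
                           + ") SALT_BUCKETS = 16");
                // Alternatively, keep the existing row key and add a secondary index instead.
                stmt.execute("CREATE INDEX IF NOT EXISTS core_idx ON usage_stats (core DESC)");
            }
        }
    }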
Hey Dan,
There were some changes in the test framework to make them run faster.
Our entire test suite can run in about 10-15mins instead of 60mins
now. One of the new requirements is adding the annotation that Samarth
indicated. Once JUnit releases 4.12, this will no longer be necessary,
as the ann
Hi Vikas,
Glad you got it working. Just curious - why did you install Phoenix
via yum when the HDP 2.1 already comes pre-installed with Phoenix?
Thanks,
James
On Mon, Sep 1, 2014 at 10:16 AM, Vikas Agarwal wrote:
> Yes, I am using HDP 2.1 and installed Phoenix via yum and it installed 4.0.0
> of
In addition to the above, in our 3.1/4.1 release, you can pass through
the principal and keytab file on the connection URL to connect to
different secure clusters, like this:
DriverManager.getConnection("jdbc:phoenix:h1,h2,h3:2181:user/principal:/user.keytab");
The full URL is now of the form
jdb
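For completeness, a small sketch of using that URL form from code; the quorum hosts, principal, and keytab path are placeholders for your secure cluster:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SecureConnectExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:h1,h2,h3:2181:user/principal:/user.keytab")) {
                // queries issued on this connection run as the given principal
            }
        }
    }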
RpcClient.call(RpcClient.java:1457)
> at
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1657)
> ... 34 more
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at
> org.apache.hado
Hello everyone,
On behalf of the Apache Phoenix team, I'm pleased to announce the
immediate availability of our 3.1 and 4.1 releases:
http://phoenix.apache.org/download.html
These include many bug fixes along with support for nested/derived
tables, tracing, and local indexing. For details of the
Double check that you don't have the old Phoenix jar in the Squirrel
classpath still.
On Tue, Sep 2, 2014 at 9:17 AM, Abe Weinograd wrote:
> I just tried this with the new bits and am seeing the same behavior. I am
> using the hadoop2/phoenix-4.10-client-hadoop2.jar
>
> Using that directly in Sq
Hi Liang,
I recommend you try this with the binaries we package in our 4.1 release
instead: http://phoenix.apache.org/download.html
Thanks,
James
On Tue, Sep 2, 2014 at 10:50 AM, 夏凉 wrote:
> Hi Alex,
>
> I changed the code, but it still doesn't work. It output following error
> messages:
>
> [
Hi Vikas,
Please post your schema and query as it's difficult to have a discussion
without those. Also if you could post your HBase code, that would be
interesting as well.
Thanks,
James
On Friday, September 5, 2014, yeshwanth kumar wrote:
> hi vikas,
>
> we used phoenix on a 4 core/23Gb machine
r the block cache after each run (if you
don't).
Thanks,
James
On Fri, Sep 5, 2014 at 9:00 AM, James Taylor wrote:
> Hi Vikas,
> Please post your schema and query as it's difficult to have a discussion
> without those. Also if you could post your HBase code, that would be
> in
Vikas,
Please post your schema and query.
Thanks,
James
On Fri, Sep 5, 2014 at 9:18 PM, Vikas Agarwal wrote:
> Ours is also a single node setup right now and as of now there are less than
> 1 million rows which is expected to grow around 100m at minimum.
>
> I am aware of secondary indexes but wh
> | null | null | table_name | LOCATIONS | 12 | VARCHAR | 255 | null | null | null | 1 | null |
>>> usual queries, I can help to design a schema with performance of less than 1
>>> sec using Phoenix.
>>>
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>>
>>> -- Original message--
>>>
>>>
>>>
>>>>>>
>>>>>> On Sat, Sep 6, 2014 at 12:04 PM, Vikas Agarwal
>>>>>> wrote:
>>>>>>>
>>>>>>> Yes, that is why it is a trouble for me. However, on contrary, HBase
>>>>
Hi Arbi,
I answered your questions over on stackoverflow:
http://stackoverflow.com/questions/25650815/phoenix-is-changing-the-meta-information-of-hbase-tables/25705838#25705838
Thanks,
James
On Fri, Sep 5, 2014 at 11:05 PM, Vikas Agarwal wrote:
> I don't think so, because it is intentional and th
Thanks, Puneet. That's super helpful. Was (2) difficult to do? That might
make an interesting blog if you're up for it. I'd be happy to post on your
behalf if that's helpful.
Thanks,
James
On Monday, September 8, 2014, Puneet Kumar Ojha
wrote:
> See Comments Inline
>
>
>
> Thanks
>
>
>
>
>
>
Hi Prakash,
If possible, it'd be helpful if you could describe your use case a bit.
Some questions I'd have for you: is the data over which you'd query
stored in HBase? And if so, would Hive run over the HBase data? Is
the data read-only or does it mutate? How much data are we talking
about (a
+1. Thanks, Alex. I added a blog pointing folks there as well:
https://blogs.apache.org/phoenix/entry/connecting_hbase_to_elasticsearch_through
On Wed, Sep 10, 2014 at 2:12 PM, Andrew Purtell wrote:
> Thanks for writing in with this pointer Alex!
>
> On Wed, Sep 10, 2014 at 11:11 AM, Alex Kamil
l just connect to Hbase
> securely & rely on the Hbase API to extract query reply, therefore Phoenix
> will depend on security mechanisms employed by Hbase API & will not provide
> any security feature by itself.
>
> Anil: Yes, that is true. At present, Phoenix does not provides
Hi Flavio,
We haven't done any comparison yet, but if you get a chance, please
report back. I think you'll need to use the 0.94 version of HBase,
though, as I don't believe Trafodion supports 0.98.
Thanks,
James
On Fri, Sep 12, 2014 at 1:50 AM, Flavio Pompermaier
wrote:
> Hi to all,
>
> I'm curre
Actually, we have more non salesforce developers at this point than
salesforce developers working on Phoenix. Many of them are remote, though:
New York, India, Sweden, ...
I like the idea, though. I'll start a discuss thread and if there's
sufficient interest, let's pursue it.
Thanks,
James
On M
Hi Abe,
You'll get best performance if you put the empty key value in the same
column family as your other column qualifiers (the ones you most
frequently filter on in your where clause). The column qualifier
should be named _0 and the value should be an empty byte array.
FYI, if you have an HBase
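A hedged sketch of writing that empty key value with the plain HBase client (only relevant for rows written outside of Phoenix; Phoenix's own UPSERT adds this cell for you). The table name "T" and family "CF" are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class EmptyKeyValueExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (HTable table = new HTable(conf, "T")) {
                Put put = new Put(Bytes.toBytes("rowkey1"));
                // The empty key value: qualifier "_0" with an empty byte array as the value,
                // placed in the same family as the most frequently filtered columns.
                put.add(Bytes.toBytes("CF"), Bytes.toBytes("_0"), HConstants.EMPTY_BYTE_ARRAY);
                table.put(put);
            }
        }
    }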
Good question on first versus last column. Believe it or not, the
order of column qualifiers has a bigger impact than you might think.
Anoop added a very nice optimization in our prior release for this and
Lars has done work at the HBase level to improve things as well. The
case it matters most is
Hi Flavio,
We'll do another release (3.2/4.2) pretty soon. I'd stick to a
released version, as the head of the branches fluctuates and makes no
guarantees for backward compat.
Would it be possible to elaborate on why you need the client jar on
maven central? If you follow Mujtaba's excellent advic
Hi Russell,
Doing surgery on the SYSTEM.CATALOG table is a recipe for disaster. Do you
have the DDL statements for the table, views, sequences, indexes you've
created? One option is to use the HBase shell to disable and drop the
SYSTEM.CATALOG table and then issue your DDL statements again. This wo
I see. That makes sense, but it's more of an HBase request than a
Phoenix request. If HBase had a "client-only" pom, then Phoenix could
have a "client-only" pom as well.
Thanks,
James
On Thu, Sep 18, 2014 at 1:52 PM, Flavio Pompermaier
wrote:
> Because it is not clear which are the exact depende
as
> printed, they simply don't reference anything so they can't get gotten or
> deleted.
>
> Thanks for the tip re: drop SYSTEM.CATALOG. I've seen this behavior
> actually, and didn't know it was something we could count on.
>
> On Thu, Sep 18, 2014 at
Take a look at my blog on how SequenceIQ set up a Docker image for
Phoenix+HBase to make it super easy to get started:
https://blogs.apache.org/phoenix/entry/getting_started_with_phoenix_just
Thanks,
James
p
>
>
> org.apache.hadoop
> hadoop-mapreduce-client-core
>
>
> org.apache.hadoop
> hadoop-annotations
>
>
> org.apache.phoenix
> phoenix-hadoop2-compat
>
>
>
>
> org.apache.hbase
> hbase-client
> 0.98.4-hadoop1
>
>
> o
Hi Mohamed,
Thanks for the detail on this issue. I filed and fixed the issue with
declaring a NOT NULL constraint on a non PK column (PHOENIX-1266).
We've never enforced a NOT NULL constraint on a non PK column, so we
shouldn't allow a declaration of it. Note that in most cases we
already caught th
+1 to doing the same for hbase-testing-util. Thanks for the analysis, Andrew!
James
On Mon, Sep 22, 2014 at 9:18 AM, Andrew Purtell wrote:
> On Thu, Sep 18, 2014 at 3:01 PM, James Taylor wrote:
>> I see. That makes sense, but it's more of an HBase request than a
>>
See SaltingUtil.getSaltingByte(byte[] value, int offset, int length,
int bucketNum)
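A conceptual sketch only: the salt byte is a deterministic hash of the row key taken modulo the bucket count. The hash function below is a placeholder, not Phoenix's; the real computation lives in SaltingUtil.getSaltingByte.

    public class SaltSketch {
        // Placeholder hash + modulo; see org.apache.phoenix.schema.SaltingUtil.getSaltingByte(...)
        // for the actual implementation used by Phoenix.
        static byte saltByte(byte[] rowKey, int offset, int length, int saltBuckets) {
            int hash = 1;
            for (int i = offset; i < offset + length; i++) {
                hash = 31 * hash + rowKey[i];
            }
            return (byte) (((hash % saltBuckets) + saltBuckets) % saltBuckets);
        }

        public static void main(String[] args) {
            byte[] key = "some-row-key".getBytes(java.nio.charset.StandardCharsets.UTF_8);
            System.out.println(saltByte(key, 0, key.length, 16)); // a value in [0, 16)
        }
    }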
On Mon, Sep 22, 2014 at 12:07 AM, Pariksheet Barapatre
wrote:
> Hello All,
>
> When I create phoenix table with salting how it calculates SALT hex.
>
> e.g.
> CREATE TABLE TEST.PHOENIX_TEST
> (
> TDM INTEGER NOT N
Hello,
We've been planning on dropping hadoop1 support for our 4.x releases
for a while now and it looks like it'll happen in 4.2. It'd be nice if
we could do the same for our 3.x releases, as the more similar the two
branches are, the less time it takes to keep them in sync.
Is anyone out there s
a fair amount of usage of HBase 0.94 on top of Hadoop 1.
> So maybe keep it alive in 3.0? 3.0 can be retired when HBase 0.94 is retired
> (although I have no plans for 0.94 retirement, yet).
>
> -- Lars
>
>
> - Original Message -
> From: James Taylor
> To
Hi JM,
Sure, you'd do that like this:
CREATE VIEW "t1" ( USER unsigned_long,
ID unsigned_long,
VERSION unsigned_long,
"f1".A unsigned_long,
"f1".R unsigned_long,
"f1".L unsigned_long,
"f1".W unsigned_long,
"f1".P bigint,
"f1".N varchar,
"f1".E varchar,
"f1".S unsigned_long,
The salt byte is the first byte in your row key, so the bucket count is
capped by the maximum value of a byte (i.e. it'll be 0-255).
On Wed, Sep 24, 2014 at 10:12 AM, Krishna wrote:
> Hi,
>
> According to Phoenix documentation
>
>> "Phoenix provides a way to transparently salt the row key with a salting
>> byte for a part
Would you be able to talk about your use case a bit and explain why you'd
need this to be higher?
Thanks,
James
On Wednesday, September 24, 2014, Krishna wrote:
> Thanks... any plans of raising number of bytes for salt value?
>
>
> On Wed, Sep 24, 2014 at 10:22 AM, Jame
Hey JM,
We'd like to support all of SQL-99 eventually, so based on that, it's
on our roadmap. Like most open source projects, we'd look for a
volunteer to take this on - it certainly meets the criteria of being
interesting.
I think priority-wise, it's lower than most of the join work
identified on
; populate the table with the top node and iterate until this request doesn't
> give me any result back.
>
> Thanks,
>
> JM
>
> 2014-09-24 23:43 GMT-04:00 James Taylor :
>
>> Hey JM,
>> We'd like to support all of SQL-99 eventually, so based on that, it