Correction - the function is meant to DECODE the double columns, not encode
them.
Sent from my iPhone
On Oct 17, 2016, at 2:43 AM, Michael McAllister <mmcallis...@homeaway.com> wrote:
Hi
I’m trying to access a Phoenix table from Datameer. Datameer can read HBase,
but doesn’t support using the Phoenix JDBC driver, as the Phoenix JDBC driver
only does a partial implementation of the JDBC standard.
To get around this I’m using the Datameer SDK to write a function that will decode the double columns.
> there is no indexing available on this table yet.
So you haven’t defined a primary key constraint? Can you share your table
creation DDL?
Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7
expensive.
This is really an ANSI SQL question. If you use an aggregate function, then you
need to specify what columns to group by. Any columns not being referenced in
the aggregate function(s) need to be in the GROUP BY statement.
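For example, a hypothetical query over made-up table and column names:

```sql
-- Every non-aggregated column in the SELECT list must also
-- appear in the GROUP BY clause.
SELECT region, product, SUM(sale_amount) AS total_sales
FROM sales
GROUP BY region, product;
```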
You did alter specific tables to enable region replication on them, or create
new tables with region replication, correct?
Nithin
> when I tried enabling region replication, I was not able to bring the HBase cluster.
I’m not sure what you mean here. Specifically referring to “bring the HBase
cluster”.
Phoenix
connection can be made.
Not yet, sorry.
Can you please provide the sample rowkey? It is blank in your email below.
Alternatively, an UPSERT VALUES statement I can use to insert a row that I can
work with myself.
Can you provide an example of one of the rowkeys, the values you are expecting
out of it, and the full table definition? Of importance in the table definition
will be whether you have salt buckets defined.
Anil
If you split the rowkey on the zero byte character, you should end up with the
individual columns that made up your primary key. Details are here:-
https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
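As a sketch (table and column names are made up), a composite-key table like this shows how the rowkey is assembled:

```sql
CREATE TABLE metrics (
    host VARCHAR NOT NULL,
    metric VARCHAR NOT NULL,
    val DOUBLE
    CONSTRAINT pk PRIMARY KEY (host, metric)
);
-- The HBase rowkey for ('web01', 'cpu') is the concatenation
-- 'web01' + \x00 + 'cpu': variable-length PK columns are separated
-- by a zero byte, so splitting the rowkey on \x00 recovers host
-- and metric. (Fixed-width types have no separator.)
```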
Kevin
You should be able to do it yourself:-
https://phoenix.apache.org/mailing_list.html
schema named (null).
)
INCLUDE ( c3 );
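The DDL above is truncated; a complete covered-index statement of that shape (index, table, and column names assumed) would look like:

```sql
-- c3 is stored in the index rows, so queries filtering on c1/c2
-- that also select c3 can be served from the index alone.
CREATE INDEX idx_example ON my_table ( c1, c2 )
INCLUDE ( c3 );
```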
Seeing as we’re talking COALESCE and NULLs, depending on the version Ankit is
running, this could also be the issue in PHOENIX-2994:-
https://issues.apache.org/jira/browse/PHOENIX-2994
Perhaps you could create a test case with table creation, upsert and then the
select that reproduces and illustrates the problem?
All
Are there any known issues if we use the hbase shell to alter a Phoenix table
to apply compression? We're currently using Phoenix 4.4 on HDP 2.3.4.
I plan on testing, but also want to double check for any gotchas.
Technically, it depends on which version of HDP you are on. Here are the
versions:-
HDP 2.1 = Apache Phoenix 4.0
HDP 2.2 = Apache Phoenix 4.2
HDP 2.3 = Apache Phoenix 4.4
HDP 2.4 = Apache Phoenix 4.4
(From this page -> http://hortonworks.com/hdp/whats-new/)
Use ORDER BY in your SELECT statement:-
https://phoenix.apache.org/language/#select
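A minimal sketch with made-up names:

```sql
-- Sort results newest first; ORDER BY is evaluated over the
-- full result set before rows are returned.
SELECT host, created_date
FROM event_log
ORDER BY created_date DESC;
```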
Ns
> between current_date() and (current_date()-2000)
Shouldn’t this be in the format “BETWEEN <lower bound> AND <upper bound>”?
Looks like you have things reversed.
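BETWEEN expects the lower bound first, so with the expressions from the quoted query the predicate would read (column name assumed):

```sql
-- Lower bound first, upper bound second: 2000 days ago up to today.
WHERE event_date BETWEEN (CURRENT_DATE() - 2000) AND CURRENT_DATE()
```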
... however I was thinking that HA was enabled on HBase tables simply by
setting region replication to 2 or 3?
Thanks, that one worked :-)
On Sep 22, 2015, at 3:28 PM, James Heather <james.heat...@mendeley.com> wrote:
Try the one you missed:
SYSTEM."SEQUENCE"
i.e., quote the bits separately (but SYSTEM doesn't need quoting), and put it
in caps.
James
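That is, quoting only the table part, a query like this should work:

```sql
-- SYSTEM is a schema name and needs no quoting; SEQUENCE is a
-- reserved word, so the table part must be quoted, in upper case.
SELECT * FROM SYSTEM."SEQUENCE" LIMIT 10;
```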
On 22/09/15 21:1
2 (42M03): Table undefined. tableName=system.sequence
(state=42M03,code=1012)
0: jdbc:phoenix:redacted,> select count(*) from "SYSTEM.SEQUENCE";
Error: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.SEQUENCE
(state=42M03,code=1012)
Syntax error. Mismatched input. Expecting "NAME", got
"sequence" at line 1, column 29. (state=42P00,code=604)
I do understand this is a system table, but it would be nice to see inside it.
This is from Apache Phoenix 4.2 on HDP 2.2.6.
Mujtaba
Thanks for this information. Seeing as I am using Phoenix 4.2, what is the safe
and approved sequence of steps to drop this table and recreate it as you
mention? Additionally, how do we ensure we don’t lose sequence data?
regions on them. That
means inefficient use of our cluster.
Have there been any best practices developed as to how to deal with this
situation?
I think you need a comma between your column definition and your constraint
definition.
On Sep 1, 2015, at 2:54 PM, Serega Sheypak <serega.shey...@gmail.com> wrote:
Hi, I wrote an integration test that uses the HBase testing utility and Phoenix. The test
creates a table and inserts data. It works fine
Hi
I am trying to test the performance of a query, and I want to make sure I am
not getting cached results from a previous call. Is there anything to be done
to clear any notional cache?
develop a theory on when it starts to happen.
On Wed, Aug 26, 2015 at 7:48 AM, Michael McAllister <mmcallis...@homeaway.com> wrote:
OK, and then delete the table within HBase as well I assume?
From: James Taylor <jamestay...@apache.org>
table. Something like the following:
DELETE FROM SYSTEM.CATALOG WHERE TENANT_ID IS NULL
AND TABLE_SCHEM = 'MYSCHEMA'
AND TABLE_NAME = 'MYTABLE';
You'll need to bounce your cluster to cause the server-side cache to be cleared
too.
On Tue, Aug 25, 2015 at 10:29 AM, Micha
is get them dropped.
, 1 errors]
...functions were added in 4.4 that would help you if you need to extract the day/month/year. Perhaps you can backport them or upgrade? One more built-in that might help you is the TRUNC function, which lets you "bucketize" based on HOUR, MINUTE, SECOND, DAY, etc.
Thanks,
James
On Tue, Aug 4, 2015 at
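A sketch of the TRUNC-based bucketing mentioned above (table and column names assumed):

```sql
-- TRUNC rounds each timestamp down to the start of the given
-- time unit, so rows group into day-sized buckets.
SELECT TRUNC(event_time, 'DAY') AS day_bucket, COUNT(*)
FROM events
GROUP BY TRUNC(event_time, 'DAY');
```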
Hi
I have a table with a date stored in it. The date is always the last day of a
month. For example, 31-AUG-2015 or 28-FEB-2015. I have the date stored in two
separate columns. In one place it is an integer (20150831 and 20150228). In
another place it uses the DATE datatype.
I want to write a
Yes it’s true as far as I can tell, you’ll at least have to get the PK columns
as well.
Satya
Try the UPSERT SELECT statement:-
https://phoenix.apache.org/language/#upsert_select
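For example (table and column names made up):

```sql
-- Copy rows from a staging table into the target; rows with an
-- existing primary key are overwritten, new keys are inserted.
UPSERT INTO target_table (id, name, val)
SELECT id, name, val FROM staging_table;
```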
the attached log:-
Line 33 - first delete
Line 39 - shows delete was successful as count(*) returns no rows for the key
deleted.
Line 53 - second delete reports 1 row affected
..." which is as expected. For the second one, it's
"CLIENT 67-CHUNK SERIAL 1-WAY REVERSE FULL SCAN OVER TIMESTAMP_INDEX; SERVER
FILTER BY FIRST KEY ONLY; SERVER 1 ROW LIMIT" which looks correct, but still
returns the unexpected result.
2015-06-26 16:59 GMT-07:00 Michael McAllister:
Yufan
Have you tried using the EXPLAIN command to see what plan is being used to
access the data?
er, you could do this:
select count(distinct case when col1 > col2 then col1 || col2 else col2 || col1
end) ...
Thanks,
James
On Tue, Jun 23, 2015 at 6:56 PM, Michael McAllister <mmcallis...@homeaway.com> wrote:
> Your second query should work with Phoenix 4.3 or later.
Thanks, unfortunately at the moment I’m stuck with Phoenix 4.2.
> I will investigate the problem with the first one and get back to you.
Appreciate this.
Hi
(This questions relates to Phoenix 4.2 on HDP 2.2)
I have a situation where I want to count the distinct combination of a couple
of columns.
When I try the following:-
select count(distinct a.col1, b.col2)
from tab1 a
inner join tab2 b on b.joincol = a.joincol
where a.col3 = ‘some con
http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html
On Mon, Jun 22, 2015 at 5:07 PM Michael McAllister <mmcallis...@homeaway.com> wrote:
Hi everyone
New user, first time post. I have a question about the TO_DATE function seeming
to misbehave. I issue the following