I am using Phoenix 5 as shipped with Hortonworks HDP 3.1. I am storing 3+
million file names in a table and then using the table to keep track of
which files I have processed using a Storm topology. I have been doing
some testing to make sure that everything is working correctly and as part
of th
Sorry if I have missed something obvious, but I saw that this was
implemented (according to JIRA) in 4.14 and 5.0.0. I need to set this up
to log each query in that SYSTEM:LOG table, but cannot seem to find the
official instructions for how to configure this through LOG4J settings or
whatever it i
I have a table (~724M rows) with a secondary index on the "TIME" column.
When I run a MIN function on the table, the query takes ~290 sec to
complete, whereas selecting on TIME and ORDERing by TIME runs in about
0.04 sec.
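For reference, a sketch of the two query shapes being compared (the table name is a placeholder; only the timings above are from the actual cluster):

```sql
-- Aggregate form: full scan of the data table (~290 s here)
SELECT MIN("TIME") FROM MY_TABLE;

-- Index-walk form: reads the first entry of the secondary index
-- on TIME and stops (~0.04 s here)
SELECT "TIME" FROM MY_TABLE ORDER BY "TIME" LIMIT 1;
```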
Here is the explain output for both queries...I totally understand
> harmful.
>
> Realistically, you might only need to provide HBASE_CONF_DIR to the
> HADOOP_CLASSPATH env variable, so that your mappers and reducers also
> get it on their classpath. The rest of the Java classes would be
> automatically localized via `hadoop jar`.
>
> On 11
ib/hbase-protocol.jar:/etc/hbase/
> 3.0.1.0-187/0/
>
> Most times, including the output of `hbase mapredcp` is sufficient ala
>
> HADOOP_CLASSPATH="$(hbase mapredcp)" hadoop jar ...
>
> On 11/27/18 10:48 AM, M. Aaron Bossert wrote:
> > Folks,
> >
> > I
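The classpath advice quoted above can be sketched as follows for the Phoenix CSV bulk load tool; the jar path, schema, table, and input path below are placeholders for your cluster, not taken from this thread:

```shell
# Sketch only: paths and names are assumptions, not from the thread.
export HBASE_CONF_DIR=/etc/hbase/conf
export HADOOP_CLASSPATH="$(hbase mapredcp):${HBASE_CONF_DIR}"

hadoop jar /usr/hdp/current/phoenix-client/phoenix-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MY_SCHEMA.MY_TABLE \
  --input /user/me/data.csv
```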
Folks,
I have, I believe, followed all the directions for turning on namespace
mapping, as well as the extra steps (added classpath entries) required to
use the MapReduce bulk load utility, but am still running into this
error...I am running a Hortonworks cluster with both HDP v3.0.1 and HDF
components.
He
I can't be definitive, but I have had a very similar issue in the past. The
root cause was that my NTP server had died and a couple of nodes in the
cluster got wildly out of sync. Check your HDFS health and whether there are
under-replicated blocks...this "could" be your issue (though root cause cou
e, which would be a future time stamp
> for HBase.
>
> Randy
>
> On Sat, Jun 17, 2017 at 8:40 PM, M. Aaron Bossert [via Apache Phoenix User
> List] wrote:
>
>> One potential difference might be resolution. The network packets have
>> precision down to microsecond.
tTimeMillis.html#unix-timestamp
>
> For testing purpose, maybe you can insert some cells without explicit
> timestamp to confirm whether timestamp is the issue.
>
> Randy
>
> On Sat, Jun 17, 2017 at 6:21 PM, M. Aaron Bossert [via Apache Phoenix User
> List] wrote:
>
> maybe you can insert some cells without explicit
> timestamp to confirm whether timestamp is the issue.
>
> Randy
>
> On Sat, Jun 17, 2017 at 6:21 PM, M. Aaron Bossert [via Apache Phoenix User
> List] wrote:
>
>> I looked through the discussion and it seems like their iss
ted.
> Here is the time stamp issue I discovered with view. The solution (work
> around) is in the last post:
>
> http://apache-phoenix-user-list.1124778.n5.nabble.com/View-timestamp-on-existing-table-potential-defect-td3475.html
>
> Randy
>
> On Fri, Jun 16, 2017 at 2:07 PM
*I have an existing HBase table with the following characteristics:*
hbase(main):032:0> describe 'bro'
Table bro is ENABLED
bro, {TABLE_ATTRIBUTES => {coprocessor$1 =>
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
coprocessor$2 =>
'|org.apache.phoenix.coprocessor.UngroupedAg
Nice! It's a start...unfortunately, I use the OS X version.
--
Aaron
> On Jan 28, 2016, at 2:26 PM, Thomas Decaux wrote:
>
> They said it is only for the Windows OS
>
> On Jan 28, 2016 at 6:36 PM, "Aaron Bossert" wrote:
>> Sorry for butting in, but do you mean th
Sorry for butting in, but do you mean that Tableau supports JDBC drivers? I
have wanted to connect Phoenix to Tableau for some time now as well, but have
not seen any documentation from Tableau to suggest that they now support JDBC
drivers. Just references to using a JDBC-ODBC bridge driver, w
construct 250K*270M pairs
> before filtering them. That's 67.5 trillion. You will need a quantum computer.
>
> I think you will be better off restructuring...
>
> James
>
>> On 11 Sep 2015 5:34 pm, "M. Aaron Bossert" wrote:
>> AH! Now I get it...I
use Phoenix was doing CROSS JOIN which
> made progressing with each row very slow.
> Even if it could succeed, it would take a long time to complete.
>
> Thanks,
> Maryann
>
> On Fri, Sep 11, 2015 at 11:58 AM, M. Aaron Bossert
> wrote:
>
>> So, I've tried it bot
s from the
> right side before the condition is applied to filter the joined result.
> Try switching the left table and the right table in your query to see if it
> will work a little better?
>
>
> Thanks,
> Maryann
>
> On Fri, Sep 11, 2015 at 10:06 AM, M. Aaron Bossert
WHERE
> FORC.SOURCE_IP >= IPV4.IPSTART AND FORC.SOURCE_IP <= IPV4.IPEND;
>
> Best,
> -Jaime
>
> On Thu, Sep 10, 2015 at 10:59 PM, M. Aaron Bossert
> wrote:
>
>> I am trying to execute the following query, but get an error...is there
>> another way to achie
I am trying to execute the following query, but get an error...is there
another way to achieve the same result by restructuring the query?
QUERY:
SELECT * FROM NG.AKAMAI_FORCEFIELD AS FORC INNER JOIN NG.IPV4RANGES AS IPV4
ON FORC.SOURCE_IP >= IPV4.IPSTART AND FORC.SOURCE_IP <= IPV4.IPEND;
ERROR
Thanks!
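One workaround sometimes suggested for range-predicate joins like the one above is Phoenix's sort-merge join hint, since the default hash join only handles equi-join keys; whether it would perform acceptably at this scale is not established in this thread:

```sql
SELECT /*+ USE_SORT_MERGE_JOIN */ *
FROM NG.AKAMAI_FORCEFIELD AS FORC
INNER JOIN NG.IPV4RANGES AS IPV4
  ON FORC.SOURCE_IP >= IPV4.IPSTART AND FORC.SOURCE_IP <= IPV4.IPEND;
```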
Sent from my iPhone
> On Sep 10, 2015, at 4:52 PM, James Heather wrote:
>
> You want to use an UPSERT SELECT.
>
> UPSERT INTO mytable (primary_key_col, PULL_DATE) SELECT primary_key_col,
> CURRENT_DATE() FROM mytable;
>
> James
>
>> On 10/09
I have a table that requires an additional column with the current date
added. I realize that I can't do this using the CSV importer...How can I
do that in a query? I tried standard SQL with UPSERT instead of INSERT:
UPSERT INTO NG.BARS_CNC_DETAILS_HIST SET PULL_DATE=CURRENT_DATE();
but this g
ions of HBase / Phoenix are you using?
>
>> On Tue, Sep 8, 2015 at 12:33 PM, M. Aaron Bossert
>> wrote:
>> All,
>>
>> Any help would be greatly appreciated...
>>
>> I have two tables with the following structure:
>>
>> CREATE TA
All,
Any help would be greatly appreciated...
I have two tables with the following structure:
CREATE TABLE NG.BARS_Cnc_Details_Hist (
ip varchar(30) not null,
last_active date not null,
cnc_type varchar(5),
cnc_value varchar(50),
pull_date date CONSTRAINT cnc_pk PRIMARY KEY(ip,last_active,