inconsistent commit behavior when using JDBC

2019-05-22 Thread M. Aaron Bossert
I am using Phoenix 5 as shipped with Hortonworks HDP 3.1. I am storing 3+ million file names in a table and then using the table to keep track of which files I have processed using a Storm topology. I have been doing some testing to make sure that everything is working correctly and as part of th
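
The preview is cut off above, but the usual source of "inconsistent" commit visibility with the Phoenix JDBC driver is the autocommit setting: Phoenix connections default to autoCommit=false, so UPSERTs are buffered client-side until commit(). A minimal sketch, assuming a hypothetical FILE_TRACKING table and a placeholder JDBC URL:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CommitBehavior {
        public static void main(String[] args) throws Exception {
            // Placeholder URL -- point this at your own ZooKeeper quorum.
            String url = "jdbc:phoenix:zk-host:2181:/hbase";
            try (Connection conn = DriverManager.getConnection(url)) {
                // Phoenix defaults to autoCommit=false: UPSERTs are buffered
                // client-side and only become visible after commit().
                conn.setAutoCommit(false);
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPSERT INTO FILE_TRACKING (FILE_NAME, PROCESSED) VALUES (?, ?)")) {
                    ps.setString(1, "part-00001.gz");
                    ps.setBoolean(2, true);
                    ps.executeUpdate();
                }
                conn.commit(); // without this, other clients never see the row
            }
        }
    }

If a Storm topology writes on several connections, each one sees only its own uncommitted buffer, which can look like inconsistent behavior until every writer commits.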

Re: Query logging - PHOENIX-2715

2019-04-19 Thread M. Aaron Bossert
Sorry if I have missed something obvious, but I saw that this was implemented (according to JIRA) in 4.14 and 5.0.0. I need to set this up to log each query in that SYSTEM:LOG table, but cannot seem to find the official instructions for how to configure this through LOG4J settings or whatever it i
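
For reference, the PHOENIX-2715 query log is switched on with a client-side Phoenix property rather than LOG4J. A sketch, assuming the phoenix.log.level key from that JIRA (verify the exact key against your 4.14/5.0 docs) and a placeholder URL:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class QueryLogSetup {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Assumed key from PHOENIX-2715; levels are OFF/INFO/DEBUG/TRACE.
            props.setProperty("phoenix.log.level", "DEBUG");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181:/hbase", props);
                 Statement st = conn.createStatement()) {
                st.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1").close();
                // Executed statements should then appear in SYSTEM.LOG
                // (SYSTEM:LOG when namespace mapping is enabled).
            }
        }
    }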

unexpected behavior...MIN vs ORDER BY and LIMIT 1

2019-01-15 Thread M. Aaron Bossert
I have a table (~724M rows) with a secondary index on the "TIME" column. When I run a MIN function on the table, the query takes ~290 sec to complete, yet selecting TIME with ORDER BY TIME and LIMIT 1 runs in about 0.04 sec. Here is the explain output for both queries...I totally understand
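
The two query shapes being compared look roughly like this (EVENTS stands in for the real table; TIME is the indexed column from the post). The aggregate form evidently ran as a full-table scan per the thread's timings, while the sorted index lets the second form stop after one row:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MinVsOrderBy {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181:/hbase");
                 Statement st = conn.createStatement()) {
                // Aggregate form: ~290 sec in the thread -- evaluated as a
                // scan-and-aggregate over all rows.
                try (ResultSet rs = st.executeQuery("SELECT MIN(TIME) FROM EVENTS")) {
                    if (rs.next()) System.out.println("min = " + rs.getTimestamp(1));
                }
                // Index form: the secondary index on TIME is stored sorted, so
                // the scan can return after a single row -- hence ~0.04 sec.
                try (ResultSet rs = st.executeQuery(
                        "SELECT TIME FROM EVENTS ORDER BY TIME LIMIT 1")) {
                    if (rs.next()) System.out.println("first = " + rs.getTimestamp(1));
                }
            }
        }
    }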

Re: client does not have phoenix.schema.isNamespaceMappingEnabled

2018-11-30 Thread M. Aaron Bossert
> harmful. > > Realistically, you might only need to provide HBASE_CONF_DIR to the > HADOOP_CLASSPATH env variable, so that your mappers and reducers also > get it on their classpath. The rest of the Java classes would be > automatically localized via `hadoop jar`. > > On 11

Re: client does not have phoenix.schema.isNamespaceMappingEnabled

2018-11-29 Thread M. Aaron Bossert
lib/hbase-protocol.jar:/etc/hbase/3.0.1.0-187/0/ > > Most times, including the output of `hbase mapredcp` is sufficient, a la > > HADOOP_CLASSPATH="$(hbase mapredcp)" hadoop jar ... > > On 11/27/18 10:48 AM, M. Aaron Bossert wrote: > > Folks, > > > > I

client does not have phoenix.schema.isNamespaceMappingEnabled

2018-11-27 Thread M. Aaron Bossert
Folks, I have, I believe, followed all the directions for turning on namespace mapping, as well as the extra steps (added classpath entries) required to use the MapReduce bulk load utility, but am still running into this error...I am running a Hortonworks cluster with both HDP v 3.0.1 and HDF components. He
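
For anyone hitting the same error: the namespace-mapping property has to agree on both sides. Server-side it lives in the region servers' hbase-site.xml; client-side, a sketch like the following (placeholder URL; property names per the Phoenix namespace-mapping docs) makes a mismatch easy to reproduce or rule out:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class NamespaceMappingClient {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Both values must match the region servers' hbase-site.xml --
            // a mismatch is what produces the error in this thread.
            props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
            props.setProperty("phoenix.schema.mapSystemTablesToNamespace", "true");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:phoenix:zk-host:2181:/hbase", props)) {
                System.out.println("connected; namespace mapping is consistent");
            }
        }
    }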

Re: org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8689

2017-07-06 Thread M. Aaron Bossert
I can't be definitive, but I have had a very similar issue in the past. The root cause was that my NTP server had died and a couple of nodes in the cluster got wildly out of sync. Check your HDFS health and whether there are under-replicated blocks...this "could" be your issue (though the root cause cou

Re: Create View of Existing HBase table

2017-06-19 Thread M. Aaron Bossert
e, which would be a future time stamp > for HBase. > > Randy > > On Sat, Jun 17, 2017 at 8:40 PM, M. Aaron Bossert [via Apache Phoenix User > List] wrote: > >> One potential difference might be resolution. The network packets have >> precision down to microsecond.

Re: Create View of Existing HBase table

2017-06-17 Thread M. Aaron Bossert
tTimeMillis.html#unix-timestamp > > For testing purposes, maybe you can insert some cells without explicit > timestamp to confirm whether timestamp is the issue. > > Randy > > On Sat, Jun 17, 2017 at 6:21 PM, M. Aaron Bossert [via Apache Phoenix User > List] wrote: >

Re: Create View of Existing HBase table

2017-06-17 Thread M. Aaron Bossert
maybe you can insert some cells without explicit > timestamp to confirm whether timestamp is the issue. > > Randy > > On Sat, Jun 17, 2017 at 6:21 PM, M. Aaron Bossert [via Apache Phoenix User > List] wrote: > >> I looked through the discussion and it seems like their iss

Re: Create View of Existing HBase table

2017-06-17 Thread M. Aaron Bossert
ted. > Here is the time stamp issue I discovered with view. The solution (work > around) is in the last post: > > http://apache-phoenix-user-list.1124778.n5.nabble.com/View-timestamp-on-existing-table-potential-defect-td3475.html > > Randy > > On Fri, Jun 16, 2017 at 2:07 PM

Create View of Existing HBase table

2017-06-16 Thread M. Aaron Bossert
*I have an existing HBase table with the following characteristics:* hbase(main):032:0> describe 'bro' Table bro is ENABLED bro, {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAg
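
A sketch of the usual mapping statement for a case like this. The column family and qualifier below ("d", "ts") are placeholders, since the describe output above is truncated; the Phoenix types have to match the bytes HBase already stores, or values will decode incorrectly:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ViewOverExistingTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181:/hbase");
                 Statement st = conn.createStatement()) {
                // Double quotes preserve the lowercase HBase table name 'bro';
                // "d"."ts" is a hypothetical family.qualifier pair.
                st.execute("CREATE VIEW \"bro\" ("
                         + " pk VARCHAR PRIMARY KEY,"
                         + " \"d\".\"ts\" UNSIGNED_LONG)");
            }
        }
    }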

Re: Phoenix and Tableau

2016-01-28 Thread Aaron Bossert
Nice! It's a start...unfortunately, I use the OS X version. -- Aaron > On Jan 28, 2016, at 2:26 PM, Thomas Decaux wrote: > > They said only for Windows OS > > On Jan 28, 2016 6:36 PM, "Aaron Bossert" wrote: >> Sorry for butting in, but do you mean th

Re: Phoenix and Tableau

2016-01-28 Thread Aaron Bossert
Sorry for butting in, but do you mean that Tableau supports JDBC drivers? I have wanted to connect Phoenix to Tableau for some time now as well, but have not seen any documentation from Tableau to suggest that they now support JDBC drivers. Just references to using a JDBC-ODBC bridge driver, w

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread Aaron Bossert
construct 250K*270M pairs > before filtering them. That's 67.5 trillion. You will need a quantum computer. > > I think you will be better off restructuring... > > James > >> On 11 Sep 2015 5:34 pm, "M. Aaron Bossert" wrote: >> AH! Now I get it...I

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread M. Aaron Bossert
because Phoenix was doing a CROSS JOIN, which > made progressing with each row very slow. > Even if it could succeed, it would take a long time to complete. > > Thanks, > Maryann > > On Fri, Sep 11, 2015 at 11:58 AM, M. Aaron Bossert > wrote: > >> So, I've tried it bot

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread M. Aaron Bossert
rows from the > right side before the condition is applied to filter the joined result. > Try switching the left table and the right table in your query to see if it > will work a little better? > > > Thanks, > Maryann > > On Fri, Sep 11, 2015 at 10:06 AM, M. Aaron Bossert

Re: yet another question...perhaps dumb...JOIN with two conditions

2015-09-11 Thread M. Aaron Bossert
WHERE > FORC.SOURCE_IP >= IPV4.IPSTART AND FORC.SOURCE_IP <= IPV4.IPEND; > > Best, > -Jaime > > On Thu, Sep 10, 2015 at 10:59 PM, M. Aaron Bossert > wrote: > >> I am trying to execute the following query, but get an error...is there >> another way to achie

yet another question...perhaps dumb...JOIN with two conditions

2015-09-10 Thread M. Aaron Bossert
I am trying to execute the following query, but get an error...is there another way to achieve the same result by restructuring the query? QUERY: SELECT * FROM NG.AKAMAI_FORCEFIELD AS FORC INNER JOIN NG.IPV4RANGES AS IPV4 ON FORC.SOURCE_IP >= IPV4.IPSTART AND FORC.SOURCE_IP <= IPV4.IPEND; ERROR
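
As the replies above show, Jaime's suggestion was to move the range predicate out of the ON clause into WHERE, and Maryann's caveat was that Phoenix then executes it as a CROSS JOIN (250K x 270M candidate pairs), so it is only viable after restructuring. A sketch of that rewrite, assuming your Phoenix version accepts the implicit comma join:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RangeJoinRewrite {
        public static void main(String[] args) throws Exception {
            // The ON-clause form in the original post errored, so the range
            // check moves into WHERE. Per Maryann's reply this still runs as
            // a CROSS JOIN, so expect it to be very slow at this scale.
            String sql = "SELECT * FROM NG.AKAMAI_FORCEFIELD FORC, NG.IPV4RANGES IPV4"
                       + " WHERE FORC.SOURCE_IP >= IPV4.IPSTART"
                       + " AND FORC.SOURCE_IP <= IPV4.IPEND";
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181:/hbase");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) System.out.println(rs.getString("SOURCE_IP"));
            }
        }
    }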

Re: perhaps dumb question? workaround for default values

2015-09-10 Thread Aaron Bossert
Thanks! Sent from my iPhone > On Sep 10, 2015, at 4:52 PM, James Heather wrote: > > You want to use an UPSERT SELECT. > >UPSERT INTO mytable (primary_key_col, PULL_DATE) SELECT primary_key_col, > CURRENT_DATE() FROM mytable; > > James > >> On 10/09

perhaps dumb question? workaround for default values

2015-09-10 Thread M. Aaron Bossert
I have a table that requires an additional column with the current date added. I realize that I can't do this using the CSV importer...How can I do that in a query? I tried standard SQL with UPSERT instead of INSERT: UPSERT INTO NG.BARS_CNC_DETAILS_HIST SET PULL_DATE=CURRENT_DATE(); but this g
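
James Heather's answer (quoted above) in runnable form, using this thread's table. The CREATE TABLE further down the digest is truncated mid-primary-key, so the two PK columns below are an assumption; every primary-key column must appear in both the column list and the SELECT list:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DefaultValueBackfill {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:zk-host:2181:/hbase");
                 Statement st = conn.createStatement()) {
                // Rewrites every existing row, populating PULL_DATE with
                // today's date. Add any further PK columns from the real DDL.
                st.executeUpdate(
                    "UPSERT INTO NG.BARS_CNC_DETAILS_HIST (IP, LAST_ACTIVE, PULL_DATE)"
                  + " SELECT IP, LAST_ACTIVE, CURRENT_DATE()"
                  + " FROM NG.BARS_CNC_DETAILS_HIST");
                conn.commit();
            }
        }
    }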

Re: JOIN issue, getting errors

2015-09-09 Thread Aaron Bossert
What versions of HBase / Phoenix are you using? > >> On Tue, Sep 8, 2015 at 12:33 PM, M. Aaron Bossert >> wrote: >> All, >> >> Any help would be greatly appreciated... >> >> I have two tables with the following structure: >> >> CREATE TA

JOIN issue, getting errors

2015-09-08 Thread M. Aaron Bossert
All, Any help would be greatly appreciated... I have two tables with the following structure: CREATE TABLE NG.BARS_Cnc_Details_Hist ( ip varchar(30) not null, last_active date not null, cnc_type varchar(5), cnc_value varchar(50), pull_date date CONSTRAINT cnc_pk PRIMARY KEY(ip,last_active,