Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-24 Thread Steve Terrell
the Pig script as an EMR step to see if I get better results. Thanks, Steve On Mon, Aug 21, 2017 at 4:48 PM, Steve Terrell wrote: > Thanks for the extra info! Will let everyone know if I solve this. > > On Mon, Aug 21, 2017 at 4:24 PM, anil gupta wrote: > >> And forgot

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread Steve Terrell
> hadoop.jar:/usr/share/aws/emr/goodies/lib/emr-hadoop-goodies >> .jar:/usr/share/aws/emr/kinesis/lib/emr-kinesis-hadoop >> .jar:/usr/share/aws/emr/cloudwatch-sink/lib/*:/usr/share/ >> aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/ >> usr/lib/hadoop-yarn/.//*:/u

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread Steve Terrell
2500'); Calling directly from the command line like pig try.pig Maybe other people are calling their Phoenix Pig script some other way (EMR steps) or with different parameters? Details where this works would really help out a lot. Thanks, Steve On Mon, Aug 21, 2017 at 10:23 AM, Steve

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-21 Thread Steve Terrell
anil gupta wrote: > Hey Steve, > > We are currently using EMR5.2 and pig-phoenix is working fine for us. We > are gonna try EMR5.8 next week. > > HTH, > Anil > > On Fri, Aug 18, 2017 at 9:00 AM, Steve Terrell > wrote: > >> More info... >> >> By trial

Re: Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-18 Thread Steve Terrell
hope this list saves other people some time and headache. Thanks, Steve On Thu, Aug 17, 2017 at 2:40 PM, Steve Terrell wrote: > I'm running EMR 5.8.0 with these applications installed: > Pig 0.16.0, Phoenix 4.11.0, HBase 1.3.1 > > Here is my pig script (try.pig): >

Phoenix Storage Not Working on AWS EMR 5.8.0

2017-08-17 Thread Steve Terrell
I'm running EMR 5.8.0 with these applications installed: Pig 0.16.0, Phoenix 4.11.0, HBase 1.3.1 Here is my pig script (try.pig): REGISTER /usr/lib/phoenix/phoenix-4.11.0-HBase-1.3-client.jar; A = load '/steve/a.txt' as (TXT:chararray); store A into 'hbase://A_TABLE' using org.apache.phoenix.pig.

Re: Random rows

2017-03-28 Thread Steve Terrell
Here's what I do in one of my applications. A two-step process minimum (three if you get a total row count first): upsert into DEMO(KEY_FIELD_1,KEY_FIELD_2,"random_sample" boolean) select KEY_FIELD_1,KEY_FIELD_2,(rand()<(50.0/1000)) Where in this example, I want to randomly select 50 rows from a
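The two-step flow above can be sketched as follows. The table and column names mirror the DEMO example in the message; the Python helper itself is a hypothetical illustration that only assembles the UPSERT statement, not code from the thread:

```python
# Hypothetical helper: build step one of the sampling flow, an UPSERT that
# flags roughly sample_size of total_rows rows via a RAND() comparison.
def sample_upsert_sql(table, key_cols, flag_col, sample_size, total_rows):
    p = sample_size / total_rows  # e.g. 50 / 1000 -> 0.05
    cols = ",".join(key_cols)
    return (
        f'UPSERT INTO {table}({cols},"{flag_col}" BOOLEAN) '
        f"SELECT {cols},(RAND()<{p}) FROM {table}"
    )

sql = sample_upsert_sql("DEMO", ["KEY_FIELD_1", "KEY_FIELD_2"],
                        "random_sample", 50, 1000)
```

Step two (or three, with a prior total-row count) would then select the rows where "random_sample" is true.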

Re: MultipleInput in Phoenix mapreduce job

2017-03-24 Thread Steve Terrell
I have been using https://phoenix.apache.org/pig_integration.html for years with much success. Hope this helps, Steve On Fri, Mar 24, 2017 at 7:40 AM, Anil wrote: > Hi, > > I have two table called PERSON and PERSON_DETAIL. i need to populate the > of the person Detail info into Person recor

Re: Problems running queries (not same amount of results)

2016-09-22 Thread Steve Terrell
Could it be due to a mistake in your SQL? select c1, c4, c5 from TABLE1 where (c4 = 'B')) AND (c1 <= TO_DATE('22.09.2016 17:15:59', 'dd.MM. HH:mm:ss')); looks like an out-of-place ")". On Thu, Sep 22, 2016 at 4:10 AM, Jure Buble wrote: > Hi, > > Anyone faced same problems as we are? > >

Re: Using COUNT() with columns that don't use COUNT() when the table is join fails

2016-09-19 Thread Steve Terrell
I'm not an expert in traditional SQL or in Phoenix SQL, but my best guess is "probably not". But I'm curious as to why you would like to avoid the group by or the list of columns. I know it looks very wordy, but are there any technical reasons? In my experience SQL is hard to read by human eyes

Re: Using COUNT() with columns that don't use COUNT() when the table is join fails

2016-09-19 Thread Steve Terrell
Hi! I think you need something like group by u.first_name on the end. Best guess. :) On Sun, Sep 18, 2016 at 11:03 PM, Cheyenne Forbes < cheyenne.osanu.for...@gmail.com> wrote: > this query fails: > > SELECT COUNT(fr.friend_1), u.first_name >> >> FROM users AS u >> >> LEFT JOIN

Re: create schema on write

2016-06-03 Thread Steve Terrell
ion and adding each field through ALTER VIEW calls. > > This is how we've modeled time-series data in support of Argus[1], not as > JSON in this case, but as tags and a metric value. > > HTH. Thanks, > James > > [1] https://github.com/SalesforceEng/Argus > > On

Re:

2016-06-03 Thread Steve Terrell
ically tracking schema as data is processed, a pretty common > pattern. > > Thanks, > James > > > On Friday, June 3, 2016, Steve Terrell wrote: > >> I have a similar situation. I have records with varying fields that I >> wanted to access individually and also as

Re:

2016-06-03 Thread Steve Terrell
I have a similar situation. I have records with varying fields that I wanted to access individually and also as a group. My actual records are JSON objects, so they look like this: {"field1": value1, "field2": value2, …} To make matters harder, the fields are also varying types: ints, string
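For records like the JSON above, one way to derive a dynamic-column type per field is a small type map. The mapping below is my own assumption about suitable Phoenix types, not something stated in the thread:

```python
import json

# Hypothetical mapping from JSON value types to Phoenix column types for
# use as dynamic columns; the chosen types are assumptions.
TYPE_MAP = {str: "VARCHAR", int: "BIGINT", float: "DOUBLE", bool: "BOOLEAN"}

def dynamic_columns(record_json):
    """Return (field_name, phoenix_type) pairs for one JSON record."""
    record = json.loads(record_json)
    # Exact-type lookup so bool does not fall through to int's mapping.
    return [(name, TYPE_MAP.get(type(value), "VARCHAR"))
            for name, value in record.items()]

cols = dynamic_columns('{"field1": 7, "field2": "abc"}')
# cols == [("field1", "BIGINT"), ("field2", "VARCHAR")]
```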

Re: FOREIGN KEY

2016-05-12 Thread Steve Terrell
If you don't have any unique data, you could use a Phoenix Sequence to generate keys as you upsert, or some kind of GUID. On Thu, May 12, 2016 at 8:22 AM, Ciureanu Constantin < ciureanu.constan...@gmail.com> wrote: > CREATE TABLE IF NOT EXISTS TELEPHON

Re: Failed to make the connection

2016-04-25 Thread Steve Terrell
Are you using Amazon EMR as your cluster? Are you trying to connect to Phoenix on an EMR master or from outside the cluster? On Mon, Apr 25, 2016 at 8:25 AM, Asanka Sanjaya Herath wrote: > I'm using simple phoenix hello world program in a amazon cluster. When I > run *./sqlline.py * it connects

Re: prepareAndExecute with UPSERT not working

2016-04-14 Thread Steve Terrell
I found it much easier and more reliable to make my own Phoenix HTTP server with my own JSON API. It was too confusing for me to send multiple requests for what would normally be just one SQL statement. And I had problems getting upserts working, to boot (even with the thin server). Now I can make th

Re: Missing Rows In Table After Bulk Load

2016-04-08 Thread Steve Terrell
Are the primary keys in the .csv file all unique? (no rows overwriting other rows) On Fri, Apr 8, 2016 at 10:21 AM, Amit Shah wrote: > Hi, > > I am using phoenix 4.6 and hbase 1.0. After bulk loading 10 mil records > into a table using the psql.py utility, I tried querying the table using >
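One quick way to check the duplicate-key theory is to scan the .csv before loading. This sketch assumes the primary key is the leading column, which is an illustration rather than the poster's actual schema:

```python
import csv, io

def duplicate_keys(csv_text, key_fields=1):
    """Return the set of key tuples that appear more than once."""
    seen, dups = set(), set()
    for row in csv.reader(io.StringIO(csv_text)):
        key = tuple(row[:key_fields])
        if key in seen:
            dups.add(key)
        seen.add(key)
    return dups

# Two rows share key "1", so an upsert-style load keeps only one of them.
dups = duplicate_keys("1,a\n1,b\n2,c\n")
# dups == {("1",)}
```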

Re: Error while attempting join query

2016-04-07 Thread Steve Terrell
haring this info, Steve. > > James > > On Thu, Apr 7, 2016 at 8:30 AM, Steve Terrell > wrote: > >> I've been successful at running HBase 0.98.15 and Phoenix 4.6.0 on EMR. >> Found someone else's solution for this on the internet. Been working fine >> for

Re: Error while attempting join query

2016-04-07 Thread Steve Terrell
I've been successful at running HBase 0.98.15 and Phoenix 4.6.0 on EMR. Found someone else's solution for this on the internet. Been working fine for months. Downside is loss of some Amazon EMR tools like HBase backups to S3. If anyone else is interested in the "how", post a new email to this ma

Re: Phoenix transactions not committing.

2016-04-01 Thread Steve Terrell
You might try looking up previous emails from me in this mailing list. I had some problems doing commits when using the thin client and Phoenix 4.6.0. Hope this helps, Steve On Thu, Mar 31, 2016 at 11:25 PM, F21 wrote: > As I mentioned about a week ago, I am working on a golang client usin

Re: How do I query the phoenix query server?

2016-03-24 Thread Steve Terrell
> commit to be true by default (set phoenix.connection.autoCommit to >> true)? In 4.7 this has been fixed, but prior to this, I believe commit >> and rollback were a noop. Is that right, Josh? >> Thanks, >> James >> >> On Thursday, March 24, 2016, Steve Ter

Re: How do I query the phoenix query server?

2016-03-24 Thread Steve Terrell
I forgot to mention, although the docs say only one jar is needed, I found that I also had to have commons-collections-3.2.1.jar on the classpath. On Thu, Mar 24, 2016 at 10:07 AM, Steve Terrell wrote: > Hi! Everything I say below pertains only to Phoenix 4.6.0. Don't kno

Re: How do I query the phoenix query server?

2016-03-24 Thread Steve Terrell
Hi! Everything I say below pertains only to Phoenix 4.6.0. Don't know what changes in 4.7.0. Judging from the port number, you must be using the thin client server. Have you seen this page? https://phoenix.apache.org/server.html . It has JDBC info. I got the thin client jar to work with SQui

Re: Dynamic Fields And Views

2016-02-25 Thread Steve Terrell
MY_VIEW("page_title" varchar) as > select * from TMP_SNACKS; > No rows affected (0.048 seconds) > 0: jdbc:phoenix:localhost> select * from MY_VIEW; > ++-+-+ > | K | C1 | page_title | > ++-+-+ > | 1 | a | b | > ++-+--

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-25 Thread Steve Terrell
ll prevent the store files from getting deleted so drop the snapshot >> once you're done with it. As long as there's no difference in the >> representation of the Phoenix data in hbase for dynamic column data vs >> explicit columns, you should be able to take the snapshot

Re: Dynamic Fields And Views

2016-02-25 Thread Steve Terrell
the update of the metadata. The advantage (as you've seen) is > that Phoenix is tracking all your dynamic columns. > > Thanks, > James > > [1] https://phoenix.apache.org/language/index.html#alter > > On Thu, Feb 25, 2016 at 3:31 AM, anil gupta wrote: > >> +1 for a v

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-25 Thread Steve Terrell
11 PM, Jonathan Leech wrote: > You could also take a snapshot in hbase just prior to the drop table, then > restore it afterward. > > > > On Feb 24, 2016, at 12:25 PM, Steve Terrell wrote: > > Thanks for your quick and accurate responses! > > On Wed, Feb 24, 2016 at 1:18 P

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
a connection at a timestamp a > little greater than last modified timestamp of table and then run drop > table command. but remember you may still lose some data inserted before > that timestamp > > Regards, > Ankit Singhal > > On Wed, Feb 24, 2016 at 11:27 PM, Steve Terre

Re: Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
can you check whether the properties are picked by the sql/application > client. > > Regards, > Ankit Singhal > > On Wed, Feb 24, 2016 at 11:09 PM, Steve Terrell > wrote: > >> HI, I hope someone can tell me what I'm doing wrong… >> >> I set *phoenix.sch

Need Help Dropping Phoenix Table Without Dropping HBase Table

2016-02-24 Thread Steve Terrell
Hi, I hope someone can tell me what I'm doing wrong… I set phoenix.schema.dropMetaData to false in hbase-site.xml on both the client and server side. I restarted the HBase master service. I used Phoenix to create a table and upsert some values. I used Phoenix to drop the table. I expected
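For reference, the setting described above would look like this in hbase-site.xml on both the client and the server (the property name is taken from the message itself):

```xml
<property>
  <name>phoenix.schema.dropMetaData</name>
  <value>false</value>
</property>
```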

Dynamic Fields And Views

2016-02-23 Thread Steve Terrell
I have a table with many dynamic fields. Works great. However, it's a bit of a nuisance to have to supply each dynamic field's type in every query. Example: select "dynamic_field" from MY_TABLE("dynamic_field" varchar) This example is not too bad, but imagine it with 5+ dynamic fields being used.

Re: Looks Like a SELECT Bug, But LIMIT Makes It Work

2016-02-23 Thread Steve Terrell
Done! https://issues.apache.org/jira/browse/PHOENIX-2709 . On Tue, Feb 23, 2016 at 3:15 PM, Sergey Soldatov wrote: > Hi Steve, > It looks like a bug. So, please file a JIRA. > > Thanks, > Sergey > > On Tue, Feb 23, 2016 at 12:52 PM, Steve Terrell > wrote: > > I

Looks Like a SELECT Bug, But LIMIT Makes It Work

2016-02-23 Thread Steve Terrell
I came across a 4.6.0 query that I could not make work unless I add a "limit" to the end, where it should be totally unnecessary. select * from BUGGY where F1=1 and F3 is null results in no records found select * from BUGGY where F1=1 and F3 is null limit 999 results (correctly) in one record fou

Re: Thin Client Commits?

2016-02-22 Thread Steve Terrell
ng 1.3.0-incubating). This should be included > in the upcoming Phoenix-4.7.0. > > Sadly, I'm not sure why autoCommit=true wouldn't be working. I don't have > any experience with the SQuirreL. > > [1] https://issues.apache.org/jira/browse/CALCITE-767 > > Stev

Re: Thin Client Commits?

2016-02-22 Thread Steve Terrell
't. I could really use a clue here, if anyone knows what is going on. Thanks again, Steve On Sun, Feb 21, 2016 at 9:56 AM, Steve Terrell wrote: > I'm surprised that no one knew the answer to this, but I eventually > figured out that I could set phoenix.connection.autoCommit to tru

Re: Thin Client Commits?

2016-02-21 Thread Steve Terrell
at 9:35 AM, Steve Terrell wrote: > I found this page: > http://apache-phoenix-user-list.1124778.n5.nabble.com/Thin-Client-Connection-Refused-td822.html > that says "dbConnection.commit() is not supported" (in a discussion about > thin client). > > Does anyone kno

Re: Thin Client Commits?

2016-02-18 Thread Steve Terrell
b 17, 2016 at 9:49 AM, Steve Terrell wrote: > It seems that when I use phoenix-4.6.0-HBase-0.98-thin-client.jar , that > deletes and upserts do not take effect. Is this expected behavior? > > Thanks, > Steve >

Re: Problem with String Concatenation with Fields

2016-02-17 Thread Steve Terrell
Done! https://issues.apache.org/jira/browse/PHOENIX-2689 Thanks, Steve On Wed, Feb 17, 2016 at 5:58 PM, Thomas D'Silva wrote: > Steve, > > That is a bug, can you please file a JIRA. > > Thanks, > Thomas > > On Wed, Feb 17, 2016 at 3:34 PM, Steve Terrell

Problem with String Concatenation with Fields

2016-02-17 Thread Steve Terrell
Can someone please tell me if this is a bug in Phoenix 4.6.0? This works as expected: 0: jdbc:phoenix:localhost> select * from BUGGY where ('tortilla' ||F2)='tortillachip'; PK1 0 F1 tortilla F2 chip But this does not: 0: jdbc:phoenix:localhost> select * from BUGGY where (F1 ||F2)='tor

Re: Dynamic column using Pig STORE function

2016-02-17 Thread Steve Terrell
dynamic columns. Please feel free to create a ticket. > > Regards > Ravi > > On Wed, Feb 17, 2016 at 10:56 AM, Steve Terrell > wrote: > >> I would be interested in knowing, too. My solution was to write a Pig >> streaming function that executed the Phoenix upse

Re: Dynamic column using Pig STORE function

2016-02-17 Thread Steve Terrell
I would be interested in knowing, too. My solution was to write a Pig streaming function that executed the Phoenix upsert command for every row. On Wed, Feb 17, 2016 at 7:21 AM, Sumanta Gh wrote: > Hi, > I was going through the Phoenix Pig integration [1]. > I need to store value in a dynamic c

Re: Pagination with Phoenix

2016-02-17 Thread Steve Terrell
I was just thinking about this today. I was going to try to implement it by using a LIMIT on every query, with an addition of WHERE (rowkey_field_1 > last_rowkey_field_1_value_from_previous_query) OR (rowkey_field_2 > last_rowkey_field_2_value_from_previous_query) OR … But I haven't tried it
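The idea above can be sketched with the nested keyset predicate that is standard for a composite row key (a flat OR of per-column comparisons can match rows out of lexicographic order); the column names and the helper are hypothetical:

```python
# Hypothetical sketch of keyset ("seek") pagination over a composite key.
def next_page_where(key_cols, last_values):
    """Build a WHERE fragment selecting row keys after the last one seen."""
    clauses = []
    for i, col in enumerate(key_cols):
        # Earlier key columns are pinned equal; the current column advances.
        eqs = [f"{key_cols[j]} = {last_values[j]!r}" for j in range(i)]
        clauses.append(" AND ".join(eqs + [f"{col} > {last_values[i]!r}"]))
    return " OR ".join(f"({c})" for c in clauses)

w = next_page_where(["ROWKEY_FIELD_1", "ROWKEY_FIELD_2"], ["abc", 42])
```

Each page query would append this fragment plus the same ORDER BY and LIMIT; Phoenix also supports row value constructors, so the same seek can often be written as (ROWKEY_FIELD_1, ROWKEY_FIELD_2) > ('abc', 42).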

Thin Client Commits?

2016-02-17 Thread Steve Terrell
It seems that when I use phoenix-4.6.0-HBase-0.98-thin-client.jar, deletes and upserts do not take effect. Is this expected behavior? Thanks, Steve

Re: Phoenix Query Server Avatica Upsert

2016-02-06 Thread Steve Terrell
ciated. > > Lukás - your Python Query Server support would be a welcome addition to > Phoenix or Avatica. Send us a pull request for a new module if you're > interested. > > James > > On Friday, February 5, 2016, Steve Terrell wrote: > >> Success! >> &

Re: Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
troubleshoot phoenixdb for https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.7.0-HBase-0.98-rc1/ . Bye, Steve On Fri, Feb 5, 2016 at 3:07 PM, Steve Terrell wrote: > Thanks, Lukas. Half the battle is won, now. With your help I was able to > see the JSON used to perform the upser

Re: Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
tarting the 4.7 server in JSON mode sometime soon. On Fri, Feb 5, 2016 at 1:59 PM, Lukáš Lalinský wrote: > On Fri, Feb 5, 2016 at 8:46 PM, Steve Terrell > wrote: >> >> When I tried to send a "createStatement" via curl and via Lukas's >> phoenixdb, I got these

Re: Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
couple of issues came up in the last >> RC, so we'll roll a new one very soon. >> >> Thanks, >> James >> >> On Fri, Feb 5, 2016 at 9:23 AM, Steve Terrell >> wrote: >> >>> Oh, I didn't know there was a 4.7. Following the links on >>

Re: Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
JSON documents it's sending. > > https://code.oxygene.sk/lukas/python-phoenixdb > > Lukas > > > On Fri, Feb 5, 2016 at 5:16 PM, Steve Terrell > wrote: > >> Does anyone have an example of how to upsert a row in a Phoenix table via >> the Avatica HTTP mechan

Phoenix Query Server Avatica Upsert

2016-02-05 Thread Steve Terrell
Does anyone have an example of how to upsert a row in a Phoenix table via the Avatica HTTP mechanism? The closest things to documentation I can find are these two links: - https://community.hortonworks.com/questions/1565/phoenix-query-server-documentation.html - https://calcite.apache.or

Phoenix Query Server and/or Avatica Bug and/or My Misunderstanding

2016-02-04 Thread Steve Terrell
I can query Phoenix by doing something like this: curl -v -XPOST -H 'request: {"request":"prepareAndExecute","connectionId":"aaa","sql":"select * from CAT_MAP"}' http://10.0.100.57:8765/ However, I am unable to make such a request in Javascript in my web page because the POST method, along with
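For what it's worth, the same request can be assembled programmatically. This sketch only builds the JSON payload shown in the curl example above (the connection id and SQL are the placeholders from that example); it says nothing about how a given Avatica version expects the payload transported:

```python
import json

def prepare_and_execute(connection_id, sql):
    """Build the Avatica JSON body from the curl example above."""
    return json.dumps({
        "request": "prepareAndExecute",
        "connectionId": connection_id,
        "sql": sql,
    })

body = prepare_and_execute("aaa", "select * from CAT_MAP")
```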

Re: select all dynamic columns by primary key if columns names are unknown

2016-02-02 Thread Steve Terrell
and this way Phoenix keeps track of it for you and you >> get all the other standard features. >> Thanks, >> James >> >> [1] https://phoenix.apache.org/views.html >> >> On Tue, Feb 2, 2016 at 1:42 PM, Serega Sheypak >> wrote: >> >>

Re: select all dynamic columns by primary key if columns names are unknown

2016-02-02 Thread Steve Terrell
I would like to know as well. Today when I upsert and create dynamic columns, I have to also create a second table to keep track of the dynamic field names and data types that were upserted so that the person writing queries for the first table can know what fields are available. Also would lik

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-12-07 Thread Steve Terrell
arned in case someone else is scratching their head. Meanwhile, does anyone know why the region server ips are important? I thought communication was only between the client and the master node. Thanks, Steve On Sun, Nov 1, 2015 at 9:29 AM, Steve Terrell wrote: > Just had a thought:

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-11-01 Thread Steve Terrell
71" Java(TM) SE Runtime Environment (build 1.7.0_71-b14) Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode) I may try switching to 1.7 later and report back. On Sun, Nov 1, 2015 at 9:24 AM, Steve Terrell wrote: > Thanks, but I'm trying to run remotely. I'm sure my

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-11-01 Thread Steve Terrell
Thanks, but I'm trying to run remotely. I'm sure my /etc/hosts is fine as I can ssh and "telnet " OK. On Sun, Nov 1, 2015 at 9:21 AM, Steve Terrell wrote: > Thank you, but I'm sure this is not the case as I can easily run Squirrel > client on my mac and query

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-11-01 Thread Steve Terrell
2181 is blocked in > your network.. > > On Sat, Oct 31, 2015 at 8:00 PM, Steve Terrell > wrote: > >> OK, did some more troubleshooting. Still can't run sqlline.py from my >> macbook laptop. Still hangs. >> >> My HBase cluster is an Amazon EMR, and I can run

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-10-31 Thread Steve Terrell
's private network? (My ultimate goal was to get SQuirreL working, but thought sqlline.py would be an easier problem to tackle. SQuirreL is getting timeouts which I suspect are due to the same hanging that I see with sqlline.py.) Thanks, Steve On Wed, Oct 28, 2015 at 5:04 PM, Steve Terrell

Re: Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-10-28 Thread Steve Terrell
you are running > sqlline.py on? > > Alok > > Alok > > a...@cloudability.com > > On Wed, Oct 28, 2015 at 2:23 PM, Steve Terrell > wrote: > >> I can get "sqlline.py localhost" to work fine from the master node. >> >> However, when I try

Phoenix 4.6.0: sqlline.py Hangs From Remote Host

2015-10-28 Thread Steve Terrell
I can get "sqlline.py localhost" to work fine from the master node. However, when I try to run it remotely, all I get is this: java -cp "**/phoenix-4.6.0-HBase-0.98-client.jar" -Dlog4j.configuration=file:**/log4j.properties sqlline.SqlLine -d org.apache.phoenix.jdbc.PhoenixDriver -u jdbc:phoenix:

Re: Best Way To Copy Table From Old Phoenix/HBase versions to Newer?

2015-10-28 Thread Steve Terrell
'll read from a Phoenix 3.x cluster, but it might. > > Thanks, > James > > > On Tue, Oct 27, 2015 at 5:09 PM, Steve Terrell > wrote: > >> Hi! >> >> I'm trying to copy my tables from an old cluster with HBase 0.94.18 & >> Phoenix 3.2.2 over new a

Best Way To Copy Table From Old Phoenix/HBase versions to Newer?

2015-10-27 Thread Steve Terrell
Hi! I'm trying to copy my tables from an old cluster with HBase 0.94.18 & Phoenix 3.2.2 over to a new cluster with HBase 0.98.15 and Phoenix 4.6.0. I was thinking about doing it in Pig using org.apache.phoenix.pig.PhoenixHBaseLoader('old ip') and org.apache.phoenix.pig.PhoenixHBaseStorage('new ip')

Re: Hbase and pig integration

2015-10-27 Thread Steve Terrell
I think you need to replace 'John' with \'John\'. On Tue, Oct 27, 2015 at 10:24 AM, Bill Carroll wrote: > > > I am trying to get the syntax for a load statement in pig to query a > string value. But get a pig parsing error. If I query an integer it is > successful. Is it possible to query with

Re: NoSuchMethodError From org.apache.phoenix.pig.PhoenixHBaseLoader in 4.6.0

2015-10-26 Thread Steve Terrell
t 26, 2015 at 5:14 PM, Steve Terrell wrote: > Hi! Please help me with resolving this problem. I am porting our > Pig/Phoenix/HBase project to all newer versions, but this one thing is > blocking me. > > Observed When running Phoenix 4.6.0 and HBase 0.98.12. > > I hav

NoSuchMethodError From org.apache.phoenix.pig.PhoenixHBaseLoader in 4.6.0

2015-10-26 Thread Steve Terrell
Hi! Please help me with resolving this problem. I am porting our Pig/Phoenix/HBase project to all newer versions, but this one thing is blocking me. Observed When running Phoenix 4.6.0 and HBase 0.98.12. I have also tried with several other variations of Phoenix 4.x and HBase 0.98.x . Using Ph

Re: Location protocol error in Pig using Phoenix 3.0

2014-07-25 Thread Steve Terrell
You might be using a feature of PhoenixHBaseStorage that I am not familiar with. When I use store to Phoenix, my "into" value is simply this: … into 'hbase://MY_TABLE_NAME' … If one can specify specific column families and column names, I'd like to learn more about that myself. I saw no mention

Re: Query Finds No Rows When Using Multiple Column Families

2014-07-22 Thread Steve Terrell
, try naming AA.NUM2 and > > BB.NUM3 with the same column name: AA.NUM2 and BB.NUM2. I suspect that > > will work around this issue. > > > > Thanks, > > James > > > > On Thu, Jul 17, 2014 at 7:07 AM, Steve Terrell > wrote: > >> Hi, > &

Query Finds No Rows When Using Multiple Column Families

2014-07-17 Thread Steve Terrell
Hi, Can someone tell me if this is a bug or my misunderstanding of how column families are handled in Phoenix? My table schema: CREATE TABLE IF NOT EXISTS FAMILY_TEST ( NUM1 INTEGER NOT NULL, AA.NUM2 INTEGER, BB.NUM3 INTEGER, CONSTRAINT my_pk PRIMARY KEY (NUM1)); I populated it with on