Hi,
I am trying to set up Phoenix to work with a BI solution. The issue I have is
that, given a tool-generated query like this -> select fact.col1 from (select
col1 from t1) as fact, Phoenix confuses the table alias with a column family.
Any suggestions?
Thx
Yanlin
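For context, a minimal reproduction and one possible workaround (assuming the
unqualified column name is unambiguous in the outer scope; whether your BI tool
can be configured to emit this form is a separate question):

    -- Tool-generated form that gets misparsed (alias read as a column family):
    SELECT fact.col1 FROM (SELECT col1 FROM t1) AS fact;

    -- Possible workaround: drop the alias qualifier in the outer SELECT.
    SELECT col1 FROM (SELECT col1 FROM t1) AS fact;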
Thanks, Dhruv. I commented on the JIRA to get you started:
https://issues.apache.org/jira/browse/PHOENIX-2039?focusedCommentId=14587252&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14587252
On Sat, Jun 13, 2015 at 4:02 AM, Dhruv Gohil wrote:
> Hi James,
> Than
Hi - I tried to install Phoenix on an AWS EMR cluster but was not able to do so.
Here are my steps:
1. Create a new AWS EMR cluster and choose AMI Version 3.7.0 (Amazon Machine
Images).
2. In the Software Configuration, add HBase as an additional application to
install.
3. In the Bootstrap Actions, add a boot
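For what it's worth, the usual manual install on an HBase cluster is to drop
the Phoenix server jar into HBase's lib directory and restart the region
servers. A hypothetical bootstrap script sketching that (the version, download
URL, and EMR HBase path are assumptions, not verified values):

    #!/bin/bash
    # Hypothetical: fetch a Phoenix release and copy the server jar into
    # HBase's lib dir so region servers load it on (re)start.
    VERSION=4.4.0-HBase-0.98   # assumption: pick the build matching your HBase
    wget -q https://archive.apache.org/dist/phoenix/phoenix-$VERSION/bin/phoenix-$VERSION-bin.tar.gz
    tar xzf phoenix-$VERSION-bin.tar.gz
    cp phoenix-$VERSION-bin/phoenix-$VERSION-server.jar /home/hadoop/hbase/lib/  # assumed EMR path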
Hi Nishant,
Have you seen this:
https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
Your row key is a byte[] in HBase. It has no column qualifier, so you
wouldn't want to prefix those columns with any column family.
Thanks,
James
On Sun, Jun 14, 2015 at 11:58
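To illustrate the FAQ's point, a hedged sketch (the table, family, and
qualifier names here are hypothetical):

    -- The row key maps to the PRIMARY KEY column and takes no column family;
    -- only the non-row-key columns are prefixed with their family.
    CREATE VIEW "existing_table" (
        pk VARBINARY PRIMARY KEY,
        "cf"."val" VARCHAR
    );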
Hi Zack,
Some good references are http://hbase.apache.org/book.html and
https://phoenix.apache.org/tuning.html.
A couple of issues:
- property names are case sensitive, so make sure to use the
correct/expected case (e.g., hbase.rpc.timeout).
- not all of the properties are server-side properties. For
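For example, a minimal hbase-site.xml fragment with the expected all-lowercase
names (the values are placeholders, not tuning advice; note that scanner/RPC
timeouts generally need to be set on the client side as well):

    <property>
      <name>hbase.rpc.timeout</name>
      <value>600000</value>
    </property>
    <property>
      <name>hbase.client.scanner.timeout.period</name>
      <value>600000</value>
    </property>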
Hi Zack,
A good place to start is our web site: http://phoenix.apache.org. Take
a look at the Feature menu and you'll find Bulk Loading:
http://phoenix.apache.org/bulk_dataload.html
Thanks,
James
On Mon, Jun 15, 2015 at 4:20 AM, Riesland, Zack
wrote:
> MR Bulkload tool sounds promising.
>
>
>
> I
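For reference, the invocation documented on that page takes an HDFS path as
input; a sketch (the jar version and paths are placeholders):

    hadoop jar phoenix-<version>-client.jar \
        org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        --table EXAMPLE \
        --input /data/example.csv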
Whenever I run an atypical query (one not filtered by the primary key), I get
an exception like this one.
I tried modifying each of the following in custom hbase-site to increase the
timeout:
Hbase.client.scanner.timeout.period
Hbase.regionserver.lease.period
Hbase.rpc.shortoperation.timeout
Hbas
MR Bulkload tool sounds promising.
Is there a link that provides some instructions?
Does it take a HDFS folder as input? Or a Hive table?
Thanks!
From: Puneet Kumar Ojha [mailto:puneet.ku...@pubmatic.com]
Sent: Monday, June 15, 2015 7:10 AM
To: user@phoenix.apache.org
Subject: RE: Guidance on t
Thanks Puneet,
I will usually be querying based on the serial number. I expect several
thousand results.
The same serial number will always be assigned to the same customer.
From: Puneet Kumar Ojha [mailto:puneet.ku...@pubmatic.com]
Sent: Monday, June 15, 2015 7:12 AM
To: user@phoenix.apache.or
It totally depends on the type of query you would be running.
If it's a point query then it makes sense; otherwise aggregates and top-N
queries might run slow, with more load on the client to derive the final result.
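To make the trade-off concrete, a hedged sketch with hypothetical names:
leading the primary key with the serial number turns serial-number lookups into
a scan over a single key prefix, while aggregates across all serials still
touch every region.

    CREATE TABLE meter_reads (
        serial_number VARCHAR NOT NULL,
        read_time     DATE    NOT NULL,
        customer_id   VARCHAR,
        reading       DOUBLE,
        CONSTRAINT pk PRIMARY KEY (serial_number, read_time)
    );

    -- Point/prefix query: fast, reads only the rows for one serial.
    SELECT * FROM meter_reads WHERE serial_number = 'SN-12345';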
From: Riesland, Zack [mailto:zack.riesl...@sensus.com]
Sent: Monday, June 15, 2015 4:04 PM
To:
Can you provide the queries which you would be running on your table?
Also, use the MR Bulkload tool instead of the CSV load tool.
From: Riesland, Zack [mailto:zack.riesl...@sensus.com]
Sent: Monday, June 15, 2015 4:03 PM
To: user@phoenix.apache.org
Subject: Guidance on table splitting
I'm new
At the Hadoop Summit last week, some guys from Yahoo presented on why it is
wise to keep region size fairly small and region count fairly large.
I am looking at my HBase config, but there are a lot of numbers that look like
they're related to region size.
What parameter limits the data size of
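If it helps, the HBase property that caps how large a region grows before it
splits is hbase.hregion.max.filesize; a fragment (the 10 GB value is only an
example, not a recommendation):

    <property>
      <name>hbase.hregion.max.filesize</name>
      <value>10737418240</value> <!-- 10 GB -->
    </property>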
I'm new to HBase and to Phoenix.
I needed to build a GUI on top of a huge data set from HDFS, so I decided to
create a couple of Phoenix tables, dump the data using the CSV bulk load tool,
and serve the GUI from there.
This all 'works', but as the data set grows, I would like to improve my table
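One common way to pre-split a Phoenix table and spread write load is salting,
sketched here with hypothetical names (the bucket count should be sized to
your cluster, not copied from this example):

    CREATE TABLE readings (
        id      VARCHAR NOT NULL PRIMARY KEY,
        reading DOUBLE
    ) SALT_BUCKETS = 16;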