If you use a zero byte instead of a '|', then you can create your view with a
four-column primary key without any issues. If a value is not present, make
sure to still include the null-byte separator.
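For illustration, a minimal sketch of such a view (the table, column, and
family names here are hypothetical); Phoenix separates variable-length
VARCHAR key columns with a zero byte in the underlying row key:

CREATE VIEW "my_table" (
    pk1 VARCHAR NOT NULL,
    pk2 VARCHAR NOT NULL,
    pk3 VARCHAR NOT NULL,
    pk4 VARCHAR NOT NULL,
    "cf"."val" VARCHAR
    CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3, pk4)
);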
Thanks,
James
On Thursday, June 11, 2015, Nishant Patel wrote:
> Thanks James for your response.
>
>
Thanks, James, for your response.
Yes, currently I have used '|' as the separator.
I think Phoenix does not support using a custom separator. I am planning to
use the one-byte separator which Phoenix uses when you load through Phoenix.
All the variables are variable length, and they are strings.
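For example (hypothetical values), the row key for the four values a, b, c,
and d would then be laid out as a\x00b\x00c\x00d, and a missing value would
still contribute its \x00 separator.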
Thanks,
Ni
Hi Dmytro,
This is not a known bug, so a JIRA would be appreciated (ideally
including the above in the form of a new unit test).
Thanks,
James
On Thu, Jun 11, 2015 at 10:05 AM, Dmitry Sen wrote:
> Hi,
>
>
> It looks like the IN operator works incorrectly in combination with INNER JOIN
> and nested qu
Hi Nishant,
So your row key has the '|' embedded in it as a separator character?
Are the qualifiers fixed-length or variable-length? Are they strings?
Thanks,
James
On Wed, Jun 10, 2015 at 11:48 PM, Nishant Patel wrote:
> Hi All,
>
> I have hbase table where rowkey is composite key of 4
Hi Buntu,
You should be good if you have the Phoenix library in the plugin directory.
By providing a valid zookeeperQuorum property, you should be able to
stream data into Phoenix.
Happy to help if you are stuck anywhere in the process.
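A minimal agent sketch, assuming the phoenix-flume PhoenixSink and
hypothetical agent, table, and column names:

agent.sinks.phx.type = org.apache.phoenix.flume.sink.PhoenixSink
agent.sinks.phx.zookeeperQuorum = zk-host:2181
agent.sinks.phx.table = EVENTS
agent.sinks.phx.batchSize = 100
agent.sinks.phx.serializer = regex
agent.sinks.phx.serializer.regex = ([^,]*),([^,]*)
agent.sinks.phx.serializer.columns = HOST,MSG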
Regards
Ravi
On Thu, Jun 11, 2015 at 2:12 AM, Buntu Dev
Hello Phoenix/HBase users and developers,
I have a few questions regarding how table delete works in HBase.
*What I know:*
If an HBase table is deleted (after disabling), the SYSTEM.CATALOG entries
related to that table will be deleted.
If a view is created using phoenix (assuming there are
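For reference, whether the SYSTEM.CATALOG entries remain can be checked
directly in SQL; a sketch, assuming a table named MY_TABLE:

SELECT TENANT_ID, TABLE_SCHEM, TABLE_NAME, TABLE_TYPE
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'MY_TABLE';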
Looks like these questions would be better answered on the Phoenix mailing list.
On Thu, Jun 11, 2015 at 11:47 AM, Arun Kumaran Sabtharishi <
arun1...@gmail.com> wrote:
> Hello HBase users and developers,
>
> I have a few questions regarding how table delete works in HBase.
>
> *What I know:*
> I
Hi,
It looks like the IN operator works incorrectly in combination with INNER JOIN
and nested queries.
If I have "HOSTNAME IN ('c6401.ambari.apache.org', 'c6402.ambari.apache.org')"
in the condition, I get correct results from Phoenix:
0: jdbc:phoenix:localhost:61181> SELECT E.METRIC_NAME AS MET
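A sketch of the query shape being described, with the METRIC_RECORD table
and the column names not shown above as assumptions:

SELECT E.METRIC_NAME AS MET, E.HOSTNAME
FROM METRIC_RECORD E
INNER JOIN (
    SELECT METRIC_NAME, HOSTNAME, MAX(SERVER_TIME) AS MAX_TIME
    FROM METRIC_RECORD
    GROUP BY METRIC_NAME, HOSTNAME
) L
ON E.METRIC_NAME = L.METRIC_NAME
AND E.HOSTNAME = L.HOSTNAME
AND E.SERVER_TIME = L.MAX_TIME
WHERE E.HOSTNAME IN ('c6401.ambari.apache.org', 'c6402.ambari.apache.org');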
Hi,
Your code seems OK to me; the only difference from what I do is that I
explicitly pass the HDFS path to bulkSave, so I am not sure how "/bulk" is
resolved.
I am a beginner with Spark, HBase, Phoenix, etc., but if you'd like to
use this code I could try to investigate your problem, but I need the
f
Hi Jeroen,
No problem. I think there's some magic involved with how the Spark
classloader(s) work, especially with regard to the HBase dependencies. I
know there's probably a more lightweight solution that doesn't require
customizing the Spark setup, but that's the most straightforward way I'v
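One common way of customizing the Spark setup for this (a sketch; the jar
path is an assumption) is to put the full Phoenix client jar on both the
driver and executor classpaths in spark-defaults.conf:

spark.driver.extraClassPath   /path/to/phoenix-client.jar
spark.executor.extraClassPath /path/to/phoenix-client.jar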
Thanks, Ravi. I will look into possible alternatives for adding dynamic
columns while ingesting, or open a JIRA request.
Another question regarding Flume ingestion: I have multiple clusters, with
Flume and Phoenix running on different ones. Are there any libs that need
to be in Flume's c
Hi there,
I did try reversing the order of the composite key, and it worked!
I changed BIGINT to UNSIGNED_LONG because I was facing some issues with the
timezone of the DATE field.
There is one small detail I cannot understand:
When I run the query:
SELECT * FROM SENSOR_DATA WHE
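For context, a sketch of the reversed-key schema being described (the
non-key column and exact key types are assumptions; the time component is
shown as epoch millis in UNSIGNED_LONG):

CREATE TABLE SENSOR_DATA (
    SENSOR_ID VARCHAR NOT NULL,
    EVENT_TIME UNSIGNED_LONG NOT NULL,
    READING DOUBLE
    CONSTRAINT pk PRIMARY KEY (SENSOR_ID, EVENT_TIME)
);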
Hi Buntu,
Apparently, the necessary classes related to the Flume client are already
part of the Phoenix client, so you wouldn't need to build a separate Flume
client artifact.
In the current implementation, the UPSERT query is based on an existing
table only. However, if you would like to
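Since the sink upserts into a table that must already exist, that table
would be created up front; a hedged sketch (names hypothetical):

CREATE TABLE EVENTS (
    HOST VARCHAR NOT NULL,
    EVENT_TIME UNSIGNED_LONG NOT NULL,
    MSG VARCHAR
    CONSTRAINT pk PRIMARY KEY (HOST, EVENT_TIME)
);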